Retire stackforge/healthnmon

commit 0d27b79152 (parent 980c492005)
Author: Monty Taylor
Date: 2015-10-17 16:03:11 -04:00

196 changed files with 5 additions and 51843 deletions

.gitignore

@@ -1,23 +0,0 @@
Authors
AUTHORS
ChangeLog
healthnmon/versioninfo
*.pyc
/coverage.xml
/.project
/pep8.txt
/nova.sqlite
/nosetests.xml
/clean.sqlite
/.coverage
/.pydevproject
/tests.sqlite
/pylint.txt
/.settings
/.tox
/.cache.bundle
/dist
/covhtml
/target
/healthnmon.egg-info
/build

.gitreview

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/healthnmon.git

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

MANIFEST.in

@@ -1,8 +0,0 @@
graft etc
include run_tests.sh
include MANIFEST.in Authors
include tox.ini
include LICENSE
include healthnmon/db/sqlalchemy/migrate_repo/migrate.cfg
include healthnmon/db/sqlalchemy/migrate_repo/versions/*.sql
global-exclude *.pyc

@@ -1,18 +0,0 @@
healthnmon
==========
Healthnmon aims to deliver “Cloud Resource Monitor”, an extensible service to OpenStack Cloud Operating system by providing monitoring service for Cloud Resources and Infrastructure with a pluggable framework for “Inventory Management”, “Alerts and notifications” and “Utilization Data”
HP has proposed following Blueprints for OpenStack and has the initial implementation of the same in this repository
Cloud Inventory Manager (https://blueprints.launchpad.net/openstack-devops/+spec/cloud-inventory-manager)
Alerts and Notification (https://blueprints.launchpad.net/openstack-devops/+spec/resource-monitor-alerts-and-notifications)
Utilization Data (https://blueprints.launchpad.net/openstack-devops/+spec/utilizationdata)
Refer the Wiki and Downloads section for more information: https://github.com/healthnmon/healthnmon
Healthnmon module RPM which works with OpenStack ESSEX release can be downloaded from https://github.com/healthnmon/healthnmon/downloads

@@ -1,16 +1,7 @@
 Healthnmon
 ==========
+This project is no longer maintained.
-Healthnmon aims to deliver “Cloud Resource Monitor”, an extensible service to OpenStack Cloud Operating system by providing monitoring service for Cloud Resources and Infrastructure with a pluggable framework for “Inventory Management”, “Alerts and notifications” and “Utilization Data”
+The contents of this repository are still available in the Git source code
+management system. To see the contents of this repository before it reached
+its end of life, please check out the previous commit with
+"git checkout HEAD^1".
-HP has proposed following Blueprints for OpenStack and has the initial implementation of the same in this repository
-Cloud Inventory Manager (https://blueprints.launchpad.net/openstack-devops/+spec/cloud-inventory-manager)
-Alerts and Notification (https://blueprints.launchpad.net/openstack-devops/+spec/resource-monitor-alerts-and-notifications)
-Utilization Data (https://blueprints.launchpad.net/openstack-devops/+spec/utilizationdata)
-Refer the Wiki and Downloads section for more information: https://github.com/healthnmon/healthnmon
-Healthnmon module RPM which works with OpenStack ESSEX release can be downloaded from https://github.com/healthnmon/healthnmon/downloads

@@ -1,65 +0,0 @@
#!/usr/bin/env python
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import config
from nova import service, utils
from nova.openstack.common import cfg
from healthnmon import log as logging
import eventlet
import gettext
import os
import sys
import healthnmon
from healthnmon.collector.service import HealthnmonCollectorService
"""Starter script for healthnmon module."""
healthnmon_collector_opts = [
    cfg.StrOpt('healthnmon_collector_manager',
               default='healthnmon.collector.manager.HealthnMonCollectorManager',
               help='The healthnmon collector manager class to use'),
    cfg.StrOpt('healthnmon_collector_topic',
               default='healthnmon.collector',
               help='The topic used by healthnmon-collector'),
]

CONF = cfg.CONF
CONF.register_opts(healthnmon_collector_opts)

eventlet.monkey_patch(
    all=False, os=True, select=True, socket=True, thread=False, time=True)

# If ../healthnmon/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'healthnmon', '__init__.py')):
    sys.path.insert(0, possible_topdir)

gettext.install('healthnmon', unicode=1)

if __name__ == '__main__':
    logging.healthnmon_collector_setup()
    utils.monkey_patch()
    config.parse_args(sys.argv)
    server = HealthnmonCollectorService.create(
        binary='healthnmon-collector',
        topic=CONF.healthnmon_collector_topic,
        manager=CONF.healthnmon_collector_manager)
    service.serve(server)
    service.wait()
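The run-from-source bootstrap in the script above (prepend the checkout's top directory to `sys.path` when the package is present there, so the source tree wins over an installed copy) is a reusable pattern. A minimal, self-contained sketch of the same logic; `maybe_prepend_topdir` is a hypothetical helper name, not part of the original script:

```python
import os
import sys


def maybe_prepend_topdir(argv0, package='healthnmon'):
    """If <argv0>/../../<package>/__init__.py exists, prepend that top
    directory to sys.path and return it; otherwise return None."""
    topdir = os.path.normpath(
        os.path.join(os.path.abspath(argv0), os.pardir, os.pardir))
    if os.path.exists(os.path.join(topdir, package, '__init__.py')):
        sys.path.insert(0, topdir)
        return topdir
    return None
```

Running a `bin/` script from an unpacked source tree then imports the tree's own package even when an older version is installed system-wide.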

@@ -1,317 +0,0 @@
#!/usr/bin/env python
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
CLI interface for healthnmon management.
"""
import errno
import gettext
import optparse
import os
import sys
import getpass
# If ../healthnmon/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
POSSIBLE_TOPDIR = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(POSSIBLE_TOPDIR, 'healthnmon', '__init__.py')):
    sys.path.insert(0, POSSIBLE_TOPDIR)

gettext.install('healthnmon', unicode=1)
from nova import context
from nova.openstack.common import cliutils
from nova import crypto
from healthnmon import db
from nova import exception
from nova import config
from healthnmon import log as logging
from nova import utils
from nova import version
from healthnmon.db import migration
from nova.db import api as db_api
from healthnmon.common import ssh_configuration
from nova.openstack.common import rpc
from oslo.config import cfg
api_opts = [
    cfg.StrOpt('healthnmon_topic',
               default='healthnmon',
               help='the topic healthnmon service listens on')
]

CONF = cfg.CONF
CONF.register_opts(api_opts)


def args(*args, **kwargs):
    def _decorator(func):
        func.__dict__.setdefault('args', []).insert(0, (args, kwargs))
        return func
    return _decorator


class DbCommands(object):
    """Class for managing the database."""

    def __init__(self):
        pass

    @args('--version', dest='version', metavar='<version>',
          help='Database version')
    def sync(self, version=None):
        """Sync the database up to the most recent version."""
        return migration.db_sync(version)

    def version(self):
        """Print the current database version."""
        print migration.db_version()


class SSHCommands(object):
    """Class for setting up SSH connection between host and appliance"""

    def configure(self, hostname, user):
        """Set up SSH key pair between the appliance and host"""
        service_records = db_api.service_get_all_by_host(
            context.get_admin_context(), hostname)
        if len(service_records) == 0:
            print _("Unable to find the host '%s' in the list of "
                    "compute nodes. For a list of known compute nodes try "
                    "'nova-manage service list'" % (hostname))
            return
        remoteuser = str(user) + "@" + str(hostname)
        password = getpass.getpass("Enter password for %s: " % remoteuser)
        ssh_configuration.configure_host(hostname, user, password)


class ProfileCommands(object):
    """Class for managing the profiling."""

    def __init__(self):
        pass

    @args('--state', dest='state', metavar='<state>',
          help='Whether to enable/disable CPU profile')
    @args('--module', dest='module', metavar='<module>',
          help='module name for which CPU profiler decorator is applied')
    @args('--decorator', dest='decorator', metavar='<decorator>',
          help='CPU profiler decorator name')
    def cputime(self, state, module,
                decorator='healthnmon.profiler.'
                          'profile_cpu.profile_cputime_decorator'):
        """Enables/disables decorator to profile CPU in healthnmon."""
        if module is None:
            print _("Specify module name for profiling cpu time")
        status = None
        if state == 'disable':
            status = False
        elif state == 'enable':
            status = True
        else:
            print _("Wrong arguments supplied. "
                    "Possible arguments are enable/disable")
            return
        result = rpc.call(context.get_admin_context(),
                          CONF.healthnmon_topic,
                          {"method": "profile_cputime",
                           "args": {"module": module,
                                    "decorator": decorator,
                                    "status": status}})

    @args('--state', dest='state', metavar='<state>',
          help='Whether to enable/disable memory profile')
    @args('--method', dest='method', metavar='<method>',
          help='method name for which memory profiler '
               'decorator need to be applied')
    @args('--setref', dest='setref', action="store_true", default=False,
          help='Whether to set reference for enabling '
               'relative memory profiling')
    @args('--decorator', dest='decorator', metavar='<decorator>',
          help='memory profiler decorator name')
    def memory(self, state, method,
               decorator='healthnmon.profiler.profile_mem.'
                         'profile_memory_decorator',
               setref=False):
        """Enables/disables decorator to profile memory in healthnmon."""
        if method is None:
            print _("Specify method name for profiling memory")
        status = None
        if state == 'disable':
            status = False
        elif state == 'enable':
            status = True
        else:
            print _("Wrong arguments supplied. Possible "
                    "arguments are enable/disable")
            return
        result = rpc.call(context.get_admin_context(),
                          CONF.healthnmon_topic,
                          {"method": "profile_memory",
                           "args": {"method": method,
                                    "decorator": decorator,
                                    "status": status,
                                    "setref": setref}})


class LogCommands(object):
    """Class for managing the healthnmon logger."""

    def __init__(self):
        pass

    @args('--level', dest='level', metavar='<level>', help='log level')
    @args('--module', dest='module', metavar='<module>', help='module')
    def setlevel(self, level='INFO', module='healthnmon'):
        """Sets the required log level on the healthnmon logger."""
        result = rpc.call(context.get_admin_context(),
                          CONF.healthnmon_topic,
                          {"method": "setLogLevel",
                           "args": {"level": level,
                                    "module": module}})


CATEGORIES = {'db': DbCommands,
              'ssh': SSHCommands,
              'profile': ProfileCommands,
              'log': LogCommands}


def lazy_match(name, key_value_tuples):
    """Finds all objects that have a key that case insensitively starts
    with [name]. key_value_tuples is a list of tuples of the form
    (key, value); returns a list of tuples of the form (key, value)."""
    result = []
    for (k, v) in key_value_tuples:
        if k.lower().find(name.lower()) == 0:
            result.append((k, v))
    if len(result) == 0:
        print "%s does not match any options:" % name
        for k, _v in key_value_tuples:
            print "\t%s" % k
        sys.exit(2)
    if len(result) > 1:
        print "%s matched multiple options:" % name
        for k, _v in result:
            print "\t%s" % k
        sys.exit(2)
    return result


def methods_of(obj):
    """Get all callable methods of an object that don't start with
    underscore; returns a list of tuples of the form (method_name, method)."""
    result = []
    for i in dir(obj):
        if callable(getattr(obj, i)) and not i.startswith('_'):
            result.append((i, getattr(obj, i)))
    return result


def add_command_parsers(subparsers):
    parser = subparsers.add_parser('version')

    parser = subparsers.add_parser('bash-completion')
    parser.add_argument('query_category', nargs='?')

    for category in CATEGORIES:
        command_object = CATEGORIES[category]()

        parser = subparsers.add_parser(category)
        parser.set_defaults(command_object=command_object)

        category_subparsers = parser.add_subparsers(dest='action')

        for (action, action_fn) in methods_of(command_object):
            parser = category_subparsers.add_parser(action)

            action_kwargs = []
            for args, kwargs in getattr(action_fn, 'args', []):
                action_kwargs.append(kwargs['dest'])
                kwargs['dest'] = 'action_kwarg_' + kwargs['dest']
                parser.add_argument(*args, **kwargs)

            parser.set_defaults(action_fn=action_fn)
            parser.set_defaults(action_kwargs=action_kwargs)

            parser.add_argument('action_args', nargs='*')


category_opt = cfg.SubCommandOpt('category',
                                 title='Command categories',
                                 help='Available categories',
                                 handler=add_command_parsers)


def main():
    """Parse options and call the appropriate class/method."""
    CONF.register_cli_opt(category_opt)
    try:
        config.parse_args(sys.argv)
        logging.healthnmon_manage_setup()
    except cfg.ConfigFilesNotFoundError:
        cfgfile = CONF.config_file[-1] if CONF.config_file else None
        if cfgfile and not os.access(cfgfile, os.R_OK):
            st = os.stat(cfgfile)
            print _("Could not read %s. Re-running with sudo") % cfgfile
            try:
                os.execvp('sudo', ['sudo', '-u', '#%s' % st.st_uid] + sys.argv)
            except Exception:
                print _('sudo failed, continuing as if nothing happened')

        print _('Please re-run healthnmon-manage as root.')
        sys.exit(2)

    fn = CONF.category.action_fn
    fn_args = [arg.decode('utf-8') for arg in CONF.category.action_args]
    fn_kwargs = {}
    for k in CONF.category.action_kwargs:
        v = getattr(CONF.category, 'action_kwarg_' + k)
        if v is None:
            continue
        if isinstance(v, basestring):
            v = v.decode('utf-8')
        fn_kwargs[k] = v

    # check arguments, then call the action with the remaining arguments
    try:
        cliutils.validate_args(fn, *fn_args, **fn_kwargs)
    except cliutils.MissingArgs as e:
        print fn.__doc__
        CONF.print_help()
        print e
        sys.exit(1)
    try:
        fn(*fn_args, **fn_kwargs)
        rpc.cleanup()
        sys.exit(0)
    except Exception:
        print _("Command failed, please check log for more info")
        raise


if __name__ == '__main__':
    main()
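The dispatch machinery in healthnmon-manage — `methods_of` enumerating public methods and `lazy_match` resolving abbreviated category/action names — is independent of nova and still a handy pattern. A condensed Python 3 sketch under two stated deviations: it returns the single match instead of a list, and raises `LookupError` instead of printing and calling `sys.exit`:

```python
def methods_of(obj):
    """Public callable attributes of obj, as (name, method) tuples."""
    return [(name, getattr(obj, name)) for name in dir(obj)
            if callable(getattr(obj, name)) and not name.startswith('_')]


def lazy_match(name, key_value_tuples):
    """Return the single (key, value) pair whose key starts with name
    (case-insensitive); raise if the prefix matches zero or many keys."""
    matches = [(k, v) for k, v in key_value_tuples
               if k.lower().startswith(name.lower())]
    if not matches:
        raise LookupError('%s does not match any options' % name)
    if len(matches) > 1:
        raise LookupError('%s matched multiple options' % name)
    return matches[0]


class DbCommands:
    def sync(self, version=None):
        return 'db synced to %s' % (version or 'latest')

    def version(self):
        return '42'


CATEGORIES = {'db': DbCommands}

# 'd' unambiguously selects 'db'; 's' unambiguously selects 'sync'.
_, cls = lazy_match('d', CATEGORIES.items())
_, action = lazy_match('s', methods_of(cls()))
```

With this resolution step, `manage d s` behaves like `manage db sync`, the same abbreviation behavior nova-manage offered.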

@@ -1,67 +0,0 @@
#!/usr/bin/env python
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import config
from nova import service, utils
from healthnmon import log as logging
from oslo.config import cfg
import eventlet
import gettext
import os
import sys
import healthnmon
from healthnmon.virtproxy.service import HealthnmonVirtProxyService
"""Starter script for healthnmon module."""
eventlet.monkey_patch(
    all=False, os=True, select=True, socket=True, thread=False, time=True)

# If ../healthnmon/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'healthnmon', '__init__.py')):
    sys.path.insert(0, possible_topdir)

gettext.install('healthnmon', unicode=1)

healthnmon_virtproxy_opts = [
    cfg.StrOpt('healthnmon_virtproxy_manager',
               default='healthnmon.virtproxy.manager.HealthnMonVirtProxyManager',
               help='The healthnmon virtproxy manager class to use'),
    cfg.StrOpt('healthnmon_virtproxy_topic',
               default='healthnmon.virtproxy',
               help='The topic used by healthnmon-virtproxy'),
]

CONF = cfg.CONF
CONF.register_opts(healthnmon_virtproxy_opts)

if __name__ == '__main__':
    logging.healthnmon_virtproxy_setup()
    utils.monkey_patch()
    config.parse_args(sys.argv)
    server = HealthnmonVirtProxyService.create(
        binary='healthnmon-virtproxy',
        topic=CONF.healthnmon_virtproxy_topic,
        manager=CONF.healthnmon_virtproxy_manager)
    service.serve(server)
    service.wait()

@@ -1,101 +0,0 @@
#!/bin/bash
function usage {
    echo " -c, Run Unit Test Cases"
    echo " -t, Create Healthnmon Tarball"
    echo " -r, Create Healthnmon RPM"
    echo " -d, Create Healthnmon DEBIAN"
    exit
}

create_rpm=0
create_tar=0
create_deb=0
run_tests=0

if [[ $# -eq 0 ]]
then
    usage
    exit 1
fi

while getopts "ctrd" OPTION
do
    case $OPTION in
        c)
            run_tests=1
            ;;
        t)
            create_tar=1
            ;;
        r)
            create_tar=1
            create_rpm=1
            ;;
        d)
            create_tar=1
            create_deb=1
            ;;
        ?)
            usage
            exit 1
            ;;
    esac
done

if [ $run_tests -eq 1 ]; then
    tox -epy26
    status=$?
    if [[ $status -ne 0 ]]
    then
        exit 1
    fi
fi

if [ $create_tar -eq 1 ]; then
    rm -rf healthnmon/versioninfo
    python setup.py sdist
    status=$?
    if [[ $status -ne 0 ]]
    then
        echo "Error: Failed to create healthnmon tar"
        exit 1
    else
        echo "Successfully created healthnmon tar."
    fi
fi

if [ $create_rpm -eq 1 ]; then
    ver=`python rpm_util.py`
    rpmBuildPath=`pwd`/target/rpmbuild
    rm -rf $rpmBuildPath
    mkdir -p $rpmBuildPath/SOURCES
    cp dist/healthnmon*.tar.gz $rpmBuildPath/SOURCES
    cp rpm/healthnmon*.init $rpmBuildPath/SOURCES
    cp rpm/copyright $rpmBuildPath/SOURCES
    rpmbuild --define "_topdir $rpmBuildPath" --define "ver $ver" --define "release `date +%Y%m%d.%H%M%S`" -ba rpm/healthnmon.spec
    status=$?
    if [[ $status -ne 0 ]]
    then
        echo "Error: Failed to create healthnmon RPM"
        exit 1
    else
        echo "Successfully created healthnmon RPM."
    fi
fi

if [ $create_deb -eq 1 ]; then
    tarPath=`pwd`/dist/healthnmon-*.tar.gz
    python builddeb.py $tarPath "Changelog comments"
    status=$?
    if [[ $status -ne 0 ]]
    then
        echo "Error: Failed to create healthnmon DEBIAN"
        exit 1
    else
        echo "Successfully created healthnmon DEBIAN."
    fi
fi

builddeb.py

@@ -1,46 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import os
if not os.path.exists('tmp'):
    os.makedirs('tmp')
os.chdir('tmp')

tar_file = sys.argv[1]
if os.path.isfile(tar_file) is False:
    print 'Invalid path for the tar ball'
    sys.exit()

os.system('tar -xzf %s' % tar_file)
source_dir = sys.argv[1][:-7].split('/')[-1]
print 'Source_dir created = %s' % source_dir
os.system('cp -r ../debian %s' % source_dir)
os.chdir(source_dir)
os.system('dch --increment %s' % sys.argv[2])
res = os.system('debuild --no-tgz-check -us -uc')
if res != 0:
    print 'Build failed'
    sys.exit(1)
os.chdir('../')
files = os.listdir('.')
for fileName in files:
    if fileName.endswith(".deb"):
        if not os.path.exists('../target/debbuild'):
            os.makedirs('../target/debbuild')
        os.system('mv %s ../target/debbuild' % fileName)
print 'Debian packages created successfully'
print 'Check in the debbuild directory'
print 'Now removing tmp directory'
os.chdir('../')
os.system('rm -rf tmp')
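builddeb.py shells out to `tar -xzf` and derives the source directory name by slicing `.tar.gz` off the tarball's basename. The same extract-and-derive step can be done in-process with the standard library; a sketch, where `extract_source` is a hypothetical helper name rather than part of the original script:

```python
import os
import tarfile


def extract_source(tar_path, workdir):
    """Extract an sdist tarball into workdir and return the path of the
    top-level source directory (healthnmon-X.Y.tar.gz -> healthnmon-X.Y),
    mirroring builddeb.py's 'tar -xzf' plus name-slicing step."""
    with tarfile.open(tar_path, 'r:gz') as tf:
        tf.extractall(workdir)
    # Same derivation as builddeb.py: basename minus the '.tar.gz' suffix.
    source_dir = os.path.basename(tar_path)[:-len('.tar.gz')]
    return os.path.join(workdir, source_dir)
```

Using `tarfile` avoids depending on an external `tar` binary and makes failures surface as Python exceptions instead of ignored `os.system` return codes.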

debian/changelog

@@ -1,5 +0,0 @@
healthnmon (2013.1-3) precise; urgency=low

  * Initial release

 -- healthnmon <healthnmon@lists.launchpad.net>  Thu, 28 Jun 2012 03:42:25 -0700

debian/compat

@@ -1 +0,0 @@
8

debian/control

@@ -1,56 +0,0 @@
Source: healthnmon
Section: net
Priority: extra
Maintainer: divakar-padiyar-nandavar <divakar.padiyar-nandavar@hp.com>
Build-Depends: debhelper (>= 8.0.0)
Standards-Version: 3.9.2
Homepage: https://launchpad.net/healthnmon/
#Vcs-Git: git://git.debian.org/collab-maint/healthnmon.git
#Vcs-Browser: http://git.debian.org/?p=collab-maint/healthnmon.git;a=summary
Package: healthnmon
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, python-healthnmon
Description: This project provides health and monitoring service for cloud
Package: python-healthnmon
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends},
python (>= 2.6),
python (< 2.8),
python-boto,
m2crypto,
python-pycurl,
python-daemon,
python-carrot,
python-kombu,
python-lockfile,
python-gflags,
openssl,
python-libxml2,
python-ldap,
python-sqlalchemy,
python-eventlet,
python-routes,
python-webob,
python-cheetah,
python-netaddr,
python-paste,
python-pastedeploy,
python-tempita,
python-migrate,
python-glance,
python-novaclient,
python-simplejson,
python-lxml,
python-feedparser,
python-xattr,
python-suds,
sudo,
python-crypto,
python-libvirt,
sudo,
python-paramiko
Description: Healthnmon project provides health and monitoring service for cloud
This package contains the Python libraries.

debian/copyright

@@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,94 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: healthnmon
# Required-Start: $network $local_fs $remote_fs $syslog
# Required-Stop: $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Health and monitoring service
# Description: Health n monitoring service initiator
### END INIT INFO
# Author: divakar-padiyar-nandavar <divakar.padiyar-nandavar@hp.com>
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Health and monitoring service"
NAME=healthnmon
DAEMON=/bin/healthnmon
DAEMON_ARGS="--config-file=/etc/nova/nova.conf"
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
NOVA_USER=nova
LOCK_DIR=/var/lock/healthnmon/
# Exit if the package is not installed
[ -x $DAEMON ] || exit 0
mkdir -p ${LOCK_DIR}
chown ${NOVA_USER} ${LOCK_DIR}
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
. /lib/lsb/init-functions
do_start()
{
start-stop-daemon --start --background --quiet --chuid nova:nova --make-pidfile --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS || return 1
}
do_stop()
{
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE
RETVAL="$?"
rm -f $PIDFILE
return "$RETVAL"
}
case "$1" in
start)
log_daemon_msg "Starting $DESC " "$NAME"
do_start
case "$?" in
0|1) log_end_msg 0 ;;
2) log_end_msg 1 ;;
esac
;;
stop)
log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) log_end_msg 0 ;;
2) log_end_msg 1 ;;
esac
;;
status)
status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
restart|force-reload)
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac

@@ -1,2 +0,0 @@
bin/*
etc/*

@@ -1,15 +0,0 @@
#!/bin/bash
mkdir -p /var/log/healthnmon
touch /var/log/healthnmon/healthnmon.log
touch /var/log/healthnmon/healthnmon-manage.log
chown -R nova:nova /var/log/healthnmon/
chmod 0700 /var/log/healthnmon
# create directory for healthnmon service in /var/run
mkdir -p /var/run/healthnmon
if ! grep -q sql_connection /etc/nova/nova.conf
then
su -s /bin/sh -c 'healthnmon-manage db sync' nova
fi
service healthnmon restart

@@ -1 +0,0 @@
healthnmon/* /usr/share/pyshared/healthnmon/

debian/rules
@@ -1,17 +0,0 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
#%:
# dh $@
WITH_PYTHON2 = $(shell test -f /usr/bin/dh_python2 && echo "--with python2")
%:
dh $@ ${WITH_PYTHON2}

@@ -1,52 +0,0 @@
[loggers]
keys=root,nova,healthnmon
[handlers]
keys=sysout,healthnmon_logfile,healthnmon_audit_logfile
[formatters]
keys=healthnmon_formatter,healthnmon_audit_formatter
[logger_root]
level=WARN
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
[logger_healthnmon]
level=INFO
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
propagate=0
qualname=healthnmon
[logger_nova]
level=INFO
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
propagate=0
qualname=nova
[handler_sysout]
class=StreamHandler
level=NOTSET
formatter=healthnmon_formatter
args=(sys.stdout,)
[handler_healthnmon_logfile]
class=healthnmon.log.HealthnmonLogHandler
level=NOTSET
formatter=healthnmon_formatter
args=('/var/log/healthnmon/collector.log',)
[handler_healthnmon_audit_logfile]
class=healthnmon.log.HealthnmonAuditHandler
level=AUDIT
formatter=healthnmon_audit_formatter
args=('/var/log/healthnmon/collector_audit.log',)
[formatter_healthnmon_formatter]
format=%(asctime)s %(levelname)-8s %(name)-15s %(message)s
datefmt=
class=healthnmon.log.HealthnmonFormatter
[formatter_healthnmon_audit_formatter]
format=%(asctime)s %(levelname)-8s %(name)-15s %(message)s
datefmt=
class=healthnmon.log.HealthnmonAuditFormatter

@@ -1,42 +0,0 @@
[loggers]
keys=root,nova,healthnmon-manage
[handlers]
keys=sysout,healthnmon-manage_logfile
[formatters]
keys=healthnmon_formatter
[logger_root]
level=WARN
handlers=sysout,healthnmon-manage_logfile
[logger_healthnmon-manage]
level=INFO
handlers=sysout,healthnmon-manage_logfile
propagate=0
qualname=healthnmon-manage
[logger_nova]
level=INFO
handlers=sysout,healthnmon-manage_logfile
propagate=0
qualname=nova
[handler_sysout]
class=StreamHandler
level=NOTSET
formatter=healthnmon_formatter
args=(sys.stdout,)
[handler_healthnmon-manage_logfile]
class=healthnmon.log.HealthnmonLogHandler
level=NOTSET
formatter=healthnmon_formatter
args=('/var/log/healthnmon/healthnmon-manage.log',)
[formatter_healthnmon_formatter]
format=%(asctime)s %(levelname)-8s %(name)-15s %(message)s
datefmt=
class=healthnmon.log.HealthnmonFormatter

@@ -1,52 +0,0 @@
[loggers]
keys=root,nova,healthnmon
[handlers]
keys=sysout,healthnmon_logfile,healthnmon_audit_logfile
[formatters]
keys=healthnmon_formatter,healthnmon_audit_formatter
[logger_root]
level=WARN
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
[logger_healthnmon]
level=INFO
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
propagate=0
qualname=healthnmon
[logger_nova]
level=INFO
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
propagate=0
qualname=nova
[handler_sysout]
class=StreamHandler
level=NOTSET
formatter=healthnmon_formatter
args=(sys.stdout,)
[handler_healthnmon_logfile]
class=healthnmon.log.HealthnmonLogHandler
level=NOTSET
formatter=healthnmon_formatter
args=('/var/log/healthnmon/virtproxy.log',)
[handler_healthnmon_audit_logfile]
class=healthnmon.log.HealthnmonAuditHandler
level=AUDIT
formatter=healthnmon_audit_formatter
args=('/var/log/healthnmon/virtproxy_audit.log',)
[formatter_healthnmon_formatter]
format=%(asctime)s %(levelname)-8s %(name)-15s %(message)s
datefmt=
class=healthnmon.log.HealthnmonFormatter
[formatter_healthnmon_audit_formatter]
format=%(asctime)s %(levelname)-8s %(name)-15s %(message)s
datefmt=
class=healthnmon.log.HealthnmonAuditFormatter

@@ -1,52 +0,0 @@
[loggers]
keys=root,nova,healthnmon
[handlers]
keys=sysout,healthnmon_logfile,healthnmon_audit_logfile
[formatters]
keys=healthnmon_formatter,healthnmon_audit_formatter
[logger_root]
level=WARN
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
[logger_healthnmon]
level=INFO
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
propagate=0
qualname=healthnmon
[logger_nova]
level=INFO
handlers=sysout,healthnmon_logfile,healthnmon_audit_logfile
propagate=0
qualname=nova
[handler_sysout]
class=StreamHandler
level=NOTSET
formatter=healthnmon_formatter
args=(sys.stdout,)
[handler_healthnmon_logfile]
class=healthnmon.log.HealthnmonLogHandler
level=NOTSET
formatter=healthnmon_formatter
args=('/var/log/healthnmon/healthnmon.log',)
[handler_healthnmon_audit_logfile]
class=healthnmon.log.HealthnmonAuditHandler
level=AUDIT
formatter=healthnmon_audit_formatter
args=('/var/log/healthnmon/healthnmon_audit.log',)
[formatter_healthnmon_formatter]
format=%(asctime)s %(levelname)-8s %(name)-15s %(message)s
datefmt=
class=healthnmon.log.HealthnmonFormatter
[formatter_healthnmon_audit_formatter]
format=%(asctime)s %(levelname)-8s %(name)-15s %(message)s
datefmt=
class=healthnmon.log.HealthnmonAuditFormatter

@@ -1 +0,0 @@
/coverage.xml

@@ -1,33 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`healthnmon` -- Health and Monitoring module for cloud
===================================================================
.. synopsis:: Health and Monitoring module for cloud
.. moduleauthor:: Divakar Padiyar Nandavar <divakar.padiyar-nandavar@hp.com>
.. moduleauthor:: Suryanarayana Raju <snraju@hp.com>
"""
import os
def get_healthnmon_location():
""" Get the location of the healthnmon package
"""
return os.path.join(os.path.dirname(__file__))

@@ -1,19 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
Health and monitoring API as nova compute extension API's
'''

@@ -1,380 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
Base controller class for healthnmon extensions
'''
import json
import os
import webob
from webob import exc
from sqlalchemy import exc as sql_exc
from nova.exception import Invalid
from nova.api.openstack import common
from nova.openstack.common import timeutils
from ..api import util
from ..api import constants
from oslo.config import cfg
from nova.openstack.common import log as logging
from ..constants import DbConstants
from ..resourcemodel import healthnmonResourceModel
from types import ListType
import calendar
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class Controller(common.ViewBuilder):
'''
Base controller class for healthnmon extensions. The methods in this class
can be used by the derived resource controllers and actions to
build and return webob responses for various API operations.
'''
def __init__(self, collection_name, member_name, model_name):
''' Initialize controller.
:param collection_name: collection name for the resource
:param member_name: member name(used for each member) in the resource
:param model_name: healthnmon resource model class name for the
resource
'''
self._collection_name = collection_name
self._member_name = member_name
self._model_name = model_name
def _index(self, req, items, collection_links):
""" List all resources as simple list with appropriate resource links
:param req: webob request
:param items: list of items to be listed
:param collection_links: next/prev links for collection list.
:returns: webob response for simple list operation.
"""
item_dict_list = []
for item in items:
itemdict = {'id': item.get_id(),
'name': item.get_name(),
'links': self._get_links(req, item.get_id(),
self._collection_name)
}
item_dict_list.append(itemdict)
LOG.debug(_('Appending item:' + str(itemdict)))
resources_dict = {
self._collection_name: item_dict_list
}
if collection_links:
resources_dict[self._collection_name + '_links'] = collection_links
# Create response
nsmap = {None: constants.XMLNS_HEALTHNMON_EXTENSION_API,
'atom': constants.XMLNS_ATOM}
if util.get_content_accept_type(req) == 'xml':
return util.create_response('application/xml',
util.get_entity_list_xml(
resources_dict, nsmap,
self._collection_name,
self._member_name))
else:
return util.create_response('application/json',
json.dumps(resources_dict))
def _detail(self, req, items, collection_links):
""" List all resources as a detailed list with appropriate
resource links
:param req: webob request
:param items: list of items to be listed in detail
:param collection_links: next/prev links for collection list.
:returns: webob response for detail list operation.
"""
content_type = util.get_content_accept_type(req)
nsmap = {None: constants.XMLNS_HEALTHNMON_EXTENSION_API,
'atom': constants.XMLNS_ATOM}
# Create an empty parent xml
parent_xml = util.get_entity_list_xml({self._collection_name: {}},
nsmap,
self._collection_name,
self._model_name)
item_list = []
for item in items:
(resource_xml,
out_dict) = self._get_resource_xml_with_links(req, item)
if content_type == 'xml':
parent_xml = util.append_xml_as_child(parent_xml, resource_xml)
else:
converted_dict = util.xml_to_dict(resource_xml)
if converted_dict is None:
converted_dict = {}
resource_dict = {self._model_name: converted_dict}
# The following is commented since we've incorporated
# the required functionality in xml to dict method. A separate
# call to this method, hence is not required.
# util.update_dict_using_xpath(resource_dict, out_dict)
LOG.debug(_('Dict after conversion %s'
% str(resource_dict)))
LOG.debug(_('Appending item:' + str(resource_dict)))
item_list.append(resource_dict[self._model_name])
resources_dict = {self._collection_name: item_list}
if collection_links:
resources_dict[self._collection_name + '_links'] = collection_links
for link in resources_dict[self._collection_name + '_links']:
parent_xml = util.append_xml_as_child(parent_xml,
util.get_next_xml(link))
if util.get_content_accept_type(req) == 'xml':
return util.create_response('application/xml', parent_xml)
else:
return util.create_response('application/json',
json.dumps(resources_dict))
def _get_resource_xml_with_links(self, req, item):
""" Get item resource as xml updated with
reference links to other resources.
:param req: request object
:param item: resource object as per resource model
:returns: (resource_xml, out_dict) tuple where,
resource_xml is the updated xml and
out_dict is a dictionary with keys as
the xpath of replaced entities and
value is the corresponding entity dict.
"""
proj_id = req.environ["nova.context"].project_id
resource_xml = util.dump_resource_xml(item, self._model_name)
out_dict = {}
resource_xml_update = util.replace_with_links(
resource_xml,
self._get_resource_tag_dict_list(req.application_url,
proj_id),
out_dict)
field_list = util.get_query_fields(req)
if field_list is not None:
resource_xml_update = \
util.get_select_elements_xml(resource_xml_update,
field_list, 'id')
return (resource_xml_update, out_dict)
def _get_resource_tag_dict_list(self, application_url, proj_id):
""" Get the list of tag dictionaries applicable to
resource
:param application_url: application url from request
:param proj_id: project id
:returns: list of tag dictionaries for resources
"""
return []
def _show(self, req, item):
""" Display details for particular resource
identified by resource id.
:param req: webob request
:param item: resource item to be shown
:returns: complete resource details for the specified item and
request.
"""
(resource_xml, out_dict) = self._get_resource_xml_with_links(req, item)
if util.get_content_accept_type(req) == 'xml':
return util.create_response('application/xml', resource_xml)
else:
# Parsing back xml to remove instance state attributes
# in the object
converted_dict = util.xml_to_dict(resource_xml)
if converted_dict is None:
converted_dict = {}
resource_dict = {self._model_name: converted_dict}
util.update_dict_using_xpath(resource_dict, out_dict)
LOG.debug(_('Dict after conversion %s'
% str(resource_dict)))
return util.create_response('application/json',
json.dumps(resource_dict))
def get_all_by_filters(self, req, func):
"""
Get all items from the resource interface with filters parsed from the
request.
:param req: webob request
:param func: resource interface function taking parameters context,
filters, sort_key and sort_dir
:returns: all filtered items of the resource model type.
"""
ctx = util.get_project_context(req)[0]
filters, sort_key, sort_dir = self.get_search_options(
req,
getattr(healthnmonResourceModel, self._model_name))
try:
return func(ctx, filters, sort_key, sort_dir)
except sql_exc.DataError, e:
LOG.error(_('Data value error %s ' % str(e)), exc_info=1)
raise Invalid(message=_('Invalid parameter values'))
def get_search_options(self, req, model):
""" Get search options from WebOb request which can be
input to xxx_get_all_by_filters DB APIs
Arguments:
req - WebOb request object
model - Resource model object for which this API is invoked
Returns:
tuple containing dictionary of filters,
sort_key and sort direction
"""
query_params = {}
query_params.update(req.GET)
for key in query_params:
if(len(req.GET.getall(key)) > 1):
query_params[key] = req.GET.getall(key)
filters = {}
# Parse ISO 8601 formatted changes-since input to epoch millisecs
if 'changes-since' in query_params:
try:
parsed = timeutils.parse_isotime(query_params['changes-since'])
utctimetuple = parsed.utctimetuple()
epoch_ms = long(calendar.timegm(utctimetuple) * 1000L)
except ValueError:
msg = _('Invalid changes-since value')
raise exc.HTTPBadRequest(explanation=msg)
filters['changes-since'] = epoch_ms
if 'deleted' in query_params:
if query_params['deleted'].lower() == 'true':
filters['deleted'] = 'true'
elif query_params['deleted'].lower() == 'false':
filters['deleted'] = 'false'
else:
msg = _('Invalid deleted value')
raise exc.HTTPBadRequest(explanation=msg)
# By default, dbs xxx_get_all_by_filters() will return deleted rows.
# If an admin hasn't specified a 'deleted' search option, we need
# to filter out deleted rows by setting the filter ourselves.
# ... Unless 'changes-since' is specified, because 'changes-since'
# should return recently deleted rows also.
if 'deleted' not in query_params:
if 'changes-since' not in query_params:
# No 'changes-since', so we only want non-deleted rows
filters['deleted'] = 'false'
model_members = model.get_all_members()
for key in query_params:
if key in model_members:
value = model_members[key]
# For enum the value.data_type would be as
# [<Enumname>, xs:String]
if (type(value.data_type) == ListType):
value.data_type = value.data_type[1]
if not hasattr(healthnmonResourceModel, value.data_type):
filters[key] = query_params[key]
sort_key = None
sort_dir = DbConstants.ORDER_DESC
if 'createEpoch' in model_members:
sort_key = 'createEpoch'
sort_dir = DbConstants.ORDER_DESC
else:
sort_key = 'id'
sort_dir = DbConstants.ORDER_DESC
return (filters, sort_key, sort_dir)
def limited_by_marker(self, items, request,
max_limit=CONF.osapi_max_limit):
"""
Return a tuple with slice of items according to the requested marker
and limit and a set of collection links
:params items: resource item list
:params request: webob request
:params max_limit: maximum number of items to be returned
:returns: (limited item list, collection links list) as a tuple
"""
collection_links = []
params = common.get_pagination_params(request)
limit = params.get('limit', max_limit)
marker = params.get('marker')
limit = min(max_limit, limit)
if limit == 0:
return ([], [])
start_index = 0
if marker:
start_index = -1
for i, item in enumerate(items):
# NOTE(siva): getter from generateDS
if item.get_id() == marker:
start_index = i + 1
break
if start_index < 0:
msg = _('marker [%s] not found') % marker
raise webob.exc.HTTPBadRequest(explanation=msg)
range_end = start_index + limit
prev_index = start_index - limit
try:
items[range_end]
items[range_end - 1]
except Exception:
pass
else:
collection_links.append({
'rel': 'next',
'href': self._get_next_link(
request,
str(items[range_end - 1].get_id()), self._collection_name)
})
if prev_index > 0:
collection_links.append({
'rel': 'previous',
'href': self._get_previous_link(
request,
str(items[prev_index - 1].get_id()), self._collection_name)
})
elif prev_index == 0:
collection_links.append({
'rel': 'previous',
'href': self._get_previous_link(request,
None, self._collection_name)
})
return (items[start_index:range_end], collection_links)
def _get_previous_link(self, request, identifier, collection_name):
"""
Return href string with proper limit and marker params. If identifier
is not specified, no marker would be added.
:params request: webob request
:params identifier: unique identifier for the resource
:returns: href string with limit and marker params.
"""
params = request.params.copy()
if identifier:
params["marker"] = identifier
elif "marker" in params:
del params["marker"]
prefix = self._update_link_prefix(request.application_url,
CONF.osapi_compute_link_prefix)
url = os.path.join(prefix,
request.environ["nova.context"].project_id,
collection_name)
return "%s?%s" % (url, common.dict_to_query_str(params))
# NOTE(siva): This method is overridden to retain filtered output.
def _get_href_link(self, request, identifier, collection_name):
"""Return an href string pointing to this object."""
prefix = self._update_link_prefix(request.application_url,
CONF.osapi_compute_link_prefix)
url = os.path.join(prefix,
request.environ["nova.context"].project_id,
collection_name,
str(identifier))
if 'fields' in request.params:
return "%s?%s" % (url,
common.dict_to_query_str(
{'fields': request.params['fields']}))
else:
return url

@@ -1,60 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
''' Constants for api module '''
XMLNS_HEALTHNMON_EXTENSION_API = \
'http://docs.openstack.org/ext/healthnmon/api/v2.0'
XMLNS_ATOM = 'http://www.w3.org/2005/Atom'
ATOM = '{%s}' % XMLNS_ATOM
STORAGEVOLUME_COLLECTION_NAME = 'storagevolumes'
VMHOSTS_COLLECTION_NAME = 'vmhosts'
VM_COLLECTION_NAME = 'virtualmachines'
SUBNET_COLLECTION_NAME = 'subnets'
VIRTUAL_SWITCH_COLLECTION_NAME = 'virtualswitches'
MEMBER_MAP = {
'vmhost': VMHOSTS_COLLECTION_NAME,
'virtualmachine': VM_COLLECTION_NAME,
'subnet': SUBNET_COLLECTION_NAME,
'virtualswitch': VIRTUAL_SWITCH_COLLECTION_NAME,
'storagevolume': STORAGEVOLUME_COLLECTION_NAME,
}
QUERY_FIELD_KEY = 'fields'
PERFORMANCE_DATA_ATTRIBUTES = (
'cpuUserLoad',
'cpuSystemLoad',
'hostCpuSpeed',
'hostMaxCpuSpeed',
'ncpus',
'diskRead',
'diskWrite',
'netRead',
'netWrite',
'totalMemory',
'freeMemory',
'configuredMemory',
'uptimeMinute',
'reservedSystemCapacity',
'maximumSystemCapacity',
'relativeWeight',
'reservedSystemMemory',
'maximumSystemMemory',
'memoryRelativeWeight',
'uuid',
)

@@ -1,69 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.api.openstack import extensions
from ..api import storagevolume
from ..api import vmhosts
from ..api import vm
from ..api import subnet
from ..api import virtualswitch
from ..api import constants
from .. import log as logging
LOG = logging.getLogger('healthnmon.api')
class Healthnmon(extensions.ExtensionDescriptor):
""" Health and monitoring API as nova compute extension API's
"""
name = 'healthnmon'
alias = 'healthnmon'
namespace = constants.XMLNS_HEALTHNMON_EXTENSION_API
updated = '2012-01-22T13:25:27-06:00'
def get_resources(self):
LOG.info(_('Adding healthnmon resource extensions'))
resources = []
vmhosts_resource = \
extensions.ResourceExtension(constants.VMHOSTS_COLLECTION_NAME,
vmhosts.VmHostsController(),
collection_actions={'detail': 'GET'})
vm_resource = \
extensions.ResourceExtension(constants.VM_COLLECTION_NAME,
vm.VMController(),
collection_actions={'detail': 'GET'})
storage_resource = \
extensions.ResourceExtension(
constants.STORAGEVOLUME_COLLECTION_NAME,
storagevolume.StorageVolumeController(),
collection_actions={'detail': 'GET'})
subnet_resource = \
extensions.ResourceExtension(constants.SUBNET_COLLECTION_NAME,
subnet.SubnetController(),
collection_actions={'detail': 'GET'})
virtual_switch_resource = \
extensions.ResourceExtension(
constants.VIRTUAL_SWITCH_COLLECTION_NAME,
virtualswitch.VirtualSwitchController(),
collection_actions={'detail': 'GET'})
resources.append(vmhosts_resource)
resources.append(vm_resource)
resources.append(storage_resource)
resources.append(subnet_resource)
resources.append(virtual_switch_resource)
return resources

@@ -1,110 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import util
from .. import log as logging
from webob.exc import HTTPNotFound
import os
from .. import healthnmon_api as api
from ..api import constants
from ..api import base
LOG = logging.getLogger(__name__)
class StorageVolumeController(base.Controller):
""" Controller class for StorageVolume resource extension """
def __init__(self):
''' Initialize controller with resource specific param values '''
base.Controller.__init__(self,
constants.STORAGEVOLUME_COLLECTION_NAME,
'storagevolume',
'StorageVolume')
def index(self, req):
""" List all StorageVolumes as a simple list
:param req: webob request
:returns: simple list of StorageVolumes with appropriate
resource links.
"""
storagevolumes = self.get_all_by_filters(
req,
api.storage_volume_get_all_by_filters)
if not storagevolumes:
storagevolumes = []
limited_list, collection_links = self.limited_by_marker(storagevolumes,
req)
return self._index(req, limited_list, collection_links)
def detail(self, req):
"""
List all StorageVolumes as a detailed list with appropriate
resource links
:param req: webob request
:returns: webob response for detail list operation.
"""
storagevolumes = self.get_all_by_filters(
req,
api.storage_volume_get_all_by_filters)
if not storagevolumes:
storagevolumes = []
limited_list, collection_links = self.limited_by_marker(
storagevolumes,
req)
return self._detail(req, limited_list, collection_links)
def _get_resource_tag_dict_list(self, application_url, proj_id):
""" Get the list of tag dictionaries applicable to the resource
:param application_url: application url from request
:param proj_id: project id
:returns: list of tag dictionaries for the resource
"""
return [{
'tag': 'vmHostId',
'tag_replacement': 'vmhost',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id, constants.VMHOSTS_COLLECTION_NAME),
'tag_attrib': None,
}]
def show(self, req, id):
""" Display details for particular StorageVolume
identified by resource id.
:param req: webob request
:param id: unique id to identify StorageVolume resource.
:returns: complete StorageVolume resource details for the
specified id and request.
"""
try:
LOG.debug(_('Show storagevolume id : %s' % str(id)))
(ctx, proj_id) = util.get_project_context(req)
storagevolume_list = api.storage_volume_get_by_ids(ctx, [id])
LOG.debug(_('Project id: %s Received storagevolumes from database'
% proj_id))
if storagevolume_list:
return self._show(req, storagevolume_list[0])
except Exception, err:
LOG.error(_('Exception while fetching data from healthnmon api %s'
% str(err)), exc_info=1)
return HTTPNotFound()

@@ -1,92 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ..api import util
from ..api import constants
from ..api import base
from .. import healthnmon_api as api
from .. import log as logging
from webob.exc import HTTPNotFound
LOG = logging.getLogger(__name__)
class SubnetController(base.Controller):
'''
Subnet controller for handling subnet
resource api calls.
'''
def __init__(self):
''' Initialize controller with resource specific param values '''
base.Controller.__init__(self,
constants.SUBNET_COLLECTION_NAME,
'subnet',
'Subnet')
def index(self, req):
""" List all subnets as a simple list
:param req: webob request
:returns: simple list of subnets with resource links to each subnet.
"""
subnet_list = self.get_all_by_filters(req,
api.subnet_get_all_by_filters)
if not subnet_list:
subnet_list = []
limited_list, collection_links = self.limited_by_marker(
subnet_list,
req)
return self._index(req, limited_list, collection_links)
def detail(self, req):
"""
List all subnets as a detailed list with appropriate
resource links
:param req: webob request
:returns: webob response for detail list operation.
"""
subnet_list = self.get_all_by_filters(req,
api.subnet_get_all_by_filters)
if not subnet_list:
subnet_list = []
limited_list, collection_links = self.limited_by_marker(
subnet_list,
req)
return self._detail(req, limited_list, collection_links)
def show(self, req, id):
""" Display details for particular subnet
identified by resource id.
:param req: webob request
:param id: unique id to identify subnet resource.
:returns: complete subnet resource details for the specified id and
request.
"""
try:
LOG.debug(_('Show subnet id : %s' % str(id)))
(ctx, proj_id) = util.get_project_context(req)
subnet_list = api.subnet_get_by_ids(ctx, [id])
LOG.debug(_('Project id: %s Received subnets from the database'
% proj_id))
if subnet_list:
return self._show(req, subnet_list[0])
except Exception, err:
LOG.error(_('Exception while fetching data from healthnmon api %s'
% str(err)), exc_info=1)
return HTTPNotFound()

@@ -1,603 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import StringIO
from xml.dom.minidom import parseString
from xml.dom.minidom import Node
from lxml import etree
from lxml.etree import Element
from lxml.etree import SubElement
from webob import Response
from oslo.config import cfg
from .. import log as logging
from nova.api.openstack import common
from ..api import constants
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
def get_path_elements(path_str):
""" Get path tokens of an absolute path of
an XML element. This also considers
the string between '[' and ']' along with
'/' as separator.
:returns: each token one at a time.
"""
L = path_str.rsplit('/')
for word in L:
if '[' in word:
k = word.split('[')
yield k[0]
for num in k[1::]:
# XPath is indexed at 1
yield (int(num[0:len(num) - 1]) - 1)
else:
yield word
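A minimal Python 3 re-implementation of the tokenizer, showing how a `b[2]` segment splits into the element name plus a 0-based integer index (a sketch for illustration only):

```python
def path_tokens(path_str):
    """Yield path tokens; a '[n]' suffix becomes a separate 0-based
    integer index, since XPath indexes from 1."""
    for word in path_str.split('/'):
        if '[' in word:
            name, *indices = word.split('[')
            yield name
            for idx in indices:
                yield int(idx[:-1]) - 1   # strip ']' and rebase to 0
        else:
            yield word

print(list(path_tokens('vmhost/virtualSwitches[2]/id')))
# ['vmhost', 'virtualSwitches', 1, 'id']
```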
def xml_to_dict(xml_str):
""" Convert xml to a dict object
:param xml_str: well-formed xml string
:returns: simple dict object containing xml as key-value pairs.
"""
xml_str = xml_str.replace('\n', '')
xml_str = xml_str.replace('\r', '')
doc = parseString(xml_str)
unlink_whitespace(doc.documentElement)
return element_to_dict(doc.documentElement)
def element_to_dict(parent):
""" Convert dom element to dictionary
:param parent: parent dom element
:returns: dict object for the dom element
"""
def _get_links_dict(element):
href = element.getAttribute('href')
rel = element.getAttribute('rel')
return {
'href': href,
'rel': rel,
}
def _get_member_dict(element):
named_map = element.attributes
d = {}
for i in xrange(0, len(named_map)):
name = named_map.item(i).name
if str(name).startswith('xmlns'):
continue
d[named_map.item(i).localName] = named_map.item(i).value
child_dict = element_to_dict(element)
if child_dict is not None:
d.update(child_dict)
else:
d.update({'links': []})
return d
child = parent.firstChild
if not child:
return None
elif child.nodeType == Node.TEXT_NODE:
return child.nodeValue
d = {}
while child is not None:
if child.nodeType == Node.ELEMENT_NODE:
try:
# We are hard coding here as atom:link is hard coded elsewhere
# and there is no one particular standard to convert to json
# when xml namespaces are involved. The other reason is to
# be consistent with openstack responses.
                # NOTE(Siva): As a result of the above, if there is a
# child xml element with name 'links'
# that would get added to the links dictionaries.
if child.tagName == 'atom:link':
child.tagName = 'links'
elif child.tagName in constants.MEMBER_MAP:
child.tagName = constants.MEMBER_MAP[child.tagName]
d[child.tagName]
except KeyError:
if child.tagName == 'links':
d[child.tagName] = [_get_links_dict(child)]
elif child.tagName in constants.MEMBER_MAP.values():
d[child.tagName] = [_get_member_dict(child)]
else:
d[child.tagName] = element_to_dict(child)
child = child.nextSibling
continue
if not isinstance(d[child.tagName], list):
first_element = d[child.tagName]
d[child.tagName] = [first_element]
if child.tagName == 'links':
d[child.tagName].append(_get_links_dict(child))
elif child.tagName in constants.MEMBER_MAP.values():
d[child.tagName].append(_get_member_dict(child))
else:
d[child.tagName].append(element_to_dict(child))
child = child.nextSibling
return d
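A stripped-down sketch of the same conversion on Python 3's stdlib minidom, with the `atom:link` and member-map special cases omitted: text-only elements become values, repeated sibling tags become lists.

```python
from xml.dom.minidom import parseString, Node

def simple_xml_to_dict(xml_str):
    """Simplified version of the converter above (no link/member handling)."""
    doc = parseString(xml_str.replace('\n', '').replace('\r', ''))
    return _to_dict(doc.documentElement)

def _to_dict(parent):
    child = parent.firstChild
    if child is None:
        return None
    if child.nodeType == Node.TEXT_NODE:
        return child.nodeValue
    d = {}
    while child is not None:
        if child.nodeType == Node.ELEMENT_NODE:
            value = _to_dict(child)
            if child.tagName in d:
                # repeated tag: promote the existing value to a list
                if not isinstance(d[child.tagName], list):
                    d[child.tagName] = [d[child.tagName]]
                d[child.tagName].append(value)
            else:
                d[child.tagName] = value
        child = child.nextSibling
    return d

print(simple_xml_to_dict('<vmhost><id>host-1</id><name>h1</name></vmhost>'))
# {'id': 'host-1', 'name': 'h1'}
```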
def unlink_whitespace(node, unlink=True):
""" Unlink whitespace nodes from the dom element """
remove_list = []
for child in node.childNodes:
if child.nodeType == Node.TEXT_NODE and not child.data.strip():
remove_list.append(child)
elif child.hasChildNodes():
unlink_whitespace(child, unlink)
for node in remove_list:
node.parentNode.removeChild(node)
if unlink:
node.unlink()
def replace_with_links(xml_str, tag_dict_list, replace_dict_out):
""" Replace entity nodes in input xml with entity references.
tag_dict_list should contain tag dictionaries; each dict should
contain the following keys:
tag: element tag name
tag_replacement : replacement tag in output xml, same as input tag
if None.
tag_key: Key element tag, if entity does not contain any child
elements, this would be added as attribute with value as
element text.
tag_attrib: list of child element tags that are used as attributes
in the replaced entity reference.
tag_collection_url: collection url to be used for creation
of the link.
If the child is an element containing only a single text node
        and tag_key is null, its data is taken as the tag_key.
:param xml_str: input xml with no default namespace prefixes
:param tag_dict_list: list of tag dictionaries
:param replace_dict_out: the xpath of elements replaced are put in the
out parameter dict
:returns: output xml containing entity references
:raises TagDictionaryError: if resource references cannot be
constructed using the given tag dictionary.
"""
def _validate_tag_dict(tag_dict):
if not tag_dict:
return False
try:
tag_dict['tag']
tag_dict['tag_key']
tag_dict['tag_replacement']
tag_dict['tag_attrib']
tag_dict['tag_collection_url']
except KeyError:
return False
return True
def _get_tag_dict_values(tag_dict):
return (tag_dict['tag'], tag_dict['tag_key'],
tag_dict['tag_replacement'], tag_dict['tag_attrib'],
tag_dict['tag_collection_url'])
if not tag_dict_list:
return xml_str
# if (not replace_dict_out):
# replace_dict_out = {}
tree = etree.parse(StringIO.StringIO(xml_str),
etree.XMLParser(remove_blank_text=True))
root = tree.getroot()
rootNS = ''
if not root.prefix and root.nsmap:
rootNS = root.nsmap[None]
elif root.nsmap and root.prefix is not None:
rootNS = root.nsmap[root.prefix]
ROOTNS = '{%s}' % rootNS
for tag_dict in tag_dict_list:
if _validate_tag_dict(tag_dict):
try:
(tag, tag_key, tag_replacement, tag_attrib_list,
tag_collection_url) = _get_tag_dict_values(tag_dict)
elements_to_be_replaced = []
for element in root.iter(ROOTNS + str(tag)):
nsmap = {'atom': constants.XMLNS_ATOM}
out_dict = {}
if not tag_replacement:
tag_replacement = tag
replace_element = Element(ROOTNS + tag_replacement,
nsmap=nsmap)
if tag_attrib_list is not None:
for tag in tag_attrib_list:
if element.find(tag) is not None:
replace_element.attrib[ROOTNS + tag] = \
element.find(tag).text
out_dict[tag] = element.find(tag).text
resource_key = None
if not tag_key or len(element) == 0:
resource_key = element.text
elif tag_key is not None and element.find(
ROOTNS + tag_key) is not None and \
element.find(ROOTNS + tag_key).text is not None:
resource_key = element.find(ROOTNS
+ tag_key).text
if not resource_key:
raise TagDictionaryError(
'No resource key found from tag dictionary:',
tag_dict)
if tag_key is not None:
replace_element.attrib[ROOTNS + tag_key] = \
resource_key
out_dict[tag_key] = resource_key
href = os.path.join(tag_collection_url,
str(resource_key))
bookmark = \
os.path.join(
common.remove_version_from_href(
tag_collection_url),
str(resource_key))
links = [{'rel': 'self', 'href': href},
{'rel': 'bookmark', 'href': bookmark}]
for link_dict in links:
SubElement(replace_element, constants.ATOM
+ 'link', attrib=link_dict)
out_dict['links'] = links
elements_to_be_replaced.append((element,
replace_element, out_dict))
for (element, replace_element, out_dict) in \
elements_to_be_replaced:
if element.getparent() is None:
tree._setroot(replace_element)
else:
element.getparent().replace(element,
replace_element)
for (element, replace_element, out_dict) in \
elements_to_be_replaced:
LOG.debug(_('Replaced element path: %s'
% replace_element.getroottree().getpath(
replace_element)))
replace_dict_out.update(
{tree.getpath(replace_element): out_dict})
except (KeyError, IndexError, ValueError), err:
                LOG.error(_('Lookup error while replacing tags in \
healthnmon api... %s' % str(err)), exc_info=1)
return etree.tostringlist(tree.getroot())[0]
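For reference, a tag dictionary carrying all five required keys, paired with the same presence check `_validate_tag_dict` applies (the collection URL here is illustrative only):

```python
def validate_tag_dict(tag_dict):
    """Mirror of the _validate_tag_dict check above: all five keys
    must be present, otherwise the dictionary is rejected."""
    required = ('tag', 'tag_key', 'tag_replacement', 'tag_attrib',
                'tag_collection_url')
    return bool(tag_dict) and all(k in tag_dict for k in required)

# example tag dictionary (URL is illustrative)
subnet_tag_dict = {
    'tag': 'subnetIds',
    'tag_replacement': 'subnet',
    'tag_key': 'id',
    'tag_attrib': None,
    'tag_collection_url': 'http://localhost:8774/v2/admin/subnets',
}
print(validate_tag_dict(subnet_tag_dict))
# True
```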
def dump_resource_xml(resource_obj, tag):
"""Serialize object using resource model """
LOG.debug(_('Exporting tag: %s as xml...' % tag))
xml_out_file = StringIO.StringIO()
resource_obj.export(xml_out_file, 0, name_=tag)
return xml_out_file.getvalue()
def get_project_context(req):
""" Get project context from request
:param req: request object from which context would be fetched.
:returns: project context tuple (context, project_id)
"""
context = None
project_id = ''
try:
context = req.environ['nova.context']
project_id = context.project_id
except KeyError, err:
LOG.error(_('Exception while fetching nova context from request... %s '
% str(err)), exc_info=1)
return (context, project_id)
def get_content_accept_type(req):
""" Returns either xml or json depending on the type
specified in http request path or in the accept header of the
request.
The content type specified in the request path takes
priority.
"""
def is_path_accept(req, data_type):
""" Returns True if the request path ends with the specified
data type
"""
if str(req.path_info).endswith('.' + data_type):
return True
else:
return False
def is_header_accept(req, content_type):
""" Returns True if the content_type matches any of the accept headers
specified in the request
"""
for header in list(req.accept):
try:
str(header).index(content_type)
except ValueError:
continue
return True
return False
if is_path_accept(req, 'json'):
return 'json'
elif is_path_accept(req, 'xml'):
return 'xml'
elif is_header_accept(req, 'xml'):
return 'xml'
elif is_header_accept(req, 'json'):
return 'json'
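The resolution order above can be shown without webob; this standalone sketch takes the path string and a list of accept-header strings directly (returning `None`, as the original implicitly does, when nothing matches):

```python
def resolve_accept_type(path_info, accept_headers):
    """Path suffix wins; otherwise 'xml' is checked before 'json'
    in the accept headers (sketch of the resolution order above)."""
    if path_info.endswith('.json'):
        return 'json'
    if path_info.endswith('.xml'):
        return 'xml'
    for content_type in ('xml', 'json'):
        if any(content_type in h for h in accept_headers):
            return content_type
    return None

print(resolve_accept_type('/v2/admin/vmhosts.json', []))              # json
print(resolve_accept_type('/v2/admin/vmhosts', ['application/xml']))  # xml
```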
def create_response(content_type, body):
""" Prepare a response object
with the set content type
:param content_type: content type to be specified in the header
:param body: response body
:returns: returns the prepared response object.
"""
resp = Response(content_type=content_type)
resp.body = body
return resp
def update_dict_using_xpath(input_dict, xpath_dict):
""" Update an input dict with values from xpath_dict
by traversing the xpath in the dict.
if dict cannot be traversed for a particular xpath key,
the xpath key is ignored.
:params input_dict: input dict to be traversed and updated.
:params xpath_dict: dict containing xpath as key, value is the value
to be replaced with for the traversed xpath in
the input dict.
"""
if not input_dict:
return None
if not xpath_dict:
return input_dict
for (k, d) in xpath_dict.items():
try:
loc = input_dict
if k.startswith('/'):
k = k[1::]
path_elements = []
for ele in get_path_elements(k):
path_elements.append(ele)
for i in range(len(path_elements) - 1):
loc = loc[path_elements[i]]
loc[path_elements[-1]]
loc[path_elements[-1]] = d
except (LookupError, ValueError), err:
            LOG.debug(_('XPath traversal error in input \
dictionary current key:%s ' % str(err)))
return input_dict
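A simplified Python 3 sketch of the traversal (plain `/key` paths only, without the `[n]` index handling): a path that does not resolve is skipped, exactly as the lookup errors are swallowed above.

```python
def update_by_xpath(input_dict, xpath_dict):
    """Walk each '/'-separated path into input_dict and overwrite the
    leaf value; unresolvable paths are ignored (simplified sketch)."""
    for path, value in xpath_dict.items():
        keys = path.lstrip('/').split('/')
        loc = input_dict
        try:
            for key in keys[:-1]:
                loc = loc[key]
            loc[keys[-1]]          # raises KeyError if the leaf is missing
            loc[keys[-1]] = value
        except (LookupError, TypeError):
            continue               # ignore unresolvable paths, as above
    return input_dict

doc = {'vmhost': {'id': 'host-1', 'name': 'old'}}
update_by_xpath(doc, {'/vmhost/name': 'new', '/vmhost/missing/x': 'skip'})
print(doc)
# {'vmhost': {'id': 'host-1', 'name': 'new'}}
```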
def get_entity_list_xml(
entity_dict,
nsmap,
root_element_tag,
sub_element_tag,
root_prefix='None',
):
""" Get entity list in xml format
:params: entity_dict with root key as entity name. The value is
an array of entity dictionaries which each containing entity attributes
as keys and a separate 'links' key/value pair. The value of which is an
array of dictionaries containing hyperlinks with relations to the
entity in each dictionary. An example entity_dict is shown below:
entity_dict = {
'vmhosts': [{
"id": 'host-1234',
"name": 'newhost',
"links": [
{
"rel": "self",
"href": 'http://localhost:8774/v2/admin/vmhosts'
},
{
"rel": "bookmark",
"href": 'http://localhost:8774/admin/vmhosts'
}
],
}],
"vmhosts_links": [
{
"rel": "next",
"href": 'http://localhost:8774/v2/admin/vmhosts&marker=4"
}
]}
:params nsmap: namespace map to be used for the generated xml.
:params root_element_tag: element tag of the root element.
:params sub_element_tag: element tag for each sub element. i.e for each
entity dictionary.
:params root_prefix: root prefix to be used for identifying the
namespace of the document from the nsmap.
:returns: list of entities in xml format using the entity dictionary.
:raises LookupError: If there is more than one root(key) element in the
entity_dict.
"""
if not entity_dict:
return ''
# TODO(siva): add check for entities_links
keys = entity_dict.keys()
root_key = ''
if len(keys) > 2:
raise LookupError('More than one root element in entity')
page_links = []
if len(keys) == 2:
if keys[0].endswith("_links"):
page_links = entity_dict[keys[0]]
root_key = keys[1]
elif keys[1].endswith("_links"):
root_key = keys[0]
page_links = entity_dict[keys[1]]
else:
raise LookupError('More than one root element in entity')
else:
root_key = entity_dict.keys()[0]
root_namespace = ''
if nsmap is not None and root_prefix in nsmap:
root_namespace = '{%s}' % nsmap[root_prefix]
root = Element(root_namespace + root_element_tag, nsmap=nsmap)
dict_list = entity_dict[root_key]
for ent in dict_list:
if not ent:
continue
link_list = []
if 'links' in ent:
link_list = ent['links']
del ent['links']
attrib = {}
for (key, val) in ent.items():
if key is not None:
if val is not None:
attrib[key] = val
else:
attrib[key] = ''
entity_sub = SubElement(root, root_namespace + sub_element_tag,
attrib)
for link in link_list:
SubElement(entity_sub, constants.ATOM + 'link', link)
for link in page_links:
SubElement(root, constants.ATOM + 'link', link)
return etree.tostringlist(root)[0]
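A minimal stdlib sketch of the builder, with namespaces, atom links, and the `*_links` pagination key omitted: each entity's scalar fields become attributes of a sub-element, as in the docstring example above.

```python
import xml.etree.ElementTree as ET

def entity_list_xml(entity_dict, root_tag, sub_tag):
    """Minimal sketch of get_entity_list_xml (single root key only;
    namespaces, atom links and pagination links omitted)."""
    (root_key,) = entity_dict.keys()
    root = ET.Element(root_tag)
    for ent in entity_dict[root_key]:
        # None values become empty attributes, mirroring the original
        attrib = {k: ('' if v is None else str(v))
                  for k, v in ent.items() if k != 'links'}
        ET.SubElement(root, sub_tag, attrib)
    return ET.tostring(root, encoding='unicode')

print(entity_list_xml({'vmhosts': [{'id': 'host-1', 'name': 'h1'}]},
                      'vmhosts', 'vmhost'))
```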
class TagDictionaryError(Exception):
""" Error thrown when an invalid tag dictionary
is provided.
"""
def __init__(self, msg, tag_dict=None):
Exception.__init__(self)
self.msg = msg
self.tag_dict = tag_dict
def __str__(self):
return self.msg + str(self.tag_dict)
def get_next_xml(attrib):
''' Get atom link with given attributes dict '''
return etree.tostring(Element(constants.ATOM + 'link', attrib=attrib))
def set_select_attributes(resource_obj, attr_dict):
''' Set select attributes on the object
:param resource_obj: object on which attributes are to be set
:param attr_dict: attribute key value pairs to be set on the object
:returns: resource object with attribute values set
'''
if not attr_dict:
return resource_obj
for (key, val) in attr_dict.items():
setattr(resource_obj, key, val)
return resource_obj
def serialize_simple_obj(py_obj, root_tag, var_names):
"""
serializes simple object to xml
:param py_obj: simple python object
:param root_tag: root tag to be used
:param var_names: names of member attributes to be retrieved from
the object
:returns: xml with child member elements value pairs from var_names
"""
getstr = lambda obj: ('' if not obj else str(obj))
root = etree.Element(root_tag)
for var in var_names:
child = etree.SubElement(root, var)
try:
value = getattr(py_obj, var)
except AttributeError:
continue
else:
child.text = getstr(value)
return etree.tostring(root)
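The same serializer sketched with stdlib ElementTree instead of lxml; note that, like the original, a missing attribute still leaves an empty child element behind because the child is created before `getattr` is attempted.

```python
import xml.etree.ElementTree as ET
from types import SimpleNamespace

def serialize_simple(obj, root_tag, var_names):
    """stdlib sketch of serialize_simple_obj above."""
    root = ET.Element(root_tag)
    for var in var_names:
        child = ET.SubElement(root, var)   # created even if attr is missing
        try:
            value = getattr(obj, var)
        except AttributeError:
            continue
        child.text = '' if value is None else str(value)
    return ET.tostring(root, encoding='unicode')

host = SimpleNamespace(id='host-1', name='h1')
print(serialize_simple(host, 'vmhost', ['id', 'name', 'missing']))
```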
def append_xml_as_child(xml_str, child_xml):
'''
Append xml as child to a parent xml string
:param xml_str: parent xml string
:param child_xml: child xml string
:returns: parent xml appended with child xml.
'''
root = etree.fromstring(xml_str,
parser=etree.XMLParser(remove_blank_text=True))
child = etree.fromstring(child_xml,
parser=etree.XMLParser(remove_blank_text=True))
root.append(child)
return etree.tostring(root)
def get_query_fields(req):
''' Get list of query fields from the webob request '''
if constants.QUERY_FIELD_KEY in req.GET:
return sum(map(lambda x: ([] if not x else str(x).split(',')),
req.GET.getall(constants.QUERY_FIELD_KEY)), [])
else:
return None
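A request like `?fields=id,name&fields=utilization` collapses to one flat list; a stdlib sketch of that parsing, without webob:

```python
def parse_query_fields(values):
    """Sketch of the 'fields' query parsing above: each occurrence of
    the parameter may hold a comma-separated list; all are concatenated."""
    fields = []
    for v in values:
        if v:
            fields.extend(str(v).split(','))
    return fields

print(parse_query_fields(['id,name', 'utilization']))
# ['id', 'name', 'utilization']
```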
def get_select_elements_xml(input_xml, field_list, default_field=None):
''' Get select element xml from input xml. Invalid field names are
ignored. If a default field is specified and if it is not in the
field_list it will be added on top of selected elements.
:param input_xml: input xml
:param field_list: element names as a list
:returns: select elements as separate xml
'''
root = etree.fromstring(input_xml)
root_namespace = ''
if not root.prefix and root.nsmap:
root_namespace = root.nsmap[None]
elif root.nsmap and root.prefix is not None:
root_namespace = root.nsmap[root.prefix]
root_ns = '{%s}' % root_namespace
display_root = etree.Element(root.tag, nsmap=root.nsmap)
for field in field_list:
try:
for ele in root.findall(root_ns + field):
display_root.append(ele)
        except (SyntaxError, ValueError):
            # Ignore if the field name is invalid.
            pass
if len(display_root) > 0 and default_field and \
default_field not in field_list:
try:
for i, ele in enumerate(root.findall(root_ns + default_field)):
display_root.insert(i, ele)
        except (SyntaxError, ValueError):
            # Ignore if the field name is invalid.
            pass
return etree.tostring(display_root)
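A stdlib sketch of the field selection (namespace handling and the invalid-field guards omitted): selected elements are copied under a new root, and `default_field` is prepended when it was not explicitly requested.

```python
import xml.etree.ElementTree as ET

def select_fields_xml(input_xml, field_list, default_field=None):
    """Sketch of get_select_elements_xml above, without namespaces."""
    root = ET.fromstring(input_xml)
    out = ET.Element(root.tag)
    for field in field_list:
        for ele in root.findall(field):
            out.append(ele)
    if len(out) and default_field and default_field not in field_list:
        # prepend the default field (e.g. 'id') ahead of the selection
        for i, ele in enumerate(root.findall(default_field)):
            out.insert(i, ele)
    return ET.tostring(out, encoding='unicode')

xml = '<vmhost><id>host-1</id><name>h1</name><os>linux</os></vmhost>'
print(select_fields_xml(xml, ['name'], default_field='id'))
```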


@ -1,113 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from webob.exc import HTTPNotFound
from .. import log as logging
from .. import healthnmon_api as api
from ..api import util
from ..api import base
from ..api import constants
LOG = logging.getLogger(__name__)
class VirtualSwitchController(base.Controller):
""" Controller class for Virtual Switch resource extension """
def __init__(self):
''' Initialize controller with resource specific param values '''
base.Controller.__init__(self,
constants.VIRTUAL_SWITCH_COLLECTION_NAME,
'virtualswitch',
'VirtualSwitch')
# NOTE(siva): virtual switch type not included in simple list output.
def index(self, req):
""" List all virtual switches as a simple list
:param req: webob request
:returns: simple list of virtual switches with resource links to
each virtual switch.
"""
virtualswitch_list = self.get_all_by_filters(
req,
api.virtual_switch_get_all_by_filters)
if not virtualswitch_list:
virtualswitch_list = []
limited_list, collection_links = self.limited_by_marker(
virtualswitch_list,
req)
return self._index(req, limited_list, collection_links)
def detail(self, req):
"""
List all virtual switches as a detailed list with appropriate
resource links
:param req: webob request
:returns: webob response for detail list operation.
"""
virtualswitch_list = self.get_all_by_filters(
req,
api.virtual_switch_get_all_by_filters)
if not virtualswitch_list:
virtualswitch_list = []
limited_list, collection_links = self.limited_by_marker(
virtualswitch_list,
req)
return self._detail(req, limited_list, collection_links)
def _get_resource_tag_dict_list(self, application_url, proj_id):
""" Get the list of tag dictionaries applicable to the resource
:param application_url: application url from request
:param proj_id: project id
:returns: list of tag dictionaries for the resource
"""
return [{
'tag': 'subnetIds',
'tag_replacement': 'subnet',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id, constants.SUBNET_COLLECTION_NAME),
'tag_attrib': None,
}]
def show(self, req, id):
""" Display details for particular virtual_switch
identified by resource id.
:param req: webob request
:param id: unique id to identify virtual_switch resource.
:returns: complete virtual_switch resource details for the
specified id and request.
"""
try:
LOG.debug(_('Show virtual_switch id : %s' % str(id)))
(ctx, proj_id) = util.get_project_context(req)
virtual_switch_list = api.virtual_switch_get_by_ids(ctx,
[id])
LOG.debug(_('Project id: %s Received virtual \
switches from database' % proj_id))
if virtual_switch_list:
return self._show(req, virtual_switch_list[0])
except Exception, err:
LOG.error(_('Exception while fetching data from healthnmon api %s'
% str(err)), exc_info=1)
return HTTPNotFound()


@ -1,170 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from webob.exc import HTTPNotFound
from .. import log as logging
from .. import healthnmon_api as api
from ..api import util
from ..api import constants
from ..api import base
from ..resourcemodel import healthnmonResourceModel
LOG = logging.getLogger(__name__)
class VMController(base.Controller):
""" Controller class for Vm resource extension """
def __init__(self):
''' Initialize controller with resource specific param values '''
base.Controller.__init__(self,
constants.VM_COLLECTION_NAME,
'vm',
'Vm')
def index(self, req):
""" List all virtual machine as a simple list
:param req: webob request
:returns: simple list of virtual machine with appropriate
resource links.
"""
server_list = self.get_all_by_filters(req, api.vm_get_all_by_filters)
if not server_list:
server_list = []
limited_list, collection_links = self.limited_by_marker(server_list,
req)
return self._index(req, limited_list, collection_links)
def detail(self, req):
"""
List all virtual machines as a detailed list with appropriate
resource links
:param req: webob request
:returns: webob response for detail list operation.
"""
server_list = self.get_all_by_filters(req, api.vm_get_all_by_filters)
if not server_list:
server_list = []
limited_list, collection_links = self.limited_by_marker(server_list,
req)
return self._detail(req, limited_list, collection_links)
def _get_resource_xml_with_links(self, req, vm):
""" Get resource as xml updated with
reference links to other resources.
:param req: request object
:param vm: vm object as per resource model
:returns: (vm_xml, out_dict) tuple where,
vm_xml is the updated xml and
out_dict is a dictionary with keys as
the xpath of replaced entities and
value is the corresponding entity dict.
"""
(ctx, proj_id) = util.get_project_context(req)
vm_xml = util.dump_resource_xml(vm, self._model_name)
out_dict = {}
vm_xml_update = util.replace_with_links(
vm_xml,
self._get_resource_tag_dict_list(req.application_url, proj_id),
out_dict)
field_list = util.get_query_fields(req)
if field_list is not None:
if 'utilization' in field_list:
vm_xml_update = self._add_perf_data(vm.get_id(),
vm_xml_update, ctx)
vm_xml_update = \
util.get_select_elements_xml(vm_xml_update,
field_list, 'id')
elif len(req.GET.getall('utilization')) > 0:
vm_xml_update = self._add_perf_data(vm.get_id(),
vm_xml_update, ctx)
return (vm_xml_update, out_dict)
def _get_resource_tag_dict_list(self, application_url, proj_id):
""" Get the list of tag dictionaries applicable to virtual machine
:param application_url: application url from request
:param proj_id: project id
:returns: list of tag dictionaries for virtual machine
"""
return [{
'tag': 'storageVolumeId',
'tag_replacement': 'storagevolume',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id,
constants.STORAGEVOLUME_COLLECTION_NAME),
'tag_attrib': None,
}, {
'tag': 'vmHostId',
'tag_replacement': 'vmhost',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id,
constants.VMHOSTS_COLLECTION_NAME),
'tag_attrib': None,
}]
def show(self, req, id):
""" Display details for particular virtual machine
identified by resource id.
:param req: webob request
:param id: unique id to identify virtual machine.
:returns: complete resource details for the specified id and
request.
"""
try:
LOG.debug(_('Show vm id : %s' % str(id)))
(ctx, proj_id) = util.get_project_context(req)
vm_list = api.vm_get_by_ids(ctx, [id])
LOG.debug(_('Project id: %s Received vmhosts from the database'
% proj_id))
if vm_list:
return self._show(req, vm_list[0])
except Exception, err:
LOG.error(_('Exception while fetching data from healthnmon api %s'
% str(err)), exc_info=1)
return HTTPNotFound()
def _add_perf_data(
self,
vm_id,
input_xml,
ctx,
):
''' Append virtual machine resource utilization data
:param vm_id: virtual machine id
:param input_xml: virtual machine detail xml
:param ctx: request context
:returns: virtual machine detail xml appended with
resource utilization
'''
perf_data = api.get_vm_utilization(ctx, vm_id)
attr_dict = perf_data['ResourceUtilization']
resource_obj = healthnmonResourceModel.ResourceUtilization()
util.set_select_attributes(resource_obj, attr_dict)
utilization_xml = util.dump_resource_xml(resource_obj,
'utilization')
LOG.debug(_('Utilization xml: %s' % utilization_xml))
return util.append_xml_as_child(input_xml, utilization_xml)


@ -1,194 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ..api import util
from ..api import constants
from ..api import base
from ..resourcemodel import healthnmonResourceModel
from .. import healthnmon_api as api
from .. import log as logging
from webob.exc import HTTPNotFound
import os
LOG = logging.getLogger(__name__)
class VmHostsController(base.Controller):
""" Controller class for VmHosts resource extension """
def __init__(self):
''' Initialize controller with resource specific param values '''
base.Controller.__init__(self,
constants.VMHOSTS_COLLECTION_NAME,
'vmhost',
'VmHost')
def index(self, req):
""" List all vmhosts as a simple list
:param req: webob request
:returns: response for simple list of vmhosts with resource links
"""
vmhosts = self.get_all_by_filters(req, api.vm_host_get_all_by_filters)
if not vmhosts:
vmhosts = []
limited_list, collection_links = self.limited_by_marker(vmhosts,
req)
return self._index(req, limited_list, collection_links)
def detail(self, req):
"""
List all vmhosts as a detailed list with appropriate
resource links
:param req: webob request
:returns: webob response for detail list operation.
"""
vmhosts = self.get_all_by_filters(req, api.vm_host_get_all_by_filters)
if not vmhosts:
vmhosts = []
limited_list, collection_links = self.limited_by_marker(vmhosts, req)
return self._detail(req, limited_list, collection_links)
def _get_resource_xml_with_links(self, req, host):
""" Get host resource as xml updated with
reference links to other resources.
:param req: request object
:param host: host object as per resource model
:returns: (host_xml, out_dict) tuple where,
host_xml is the updated xml and
out_dict is a dictionary with keys as
the xpath of replaced entities and
value is the corresponding entity dict.
"""
(ctx, proj_id) = util.get_project_context(req)
host_xml = util.dump_resource_xml(host, self._model_name)
out_dict = {}
host_xml_update = util.replace_with_links(
host_xml,
self._get_resource_tag_dict_list(req.application_url, proj_id),
out_dict)
field_list = util.get_query_fields(req)
if field_list is not None:
if 'utilization' in field_list:
host_xml_update = self._add_perf_data(host.get_id(),
host_xml_update, ctx)
host_xml_update = \
util.get_select_elements_xml(host_xml_update,
field_list, 'id')
elif len(req.GET.getall('utilization')) > 0:
host_xml_update = self._add_perf_data(host.get_id(),
host_xml_update, ctx)
return (host_xml_update, out_dict)
def _get_resource_tag_dict_list(self, application_url, proj_id):
""" Get the list of tag dictionaries applicable to vmhost
resource
:param application_url: application url from request
:param proj_id: project id
:returns: list of tag dictionaries for vmhosts
"""
return [{
'tag': 'virtualSwitches',
'tag_replacement': 'virtualswitch',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id,
constants.VIRTUAL_SWITCH_COLLECTION_NAME),
'tag_attrib': ['name', 'switchType'],
}, {
'tag': 'storageVolumeIds',
'tag_replacement': 'storagevolume',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id,
constants.STORAGEVOLUME_COLLECTION_NAME),
'tag_attrib': None,
}, {
'tag': 'virtualMachineIds',
'tag_replacement': 'virtualmachine',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id, constants.VM_COLLECTION_NAME),
'tag_attrib': None,
}, {
'tag': 'virtualSwitchId',
'tag_replacement': 'virtualswitch',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id,
constants.VIRTUAL_SWITCH_COLLECTION_NAME),
'tag_attrib': None,
}, {
'tag': 'subnetIds',
'tag_replacement': 'subnet',
'tag_key': 'id',
'tag_collection_url': os.path.join(
application_url,
proj_id, constants.SUBNET_COLLECTION_NAME),
'tag_attrib': None,
}]
def show(self, req, id):
""" Display details for particular vmhost
identified by resource id.
:param req: webob request
:param id: unique id to identify vmhost resource.
:returns: complete vmhost resource details for the specified id and
request.
"""
try:
LOG.debug(_('Show vmhost id : %s' % str(id)))
(ctx, proj_id) = util.get_project_context(req)
host_list = api.vm_host_get_by_ids(ctx, [id])
LOG.debug(_('Project id: %s Received vmhosts from the database'
% proj_id))
if host_list:
return self._show(req, host_list[0])
except Exception, err:
LOG.error(_('Exception while fetching data from healthnmon api %s'
% str(err)), exc_info=1)
return HTTPNotFound()
def _add_perf_data(
self,
vmhost_id,
input_xml,
ctx,
):
''' Append vmhost resource utilization data
:param vmhost_id: vmhost id
:param input_xml: vmhost detail xml
:param ctx: request context
:returns: vmhost detail xml appended with
resource utilization
'''
perf_data = api.get_vmhost_utilization(ctx, vmhost_id)
attr_dict = perf_data['ResourceUtilization']
resource_obj = healthnmonResourceModel.ResourceUtilization()
util.set_select_attributes(resource_obj, attr_dict)
utilization_xml = util.dump_resource_xml(resource_obj,
'utilization')
LOG.debug(_('Utilization xml: %s' % utilization_xml))
return util.append_xml_as_child(input_xml, utilization_xml)


@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,56 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Manage communication with compute nodes and
collect inventory and monitoring info
"""
from healthnmon import log as logging
from healthnmon.collector.utilization_cache_manager import \
UtilizationCacheManager
LOG = logging.getLogger(__name__)
class CollectorManager(object):
def __init__(self, host=None):
self.host_name = host
def get_resource_utilization(
self,
context,
uuid,
perfmon_type,
window_minutes,
):
""" Returns performance data of VMHost and VM via
hypervisor connection driver """
return UtilizationCacheManager.get_utilization_from_cache(
uuid,
perfmon_type
)
def update_resource_utilization(
self,
context,
uuid,
perfmon_type,
utilization,
):
""" Updates sampled performance data to collector cache """
UtilizationCacheManager.update_utilization_in_cache(
uuid, perfmon_type, utilization)
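`UtilizationCacheManager` itself is not part of this file; a hypothetical stand-in illustrating the cache contract the manager relies on, keyed by `(uuid, perfmon_type)` (names and structure are assumptions for illustration):

```python
class UtilizationCache:
    """Hypothetical stand-in for UtilizationCacheManager: an in-memory
    map keyed by (uuid, perfmon_type)."""
    _cache = {}

    @classmethod
    def update(cls, uuid, perfmon_type, utilization):
        # overwrite the last sampled utilization for this resource
        cls._cache[(uuid, perfmon_type)] = utilization

    @classmethod
    def get(cls, uuid, perfmon_type):
        # return None when no sample has been recorded yet
        return cls._cache.get((uuid, perfmon_type))

UtilizationCache.update('vm-1', 'cpu', {'avg': 12.5})
print(UtilizationCache.get('vm-1', 'cpu'))
# {'avg': 12.5}
```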


@ -1,74 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
healthnmon Service default driver - Manages communication with
compute nodes and collects inventory and monitoring info
"""
from nova.openstack.common import cfg
from nova.openstack.common import importutils
from healthnmon import log as logging
LOG = logging.getLogger(__name__)
driver_opts = [
cfg.StrOpt('healthnmon_collector_impl',
default=
'healthnmon.collector.collector_manager.CollectorManager',
help='The healthnmon inventory manager class to use'),
]
CONF = cfg.CONF
CONF.register_opts(driver_opts)
class Healthnmon(object):
"""The base class that all healthnmon
driver classes should inherit from.
"""
def __init__(self, host=None):
self.host_name = host
self.collector_manager = \
importutils.import_object(
CONF.healthnmon_collector_impl, host=self.host_name)
def get_resource_utilization(
self,
context,
uuid,
perfmon_type,
window_minutes,
):
""" Return performance data for requested host/vm
for last windowMinutes."""
return self.collector_manager.get_resource_utilization(
context,
uuid, perfmon_type, window_minutes)
def update_resource_utilization(
self,
context,
uuid,
perfmon_type,
utilization,
):
""" Updates sampled performance data to collector cache """
return self.collector_manager.update_resource_utilization(
context,
uuid, perfmon_type, utilization)

View File

@ -1,145 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
healthnmon Service - Manages communication with compute nodes and
collects inventory and monitoring info
"""
from nova import manager, utils
from nova.openstack.common import importutils
from nova.openstack.common import cfg
from healthnmon.constants import Constants
from healthnmon.collector import driver
from healthnmon import log as logging
from nova import exception
import sys
LOG = logging.getLogger(__name__)
manager_opts = [
cfg.StrOpt('healthnmon_driver',
default='healthnmon.collector.driver.Healthnmon',
help='Default driver to use for the healthnmon service')
]
CONF = cfg.CONF
def register_flags():
try:
CONF.healthnmon_driver
except cfg.NoSuchOptError:
CONF.register_opts(manager_opts)
register_flags()
class HealthnMonCollectorManager(manager.Manager):
"""Manage communication with compute nodes, collects inventory
and monitoring info."""
def __init__(
self,
host=None,
healthnmon_driver=None,
*args,
**kwargs
):
self.host_name = host
if not healthnmon_driver:
healthnmon_driver = CONF.healthnmon_driver
LOG.info(
"Initializing healthnmon. Loading driver %s" % healthnmon_driver)
try:
self.driver = \
utils.check_isinstance(
importutils.import_object(
healthnmon_driver, host=self.host_name),
driver.Healthnmon)
except ImportError, e:
LOG.error(_('Unable to load the healthnmon driver: %s') % e)
sys.exit(1)
except exception.ClassNotFound, e:
LOG.error(_('Unable to load the healthnmon driver: %s') % e)
sys.exit(1)
super(HealthnMonCollectorManager, self).__init__(*args, **kwargs)
def get_vmhost_utilization(
self,
context,
uuid,
windowMinutes=5,
):
""" Gets sampled performance data of requested VmHost """
LOG.info(_('Received the message for VM Host ' +
'Utilization for uuid : %s') % uuid)
resource_utilization = \
self.driver.get_resource_utilization(context,
uuid,
Constants.VmHost,
windowMinutes)
LOG.debug(_('VM Host Resource Utilization: %s')
% resource_utilization)
return dict(ResourceUtilization=resource_utilization)
def get_vm_utilization(
self,
context,
uuid,
windowMinutes=5,
):
""" Gets sampled performance data of requested Vm """
LOG.info(_('Received the message for VM Utilization for uuid : %s'
) % uuid)
resource_utilization = \
self.driver.get_resource_utilization(context, uuid,
Constants.Vm, windowMinutes)
LOG.debug(_('VM Resource Utilization : %s')
% resource_utilization)
return dict(ResourceUtilization=resource_utilization)
def update_vmhost_utilization(
self,
context,
uuid,
utilization,
):
""" Updates sampled performance data of VmHost to collector cache """
LOG.info(_('Received the message for VM Host ' +
'Utilization update for uuid : %s') % uuid)
self.driver.update_resource_utilization(context,
uuid,
Constants.VmHost,
utilization)
def update_vm_utilization(
self,
context,
uuid,
utilization,
):
""" Updates sampled performance data of Vm to collector cache """
LOG.info(_('Received the message for VM ' +
'Utilization update for uuid : %s'), uuid)
self.driver.update_resource_utilization(
context, uuid, Constants.Vm, utilization)
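The manager above loads its driver from a dotted config path and type-checks it against the driver base class before use. A minimal standalone sketch of that load-and-verify pattern, using only stdlib `importlib` (the function and class names here are hypothetical, not healthnmon APIs):

```python
import importlib


def load_driver(dotted_path, base_cls):
    # Split "pkg.module.ClassName" into module path and class name,
    # import the module, instantiate the class, and verify the result
    # is an instance of the expected base class before returning it.
    module_path, _, class_name = dotted_path.rpartition('.')
    module = importlib.import_module(module_path)
    driver = getattr(module, class_name)()
    if not isinstance(driver, base_cls):
        raise TypeError('%s is not a %s' % (dotted_path, base_cls.__name__))
    return driver
```

The isinstance check serves the same purpose as `utils.check_isinstance` above: a misconfigured driver fails fast at startup instead of at first use.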

View File

@ -1,82 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
from nova.service import Service
from nova import version
from nova import utils
from nova import context
from nova import exception
from nova.openstack.common import rpc
from nova import db
from healthnmon import log
LOG = log.getLogger(__name__)
class HealthnmonCollectorService(Service):
def start(self):
vcs_string = version.version_string_with_package()
LOG.audit(_('Starting %(topic)s node (version %(vcs_string)s)'),
{'topic': self.topic, 'vcs_string': vcs_string})
self.manager.init_host()
self.model_disconnected = False
ctxt = context.get_admin_context()
try:
self.service_ref = self.conductor_api.service_get_by_args(
ctxt, self.host, self.binary)
self.service_id = self.service_ref['id']
except exception.NotFound:
self.service_ref = self._create_service_ref(ctxt)
if self.backdoor_port is not None:
self.manager.backdoor_port = self.backdoor_port
self.conn = rpc.create_connection(new=True)
LOG.debug(_("Creating Consumer connection for Service %s") %
self.topic)
self.manager.pre_start_hook(rpc_connection=self.conn)
rpc_dispatcher = self.manager.create_rpc_dispatcher()
# Share this same connection for these Consumers
self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=False)
node_topic = '%s.%s' % (self.topic, self.host)
self.conn.create_consumer(node_topic, rpc_dispatcher, fanout=False)
self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=True)
# Consume from all consumers in a thread
self.conn.consume_in_thread()
self.manager.post_start_hook()
pulse = self.servicegroup_api.join(self.host, self.topic, self)
if pulse:
self.timers.append(pulse)
if self.periodic_enable:
if self.periodic_fuzzy_delay:
initial_delay = random.randint(0, self.periodic_fuzzy_delay)
else:
initial_delay = None
periodic = utils.DynamicLoopingCall(self.periodic_tasks)
periodic.start(initial_delay=initial_delay,
periodic_interval_max=self.periodic_interval_max)
self.timers.append(periodic)
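The fuzzy-delay branch above staggers the first run of the periodic task so that many service nodes restarted together do not all fire at once. A small sketch of that choice (hypothetical helper name):

```python
import random


def pick_initial_delay(periodic_fuzzy_delay):
    # A random delay in [0, fuzz] spreads periodic tasks across nodes;
    # None means "start the loop immediately", matching the code above.
    if periodic_fuzzy_delay:
        return random.randint(0, periodic_fuzzy_delay)
    return None
```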

View File

@ -1,59 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from healthnmon.constants import Constants
from healthnmon import log
LOG = log.getLogger(__name__)
class UtilizationCacheManager(object):
global _utilizationCache
_utilizationCache = {
Constants.VmHost: {},
Constants.Vm: {},
}
@staticmethod
def get_utilization_cache():
return _utilizationCache
@staticmethod
def get_utilization_from_cache(uuid, obj_type):
LOG.debug(
_('Entering into get_utilization_from_cache ' +
'for uuid:obj_type %s:%s'), uuid, obj_type)
if uuid in UtilizationCacheManager.get_utilization_cache()[obj_type]:
return UtilizationCacheManager.\
get_utilization_cache()[obj_type][uuid]
@staticmethod
def update_utilization_in_cache(uuid, obj_type, utilization):
LOG.debug(
_('Entering into update_utilization_in_cache ' +
'for uuid:obj_type %s:%s'), uuid, obj_type)
UtilizationCacheManager.\
get_utilization_cache()[obj_type][uuid] = utilization
@staticmethod
def delete_utilization_in_cache(uuid, obj_type):
LOG.debug(
_('Entering into delete_utilization_in_cache ' +
'for uuid:obj_type %s:%s'), uuid, obj_type)
if uuid in UtilizationCacheManager.get_utilization_cache()[obj_type]:
del UtilizationCacheManager.\
get_utilization_cache()[obj_type][uuid]
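The cache above is a module-level nested dict keyed first by object type (`VmHost`/`Vm`), then by uuid. A self-contained sketch of the same two-level pattern, with no healthnmon imports (class and method names are illustrative only):

```python
class UtilizationCache(object):
    """Sketch of the two-level cache above: obj_type -> uuid -> data."""

    def __init__(self):
        self._cache = {'VmHost': {}, 'Vm': {}}

    def get(self, uuid, obj_type):
        # Returns None for a uuid that was never sampled, mirroring the
        # implicit None return of get_utilization_from_cache above.
        return self._cache[obj_type].get(uuid)

    def update(self, uuid, obj_type, utilization):
        self._cache[obj_type][uuid] = utilization

    def delete(self, uuid, obj_type):
        self._cache[obj_type].pop(uuid, None)
```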

View File

@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

View File

@ -1,312 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import os
import pwd
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
import paramiko
import subprocess
import time
import socket
import libvirt
from healthnmon import log
from nova import crypto
LOG = log.getLogger('healthnmon.common.sshConfiguration')
"""
Finds the nova home path
"""
def get_nova_home():
"""
Retrieve the home path of nova. By default it is /var/lib/nova
"""
nova_home = pwd.getpwnam('nova').pw_dir
LOG.debug(_('Nova Home Directory: ' + nova_home))
return nova_home
"""
Finds the current user
"""
def whoami():
return pwd.getpwuid(os.getuid())[0]
'''
Change the effective user id.
This works when running as root and you want to become somebody else
'''
def change_user(name):
uid = pwd.getpwnam(name).pw_uid
gid = pwd.getpwnam(name).pw_gid
os.setgid(gid)
os.setuid(uid)
'''
Validate the appliance and set all the parameters if required
'''
def is_valid_appliance():
nova_home = get_nova_home()
' Change to user nova '
LOG.debug(_('Executing as ' + whoami()))
if whoami() == 'root':
change_user('nova')
if whoami() != 'nova':
LOG.debug(_('nova user does not exist'))
return False
if os.path.exists(nova_home) is False:
'''
nova home does not exist. Exiting now
#todo : sys.exit()
'''
return False
# Check if the id_rsa and id_rsa.pub already exists.
# If they exist don't generate new keys"
if not os.path.exists(os.path.join(nova_home + '/.ssh/')):
os.makedirs(os.path.join(nova_home + '/.ssh/'), 01700)
if os.path.isfile(
os.path.join(nova_home + '/.ssh/id_rsa.pub')) is False and \
os.path.isfile(os.path.join(nova_home + '/.ssh/id_rsa')) is False:
' Generate id_rsa and id_rsa.pub files. '
' This will be stored in $NOVAHOME/.ssh/ '
' use nova'
private_key, public_key, _fingerprint = crypto.generate_key_pair()
pub_file = open(os.path.join(nova_home + '/.ssh/id_rsa.pub'), "w+")
pub_file.writelines(public_key)
pub_file.close()
private_file = open(os.path.join(nova_home + '/.ssh/id_rsa'), "w+")
private_file.writelines(private_key)
private_file.close()
os.chmod(os.path.join(nova_home + '/.ssh/id_rsa.pub'), 0700)
os.chmod(os.path.join(nova_home + '/.ssh/id_rsa'), 0700)
# os.popen('ssh-keygen -t rsa', 'w').write(''' ''')
LOG.debug(_('created new id_rsa and id_rsa.pub'))
else:
LOG.debug(_('id_rsa and id_rsa.pub exists'))
' create known_hosts file if it does not exist'
if os.path.isfile(os.path.join(nova_home + '/.ssh/known_hosts')) is False:
filename = os.path.join(nova_home + '/.ssh/known_hosts')
handle = open(filename, 'w')
handle.close()
os.chmod(os.path.join(nova_home + '/.ssh/known_hosts'), 0700)
return True
def configure_host(hostname, user, password):
sshConn = Client(hostname, user, password)
' Validate and configure appliance'
if is_valid_appliance() is True:
' Test Connection with host '
if sshConn.test_connection_auth() is False:
print 'Cannot connect to host'
return
' nova home path '
nova_home = get_nova_home()
sftp = sshConn.get_ftp_connection()
try:
sftp.stat('.ssh/authorized_keys2')
except:
try:
sftp.stat('.ssh/')
except IOError:
LOG.debug(_('.ssh folder does not exist on Host. ' +
'Creating .ssh folder'))
try:
sftp.mkdir('.ssh/')
' folder created. Now change permissions '
sshConn.exec_command('chmod 700 .ssh')
except IOError:
pass
' Now check if authorized_keys2 files exists. '
' If not create it and change file permissions '
try:
sftp.stat('.ssh/authorized_keys2')
except IOError:
LOG.debug(_('authorized_keys2 file does not exist on Host. ' +
'Creating authorized_keys2 file'))
try:
sftp.file('.ssh/authorized_keys2', 'x')
' file created. Now change permissions '
sshConn.exec_command('chmod 700 .ssh/authorized_keys2')
except IOError:
pass
' Create a temp directory '
try:
LOG.debug(_('Creating tempPubKey directory'))
sftp.mkdir('tempPubKey')
except IOError:
'delete this folder and recreate it'
sshConn.exec_command('rm -rf tempPubKey')
sftp.mkdir('tempPubKey')
' Transfer the id_rsa.pub to kvm host '
sftp.put(os.path.join(
nova_home + '/.ssh/id_rsa.pub'), 'tempPubKey/id_rsa.pub')
' Append tempPubKey/id_rsa.pub to .ssh/authorized_keys2 '
sshConn.exec_command(
'cat tempPubKey/id_rsa.pub >> .ssh/authorized_keys2')
sftp.close()
' delete the temp directory '
sshConn.exec_command('rm -rf tempPubKey')
' Verify the libvirt Connection'
verify_libvirt_connection(user, hostname)
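The key-distribution step above blindly appends `id_rsa.pub` to `authorized_keys2` via `cat >>`, so reconfiguring the same host duplicates the key. A pure-function sketch of an idempotent variant (hypothetical helper name; operates on file content as a string):

```python
def append_key_if_missing(authorized_keys, pub_key):
    # `authorized_keys` is the current file content. Returns new content
    # with the public key present exactly once: a key already in the file
    # is not appended again.
    lines = authorized_keys.splitlines()
    if pub_key.strip() in (line.strip() for line in lines):
        return authorized_keys
    if authorized_keys and not authorized_keys.endswith('\n'):
        authorized_keys += '\n'
    return authorized_keys + pub_key.strip() + '\n'
```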
'''
Verify Libvirt connection with the kvm Host
'''
def verify_libvirt_connection(user, hostname):
try:
conn = libvirt.open(
'qemu+ssh://' + str(user) + '@' + str(hostname) + '/system')
if isinstance(conn, libvirt.virConnect):
print 'SSH successfully configured'
except libvirt.libvirtError:
print 'Error connecting to remote libvirt'
'''
This class implements an ssh client connection to the host
'''
class Client(object):
def __init__(self, host, username, password, timeout=10):
self.host = host
self.username = username
self.password = password
self.timeout = int(timeout)
def _get_ssh_connection(self):
"""Returns an ssh connection to the specified host"""
_timeout = True
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
nova_home = get_nova_home()
ssh.load_host_keys(os.path.join(nova_home + '/.ssh/known_hosts'))
_start_time = time.time()
while not self._is_timed_out(self.timeout, _start_time):
try:
ssh.connect(self.host, username=self.username,
password=self.password,
look_for_keys=False, timeout=20)
_timeout = False
break
except socket.error:
continue
except paramiko.AuthenticationException:
time.sleep(15)
continue
if _timeout:
print ('SSH connection timed out. Cannot Connect to ' +
str(self.username) + '@' + str(self.host))
sys.exit(0)
return ssh
def get_ftp_connection(self):
transport = paramiko.Transport(self.host)
transport.connect(username=self.username, password=self.password)
sftp = paramiko.SFTPClient.from_transport(transport)
return sftp
def _is_timed_out(self, timeout, start_time):
return time.time() - timeout > start_time
def connect_until_closed(self):
"""Connect to the server and wait until connection is lost"""
try:
ssh = self._get_ssh_connection()
_transport = ssh.get_transport()
_start_time = time.time()
_timed_out = self._is_timed_out(self.timeout, _start_time)
while _transport.is_active() and not _timed_out:
time.sleep(5)
_timed_out = self._is_timed_out(self.timeout,
_start_time)
ssh.close()
except (EOFError, paramiko.AuthenticationException, socket.error):
return
def exec_command(self, cmd):
"""Execute the specified command on the server.
:returns: data read from standard output of the command
"""
ssh = self._get_ssh_connection()
(_stdin, stdout, _stderr) = ssh.exec_command(cmd)
output = stdout.read()
ssh.close()
return output
def test_connection_auth(self):
""" Returns true if ssh can connect to server"""
try:
connection = self._get_ssh_connection()
connection.close()
except paramiko.AuthenticationException:
return False
return True
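The connect loops above all rely on the `_is_timed_out` elapsed-time test. A standalone sketch of that predicate with an injectable clock so it can be exercised without sleeping (the `now` parameter is an addition for testability, not part of the original):

```python
import time


def is_timed_out(timeout, start_time, now=None):
    # True once more than `timeout` seconds have elapsed since
    # `start_time`. `now` defaults to the real clock.
    if now is None:
        now = time.time()
    return now - timeout > start_time
```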

View File

@ -1,88 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Defines constants for healthnmon module """
class Constants(object):
VmHost = 'VmHost'
Vm = 'Vm'
StorageVolume = 'StorageVolume'
VirtualSwitch = 'VirtualSwitch'
PortGroup = 'PortGroup'
Network = 'Network'
OLD_STATS = 'old_stats'
NEW_STATS = 'new_stats'
# Vm_Power_States
VM_POWER_STATE_ACTIVE = 'ACTIVE'
VM_POWER_STATE_BUILDING = 'BUILDING'
VM_POWER_STATE_REBUILDING = 'REBUILDING'
VM_POWER_STATE_PAUSED = 'PAUSED'
VM_POWER_STATE_SUSPENDED = 'SUSPENDED'
VM_POWER_STATE_SHUTDOWN = 'SHUTDOWN'
VM_POWER_STATE_RESCUED = 'RESCUED'
VM_POWER_STATE_DELETED = 'DELETED'
VM_POWER_STATE_STOPPED = 'STOPPED'
VM_POWER_STATE_SOFT_DELETE = 'SOFT_DELETE'
VM_POWER_STATE_MIGRATING = 'MIGRATING'
VM_POWER_STATE_RESIZING = 'RESIZING'
VM_POWER_STATE_ERROR = 'ERROR'
VM_POWER_STATE_UNKNOWN = 'UNKNOWN'
VM_POWER_STATES = {
0: VM_POWER_STATE_STOPPED,
1: VM_POWER_STATE_ACTIVE,
2: VM_POWER_STATE_BUILDING,
3: VM_POWER_STATE_PAUSED,
4: VM_POWER_STATE_SHUTDOWN,
5: VM_POWER_STATE_STOPPED,
6: VM_POWER_STATE_ERROR,
7: VM_POWER_STATE_ERROR,
}
# StorageVolume Connection States
STORAGE_STATE_ACTIVE = 'Active'
STORAGE_STATE_INACTIVE = 'Inactive'
# VMHost Connection states
VMHOST_CONNECTED = 'Connected'
VMHOST_DISCONNECTED = 'Disconnected'
# VirtualSwitch Connection States
VIRSWITCH_STATE_ACTIVE = 'Active'
VIRSWITCH_STATE_INACTIVE = 'Inactive'
# Date/Time fields in ISO 8601 format
DATE_TIME_FORMAT = "%Y%m%dT%H%M%S.000Z"
# Vm Connection State
VM_CONNECTED = 'Connected'
VM_DISCONNECTED = 'Disconnected'
# Vm Auto start Enabled
AUTO_START_ENABLED = 'AutostartEnabled'
AUTO_START_DISABLED = 'AutoStartDisabled'
class DbConstants(object):
ORDER_ASC = 'asc'
ORDER_DESC = 'desc'
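The `VM_POWER_STATES` table above maps hypervisor integer codes to symbolic state names; a lookup helper with a fallback keeps unmapped codes from raising. A sketch of that usage (the helper is hypothetical, the mapping mirrors the table above):

```python
VM_POWER_STATES = {
    0: 'STOPPED', 1: 'ACTIVE', 2: 'BUILDING', 3: 'PAUSED',
    4: 'SHUTDOWN', 5: 'STOPPED', 6: 'ERROR', 7: 'ERROR',
}


def power_state_name(code):
    # Unknown hypervisor codes fall back to UNKNOWN rather than KeyError.
    return VM_POWER_STATES.get(code, 'UNKNOWN')
```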

View File

@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

View File

@ -1,462 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Defines interface for healthnmon DB access.
Uses underlying SQL alchemy layer which is dynamically loaded.
"""
from nova import utils
from oslo.config import cfg
db_opts = [
cfg.StrOpt('healthnmon_db_backend',
default='sqlalchemy',
help='The backend to use for db'),
]
CONF = cfg.CONF
CONF.register_opts(db_opts)
IMPL = utils.LazyPluggable('healthnmon_db_backend',
sqlalchemy='healthnmon.db.sqlalchemy.api')
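`LazyPluggable` defers importing the configured backend module until it is first used, so merely importing this module does not pull in SQLAlchemy. A minimal sketch of that idea with stdlib `importlib` (hypothetical class name, not the nova implementation):

```python
import importlib


class LazyBackend(object):
    """Sketch of the LazyPluggable idea: the backend module is imported
    on first attribute access, not at definition time."""

    def __init__(self, module_path):
        self._module_path = module_path
        self._module = None

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails, i.e. for
        # names that should be delegated to the backend module.
        if self._module is None:
            self._module = importlib.import_module(self._module_path)
        return getattr(self._module, name)
```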
#################################################
def vm_host_save(context, vmhost):
"""This API will create or update a VmHost object and
its associations to DB. For the update to be working the
VMHost object should have been one returned by DB API.
Else it will be considered as an insert.
Parameters:
vmhost - VmHost type object to be saved
context - nova.context.RequestContext object
"""
return IMPL.vm_host_save(context, vmhost)
def vm_host_get_by_ids(context, ids):
"""This API will return a list of VmHost objects which corresponds to ids
Parameters:
ids - List of VmHost ids
context - nova.context.RequestContext object
"""
return IMPL.vm_host_get_by_ids(context, ids)
def vm_host_get_all(context):
"""This API will return a list of all the VmHost objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return IMPL.vm_host_get_all(context)
def vm_host_delete_by_ids(context, ids):
"""This API will delete VmHost objects and its associations to DB.
Parameters:
ids - ids for VmHost objects to be deleted
context - nova.context.RequestContext object
"""
return IMPL.vm_host_delete_by_ids(context, ids)
def vm_host_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the vm_hosts that match all filters and sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of VmHost model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'connectionState':'Connected',
'name':['n1', 'n2']} will filter as
connectionState = 'Connected' AND name in ('n1', 'n2')
Special filter :
changes-since : long value - time in epoch ms
Gets the hosts changed or
deleted after the specified time
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of vm_hosts that match all filters and sorted with sort_key
"""
return IMPL.vm_host_get_all_by_filters(context,
filters, sort_key, sort_dir)
#################################################
def vm_save(context, vm):
"""This API will create or update a Vm object and its associations to DB.
For the update to be working the VM object should have been one returned
by DB API. Else it will be considered as an insert.
Parameters:
vm - Vm type object to be saved
context - nova.context.RequestContext object
"""
return IMPL.vm_save(context, vm)
def vm_get_by_ids(context, ids):
"""This API will return a list of Vm objects which corresponds to ids
Parameters:
ids - List of Vm ids
context - nova.context.RequestContext object
"""
return IMPL.vm_get_by_ids(context, ids)
def vm_get_all(context):
"""This API will return a list of all the Vm objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return IMPL.vm_get_all(context)
def vm_delete_by_ids(context, ids):
"""This API will delete Vms object and its associations to DB.
Parameters:
ids - ids for Vm objects to be deleted
context - nova.context.RequestContext object
"""
return IMPL.vm_delete_by_ids(context, ids)
def vm_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the vms that match all filters and sorted with sort_key.
Deleted rows will be returned by default, unless
there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of Vm model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'powerState':'ACTIVE', 'name':['n1', 'n2']}
will filter as
powerState = 'ACTIVE' AND name in ('n1', 'n2')
Special filter :
changes-since : long value - time in epoch ms
Gets the Vms changed or deleted
after the specified time
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of vms that match all filters and sorted with sort_key
"""
return IMPL.vm_get_all_by_filters(context, filters, sort_key, sort_dir)
#################################################
def storage_volume_save(context, storagevolume):
"""This API will create or update a StorageVolume object and
its associations to DB. For the update to be working the
storagevolume object should have been one returned by DB API.
Else it will be considered as an insert.
Parameters:
storagevolume - StorageVolume type object to be saved
context - nova.context.RequestContext object
"""
return IMPL.storage_volume_save(context, storagevolume)
def storage_volume_get_by_ids(context, ids):
"""This API will return a list of StorageVolume objects
which corresponds to ids
Parameters:
ids - List of StorageVolume ids
context - nova.context.RequestContext object
"""
return IMPL.storage_volume_get_by_ids(context, ids)
def storage_volume_get_all(context):
"""This API will return a list of all the StorageVolume
objects present in Db
Parameters:
context - nova.context.RequestContext object
"""
return IMPL.storage_volume_get_all(context)
def storage_volume_delete_by_ids(context, ids):
"""This API will delete Volume objects and its associations to DB.
Parameters:
ids - ids for Volumes objects to be deleted
context - nova.context.RequestContext object
"""
return IMPL.storage_volume_delete_by_ids(context, ids)
def storage_volume_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the storage volumes that match all filters
and sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of StorageVolume model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'size':1024, 'name':['vol1', 'vol2']}
will filter as
size = 1024 AND name in ('vol1', 'vol2')
Special filter :
changes-since : long value - time in epoch ms
Gets the volumes changed or
deleted after the specified time
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of storage volumes that match all filters
and sorted with sort_key
"""
return IMPL.storage_volume_get_all_by_filters(context,
filters, sort_key, sort_dir)
# ====== VirtualSwitch APIs ===============
def virtual_switch_save(context, virtual_switch):
"""This API will create or update a VirtualSwitch object and
its associations to DB. For the update to be working the
VirtualSwitch object should have been one returned by DB API.
Else it will be considered as an insert.
Parameters:
virtual_switch - VirtualSwitch type object to be saved
context - nova.context.RequestContext object
"""
return IMPL.virtual_switch_save(context, virtual_switch)
def virtual_switch_get_by_ids(context, ids):
"""This API will return a list of VirtualSwitch
objects which corresponds to ids
Parameters:
ids - List of VirtualSwitch ids
context - nova.context.RequestContext object
"""
return IMPL.virtual_switch_get_by_ids(context, ids)
def virtual_switch_get_all(context):
"""This API will return a list of all the VirtualSwitch
objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return IMPL.virtual_switch_get_all(context)
def virtual_switch_delete_by_ids(context, ids):
"""This API will delete VirtualSwitch objects and its associations to DB.
Parameters:
ids - ids for VirtualSwitch objects to be deleted
context - nova.context.RequestContext object
"""
return IMPL.virtual_switch_delete_by_ids(context, ids)
def virtual_switch_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the virtual_switch that match all filters
and sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of VirtualSwitch model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'switchType':'abc', 'name':['n1', 'n2']}
will filter as
switchType = 'abc' AND name in ('n1', 'n2')
Special filter :
changes-since : long value - time in epoch ms
Gets the virtual switches
changed or deleted after
the specified time
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of virtual_switch that match all filters
and sorted with sort_key
"""
return IMPL.virtual_switch_get_all_by_filters(context,
filters,
sort_key,
sort_dir)
# ====== PortGroup APIs ===============
def port_group_save(context, port_group):
"""This API will create or update a PortGroup and its associations to DB.
For the update to be working the port group object should have been one
returned by DB API. Else it will be considered as an insert.
Parameters:
port_group - PortGroup type object to be saved
context - nova.context.RequestContext object
"""
return IMPL.port_group_save(context, port_group)
def port_group_get_by_ids(context, ids):
"""This API will return a list of PortGroup which corresponds to ids
Parameters:
ids - List of PortGroup ids
context - nova.context.RequestContext object
"""
return IMPL.port_group_get_by_ids(context, ids)
def port_group_get_all(context):
"""This API will return a list of all the PortGroup objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return IMPL.port_group_get_all(context)
def port_group_delete_by_ids(context, ids):
"""This API will delete PortGroup objects and its associations to DB.
Parameters:
ids - ids for PortGroup objects to be deleted
context - nova.context.RequestContext object
"""
return IMPL.port_group_delete_by_ids(context, ids)
# ====== Subnet APIs ===============
def subnet_save(context, subnet):
"""This API will create or update a Subnet object and its associations
to DB. For the update to work, the Subnet object should be
one returned by the DB API; otherwise it will be treated as an insert.
Parameters:
subnet - Subnet type object to be saved
context - nova.context.RequestContext object
"""
return IMPL.subnet_save(context, subnet)
def subnet_get_by_ids(context, ids):
"""This API will return a list of Subnet objects which corresponds to ids
Parameters:
ids - List of Subnet ids
context - nova.context.RequestContext object
"""
return IMPL.subnet_get_by_ids(context, ids)
def subnet_get_all(context):
"""This API will return a list of all the Subnet objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return IMPL.subnet_get_all(context)
def subnet_delete_by_ids(context, ids):
"""This API will delete Subnet objects and its associations to DB.
Parameters:
ids - ids for Subnet objects to be deleted
context - nova.context.RequestContext object
"""
return IMPL.subnet_delete_by_ids(context, ids)
def subnet_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the subnet that match all filters and sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of Subnet model
if the value is a scalar an '=' filter is applied;
if the value is a list or tuple an 'IN' filter is applied
eg : {'isPublic':True, 'name':['n1', 'n2']}
will filter as
isPublic = True AND name in ('n1', 'n2')
Special filter :
changes-since : long value - time in epoch ms
Gets the subnets changed or
deleted after the specified time
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of subnet that match all filters and sorted with sort_key
"""
return IMPL.subnet_get_all_by_filters(context, filters, sort_key, sort_dir)
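The filter semantics shared by these *_get_all_by_filters APIs (a scalar value becomes an '=' comparison, a list or tuple becomes an 'IN' comparison, and all entries are ANDed together) can be sketched in plain Python. This is an illustrative stand-in for the contract only, not the actual SQLAlchemy implementation:

```python
def apply_filters(rows, filters):
    """Filter dict-like rows with healthnmon-style filter semantics:
    scalar value -> equality, list/tuple -> 'IN', all keys ANDed."""
    def matches(row):
        for key, value in filters.items():
            if isinstance(value, (list, tuple)):
                if row.get(key) not in value:   # 'IN' filter
                    return False
            elif row.get(key) != value:         # '=' filter
                return False
        return True
    return [row for row in rows if matches(row)]


subnets = [
    {'isPublic': True, 'name': 'n1'},
    {'isPublic': True, 'name': 'n3'},
    {'isPublic': False, 'name': 'n2'},
]
# isPublic = True AND name in ('n1', 'n2') -> keeps only the first row
print(apply_filters(subnets, {'isPublic': True, 'name': ['n1', 'n2']}))
```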
@ -1,37 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Database setup and migration commands."""
from nova import utils
IMPL = utils.LazyPluggable('db_backend',
sqlalchemy='healthnmon.db.sqlalchemy.migration'
)
INIT_VERSION = 0
def db_sync(version=None):
"""Migrate the database to `version` or the most recent version."""
return IMPL.db_sync(version=version)
def db_version():
"""Display the current database version."""
return IMPL.db_version()
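The `IMPL = utils.LazyPluggable(...)` pattern above defers choosing the backend module until the first call. A minimal pure-Python sketch of that dispatch, with a fake backend standing in for `healthnmon.db.sqlalchemy.migration` (the config dict and backend class are illustrative assumptions, not nova's real API):

```python
CONF = {'db_backend': 'sqlalchemy'}  # stand-in for the nova config flag


class LazyPluggable(object):
    """Sketch of nova.utils.LazyPluggable: resolve the backend named by
    a config flag the first time an attribute is requested."""

    def __init__(self, pivot, **backends):
        self._backends = backends
        self._pivot = pivot
        self._backend = None

    def __getattr__(self, key):
        # Only called for attributes not found normally, i.e. the
        # backend's methods such as db_sync / db_version.
        if self._backend is None:
            self._backend = self._backends[CONF[self._pivot]]
        return getattr(self._backend, key)


class _FakeSqlalchemyMigration(object):
    """Illustrative backend; the real one wraps sqlalchemy-migrate."""

    def db_version(self):
        return 42


IMPL = LazyPluggable('db_backend', sqlalchemy=_FakeSqlalchemyMigration())
print(IMPL.db_version())  # dispatched to the selected backend
```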
@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
@ -1,450 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
from healthnmon.db.sqlalchemy import vmhost_api, vm_api, storagevolume_api, \
virtualswitch_api, portgroup_api, subnet_api
LOG = logging.getLogger(__name__)
#################################
@context_api.require_admin_context
def vm_host_save(context, vmhost):
"""This API will create or update a VmHost object and its
associations to DB. For the update to work, the VmHost
object should be one returned by the DB API; otherwise it will
be treated as an insert.
Parameters:
vmhost - VmHost type object to be saved
context - nova.context.RequestContext object
"""
return vmhost_api.vm_host_save(context, vmhost)
@context_api.require_admin_context
def vm_host_get_by_ids(context, ids):
"""This API will return a list of VmHost objects which corresponds to ids
Parameters:
ids - List of VmHost ids
context - nova.context.RequestContext object
"""
return vmhost_api.vm_host_get_by_ids(context, ids)
@context_api.require_admin_context
def vm_host_get_all(context):
"""This API will return a list of all the VmHost objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return vmhost_api.vm_host_get_all(context)
@context_api.require_admin_context
def vm_host_delete_by_ids(context, ids):
"""This API will delete VmHost objects which corresponds to ids
Parameters:
ids - List of VmHost ids
context - nova.context.RequestContext object
"""
return vmhost_api.vm_host_delete_by_ids(context, ids)
@context_api.require_admin_context
def vm_host_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the vm_hosts that match all filters and sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of VmHost model
if the value is a scalar an '=' filter is applied;
if the value is a list or tuple an 'IN' filter is applied
eg : {'connectionState':'Connected',
'name':['n1', 'n2']} will filter as
connectionState = 'Connected' AND name in ('n1', 'n2')
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of vm_hosts that match all filters and sorted with sort_key
"""
return vmhost_api.vm_host_get_all_by_filters(context,
filters, sort_key, sort_dir)
#################################
@context_api.require_context
def vm_save(context, vm):
"""This API will create or update a Vm object and its associations to DB.
For the update to work, the Vm object should be
one returned by the DB API; otherwise it will be treated as an insert.
Parameters:
vm - Vm type object to be saved
context - nova.context.RequestContext object
"""
return vm_api.vm_save(context, vm)
@context_api.require_context
def vm_get_by_ids(context, ids):
"""This API will return a list of Vm objects which corresponds to ids
Parameters:
ids - List of Vm ids
context - nova.context.RequestContext object
"""
return vm_api.vm_get_by_ids(context, ids)
@context_api.require_context
def vm_get_all(context):
"""This API will return a list of all the Vm objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return vm_api.vm_get_all(context)
@context_api.require_context
def vm_delete_by_ids(context, ids):
"""This API will delete Vm objects which corresponds to ids
Parameters:
ids - List of Vm ids
context - nova.context.RequestContext object
"""
return vm_api.vm_delete_by_ids(context, ids)
@context_api.require_admin_context
def vm_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the vms that match all filters and sorted with sort_key.
Deleted rows will be returned by default, unless there's
a filter that says otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of Vm model
if the value is a scalar an '=' filter is applied;
if the value is a list or tuple an 'IN' filter is applied
eg : {'powerState':'ACTIVE', 'name':['n1', 'n2']}
will filter as
powerState = 'ACTIVE' AND name in ('n1', 'n2')
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction, desc for descending
sort direction
Returns:
list of vms that match all filters and sorted with sort_key
"""
return vm_api.vm_get_all_by_filters(context, filters, sort_key, sort_dir)
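The `sort_key` / `sort_dir` arguments documented above map directly onto an ORDER BY clause. The contract, sketched in plain Python over dict-like rows (illustrative only, not the DB layer):

```python
def sort_rows(rows, sort_key, sort_dir):
    """Sort dict-like rows on sort_key; sort_dir is 'asc' for ascending
    or 'desc' for descending, matching the *_get_all_by_filters APIs."""
    if sort_dir not in ('asc', 'desc'):
        raise ValueError("sort_dir must be 'asc' or 'desc'")
    return sorted(rows, key=lambda r: r[sort_key],
                  reverse=(sort_dir == 'desc'))


vms = [{'name': 'vm2'}, {'name': 'vm1'}, {'name': 'vm3'}]
print(sort_rows(vms, 'name', 'desc'))  # vm3 first, vm1 last
```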
####################################################
@context_api.require_admin_context
def storage_volume_save(context, storagevolume):
"""This API will create or update a StorageVolume object and its
associations to DB. For the update to work, the StorageVolume object
should be one returned by the DB API; otherwise it will be treated
as an insert.
Parameters:
storagevolume - StorageVolume type object to be saved
context - nova.context.RequestContext object
"""
return storagevolume_api.storage_volume_save(context, storagevolume)
@context_api.require_admin_context
def storage_volume_get_by_ids(context, ids):
"""This API will return a list of StorageVolume
objects which corresponds to ids
Parameters:
ids - List of StorageVolume ids
context - nova.context.RequestContext object
"""
return storagevolume_api.storage_volume_get_by_ids(context, ids)
@context_api.require_admin_context
def storage_volume_get_all(context):
"""This API will return a list of all the StorageVolume
objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
return storagevolume_api.storage_volume_get_all(context)
@context_api.require_admin_context
def storage_volume_delete_by_ids(context, ids):
"""This API will delete StorageVolume objects which corresponds to ids
Parameters:
ids - List of StorageVolume ids
context - nova.context.RequestContext object (optional parameter)
"""
return storagevolume_api.storage_volume_delete_by_ids(context, ids)
@context_api.require_admin_context
def storage_volume_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the storage volumes that match all filters
and sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of StorageVolume model
if the value is a scalar an '=' filter is applied;
if the value is a list or tuple an 'IN' filter is applied
eg : {'size':1024, 'name':['vol1', 'vol2']}
will filter as
size = 1024 AND name in ('vol1', 'vol2')
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction, desc
for descending sort direction
Returns:
list of storage volumes that match all filters
and sorted with sort_key
"""
return storagevolume_api.storage_volume_get_all_by_filters(context,
filters,
sort_key,
sort_dir)
# ====== Virtual Switch APIs ==============
@context_api.require_admin_context
def virtual_switch_save(context, virtual_switch):
"""This API will create or update a VirtualSwitch object
and its associations to DB. For the update to work,
the virtual_switch object should be one returned by the DB API;
otherwise it will be treated as an insert.
Parameters:
virtual_switch - VirtualSwitch type object to be saved
context - nova.context.RequestContext object (optional parameter)
"""
return virtualswitch_api.virtual_switch_save(context, virtual_switch)
@context_api.require_admin_context
def virtual_switch_get_by_ids(context, ids):
"""This API will return a list of virtual switch objects
which corresponds to ids
Parameters:
ids - List of virtual switch ids
context - nova.context.RequestContext object (optional parameter)
"""
return virtualswitch_api.virtual_switch_get_by_ids(context, ids)
@context_api.require_admin_context
def virtual_switch_get_all(context):
"""This API will return a list of all the
virtual switch objects present in DB
Parameters:
context - nova.context.RequestContext object (optional parameter)
"""
return virtualswitch_api.virtual_switch_get_all(context)
@context_api.require_admin_context
def virtual_switch_delete_by_ids(context, ids):
"""This API will delete virtual switch objects which corresponds to ids
Parameters:
ids - List of virtual switch ids
context - nova.context.RequestContext object (optional parameter)
"""
return virtualswitch_api.virtual_switch_delete_by_ids(context, ids)
@context_api.require_admin_context
def virtual_switch_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the virtual_switch that match all filters and
sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of VirtualSwitch model
if the value is a scalar an '=' filter is applied;
if the value is a list or tuple an 'IN' filter is applied
eg : {'switchType':'abc', 'name':['n1', 'n2']}
will filter as
switchType = 'abc' AND name in ('n1', 'n2')
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction, desc for
descending sort direction
Returns:
list of virtual_switch that match all filters and
sorted with sort_key
"""
return virtualswitch_api.virtual_switch_get_all_by_filters(context,
filters,
sort_key,
sort_dir)
# ====== Port Group APIs ==============
@context_api.require_admin_context
def port_group_save(context, port_group):
"""This API will create or update a PortGroup object and
its associations to DB. For the update to work, the
port_group object should be one returned by the DB API;
otherwise it will be treated as an insert.
Parameters:
port_group - port group object to be saved
context - nova.context.RequestContext object (optional parameter)
"""
return portgroup_api.port_group_save(context, port_group)
@context_api.require_admin_context
def port_group_get_by_ids(context, ids):
"""This API will return a list of PortGroup objects
which corresponds to ids
Parameters:
ids - List of port group ids
context - nova.context.RequestContext object (optional parameter)
"""
return portgroup_api.port_group_get_by_ids(context, ids)
@context_api.require_admin_context
def port_group_get_all(context):
"""This API will return a list of all the PortGroup objects present in DB
Parameters:
context - nova.context.RequestContext object (optional parameter)
"""
return portgroup_api.port_group_get_all(context)
@context_api.require_admin_context
def port_group_delete_by_ids(context, ids):
"""This API will delete port_group objects which corresponds to ids
Parameters:
ids - List of port group ids
context - nova.context.RequestContext object (optional parameter)
"""
return portgroup_api.port_group_delete_by_ids(context, ids)
# ====== Subnet APIs ===============
@context_api.require_admin_context
def subnet_save(context, subnet):
"""This API will create or update a Subnet object and
its associations to DB. For the update to work, the
subnet object should be one returned by the DB API;
otherwise it will be treated as an insert.
Parameters:
subnet - Subnet type object to be saved
context - nova.context.RequestContext object (optional parameter)
"""
return subnet_api.subnet_save(context, subnet)
@context_api.require_admin_context
def subnet_get_by_ids(context, ids):
"""This API will return a list of subnet objects which corresponds to ids
Parameters:
ids - List of subnet ids
context - nova.context.RequestContext object (optional parameter)
"""
return subnet_api.subnet_get_by_ids(context, ids)
@context_api.require_admin_context
def subnet_get_all(context):
"""This API will return a list of all the Subnet objects present in DB
Parameters:
context - nova.context.RequestContext object (optional parameter)
"""
return subnet_api.subnet_get_all(context)
@context_api.require_admin_context
def subnet_delete_by_ids(context, ids):
"""This API will delete Subnets objects which corresponds to ids
Parameters:
ids - List of subnet ids
context - nova.context.RequestContext object (optional parameter)
"""
return subnet_api.subnet_delete_by_ids(context, ids)
@context_api.require_admin_context
def subnet_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the subnet that match all filters and sorted with sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of Subnet model
if the value is a scalar an '=' filter is applied;
if the value is a list or tuple an 'IN' filter is applied
eg : {'isPublic':True, 'name':['n1', 'n2']}
will filter as
isPublic = True AND name in ('n1', 'n2')
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of subnet that match all filters and sorted with sort_key
"""
return subnet_api.subnet_get_all_by_filters(context,
filters, sort_key, sort_dir)
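The special `changes-since` filter documented throughout this module takes a long epoch-millisecond value (the module imports `get_current_epoch_ms` from healthnmon.utils for this representation). A caller-side sketch of building such a filter value; the helper name here is illustrative, not part of the API:

```python
import time


def epoch_ms_ago(seconds):
    """Epoch-millisecond timestamp `seconds` in the past -- the value
    format expected by the 'changes-since' special filter."""
    return int((time.time() - seconds) * 1000)


# subnets changed or deleted within the last hour
filters = {'changes-since': epoch_ms_ago(3600)}
print(filters)
```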
File diff suppressed because it is too large
@ -1,466 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Maps schema objects to Model classes in module
nova.healthnmon.resourcemodel.healthnmonResourceModel
"""
from sqlalchemy import and_, or_
from sqlalchemy.orm import mapper, relationship
from healthnmon.resourcemodel import healthnmonResourceModel as model
from healthnmon.db.sqlalchemy import manage_healthnmon_db as schema
from sqlalchemy.ext.associationproxy import association_proxy
from healthnmon.resourcemodel.healthnmonResourceModel import MemberSpec_
def map_models():
# clear_mappers()
mapper(model.OsProfile, schema.OsProfile)
mapper(model.Cost, schema.Cost)
mapper(model.PhysicalServer, schema.PhysicalServer)
mapper(model.IpProfile, schema.IpProfile)
mapper(model.VmHost, schema.VmHost, inherits=model.PhysicalServer,
properties={
'createEpoch': [schema.PhysicalServer.c.createEpoch,
schema.VmHost.c.createEpoch],
'lastModifiedEpoch': [schema.PhysicalServer.c.lastModifiedEpoch,
schema.VmHost.c.lastModifiedEpoch],
'deletedEpoch': [schema.PhysicalServer.c.deletedEpoch,
schema.VmHost.c.deletedEpoch],
'resourceManagerId': [schema.PhysicalServer.c.resourceManagerId,
schema.VmHost.c.resourceManagerId],
'cost': relationship(model.Cost,
foreign_keys=schema.PhysicalServer.c.costId),
'os': relationship(model.OsProfile,
foreign_keys=schema.PhysicalServer.c.osId),
'deleted': [schema.PhysicalServer.c.deleted,
schema.VmHost.c.deleted],
'virtualSwitches': relationship(
model.VirtualSwitch,
foreign_keys=schema.VirtualSwitch.c.vmHostId,
primaryjoin=(and_(
schema.VmHost.c.id == schema.VirtualSwitch.c.vmHostId,
or_(schema.VirtualSwitch.c.deleted == False,
schema.VirtualSwitch.c.deleted == None))),
cascade='all, delete, delete-orphan'),
'portGroups': relationship(
model.PortGroup,
foreign_keys=schema.PortGroup.c.vmHostId,
primaryjoin=(and_(
schema.VmHost.c.id == schema.PortGroup.c.vmHostId,
or_(schema.PortGroup.c.deleted == False,
schema.PortGroup.c.deleted == None))),
cascade='all, delete, delete-orphan'),
'ipAddresses': relationship(
model.IpProfile,
foreign_keys=schema.IpProfile.c.vmHostId,
cascade='all, delete, delete-orphan'),
})
mapper(model.Subnet, schema.Subnet, properties={
'groupIdTypes': relationship(
model.GroupIdType,
foreign_keys=schema.GroupIdType.c.subnetId),
'resourceTags': relationship(
model.ResourceTag,
foreign_keys=schema.ResourceTag.c.subnetId),
'ipAddressRanges': relationship(
model.IpAddressRange,
foreign_keys=schema.IpAddressRange.c.subnetId,
cascade='all, delete, delete-orphan'),
'usedIpAddresses': relationship(
model.IpAddress,
foreign_keys=schema.IpAddress.c.subnetId,
cascade='all, delete, delete-orphan'),
'networkSrc': relationship(SubnetNetworkSource, uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetNetworkSources.c.subnetId,
cascade='all, delete, delete-orphan'),
'dnsServer': relationship(SubnetDnsServer, uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetDnsServers.c.subnetId),
'dnsSuffixes': relationship(
SubnetDnsSearchSuffix,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetDnsSearchSuffixes.c.subnetId),
'defaultGateway': relationship(
SubnetDefaultGateway,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetDefaultGateways.c.subnetId,
cascade='all, delete, delete-orphan'),
'winsServer': relationship(
SubnetWinServer,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetWinServers.c.subnetId),
'ntpDateServer': relationship(
SubnetNtpDateServer,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetNtpDateServers.c.subnetId),
'deploymentService': relationship(
SubnetDeploymentService,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetDeploymentServices.c.subnetId),
'parents': relationship(
SubnetParentId,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetParentIds.c.subnetId),
'childs': relationship(
SubnetChildId,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetChildIds.c.subnetId),
'redundancyPeer': relationship(
SubnetRedundancyPeerId,
uselist=True,
primaryjoin=schema.Subnet.c.id
== schema.SubnetRedundancyPeerIds.c.subnetId),
})
mapper(model.GroupIdType, schema.GroupIdType,
properties={'networkType': relationship(GroupIdTypeNetworkTypes,
uselist=True, primaryjoin=schema.GroupIdType.c.id
== schema.GroupIdTypeNetworkTypes.c.groupTypeId)})
model.GroupIdType.networkTypes = association_proxy('networkType',
'networkTypeId')
mapper(GroupIdTypeNetworkTypes, schema.GroupIdTypeNetworkTypes)
mapper(model.ResourceTag, schema.ResourceTag)
mapper(model.IpAddress, schema.IpAddress)
mapper(model.IpAddressRange, schema.IpAddressRange,
properties={'startAddress': relationship(model.IpAddress,
foreign_keys=schema.IpAddressRange.c.startAddressId,
primaryjoin=schema.IpAddressRange.c.startAddressId
== schema.IpAddress.c.id),
'endAddress': relationship(model.IpAddress,
foreign_keys=schema.IpAddressRange.c.endAddressId,
primaryjoin=schema.IpAddressRange.c.endAddressId
== schema.IpAddress.c.id)})
model.Subnet.networkSources = association_proxy('networkSrc',
'networkSourceId')
mapper(SubnetNetworkSource, schema.SubnetNetworkSources)
model.Subnet.dnsServers = association_proxy('dnsServer',
'dnsServerId')
mapper(SubnetDnsServer, schema.SubnetDnsServers)
model.Subnet.defaultGateways = association_proxy('defaultGateway',
'defaultGatewayId')
mapper(SubnetDefaultGateway, schema.SubnetDefaultGateways)
model.Subnet.dnsSearchSuffixes = association_proxy('dnsSuffixes',
'dnsSuffixId')
mapper(SubnetDnsSearchSuffix, schema.SubnetDnsSearchSuffixes)
model.Subnet.winsServers = association_proxy('winsServer',
'winServerId')
mapper(SubnetWinServer, schema.SubnetWinServers)
model.Subnet.ntpDateServers = association_proxy('ntpDateServer',
'ntpDateServerId')
mapper(SubnetNtpDateServer, schema.SubnetNtpDateServers)
model.Subnet.deploymentServices = \
association_proxy('deploymentService', 'deploymentServiceId')
mapper(SubnetDeploymentService, schema.SubnetDeploymentServices)
model.Subnet.parentIds = association_proxy('parents', 'parentId')
mapper(SubnetParentId, schema.SubnetParentIds)
model.Subnet.childIds = association_proxy('childs', 'childId')
mapper(SubnetChildId, schema.SubnetChildIds)
model.Subnet.redundancyPeerIds = association_proxy('redundancyPeer',
'redundancyPeerId')
mapper(SubnetRedundancyPeerId, schema.SubnetRedundancyPeerIds)
mapper(model.VirtualSwitch, schema.VirtualSwitch, properties={
'portGroups': relationship(
model.PortGroup,
foreign_keys=schema.PortGroup.c.virtualSwitchId,
primaryjoin=(
and_(schema.VirtualSwitch.c.id ==
schema.PortGroup.c.virtualSwitchId,
or_(schema.PortGroup.c.deleted == False,
schema.PortGroup.c.deleted == None))),
cascade='all, delete, delete-orphan'),
'cost': relationship(model.Cost,
foreign_keys=schema.VirtualSwitch.c.costId),
'subnets': relationship(
VirtualSwitchSubnetIds,
uselist=True,
primaryjoin=schema.VirtualSwitch.c.id
== schema.VirtualSwitchSubnetIds.c.virtualSwitchId,
cascade='all, delete, delete-orphan'),
'networks': relationship(
VirtualSwitchInterfaces,
uselist=True,
primaryjoin=schema.VirtualSwitch.c.id
== schema.NetworkInterfaces.c.vSwitchId,
cascade='all, delete, delete-orphan')})
model.VirtualSwitch.subnetIds = association_proxy('subnets', 'subnetId')
mapper(VirtualSwitchSubnetIds, schema.VirtualSwitchSubnetIds)
model.VirtualSwitch.networkInterfaces = association_proxy(
'networks', 'interfaceId')
mapper(VirtualSwitchInterfaces, schema.NetworkInterfaces)
mapper(model.PortGroup, schema.PortGroup, properties={
'type_': schema.PortGroup.c.type,
'cost': relationship(model.Cost,
foreign_keys=schema.PortGroup.c.costId)})
mapper(model.Vm, schema.Vm, properties={
'cost': relationship(model.Cost,
foreign_keys=schema.Vm.c.costId),
'os': relationship(model.OsProfile,
foreign_keys=schema.Vm.c.osId),
'ipAddresses': relationship(model.IpProfile,
foreign_keys=schema.IpProfile.c.vmId,
cascade='all, delete, delete-orphan'
),
'vmNetAdapters': relationship(
model.VmNetAdapter,
foreign_keys=schema.VmNetAdapter.c.vmId,
cascade='all, delete, delete-orphan'),
'vmScsiControllers': relationship(
model.VmScsiController,
foreign_keys=schema.VmScsiController.c.vmId,
cascade='all, delete, delete-orphan'),
'vmDisks': relationship(
model.VmDisk,
foreign_keys=schema.VmDisk.c.vmId,
cascade='all, delete, delete-orphan'),
'vmGenericDevices': relationship(
model.VmGenericDevice,
foreign_keys=schema.VmGenericDevice.c.vmId,
cascade='all, delete, delete-orphan'),
'vmGlobalSettings': relationship(
model.VmGlobalSettings,
foreign_keys=schema.Vm.c.globalSettingsId),
'cpuResourceAllocation': relationship(
model.ResourceAllocation,
foreign_keys=schema.Vm.c.cpuResourceAllocationId,
primaryjoin=schema.Vm.c.cpuResourceAllocationId
== schema.ResourceAllocation.c.id),
'memoryResourceAllocation': relationship(
model.ResourceAllocation,
foreign_keys=schema.Vm.c.memoryResourceAllocationId,
primaryjoin=schema.Vm.c.memoryResourceAllocationId
== schema.ResourceAllocation.c.id)})
mapper(model.VmGlobalSettings, schema.VmGlobalSettings)
mapper(model.VmScsiController, schema.VmScsiController,
properties={'type_': schema.VmScsiController.c.type})
mapper(model.VmNetAdapter, schema.VmNetAdapter,
properties={'ipAdd': relationship(VmNetAdapterIpProfile,
uselist=True, primaryjoin=schema.VmNetAdapter.c.id
== schema.VmNetAdapterIpProfiles.c.netAdapterId,
cascade='all, delete, delete-orphan')})
model.VmNetAdapter.ipAddresses = association_proxy('ipAdd', 'ipAddress')
mapper(VmNetAdapterIpProfile, schema.VmNetAdapterIpProfiles)
mapper(model.VmDisk, schema.VmDisk)
mapper(model.VmGenericDevice, schema.VmGenericDevice,
properties={'properties': relationship(model.Property,
foreign_keys=schema.VmProperty.c.vmDeviceId,
cascade='all, delete, delete-orphan')})
mapper(model.Property, schema.VmProperty)
mapper(model.ResourceAllocation, schema.ResourceAllocation)
mapper(model.HostMountPoint, schema.HostMountPoint)
mapper(model.StorageVolume, schema.StorageVolume,
properties={'mountPoints': relationship(model.HostMountPoint,
foreign_keys=schema.HostMountPoint.c.storageVolumeId,
cascade='all, delete, delete-orphan'),
'vmDisks': relationship(model.VmDisk,
foreign_keys=schema.VmDisk.c.storageVolumeId,
cascade='all, delete, delete-orphan')})
# The below are mapper classes for list tables.
class VirtualSwitchSubnetIds(object):
member_data_items_ = {'subnetId': MemberSpec_('subnetId',
'xs:string', 0),
'virtualSwitchId': MemberSpec_('virtualSwitchId',
'xs:string', 0)}
def __init__(self, subnetId=None, virtualSwitchId=None):
self.subnetId = subnetId
self.virtualSwitchId = virtualSwitchId
class VirtualSwitchInterfaces(object):
member_data_items_ = {'interfaceId': MemberSpec_('interfaceId',
'xs:string', 0),
'vSwitchId': MemberSpec_('vSwitchId',
'xs:string', 0)}
def __init__(self, interfaceId=None, vSwitchId=None):
self.interfaceId = interfaceId
self.vSwitchId = vSwitchId
class SubnetNetworkSource(object):
member_data_items_ = \
{'networkSourceId': MemberSpec_('networkSourceId', 'xs:string',
0), 'subnetId': MemberSpec_('subnetId', 'xs:string', 0)}
def __init__(self, networkSourceId=None, subnetId=None):
self.networkSourceId = networkSourceId
self.subnetId = subnetId
class SubnetDnsServer(object):
member_data_items_ = {'dnsServerId': MemberSpec_('dnsServerId',
'xs:string', 0),
'subnetId': MemberSpec_('subnetId',
'xs:string', 0)}
def __init__(self, dnsServerId=None, subnetId=None):
self.dnsServerId = dnsServerId
self.subnetId = subnetId
class SubnetDnsSearchSuffix(object):
member_data_items_ = {'dnsSuffixId': MemberSpec_('dnsSuffixId',
'xs:string', 0),
'subnetId': MemberSpec_('subnetId',
'xs:string', 0)}
def __init__(self, dnsSuffixId=None, subnetId=None):
self.dnsSuffixId = dnsSuffixId
self.subnetId = subnetId
class SubnetDefaultGateway(object):
member_data_items_ = \
{'defaultGatewayId': MemberSpec_('defaultGatewayId', 'xs:string', 0),
'subnetId': MemberSpec_('subnetId', 'xs:string', 0)}
def __init__(self, defaultGatewayId=None, subnetId=None):
self.defaultGatewayId = defaultGatewayId
self.subnetId = subnetId
class SubnetWinServer(object):
member_data_items_ = {'winServerId': MemberSpec_('winServerId',
'xs:string', 0),
'subnetId': MemberSpec_('subnetId',
'xs:string', 0)}
def __init__(self, winServerId=None, subnetId=None):
self.winServerId = winServerId
self.subnetId = subnetId
class SubnetNtpDateServer(object):
member_data_items_ = \
{'ntpDateServerId': MemberSpec_('ntpDateServerId', 'xs:string',
0), 'subnetId': MemberSpec_('subnetId', 'xs:string', 0)}
def __init__(self, ntpDateServerId=None, subnetId=None):
self.ntpDateServerId = ntpDateServerId
self.subnetId = subnetId
class SubnetDeploymentService(object):
member_data_items_ = \
{'deploymentServiceId': MemberSpec_('deploymentServiceId',
'xs:string', 0), 'subnetId': MemberSpec_('subnetId',
'xs:string', 0)}
def __init__(self, deploymentServiceId=None, subnetId=None):
self.deploymentServiceId = deploymentServiceId
self.subnetId = subnetId
class SubnetParentId(object):
member_data_items_ = {'parentId': MemberSpec_('parentId',
'xs:string', 0),
'subnetId': MemberSpec_('subnetId',
'xs:string', 0)}
def __init__(self, parentId=None, subnetId=None):
self.parentId = parentId
self.subnetId = subnetId
class SubnetChildId(object):
member_data_items_ = {'childId': MemberSpec_('childId', 'xs:string', 0),
'subnetId': MemberSpec_('subnetId',
'xs:string', 0)}
def __init__(self, childId=None, subnetId=None):
self.childId = childId
self.subnetId = subnetId
class SubnetRedundancyPeerId(object):
member_data_items_ = \
{'redundancyPeerId': MemberSpec_('redundancyPeerId', 'xs:string',
0), 'subnetId': MemberSpec_('subnetId', 'xs:string', 0)}
def __init__(self, redundancyPeerId=None, subnetId=None):
self.redundancyPeerId = redundancyPeerId
self.subnetId = subnetId
class GroupIdTypeNetworkTypes(object):
member_data_items_ = {'networkTypeId': MemberSpec_('networkTypeId',
'xs:string', 0),
'groupTypeId': MemberSpec_('groupTypeId',
'xs:string', 0)}
def __init__(self, networkTypeId=None, groupTypeId=None):
self.networkTypeId = networkTypeId
self.groupTypeId = groupTypeId
class VmNetAdapterIpProfile(object):
member_data_items_ = {'ipAddress': MemberSpec_('ipAddress',
'xs:string', 0),
'netAdapterId': MemberSpec_('netAdapterId',
'xs:string', 0)}
def __init__(self, ipAddress=None, netAdapterId=None):
self.ipAddress = ipAddress
self.netAdapterId = netAdapterId
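The classes above follow the generateDS-style pattern: each association class carries a `member_data_items_` map of `MemberSpec_` entries describing the XML schema type of every attribute. A minimal stand-alone sketch of that pattern (the `MemberSpec_` below is a simplified stand-in, not the real generated helper):

```python
class MemberSpec_(object):
    """Simplified stand-in for the generated MemberSpec_ helper."""
    def __init__(self, name, data_type, container):
        self.name = name
        self.data_type = data_type
        self.container = container  # 1 marks a list-type attribute


class VmNetAdapterIpProfile(object):
    member_data_items_ = {
        'ipAddress': MemberSpec_('ipAddress', 'xs:string', 0),
        'netAdapterId': MemberSpec_('netAdapterId', 'xs:string', 0),
    }

    def __init__(self, ipAddress=None, netAdapterId=None):
        self.ipAddress = ipAddress
        self.netAdapterId = netAdapterId


profile = VmNetAdapterIpProfile(ipAddress='10.0.0.5', netAdapterId='na-1')
print(profile.ipAddress)                                  # 10.0.0.5
print(profile.member_data_items_['ipAddress'].data_type)  # xs:string
```

The schema metadata lets generic serializers walk any resource object without hard-coding its fields.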

@@ -1,15 +0,0 @@
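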
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,19 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from migrate.versioning.shell import main
if __name__ == '__main__':
main(debug='False', repository='.')

@@ -1,23 +0,0 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=healthnmon
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=healthnmon_migrate_version
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]

@@ -1,2 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-

@@ -1,91 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import distutils.version as dist_version
import os
from healthnmon.db import migration
from nova.openstack.common.db.sqlalchemy import session as db_session
from nova import exception
import migrate
from migrate.versioning import util as migrate_util
from migrate import exceptions as versioning_exceptions
from migrate.versioning import api as versioning_api
from migrate.versioning.repository import Repository
_REPOSITORY = None
get_engine = db_session.get_engine
@migrate_util.decorator
def patched_with_engine(f, *a, **kw):
url = a[0]
engine = migrate_util.construct_engine(url, **kw)
try:
kw['engine'] = engine
return f(*a, **kw)
finally:
if isinstance(engine, migrate_util.Engine) and engine is not url:
migrate_util.log.debug('Disposing SQLAlchemy engine %s', engine)
engine.dispose()
MIN_PKG_VERSION = dist_version.StrictVersion('0.7.3')
if (not hasattr(migrate, '__version__') or
dist_version.StrictVersion(migrate.__version__) < MIN_PKG_VERSION):
migrate_util.with_engine = patched_with_engine
def db_sync(version=None):
if version is not None:
try:
version = int(version)
except ValueError:
raise exception.NovaException(_("version should be an integer"))
current_version = db_version()
repository = _find_migrate_repo()
if version is None or version > current_version:
return versioning_api.upgrade(get_engine(), repository, version)
else:
return versioning_api.downgrade(get_engine(), repository,
version)
def db_version():
repository = _find_migrate_repo()
try:
return versioning_api.db_version(get_engine(), repository)
except versioning_exceptions.DatabaseNotControlledError:
return db_version_control(migration.INIT_VERSION)
def db_version_control(version=None):
repository = _find_migrate_repo()
versioning_api.version_control(get_engine(), repository, version)
return version
def _find_migrate_repo():
"""Get the path for the migrate repository."""
global _REPOSITORY
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
assert os.path.exists(path)
if _REPOSITORY is None:
_REPOSITORY = Repository(path)
return _REPOSITORY
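The `MIN_PKG_VERSION` guard above patches `migrate_util.with_engine` only when the installed sqlalchemy-migrate is older than 0.7.3. A self-contained sketch of that version-gate pattern, using tuple comparison instead of `distutils.version.StrictVersion` (an assumption for portability; the original code uses `StrictVersion`):

```python
def parse_version(version_string):
    """Turn '0.7.3' into (0, 7, 3) for lexicographic tuple comparison."""
    return tuple(int(part) for part in version_string.split('.'))

MIN_PKG_VERSION = parse_version('0.7.3')

def needs_patch(installed_version):
    # A library too old to define __version__ is modeled as None
    # and always gets the patched helper.
    if installed_version is None:
        return True
    return parse_version(installed_version) < MIN_PKG_VERSION

print(needs_patch('0.7.2'))   # True  -> install the patched helper
print(needs_patch('0.7.3'))   # False -> library already carries the fix
```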

@@ -1,142 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
from healthnmon.db.sqlalchemy.util import _create_filtered_ordered_query, \
__save_and_expunge, __cleanup_session
LOG = logging.getLogger(__name__)
def port_group_save(context, port_group):
"""This API will create or update a PortGroup object and
its associations to DB. For the update to be working the
port_group object should have been one returned by DB API.
Else it will be considered as a insert.
Parameters:
port_group - port group object to be saved
context - nova.context.RequestContext object (optional parameter)
"""
if port_group is None:
return
session = None
try:
session = nova_session.get_session()
epoch_time = get_current_epoch_ms()
port_groups = port_group_get_by_ids(context, [port_group.id])
if port_groups:
port_group.set_createEpoch(port_groups[0].get_createEpoch())
port_group.set_lastModifiedEpoch(epoch_time)
else:
port_group.set_createEpoch(epoch_time)
__save_and_expunge(session, port_group)
except Exception:
LOG.exception(_('error while adding/updating PortGroup'))
raise
finally:
__cleanup_session(session)
def port_group_get_by_ids(context, ids):
"""This API will return a list of PortGroup objects
which corresponds to ids
Parameters:
ids - List of port group ids
context - nova.context.RequestContext object (optional parameter)
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
portgroups = \
session.query(PortGroup).filter(
and_(PortGroup.id.in_(ids),
or_(PortGroup.deleted == False,
PortGroup.deleted == None))).\
options(joinedload('cost')).all()
session.expunge_all()
return portgroups
except Exception:
LOG.exception(_('error while obtaining PortGroup'))
raise
finally:
__cleanup_session(session)
def port_group_get_all(context):
"""This API will return a list of all the PortGroup objects present in DB
Parameters:
context - nova.context.RequestContext object (optional parameter)
"""
session = None
try:
session = nova_session.get_session()
portgroups = \
session.query(PortGroup).filter(
or_(PortGroup.deleted == False,
PortGroup.deleted == None)).\
options(joinedload('cost')).all()
session.expunge_all()
return portgroups
except Exception:
LOG.exception(_('error while obtaining PortGroup'))
raise Exception('portGroup_get_all exception')
finally:
__cleanup_session(session)
def port_group_delete_by_ids(context, ids):
"""This API will delete port_group objects which corresponds to ids
Parameters:
ids - List of port group ids
context - nova.context.RequestContext object (optional parameter)
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
portGroups = port_group_get_by_ids(context, ids)
for portGroup in portGroups:
epoch_time = get_current_epoch_ms()
portGroup.set_deletedEpoch(epoch_time)
portGroup.set_deleted(True)
__save_and_expunge(session, portGroup)
except Exception:
LOG.exception(_('error while deleting the PortGroup'))
raise
finally:
__cleanup_session(session)
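Note that `port_group_delete_by_ids` never removes rows: it soft-deletes by setting a `deleted` flag and a `deletedEpoch` timestamp, so `changes-since` queries can still report the deletion. A minimal stand-alone sketch of that pattern (plain objects stand in for the ORM models; epochs are milliseconds, as in `get_current_epoch_ms`):

```python
import time

def get_current_epoch_ms():
    # Same convention as healthnmon.utils.get_current_epoch_ms
    return int(time.time() * 1000)

class PortGroupRecord(object):
    def __init__(self, pg_id):
        self.id = pg_id
        self.deleted = False
        self.deletedEpoch = None

def soft_delete(records, ids):
    """Mark matching records deleted instead of removing them."""
    by_id = {r.id: r for r in records}
    for pg_id in ids:
        record = by_id.get(pg_id)
        if record is not None:
            record.deletedEpoch = get_current_epoch_ms()
            record.deleted = True

rows = [PortGroupRecord('pg-1'), PortGroupRecord('pg-2')]
soft_delete(rows, ['pg-1'])
print([r.id for r in rows if not r.deleted])   # ['pg-2']
```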

@@ -1,209 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
from healthnmon.db.sqlalchemy.util import _create_filtered_ordered_query, \
__save_and_expunge, __cleanup_session
LOG = logging.getLogger(__name__)
def storage_volume_save(context, storagevolume):
"""This API will create or update a StorageVolume object and its
associations to DB. For the update to be working the VMHost object
should have been one returned by DB API. Else it will be considered
as a insert.
Parameters:
storagevolume - StorageVolume type object to be saved
context - nova.context.RequestContext object
"""
if storagevolume is None:
return
session = None
try:
session = nova_session.get_session()
epoch_time = get_current_epoch_ms()
storagevolumes = storage_volume_get_by_ids(context,
[storagevolume.id])
if storagevolumes:
storagevolume.set_createEpoch(storagevolumes[0].get_createEpoch())
storagevolume.set_lastModifiedEpoch(epoch_time)
else:
storagevolume.set_createEpoch(epoch_time)
__save_and_expunge(session, storagevolume)
except Exception:
LOG.exception(_('error while adding/updating StorageVolume'))
raise
finally:
__cleanup_session(session)
def storage_volume_get_by_ids(context, ids):
"""This API will return a list of StorageVolume
objects which correspond to the given ids
Parameters:
ids - List of StorageVolume ids
context - nova.context.RequestContext object
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
storagevolumes = \
session.query(
StorageVolume).filter(
and_(StorageVolume.id.in_(ids),
or_(StorageVolume.deleted == False,
StorageVolume.deleted == None))).\
options(joinedload('mountPoints')).all()
return storagevolumes
except Exception:
LOG.exception(_('error while obtaining StorageVolume'))
raise Exception('StorageVolume_get_by_id exception')
finally:
__cleanup_session(session)
def storage_volume_get_all(context):
"""This API will return a list of all the StorageVolume
objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
session = None
try:
session = nova_session.get_session()
storagevolumes = \
session.query(
StorageVolume).filter(
or_(StorageVolume.deleted == False,
StorageVolume.deleted == None)) \
.options(joinedload('mountPoints')).all()
return storagevolumes
except Exception:
LOG.exception(_('error while obtaining StorageVolume'))
raise Exception('StorageVolume_get_all exception')
finally:
__cleanup_session(session)
def __delete_vm_storage_association(storage, context):
vmDisks = storage.vmDisks
if (vmDisks is not None) and (len(vmDisks) > 0):
del vmDisks[:]
def __delete_host_storage_association(storage, context):
try:
hostMounts = storage.get_mountPoints()
if len(hostMounts) > 0:
del hostMounts[:]
except Exception:
LOG.exception('Error while removing association between vmHost \
and storageVolume')
raise
def storage_volume_delete_by_ids(context, ids):
"""This API will delete StorageVolume objects which corresponds to ids
Parameters:
ids - List of StorageVolume ids
context - nova.context.RequestContext object (optional parameter)
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
storageVolumes = session.query(StorageVolume).\
filter(StorageVolume.id.in_(ids)).\
filter(or_(StorageVolume.deleted == False,
StorageVolume.deleted == None)).\
options(joinedload('mountPoints')).\
options(joinedload('vmDisks')).\
all()
for storageVolume in storageVolumes:
__delete_host_storage_association(storageVolume, context)
__delete_vm_storage_association(storageVolume, context)
epoch_time = get_current_epoch_ms()
storageVolume.set_deletedEpoch(epoch_time)
storageVolume.set_deleted(True)
__save_and_expunge(session, storageVolume)
except Exception:
LOG.exception(_('error while deleting storage volume'))
raise
finally:
__cleanup_session(session)
def storage_volume_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the storage volumes that match all filters,
sorted by sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of StorageVolume model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'size':1024, 'name':['vol1', 'vol2']}
will filter as
size = 1024 AND name in ('vol1', 'vol2')
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction, desc
for descending sort direction
Returns:
list of storage volumes that match all filters
and sorted with sort_key
"""
session = None
try:
session = nova_session.get_session()
filtered_query = _create_filtered_ordered_query(session, StorageVolume,
filters=filters,
sort_key=sort_key,
sort_dir=sort_dir)
storagevolumes = filtered_query.options(
joinedload('mountPoints')).all()
return storagevolumes
except Exception:
LOG.exception(_('Error in storage_volume_get_all_by_filters'))
raise
finally:
__cleanup_session(session)

@@ -1,306 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
from healthnmon.db.sqlalchemy.util import _create_filtered_ordered_query, \
__save_and_expunge, __cleanup_session
LOG = logging.getLogger(__name__)
def _get_deleted_obj(inv_obj_id_list, db_obj_list, epoch_time):
to_be_deleted_obj = []
for db_obj in db_obj_list:
if db_obj.get_id() not in inv_obj_id_list:
db_obj.set_deletedEpoch(epoch_time)
db_obj.set_deleted(True)
to_be_deleted_obj.append(db_obj)
return to_be_deleted_obj
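`_get_deleted_obj` reconciles the latest inventory against the DB: any object present in the DB but missing from the inventory id list gets marked deleted. A self-contained sketch using dicts in place of the ORM objects (an assumption for illustration):

```python
def get_deleted_obj(inv_obj_id_list, db_obj_list, epoch_time):
    """Return db objects absent from the inventory, marked deleted."""
    to_be_deleted = []
    for db_obj in db_obj_list:
        if db_obj['id'] not in inv_obj_id_list:
            db_obj['deletedEpoch'] = epoch_time
            db_obj['deleted'] = True
            to_be_deleted.append(db_obj)
    return to_be_deleted

db_rows = [{'id': 'ip-1'}, {'id': 'ip-2'}, {'id': 'ip-3'}]
stale = get_deleted_obj(['ip-1', 'ip-3'], db_rows, epoch_time=1700000000000)
print([row['id'] for row in stale])   # ['ip-2']
```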
def subnet_save(context, subnet):
"""This API will create or update a Subnet object and
its associations to DB. For the update to be working the
subnet object should have been one returned by DB API.
Else it will be considered as a insert.
Parameters:
subnet - port group object to be saved
context - nova.context.RequestContext object (optional parameter)
"""
if subnet is None:
return
session = None
try:
session = nova_session.get_session()
epoch_time = get_current_epoch_ms()
subnets = subnet_get_by_ids(context, [subnet.id])
if subnets:
subnet.set_createEpoch(subnets[0].get_createEpoch())
subnet.set_lastModifiedEpoch(epoch_time)
# set for ipaddress
ipaddress_existing = subnets[0].get_usedIpAddresses()
ipadd_dict = {}
for ipaddress in ipaddress_existing:
ipadd_dict[ipaddress.get_id()] = ipaddress.get_createEpoch()
usedipaddress = subnet.get_usedIpAddresses()
usedip_id_list = []
for usedip in usedipaddress:
usedip_id_list.append(usedip.get_id())
if usedip.get_id() in ipadd_dict:
usedip.set_createEpoch(ipadd_dict[usedip.get_id()])
usedip.set_lastModifiedEpoch(epoch_time)
else:
usedip.set_createEpoch(epoch_time)
# set for ipaddressRange
ipaddress_range_existing = subnets[0].get_ipAddressRanges()
ipaddr_dict = {}
for ipaddress_r in ipaddress_range_existing:
ipaddr_dict[ipaddress_r.get_id()] = ipaddress_r.\
get_createEpoch()
ipaddress_range = subnet.get_ipAddressRanges()
ip_range_id_list = []
for ip_range in ipaddress_range:
ip_range_id_list.append(ip_range.get_id())
if ip_range.get_id() in ipaddr_dict:
ip_range.set_createEpoch(ipaddr_dict[ip_range.get_id()])
ip_range.set_lastModifiedEpoch(epoch_time)
else:
ip_range.set_createEpoch(epoch_time)
# If any ipAddress is present in db but missing from the new
# subnet, update its deletedEpoch and mark it as deleted.
# Get the deleted ipAddresses and ipAddressRanges and set the
# deleted flag and deletedEpoch value.
deleted_ipAddress = _get_deleted_obj(
usedip_id_list, ipaddress_existing, epoch_time)
deleted_ipRanges = _get_deleted_obj(
ip_range_id_list, ipaddress_range_existing, epoch_time)
for deleted_ip in deleted_ipAddress:
subnet.add_usedIpAddresses(deleted_ip)
for deleted_ipRange in deleted_ipRanges:
subnet.add_ipAddressRanges(deleted_ipRange)
else:
subnet.set_createEpoch(epoch_time)
for ipaddr in subnet.get_usedIpAddresses():
ipaddr.set_createEpoch(epoch_time)
for ipadd_r in subnet.get_ipAddressRanges():
ipadd_r.set_createEpoch(epoch_time)
__save_and_expunge(session, subnet)
except Exception:
LOG.exception(_('error while adding/updating Subnet'))
raise
finally:
__cleanup_session(session)
def subnet_get_by_ids(context, ids):
"""This API will return a list of subnet objects which corresponds to ids
Parameters:
ids - List of subnet ids
context - nova.context.RequestContext object (optional parameter)
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
subnets = session.query(Subnet).filter(
and_(Subnet.id.in_(ids),
or_(Subnet.deleted == False,
Subnet.deleted == None))).\
options(joinedload_all('groupIdTypes.networkType')).\
options(joinedload('resourceTags')).\
options(joinedload_all('ipAddressRanges.startAddress')).\
options(joinedload_all('ipAddressRanges.endAddress')).\
options(joinedload('usedIpAddresses')).\
options(joinedload('parents')).\
options(joinedload('networkSrc')).\
options(joinedload('dnsServer')).\
options(joinedload('dnsSuffixes')).\
options(joinedload('defaultGateway')).\
options(joinedload('winsServer')).\
options(joinedload('ntpDateServer')).\
options(joinedload('deploymentService')).\
options(joinedload('childs')).\
options(joinedload('redundancyPeer')).all()
return subnets
except Exception:
LOG.exception(_('error while obtaining Subnets'))
raise
finally:
__cleanup_session(session)
def subnet_get_all(context):
"""This API will return a list of all the Subnet objects present in DB
Parameters:
context - nova.context.RequestContext object (optional parameter)
"""
session = None
try:
session = nova_session.get_session()
subnets = session.query(Subnet).filter(
or_(Subnet.deleted == False,
Subnet.deleted == None)).\
options(joinedload_all('groupIdTypes.networkType')).\
options(joinedload('resourceTags')).\
options(joinedload_all('ipAddressRanges.startAddress')).\
options(joinedload_all('ipAddressRanges.endAddress')).\
options(joinedload('usedIpAddresses')).\
options(joinedload('parents')).\
options(joinedload('networkSrc')).\
options(joinedload('dnsServer')).\
options(joinedload('dnsSuffixes')).\
options(joinedload('defaultGateway')).\
options(joinedload('winsServer')).\
options(joinedload('ntpDateServer')).\
options(joinedload('deploymentService')).\
options(joinedload('childs')).\
options(joinedload('redundancyPeer')).all()
return subnets
except Exception:
LOG.exception(_('error while obtaining Subnets'))
raise
finally:
__cleanup_session(session)
def __delete_vSwitch_subnet_association(session, subnetId):
vSwitches = session.query(
VirtualSwitch, VirtualSwitchSubnetIds).filter(
and_(VirtualSwitchSubnetIds.subnetId == subnetId,
or_(VirtualSwitch.deleted == False,
VirtualSwitch.deleted == None))).\
options(joinedload_all('subnets')).all()
if len(vSwitches) > 0:
subnetList = []
for vSwitchType in vSwitches:
vSwitch = vSwitchType[0]
subnets = vSwitch.get_subnetIds()
for subnet in subnets:
if not subnet == subnetId:
subnetList.append(subnet)
vSwitch.set_subnetIds([])
__save_and_expunge(session, vSwitch)
if len(subnetList) > 0:
vSwitch.set_subnetIds(subnetList)
__save_and_expunge(session, vSwitch)
def subnet_delete_by_ids(context, ids):
"""This API will delete Subnets objects which corresponds to ids
Parameters:
ids - List of subnets ids
context - nova.context.RequestContext object (optional parameter)
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
subnets = subnet_get_by_ids(context, ids)
epoch_time = get_current_epoch_ms()
for subnet in subnets:
__delete_vSwitch_subnet_association(session, subnet.id)
subnet.set_deletedEpoch(epoch_time)
subnet.set_deleted(True)
usedIpAddresses = subnet.get_usedIpAddresses()
for usedIp in usedIpAddresses:
usedIp.set_deletedEpoch(epoch_time)
usedIp.set_deleted(True)
for ip_range in subnet.get_ipAddressRanges():
ip_range.set_deletedEpoch(epoch_time)
ip_range.set_deleted(True)
__save_and_expunge(session, subnet)
except Exception:
LOG.exception(_('error while deleting Subnet'))
raise
finally:
__cleanup_session(session)
def subnet_get_all_by_filters(context, filters, sort_key, sort_dir):
"""
Get all the subnets that match all filters, sorted by sort_key.
Deleted rows will be returned by default,
unless there's a filter that says
otherwise
Arguments:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of Subnet model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'isPublic':True, 'name':['n1', 'n2']}
will filter as
isPublic = True AND name in ('n1', 'n2')
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of subnet that match all filters and sorted with sort_key
"""
session = None
try:
session = nova_session.get_session()
filtered_query = _create_filtered_ordered_query(session,
Subnet,
filters=filters,
sort_key=sort_key,
sort_dir=sort_dir)
subnets = filtered_query.\
options(joinedload_all('groupIdTypes.networkType')).\
options(joinedload('resourceTags')).\
options(joinedload_all('ipAddressRanges.startAddress')).\
options(joinedload_all('ipAddressRanges.endAddress')).\
options(joinedload('usedIpAddresses')).\
options(joinedload('parents')).\
options(joinedload('networkSrc')).\
options(joinedload('dnsServer')).\
options(joinedload('dnsSuffixes')).\
options(joinedload('defaultGateway')).\
options(joinedload('winsServer')).\
options(joinedload('ntpDateServer')).\
options(joinedload('deploymentService')).\
options(joinedload('childs')).\
options(joinedload('redundancyPeer')).all()
return subnets
except Exception:
LOG.exception(_('Error while obtaining Subnets'))
raise
finally:
__cleanup_session(session)

@@ -1,173 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
LOG = logging.getLogger(__name__)
def _create_filtered_ordered_query(session, *models, **kwargs):
"""
Create a query on the model objects which is sorted and filtered
Arguments:
session - Sqlalchemy Session
models - Model classes to be queried.
Filtering and ordering will be applied to the
first model class in list
Keyword Arguments:
filters - dictionary of filters to be applied
sort_key - Column on which sorting is to be applied
sort_dir - asc for Ascending sort direction, desc for descending
sort direction
Returns:
sqlalchemy.orm.query.Query object with all the filters and ordering
"""
# Extract kwargs
sort_key = kwargs.pop('sort_key', None)
sort_dir = kwargs.pop('sort_dir', None)
filters = kwargs.pop('filters', None)
# Create query
query = session.query(*models)
# Apply filters
primary_model = models[0]
if filters is not None:
# Make a copy of the filters dictionary to use going forward, as we'll
# be modifying it and we shouldn't affect the caller's use of it.
filters = filters.copy()
if 'changes-since' in filters:
try:
changes_since_val = filters.pop('changes-since')
changes_since = long(changes_since_val)
lastModifiedEpoch_col = getattr(primary_model,
'lastModifiedEpoch')
deletedEpoch_col = getattr(primary_model, 'deletedEpoch')
createEpoch_col = getattr(primary_model, 'createEpoch')
changes_since_filter = or_(
lastModifiedEpoch_col > changes_since,
deletedEpoch_col > changes_since,
createEpoch_col > changes_since,)
query = query.filter(changes_since_filter)
except ValueError:
LOG.warn(
_('Invalid value for changes-since filter : '
+ str(changes_since_val)),
exc_info=True)
except AttributeError:
LOG.warn(
_('Cannot apply changes-since filter to model : '
+ str(primary_model)),
exc_info=True)
if 'deleted' in filters:
try:
deleted_val = filters.pop('deleted')
deleted_col = getattr(primary_model, 'deleted')
if deleted_val == 'true':
query = query.filter(deleted_col == True)
else:
not_deleted_filter = or_(deleted_col == False,
deleted_col == None)
query = query.filter(not_deleted_filter)
except AttributeError:
LOG.warn(
_('Cannot apply deleted filter to model : '
+ str(primary_model)),
exc_info=True)
# Apply other filters
filter_dict = {}
for key in filters.keys():
value = filters.pop(key)
try:
column_attr = getattr(primary_model, key)
except AttributeError:
LOG.warn(
_('Cannot apply ' + str(key) + ' filter to model : '
+ str(primary_model)),
exc_info=True)
continue
if primary_model.get_all_members()[key].container == 1:
# Its a list type attribute. So use contains
if isinstance(value, (list, tuple, set, frozenset)):
# Use the filter column_attr contains value[0] OR
# column_attr contains value[1] ...
or_clauses = []
for each_value in value:
clause = column_attr.contains(each_value)
or_clauses.append(clause)
query = query.filter(or_(*or_clauses))
else:
query = query.filter(column_attr.contains(value))
elif isinstance(value, (list, tuple, set, frozenset)):
# Looking for values in a list; apply to query directly
query = query.filter(column_attr.in_(value))
else:
# OK, simple exact match; save for later
filter_dict[key] = value
if len(filter_dict) > 0:
query = query.filter_by(**filter_dict)
# Apply sorting
if sort_key is not None:
try:
sort_col = getattr(primary_model, sort_key)
if sort_dir == constants.DbConstants.ORDER_DESC:
sort_fn = desc
else:
# Default sort asc
sort_fn = asc
query = query.order_by(sort_fn(sort_col))
except AttributeError:
LOG.warn(
_('Cannot apply sorting as model '
+ str(
primary_model) + ' does not have field ' + str(sort_key)),
exc_info=True)
return query
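The query builder above supports three filter shapes: a `changes-since` epoch cut-off (matching any of the three epoch columns), a `deleted` true/false switch, and plain fields where a list value becomes SQL `IN` and a scalar becomes an exact match. The same semantics sketched over plain dicts instead of a SQLAlchemy query (a simplified stand-in, not the real implementation):

```python
def matches(row, filters):
    """Apply the filter semantics of _create_filtered_ordered_query
    to a plain dict row."""
    for key, value in filters.items():
        if key == 'changes-since':
            cutoff = int(value)
            if not (row.get('lastModifiedEpoch', 0) > cutoff or
                    row.get('deletedEpoch', 0) > cutoff or
                    row.get('createEpoch', 0) > cutoff):
                return False
        elif key == 'deleted':
            # 'true' selects deleted rows; anything else selects live ones
            if (value == 'true') != bool(row.get('deleted')):
                return False
        elif isinstance(value, (list, tuple, set, frozenset)):
            if row.get(key) not in value:      # IN-style filter
                return False
        else:
            if row.get(key) != value:          # exact match
                return False
    return True

rows = [
    {'name': 'vol1', 'size': 1024, 'createEpoch': 200, 'deleted': False},
    {'name': 'vol2', 'size': 2048, 'createEpoch': 50, 'deleted': True},
]
hits = [r['name'] for r in rows
        if matches(r, {'changes-since': 100, 'deleted': 'false'})]
print(hits)   # ['vol1']
```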
def __save_and_expunge(session, obj):
"""Save a ORM object to db and expunge the object from session
Parameters:
session - Sqlalchemy Session
obj - ORM object to be saved
"""
session.merge(obj)
session.flush()
session.expunge_all()
def __cleanup_session(session):
"""Clean up session
Parameters:
session - Sqlalchemy Session
"""
if session is not None:
session.flush()
session.expunge_all()
session.close()

@@ -1,237 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
from healthnmon.db.sqlalchemy.util import _create_filtered_ordered_query, \
__save_and_expunge, __cleanup_session
from healthnmon.db.sqlalchemy import portgroup_api
LOG = logging.getLogger(__name__)
def virtual_switch_save(context, virtual_switch):
"""This API will create or update a VirtualSwitch object
    and its associations in the DB. For an update to work,
    the virtual_switch object should be one returned by the DB API;
    otherwise it is treated as an insert.
Parameters:
virtual_switch - network type object to be saved
context - nova.context.RequestContext object (optional parameter)
"""
if virtual_switch is None:
return
session = None
try:
session = nova_session.get_session()
epoch_time = get_current_epoch_ms()
virtual_switches = virtual_switch_get_by_ids(context,
[virtual_switch.id])
if virtual_switches:
            # Preserve the extracted createEpoch of the existing
            # portGroups and stamp the new epoch as lastModifiedEpoch
pGroupDict = {}
for virtualswitch in virtual_switches:
pgs = virtualswitch.get_portGroups()
for pg in pgs:
pGroupDict[pg.get_id()] = pg.get_createEpoch()
virtual_switch.set_createEpoch(
virtual_switches[0].get_createEpoch())
virtual_switch.set_lastModifiedEpoch(epoch_time)
portGroups = virtual_switch.get_portGroups()
for portGroup in portGroups:
portId = portGroup.get_id()
if portId in pGroupDict:
portGroup.set_createEpoch(pGroupDict[portId])
portGroup.set_lastModifiedEpoch(epoch_time)
else:
portGroup.set_createEpoch(epoch_time)
else:
virtual_switch.set_createEpoch(epoch_time)
for portGroup in virtual_switch.get_portGroups():
portGroup.set_createEpoch(epoch_time)
__save_and_expunge(session, virtual_switch)
except Exception:
LOG.exception(_('error while adding/updating VirtualSwitch'))
raise
finally:
__cleanup_session(session)
def virtual_switch_get_by_ids(context, ids):
"""This API will return a list of virtual switch objects
    which correspond to the given ids
Parameters:
ids - List of virtual switch ids
context - nova.context.RequestContext object (optional parameter)
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
virtualswitches = \
session.query(VirtualSwitch).filter(
and_(VirtualSwitch.id.in_(ids),
or_(VirtualSwitch.deleted == False,
VirtualSwitch.deleted == None))).\
options(joinedload('cost')).\
options(joinedload_all('portGroups.cost')).\
options(joinedload('networks')).\
options(joinedload('subnets')).all()
return virtualswitches
except Exception:
LOG.exception(_('error while obtaining VirtualSwitch'))
raise
finally:
__cleanup_session(session)
def virtual_switch_get_all(context):
"""This API will return a list of all the
    virtual switch objects present in the DB
Parameters:
context - nova.context.RequestContext object (optional parameter)
"""
session = None
try:
session = nova_session.get_session()
virtualswitches = \
session.query(VirtualSwitch).filter(
or_(VirtualSwitch.deleted == False,
VirtualSwitch.deleted == None)).\
options(joinedload('cost')).\
options(joinedload_all('portGroups.cost')).\
options(joinedload('networks')).\
options(joinedload('subnets')).all()
return virtualswitches
except Exception:
LOG.exception(_('error while obtaining VirtualSwitch'))
raise
finally:
__cleanup_session(session)
def virtual_switch_delete_by_ids(context, ids):
    """This API will delete virtual switch objects which correspond to the given ids
Parameters:
ids - List of virtual switch ids
context - nova.context.RequestContext object (optional parameter)
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
vSwitches = virtual_switch_get_by_ids(context, ids)
portGroupIds = \
session.query(
PortGroup.id).filter(and_(PortGroup.virtualSwitchId.in_(ids),
or_(PortGroup.deleted == False,
PortGroup.deleted == None))).all()
pgIds = []
for portGroupId in portGroupIds:
pg_tuple = portGroupId[0]
pgIds.append(pg_tuple)
portgroup_api.port_group_delete_by_ids(context, pgIds)
for vSwitch in vSwitches:
epoch_time = get_current_epoch_ms()
vSwitch.set_deletedEpoch(epoch_time)
vSwitch.set_deleted(True)
__save_and_expunge(session, vSwitch)
except Exception:
LOG.exception(_('error while deleting the VirtualSwitch'))
raise
finally:
__cleanup_session(session)
def _load_deleted_switches(session, vswitches):
for vs in vswitches:
vs_id = vs.get_id()
deleted_vs_pg = session.query(PortGroup).\
filter(and_(PortGroup.deleted == True,
PortGroup.virtualSwitchId == vs_id)).\
options(joinedload('cost')).all()
for pg in deleted_vs_pg:
vs.add_portGroups(pg)
def virtual_switch_get_all_by_filters(context, filters, sort_key, sort_dir):
    """
    Get all the virtual switches that match all filters,
    sorted by sort_key.
    Deleted rows will be returned by default,
    unless there's a filter that says otherwise.
    Arguments:
        context - nova.context.RequestContext object
        filters - dictionary of filters to be applied
        keys should be fields of the VirtualSwitch model
        if the value is a scalar, an '=' filter is applied;
        if the value is a list or tuple, an 'IN' filter is applied
        eg : {'switchType': 'abc', 'name': ['n1', 'n2']}
        will filter as
        switchType = 'abc' AND name IN ('n1', 'n2')
        sort_key - Column on which sorting is to be applied
        sort_dir - asc for ascending sort direction, desc for
        descending sort direction
    Returns:
        list of virtual switches that match all filters,
        sorted by sort_key
    """
session = None
try:
session = nova_session.get_session()
filtered_query = _create_filtered_ordered_query(session, VirtualSwitch,
filters=filters,
sort_key=sort_key,
sort_dir=sort_dir)
virtualswitches = filtered_query.options(
joinedload_all('portGroups.cost')).\
options(joinedload('cost')).\
options(joinedload('networks')).\
options(joinedload('subnets')).all()
if filters is not None and 'deleted' in filters:
vs_filters = filters.copy()
deleted_val = vs_filters.pop('deleted')
if deleted_val and deleted_val == 'true':
_load_deleted_switches(session, virtualswitches)
return virtualswitches
except Exception:
LOG.exception(_('Error while obtaining VirtualSwitch'))
raise
finally:
__cleanup_session(session)


@ -1,213 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
from healthnmon.db.sqlalchemy.util import _create_filtered_ordered_query, \
__save_and_expunge, __cleanup_session
LOG = logging.getLogger(__name__)
def vm_save(context, vm):
"""This API will create or update a Vm object and its associations to DB.
    For an update to work, the Vm object should be one returned
    by the DB API; otherwise it is treated as an insert.
Parameters:
vm - Vm type object to be saved
context - nova.context.RequestContext object
"""
if vm is None:
return
session = None
try:
session = nova_session.get_session()
epoch_time = get_current_epoch_ms()
vms = vm_get_by_ids(context, [vm.id])
if vms is not None and len(vms) > 0:
vm.set_createEpoch(vms[0].get_createEpoch())
vm.set_lastModifiedEpoch(epoch_time)
vmGlobalSettings = vm.get_vmGlobalSettings()
if vmGlobalSettings is not None:
if vms[0].get_vmGlobalSettings() is not None:
vmGlobalSettings.set_createEpoch(
vms[0].get_vmGlobalSettings().get_createEpoch())
vmGlobalSettings.set_lastModifiedEpoch(epoch_time)
else:
vmGlobalSettings.set_createEpoch(epoch_time)
else:
vm.set_createEpoch(epoch_time)
vmGlobalSettings = vm.get_vmGlobalSettings()
if vmGlobalSettings is not None:
vmGlobalSettings.set_createEpoch(epoch_time)
__save_and_expunge(session, vm)
except Exception:
LOG.exception(_('Error while saving vm'))
raise
finally:
__cleanup_session(session)
def vm_get_by_ids(context, ids):
    """This API will return a list of Vm objects which correspond to the given ids
    Parameters:
        ids - List of Vm ids
context - nova.context.RequestContext object
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
vms = session.query(Vm).filter(
and_(Vm.id.in_(ids),
or_(Vm.deleted == False,
Vm.deleted == None))).\
options(joinedload('cost')).\
options(joinedload('os')).\
options(joinedload('ipAddresses')).\
options(joinedload_all('vmNetAdapters', 'ipAdd')).\
options(joinedload('vmScsiControllers')).\
options(joinedload('vmDisks')).\
options(joinedload_all('vmGenericDevices', 'properties')).\
options(joinedload('vmGlobalSettings')).\
options(joinedload('cpuResourceAllocation')).\
options(joinedload('memoryResourceAllocation')).all()
return vms
except Exception:
LOG.exception(_('error while obtaining Vm'))
raise
finally:
__cleanup_session(session)
def vm_get_all(context):
"""This API will return a list of all the Vm objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
session = None
try:
session = nova_session.get_session()
vms = session.query(Vm).filter(or_(Vm.deleted == False,
Vm.deleted == None)).\
options(joinedload('cost')).\
options(joinedload('os')).\
options(joinedload('ipAddresses')).\
options(joinedload_all('vmNetAdapters', 'ipAdd')).\
options(joinedload('vmScsiControllers')).\
options(joinedload('vmDisks')).\
options(joinedload_all('vmGenericDevices', 'properties')).\
options(joinedload('vmGlobalSettings')).\
options(joinedload('cpuResourceAllocation')).\
options(joinedload('memoryResourceAllocation')).all()
return vms
except Exception:
LOG.exception(_('error while obtaining Vm'))
raise
finally:
__cleanup_session(session)
def vm_delete_by_ids(context, ids):
    """This API will delete Vm objects which correspond to the given ids
    Parameters:
        ids - List of Vm ids
context - nova.context.RequestContext object
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
vms = vm_get_by_ids(context, ids)
for vm in vms:
epoch_time = get_current_epoch_ms()
vm.set_deletedEpoch(epoch_time)
vm.set_deleted(True)
vmGlobalSettings = vm.get_vmGlobalSettings()
if vmGlobalSettings is not None:
vmGlobalSettings.set_deletedEpoch(epoch_time)
vmGlobalSettings.set_deleted(True)
__save_and_expunge(session, vm)
except Exception:
LOG.exception(_('error while deleting vm'))
raise
finally:
__cleanup_session(session)
def vm_get_all_by_filters(context, filters, sort_key, sort_dir):
    """
    Get all the vms that match all filters, sorted by sort_key.
    Deleted rows will be returned by default, unless there's
    a filter that says otherwise.
    Arguments:
        context - nova.context.RequestContext object
        filters - dictionary of filters to be applied
        keys should be fields of the Vm model
        if the value is a scalar, an '=' filter is applied;
        if the value is a list or tuple, an 'IN' filter is applied
        eg : {'powerState': 'ACTIVE', 'name': ['n1', 'n2']}
        will filter as
        powerState = 'ACTIVE' AND name IN ('n1', 'n2')
        sort_key - Column on which sorting is to be applied
        sort_dir - asc for ascending sort direction, desc for
        descending sort direction
    Returns:
        list of vms that match all filters, sorted by sort_key
    """
session = None
try:
session = nova_session.get_session()
filtered_query = _create_filtered_ordered_query(session, Vm,
filters=filters,
sort_key=sort_key,
sort_dir=sort_dir)
vms = filtered_query.\
options(joinedload('cost')).\
options(joinedload('os')).\
options(joinedload('ipAddresses')).\
options(joinedload_all('vmNetAdapters', 'ipAdd')).\
options(joinedload('vmScsiControllers')).\
options(joinedload('vmDisks')).\
options(joinedload_all('vmGenericDevices', 'properties')).\
options(joinedload('vmGlobalSettings')).\
options(joinedload('cpuResourceAllocation')).\
options(joinedload('memoryResourceAllocation')).all()
return vms
except Exception:
LOG.exception(_('Error while obtaining Vm'))
raise
finally:
__cleanup_session(session)


@ -1,443 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.orm import joinedload, joinedload_all
from sqlalchemy import and_, or_
from sqlalchemy.sql.expression import asc
from sqlalchemy.sql.expression import desc
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, \
Vm, StorageVolume, HostMountPoint, VirtualSwitch, PortGroup, \
Subnet
from healthnmon.db.sqlalchemy.mapper import VirtualSwitchSubnetIds
from nova.openstack.common.db.sqlalchemy import session as nova_session
from nova.db.sqlalchemy import api as context_api
from healthnmon import log as logging
from healthnmon import constants
from healthnmon.utils import get_current_epoch_ms
from healthnmon.db.sqlalchemy.util import _create_filtered_ordered_query, \
__save_and_expunge, __cleanup_session
from healthnmon.db.sqlalchemy import virtualswitch_api, vm_api,\
storagevolume_api
LOG = logging.getLogger(__name__)
def _get_deleted_vSwitches(inv_switch_id_list, db_Switches, epoch_time):
to_be_deleted_switches = []
for old_switch in db_Switches:
if old_switch.get_id() not in inv_switch_id_list:
old_switch.set_deletedEpoch(epoch_time)
old_switch.set_deleted(True)
to_be_deleted_switches.append(old_switch)
return to_be_deleted_switches
def _get_deleted_portgroups(inv_pgroup_id_list, db_pgroups,
epoch_time, res_id):
to_be_deleted_pgroups = []
for old_pgroup in db_pgroups:
if old_pgroup.get_id() not in inv_pgroup_id_list:
if res_id == old_pgroup.get_virtualSwitchId():
old_pgroup.set_deletedEpoch(epoch_time)
old_pgroup.set_deleted(True)
to_be_deleted_pgroups.append(old_pgroup)
return to_be_deleted_pgroups
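Both helpers above implement the same reconciliation step: any DB row whose id is absent from the freshly inventoried id list is stamped with a deletedEpoch and flagged deleted. A hypothetical dict-based sketch of that pattern (`mark_missing_deleted` is an invented name, not healthnmon API):

```python
# Stand-alone sketch of the reconciliation pattern used by
# _get_deleted_vSwitches / _get_deleted_portgroups: rows missing from
# the inventoried id list are flagged deleted with a deletedEpoch.
def mark_missing_deleted(inventory_ids, db_rows, epoch_time):
    to_be_deleted = []
    for row in db_rows:
        if row['id'] not in inventory_ids:
            row['deleted'] = True
            row['deletedEpoch'] = epoch_time
            to_be_deleted.append(row)
    return to_be_deleted

rows = [{'id': 'vs1'}, {'id': 'vs2'}]
gone = mark_missing_deleted(['vs1'], rows, 1700000000000)
```

Here `vs2` is no longer in the inventory, so only it ends up in the deleted list.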
def __vm_host_set_virtualMachineIds(vmhosts, vmIdsRes):
vmIdDict = {}
for row in vmIdsRes:
hostId = row[0]
vmId = row[1]
if hostId not in vmIdDict:
vmIdDict[hostId] = []
vmIdDict[hostId].append(vmId)
for vmhost in vmhosts:
if vmhost.get_id() in vmIdDict:
vmhost.set_virtualMachineIds(vmIdDict.get(vmhost.get_id()))
def __vm_host_set_storageVolumeIds(vmhosts, volIdsRes):
volIdDict = {}
for row in volIdsRes:
hostId = row[0]
volId = row[1]
if hostId not in volIdDict:
volIdDict[hostId] = []
volIdDict[hostId].append(volId)
for vmhost in vmhosts:
if vmhost.get_id() in volIdDict:
vmhost.set_storageVolumeIds(volIdDict.get(vmhost.get_id()))
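The two `__vm_host_set_*` helpers above group the `(hostId, childId)` tuples returned by the query into per-host id lists before attaching them to the host objects. The same grouping can be sketched with a `defaultdict` (a simplified illustration, not the original code):

```python
from collections import defaultdict

# Sketch of the grouping performed by __vm_host_set_virtualMachineIds
# and __vm_host_set_storageVolumeIds: (hostId, childId) tuples from the
# query become per-host id lists.
def group_by_host(rows):
    grouped = defaultdict(list)
    for host_id, child_id in rows:
        grouped[host_id].append(child_id)
    return dict(grouped)

pairs = [('h1', 'vm1'), ('h1', 'vm2'), ('h2', 'vm3')]
print(group_by_host(pairs))
```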
def vm_host_save(context, vmhost):
"""This API will create or update a VmHost object and its
    associations in the DB. For an update to work, the VmHost
    object should be one returned by the DB API; otherwise it is
    treated as an insert.
Parameters:
vmhost - VmHost type object to be saved
context - nova.context.RequestContext object
"""
if vmhost is None:
return
session = None
try:
session = nova_session.get_session()
epoch_time = get_current_epoch_ms()
vmhosts = vm_host_get_by_ids(context, [vmhost.id])
deleted_host_portgroups = []
if vmhosts:
vmhost.set_createEpoch(vmhosts[0].get_createEpoch())
vmhost.set_lastModifiedEpoch(epoch_time)
existingVSwitches = vmhosts[0].get_virtualSwitches()
            # Dict to store switch createEpoch keyed by switch id
switchDict_Epoch = {}
for existingVSwitch in existingVSwitches:
switchDict_Epoch[existingVSwitch.get_id()] = \
existingVSwitch.get_createEpoch()
existing_host_portgroups = vmhosts[0].get_portGroups()
pGroupDict = {}
for existingPortGroup in existing_host_portgroups:
pGroupDict[existingPortGroup.get_id()] = \
existingPortGroup.get_createEpoch()
vSwitches = vmhost.get_virtualSwitches()
newSwitchList = []
existing_switch_PortGroups = []
for vSwitch in vSwitches:
switchId = vSwitch.get_id()
db_switch = \
virtualswitch_api.virtual_switch_get_by_ids(context,
[switchId])
if len(db_switch) > 0:
existing_switch_PortGroups = db_switch[0].get_portGroups()
newSwitchList.append(switchId)
if switchId in switchDict_Epoch:
vSwitch.set_createEpoch(switchDict_Epoch[switchId])
vSwitch.set_lastModifiedEpoch(epoch_time)
else:
vSwitch.set_createEpoch(epoch_time)
vs_portGroups = vSwitch.get_portGroups()
vs_newportgroupList = []
for vs_portGroup in vs_portGroups:
portId = vs_portGroup.get_id()
vs_newportgroupList.append(portId)
vs_portGroup.set_virtualSwitchId(switchId)
if portId in pGroupDict:
vs_portGroup.set_createEpoch(pGroupDict[portId])
vs_portGroup.set_lastModifiedEpoch(epoch_time)
else:
vs_portGroup.set_createEpoch(epoch_time)
# Get the deleted port groups and set the deleted flag as true
                # and deletedEpoch value.
deleted_portgroups = _get_deleted_portgroups(
vs_newportgroupList,
existing_switch_PortGroups,
epoch_time, switchId)
for deleted_portgroup in deleted_portgroups:
vSwitch.add_portGroups(deleted_portgroup)
deleted_host_portgroups.append(deleted_portgroup)
# Get the deleted virtual switches and set the deleted
            # flag as true and deletedEpoch value.
deleted_switches = _get_deleted_vSwitches(newSwitchList,
existingVSwitches,
epoch_time)
for deleted_switch in deleted_switches:
deleted_pgs = deleted_switch.get_portGroups()
for deleted_pg in deleted_pgs:
deleted_pg.deleted = True
deleted_pg.set_deletedEpoch(epoch_time)
deleted_host_portgroups.append(deleted_pg)
vmhost.add_virtualSwitches(deleted_switch)
portGroups = vmhost.get_portGroups()
newportgroupList = []
for portGroup in portGroups:
portId = portGroup.get_id()
newportgroupList.append(portId)
if portId in pGroupDict:
portGroup.set_createEpoch(pGroupDict[portId])
portGroup.set_lastModifiedEpoch(epoch_time)
else:
portGroup.set_createEpoch(epoch_time)
            # Add the deleted port groups that were appended
            # while processing the virtual switches.
for deleted_pg in deleted_host_portgroups:
vmhost.add_portGroups(deleted_pg)
else:
vmhost.set_createEpoch(epoch_time)
            # Add the createEpoch to the added virtualSwitches
vSwitches = vmhost.get_virtualSwitches()
for vSwitch in vSwitches:
vSwitch.set_createEpoch(epoch_time)
            # Add the createEpoch to the added portGroups
portGroups = vmhost.get_portGroups()
for portGroup in portGroups:
portGroup.set_createEpoch(epoch_time)
__save_and_expunge(session, vmhost)
except Exception:
LOG.exception(_('error while adding vmhost'))
raise
finally:
__cleanup_session(session)
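A recurring rule in the save APIs above: an object that already exists in the DB keeps its original createEpoch and receives a fresh lastModifiedEpoch, while a brand-new object only receives createEpoch. A minimal sketch with plain dicts (`stamp_epochs` is a hypothetical name introduced for illustration):

```python
# Sketch of the epoch bookkeeping used throughout the save APIs:
# updates keep the original createEpoch and get a new
# lastModifiedEpoch; inserts only get createEpoch.
def stamp_epochs(obj, existing, epoch_time):
    if existing is not None:
        obj['createEpoch'] = existing['createEpoch']
        obj['lastModifiedEpoch'] = epoch_time
    else:
        obj['createEpoch'] = epoch_time
    return obj

updated = stamp_epochs({}, {'createEpoch': 100}, 200)
created = stamp_epochs({}, None, 200)
```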
def vm_host_get_by_ids(context, ids):
    """This API will return a list of VmHost objects which correspond to the given ids
Parameters:
ids - List of VmHost ids
context - nova.context.RequestContext object
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
vmhosts = session.query(VmHost).filter(
and_(VmHost.id.in_(ids),
or_(VmHost.deleted == False,
VmHost.deleted == None))).\
options(joinedload('cost')).\
options(joinedload('os')).\
options(joinedload_all('virtualSwitches.portGroups.cost')).\
options(joinedload_all('portGroups.cost')).\
options(joinedload('ipAddresses')).\
options(joinedload_all('virtualSwitches.subnets')).\
options(joinedload_all('virtualSwitches.networks')).\
options(joinedload_all('virtualSwitches.cost')).\
all()
# Populate virtualMachineIds
vmIdsRes = session.query(Vm.vmHostId, Vm.id).\
filter(Vm.vmHostId.in_(ids)).\
filter(or_(Vm.deleted == False, Vm.deleted == None)).\
all()
__vm_host_set_virtualMachineIds(vmhosts, vmIdsRes)
# Populate storageVolumeIds
volIdsRes = session.query(
HostMountPoint.vmHostId,
HostMountPoint.storageVolumeId).filter(
HostMountPoint.vmHostId.in_(ids)).all()
__vm_host_set_storageVolumeIds(vmhosts, volIdsRes)
return vmhosts
except Exception:
LOG.exception(_('error while obtaining host'))
        raise
finally:
__cleanup_session(session)
def vm_host_get_all(context):
"""This API will return a list of all the VmHost objects present in DB
Parameters:
context - nova.context.RequestContext object
"""
session = None
try:
session = nova_session.get_session()
vmhosts = session.query(VmHost).filter(
or_(VmHost.deleted == False, VmHost.deleted == None)).\
options(joinedload('cost')).\
options(joinedload('os')).\
options(joinedload_all('virtualSwitches.portGroups.cost')).\
options(joinedload_all('portGroups.cost')).\
options(joinedload('ipAddresses')).\
options(joinedload_all('virtualSwitches.subnets')).\
options(joinedload_all('virtualSwitches.networks')).\
options(joinedload_all('virtualSwitches.cost')).\
all()
# options(joinedload_all('localDisks.mountPoints')).\
# Populate virtualMachineIds
vmIdsRes = session.query(Vm.vmHostId, Vm.id).\
filter(or_(Vm.deleted == False, Vm.deleted == None)).all()
__vm_host_set_virtualMachineIds(vmhosts, vmIdsRes)
# Populate storageVolumeIds
volIdsRes = session.query(HostMountPoint.vmHostId,
HostMountPoint.storageVolumeId).all()
__vm_host_set_storageVolumeIds(vmhosts, volIdsRes)
return vmhosts
except Exception:
LOG.exception(_('error while obtaining hosts'))
        raise
finally:
__cleanup_session(session)
def vm_host_delete_by_ids(context, ids):
    """This API will delete VmHost objects which correspond to the given ids
Parameters:
ids - List of VmHost ids
context - nova.context.RequestContext object
"""
if ids is None:
return
session = None
try:
session = nova_session.get_session()
vmhosts = vm_host_get_by_ids(context, ids)
delete_epoch_time = get_current_epoch_ms()
for host in vmhosts:
vmid_tuples = \
session.query(Vm.id).filter(
and_(Vm.vmHostId.in_(ids),
or_(Vm.deleted == False,
Vm.deleted == None))).all()
vmIds = []
for vmid_tuple in vmid_tuples:
vmid = vmid_tuple[0]
vmIds.append(vmid)
vm_api.vm_delete_by_ids(context, vmIds)
# StorageVolume deletion
# Loop thru each of the Storage Volumes and check
# whether it has this host attached to its mount point.
storageIds = host.get_storageVolumeIds()
storageObj = storagevolume_api.\
storage_volume_get_by_ids(context, storageIds)
for storage in storageObj:
mountPoints = storage.get_mountPoints()
# If this relation found then create a new list
# of mount points and
# add these to the storage
newMountPoints = []
for mountPoint in mountPoints:
hostId = mountPoint.get_vmHostId()
if host.id != hostId:
newMountPoints.append(mountPoint)
storage.set_mountPoints(newMountPoints)
__save_and_expunge(session, storage)
vSwitches = host.get_virtualSwitches()
for vSwitch in vSwitches:
portGroups = vSwitch.get_portGroups()
for portGroup in portGroups:
portGroup.set_deleted(True)
portGroup.set_deletedEpoch(delete_epoch_time)
vSwitch.set_deleted(True)
vSwitch.set_deletedEpoch(delete_epoch_time)
portGroups = host.get_portGroups()
for portGroup in portGroups:
portGroup.set_deleted(True)
portGroup.set_deletedEpoch(delete_epoch_time)
# Finally delete the host
host.set_deleted(True)
host.set_deletedEpoch(delete_epoch_time)
__save_and_expunge(session, host)
except Exception:
LOG.exception(_('error while deleting host'))
raise
finally:
__cleanup_session(session)
def _load_deleted_objects(session, vmhosts):
for host in vmhosts:
deleted_host_vs = session.query(VirtualSwitch).\
filter(and_(VirtualSwitch.deleted == True,
VirtualSwitch.vmHostId == host.get_id())).\
options(joinedload('cost')).\
options(joinedload('networks')).\
options(joinedload('subnets')).all()
deleted_host_pgs = []
for vsd in deleted_host_vs:
host.add_virtualSwitches(vsd)
for vs in host.get_virtualSwitches():
vs_id = vs.get_id()
deleted_vs_pg = session.query(PortGroup).\
filter(and_(PortGroup.deleted == True,
PortGroup.virtualSwitchId == vs_id)).\
options(joinedload('cost')).all()
for pg in deleted_vs_pg:
vs.add_portGroups(pg)
deleted_host_pgs.append(pg)
for deleted_host_pg in deleted_host_pgs:
host.add_portGroups(deleted_host_pg)
def vm_host_get_all_by_filters(context, filters, sort_key, sort_dir):
    """
    Get all the vm_hosts that match all filters, sorted by sort_key.
    Deleted rows will be returned by default,
    unless there's a filter that says otherwise.
    Arguments:
        context - nova.context.RequestContext object
        filters - dictionary of filters to be applied
        keys should be fields of the VmHost model
        if the value is a scalar, an '=' filter is applied;
        if the value is a list or tuple, an 'IN' filter is applied
        eg : {'connectionState': 'Connected',
        'name': ['n1', 'n2']} will filter as
        connectionState = 'Connected' AND name IN ('n1', 'n2')
        sort_key - Column on which sorting is to be applied
        sort_dir - asc for ascending sort direction,
        desc for descending sort direction
    Returns:
        list of vm_hosts that match all filters, sorted by sort_key
    """
session = None
deleted_val = None
    # Make a copy of the filters dictionary so as not to affect the
    # caller's use of it.
if filters is not None and 'deleted' in filters:
vm_host_filters = filters.copy()
deleted_val = vm_host_filters.pop('deleted')
try:
session = nova_session.get_session()
filtered_query = _create_filtered_ordered_query(session,
VmHost,
filters=filters,
sort_key=sort_key,
sort_dir=sort_dir)
vmhosts = filtered_query.options(joinedload('cost')).\
options(joinedload('os')).\
options(joinedload_all('virtualSwitches.portGroups.cost')).\
options(joinedload_all('portGroups.cost')).\
options(joinedload('ipAddresses')).\
options(joinedload_all('virtualSwitches.subnets')).\
options(joinedload_all('virtualSwitches.networks')).\
options(joinedload_all('virtualSwitches.cost')).all()
# Populate virtualMachineIds
if deleted_val and deleted_val == 'true':
_load_deleted_objects(session, vmhosts)
vmIdsRes = session.query(
Vm.vmHostId, Vm.id).filter(Vm.deleted == True).all()
else:
vmIdsRes = session.query(
Vm.vmHostId, Vm.id).filter(or_(Vm.deleted == False,
Vm.deleted == None)).all()
__vm_host_set_virtualMachineIds(vmhosts, vmIdsRes)
# Populate storageVolumeIds
volIdsRes = session.query(HostMountPoint.vmHostId,
HostMountPoint.storageVolumeId).all()
__vm_host_set_storageVolumeIds(vmhosts, volIdsRes)
return vmhosts
except Exception:
LOG.exception(_('Error while obtaining hosts'))
raise
finally:
__cleanup_session(session)
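The filter handling above copies the dictionary before popping the 'deleted' control key, so the caller's dict is left untouched. Sketched in isolation (`split_deleted_filter` is a hypothetical name, not part of the original module):

```python
# Sketch of the copy-before-pop pattern above: consume the 'deleted'
# control key without mutating the caller's filters dict.
def split_deleted_filter(filters):
    deleted_val = None
    if filters is not None and 'deleted' in filters:
        filters = filters.copy()  # the caller's dict stays intact
        deleted_val = filters.pop('deleted')
    return deleted_val, filters

caller_filters = {'name': 'host1', 'deleted': 'true'}
deleted_val, remaining = split_deleted_filter(caller_filters)
```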


@ -1,282 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handles all requests relating to inventory.
"""
from healthnmon import log as logging
from oslo.config import cfg
from nova.db.sqlalchemy import api as context_api
from healthnmon.db import api
from nova.openstack.common import rpc
LOG = logging.getLogger('healthnmon.healthnmon_api')
api_opts = [
cfg.StrOpt('healthnmon_collector_topic',
default='healthnmon.collector',
help='The topic healthnmon-collector service listen on')
]
CONF = cfg.CONF
try:
CONF.healthnmon_collector_topic
except cfg.NoSuchOptError:
CONF.register_opts(api_opts)
'''def vm_host_get_all(context):
""" This API will make a call to db layer to fetch the list of all
the VmHost objects.
Parameters:
context - nova.context.RequestContext object
"""
return api.vm_host_get_all(context)'''
def vm_host_get_all_by_filters(context, filters, sort_key, sort_dir):
""" This API will make a call to db layer to fetch the list of all the
VmHost objects.
Parameters:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of VmHost model
        if the value is a scalar, an '=' filter is applied;
        if the value is a list or tuple, an 'IN' filter is applied
eg : {'connectionState':'Connected',
'name':['n1', 'n2']} will filter as
connectionState = 'Connected' AND name in ('n1', 'n2')
sort_key - Field on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of vm_hosts that match all filters and sorted with sort_key
"""
return api.vm_host_get_all_by_filters(context, filters, sort_key, sort_dir)
def vm_host_get_by_ids(context, host_ids):
    """ This API will make a call to db layer to fetch the VmHost objects
    which correspond to the given ids
Parameters:
host_ids - List of VmHost ids
context - nova.context.RequestContext object
"""
return api.vm_host_get_by_ids(context, host_ids)
def storage_volume_get_by_ids(context, storagevolume_ids):
    """ This API will make a call to db layer to fetch the StorageVolume
    objects which correspond to the given ids
Parameters:
storagevolume_ids - List of StorageVolume ids
context - nova.context.RequestContext object
"""
return api.storage_volume_get_by_ids(context, storagevolume_ids)
'''def storage_volume_get_all(context):
""" This API will make a call to db layer to fetch the list of all the
StorageVolume objects.
Parameters:
context - nova.context.RequestContext object
"""
return api.storage_volume_get_all(context)'''
def storage_volume_get_all_by_filters(context, filters, sort_key, sort_dir):
""" This API will make a call to db layer to fetch the list of all the
StorageVolume objects.
Parameters:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of StorageVolume model
        if the value is a scalar, an '=' filter is applied;
        if the value is a list or tuple, an 'IN' filter is applied
eg : {'size':1024, 'name':['vol1', 'vol2']}
will filter as
size = 1024 AND name in ('vol1', 'vol2')
sort_key - Field on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of storage volumes that match all filters
and sorted with sort_key
"""
return api.storage_volume_get_all_by_filters(context,
filters, sort_key, sort_dir)
def vm_get_by_ids(context, vm_ids):
    """ This API will make a call to db layer to fetch the Vm objects
    which correspond to the given ids
Parameters:
vm_ids - List of Vm ids
context - nova.context.RequestContext object
"""
return api.vm_get_by_ids(context, vm_ids)
'''def vm_get_all(context):
""" This API will make a call to db layer to fetch the list of all the
Vm objects.
Parameters:
context - nova.context.RequestContext object
"""
return api.vm_get_all(context)'''
def vm_get_all_by_filters(context, filters, sort_key, sort_dir):
""" This API will make a call to db layer to fetch the list of all the
VM objects.
Parameters:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of Vm model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'powerState':'ACTIVE', 'name':['n1', 'n2']}
will filter as
powerState = 'ACTIVE' AND name in ('n1', 'n2')
sort_key - Field on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of vms that match all filters and sorted with sort_key
"""
return api.vm_get_all_by_filters(context, filters, sort_key, sort_dir)
'''def subnet_get_all(context):
""" Fetch list of subnets
:param context: nova.context.RequestContext object
"""
return api.subnet_get_all(context)'''
def subnet_get_all_by_filters(context, filters, sort_key, sort_dir):
""" This API will make a call to db layer to fetch the list of all the
Subnet objects.
Parameters:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of Subnet model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'isPublic':True, 'name':['n1', 'n2']}
will filter as
isPublic = True AND name in ('n1', 'n2')
sort_key - Field on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of subnet that match all filters and sorted with sort_key
"""
return api.subnet_get_all_by_filters(context, filters, sort_key, sort_dir)
def subnet_get_by_ids(context, subnet_ids):
""" Fetch subnet details of the subnet ids
Parameters:
subnet_ids - List of subnet ids
context - nova.context.RequestContext object
"""
return api.subnet_get_by_ids(context, subnet_ids)
'''def virtual_switch_get_all(context):
""" Fetch list of virtual switches
Parameters:
context - nova.context.RequestContext object
"""
return api.virtual_switch_get_all(context)'''
def virtual_switch_get_all_by_filters(context, filters, sort_key, sort_dir):
""" This API will make a call to db layer to fetch the list of all the
VirtualSwitch objects.
Parameters:
context - nova.context.RequestContext object
filters - dictionary of filters to be applied
keys should be fields of VirtualSwitch model
if value is simple value = filter is applied and
if value is list or tuple 'IN' filter is applied
eg : {'switchType':'abc', 'name':['n1', 'n2']}
will filter as
switchType = 'abc' AND name in ('n1', 'n2')
sort_key - Field on which sorting is to be applied
sort_dir - asc for Ascending sort direction,
desc for descending sort direction
Returns:
list of virtual_switch that match all filters and
sorted with sort_key
"""
return api.virtual_switch_get_all_by_filters(context,
filters, sort_key, sort_dir)
def virtual_switch_get_by_ids(context, virtual_switch_ids):
""" Fetch virtual switch details of the ids
Parameters:
virtual_switch_ids - List of virtual switch ids
context - nova.context.RequestContext object
"""
return api.virtual_switch_get_by_ids(context, virtual_switch_ids)
@context_api.require_context
def get_vm_utilization(context, vm_id):
""" This API will fetches VM utilization from healthnmon service thru rpc
Parameters:
vm_id - uuid of the virtual machine.
context - nova.context.RequestContext object
"""
return rpc.call(context, CONF.healthnmon_collector_topic,
{'method': 'get_vm_utilization',
'args': {'uuid': vm_id}})
@context_api.require_context
def get_vmhost_utilization(context, host_id):
""" This API will fetches VM Host utilization from healthnmon service thru
rpc call.
Parameters:
host_id: uuid of the vmhost.
context - nova.context.RequestContext object
"""
return rpc.call(context, CONF.healthnmon_collector_topic,
{'method': 'get_vmhost_utilization',
'args': {'uuid': host_id}})


@@ -1,270 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Healthnmon logging handler.
This module adds to logging functionality by adding the option to specify
current green thread id/thread id when calling the various log methods.
"""
import logging
import logging.handlers
import cStringIO
import traceback
from oslo.config import cfg
from eventlet import greenthread
import zipfile
import os
log_opts = [
cfg.StrOpt('healthnmon_log_dir',
default='/var/log/healthnmon',
help='Log directory for healthnmon'),
cfg.StrOpt('healthnmon_collector_log_config',
default='/etc/healthnmon/logging-healthnmon-collector.conf',
help='Log configuration file for healthnmon collector'),
cfg.StrOpt('healthnmon_virtproxy_log_config',
default='/etc/healthnmon/logging-healthnmon-virtproxy.conf',
help='Log configuration file for healthnmon virtproxy'),
cfg.StrOpt('healthnmon_manage_log_config',
default='/etc/healthnmon/logging-healthnmon-manage.conf',
help='Log configuration file for healthnmon'),
cfg.StrOpt('healthnmon_logging_audit_format_string',
default='%(asctime)s,%(componentId)s,%(orgId)s,%(orgId)s,\
%(domain)s,%(userId)s,%(loggingId)s,%(taskId)s,%(sourceIp)s,\
%(result)s,%(action)s,%(severity)s,%(name)s,\
%(objectDescription)s,%(message)s',
help='format string to use for logging audit log messages'),
cfg.StrOpt('logging_greenthread_format_string',
default='%(asctime)s | %(levelname)s | \
%(name)s | %(gthread_id)d | '
'%(message)s',
help='format string to use for log messages \
with green thread id'),
cfg.StrOpt('logging_thread_format_string',
default='%(asctime)s | %(levelname)s | %(name)s | %(thread)d | '
'%(message)s',
help='format string to use for log messages \
with thread id'),
cfg.StrOpt('logging_greenthread_exception_prefix',
default='%(asctime)s | TRACE | %(name)s | %(gthread_id)d | ',
help='prefix each line of exception output with this format'),
cfg.StrOpt('logging_thread_exception_prefix',
default='%(asctime)s | TRACE | %(name)s | %(thread)d | ',
help='prefix each line of exception output with this format'),
]
CONF = cfg.CONF
CONF.register_opts(log_opts)
# AUDIT level
logging.AUDIT = logging.INFO + 1
logging.addLevelName(logging.AUDIT, 'AUDIT')
class HealthnmonLogAdapter(logging.LoggerAdapter):
""" Healthnmon logging handler that extends default logger to include
green thread/thread identifier """
warn = logging.LoggerAdapter.warning
def __init__(self, logger):
self.logger = logger
def audit(self, msg, *args, **kwargs):
self.log(logging.AUDIT, msg, *args, **kwargs)
def process(self, msg, kwargs):
"""Uses hash of current green thread object for unqiue identifier """
if 'extra' not in kwargs:
kwargs['extra'] = {}
extra = kwargs['extra']
if greenthread.getcurrent() is not None:
extra.update({'gthread_id': hash(greenthread.getcurrent())})
extra['extra'] = extra.copy()
return msg, kwargs
class HealthnmonFormatter(logging.Formatter):
"""Thread aware formatter configured through flags.
The flags used to set format strings are: logging_greenthread_format_string
and logging_thread_format_string.
For information about what variables are available for the formatter see:
http://docs.python.org/library/logging.html#formatter
"""
def format(self, record):
"""Uses green thread id if available, otherwise thread id is used ."""
if 'gthread_id' not in record.__dict__:
self._fmt = CONF.logging_thread_format_string
else:
self._fmt = CONF.logging_greenthread_format_string
if record.exc_info:
record.exc_text = self.formatException(record.exc_info, record)
return logging.Formatter.format(self, record)
def formatException(self, exc_info, record=None):
"""Format exception output with
CONF.healthnmon_logging_exception_prefix."""
if not record:
return logging.Formatter.formatException(self, exc_info)
stringbuffer = cStringIO.StringIO()
traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
None, stringbuffer)
lines = stringbuffer.getvalue().split('\n')
stringbuffer.close()
if 'gthread_id' not in record.__dict__:
exception_prefix = CONF.logging_thread_exception_prefix
else:
exception_prefix = CONF.logging_greenthread_exception_prefix
if exception_prefix.find('%(asctime)') != -1:
record.asctime = self.formatTime(record, self.datefmt)
formatted_lines = []
for line in lines:
pl = exception_prefix % record.__dict__
fl = '%s%s' % (pl, line)
formatted_lines.append(fl)
return '\n'.join(formatted_lines)
class HealthnmonLogHandler(logging.handlers.RotatingFileHandler):
"""Size based rotating file handler which zips the backup files
"""
def __init__(self, filename, mode='a', maxBytes=104857600, backupCount=20,
encoding='utf-8'):
logging.handlers.RotatingFileHandler.__init__(
self, filename, mode, maxBytes, backupCount, encoding)
def doRollover(self):
logging.handlers.RotatingFileHandler.doRollover(self)
if self.backupCount > 0:
for i in range(self.backupCount - 1, 0, -1):
sfn = "%s.%d.gz" % (self.baseFilename, i)
dfn = "%s.%d.gz" % (self.baseFilename, i + 1)
if os.path.exists(sfn):
if os.path.exists(dfn):
os.remove(dfn)
os.rename(sfn, dfn)
dfn = self.baseFilename + ".1"
compressed_log_file = zipfile.ZipFile(dfn + ".gz", "w")
compressed_log_file.write(dfn, os.path.basename(
dfn), zipfile.ZIP_DEFLATED)
compressed_log_file.close()
os.remove(dfn)
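doRollover above defers to the standard size-based rotation and then replaces each backup file with a zip archive carrying a ".gz" suffix. The compress-and-remove step can be sketched standalone; the file name here is illustrative:

```python
import os
import zipfile

def compress_rotated(path):
    """Zip a rotated log file in place and delete the original,
    mirroring the archive step of the handler's doRollover."""
    with zipfile.ZipFile(path + ".gz", "w") as zf:
        zf.write(path, os.path.basename(path), zipfile.ZIP_DEFLATED)
    os.remove(path)

with open("app.log.1", "w") as f:
    f.write("rotated log line\n")
compress_rotated("app.log.1")
```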
class HealthnmonAuditFilter(logging.Filter):
    """Pass only AUDIT level records."""
    def filter(self, record):
        return record.levelno == logging.AUDIT
class HealthnmonAuditFormatter(HealthnmonFormatter):
"""Format audit messages as per the audit logging format"""
def format(self, record):
self._fmt = CONF.healthnmon_logging_audit_format_string
if 'componentId' not in record.__dict__:
record.__dict__['componentId'] = 'Healthnmon'
if 'orgId' not in record.__dict__:
record.__dict__['orgId'] = ''
if 'domain' not in record.__dict__:
record.__dict__['domain'] = ''
if 'userId' not in record.__dict__:
record.__dict__['userId'] = ''
if 'loggingId' not in record.__dict__:
record.__dict__['loggingId'] = ''
if 'taskId' not in record.__dict__:
record.__dict__['taskId'] = ''
if 'sourceIp' not in record.__dict__:
record.__dict__['sourceIp'] = ''
if 'result' not in record.__dict__:
record.__dict__['result'] = ''
if 'action' not in record.__dict__:
record.__dict__['action'] = ''
if 'severity' not in record.__dict__:
record.__dict__['severity'] = ''
if 'objectDescription' not in record.__dict__:
record.__dict__['objectDescription'] = ''
if record.exc_info:
record.exc_text = self.formatException(record.exc_info, record)
return logging.Formatter.format(self, record)
class HealthnmonAuditHandler(HealthnmonLogHandler):
""""""
def __init__(self, filename, mode='a', maxBytes=104857600, backupCount=20,
encoding='utf-8'):
HealthnmonLogHandler.__init__(
self, filename, mode, maxBytes, backupCount, encoding)
self.addFilter(HealthnmonAuditFilter())
# def handle_exception(type, value, tb):
# extra = {}
# if CONF.verbose:
# extra['exc_info'] = (type, value, tb)
# getLogger().critical(str(value), **extra)
def healthnmon_collector_setup():
"""Setup healthnmon logging."""
# sys.excepthook = handle_exception
if CONF.healthnmon_collector_log_config:
try:
logging.config.fileConfig(CONF.healthnmon_collector_log_config)
except Exception:
traceback.print_exc()
raise
def healthnmon_manage_setup():
"""Setup healthnmon logging."""
# sys.excepthook = handle_exception
if CONF.healthnmon_manage_log_config:
try:
logging.config.fileConfig(CONF.healthnmon_manage_log_config)
except Exception:
traceback.print_exc()
raise
def healthnmon_virtproxy_setup():
"""Setup healthnmon logging."""
# sys.excepthook = handle_exception
if CONF.healthnmon_virtproxy_log_config:
try:
logging.config.fileConfig(CONF.healthnmon_virtproxy_log_config)
except Exception:
traceback.print_exc()
raise
_loggers = {}
def getLogger(name='healthnmon'):
if name not in _loggers:
_loggers[name] = HealthnmonLogAdapter(logging.getLogger(name))
return _loggers[name]


@@ -1,29 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
notifier_opts = [cfg.StrOpt('healthnmon_default_notification_level',
default='INFO',
help='Default notification level for \
healthnmon notifications'
),
cfg.ListOpt('healthnmon_notification_drivers',
default=['healthnmon.notifier.rabbit_notifier', ],
help='Default notification drivers for \
healthnmon notifications')]
CONF = cfg.CONF
CONF.register_opts(notifier_opts)


@@ -1,131 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Healthnmon notifier api
Implements the healthnmon notifier API
"""
import uuid
from oslo.config import cfg
from nova.openstack.common import timeutils, jsonutils, importutils
from healthnmon import log as logging
import time
from healthnmon.constants import Constants
LOG = logging.getLogger('healthnmon.notifier.api')
CONF = cfg.CONF
WARN = 'WARN'
INFO = 'INFO'
ERROR = 'ERROR'
CRITICAL = 'CRITICAL'
DEBUG = 'DEBUG'
priorities = (DEBUG, WARN, INFO, ERROR, CRITICAL)
drivers = None
class BadPriorityException(Exception):
pass
def notify(context,
publisher_id,
event_type,
priority,
payload,
):
"""
Sends a notification using the specified driver
Notify parameters:
publisher_id - the source of the message. Cannot be none.
event_type - the literal type of event (ex. LifeCycle.Vm.Created)
priority - patterned after the enumeration of Python logging levels in
the set (DEBUG, WARN, INFO, ERROR, CRITICAL)
payload - A python dictionary of attributes
Outgoing message format includes the above parameters, and appends the
following:
message_id - a UUID representing the id for this notification
timestamp - the GMT timestamp the notification was sent at
The composite message will be constructed as a dictionary of the above
attributes, which will then be sent via the transport mechanism defined
by the driver.
Message example:
{'message_id': str(uuid.uuid4()),
'publisher_id': 'compute.host1',
'timestamp': utils.utcnow(),
'priority': 'WARN',
'event_type': 'LifeCycle.Vm.Created',
'payload': {'entity_id': 'XXXX', ... }}
"""
if priority not in priorities:
raise BadPriorityException(_('%s not in valid priorities'
% priority))
# Ensure everything is JSON serializable.
payload = jsonutils.to_primitive(payload, convert_instances=True)
msg = dict(
message_id=str(uuid.uuid4()),
publisher_id=publisher_id,
event_type=event_type,
priority=priority,
payload=payload,
timestamp=time.strftime(
Constants.DATE_TIME_FORMAT, timeutils.utcnow().timetuple()),
)
    for driver in _get_drivers():
        try:
            driver.notify(context, msg)
        except Exception, e:
            LOG.exception(_("Problem '%(e)s' attempting to send to "
                            "healthnmon notification driver %(driver)s."
                            % locals()))
def _get_drivers():
    """Instantiates and returns drivers based on the flag values."""
    global drivers
    if not drivers:
        drivers = []
        for notification_driver in CONF.healthnmon_notification_drivers:
            drivers.append(importutils.import_module(notification_driver))
    return drivers
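The composite message layout described in notify()'s docstring can be reproduced without the RPC drivers. This sketch uses a plain ISO-style GMT timestamp in place of `Constants.DATE_TIME_FORMAT`, and the payload is a stand-in:

```python
import time
import uuid

PRIORITIES = ('DEBUG', 'WARN', 'INFO', 'ERROR', 'CRITICAL')

def build_message(publisher_id, event_type, priority, payload):
    """Compose the outgoing notification dict, appending message_id
    and a GMT timestamp as notify() does."""
    if priority not in PRIORITIES:
        raise ValueError('%s not in valid priorities' % priority)
    return {
        'message_id': str(uuid.uuid4()),
        'publisher_id': publisher_id,
        'event_type': event_type,
        'priority': priority,
        'payload': payload,
        'timestamp': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
    }

msg = build_message('compute.host1', 'LifeCycle.Vm.Created', 'WARN',
                    {'entity_id': 'XXXX'})
```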


@@ -1,37 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Notifier driver which simply logs the message
"""
import json
from oslo.config import cfg
from healthnmon import log as logging
CONF = cfg.CONF
def notify(context, message):
"""Notifies the recipient of the desired event given the model.
    Logs notifications using nova's default logging system."""
priority = message.get('priority',
CONF.healthnmon_default_notification_level)
priority = priority.lower()
logger = logging.getLogger('healthnmon.notification.%s'
% message['event_type'])
getattr(logger, priority)(json.dumps(message))


@@ -1,51 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Healthnmon notification driver which sends message to rabbitmq
"""
from oslo.config import cfg
from nova.openstack.common import rpc
from nova import context as req_context
CONF = cfg.CONF
def notify(context, message):
"""Sends a notification to the RabbitMQ"""
if not context:
context = req_context.get_admin_context()
priority = message.get('priority',
CONF.healthnmon_default_notification_level)
priority = priority.lower()
    # Construct a dotted topic name for reference.
    # The per-priority/event-type topics below used to create multiple
    # queues; that behavior was removed, so the message is always sent
    # on the single 'healthnmon_notification' topic.
topic_parts = []
topic_parts.append('healthnmon_notification')
topic_parts.append(priority)
event_type = message.get('event_type', None)
if event_type is not None:
topic_parts.append(event_type)
payload = message.get('payload', None)
if payload is not None:
entity_id = payload.get('entity_id', None)
if entity_id is not None:
topic_parts.append(entity_id)
topic = '.'.join(topic_parts)
rpc.notify(context, "healthnmon_notification", message)
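The dotted topic assembled above joins whichever parts are present (priority, event type, entity id), although the final `rpc.notify` call sends on the fixed 'healthnmon_notification' topic. A standalone sketch of the join logic:

```python
def build_topic(priority, event_type=None, entity_id=None):
    """Join the available parts into a dotted notification topic."""
    parts = ['healthnmon_notification', priority.lower()]
    if event_type is not None:
        parts.append(event_type)
    if entity_id is not None:
        parts.append(entity_id)
    return '.'.join(parts)

topic = build_topic('WARN', 'LifeCycle.Vm.Created', 'XXXX')
```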


@@ -1,378 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2011 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utilities with minimum-depends for use in setup.py
"""
import datetime
import os
import re
import subprocess
import sys
from setuptools.command import sdist
def parse_mailmap(mailmap='.mailmap'):
mapping = {}
if os.path.exists(mailmap):
with open(mailmap, 'r') as fp:
for l in fp:
l = l.strip()
if not l.startswith('#') and ' ' in l:
canonical_email, alias = [x for x in l.split(' ')
if x.startswith('<')]
mapping[alias] = canonical_email
return mapping
def canonicalize_emails(changelog, mapping):
"""Takes in a string and an email alias mapping and replaces all
instances of the aliases in the string with their real email.
"""
for alias, email in mapping.iteritems():
changelog = changelog.replace(alias, email)
return changelog
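parse_mailmap builds an alias-to-canonical mapping, and canonicalize_emails applies it by plain substring replacement. A quick demonstration with made-up addresses (note `iteritems` above is Python 2; this sketch uses `items`):

```python
def canonicalize(changelog, mapping):
    """Replace each alias email in the text with its canonical form."""
    for alias, email in mapping.items():
        changelog = changelog.replace(alias, email)
    return changelog

log = "Fix bug <jdoe@old.example.org>"
out = canonicalize(log, {'<jdoe@old.example.org>': '<john.doe@example.org>'})
```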
# Get requirements from the first file that exists
def get_reqs_from_files(requirements_files):
for requirements_file in requirements_files:
if os.path.exists(requirements_file):
with open(requirements_file, 'r') as fil:
return fil.read().split('\n')
return []
def parse_requirements(requirements_files=['requirements.txt',
'tools/pip-requires']):
requirements = []
for line in get_reqs_from_files(requirements_files):
# For the requirements list, we need to inject only the portion
# after egg= so that distutils knows the package it's looking for
# such as:
# -e git://github.com/openstack/nova/master#egg=nova
if re.match(r'\s*-e\s+', line):
requirements.append(re.sub(r'\s*-e\s+.*#egg=(.*)$', r'\1',
line))
# such as:
# http://github.com/openstack/nova/zipball/master#egg=nova
elif re.match(r'\s*https?:', line):
requirements.append(re.sub(r'\s*https?:.*#egg=(.*)$', r'\1',
line))
# -f lines are for index locations, and don't get used here
elif re.match(r'\s*-f\s+', line):
pass
# argparse is part of the standard library starting with 2.7
# adding it to the requirements list screws distro installs
elif line == 'argparse' and sys.version_info >= (2, 7):
pass
else:
requirements.append(line)
return requirements
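The branches in parse_requirements can be exercised on a few representative pip lines. This helper mirrors the per-line logic above; the requirement strings are examples:

```python
import re
import sys

def extract_requirement(line):
    """Reduce one pip requirement line to the name distutils needs,
    or None when the line should be skipped."""
    if re.match(r'\s*-e\s+', line):
        return re.sub(r'\s*-e\s+.*#egg=(.*)$', r'\1', line)
    if re.match(r'\s*https?:', line):
        return re.sub(r'\s*https?:.*#egg=(.*)$', r'\1', line)
    if re.match(r'\s*-f\s+', line):
        return None  # index location, not a requirement
    if line == 'argparse' and sys.version_info >= (2, 7):
        return None  # stdlib since Python 2.7
    return line

req = extract_requirement('-e git://github.com/openstack/nova/master#egg=nova')
```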
def parse_dependency_links(requirements_files=['requirements.txt',
'tools/pip-requires']):
dependency_links = []
# dependency_links inject alternate locations to find packages listed
# in requirements
for line in get_reqs_from_files(requirements_files):
# skip comments and blank lines
if re.match(r'(\s*#)|(\s*$)', line):
continue
# lines with -e or -f need the whole line, minus the flag
if re.match(r'\s*-[ef]\s+', line):
dependency_links.append(re.sub(r'\s*-[ef]\s+', '', line))
# lines that are only urls can go in unmolested
elif re.match(r'\s*https?:', line):
dependency_links.append(line)
return dependency_links
def write_requirements():
venv = os.environ.get('VIRTUAL_ENV', None)
if venv is not None:
with open("requirements.txt", "w") as req_file:
output = subprocess.Popen(["pip", "-E", venv, "freeze", "-l"],
stdout=subprocess.PIPE)
requirements = output.communicate()[0].strip()
req_file.write(requirements)
def _run_shell_command(cmd):
if os.name == 'nt':
output = subprocess.Popen(["cmd.exe", "/C", cmd],
stdout=subprocess.PIPE)
else:
output = subprocess.Popen(["/bin/sh", "-c", cmd],
stdout=subprocess.PIPE)
out = output.communicate()
if len(out) == 0:
return None
if len(out[0].strip()) == 0:
return None
return out[0].strip()
def _get_git_next_version_suffix(branch_name):
datestamp = datetime.datetime.now().strftime('%Y%m%d')
if branch_name == 'milestone-proposed':
revno_prefix = "r"
else:
revno_prefix = ""
_run_shell_command("git fetch origin +refs/meta/*:refs/remotes/meta/*")
milestone_cmd = "git show meta/openstack/release:%s" % branch_name
milestonever = _run_shell_command(milestone_cmd)
if milestonever:
first_half = "%s~%s" % (milestonever, datestamp)
else:
first_half = datestamp
post_version = _get_git_post_version()
# post version should look like:
# 0.1.1.4.gcc9e28a
# where the bit after the last . is the short sha, and the bit between
# the last and second to last is the revno count
(revno, sha) = post_version.split(".")[-2:]
second_half = "%s%s.%s" % (revno_prefix, revno, sha)
return ".".join((first_half, second_half))
def _get_git_current_tag():
tag_info = _get_git_tag_info()
possible_tags = _run_shell_command("git tag --contains HEAD")
if not possible_tags:
return None
tags = possible_tags.split('\n')
if tag_info in tags:
return tag_info
return None
def _get_git_tag_info():
return _run_shell_command("git describe --tags")
def _get_git_post_version():
current_tag = _get_git_current_tag()
if current_tag is not None:
return current_tag
else:
tag_info = _get_git_tag_info()
if tag_info is None:
base_version = "0.0"
cmd = "git --no-pager log --oneline"
out = _run_shell_command(cmd)
revno = len(out.split("\n"))
sha = _run_shell_command("git describe --always")
else:
tag_infos = tag_info.split("-")
base_version = "-".join(tag_infos[:-2])
(revno, sha) = tag_infos[-2:]
return "%s.%s.%s" % (base_version, revno, sha)
def write_git_changelog():
"""Write a changelog based on the git changelog."""
new_changelog = 'ChangeLog'
if not os.getenv('SKIP_WRITE_GIT_CHANGELOG'):
if os.path.isdir('.git'):
git_log_cmd = 'git log --stat'
changelog = _run_shell_command(git_log_cmd)
mailmap = parse_mailmap()
with open(new_changelog, "w") as changelog_file:
changelog_file.write(canonicalize_emails(changelog, mailmap))
else:
open(new_changelog, 'w').close()
def generate_authors():
"""Create AUTHORS file using git commits."""
jenkins_email = 'jenkins@review.(openstack|stackforge).org'
old_authors = 'AUTHORS.in'
new_authors = 'AUTHORS'
if not os.getenv('SKIP_GENERATE_AUTHORS'):
if os.path.isdir('.git'):
# don't include jenkins email address in AUTHORS file
git_log_cmd = ("git log --format='%aN <%aE>' | sort -u | "
"egrep -v '" + jenkins_email + "'")
changelog = _run_shell_command(git_log_cmd)
mailmap = parse_mailmap()
with open(new_authors, 'w') as new_authors_fh:
new_authors_fh.write(canonicalize_emails(changelog, mailmap))
if os.path.exists(old_authors):
with open(old_authors, "r") as old_authors_fh:
new_authors_fh.write('\n' + old_authors_fh.read())
else:
open(new_authors, 'w').close()
_rst_template = """%(heading)s
%(underline)s
.. automodule:: %(module)s
:members:
:undoc-members:
:show-inheritance:
"""
def read_versioninfo(project):
"""Read the versioninfo file. If it doesn't exist, we're in a github
zipball, and there's really no way to know what version we really
are, but that should be ok, because the utility of that should be
just about nil if this code path is in use in the first place."""
versioninfo_path = os.path.join(project, 'versioninfo')
if os.path.exists(versioninfo_path):
with open(versioninfo_path, 'r') as vinfo:
version = vinfo.read().strip()
else:
version = None
return version
def write_versioninfo(project, version):
"""Write a simple file containing the version of the package."""
with open(os.path.join(project, 'versioninfo'), 'w') as fil:
fil.write("%s\n" % version)
def get_cmdclass():
"""Return dict of commands to run from setup.py."""
cmdclass = dict()
def _find_modules(arg, dirname, files):
for filename in files:
if filename.endswith('.py') and filename != '__init__.py':
arg["%s.%s" % (dirname.replace('/', '.'),
filename[:-3])] = True
class LocalSDist(sdist.sdist):
"""Builds the ChangeLog and Authors files from VC first."""
def run(self):
write_git_changelog()
generate_authors()
# sdist.sdist is an old style class, can't use super()
sdist.sdist.run(self)
cmdclass['sdist'] = LocalSDist
# If Sphinx is installed on the box running setup.py,
# enable setup.py to build the documentation, otherwise,
# just ignore it
try:
from sphinx.setup_command import BuildDoc
class LocalBuildDoc(BuildDoc):
def generate_autoindex(self):
print "**Autodocumenting from %s" % os.path.abspath(os.curdir)
modules = {}
option_dict = self.distribution.get_option_dict('build_sphinx')
source_dir = os.path.join(option_dict['source_dir'][1], 'api')
if not os.path.exists(source_dir):
os.makedirs(source_dir)
for pkg in self.distribution.packages:
if '.' not in pkg:
os.path.walk(pkg, _find_modules, modules)
module_list = modules.keys()
module_list.sort()
autoindex_filename = os.path.join(source_dir, 'autoindex.rst')
with open(autoindex_filename, 'w') as autoindex:
autoindex.write(""".. toctree::
:maxdepth: 1
""")
for module in module_list:
output_filename = os.path.join(source_dir,
"%s.rst" % module)
heading = "The :mod:`%s` Module" % module
underline = "=" * len(heading)
values = dict(module=module, heading=heading,
underline=underline)
print "Generating %s" % output_filename
with open(output_filename, 'w') as output_file:
output_file.write(_rst_template % values)
autoindex.write(" %s.rst\n" % module)
def run(self):
if not os.getenv('SPHINX_DEBUG'):
self.generate_autoindex()
for builder in ['html', 'man']:
self.builder = builder
self.finalize_options()
self.project = self.distribution.get_name()
self.version = self.distribution.get_version()
self.release = self.distribution.get_version()
BuildDoc.run(self)
cmdclass['build_sphinx'] = LocalBuildDoc
except ImportError:
pass
return cmdclass
def get_git_branchname():
for branch in _run_shell_command("git branch --color=never").split("\n"):
if branch.startswith('*'):
_branch_name = branch.split()[1].strip()
if _branch_name == "(no":
_branch_name = "no-branch"
return _branch_name
def get_pre_version(projectname, base_version):
"""Return a version which is leading up to a version that will
be released in the future."""
version = read_versioninfo(projectname)
if not version and os.path.isdir('.git'):
current_tag = _get_git_current_tag()
if current_tag is not None:
if base_version:
version = base_version
else:
version = current_tag
else:
branch_name = os.getenv('BRANCHNAME',
os.getenv('GERRIT_REFNAME',
get_git_branchname()))
version_suffix = _get_git_next_version_suffix(branch_name)
version = "%s~%s" % (base_version, version_suffix)
write_versioninfo(projectname, version)
if not version:
version = "0.0.0"
return version
def get_post_version(projectname):
"""Return a version which is equal to the tag that's on the current
revision if there is one, or tag plus number of additional revisions
if the current revision has no tag."""
version = read_versioninfo(projectname)
if not version and os.path.isdir('.git'):
version = _get_git_post_version()
write_versioninfo(projectname, version)
if not version:
version = "0.0.0"
return version


@@ -1,154 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2012 OpenStack LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utilities for consuming the auto-generated versioninfo files.
"""
import datetime
import pkg_resources
import os
import setup
class _deferred_version_string(object):
"""Internal helper class which provides delayed version calculation."""
def __init__(self, version_info, prefix):
self.version_info = version_info
self.prefix = prefix
def __str__(self):
return "%s%s" % (self.prefix, self.version_info.version_string())
def __repr__(self):
return "%s%s" % (self.prefix, self.version_info.version_string())
class VersionInfo(object):
def __init__(self, package, python_package=None, pre_version=None):
"""Object that understands versioning for a package
:param package: name of the top level python namespace. For glance,
this would be "glance" for python-glanceclient, it
would be "glanceclient"
:param python_package: optional name of the project name. For
glance this can be left unset. For
python-glanceclient, this would be
"python-glanceclient"
:param pre_version: optional version that the project is working to
"""
self.package = package
if python_package is None:
self.python_package = package
else:
self.python_package = python_package
self.pre_version = pre_version
self.version = None
def _generate_version(self):
"""Defer to the openstack.common.setup routines for making a
version from git."""
if self.pre_version is None:
return setup.get_post_version(self.python_package)
else:
return setup.get_pre_version(self.python_package, self.pre_version)
def _newer_version(self, pending_version):
"""Check to see if we're working with a stale version or not.
We expect a version string that either looks like:
2012.2~f3~20120708.10.4426392
which is an unreleased version of a pre-version, or:
0.1.1.4.gcc9e28a
which is an unreleased version of a post-version, or:
0.1.1
        which is a release and should match the tag.
For now, if we have a date-embedded version, check to see if it's
old, and if so re-generate. Otherwise, just deal with it.
"""
try:
version_date = int(self.version.split("~")[-1].split('.')[0])
if version_date < int(datetime.date.today().strftime('%Y%m%d')):
return self._generate_version()
else:
return pending_version
except Exception:
return pending_version
def version_string_with_vcs(self, always=False):
"""Return the full version of the package including suffixes indicating
VCS status.
For instance, if we are working towards the 2012.2 release,
canonical_version_string should return 2012.2 if this is a final
release, or else something like 2012.2~f1~20120705.20 if it's not.
:param always: if true, skip all version caching
"""
if always:
self.version = self._generate_version()
if self.version is None:
requirement = pkg_resources.Requirement.parse(self.python_package)
versioninfo = "%s/versioninfo" % self.package
try:
raw_version = pkg_resources.resource_string(requirement,
versioninfo)
self.version = self._newer_version(raw_version.strip())
except (IOError, pkg_resources.DistributionNotFound):
self.version = self._generate_version()
return self.version
def get_version(self, package_name, pre_version=None):
version = os.environ.get("OSLO_PACKAGE_VERSION", None)
if version is None:
version = self._canonical_version_string(always=True)
return version
def _canonical_version_string(self, always=False):
"""Return the simple version of the package excluding any suffixes.
For instance, if we are working towards the 2012.2 release,
canonical_version_string should return 2012.2 in all cases.
:param always: if true, skip all version caching
"""
return self.version_string_with_vcs(always).split('~')[0]
def version_string(self, always=False):
"""Return the base version of the package.
For instance, if we are working towards the 2012.2 release,
version_string should return 2012.2 if this is a final release, or
2012.2-dev if it is not.
:param always: if true, skip all version caching
"""
version_parts = self.version_string_with_vcs(always).split('~')
if len(version_parts) == 1:
return version_parts[0]
else:
return '%s-dev' % (version_parts[0],)
def deferred_version_string(self, prefix=""):
"""Generate an object which will expand in a string context to
        the results of version_string(). We do this so that we don't
call into pkg_resources every time we start up a program when
passing version information into the CONF constructor, but
rather only do the calculation when and if a version is requested
"""
return _deferred_version_string(self, prefix)
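`_deferred_version_string` works because `__str__` delays the expensive lookup until the object is actually rendered in a string context; a self-contained Python 3 sketch of that pattern (class and attribute names are illustrative):

```python
class DeferredString:
    """Wraps a zero-argument callable and evaluates it only when the
    object is rendered as a string (a sketch of the
    _deferred_version_string idea; the _calls counter is just here to
    demonstrate the laziness)."""

    def __init__(self, compute, prefix=""):
        self._compute = compute
        self._prefix = prefix
        self._calls = 0

    def __str__(self):
        self._calls += 1
        return "%s%s" % (self._prefix, self._compute())


deferred = DeferredString(lambda: "2012.2", prefix="v")
assert deferred._calls == 0          # nothing computed yet
rendered = "version: %s" % deferred  # interpolation triggers __str__
assert rendered == "version: v2012.2"
assert deferred._calls == 1
```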

View File

@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

View File

@ -1,103 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import pyclbr
import inspect
import logging
import traceback
from nova.openstack.common import importutils
from healthnmon import log
from healthnmon.profiler import profile_cpu, profile_mem
LOG = log.getLogger('healthnmon.utils')
def profile_cputime(module, decorator_name, status):
try:
if status:
profile_cpu.add_module(module)
else:
profile_cpu.delete_module(module)
# import decorator function
decorator = importutils.import_class(decorator_name)
__import__(module)
# Retrieve module information using pyclbr
module_data = pyclbr.readmodule_ex(module)
for key in module_data.keys():
# set the decorator for the class methods
if isinstance(module_data[key], pyclbr.Class):
clz = importutils.import_class("%s.%s" % (module, key))
for method, func in inspect.getmembers(clz, inspect.ismethod):
if func.func_code.co_name == 'profile_cputime':
pass
else:
setattr(clz, method,
decorator("%s.%s.%s" % (module, key, method),
func))
LOG.info(_('Decorated method ' + method))
# set the decorator for the function
if isinstance(module_data[key], pyclbr.Function):
func = importutils.import_class("%s.%s" % (module, key))
if func.func_code.co_name == 'profile_cputime':
pass
else:
setattr(sys.modules[module], key,
decorator("%s.%s" % (module, key), func))
LOG.info(_('Decorated method ' + key))
except:
LOG.error(_('Invalid module or decorator name '))
LOG.error(_('Exception occurred %s ') % traceback.format_exc())
def profile_memory(method, decorator_name, status, setref):
try:
profile_mem.modules_profiling_status[method] = status
profile_mem.setref = setref
# import decorator function
decorator = importutils.import_class(decorator_name)
class_str, _sep, method_str = method.rpartition('.')
clz = importutils.import_class(class_str)
# set the decorator for the function
func = getattr(clz, method_str)
if func.func_code.co_name == 'profile_memory':
pass
else:
setattr(clz, method_str,
decorator(method, func))
LOG.info(_('Decorated method ' + method_str))
except:
LOG.error(_('Invalid method or decorator name '))
LOG.error(_('Exception occurred %s ') % traceback.format_exc())
def setLogLevel(level, module_name):
level = level.upper()
if level not in logging._levelNames:
LOG.error(_(' Invalid log level %s ') % level)
raise Exception(' Invalid log level ' + level)
l = logging.getLevelName(level.upper())
if module_name == 'healthnmon':
logging.getLogger().setLevel(l)
log.getLogger().logger.setLevel(l)
else:
log.getLogger(module_name).logger.setLevel(l)
LOG.audit(_(module_name + ' log level set to %s ') % level)
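The class-walking decoration in `profile_cputime` boils down to `inspect.getmembers` plus `setattr`; a hedged Python 3 sketch (in Python 3, unbound methods are plain functions, so `inspect.isfunction` is used; `Greeter` and `traced` are invented for illustration):

```python
import functools
import inspect


def traced(qualified_name, fn):
    """Illustrative decorator: records each call under its qualified name."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls.append(qualified_name)
        return fn(*args, **kwargs)

    wrapper.calls = []
    return wrapper


class Greeter:
    def hello(self):
        return "hello"


# Mirror profile_cputime: walk the class and wrap each plain function.
for name, func in inspect.getmembers(Greeter, inspect.isfunction):
    setattr(Greeter, name, traced("Greeter.%s" % name, func))

g = Greeter()
assert g.hello() == "hello"
assert Greeter.hello.calls == ["Greeter.hello"]
```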

View File

@ -1,88 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Defines a decorator that logs the time spent in each function
of the specified modules; it is used from utils.monkey_patch()
"""
from healthnmon import log as logging
from oslo.config import cfg
import functools
import time
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
modules = []
def profile_cputime_decorator(name, fn):
    """ Decorator for logging which is used from utils.monkey_patch()
    :param name: name of the function
    :param fn: the function object
:returns: function -- decorated function
"""
@functools.wraps(fn)
def profile_cputime(*args, **kwarg):
if not modules:
getmodules()
module = get_module_name(name)
status = get_state(module)
if status:
st = time.time()
rt = fn(*args, **kwarg)
logger = logging.getLogger(module)
logger.debug(_(' %(fn_name)s | %(time)f | ms'),
{'fn_name': name,
'time': (time.time() - st) * 1000})
return rt
else:
return fn(*args, **kwarg)
return profile_cputime
def getmodules():
if CONF.monkey_patch is True:
for module_and_decorator in CONF.monkey_patch_modules:
module = module_and_decorator.split(':')[0]
modules.append(module)
def get_module_name(module_name):
for m in modules:
if module_name.startswith(m):
return m
def add_module(module_name):
if module_name is not None and module_name not in modules:
modules.append(module_name)
def delete_module(module_name):
if module_name is not None and module_name in modules:
modules.remove(module_name)
def get_state(module_name):
if not module_name:
return False
else:
return True
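Putting `get_module_name` and the timing wrapper together, a minimal Python 3 sketch of the opt-in timing gate (the module names and the `last_ms` attribute are illustrative, and `time.time()` stands in for the logging call):

```python
import functools
import time

modules = ["pkg.sub"]  # modules opted in to timing


def get_module_name(name):
    """First registered module that prefixes the qualified name, as in
    get_module_name() above; None means the function is not opted in."""
    for m in modules:
        if name.startswith(m):
            return m
    return None


def profile_cputime(name, fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if get_module_name(name) is None:
            return fn(*args, **kwargs)  # not opted in: no timing overhead
        start = time.time()
        result = fn(*args, **kwargs)
        wrapper.last_ms = (time.time() - start) * 1000.0
        return result

    wrapper.last_ms = None
    return wrapper


timed = profile_cputime("pkg.sub.work", lambda x: x + 1)
assert timed(1) == 2
assert timed.last_ms is not None and timed.last_ms >= 0.0
```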

View File

@ -1,108 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Defines a decorator for profiling the healthnmon service
Uses heapy (guppy) for memory profiling
"""
from healthnmon import log as logging
from oslo.config import cfg
import traceback
import functools
import os
LOG = logging.getLogger('healthnmon.profiler')
CONF = cfg.CONF
h = None
mem_profile_path = None
hpy = None
modules_profiling_status = {}
setref = None
def profile_memory_decorator(method, fn):
    """ Decorator for logging which is used from utils.monkey_patch()
    :param method: name of the method
    :param fn: the function object
:returns: function -- decorated function
"""
@functools.wraps(fn)
def profile_memory(*args, **kwarg):
status = modules_profiling_status[method]
if status:
import_guupy()
LOG.info(_('Start memory profiling'))
init_mem_profiler()
rt = fn(*args, **kwarg)
mem_profile()
LOG.info(_('End memory profiling'))
return rt
else:
return fn(*args, **kwarg)
return profile_memory
def import_guupy():
global hpy
if hpy is None:
guppy = __import__('guppy', globals(), locals(),
['hpy'], -1)
hpy = guppy.hpy
def init_mem_profiler():
    """ Initializes the heapy module used for memory profiling """
global h
if h is None:
h = hpy()
_init_mem_profile_path()
def _init_mem_profile_path():
global mem_profile_path
mem_profile_path = _get_memprofile_dumpfile('healthnmon')
if mem_profile_path:
open(mem_profile_path, 'a')
mode = int(CONF.logfile_mode, 8)
os.chmod(mem_profile_path, mode)
def mem_profile():
"""
Sets configuration in heapy to
1) generate and dump memory snapshot
2) set the reference point for next dump
"""
try:
# LOG.debug(_(h.heap()))
h.heap().dump(mem_profile_path)
LOG.debug(_("Dumped the memory profiling data "))
if setref:
LOG.debug(_("Setting the reference for next \
memory profiling data "))
h.setref()
except:
LOG.debug(_('Exception occurred %s ') % traceback.format_exc())
def _get_memprofile_dumpfile(binary=None):
logdir = CONF.healthnmon_log_dir
if logdir:
return '%s_memprofile.hpy' % (os.path.join(logdir, binary),)

View File

@ -1,24 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
The model mappings must be done before any model objects are created,
so the mapping is performed in the resourcemodel package __init__.
"""
from healthnmon.db.sqlalchemy import mapper
mapper.map_models()

View File

@ -1,282 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""generateDs user methods spec module.
This module will be used by generateDs to add
SQLAlchemy reconstructor methods to generated model classes.
Not all fields in the model classes have an SQLAlchemy mapping.
The reconstructor method will add unmapped fields
to the SQLAlchemy constructed objects.
"""
import re
# MethodSpec class used by generateDs.
# See http://www.rexx.com/~dkuhlman/generateDS.html#user-methods
# for more details.
class MethodSpec(object):
def __init__(
self,
name='',
source='',
class_names='',
class_names_compiled=None,
):
"""MethodSpec -- A specification of a method.
Member variables:
name -- The method name
source -- The source code for the method. Must be
indented to fit in a class definition.
class_names -- A regular expression that must match the
class names in which the method is to be inserted.
class_names_compiled -- The compiled regex in class_names.
generateDS.py will do this compile for you.
"""
self.name = name
self.source = source
if class_names is None:
self.class_names = ('.*',)
else:
self.class_names = class_names
if class_names_compiled is None:
self.class_names_compiled = re.compile(self.class_names)
else:
self.class_names_compiled = class_names_compiled
def get_name(self):
return self.name
def set_name(self, name):
self.name = name
def get_source(self):
return self.source
def set_source(self, source):
self.source = source
def get_class_names(self):
return self.class_names
def set_class_names(self, class_names):
self.class_names = class_names
self.class_names_compiled = re.compile(class_names)
def get_class_names_compiled(self):
return self.class_names_compiled
def set_class_names_compiled(self, class_names_compiled):
self.class_names_compiled = class_names_compiled
def match_name(self, class_name):
"""Match against the name of the class currently being generated.
If this method returns True, the method will be inserted in
the generated class.
"""
if self.class_names_compiled.search(class_name):
return True
else:
return False
def get_interpolated_source(self, values_dict):
"""Get the method source code, interpolating values from values_dict
into it. The source returned by this method is inserted into
the generated class.
"""
source = self.source % values_dict
return source
def show(self):
print 'specification:'
print ' name: %s' % (self.name,)
print self.source
print ' class_names: %s' % (self.class_names,)
print ' names pat : %s' \
% (self.class_names_compiled.pattern,)
#
# Method specification for getting the member
# details of the class hierarchy recursively
getallmems_method_spec = MethodSpec(name='get_all_members',
source='''\
@classmethod
def get_all_members(cls):
member_items = %(class_name)s.member_data_items_
if %(class_name)s.superclass != None:
member_items.update(%(class_name)s.superclass.get_all_members())
return member_items
''',
class_names=r'^.*$')
export_to_dictionary_method_spec = MethodSpec(name='export_to_dictionary',
source='''\
def export_to_dictionary(self):
return %(class_name)s._export_to_dictionary(self)
''',
class_names=r'^.*$')
_export_to_dictionary_method_spec = MethodSpec(name='_export_to_dictionary',
source='''\
@classmethod
def _export_to_dictionary(cls, value):
resource_model_module_name = cls.__module__
value_module_name = value.__class__.__module__
if value_module_name == resource_model_module_name:
# This is a resource model object
member_specs = value.get_all_members()
exported = {}
for member_name in member_specs:
member_getter = getattr(value, '_'.join(('get', member_name)))
member_value = member_getter()
member_spec = member_specs.get(member_name)
if member_spec.get_container() == 1:
exported[member_name] = []
for iter_value in member_value:
                if iter_value is not None:
exported[member_name].\
append(cls._export_to_dictionary(iter_value))
else:
exported[member_name].append(None)
else:
exported[member_name] = \
cls._export_to_dictionary(member_value)
return exported
else:
return value
''',
class_names=r'^.*$')
build_from_dictionary_method_spec = MethodSpec(name='build_from_dictionary',
source='''\
@classmethod
def build_from_dictionary(cls, dict):
if dict is None:
return None
model = %(class_name)s()
member_specs = cls.get_all_members()
for member_name in dict.keys():
member_spec = member_specs.get(member_name)
type_name = member_spec.get_data_type()
try:
__import__(cls.__module__)
attribute_class = getattr(sys.modules[cls.__module__],
type_name)
except (ValueError, AttributeError):
attribute_class = None
built_value = None
if attribute_class:
# An attribute which in itself is resource model
if member_spec.get_container() == 1:
values = dict[member_name]
if values is not None:
built_value = []
for value in values:
built_value.append(attribute_class.\
build_from_dictionary(value))
else:
built_value = attribute_class.\
build_from_dictionary(dict[member_name])
else:
built_value = dict[member_name]
member_setter = getattr(model, '_'.join(('set', member_name)))
member_setter(built_value)
return model
''',
class_names=r'^.*$')
export_to_json_method_spec = MethodSpec(name='export_to_json',
source='''\
def export_to_json(self):
import json
return json.dumps(self.export_to_dictionary(), indent=2)
''',
class_names=r'^.*$')
build_from_json_method_spec = MethodSpec(name='build_from_json',
source='''\
@classmethod
def build_from_json(cls,json_str):
import json
return %(class_name)s.build_from_dictionary(json.loads(json_str))
''',
class_names=r'^.*$')
# Method specification for adding reconstructor method
recon_method_spec = MethodSpec(name='init_loader',
source='''\
from sqlalchemy import orm
@orm.reconstructor
def init_loader(self):
from sqlalchemy import orm
objMapper = orm.object_mapper(self)
containedKeys = self.__dict__
requiredkeys = %(class_name)s.get_all_members()
self.extensiontype_ = None
for requiredkey in requiredkeys:
mappedProp = None
try:
mappedProp = objMapper.get_property(requiredkey)
except Exception:
mappedProp = None
if not mappedProp :
if not containedKeys.has_key(requiredkey):
if requiredkeys[requiredkey].get_container() == 1:
setattr(self, requiredkey, [])
else:
setattr(self, requiredkey, None)
''',
class_names=r'^.*$') # Attach to all classes
#
# Provide a list of method specifications.
# As per generateDs framework this list of specifications
# must be named METHOD_SPECS.
#
METHOD_SPECS = (getallmems_method_spec,
export_to_dictionary_method_spec,
_export_to_dictionary_method_spec,
build_from_dictionary_method_spec,
export_to_json_method_spec,
build_from_json_method_spec,
recon_method_spec)
def test():
for spec in METHOD_SPECS:
spec.show()
def main():
test()
if __name__ == '__main__':
main()
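The `MethodSpec` machinery amounts to: match the class name against a compiled regex, interpolate the class name into the source template, and attach the resulting function to the class; a hedged Python 3 sketch (the template, pattern, and classes are invented for illustration):

```python
import re

# Source template with the same %(class_name)s placeholder generateDS uses.
TEMPLATE = '''\
def describe(self):
    return "instance of %(class_name)s"
'''

class_names = re.compile(r'^Vm.*$')  # only attach to matching classes


def maybe_attach(cls):
    """Compile-free sketch of MethodSpec: interpolate the class name into
    the source, exec it, and attach the result when the name matches."""
    if not class_names.search(cls.__name__):
        return cls
    namespace = {}
    exec(TEMPLATE % {"class_name": cls.__name__}, namespace)
    setattr(cls, "describe", namespace["describe"])
    return cls


class VmHost:
    pass


class Subnet:
    pass


maybe_attach(VmHost)
maybe_attach(Subnet)
assert VmHost().describe() == "instance of VmHost"
assert not hasattr(Subnet, "describe")  # name did not match the pattern
```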

View File

@ -1,69 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Script to generate healthnmonResourceModel.py file using generateDs
"""
import subprocess
import sys
def generate_resource_model():
outfile = 'healthnmonResourceModel.py'
xsdfile = 'healthnmonResourceModel.xsd'
usermethodspec = 'generateDs_add_reconstructor_method'
command = \
'generateDS.py -o %s -m --member-specs=dict \
--user-methods=%s -q -f %s' \
% (outfile, usermethodspec, xsdfile)
print 'Generating %s from %s using generateDs' % (outfile, xsdfile)
run_command(command.split())
print 'Model file generation Succeeded'
def die(message, *args):
print >> sys.stderr, message % args
sys.exit(1)
def run_command_with_code(cmd, redirect_output=True,
check_exit_code=True):
"""
Runs a command in an out-of-process shell, returning the
output of that command.
"""
if redirect_output:
stdout = subprocess.PIPE
else:
stdout = None
proc = subprocess.Popen(cmd, stdout=stdout)
output = proc.communicate()[0]
if check_exit_code and proc.returncode != 0:
die('Command "%s" failed.\n%s', ' '.join(cmd), output)
return (output, proc.returncode)
def run_command(cmd, redirect_output=True, check_exit_code=True):
return run_command_with_code(cmd, redirect_output,
check_exit_code)[0]
if __name__ == '__main__':
generate_resource_model()
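`run_command_with_code` is a thin `subprocess.Popen` wrapper that captures stdout and fails loudly on a nonzero exit code; a Python 3 sketch of the same behavior:

```python
import subprocess
import sys


def run_command(cmd, check_exit_code=True):
    """Run a command out of process, return (stdout_bytes, returncode),
    and raise if the exit code is nonzero -- a sketch of
    run_command_with_code() above."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    output = proc.communicate()[0]
    if check_exit_code and proc.returncode != 0:
        raise RuntimeError('Command "%s" failed' % " ".join(cmd))
    return output, proc.returncode


out, code = run_command([sys.executable, "-c", "print('generated')"])
assert code == 0
assert out.strip() == b"generated"
```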

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,265 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
ResourceModelDiff - Handles comparing two resource model objects
and provides a dictionary of added, updated and deleted attributes
"""
from healthnmon import log as logging
logging = logging.getLogger('healthnmon.resourcemodeldiff')
class ResourceModelDiff(object):
"""
    ResourceModelDiff - Handles comparing two resource model objects
    and provides a dictionary of added, updated and deleted attributes
"""
def __init__(self, old_resource_model=None,
new_resource_model=None):
self.old_modelobj = old_resource_model
self.new_modelobj = new_resource_model
def _collate_results(self, result):
"""Method to collate the results"""
out_result = {}
for change_type in result:
temp_dict = {}
for key in result[change_type]:
logging.debug(_('change_type = %s') % change_type)
temp_dict[key] = result[change_type][key]
if len(temp_dict) > 0:
out_result[change_type] = temp_dict
return out_result
def _diff_objects(self, old_obj, new_obj):
"""Unify decision making on the leaf node level."""
res = None
if old_obj.__class__.__module__.startswith(
'healthnmon.resourcemodel.healthnmonResourceModel'):
res_dict = self.diff_resourcemodel(old_obj, new_obj)
if len(res_dict) > 0:
res = res_dict
elif isinstance(old_obj, dict):
# We want to go through the tree post-order
res_dict = self._diff_dicts(old_obj, new_obj)
if len(res_dict) > 0:
res = res_dict
# Now we are on the same level
# different types, new value is new
elif type(old_obj) != type(new_obj):
# In case we have the unicode type for old_obj from db
# and string type in the newly created object,
# both having the same values
if ((type(old_obj) in [str, unicode]) and
(type(new_obj) in [str, unicode])):
primitive_diff = self._diff_primitives(old_obj, new_obj)
if primitive_diff is not None:
res = primitive_diff
# In all the other cases, if type changes return the new obj.
else:
res = new_obj
elif isinstance(old_obj, list):
# recursive arrays
# we can be sure now, that both new and old are
# of the same type
res_list = self._diff_lists(old_obj, new_obj)
if len(res_list) > 0:
res = res_list
else:
# the only thing remaining are scalars
primitive_diff = self._diff_primitives(old_obj, new_obj)
if primitive_diff is not None:
res = primitive_diff
return res
def _diff_primitives(
self,
old,
new,
name=None,
):
"""
Method to check diff of primitive types
"""
if old != new:
return new
else:
return None
def _diff_lists(self, old_list, new_list):
"""
Method to check diff of list types
As we are processing two ResourceModel objects both
    the lists should be of the same type
"""
result = {'_add': {}, '_delete': {}, '_update': {}}
if len(old_list) > 0 and hasattr(old_list[0], 'id'):
addlistindex = 0
removelistindex = 0
updatelistindex = 0
for old_idx in range(len(old_list)):
obj_not_in_new_list = True
for new_idx in range(len(new_list)):
if getattr(old_list[old_idx], 'id') \
== getattr(new_list[new_idx], 'id'):
obj_not_in_new_list = False
res = self._diff_objects(old_list[old_idx],
new_list[new_idx])
if res is not None:
result['_update'
][getattr(new_list[new_idx], 'id'
)] = res
updatelistindex += 1
break
if obj_not_in_new_list:
result['_delete'][getattr(old_list[old_idx], 'id'
)] = old_list[old_idx]
removelistindex += 1
for new_idx in range(len(new_list)):
obj_not_in_old_list = True
for old_idx in range(len(old_list)):
if getattr(old_list[old_idx], 'id') \
== getattr(new_list[new_idx], 'id'):
obj_not_in_old_list = False
break
if obj_not_in_old_list:
result['_add'][getattr(new_list[new_idx], 'id')] = \
new_list[new_idx]
addlistindex += 1
else:
shorterlistlen = min(len(old_list), len(new_list))
for idx in range(shorterlistlen):
res = self._diff_objects(old_list[idx], new_list[idx])
if res is not None:
result['_update'][idx] = res
# the rest of the larger array
if shorterlistlen == len(old_list):
for idx in range(shorterlistlen, len(new_list)):
result['_add'][idx] = new_list[idx]
else:
for idx in range(shorterlistlen, len(old_list)):
result['_delete'][idx] = old_list[idx]
return self._collate_results(result)
def _diff_dicts(self, old_obj=None, new_obj=None):
"""
Method to check diff of dictionary types
As we are processing two ResourceModel objects both the
    dictionaries should be of the same type
"""
old_keys = set()
new_keys = set()
if old_obj and len(old_obj) > 0:
old_keys = set(old_obj.keys())
if new_obj and len(new_obj) > 0:
new_keys = set(new_obj.keys())
keys = old_keys | new_keys
result = {'_add': {}, '_delete': {}, '_update': {}}
for attribute_name in keys:
# old_obj is missing
if attribute_name not in old_obj:
result['_add'][attribute_name] = new_obj[attribute_name]
elif attribute_name not in new_obj:
# new_obj is missing
result['_delete'][attribute_name] = \
old_obj[attribute_name]
else:
res = self._diff_objects(old_obj[attribute_name],
new_obj[attribute_name])
if res is not None:
result['_update'][attribute_name] = res
return self._collate_results(result)
def diff_resourcemodel(self, old_obj=None, new_obj=None):
"""
Method to check diff of two resource model types
As we are processing two ResourceModel objects
    both objects should be of the same type
"""
if not old_obj and hasattr(self, 'old_modelobj'):
old_obj = self.old_modelobj
if not new_obj and hasattr(self, 'new_modelobj'):
new_obj = self.new_modelobj
old_obj_spec_dict = old_obj.get_all_members()
new_obj_spec_dict = new_obj.get_all_members()
old_keys = set()
new_keys = set()
if old_obj_spec_dict and len(old_obj_spec_dict) > 0:
old_keys = set(old_obj_spec_dict.keys())
if new_obj_spec_dict and len(new_obj_spec_dict) > 0:
new_keys = set(new_obj_spec_dict.keys())
keys = old_keys | new_keys
result = {'_add': {}, '_delete': {}, '_update': {}}
for attribute_name in keys:
# old_obj is missing
if attribute_name not in old_keys:
result['_add'][attribute_name] = getattr(new_obj,
attribute_name)
elif attribute_name not in new_keys:
# new_obj is missing
result['_delete'][attribute_name] = getattr(old_obj,
attribute_name)
else:
res = self._diff_objects(getattr(old_obj,
attribute_name),
getattr(new_obj, attribute_name))
if res is not None:
result['_update'][attribute_name] = res
return self._collate_results(result)
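The dictionary branch of the diff reduces to a three-way key walk; a simplified, self-contained Python 3 sketch (recursing only into plain dicts, whereas the original also handles model objects and lists):

```python
def diff_dicts(old, new):
    """Three-way dictionary diff in the spirit of _diff_dicts() above:
    keys only in `new` are adds, keys only in `old` are deletes, and
    keys whose values changed are updates (compared with !=)."""
    result = {"_add": {}, "_delete": {}, "_update": {}}
    for key in set(old) | set(new):
        if key not in old:
            result["_add"][key] = new[key]
        elif key not in new:
            result["_delete"][key] = old[key]
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            nested = diff_dicts(old[key], new[key])
            if nested:
                result["_update"][key] = nested
        elif old[key] != new[key]:
            result["_update"][key] = new[key]
    # Collate: drop empty change types, as _collate_results() does.
    return {k: v for k, v in result.items() if v}


old = {"name": "vm1", "state": "on", "disk": {"size": 10}}
new = {"name": "vm1", "state": "off", "disk": {"size": 20}, "nic": "eth0"}
d = diff_dicts(old, new)
assert d["_add"] == {"nic": "eth0"}
assert d["_update"]["state"] == "off"
assert d["_update"]["disk"] == {"_update": {"size": 20}}
assert "_delete" not in d
```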

View File

@ -1,55 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
healthnmon - Context details of the resource manager managing the compute node
"""
from healthnmon import log as logging
LOG = logging.getLogger('healthnmon.driver')
class ComputeRMContext(object):
"""Holds the compute node context for a particular compute
node that is being managed in the zone."""
def __init__(
self,
rmType=None,
rmIpAddress=None,
rmUserName=None,
rmPassword=None,
rmPort=None,
):
self.rmType = rmType
self.rmIpAddress = rmIpAddress
self.rmUserName = rmUserName
self.rmPassword = rmPassword
self.rmPort = rmPort
def __getattribute__(self, name):
try:
return super(ComputeRMContext, self).__getattribute__(name)
except AttributeError, ex:
raise ex
def __eq__(self, other):
return self.rmIpAddress == other.rmIpAddress
def __hash__(self):
return hash(self.rmIpAddress)
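Defining `__eq__` and `__hash__` on the management IP means two contexts for the same host collapse to one entry in sets and dict keys; a Python 3 sketch of the idea (class name and fields are illustrative):

```python
class HostContext:
    """Identity by management IP, as ComputeRMContext does: contexts with
    equal IPs compare equal and hash together, so duplicates collapse."""

    def __init__(self, ip, username=None):
        self.ip = ip
        self.username = username

    def __eq__(self, other):
        return self.ip == other.ip

    def __hash__(self):
        return hash(self.ip)


a = HostContext("10.0.0.5", "admin")
b = HostContext("10.0.0.5", "operator")  # different credentials, same host
assert a == b
assert len({a, b}) == 1  # set deduplicates by IP
```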

View File

@ -1,124 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Base Unit Test Class for Healthnmon
"""
from nova.openstack.common import timeutils as utils
import unittest
import mox
import shutil
import stubout
import os
from oslo.config import cfg
import healthnmon
CONF = cfg.CONF
healthnmon_path = os.path.abspath(
os.path.join(healthnmon.get_healthnmon_location(), '../'))
class TestCase(unittest.TestCase):
"""Test case base class for all unit tests."""
def setUp(self):
"""Run before each test method to initialize test environment."""
super(TestCase, self).setUp()
self.start = utils.utcnow()
shutil.copyfile(os.path.join(healthnmon_path, CONF.sqlite_clean_db),
os.path.join(healthnmon_path, CONF.sqlite_db))
# emulate some of the mox stuff, we can't use the metaclass
# because it screws with our generators
self.mox = mox.Mox()
self.stubs = stubout.StubOutForTesting()
self.injected = []
self._services = []
self._overridden_opts = []
def tearDown(self):
"""Runs after each test method to tear down test environment."""
try:
self.mox.UnsetStubs()
self.stubs.UnsetAll()
self.stubs.SmartUnsetAll()
self.mox.VerifyAll()
super(TestCase, self).tearDown()
finally:
# Clean out fake_rabbit's queue if we used it
# Reset any overridden CONF
self.reset_flags()
# Stop any timers
for x in self.injected:
try:
x.stop()
except AssertionError:
pass
# Kill any services
for x in self._services:
try:
x.kill()
except Exception:
pass
# Delete attributes that don't start with _ so they don't pin
# memory around unnecessarily for the duration of the test
# suite
for key in [k for k in self.__dict__.keys() if k[0] != '_']:
del self.__dict__[key]
def flags(self, **kw):
"""Override flag variables for a test."""
group = kw.pop('group', None)
module = kw.pop('module', None)
for k, v in kw.iteritems():
if module:
CONF.import_opt(k, module, group)
CONF.set_override(k, v, group)
self._overridden_opts.append((k, group))
def reset_flags(self):
"""Resets all flag variables for the test.
Runs after each test.
"""
for (k, group) in self._overridden_opts:
CONF.clear_override(k, group)
self._overridden_opts = []
def assertIn(self, a, b, *args, **kwargs):
"""Python < v2.7 compatibility. Assert 'a' in 'b'"""
try:
f = super(TestCase, self).assertIn
except AttributeError:
self.assertTrue(a in b, *args, **kwargs)
else:
f(a, b, *args, **kwargs)
def assertNotIn(self, a, b, *args, **kwargs):
"""Python < v2.7 compatibility. Assert 'a' NOT in 'b'"""
try:
f = super(TestCase, self).assertNotIn
except AttributeError:
self.assertFalse(a in b, *args, **kwargs)
else:
f(a, b, *args, **kwargs)
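The `flags()`/`reset_flags()` pair above implements a common override-and-restore pattern for test configuration. A minimal, self-contained sketch of that pattern follows; `FakeConf` and `FakeTestCase` are illustrative stand-ins, not the real oslo.config or healthnmon APIs:

```python
# Minimal stand-in mimicking the set_override/clear_override semantics
# that TestCase.flags() and reset_flags() rely on. Illustrative only;
# this is NOT the oslo.config API.

class FakeConf(object):
    def __init__(self, defaults):
        self._defaults = dict(defaults)
        self._overrides = {}

    def __getattr__(self, name):
        # Overrides shadow defaults, exactly as flags() expects.
        try:
            return self._overrides[name]
        except KeyError:
            pass
        try:
            return self._defaults[name]
        except KeyError:
            raise AttributeError(name)

    def set_override(self, name, value):
        self._overrides[name] = value

    def clear_override(self, name):
        self._overrides.pop(name, None)


class FakeTestCase(object):
    def __init__(self, conf):
        self.conf = conf
        self._overridden_opts = []

    def flags(self, **kw):
        """Override config options for a test, recording each override."""
        for k, v in kw.items():
            self.conf.set_override(k, v)
            self._overridden_opts.append(k)

    def reset_flags(self):
        """Undo every override recorded by flags()."""
        for k in self._overridden_opts:
            self.conf.clear_override(k)
        self._overridden_opts = []
```

Recording each override as it is made means teardown only has to replay the list, so tests cannot leak configuration into each other.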


@@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,424 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Colorizer Code is borrowed from Twisted:
# Copyright (c) 2001-2010 Twisted Matrix Laboratories.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""Unittest runner for Healthnmon.
To run all tests
python healthnmon/testing/runner.py
To run a single test module:
python healthnmon/testing/runner.py test_compute
or
python healthnmon/testing/runner.py api.test_wsgi
To run a single test:
python healthnmon/testing/runner.py
test_compute:ComputeTestCase.test_run_terminate
"""
import gettext
import heapq
import os
import unittest
import sys
import time
import eventlet
from nose import config
from nose import core
from nose import result
gettext.install('healthnmon', unicode=1)
reldir = os.path.join(os.path.dirname(__file__), '..', '..')
absdir = os.path.abspath(reldir)
sys.path.insert(0, absdir)
from oslo.config import cfg
from nova.openstack.common import log as logging
CONF = cfg.CONF
class _AnsiColorizer(object):
"""
A colorizer is an object that loosely wraps around a stream, allowing
callers to write text to the stream in a particular color.
Colorizer classes must implement C{supported()} and C{write(text, color)}.
"""
_colors = dict(
black=30,
red=31,
green=32,
yellow=33,
blue=34,
magenta=35,
cyan=36,
white=37,
)
def __init__(self, stream):
self.stream = stream
def supported(cls, stream=sys.stdout):
"""
A class method that returns True if the current platform supports
coloring terminal output using this method. Returns False otherwise.
"""
if not stream.isatty():
return False # auto color only on TTYs
try:
import curses
except ImportError:
return False
else:
try:
try:
return curses.tigetnum('colors') > 2
except curses.error:
curses.setupterm()
return curses.tigetnum('colors') > 2
            except Exception:
                # guess false in case of error
                return False
supported = classmethod(supported)
def write(self, text, color):
"""
Write the given text to the stream in the given color.
@param text: Text to be written to the stream.
@param color: A string label for a color. e.g. 'red', 'white'.
"""
color = self._colors[color]
self.stream.write('\x1b[%s;1m%s\x1b[0m' % (color, text))
class _Win32Colorizer(object):
"""
See _AnsiColorizer docstring.
"""
def __init__(self, stream):
import win32console as win
(red, green, blue, bold) = (win.FOREGROUND_RED,
win.FOREGROUND_GREEN,
win.FOREGROUND_BLUE,
win.FOREGROUND_INTENSITY)
self.stream = stream
self.screenBuffer = win.GetStdHandle(win.STD_OUT_HANDLE)
self._colors = {
'normal': red | green | blue,
'red': red | bold,
'green': green | bold,
'blue': blue | bold,
'yellow': red | green | bold,
'magenta': red | blue | bold,
'cyan': green | blue | bold,
'white': red | green | blue | bold,
}
def supported(cls, stream=sys.stdout):
try:
import win32console
screenBuffer = \
win32console.GetStdHandle(win32console.STD_OUT_HANDLE)
except ImportError:
return False
import pywintypes
try:
screenBuffer.SetConsoleTextAttribute(
win32console.FOREGROUND_RED
| win32console.FOREGROUND_GREEN
| win32console.FOREGROUND_BLUE)
except pywintypes.error:
return False
else:
return True
supported = classmethod(supported)
def write(self, text, color):
color = self._colors[color]
self.screenBuffer.SetConsoleTextAttribute(color)
self.stream.write(text)
        self.screenBuffer.SetConsoleTextAttribute(self._colors['normal'])
class _NullColorizer(object):
"""
See _AnsiColorizer docstring.
"""
def __init__(self, stream):
self.stream = stream
def supported(cls, stream=sys.stdout):
return True
supported = classmethod(supported)
def write(self, text, color):
self.stream.write(text)
def get_elapsed_time_color(elapsed_time):
if elapsed_time > 1.0:
return 'red'
elif elapsed_time > 0.25:
return 'yellow'
else:
return 'green'
class HealthnmonTestResult(result.TextTestResult):
def __init__(self, *args, **kw):
self.show_elapsed = kw.pop('show_elapsed')
result.TextTestResult.__init__(self, *args, **kw)
self.num_slow_tests = 5
self.slow_tests = [] # this is a fixed-sized heap
self._last_case = None
self.colorizer = None
# NOTE(vish): reset stdout for the terminal check
stdout = sys.stdout
sys.stdout = sys.__stdout__
for colorizer in [_Win32Colorizer, _AnsiColorizer,
_NullColorizer]:
if colorizer.supported():
self.colorizer = colorizer(self.stream)
break
sys.stdout = stdout
# NOTE(lorinh): Initialize start_time in case a sqlalchemy-migrate
# error results in it failing to be initialized later. Otherwise,
# _handleElapsedTime will fail, causing the wrong error message to
# be outputted.
self.start_time = time.time()
def getDescription(self, test):
return str(test)
def _handleElapsedTime(self, test):
self.elapsed_time = time.time() - self.start_time
item = (self.elapsed_time, test)
# Record only the n-slowest tests using heap
if len(self.slow_tests) >= self.num_slow_tests:
heapq.heappushpop(self.slow_tests, item)
else:
heapq.heappush(self.slow_tests, item)
def _writeElapsedTime(self, test):
color = get_elapsed_time_color(self.elapsed_time)
self.colorizer.write(' %.2f' % self.elapsed_time, color)
def _writeResult(
self,
test,
long_result,
color,
short_result,
success,
):
if self.showAll:
self.colorizer.write(long_result, color)
if self.show_elapsed and success:
self._writeElapsedTime(test)
self.stream.writeln()
elif self.dots:
self.stream.write(short_result)
self.stream.flush()
# NOTE(vish): copied from unittest with edit to add color
def addSuccess(self, test):
unittest.TestResult.addSuccess(self, test)
self._handleElapsedTime(test)
self._writeResult(test, 'OK', 'green', '.', True)
# NOTE(vish): copied from unittest with edit to add color
def addFailure(self, test, err):
unittest.TestResult.addFailure(self, test, err)
self._handleElapsedTime(test)
self._writeResult(test, 'FAIL', 'red', 'F', False)
# NOTE(vish): copied from nose with edit to add color
def addError(self, test, err):
"""Overrides normal addError to add support for
errorClasses. If the exception is a registered class, the
error will be added to the list for that class, not errors.
"""
self._handleElapsedTime(test)
stream = getattr(self, 'stream', None)
(ec, ev, tb) = err
try:
exc_info = self._exc_info_to_string(err, test)
except TypeError:
# 2.3 compat
exc_info = self._exc_info_to_string(err)
for (cls, (storage, label, isfail)) in \
self.errorClasses.items():
if result.isclass(ec) and issubclass(ec, cls):
if isfail:
test.passed = False
storage.append((test, exc_info))
# Might get patched into a streamless result
if stream is not None:
if self.showAll:
message = [label]
detail = result._exception_detail(err[1])
if detail:
message.append(detail)
stream.writeln(': '.join(message))
elif self.dots:
stream.write(label[:1])
return
self.errors.append((test, exc_info))
test.passed = False
if stream is not None:
self._writeResult(test, 'ERROR', 'red', 'E', False)
def startTest(self, test):
unittest.TestResult.startTest(self, test)
self.start_time = time.time()
current_case = test.test.__class__.__name__
if self.showAll:
if current_case != self._last_case:
self.stream.writeln(current_case)
self._last_case = current_case
self.stream.write(' %s'
% str(test.test._testMethodName).ljust(60))
self.stream.flush()
class HealthnmonTestRunner(core.TextTestRunner):
def __init__(self, *args, **kwargs):
self.show_elapsed = kwargs.pop('show_elapsed')
core.TextTestRunner.__init__(self, *args, **kwargs)
def _makeResult(self):
return HealthnmonTestResult(self.stream, self.descriptions,
self.verbosity, self.config,
show_elapsed=self.show_elapsed)
def _writeSlowTests(self, result_):
# Pare out 'fast' tests
slow_tests = [item for item in result_.slow_tests
if get_elapsed_time_color(item[0]) != 'green']
if slow_tests:
slow_total_time = sum(item[0] for item in slow_tests)
self.stream.writeln('Slowest %i tests took %.2f secs:'
% (len(slow_tests), slow_total_time))
for (elapsed_time, test) in sorted(slow_tests,
reverse=True):
time_str = '%.2f' % elapsed_time
self.stream.writeln(' %s %s' % (time_str.ljust(10),
test))
def run(self, test):
result_ = core.TextTestRunner.run(self, test)
if self.show_elapsed:
self._writeSlowTests(result_)
return result_
def run():
# This is a fix to allow the --hide-elapsed flag while accepting
# arbitrary nosetest flags as well
argv = [x for x in sys.argv if x != '--hide-elapsed']
hide_elapsed = argv != sys.argv
logging.setup("healthnmon")
    # If any argument looks like a test name but doesn't have
    # "healthnmon.tests" in front of it, automatically add that
    # so we don't have to type as much
for (i, arg) in enumerate(argv):
if arg.startswith('test_'):
argv[i] = 'healthnmon.tests.%s' % arg
# testdir = os.path.abspath(os.path.join("healthnmon", "tests"))
testdir = os.path.abspath('healthnmon')
c = config.Config(stream=sys.stdout, env=os.environ, verbosity=3,
workingDir=testdir,
plugins=core.DefaultPluginManager())
runner = HealthnmonTestRunner(stream=c.stream,
verbosity=c.verbosity, config=c,
show_elapsed=not hide_elapsed)
sys.exit(not core.run(config=c, testRunner=runner, argv=argv))
if __name__ == '__main__':
eventlet.monkey_patch(
all=False, os=True, select=True, socket=True, thread=False, time=True)
run()
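`_handleElapsedTime` above keeps only the N slowest tests by push-popping a fixed-size min-heap: once the heap is full, each new timing evicts the current fastest entry. A standalone sketch of that trick (the timings here are made up):

```python
# Fixed-size heap for tracking the N largest items in a stream,
# as used by HealthnmonTestResult to record the slowest tests.

import heapq


def record_slowest(timings, n=5):
    """Return the n slowest (elapsed_time, test_name) pairs, slowest first."""
    slow = []  # min-heap of at most n items
    for item in timings:
        if len(slow) >= n:
            heapq.heappushpop(slow, item)  # evict the current fastest
        else:
            heapq.heappush(slow, item)
    return sorted(slow, reverse=True)
```

Because the heap never grows past `n`, this runs in O(len(timings) * log n) time and O(n) space, regardless of how many tests are timed.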


@@ -1,746 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def open(name):
return virConnect()
def openReadOnly(name):
return virConnect()
def openAuth(uri, auth, flag):
return virConnect()
def virEventRegisterDefaultImpl():
pass
def virEventRunDefaultImpl():
pass
VIR_DOMAIN_EVENT_ID_LIFECYCLE = 0
VIR_DOMAIN_EVENT_ID_REBOOT = 1
VIR_DOMAIN_EVENT_ID_DISK_CHANGE = 9
VIR_NODE_CPU_STATS_ALL_CPUS = -1
VIR_NODE_MEMORY_STATS_ALL_CELLS = -1
class virConnect:
def __init__(self):
self.storagePools = ['dirpool', 'default', 'iscsipool']
self.interval = None
self.count = None
def getCapabilities(self):
return """<capabilities>
<host>
<uuid>34353438-3934-434e-3738-313630323543</uuid>
<cpu>
<arch>x86_64</arch>
<model>Opteron_G2</model>
<vendor>AMD</vendor>
<topology sockets='2' cores='2' threads='1'/>
<feature name='cr8legacy'/>
<feature name='extapic'/>
<feature name='cmp_legacy'/>
<feature name='3dnow'/>
<feature name='3dnowext'/>
<feature name='fxsr_opt'/>
<feature name='mmxext'/>
<feature name='ht'/>
<feature name='vme'/>
</cpu>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
</uri_transports>
</migration_features>
<secmodel>
<model>apparmor</model>
<doi>0</doi>
</secmodel>
</host>
<guest>
<os_type>hvm</os_type>
<arch name='i686'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu</emulator>
<machine>pc-0.14</machine>
<machine canonical='pc-0.14'>pc</machine>
<machine>pc-0.13</machine>
<machine>pc-0.12</machine>
<machine>pc-0.11</machine>
<machine>pc-0.10</machine>
<machine>isapc</machine>
<domain type='qemu'>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<pae/>
<nonpae/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<machine>pc-0.14</machine>
<machine canonical='pc-0.14'>pc</machine>
<machine>pc-0.13</machine>
<machine>pc-0.12</machine>
<machine>pc-0.11</machine>
<machine>pc-0.10</machine>
<machine>isapc</machine>
<domain type='qemu'>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
</capabilities>
"""
def getSysinfo(self, flag):
return """<sysinfo type='smbios'>
<bios>
<entry name='vendor'>HP</entry>
<entry name='version'>A13</entry>
<entry name='date'>02/21/2008</entry>
</bios>
<system>
<entry name='manufacturer'>HP</entry>
<entry name='product'>ProLiant BL465c G1 </entry>
<entry name='version'>Not Specified</entry>
<entry name='serial'>CN7816025C </entry>
<entry name='uuid'>34353438-3934-434E-3738-313630323543</entry>
<entry name='sku'>454894-B21 </entry>
<entry name='family'>ProLiant</entry>
</system>
</sysinfo>"""
def getInfo(self):
return [
'x86_64',
3960,
4,
1000,
1,
2,
2,
1,
]
def getHostname(self):
return 'ubuntu164.vmm.hp.com'
def listDefinedDomains(self):
return ['ReleaseBDevEnv']
def listDomainsID(self):
return [1]
def lookupByName(self, name):
return virDomain()
def lookupByID(self, domId):
return virDomain()
def lookupByUUIDString(self, uuid):
return virDomain()
def storagePoolLookupByName(self, name):
if name == 'default':
return virStoragePool()
elif name == 'nova-storage-pool':
return virDirPool()
else:
return virStoragePoolInactive()
def storageVolLookupByPath(self, name):
return virStorageVol()
def listStoragePools(self):
return self.storagePools
def listDefinedStoragePools(self):
return ['inactivePool']
def storagePoolDefineXML(self, xml, flag):
self.storagePools.append('nova-storage-pool')
return virDirPool()
def listNetworks(self):
return ['default']
def listDefinedNetworks(self):
return ['inactiveNetwork']
def listInterfaces(self):
return ['br100', 'eth0', 'lo']
def listDefinedInterfaces(self):
return ['inactiveInterface']
def networkLookupByName(self, name):
if name == 'default':
return virLibvirtNetwork()
elif name == 'staticNw':
return virLibvirtStaticNw()
else:
return virLibvirtInactiveNw()
def interfaceLookupByName(self, name):
if name == 'br100':
return virLibvirtInterface()
elif name == 'eth0':
return virLibvirtInterfaceEth0()
elif name == 'inactiveInterface':
return virLibvirtInterfaceInactive()
else:
return virLibvirtInterfaceLo()
def getFreeMemory(self):
return 0
def getType(self):
return 'QEMU'
def getVersion(self):
return 14001
def getCPUStats(self, cpuNum, flags):
        return {'kernel': 5238340000000L, 'idle': 453151690000000L,
                'user': 2318860000000L, 'iowait': 20620000000L}
def getMemoryStats(self, cellNum, flags):
        return {'cached': 140320L, 'total': 32875672L,
                'buffers': 36032L, 'free': 31977592L}
def domainEventRegisterAny(self, dom, eventID, cb, opaque):
"""Adds a Domain Event Callback. Registering for a domain
callback will enable delivery of the events """
return 1
def domainEventDeregisterAny(self, callbackid):
return 1
def close(self):
return 0
def setKeepAlive(self, interval, count):
self.interval = interval
self.count = count
return 1
def isAlive(self):
return self.interval and self.count
class virDomain:
def UUIDString(self):
return '25f04dd3-e924-02b2-9eac-876e3c943262'
def XMLDesc(self, flag):
return """<domain type='qemu' id='1'>
<name>TestVirtMgrVM7</name>
<uuid>25f04dd3-e924-02b2-9eac-876e3c943262</uuid>
<memory>1048576</memory>
<currentMemory>1048576</currentMemory>
<vcpu>1</vcpu>
<os>
<type arch='x86_64' machine='pc-0.14'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/TestVirtMgrVM7.img'/>
<target dev='hda' bus='scsi'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' unit='0'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/disk/by-path/ip-10.10.4.21:3260-iscsi-iqn.\
2010-10.org.openstack:volume-00000001-lun-1'/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' \
function='0x0'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source junk='/dev/disk/by-path/ip-10.10.4.21:3260-iscsi-iqn.\
2010-10.org.openstack:volume-00000001-lun-1'/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' \
function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/ubuntu164/vmdks/ubuntu-11.\
10-desktop-i386.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' unit='0'/>
</disk>
<controller type='scsi' index='0'>
<alias name='ide0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' \
function='0x1'/>
</controller>
<interface type='network'>
<mac address='52:54:00:4c:82:63'/>
<source network='default'/>
<target dev='vnet0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' \
function='0x0'/>
<filterref \
filter='nova-instance-instance-000002f2-fa163e7ab3f9'>
<parameter name='DHCPSERVER' value='10.1.1.22'/>
<parameter name='IP' value='10.1.1.19'/>
</filterref>
</interface>
<interface type='bridge'>
<mac address='52:54:00:4c:82:63'/>
<source network='default'/>
<target dev='br100'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' \
function='0x0'/>
<filterref filter='nova-instance-instance-000002f2-fa163e1b7489'>
<parameter name='DHCPSERVER' value='10.2.1.22'/>
<parameter name='IP' value='10.2.1.20'/>
</filterref>
</interface>
<serial type='pty'>
<source path='/dev/pts/1'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/1'>
<source path='/dev/pts/1'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='5900' autoport='yes'/>
<sound model='ich6'>
<alias name='sound0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' \
function='0x0'/>
</sound>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' \
function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' \
function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='apparmor'>
<label>libvirt-25f04dd3-e924-02b2-9eac-876e3c943262</label>
<imagelabel>libvirt-25f04dd3-e924-02b2-9eac-876e3c943262\
</imagelabel>
</seclabel>
</domain>
"""
def name(self):
return 'TestVirtMgrVM7'
def ID(self):
return 1
def blockInfo(self, path, flags):
return [100, 200, 300]
def blockStats(self, path):
return (6492L, 191928832L, 1600L, 14091264L, -1L)
def interfaceStats(self, path):
return (
56821L,
1063L,
0L,
0L,
4894L,
30L,
0L,
0L,
)
def info(self):
return [1, 2097152L, 2097152L, 1, 372280000000L]
def state(self, flag):
return [1, 1]
def isActive(self):
return 0
def autostart(self):
return 1
class virStoragePool:
def UUIDString(self):
return '95f7101b-892c-c388-867a-8340e5fea27a'
def XMLDesc(self, flag):
return """<pool type='dir'>
<name>default</name>
<uuid>95f7101b-892c-c388-867a-8340e5fea27a</uuid>
<capacity>113595187200</capacity>
<allocation>11105746944</allocation>
<available>102489440256</available>
<source>
</source>
<target>
<path>/var/lib/libvirt/images</path>
<permissions>
<mode>0700</mode>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>"""
def name(self):
return 'default'
def isActive(self):
return 1
def refresh(self, data):
pass
class virDirPool:
def UUIDString(self):
return '95f7101b-892c-c388-867a-8340e5feadir'
def XMLDesc(self, flag):
return """<pool type='dir'>
<name>nova-storage-pool</name>
<uuid>95f7101b-892c-c388-867a-8340e5feadir</uuid>
<capacity>113595187200</capacity>
<allocation>11105746944</allocation>
<available>102489440256</available>
<source>
</source>
<target>
<path>/var/lib/nova/instances</path>
<permissions>
<mode>0700</mode>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>"""
def name(self):
return 'nova-storage-pool'
def setAutostart(self, flag):
pass
def build(self, flag):
pass
def create(self, flag):
pass
def isActive(self):
return 1
def refresh(self, data):
pass
class virStoragePoolInactive:
def UUIDString(self):
return '95f7101b-892c-c388-867a-8340e5fea27x'
def XMLDesc(self, flag):
return """<pool type='dir'>
<name>inactivePool</name>
<uuid>95f7101b-892c-c388-867a-8340e5fea27a</uuid>
<capacity>113595187200</capacity>
<allocation>11105746944</allocation>
<available>102489440256</available>
<source>
</source>
<target>
<path>/var/lib/libvirt/images</path>
<permissions>
<mode>0700</mode>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>"""
def name(self):
return 'inactivePool'
def isActive(self):
return 0
def refresh(self, data):
pass
class virStorageVol:
def storagePoolLookupByVolume(self):
return virStoragePool()
def UUIDString(self):
return '95f7101b-892c-c388-867a-8340e5fea27x'
class virLibvirtNetwork:
def UUIDString(self):
return '3fbfbefb-17dd-07aa-2dac-13afbedf3be3'
def XMLDesc(self, flag):
return """<network>
<name>default</name>
<uuid>3fbfbefb-17dd-07aa-2dac-13afbedf3be3</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0' />
<mac address='52:54:00:34:14:AE'/> \
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254' />
</dhcp>
</ip>
</network>"""
def name(self):
return 'default'
def autostart(self):
return 0
def isActive(self):
return 1
class virLibvirtStaticNw:
def UUIDString(self):
return '3fbfbefb-17dd-07aa-2dac-13afbedf3be9'
def XMLDesc(self, flag):
return """<network>
<name>staticNw</name>
<uuid>3fbfbefb-17dd-07aa-2dac-13afbedf3be3</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0' />
<mac address='52:54:00:34:14:AE'/> \
<ip address='192.168.122.1' netmask='255.255.255.0'>
</ip>
</network>"""
def name(self):
return 'staticNw'
def autostart(self):
return 0
def isActive(self):
return 1
class virLibvirtInactiveNw:
def UUIDString(self):
return '3fbfbefb-17dd-07aa-2dac-13afbedf3be9'
def XMLDesc(self, flag):
return """<network>
<name>inactiveNetwork</name>
<uuid>3fbfbefb-17dd-07aa-2dac-13afbedf3be3</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0' />
<mac address='52:54:00:34:14:AE'/> \
<ip address='192.168.122.1' netmask='255.255.255.0'>
</ip>
</network>"""
def name(self):
return 'inactiveNw'
def autostart(self):
return 1
def isActive(self):
return 0
# class virNetwork:
#
# def networkLookupByVolume(self):
# return virLibvirtNetwork()
class virLibvirtInterface:
def XMLDesc(self, flag):
return """<interface type='bridge' name='br100'>
<protocol family='ipv4'>
<ip address='10.1.1.3' prefix='24'/>
<ip address='10.1.1.14' prefix='24'/>
</protocol>
<protocol family='ipv6'>
<ip address='fe80::223:7dff:fe34:dbf0' prefix='64'/>
</protocol>
<bridge>
<interface type='ethernet' name='vnet0'>
<mac address='fe:54:00:12:e3:90'/> \
</interface> \
<interface type='ethernet' name='eth1'>
<mac address='00:23:7d:34:db:f0'/> \
</interface>
</bridge> \
</interface>"""
def name(self):
return 'br100'
def isActive(self):
return 1
def MACString(self):
return '00:23:7d:34:db:f0'
class virLibvirtInterfaceEth0:
def XMLDesc(self, flag):
return """<interface type='ethernet' name='eth0'>
<mac address='00:23:7d:34:bb:e8'/>
<protocol family='ipv4'>
<ip address='10.10.155.140' prefix='16'/> \
</protocol>
<protocol family='ipv6'>
<ip address='fe80::223:7dff:fe34:bbe8' prefix='64'/> \
</protocol> \
</interface> """
class virLibvirtInterfaceLo:
def XMLDesc(self, flag):
return """<interface type='ethernet' name='lo'>
<protocol family='ipv4'>
<ip address='127.0.0.1' prefix='8'/>
<ip address='169.254.169.254' prefix='32'/>
</protocol>
<protocol family='ipv6'>
<ip address='::1' prefix='128'/>
</protocol>
</interface> """
class virLibvirtInterfaceInactive:
def XMLDesc(self, flag):
return """<interface type='bridge' name='inactiveInterface'>
<protocol family='ipv6'>
<ip address='fe80::223:7dff:fe34:dbf0' prefix='64'/>
</protocol>
<bridge>
<interface type='ethernet' name='eth1'>
<mac address='00:23:7d:34:db:f0'/> \
</interface>
<interface type='ethernet' name='vnet0'>
<mac address='fe:54:00:12:e3:90'/> \
</interface> \
</bridge> \
</interface>"""
def name(self):
return 'inactiveInterface'
def isActive(self):
return 0
def MACString(self):
return '00:23:7d:34:db:f1'
class libvirtError(Exception):
def getDesc(self):
return 'Error'
def get_error_code(self):
return 38
def get_error_domain(self):
return 13
VIR_CRED_AUTHNAME = 2
VIR_CRED_NOECHOPROMPT = 7
# virErrorDomain
VIR_ERR_SYSTEM_ERROR = 38
VIR_FROM_REMOTE = 13
VIR_FROM_RPC = 7


@@ -1,69 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from healthnmon.db import migration as healthnmon_migration
import FakeLibvirt
from nova.db import migration as nova_migration
from oslo.config import cfg
import __builtin__
import healthnmon
import os
import shutil
import sys
setattr(__builtin__, '_', lambda x: x)
sys.modules['libvirt'] = FakeLibvirt
test_opts = [
cfg.StrOpt('sqlite_clean_db',
default='clean.sqlite',
help='File name of clean sqlite db'),
]
CONF = cfg.CONF
CONF.register_opts(test_opts)
CONF.import_opt('sqlite_db', 'nova.openstack.common.db.sqlalchemy.session')
CONF.import_opt('sqlite_synchronous',
'nova.openstack.common.db.sqlalchemy.session')
CONF.set_default('sqlite_db', 'tests.sqlite')
CONF.set_default('sqlite_synchronous', False)
def setup():
    '''For nova's test.py, create a dummy clean.sqlite.'''
healthnmon_path = os.path.abspath(
os.path.join(healthnmon.get_healthnmon_location(), '../'))
cleandb = os.path.join(healthnmon_path, CONF.sqlite_clean_db)
    if not os.path.exists(cleandb):
        open(cleandb, 'w').close()
    # For healthnmon, create the db
healthnmon_path = os.path.abspath(
os.path.join(healthnmon.get_healthnmon_location(), '../'))
sql_connection_url = "sqlite:///" + str(healthnmon_path) + "/$sqlite_db"
CONF.set_default("sql_connection", sql_connection_url)
testdb = os.path.join(healthnmon_path, CONF.sqlite_db)
if os.path.exists(testdb):
return
nova_migration.db_sync()
healthnmon_migration.db_sync()
cleandb = os.path.join(healthnmon_path, CONF.sqlite_clean_db)
shutil.copyfile(testdb, cleandb)
""" Uncomment the line below for running tests through eclipse """
# setup()
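The `sys.modules['libvirt'] = FakeLibvirt` assignment above is a standard stubbing trick: registering a module object under a name before anything imports it makes every later `import` of that name resolve to the fake. A self-contained sketch using `types.ModuleType` (the module name and `openReadOnly` stub here are illustrative, not the real FakeLibvirt API):

```python
# Stub out a module by pre-registering a fake in sys.modules,
# the same technique used to replace libvirt with FakeLibvirt.

import sys
import types

# Build a throwaway module object and give it the attribute we need.
fake = types.ModuleType('libvirt_fake_demo')
fake.openReadOnly = lambda name: 'fake-connection'

# Any subsequent "import libvirt_fake_demo" now finds this object.
sys.modules['libvirt_fake_demo'] = fake

import libvirt_fake_demo  # resolves to the fake registered above
```

Because the import machinery consults `sys.modules` first, the registration must happen before the code under test performs its import, which is why the healthnmon setup module does it at import time.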


@@ -1,15 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,55 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
from test_healthnmon import HealthnmonTest
from test_storagevolume import StorageVolumeTest
from test_util import UtilTest
from test_vm import VMTest
from test_vmhosts import VmHostsTest
from test_subnet import SubnetTest
from test_virtualswitch import VirtualSwitchTest
from test_base import BaseControllerTest

def run_tests():
loader = unittest.TestLoader()
healthnmon_suite = loader.loadTestsFromTestCase(HealthnmonTest)
storage_suite = loader.loadTestsFromTestCase(StorageVolumeTest)
util_suite = loader.loadTestsFromTestCase(UtilTest)
vm_suite = loader.loadTestsFromTestCase(VMTest)
subnet_suite = loader.loadTestsFromTestCase(SubnetTest)
vmhosts_suite = loader.loadTestsFromTestCase(VmHostsTest)
virtual_switch_suite = \
loader.loadTestsFromTestCase(VirtualSwitchTest)
base_suite = loader.loadTestsFromTestCase(BaseControllerTest)
alltests = [
healthnmon_suite,
storage_suite,
util_suite,
vm_suite,
subnet_suite,
vmhosts_suite,
virtual_switch_suite,
base_suite
]
result = unittest.TestResult()
for test in alltests:
test.run(result)

if __name__ == '__main__':
run_tests()


@@ -1,37 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
Represent a fake data model object
'''

class FakeModel(object):
    """Represent a fake data model class."""

    def __init__(self, id):
        self.id = id

    def get_id(self):
        return self.id

    def get_name(self):
        return 'name_' + str(self.id)

    def export(self, outfile, indent, name_):
        outfile.write(''.join(['<', name_, '>', '<id>', self.id, '</id><name>',
                               self.get_name(), '</name>', '</', name_, '>']))


@@ -1,318 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
import webob
from fakemodel import FakeModel
from sqlalchemy import exc as sql_exc
from nova.exception import Invalid
from nova import context
from healthnmon.api.base import Controller
from healthnmon.resourcemodel.healthnmonResourceModel import VmHost, IpProfile
from healthnmon.constants import DbConstants
expected_index_json = '{"accounts_links": \
[{"href": "http://marker", "rel": "next"}, \
{"href": "http://marker", "rel": "previous"}], \
"accounts": [{"id": "1", "links": \
[{"href": "http://localhost:8774/v2.0/accounts/1", "rel": "self"}, \
{"href": "http://localhost:8774/accounts/1", \
"rel": "bookmark"}], "name": "name_1"}, \
{"id": "2", "links": [{"href": "http://localhost:8774/v2.0/accounts/2", \
"rel": "self"}, {"href": "http://localhost:8774/accounts/2", \
"rel": "bookmark"}], "name": "name_2"}, {"id": "3", "links": \
[{"href": "http://localhost:8774/v2.0/accounts/3", "rel": "self"}, \
{"href": "http://localhost:8774/accounts/3", "rel": "bookmark"}], \
"name": "name_3"}, {"id": "4", "links": \
[{"href": "http://localhost:8774/v2.0/accounts/4", "rel": "self"}, \
{"href": "http://localhost:8774/accounts/4", "rel": "bookmark"}], \
"name": "name_4"}]}'
expected_index_fields_json = '{"accounts_links": [{"href": "http://marker", \
"rel": "next"}, {"href": "http://marker", "rel": "previous"}], \
"accounts": [{"id": "1", "links": [{"href": \
"http://localhost:8774/v2.0/accounts/1?fields=id", \
"rel": "self"}, {"href": "http://localhost:8774/accounts/1", \
"rel": "bookmark"}], "name": "name_1"}, {"id": "2", "links": \
[{"href": "http://localhost:8774/v2.0/accounts/2?fields=id", \
"rel": "self"}, {"href": "http://localhost:8774/accounts/2", \
"rel": "bookmark"}], "name": "name_2"}, {"id": "3", "links": \
[{"href": "http://localhost:8774/v2.0/accounts/3?fields=id", \
"rel": "self"}, {"href": "http://localhost:8774/accounts/3", \
"rel": "bookmark"}], "name": "name_3"}, {"id": "4", "links": \
[{"href": "http://localhost:8774/v2.0/accounts/4?fields=id", \
"rel": "self"}, {"href": "http://localhost:8774/accounts/4", \
"rel": "bookmark"}], "name": "name_4"}]}'
expected_index_xml = '<accounts xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<account id="1" name="name_1"><atom:link \
href="http://localhost:8774/v2.0/accounts/1" \
rel="self"/><atom:link href="http://localhost:8774/accounts/1" \
rel="bookmark"/></account><account id="2" name="name_2">\
<atom:link href="http://localhost:8774/v2.0/accounts/2" \
rel="self"/><atom:link href="http://localhost:8774/accounts/2" \
rel="bookmark"/></account><account id="3" name="name_3">\
<atom:link href="http://localhost:8774/v2.0/accounts/3" \
rel="self"/><atom:link href="http://localhost:8774/accounts/3" \
rel="bookmark"/></account><account id="4" name="name_4">\
<atom:link href="http://localhost:8774/v2.0/accounts/4" \
rel="self"/><atom:link href="http://localhost:8774/accounts/4" \
rel="bookmark"/></account><atom:link href="http://marker" \
rel="next"/><atom:link href="http://marker" rel="previous"/></accounts>'
expected_index_fields_xml = '<accounts \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<account id="1" name="name_1">\
<atom:link href="http://localhost:8774/v2.0/accounts/1?fields=id" \
rel="self"/><atom:link href="http://localhost:8774/accounts/1" \
rel="bookmark"/></account><account id="2" name="name_2">\
<atom:link href="http://localhost:8774/v2.0/accounts/2?fields=id" \
rel="self"/><atom:link href="http://localhost:8774/accounts/2" \
rel="bookmark"/></account><account id="3" name="name_3">\
<atom:link href="http://localhost:8774/v2.0/accounts/3?fields=id" rel="self"/>\
<atom:link href="http://localhost:8774/accounts/3" rel="bookmark"/>\
</account><account id="4" name="name_4"><atom:link \
href="http://localhost:8774/v2.0/accounts/4?fields=id" \
rel="self"/><atom:link href="http://localhost:8774/accounts/4" \
rel="bookmark"/></account><atom:link href="http://marker" rel="next"/>\
<atom:link href="http://marker" rel="previous"/></accounts>'
expected_detail_xml = '<accounts xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0"><Account><id>1</id>\
<name>name_1</name></Account><Account><id>2</id><name>name_2</name></Account>\
<Account><id>3</id><name>name_3</name></Account><Account><id>4</id>\
<name>name_4</name></Account><atom:link href="http://marker" rel="next"/>\
<atom:link href="http://marker" rel="previous"/></accounts>'
expected_detail_json = '{"accounts_links": [{"href": "http://marker", \
"rel": "next"}, {"href": "http://marker", "rel": "previous"}], \
"accounts": [{"id": "1", "name": "name_1"}, {"id": "2", "name": "name_2"}, \
{"id": "3", "name": "name_3"}, {"id": "4", "name": "name_4"}]}'
expected_links = "[{'href': 'http://localhost:8774/v2.0/accounts?\
limit=1&marker=3', 'rel': 'next'}, \
{'href': 'http://localhost:8774/v2.0/accounts?limit=1&marker=1', \
'rel': 'previous'}]"
expected_search_json = "({'deleted': 'false'}, 'id', 'desc')"
expected_search_changes_since = "({'deleted': u'f', \
'changes-since': 1336633200000L}, 'createEpoch', 'desc')"
expected_base_show_json = '{"Account": {"id": "1", "name": "name_1"}}'
expected_base_detail_json = '{"accounts": [{"id": "1", "name": "name_1"}, \
{"id": "2", "name": "name_2"}, {"id": "3", "name": "name_3"}, \
{"id": "4", "name": "name_4"}]}'
expected_base_show_xml = '<Account><id>1</id><name>name_1</name></Account>'
expected_base_detail_xml = '<accounts \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0"><Account>\
<id>1</id><name>name_1</name></Account><Account><id>2</id><name>name_2</name>\
</Account><Account><id>3</id><name>name_3</name></Account><Account>\
<id>4</id><name>name_4</name></Account></accounts>'

class BaseControllerTest(unittest.TestCase):
def setUp(self):
self.controller = Controller('accounts', 'account', 'Account')
self.admin_context = context.RequestContext('admin', '', is_admin=True)
def tearDown(self):
pass
def test__index_json(self):
request = webob.Request.blank('/v2.0/accounts.json',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller._index(
request,
[FakeModel(str(x)) for x in range(1, 5)],
[{'rel': 'next', 'href': 'http://marker'},
{'rel': 'previous', 'href': 'http://marker'}, ])
        self.assertEqual(expected_index_json, resp.body)
def test__index_fields_json(self):
request = webob.Request.blank('/v2.0/accounts.json?fields=id',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller._index(
request,
[FakeModel(str(x)) for x in range(1, 5)],
[{'rel': 'next', 'href': 'http://marker'},
{'rel': 'previous', 'href': 'http://marker'}, ])
        self.assertEqual(expected_index_fields_json, resp.body)
def test__index_xml(self):
request = webob.Request.blank('/v2.0/accounts.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller._index(
request,
[FakeModel(str(x)) for x in range(1, 5)],
[{'rel': 'next', 'href': 'http://marker'},
{'rel': 'previous', 'href': 'http://marker'}, ])
        self.assertEqual(expected_index_xml, resp.body)
def test__index_fields_xml(self):
request = webob.Request.blank('/v2.0/accounts.xml?fields=id',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller._index(
request,
[FakeModel(str(x)) for x in range(1, 5)],
[{'rel': 'next', 'href': 'http://marker'},
{'rel': 'previous', 'href': 'http://marker'}, ])
        self.assertEqual(expected_index_fields_xml, resp.body)
def test__detail_json(self):
request = webob.Request.blank('/v2.0/accounts/detail',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller._detail(
request,
[FakeModel(str(x)) for x in range(1, 5)],
[{'rel': 'next', 'href': 'http://marker'},
{'rel': 'previous', 'href': 'http://marker'}, ])
self.assertEqual(resp.body, expected_detail_json)
def test__detail_xml(self):
request = webob.Request.blank('/v2.0/accounts/detail.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller._detail(
request,
[FakeModel(str(x)) for x in range(1, 5)],
[{'rel': 'next', 'href': 'http://marker'},
{'rel': 'previous', 'href': 'http://marker'}, ])
self.assertEqual(resp.body, expected_detail_xml)
def test_search_options_changes_since(self):
request = webob.Request.blank(
'/v2.0/accounts/detail?changes-since=\
2012-05-10T00:00:00&deleted=false',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller.get_search_options(request, VmHost)
self.assertNotEqual(resp, None)
filters = resp[0]
        self.assertEqual(filters['deleted'], 'false')
        self.assertEqual(filters['changes-since'], 1336608000000)
        sort_key = resp[1]
        self.assertEqual(sort_key, 'createEpoch')
        sort_dir = resp[2]
        self.assertEqual(sort_dir, DbConstants.ORDER_DESC)
def test_search_options_composite(self):
request = webob.Request.blank(
'/v2.0/accounts/detail?name=\
SRS&name=SRS111&os=windows&virtualizationType=QEMU',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller.get_search_options(request, VmHost)
self.assertNotEqual(resp, None)
def test_search_options_non_epoc(self):
request = webob.Request.blank('/v2.0/accounts/detail',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = self.controller.get_search_options(request, IpProfile)
self.assertNotEqual(resp, None)
self.assertEqual(str(resp), expected_search_json)
def test_search_options_exception(self):
request = webob.Request.blank(
'/v2.0/accounts/detail?changes-since=ABCD',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.get_search_options, request, VmHost)
def test_limited_by_marker(self):
request = webob.Request.blank('/v2.0/accounts?marker=2&limit=1',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
item_list, collection_links = self.controller.limited_by_marker(
[FakeModel(
str(x)) for x in range(1, 5)],
request)
self.assertEqual(item_list[0].get_id(), '3')
self.assertEqual(str(collection_links), expected_links)
def test_limited_by_marker_exception(self):
request = webob.Request.blank('/v2.0/accounts?marker=19',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.limited_by_marker,
[FakeModel('1')],
request)
def test_data_error(self):
def test_func(ctx, filters, sort_key, sort_dir):
raise sql_exc.DataError('a', 'b', 'c')
request = webob.Request.blank('/v2.0/accounts?marker=19',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
self.assertRaises(Invalid,
Controller('vmhosts',
'vmhost',
'VmHost').get_all_by_filters,
request,
test_func)
# Unit tests for defect fix DE84: Healthnmon-API: limit=0 specified in the
# query gives incorrect result.
def test_zero_limit_value(self):
request = webob.Request.blank('/v2.0/accounts?limit=0',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
        self.assertEqual(self.controller.limited_by_marker([FakeModel('1')],
                                                           request,
                                                           20),
                         ([], []))
def test_negative_limit_value(self):
request = webob.Request.blank('/v2.0/accounts?limit=-1',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.limited_by_marker,
[FakeModel('1')],
request)
# Unit tests for defect DE86: Healthnmon-API: Add identifier of the
# resource irrespective of the fields asked( applicable for all resources)
def test_base_identifier_json(self):
request = webob.Request.blank('/v2.0/accounts?fields=name',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
item_list = [FakeModel(str(x)) for x in range(1, 5)]
        self.assertEqual(self.controller._show(request, item_list[0]).body,
                         expected_base_show_json)
        self.assertEqual(self.controller._detail(request, item_list, []).body,
                         expected_base_detail_json)
def test_base_identifier_xml(self):
request = webob.Request.blank('/v2.0/accounts/detail.xml?fields=name',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
item_list = [FakeModel(str(x)) for x in range(1, 5)]
        self.assertEqual(self.controller._show(request, item_list[0]).body,
                         expected_base_show_xml)
        self.assertEqual(self.controller._detail(request, item_list, []).body,
                         expected_base_detail_xml)

if __name__ == "__main__":
# import sys;sys.argv = ['', 'Test.testName']
unittest.main()


@@ -1,44 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from healthnmon.api.healthnmon import Healthnmon
from nova.api.openstack.compute import contrib
import unittest
import mox

class FakeExtensionManager:
def register(self, descriptor):
pass

class HealthnmonTest(unittest.TestCase):
def setUp(self):
""" Setup initial mocks and logging configuration """
super(HealthnmonTest, self).setUp()
self.mock = mox.Mox()
def tearDown(self):
self.mock.stubs.UnsetAll()
def test_get_resources(self):
self.mock.StubOutWithMock(contrib, 'standard_extensions')
contrib.standard_extensions(mox.IgnoreArg()).AndReturn(None)
self.assertNotEqual(Healthnmon(FakeExtensionManager()).get_resources(),
None)


@@ -1,416 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from healthnmon.api import util
from healthnmon.api.storagevolume import StorageVolumeController
from healthnmon.db import api
from healthnmon.resourcemodel.healthnmonResourceModel import StorageVolume
from healthnmon.resourcemodel.healthnmonResourceModel import HostMountPoint
from nova import context
from webob.exc import HTTPNotFound
import mox
import unittest
import webob

class StorageVolumeTest(unittest.TestCase):
""" Test cases for healthnmon resource extensions """
expected_index_xml = \
'<storagevolumes xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<storagevolume id="datastore-111" name="datastore-111">\
<atom:link \
href="http://localhost:8774/v2.0/storagevolumes/datastore-111" \
rel="self"/>\
<atom:link href="http://localhost:8774/storagevolumes/datastore-111" \
rel="bookmark"/>\
</storagevolume>\
<storagevolume id="datastore-112" name="datastore-112">\
<atom:link \
href="http://localhost:8774/v2.0/storagevolumes/datastore-112" \
rel="self"/>\
<atom:link href="http://localhost:8774/storagevolumes/datastore-112" \
rel="bookmark"/>\
</storagevolume>\
</storagevolumes>'
expected_index_json = \
'{"storagevolumes": [{"id": "datastore-111", "links": [\
{"href": "http://localhost:8774/v2.0/storagevolumes/datastore-111", \
"rel": "self"}, \
{"href": "http://localhost:8774/storagevolumes/datastore-111", \
"rel": "bookmark"}], \
"name": "datastore-111"}, \
{"id": "datastore-112", "links": [\
{"href": "http://localhost:8774/v2.0/storagevolumes/datastore-112", \
"rel": "self"}, \
{"href": "http://localhost:8774/storagevolumes/datastore-112", \
"rel": "bookmark"}], \
"name": "datastore-112"}]}'
expected_details_json = '{"StorageVolume": \
{"mountPoints": {"path": "/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da", \
"vmhosts": [{"id": "host-9", "links": \
[{"href": "http://localhost:8774/v2.0/vmhosts/host-9", \
"rel": "self"}, {"href": "http://localhost:8774/vmhosts/host-9", \
"rel": "bookmark"}]}]}, "vmfsVolume": "true", "resourceManagerId": \
"13274325-BFD6-464F-A9D1-61332573B5E2", "name": "datastore-111", \
"volumeType": "VMFS", "volumeId": \
"/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da", \
"free": "32256294912", "assignedServerCount": "2", "shared": "true", \
"id": "datastore-111", "size": "107105746944"}}'
expected_storage_details_xml = '<StorageVolume><id>datastore-111</id>\
<name>datastore-111</name><resourceManagerId>\
13274325-BFD6-464F-A9D1-61332573B5E2</resourceManagerId>\
<size>107105746944</size><free>32256294912</free>\
<mountPoints><path>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da</path>\
<vmhost xmlns:atom="http://www.w3.org/2005/Atom" id="host-9">\
<atom:link href="http://localhost:8774/v2.0/vmhosts/host-9" rel="self"/>\
<atom:link href="http://localhost:8774/vmhosts/host-9" rel="bookmark"/>\
</vmhost></mountPoints><vmfsVolume>true</vmfsVolume><shared>true</shared>\
<assignedServerCount>2</assignedServerCount><volumeType>VMFS</volumeType>\
<volumeId>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da</volumeId>\
</StorageVolume>'
expected_detail_xml = '<storagevolumes \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0"><StorageVolume>\
<id>datastore-111</id><name>datastore-111</name>\
<resourceManagerId>13274325-BFD6-464F-A9D1-61332573B5E2</resourceManagerId>\
<size>107105746944</size><free>32256294912</free><mountPoints>\
<path>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da</path>\
<vmhost id="host-9"><atom:link \
href="http://localhost:8774/v2.0/vmhosts/host-9" rel="self"/>\
<atom:link href="http://localhost:8774/vmhosts/host-9" rel="bookmark"/>\
</vmhost></mountPoints><vmfsVolume>true</vmfsVolume><shared>true</shared>\
<assignedServerCount>2</assignedServerCount><volumeType>VMFS</volumeType>\
<volumeId>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da</volumeId>\
</StorageVolume><StorageVolume><id>datastore-112</id>\
<name>datastore-112</name>\
<resourceManagerId>13274325-BFD6-464F-A9D1-61332573B5E2</resourceManagerId>\
<size>107105746944</size><free>32256294912</free>\
<mountPoints><path>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5db</path>\
<vmhost id="host-9"><atom:link \
href="http://localhost:8774/v2.0/vmhosts/host-9" rel="self"/>\
<atom:link href="http://localhost:8774/vmhosts/host-9" rel="bookmark"/>\
</vmhost></mountPoints><vmfsVolume>false</vmfsVolume><shared>false</shared>\
<assignedServerCount>1</assignedServerCount>\
<volumeType>VMFS</volumeType>\
<volumeId>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5db</volumeId>\
</StorageVolume></storagevolumes>'
expected_limited_detail_xml = '<storagevolumes \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0"><StorageVolume>\
<id>datastore-112</id><name>datastore-112</name>\
<resourceManagerId>13274325-BFD6-464F-A9D1-61332573B5E2</resourceManagerId>\
<size>107105746944</size><free>32256294912</free><mountPoints>\
<path>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5db</path>\
<vmhost id="host-9"><atom:link \
href="http://localhost:8774/v2.0/vmhosts/host-9" rel="self"/>\
<atom:link href="http://localhost:8774/vmhosts/host-9" rel="bookmark"/>\
</vmhost></mountPoints><vmfsVolume>false</vmfsVolume><shared>false</shared>\
<assignedServerCount>1</assignedServerCount><volumeType>VMFS</volumeType>\
<volumeId>/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5db</volumeId>\
</StorageVolume><atom:link \
href="http://localhost:8774/v2.0/storagevolumes?limit=1" \
rel="previous"/></storagevolumes>'
def setUp(self):
""" Setup initial mocks and logging configuration """
super(StorageVolumeTest, self).setUp()
self.config_drive = None
self.mock = mox.Mox()
self.admin_context = context.RequestContext('admin', '',
is_admin=True)
def tearDown(self):
self.mock.stubs.UnsetAll()
def test_list_storagevolumes_json(self):
storagevolumes = self.get_storagevolume_list()
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes.json',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().index(request)
self.assertNotEqual(resp, None, 'Return json string')
self.assertEqual(self.expected_index_json, resp.body)
def test_list_storagevolumes_xml(self):
storagevolumes = self.get_storagevolume_list()
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().index(request)
self.assertNotEqual(resp, None, 'Return xml string')
self.assertEqual(resp.body, self.expected_index_xml)
def test_list_limited_storagevolumes_detail_xml(self):
storagevolumes = self.get_storagevolume_list()
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes/detail.xml?'
'limit=1&marker=datastore-111',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().detail(request)
self.assertEqual(resp.body, self.expected_limited_detail_xml)
def test_list_storagevolumes_detail_xml(self):
storagevolumes = self.get_storagevolume_list()
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes/detail.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().detail(request)
self.assertEqual(resp.body, self.expected_detail_xml)
def test_list_storagevolumes_detail_none_xml(self):
storagevolumes = None
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes/detail.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().detail(request)
self.assertNotEqual(resp, None, 'Return xml string')
def test_list_storagevolumes_xml_header(self):
storagevolumes = self.get_storagevolume_list()
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
request.headers['Accept'] = 'application/xml'
resp = StorageVolumeController().index(request)
self.assertNotEqual(resp, None, 'Return xml string')
self.assertEqual(resp.body, self.expected_index_xml)
def test_list_storagevolumes_json_header(self):
storagevolumes = self.get_storagevolume_list()
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes',
base_url='http://localhost:8774/v2.0/')
request.headers['Accept'] = 'application/json'
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().index(request)
self.assertNotEqual(resp, None, 'Return json string')
self.assertEqual(self.expected_index_json, resp.body)
def test_storagevolume_details_json(self):
storagevolumes = self.get_single_storagevolume()
self.mock.StubOutWithMock(api, 'storage_volume_get_by_ids')
api.storage_volume_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = \
webob.Request.blank('/v2.0/storagevolumes/datastore-111.json',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().show(request, 'datastore-111')
self.assertNotEqual(resp, None,
'Return json response for datastore-111')
self.assertEqual(self.expected_details_json, resp.body)
def test_storagevolume_details_xml(self):
storagevolumes = self.get_single_storagevolume()
self.mock.StubOutWithMock(api, 'storage_volume_get_by_ids')
api.storage_volume_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = \
webob.Request.blank('/v2.0/storagevolumes/datastore-111.xml',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().show(request, 'datastore-111')
self.assertNotEqual(resp, None,
'Return xml response for datastore-111')
self.assertEqual(self.expected_storage_details_xml, resp.body)
def test_storagevolume_details_none_xml(self):
storagevolumes = None
self.mock.StubOutWithMock(api, 'storage_volume_get_by_ids')
api.storage_volume_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = \
webob.Request.blank('/v2.0/storagevolumes/datastore-111.xml',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().show(request, 'datastore-111')
self.assertNotEqual(resp, None,
'Return xml response for datastore-111')
def test_storagevolume_details_json_exception(self):
storagevolumes = self.get_storagevolume_list()
xml_utils = util
self.mock.StubOutWithMock(xml_utils, 'xml_to_dict')
xml_utils.xml_to_dict(mox.IgnoreArg()).AndRaise(Exception(
'Test Exception'))
self.mock.StubOutWithMock(api, 'storage_volume_get_by_ids')
api.storage_volume_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = \
webob.Request.blank('/v2.0/storagevolumes/datastore-111.json',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().show(request, 'datastore-111')
self.assertTrue(isinstance(resp, HTTPNotFound))
def test_list_storagevolumes_none_check(self):
self.mock.StubOutWithMock(api, 'storage_volume_get_all_by_filters')
api.storage_volume_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(None)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/storagevolumes',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().index(request)
self.assertEqual(resp.body, '{"storagevolumes": []}',
'Return xml string')
def test_query_field_key(self):
storagevolumes = self.get_single_storagevolume()
self.mock.StubOutWithMock(api, 'storage_volume_get_by_ids')
api.storage_volume_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(storagevolumes)
self.mock.ReplayAll()
request = \
webob.Request.blank(
'/v2.0/storagevolumes/datastore-111.json?fields=id,name',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = StorageVolumeController().show(request, 'datastore-111')
self.assertNotEqual(resp, None,
'Return xml response for datastore-111')
self.mock.stubs.UnsetAll()
    def get_single_storagevolume(self, storageId=None):
        if storageId is not None:
            return self.get_storagevolume_list(storageId)
        return self.get_storagevolume_list()
def get_storagevolume_list(self, storageId=None):
storagevolume_dict = {}
storagevolume_list = []
storagevolume = StorageVolume()
storagevolume.set_id('datastore-111')
storagevolume.set_name('datastore-111')
storagevolume.set_resourceManagerId(
'13274325-BFD6-464F-A9D1-61332573B5E2')
storagevolume.set_size(107105746944)
storagevolume.set_free(32256294912)
storagevolume.set_vmfsVolume(True)
storagevolume.set_shared(True)
storagevolume.set_assignedServerCount(2)
storagevolume.set_volumeType('VMFS')
storagevolume.set_volumeId(
'/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da')
hostMountPoint = \
HostMountPoint(
'/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5da', 'host-9')
storagevolume.add_mountPoints(hostMountPoint)
storagevolume_list.append(storagevolume)
storagevolume_dict[storagevolume.get_id()] = storagevolume
storagevolume = StorageVolume()
storagevolume.set_id('datastore-112')
storagevolume.set_name('datastore-112')
storagevolume.set_resourceManagerId(
'13274325-BFD6-464F-A9D1-61332573B5E2')
storagevolume.set_size(107105746944)
storagevolume.set_free(32256294912)
storagevolume.set_vmfsVolume(False)
storagevolume.set_shared(False)
storagevolume.set_assignedServerCount(1)
storagevolume.set_volumeType('VMFS')
storagevolume.set_volumeId(
'/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5db')
hostMountPoint = \
HostMountPoint(
'/vmfs/volumes/4e374cf3-328f8064-aa2c-78acc0fcb5db', 'host-9')
storagevolume.add_mountPoints(hostMountPoint)
storagevolume_list.append(storagevolume)
storagevolume_dict[storagevolume.get_id()] = storagevolume
if storageId is not None:
return [storagevolume_dict[storageId]]
return storagevolume_list
if __name__ == '__main__':
unittest.main()
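The tests above all follow mox's record-replay pattern: stub a `healthnmon.db.api` function, record the expected call with `AndReturn`, then `ReplayAll()` before exercising the controller. As a minimal sketch of the same idea with the standard library (this uses `unittest.mock` and a stand-in module, not the retired healthnmon code or mox itself):

```python
import unittest
from unittest import mock


class FakeApiModule(object):
    """Stand-in for healthnmon.db.api (assumed name, for illustration only)."""
    @staticmethod
    def storage_volume_get_all_by_filters(ctx, filters, sort_key, sort_dir):
        raise RuntimeError("real DB call must never run in a unit test")


class StubPatternTest(unittest.TestCase):
    def test_index_returns_stubbed_volumes(self):
        fake_result = [{"id": "datastore-111"}]
        # patch.object replaces the attribute for the duration of the block,
        # playing the role of StubOutWithMock + ReplayAll in the mox tests.
        with mock.patch.object(
                FakeApiModule, "storage_volume_get_all_by_filters",
                return_value=fake_result) as stub:
            volumes = FakeApiModule.storage_volume_get_all_by_filters(
                None, None, None, None)
        self.assertEqual(volumes, fake_result)
        # assert_called_once_with is the analogue of mox's verification step.
        stub.assert_called_once_with(None, None, None, None)
```

Unlike mox, `unittest.mock` verifies calls after the fact rather than requiring expectations to be recorded up front, which is why the deleted tests need an explicit `ReplayAll()`/`UnsetAll()` pair around every stub.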


@ -1,326 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
import webob
import mox
from webob.exc import HTTPNotFound
from nova import context
from healthnmon.db import api
from healthnmon.api import util
from healthnmon.resourcemodel.healthnmonResourceModel import Subnet
from healthnmon.api.subnet import SubnetController
class SubnetTest(unittest.TestCase):
""" Tests for Subnet extension """
expected_limited_detail_xml = '<subnets \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0"><Subnet>\
<id>subnet-02</id><name>subnet-02</name></Subnet>\
<atom:link href="http://localhost:8774/v2.0/subnets?limit=1" \
rel="previous"/></subnets>'
expected_detail_xml = '<subnets \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0"><Subnet>\
<id>subnet-01</id><name>subnet-01</name></Subnet><Subnet>\
<id>subnet-02</id><name>subnet-02</name></Subnet></subnets>'
expected_index_json = '{"subnets": [{"id": "subnet-01", \
"links": [{"href": "http://localhost:8774/v2.0/subnets/subnet-01", \
"rel": "self"}, {"href": "http://localhost:8774/subnets/subnet-01", \
"rel": "bookmark"}], "name": "subnet-01"}, {"id": "subnet-02", \
"links": [{"href": "http://localhost:8774/v2.0/subnets/subnet-02", \
"rel": "self"}, {"href": "http://localhost:8774/subnets/subnet-02", \
"rel": "bookmark"}], "name": "subnet-02"}]}'
expected_index_detail = '<subnets \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<subnet id="subnet-01" name="subnet-01"><atom:link \
href="http://localhost:8774/v2.0/subnets/subnet-01" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-01" rel="bookmark"/>\
</subnet><subnet id="subnet-02" name="subnet-02">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-02" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-02" rel="bookmark"/>\
</subnet></subnets>'
expected_xml_header = '<subnets xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<subnet id="subnet-01" name="subnet-01">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-01" \
rel="self"/><atom:link href="http://localhost:8774/subnets/subnet-01" \
rel="bookmark"/></subnet><subnet id="subnet-02" name="subnet-02">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-02" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-02" rel="bookmark"/>\
</subnet></subnets>'
expected_json_header = '{"subnets": [{"id": "subnet-01", \
"links": [{"href": "http://localhost:8774/v2.0/subnets/subnet-01", \
"rel": "self"}, {"href": "http://localhost:8774/subnets/subnet-01", \
"rel": "bookmark"}], "name": "subnet-01"}, {"id": "subnet-02", \
"links": [{"href": "http://localhost:8774/v2.0/subnets/subnet-02", \
"rel": "self"}, {"href": "http://localhost:8774/subnets/subnet-02", \
"rel": "bookmark"}], "name": "subnet-02"}]}'
expected_limited_json = '{"Subnet": {"id": "subnet-01", \
"name": "subnet-01"}}'
expected_limited_xml = '<Subnet>\n <id>subnet-01</id>\n \
<name>subnet-01</name>\n</Subnet>\n'
def setUp(self):
""" Setup initial mocks and logging configuration """
super(SubnetTest, self).setUp()
self.config_drive = None
self.mock = mox.Mox()
self.admin_context = context.RequestContext('admin', '',
is_admin=True)
def tearDown(self):
self.mock.stubs.UnsetAll()
def test_list_subnet_json(self):
subnet_list = self.get_subnet_list()
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets.json',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().index(request)
self.assertNotEqual(resp, None, 'Return json string')
self.assertEqual(self.expected_index_json, resp.body)
self.mock.stubs.UnsetAll()
def test_list_subnet_xml(self):
subnet_list = self.get_subnet_list()
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().index(request)
self.assertNotEqual(resp, None, 'Return xml string')
self.assertEqual(resp.body, self.expected_index_detail)
self.mock.stubs.UnsetAll()
def test_list_limited_subnet_detail_xml(self):
subnet_list = self.get_subnet_list()
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets/detail.xml?'
'limit=1&marker=subnet-01',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().detail(request)
self.assertEqual(resp.body, self.expected_limited_detail_xml)
def test_list_subnet_detail_xml(self):
subnet_list = self.get_subnet_list()
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets/detail.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().detail(request)
self.assertEqual(resp.body, self.expected_detail_xml)
def test_list_subnet_none_detail_xml(self):
subnet_list = None
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets/detail.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().detail(request)
self.assertNotEqual(resp, None, 'Return xml string')
def test_list_subnet_xml_header(self):
subnets = self.get_subnet_list()
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnets)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets',
base_url='http://localhost:8774/v2.0/')
request.headers['Accept'] = 'application/xml'
request.environ['nova.context'] = self.admin_context
resp = SubnetController().index(request)
self.assertNotEqual(resp, None, 'Return xml string')
self.assertEqual(resp.body, self.expected_xml_header)
self.mock.stubs.UnsetAll()
def test_list_subnet_json_header(self):
subnets = self.get_subnet_list()
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnets)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets',
base_url='http://localhost:8774/v2.0/')
request.headers['Accept'] = 'application/json'
request.environ['nova.context'] = self.admin_context
resp = SubnetController().index(request)
self.assertNotEqual(resp, None, 'Return json string')
self.assertEqual(self.expected_json_header, resp.body)
self.mock.stubs.UnsetAll()
def test_list_subnets_none_check(self):
self.mock.StubOutWithMock(api, 'subnet_get_all_by_filters')
api.subnet_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(None)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().index(request)
self.assertEqual(resp.body, '{"subnets": []}',
'Return json string')
def test_subnet_details_json(self):
subnet_list = self.get_single_subnet()
self.mock.StubOutWithMock(api, 'subnet_get_by_ids')
api.subnet_get_by_ids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnet/subnet-01.json',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().show(request, 'subnet-01')
self.assertNotEqual(resp, None,
'Return json response for subnet-01')
self.assertEqual(self.expected_limited_json, resp.body)
def test_subnet_details_xml(self):
subnet_list = self.get_single_subnet()
self.mock.StubOutWithMock(api, 'subnet_get_by_ids')
api.subnet_get_by_ids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets/subnet-01.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().show(request, 'subnet-01')
self.assertNotEqual(resp, None,
'Return xml response for subnet-01')
self.assertEqual(self.expected_limited_xml, resp.body)
self.mock.stubs.UnsetAll()
def test_subnet_details_none_xml(self):
subnet_list = None
self.mock.StubOutWithMock(api, 'subnet_get_by_ids')
api.subnet_get_by_ids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets/subnet-01.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().show(request, 'subnet-01')
self.assertNotEqual(resp, None,
'Return xml response for subnet-01')
self.mock.stubs.UnsetAll()
def test_subnet_details_json_exception(self):
subnet_list = self.get_single_subnet()
xml_utils = util
self.mock.StubOutWithMock(xml_utils, 'xml_to_dict')
xml_utils.xml_to_dict(mox.IgnoreArg()).AndRaise(IndexError('Test index'
))
self.mock.StubOutWithMock(api, 'subnet_get_by_ids')
api.subnet_get_by_ids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/subnets/subnet-01.json',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = SubnetController().show(request, 'subnet-01')
self.assertTrue(isinstance(resp, HTTPNotFound))
def test_query_field_key(self):
subnet_list = self.get_single_subnet()
self.mock.StubOutWithMock(api, 'subnet_get_by_ids')
api.subnet_get_by_ids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(subnet_list)
self.mock.ReplayAll()
request = \
webob.Request.blank(
'/v2.0/subnets/subnet-01.json?fields=id,name',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = SubnetController().show(request, 'subnet-01')
self.assertNotEqual(resp, None,
'Return json response for subnet-01')
self.assertEqual(self.expected_limited_json, resp.body)
self.mock.stubs.UnsetAll()
def get_single_subnet(self):
subnet_list = []
subnet = Subnet()
subnet.set_id('subnet-01')
subnet.set_name('subnet-01')
subnet_list.append(subnet)
return subnet_list
def get_subnet_list(self):
subnet_list = []
subnet = Subnet()
subnet.set_id('subnet-01')
subnet.set_name('subnet-01')
subnet_list.append(subnet)
subnet = Subnet()
subnet.set_id('subnet-02')
subnet.set_name('subnet-02')
subnet_list.append(subnet)
return subnet_list
if __name__ == '__main__':
# import sys;sys.argv = ['', 'Test.testName']
unittest.main()
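The JSON fixtures above (`expected_index_json`, `expected_json_header`) pair every entity with a `self` and a `bookmark` link. A small sketch of a helper that builds that exact shape with the standard library (`entity_with_links` is a hypothetical name, not a function from the retired code):

```python
import json

BASE = "http://localhost:8774"


def entity_with_links(kind, entity_id):
    """Build the {id, links, name} dict shape asserted in the fixtures above."""
    return {
        "id": entity_id,
        "links": [
            # versioned self link, then unversioned bookmark link
            {"href": "%s/v2.0/%s/%s" % (BASE, kind, entity_id), "rel": "self"},
            {"href": "%s/%s/%s" % (BASE, kind, entity_id), "rel": "bookmark"},
        ],
        "name": entity_id,
    }


body = json.dumps({"subnets": [entity_with_links("subnets", "subnet-01"),
                               entity_with_links("subnets", "subnet-02")]})
```

Comparing parsed structures (`json.loads`) rather than raw strings, as some of the later tests do with `compare_json`, avoids coupling the assertions to dict key ordering.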


@ -1,467 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from healthnmon.api import constants
from healthnmon.api import util
from nova import context
from healthnmon.resourcemodel import healthnmonResourceModel
import json
import unittest
from webob import BaseRequest, Request
from lxml import etree
from lxml import objectify
class UtilTest(unittest.TestCase):
def setUp(self):
super(UtilTest, self).setUp()
def tearDown(self):
pass
def test_xml_to_dict(self):
test_xml = \
"""<VmHost>
<id>VH1</id>
<name>host1</name>
<note>some other note</note>
<processorCoresCount>6</processorCoresCount>
<properties>
<name>prop1</name>
<value>value1</value>
</properties>
<properties>
<name>prop2</name>
<value>value2</value>
</properties>
<os>
<osType>LINUX</osType>
<osSubType>UBUNTU</osSubType>
<osVersion>11.0</osVersion>
</os>
<model>model1</model>
<virtualizationType>KVM</virtualizationType>
</VmHost>"""
expected_host_json = \
'{"virtualizationType": "KVM", "name": "host1", \
"processorCoresCount": "6", "id": "VH1", "note": "some other note", \
"model": "model1", "os": {"osType": "LINUX", "osSubType": "UBUNTU", \
"osVersion": "11.0"}, "properties": [{"name": "prop1", "value": "value1"}, \
{"name": "prop2", "value": "value2"}]}'
test_dict = util.xml_to_dict(test_xml)
json_str = json.dumps(test_dict)
print json_str
self.assertEquals(json_str, expected_host_json)
def test_replace_with_links(self):
test_xml = \
"""<Vm><outer><a><storage><id>33</id><name>ESA</name>
</storage></a></outer>
<storageVolumeId>88</storageVolumeId>
<storageVolumeId>89</storageVolumeId>
<parent>
<id>89</id>
<name>testname</name>
<type>human</type>
</parent>
</Vm>"""
expected_out_xml = '<Vm><outer><a><storage><id>33</id>\
<name>ESA</name></storage></a></outer>\
<storagevolume xmlns:atom="http://www.w3.org/2005/Atom" id="88">\
<atom:link href="http://localhost/v2.0/storage/88" rel="self"/>\
<atom:link href="http://localhost/storage/88" rel="bookmark"/>\
</storagevolume>\
<storagevolume xmlns:atom="http://www.w3.org/2005/Atom" id="89">\
<atom:link href="http://localhost/v2.0/storage/89" rel="self"/>\
<atom:link href="http://localhost/storage/89" rel="bookmark"/>\
</storagevolume>\
<person xmlns:atom="http://www.w3.org/2005/Atom" \
name="testname" type="human" id="89">\
<atom:link href="http://localhost/v2.0/people/89" rel="self"/>\
<atom:link href="http://localhost/people/89" rel="bookmark"/>\
</person></Vm>'
dict_tag_props = [{
'tag': 'storageVolumeId',
'tag_replacement': 'storagevolume',
'tag_key': 'id',
'tag_collection_url': 'http://localhost/v2.0/storage',
'tag_attrib': None,
}, {
'tag': 'vmHostId',
'tag_replacement': 'vmhosts',
'tag_key': 'id',
'tag_collection_url': 'http://localhost/v2.0/vmhosts',
'tag_attrib': None,
}, {
'tag': 'parent',
'tag_replacement': 'person',
'tag_key': 'id',
'tag_collection_url': 'http://localhost/v2.0/people',
'tag_attrib': ['name', 'type'],
}]
# element = util.replace_with_references(test_xml, dict_tag_props)
out_dict = {}
replaced_xml = util.replace_with_links(test_xml,
dict_tag_props, out_dict)
self.assertNotEqual(None, replaced_xml)
self.compare_xml(expected_out_xml, replaced_xml)
# print element.toxml('utf-8')
def test_xml_to_dict_with_collections(self):
input_xml = '<Vm><outer><a><storage><id>33</id>\
<name>ESA</name></storage></a></outer>\
<storagevolume xmlns:atom="http://www.w3.org/2005/Atom" id="88">\
<atom:link href="http://localhost/v2.0/storage/88" rel="self"/>\
<atom:link href="http://localhost/storage/88" rel="bookmark"/>\
</storagevolume>\
<storagevolume xmlns:atom="http://www.w3.org/2005/Atom" id="89">\
<atom:link href="http://localhost/v2.0/storage/89" rel="self"/>\
<atom:link href="http://localhost/storage/89" rel="bookmark"/>\
</storagevolume>\
<person xmlns:atom="http://www.w3.org/2005/Atom" \
name="testname" type="human" id="89">\
<atom:link href="http://localhost/v2.0/people/89" \
rel="self"/><atom:link href="http://localhost/people/89" rel="bookmark"/>\
</person></Vm>'
expected_json = '{"person": {"links": [{"href": \
"http://localhost/v2.0/people/89", "rel": "self"}, {"href": \
"http://localhost/people/89", "rel": "bookmark"}]}, \
"outer": {"a": {"storage": {"id": "33", "name": "ESA"}}}, \
"storagevolumes": [{"id": "88", "links": [{"href": \
"http://localhost/v2.0/storage/88", "rel": "self"}, {"href": \
"http://localhost/storage/88", "rel": "bookmark"}]}, \
{"id": "89", "links": [{"href": "http://localhost/v2.0/storage/89", \
"rel": "self"}, {"href": "http://localhost/storage/89", \
"rel": "bookmark"}]}]}'
self.assertEquals(
json.dumps(util.xml_to_dict(input_xml)), expected_json)
def test_replace_with_links_prefix(self):
prefix_xml = \
'<p:Vm xmlns:p="http://localhost/prefix"><p:outer><p:a><p:storage>\
<p:id>33</p:id><p:name>ESA</p:name></p:storage></p:a></p:outer>\
<p:storageVolumeId>88</p:storageVolumeId>\
<p:storageVolumeId>89</p:storageVolumeId></p:Vm>'
dict_tag_props = [{
'tag': 'storageVolumeId',
'tag_replacement': 'storagevolume',
'tag_key': 'id',
'tag_collection_url': 'http://localhost/v2.0/storage',
'tag_attrib': None,
}, {
'tag': 'vmHostId',
'tag_replacement': 'vmhosts',
'tag_key': 'id',
'tag_collection_url': 'http://localhost/v2.0/vmhosts',
'tag_attrib': None,
}]
expected_out_xml = \
'<p:Vm xmlns:p="http://localhost/prefix">\
<p:outer>\
<p:a><p:storage><p:id>33</p:id><p:name>ESA</p:name></p:storage></p:a>\
</p:outer>\
<p:storagevolume xmlns:atom="http://www.w3.org/2005/Atom" p:id="88">\
<atom:link href="http://localhost/v2.0/storage/88" rel="self"/>\
<atom:link href="http://localhost/storage/88" rel="bookmark"/>\
</p:storagevolume>\
<p:storagevolume xmlns:atom="http://www.w3.org/2005/Atom" p:id="89">\
<atom:link href="http://localhost/v2.0/storage/89" rel="self"/>\
<atom:link href="http://localhost/storage/89" rel="bookmark"/>\
</p:storagevolume>\
</p:Vm>'
out_dict = {}
replaced_xml = util.replace_with_links(prefix_xml,
dict_tag_props, out_dict)
self.assertNotEqual(None, replaced_xml)
self.compare_xml(expected_out_xml, replaced_xml)
def test_xml_with_no_children_to_dict(self):
xml_str = '<tag></tag>'
test_dict = util.xml_to_dict(xml_str)
self.assertEquals(test_dict, None)
def test_get_path_elements(self):
expected_list = ['', 'parent', 0]
obtained_list = []
for element in util.get_path_elements('/parent[1]'):
obtained_list.append(element)
self.assertEquals(expected_list, obtained_list,
'get path elements')
def test_get_project_context(self):
test_context = context.RequestContext('user', 'admin', is_admin=True)
req = BaseRequest({'nova.context': test_context})
(ctx, proj_id) = util.get_project_context(req)
self.assertEquals(ctx, test_context, 'Context test util')
self.assertEquals(proj_id, 'admin')
# def test_remove_version_from_href(self):
# common.remove_version_from_href( \
# 'http://10.10.120.158:8774/v2/virtualmachines/\
# e9f7f71d-8208-1963-77fc-a1c90d4a1802'
# )
# try:
# common.remove_version_from_href('http://localhost/')
# except ValueError, err:
# print err
# else:
# self.fail('No Value Error thrown when removing version number'
# )
def test_invalid_dict(self):
prefix_xml = \
'<p:Vm xmlns:p="http://localhost/prefix"><p:outer><p:a><p:storage>\
<p:id>33</p:id><p:name>ESA</p:name></p:storage></p:a></p:outer>\
<p:storageVolumeId>88</p:storageVolumeId>\
<p:storageVolumeId>89</p:storageVolumeId></p:Vm>'
dict_tag_props = [{
'tag': 'storageVolumeId',
'tag_key': 'id',
'tag_collection_url': 'http://localhost/v2.0/storage',
'tag_attrib': None,
}]
replaced_xml = util.replace_with_links(prefix_xml,
dict_tag_props, {})
print replaced_xml
self.assertNotEqual(None, replaced_xml)
self.assertEqual(util.replace_with_links(prefix_xml, [None],
{}), prefix_xml)
dict_tag_props = [{
'tag': 'storageVolumeId',
'tag_key': 'id',
'tag_collection_url': 'http://localhost/v2.0/storage',
'tag_attrib': None,
3.23: 32,
}]
util.replace_with_links(prefix_xml, dict_tag_props, {})
def test_none_dict(self):
xml_str = '<test>value</test>'
self.assertEquals(xml_str, util.replace_with_links(xml_str,
None, {}))
self.assertEquals(xml_str, util.replace_with_links(xml_str,
[{None: None}], {}))
# def test_single_element(self):
# xml_str = '<test>value</test>'
# dict_tag_props = [{
# 'tag': 'test',
# 'tag_replacement': None,
# 'tag_key': 'key',
# 'tag_collection_url': 'http://localhost/v2.0/collection',
# 'tag_attrib': None,
# }]
#
# replaced_xml = util.replace_with_links(xml_str, dict_tag_props,
# {})
# expected_xml = \
# '<test xmlns:atom="http://www.w3.org/2005/Atom key="value">\
#<atom:link href="http://localhost/v2.0/collection/value" rel="self"/>\
#<atom:link href="http://localhost/collection/value" rel="bookmark"/></test>'
#
# self.assertNotEqual(replaced_xml, xml_str)
def test_tag_dictionary_error(self):
xml_str = '<test xmlns="http://localhost/name"></test>'
dict_tag_props = [{
'tag': 'test',
'tag_replacement': None,
'tag_key': 'key',
'tag_collection_url': 'http://localhost/v2.0/collection',
'tag_attrib': None,
}]
try:
util.replace_with_links(xml_str, dict_tag_props, {})
except util.TagDictionaryError, err:
print err
else:
self.fail('TagDictionary Error not thrown for xml: %s'
% xml_str)
def test_update_dict_using_xpath(self):
xpath_dict = {'/mypath[1]/element': [1, 2, 3]}
expected_dict = {'mypath': [{'element': [1, 2, 3]}]}
input_dict = {'mypath': [{'element': [4, 5, 6]}]}
util.update_dict_using_xpath(input_dict, xpath_dict)
self.assertEquals(input_dict, expected_dict)
self.assertEquals(None, util.update_dict_using_xpath(None,
None))
self.assertEquals(input_dict,
util.update_dict_using_xpath(input_dict,
None))
def test_serialize_simple_obj(self):
class Test:
def __init__(self):
self.a = 10
self.b = None
self.assertEquals(util.serialize_simple_obj(Test(), 'root', ('a',
'b')), '<root><a>10</a><b></b></root>')
self.assertEquals(util.serialize_simple_obj(Test(), 'root', 'c'
), '<root><c/></root>')
def test_append_xml_as_child(self):
xml = '<root>212</root>'
xml = util.append_xml_as_child(xml, '<sub>23</sub>')
print xml
self.assertEquals(util.append_xml_as_child('<root><a>3</a></root>',
'<a>4</a>'), '<root><a>3</a><a>4</a></root>'
)
def test_get_entity_list_xml(self):
entity_list = []
expected_list_xml = \
'<b:entities xmlns:b="http://testnamespace" \
xmlns:atom="http://www.w3.org/2005/Atom"><b:entity type="" id="0">\
<atom:link href="http://localhost:8080/v2/entities/0" rel="self"/>\
<atom:link href="http://localhost:8080/entities/0" rel="bookmark"/></b:entity>\
<b:entity type="" id="1">\
<atom:link href="http://localhost:8080/v2/entities/1" rel="self"/>\
<atom:link href="http://localhost:8080/entities/1" rel="bookmark"/>\
</b:entity><atom:link href="http://markerlink" rel="next"/></b:entities>'
for i in range(2):
href = 'http://localhost:8080/v2/entities/' + str(i)
bookmark = 'http://localhost:8080/entities/' + str(i)
entdict = {'id': str(i), 'type': None,
'links': [{'rel': 'self', 'href': href},
{'rel': 'bookmark', 'href': bookmark}],
}
entity_list.append(entdict)
entities_dict = dict(entities=entity_list)
entities_dict['entities_links'] = [
{
'rel': 'next',
'href': 'http://markerlink'
}
]
self.assertEquals(util.get_entity_list_xml(entities_dict,
{'b': 'http://testnamespace',
'atom': constants.XMLNS_ATOM}, 'entities',
'entity', 'b'), expected_list_xml)
try:
util.get_entity_list_xml({'a': 23, 'b': 23}, None, None,
None)
except Exception, inst:
self.assertTrue(isinstance(inst, LookupError))
self.assertEquals(util.get_entity_list_xml(None, None, None,
None), '')
self.assertEquals(util.get_entity_list_xml({}, None, None,
None), '')
self.assertEquals(util.get_entity_list_xml({'abcd': [None]},
None, 'abcd', None), '<abcd/>')
def test_get_query_fields(self):
req = \
Request.blank('/test?fields=a,b&fields=utilization&fields=')
self.assertEquals(util.get_query_fields(req), ['a', 'b',
'utilization'])
def test_get_select_elements_xml(self):
input_xml = \
'<outer xmlns="http://space/noprefix"><a><sub>32</sub></a>\
<b>83</b><c>32</c></outer>'
self.assertEquals(util.get_select_elements_xml(input_xml, ['a',
'c']),
'<outer xmlns="http://space/noprefix">\
<a><sub>32</sub></a><c>32</c></outer>'
)
prefix_xml = \
'<p:Vm xmlns:p="http://localhost/prefix"><p:outer><p:a><p:storage>\
<p:id>33</p:id><p:name>ESA</p:name></p:storage></p:a></p:outer>\
<p:storageVolumeId>88</p:storageVolumeId>\
<p:storageVolumeId>89</p:storageVolumeId></p:Vm>'
# self.assertEquals(util.get_select_elements_xml(prefix_xml, ['a']),
# '<outer><a><sub>32</sub></a><c>32</c></outer>')
self.assertEquals('<p:Vm xmlns:p="http://localhost/prefix">\
<p:outer><p:a><p:storage><p:id>33</p:id>\
<p:name>ESA</p:name></p:storage></p:a></p:outer></p:Vm>',
util.get_select_elements_xml(prefix_xml,
['outer']))
def test_get_select_elements_xml_default_field(self):
input_xml = \
'<outer xmlns="http://space/noprefix"><a><sub>32</sub></a>\
<b>83</b><c>32</c></outer>'
self.assertEquals(util.get_select_elements_xml(input_xml, ['a',
'c']),
'<outer xmlns="http://space/noprefix">\
<a><sub>32</sub></a><c>32</c></outer>'
)
self.assertEquals(
util.get_select_elements_xml(input_xml, ['c'], 'b'),
'<outer xmlns="http://space/noprefix"><b>83</b><c>32</c></outer>')
prefix_xml = \
'<p:Vm xmlns:p="http://localhost/prefix"><p:outer><p:a><p:storage>\
<p:id>33</p:id><p:name>ESA</p:name></p:storage></p:a></p:outer>\
<p:storageVolumeId>88</p:storageVolumeId>\
<p:storageVolumeId>89</p:storageVolumeId></p:Vm>'
self.assertEquals(
'<p:Vm xmlns:p="http://localhost/prefix">\
<p:storageVolumeId>88</p:storageVolumeId>\
<p:storageVolumeId>89</p:storageVolumeId>\
<p:outer><p:a><p:storage><p:id>33</p:id>\
<p:name>ESA</p:name></p:storage></p:a></p:outer></p:Vm>',
util.get_select_elements_xml(prefix_xml,
['outer'], 'storageVolumeId'))
def test_set_select_attributes(self):
resource_obj = healthnmonResourceModel.ResourceUtilization()
self.assertEquals(resource_obj,
util.set_select_attributes(resource_obj,
None))
def test_get_next_xml(self):
self.assertEquals(util.get_next_xml({'rel': 'next',
'href': 'http://nextlink'}),
'<ns0:link xmlns:ns0="http://www.w3.org/2005/Atom" \
href="http://nextlink" rel="next"/>')
def compare_xml(self, expected, actual):
expectedObject = objectify.fromstring(expected)
expected = etree.tostring(expectedObject)
actualObject = objectify.fromstring(actual)
actual = etree.tostring(actualObject)
self.assertEquals(expected, actual)
if __name__ == '__main__':
# import sys;sys.argv = ['', 'Test.testName']
unittest.main()
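The `test_xml_to_dict` cases above exercise a conversion where child elements become dict keys and repeated tags collapse into a list. A simplified sketch of that behaviour using only the standard library (this is an illustration, not the retired `healthnmon.api.util.xml_to_dict` implementation, which also handles namespaces and attributes):

```python
import xml.etree.ElementTree as ET


def xml_to_dict(xml_str):
    """Convert an XML string to nested dicts; repeated tags become lists."""
    def convert(elem):
        children = list(elem)
        if not children:
            # leaf element: return its text (None for an empty element)
            return elem.text
        out = {}
        for child in children:
            value = convert(child)
            if child.tag in out:
                existing = out[child.tag]
                if not isinstance(existing, list):
                    out[child.tag] = [existing]
                out[child.tag].append(value)
            else:
                out[child.tag] = value
        return out
    return convert(ET.fromstring(xml_str))


host = xml_to_dict(
    "<VmHost><id>VH1</id>"
    "<properties><name>prop1</name><value>value1</value></properties>"
    "<properties><name>prop2</name><value>value2</value></properties>"
    "</VmHost>")
```

This matches the fixture in `test_xml_to_dict`, where the two `<properties>` elements of the `VmHost` document come back as a list of two dicts.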


@ -1,447 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# (c) Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
import webob
import mox
import json
from webob.exc import HTTPNotFound
from nova import context
from healthnmon.db import api
from healthnmon.api import util
from healthnmon.resourcemodel.healthnmonResourceModel import VirtualSwitch
from healthnmon.api.virtualswitch import VirtualSwitchController
from lxml import etree
from lxml import objectify
from StringIO import StringIO
class VirtualSwitchTest(unittest.TestCase):
""" Tests for virtual switch extension """
expected_limited_detail_xml = '<virtualswitches \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<VirtualSwitch><id>virtual-switch-02</id><name>virtual-switch-02</name>\
<switchType>type-02</switchType><subnet id="subnet-392">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-392" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-392" rel="bookmark"/>\
</subnet></VirtualSwitch>\
<atom:link href="http://localhost:8774/v2.0/virtualswitches?limit=1" \
rel="previous"/></virtualswitches>'
expected_detail_xml = '<virtualswitches \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<VirtualSwitch><id>virtual-switch-01</id><name>virtual-switch-01</name>\
<switchType>type-01</switchType><subnet id="subnet-233">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-233" \
rel="self"/><atom:link href="http://localhost:8774/subnets/subnet-233" \
rel="bookmark"/></subnet><subnet id="subnet-03">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-03" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-03" rel="bookmark"/>\
</subnet></VirtualSwitch><VirtualSwitch><id>virtual-switch-02</id>\
<name>virtual-switch-02</name><switchType>type-02</switchType>\
<subnet id="subnet-392"><atom:link \
href="http://localhost:8774/v2.0/subnets/subnet-392" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-392" rel="bookmark"/>\
</subnet></VirtualSwitch></virtualswitches>'
def setUp(self):
""" Setup initial mocks and logging configuration """
super(VirtualSwitchTest, self).setUp()
self.config_drive = None
self.mock = mox.Mox()
self.admin_context = context.RequestContext('admin', '',
is_admin=True)
def tearDown(self):
self.mock.stubs.UnsetAll()
def test_list_virtual_switch_json(self):
expected_out_json = '{"virtualswitches": [{"id": "virtual-switch-01", \
"links": [{"href": \
"http://localhost:8774/v2.0/virtualswitches/virtual-switch-01", \
"rel": "self"}, {"href": \
"http://localhost:8774/virtualswitches/virtual-switch-01", \
"rel": "bookmark"}], "name": "virtual-switch-01"}, \
{"id": "virtual-switch-02", "links": \
[{"href": "http://localhost:8774/v2.0/virtualswitches/virtual-switch-02", \
"rel": "self"}, {"href": \
"http://localhost:8774/virtualswitches/virtual-switch-02", \
"rel": "bookmark"}], "name": "virtual-switch-02"}]}'
virtual_switch_list = self.get_virtual_switch_list()
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switch_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches.json',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().index(request)
self.assertNotEqual(resp, None, 'Return json string')
self.compare_json(expected_out_json, resp.body)
# self.assertEqual(self.expected_index_json, resp.body)
self.mock.stubs.UnsetAll()
def test_list_virtual_switch_xml(self):
expected_out_xml = '<virtualswitches \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<virtualswitch id="virtual-switch-01" name="virtual-switch-01">\
<atom:link \
href="http://localhost:8774/v2.0/virtualswitches/virtual-switch-01" \
rel="self"/>\
<atom:link href="http://localhost:8774/virtualswitches/virtual-switch-01" \
rel="bookmark"/>\
</virtualswitch><virtualswitch id="virtual-switch-02" \
name="virtual-switch-02">\
<atom:link \
href="http://localhost:8774/v2.0/virtualswitches/virtual-switch-02" \
rel="self"/>\
<atom:link href="http://localhost:8774/virtualswitches/virtual-switch-02" \
rel="bookmark"/>\
</virtualswitch></virtualswitches>'
virtual_switch_list = self.get_virtual_switch_list()
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switch_list)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().index(request)
self.assertNotEqual(resp, None, 'Return xml string')
self.compare_xml(expected_out_xml, resp.body)
# self.assertEqual(resp.body, self.expected_index_xml)
self.mock.stubs.UnsetAll()
def test_list_virtual_switch_xml_header(self):
expected_out_xml = '<virtualswitches \
xmlns:atom="http://www.w3.org/2005/Atom" \
xmlns="http://docs.openstack.org/ext/healthnmon/api/v2.0">\
<virtualswitch id="virtual-switch-01" name="virtual-switch-01">\
<atom:link \
href="http://localhost:8774/v2.0/virtualswitches/virtual-switch-01" \
rel="self"/>\
<atom:link href="http://localhost:8774/virtualswitches/virtual-switch-01" \
rel="bookmark"/>\
</virtualswitch><virtualswitch id="virtual-switch-02" \
name="virtual-switch-02">\
<atom:link \
href="http://localhost:8774/v2.0/virtualswitches/virtual-switch-02" \
rel="self"/>\
<atom:link href="http://localhost:8774/virtualswitches/virtual-switch-02" \
rel="bookmark"/>\
</virtualswitch></virtualswitches>'
virtual_switches = self.get_virtual_switch_list()
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switches)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
request.headers['Accept'] = 'application/xml'
resp = VirtualSwitchController().index(request)
self.assertNotEqual(resp, None, 'Return xml string')
self.compare_xml(expected_out_xml, resp.body)
# self.assertEqual(resp.body, self.expected_index_xml)
self.mock.stubs.UnsetAll()
def test_list_virtual_switch_json_header(self):
expected_out_json = '{"virtualswitches": [{"id": "virtual-switch-01", \
"links": [{"href": \
"http://localhost:8774/v2.0/virtualswitches/virtual-switch-01", \
"rel": "self"}, {"href": \
"http://localhost:8774/virtualswitches/virtual-switch-01", \
"rel": "bookmark"}], "name": "virtual-switch-01"}, \
{"id": "virtual-switch-02", "links": [{"href": \
"http://localhost:8774/v2.0/virtualswitches/virtual-switch-02", \
"rel": "self"},{"href": \
"http://localhost:8774/virtualswitches/virtual-switch-02", \
"rel": "bookmark"}], "name": "virtual-switch-02"}]}'
virtual_switches = self.get_virtual_switch_list()
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switches)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
request.headers['Accept'] = 'application/json'
resp = VirtualSwitchController().index(request)
self.assertNotEqual(resp, None, 'Return json string')
self.compare_json(expected_out_json, resp.body)
# self.assertEqual(self.expected_index_json, resp.body)
self.mock.stubs.UnsetAll()
def test_list_limited_virtual_switch_detail_xml(self):
virtual_switches = self.get_virtual_switch_list()
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switches)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches/detail.xml?'
'limit=1&marker=virtual-switch-01',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().detail(request)
self.assertEqual(resp.body, self.expected_limited_detail_xml)
def test_list_virtual_switch_detail_xml(self):
virtual_switches = self.get_virtual_switch_list()
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switches)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches/detail.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().detail(request)
self.assertEqual(resp.body, self.expected_detail_xml)
def test_list_virtual_switch_detail_none_xml(self):
virtual_switches = None
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switches)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches/detail.xml',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().detail(request)
self.assertNotEqual(resp, None, 'Return xml string')
def test_list_virtual_switch_none_check(self):
self.mock.StubOutWithMock(api, 'virtual_switch_get_all_by_filters')
api.virtual_switch_get_all_by_filters(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(None)
self.mock.ReplayAll()
request = webob.Request.blank('/v2.0/virtualswitches',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().index(request)
self.assertEqual(resp.body, '{"virtualswitches": []}',
'Return json string')
def test_virtual_switch_details_json(self):
expected_out_json = '{"VirtualSwitch": {"subnets": [{"id": \
"subnet-3883", "links": [{"href": \
"http://localhost:8774/v2.0/subnets/subnet-3883", "rel": "self"}, \
{"href": "http://localhost:8774/subnets/subnet-3883", "rel": "bookmark"}]}, \
{"id": "subnet-323", "links": [{"href": \
"http://localhost:8774/v2.0/subnets/subnet-323", "rel": "self"}, \
{"href": "http://localhost:8774/subnets/subnet-323", \
"rel": "bookmark"}]}], "id": "virtual-switch-01", \
"switchType": "dvSwitch", "name": "virtual-switch-01"}}'
virtual_switch_list = self.get_single_virtual_switch()
self.mock.StubOutWithMock(api, 'virtual_switch_get_by_ids')
api.virtual_switch_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switch_list)
self.mock.ReplayAll()
request = \
webob.Request.blank(
'/v2.0/virtualswitches/virtual-switch-01.json',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().show(request,
'virtual-switch-01')
self.assertNotEqual(resp, None,
'Return json response for virtual-switch-01'
)
self.mock.stubs.UnsetAll()
self.compare_json(expected_out_json, resp.body)
def test_virtual_switch_details_xml(self):
expected_out_xml = '<VirtualSwitch><id>virtual-switch-01</id>\
<name>virtual-switch-01</name><switchType>dvSwitch</switchType>\
<subnet xmlns:atom="http://www.w3.org/2005/Atom" id="subnet-3883">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-3883" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-3883" rel="bookmark"/>\
</subnet>\
<subnet xmlns:atom="http://www.w3.org/2005/Atom" id="subnet-323">\
<atom:link href="http://localhost:8774/v2.0/subnets/subnet-323" rel="self"/>\
<atom:link href="http://localhost:8774/subnets/subnet-323" rel="bookmark"/>\
</subnet></VirtualSwitch>'
virtual_switch_list = self.get_single_virtual_switch()
self.mock.StubOutWithMock(api, 'virtual_switch_get_by_ids')
api.virtual_switch_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switch_list)
self.mock.ReplayAll()
request = \
webob.Request.blank(
'/v2.0/virtualswitches/virtual-switch-01.xml',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().show(request,
'virtual-switch-01')
self.assertNotEqual(resp, None,
'Return xml response for virtual-switch-01')
self.compare_xml(expected_out_xml, resp.body)
self.mock.stubs.UnsetAll()
def test_virtual_switch_none_details_xml(self):
virtual_switch_list = None
self.mock.StubOutWithMock(api, 'virtual_switch_get_by_ids')
api.virtual_switch_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switch_list)
self.mock.ReplayAll()
request = \
webob.Request.blank(
'/v2.0/virtualswitches/virtual-switch-01.xml',
base_url='http://localhost:8774/v2.0/'
)
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().show(request,
'virtual-switch-01')
self.assertNotEqual(resp, None,
'Return xml response for virtual-switch-01')
self.mock.stubs.UnsetAll()
def test_virtual_switch_details_json_exception(self):
virtual_switch_list = self.get_single_virtual_switch()
xml_utils = util
self.mock.StubOutWithMock(xml_utils, 'xml_to_dict')
xml_utils.xml_to_dict(mox.IgnoreArg()).AndRaise(
IndexError('Test index'))
self.mock.StubOutWithMock(api, 'virtual_switch_get_by_ids')
api.virtual_switch_get_by_ids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(virtual_switch_list)
self.mock.ReplayAll()
request = \
webob.Request.blank(
'/v2.0/virtualswitches/virtual-switch-01.json',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().show(request, 'virtual-switch-01')
self.assertIsInstance(resp, HTTPNotFound)
def test_query_field_key(self):
expected_out_json = '{"VirtualSwitch": {"id": "virtual-switch-01", \
"name": "virtual-switch-01"}}'
virtual_switch_list = self.get_single_virtual_switch()
self.mock.StubOutWithMock(api, 'virtual_switch_get_by_ids')
api.virtual_switch_get_by_ids(
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn(virtual_switch_list)
self.mock.ReplayAll()
request = \
webob.Request.blank('/v2.0/virtualswitches/virtual-switch-01.json?fields=id,name',
base_url='http://localhost:8774/v2.0/')
request.environ['nova.context'] = self.admin_context
resp = VirtualSwitchController().show(request, 'virtual-switch-01')
self.assertNotEqual(resp, None,
'Return xml response for virtual-switch-01')
self.compare_json(expected_out_json, resp.body)
self.mock.stubs.UnsetAll()
def get_single_virtual_switch(self):
virtual_switch_list = []
virtual_switch = VirtualSwitch()
virtual_switch.set_id('virtual-switch-01')
virtual_switch.set_name('virtual-switch-01')
virtual_switch.set_switchType('dvSwitch')
virtual_switch.add_subnetIds('subnet-3883')
virtual_switch.add_subnetIds('subnet-323')
virtual_switch_list.append(virtual_switch)
return virtual_switch_list
def get_virtual_switch_list(self):
virtual_switch_list = []
virtual_switch = VirtualSwitch()
virtual_switch.set_id('virtual-switch-01')
virtual_switch.set_name('virtual-switch-01')
virtual_switch.set_switchType('type-01')
virtual_switch.add_subnetIds('subnet-233')
virtual_switch.add_subnetIds('subnet-03')
virtual_switch_list.append(virtual_switch)
virtual_switch = VirtualSwitch()
virtual_switch.set_id('virtual-switch-02')
virtual_switch.set_name('virtual-switch-02')
virtual_switch.set_switchType('type-02')
virtual_switch.add_subnetIds('subnet-392')
virtual_switch_list.append(virtual_switch)
return virtual_switch_list
def compare_xml(self, expected, actual):
expectedObject = objectify.fromstring(expected)
expected = etree.tostring(expectedObject)
actualObject = objectify.fromstring(actual)
actual = etree.tostring(actualObject)
self.assertEqual(expected, actual)
def compare_json(self, expected, actual):
expectedObject = json.loads(expected)
actualObject = json.loads(actual)
self.assertEqual(expectedObject, actualObject)
if __name__ == '__main__':
# import sys;sys.argv = ['', 'Test.testName']
unittest.main()
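The tests above all follow mox's stub-out/record/replay cycle: `StubOutWithMock` replaces a function, `AndReturn` records the expected call, `ReplayAll` switches to replay mode, and `UnsetAll` restores the original. Since mox is a legacy Python 2 library, here is a minimal sketch of the same stub-and-verify pattern using the standard library's `unittest.mock`; the `Service` class and `describe` helper are hypothetical stand-ins, not part of healthnmon:

```python
from unittest import mock


class Service(object):
    """Hypothetical stand-in for the module being stubbed out."""
    def get_name(self):
        return 'real'


def describe(svc):
    """Hypothetical code under test that calls the stubbed method."""
    return 'name=%s' % svc.get_name()


svc = Service()
# Equivalent of StubOutWithMock(...) + AndReturn(...): patch the method
# so it returns a canned value for the duration of the with-block.
with mock.patch.object(svc, 'get_name', return_value='stubbed') as stub:
    result = describe(svc)
# Equivalent of VerifyAll(): assert the recorded call actually happened.
stub.assert_called_once_with()
print(result)  # -> name=stubbed
```

The with-block plays the role of `ReplayAll`/`UnsetAll`: the stub is active only inside it, and the original method is restored automatically on exit.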
