Initial Commit

This commit is contained in:
samu4924 2013-03-30 14:47:00 -05:00
commit c177eb7cce
52 changed files with 4532 additions and 0 deletions

30
.gitignore vendored Normal file
View File

@ -0,0 +1,30 @@
*.py[co]
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
#Translations
*.mo
#Mr Developer
.mr.developer.cfg
# IDE Project Files
*.project
*.pydev*

0
AUTHORS.md Normal file
View File

0
HISTORY.md Normal file
View File

13
LICENSE Normal file
View File

@ -0,0 +1,13 @@
# Copyright 2013 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

1
MANIFEST.in Normal file
View File

@ -0,0 +1 @@
include README.md LICENSE NOTICE HISTORY.md pip-requires

0
NOTICE Normal file
View File

101
README.md Normal file
View File

@ -0,0 +1,101 @@
Open CAFE Core
================================
<pre>
( (
) )
.........
| |___
| |_ |
| :-) |_| |
| |___|
|_______|
=== CAFE Core ===
</pre>
The Common Automation Framework Engine is the core engine/driver used to build an automated testing framework. It is designed to be used as the
base engine for building an automated framework for API and non-UI resource testing, and it supports functional, integration, and
reliability testing. The engine is **NOT** designed to support performance or load testing.

CAFE Core provides a model, a pattern, and assorted common tools for building automated tests. It provides its own lightweight unittest-based
runner; however, it is designed to be modular and can be extended to support most test case front ends/runners (nose, pytest, lettuce, testr, etc.)
through driver plug-ins.

Supported Operating Systems
---------------------------
Open CAFE Core has been developed primarily in Linux and Mac environments; however, it also supports installation and
execution on Windows.

Installation
------------
Open CAFE Core can be [installed with pip](https://pypi.python.org/pypi/pip) from the git repository after it is cloned to a local machine.

* Clone this repository to your local machine.
* `cd` to the root directory of your cloned repository.
* Run `pip install . --upgrade` and pip will automatically install all dependencies.

After CAFE Core is installed you will have command line access to the default unittest runner, cafe-runner. (See `cafe-runner --help` for more info.)

Remember, Open CAFE is just the core driver/engine. You have to build an implementation and a test repository that use it!
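
As a rough sketch, the installation steps above look like the following shell session (the clone URL is a placeholder; substitute the URL of your clone):

```
# Illustrative install session; <your-clone-url> is a placeholder.
git clone <your-clone-url> opencafe
cd opencafe
pip install . --upgrade

# Verify the runner is on your PATH
cafe-runner --help
```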
Configuration
--------------
Open CAFE works out of the box with the cafe-runner (cafe.drivers.unittest). CAFE will auto-generate a base engine.config during installation. This
base configuration will be installed to: `<USER_HOME>/.cloudcafe/configs/engine.config`

If you wish to modify default installation values, you can update the engine.config file after CAFE installation. Keep in mind that the engine will
overwrite this file on installation/upgrade.

Terminology
-----------
Following are some notes on Open CAFE lingo and concepts.
###Implementation
Although the engine can serve as a basic framework for testing, it's meant to
be used as the base for the implementation of a product-specific testing
framework.
###Product
Anything that's being tested by an implementation of Open CAFE Core. If you would like to see a reference implementation, there is an
[Open Source implementation](https://github.com/stackforge) based on [OpenStack](http://www.openstack.org/).
###Client / Client Method
A **client** is an "at-least-one"-to-"at-most-one" mapping of a product's functionality to a collection of client methods.
Using a [REST API](https://en.wikipedia.org/wiki/Representational_state_transfer) as an example, a client that represents that API in
CAFE will contain at least one (but possibly more) method(s) for every function exposed by that API. Should a call in the API prove to be too
difficult or cumbersome to define via a single **client method**, then multiple client methods can be defined such that as a whole
they represent the complete set of that API call's functionality. A **client method** should never be a superset of more than one call's
functionality.
###Behavior
A **behavior** is a many-to-many mapping of client methods to business logic, functioning as compound methods. An
example behavior might be to POST content, perform a GET to verify the POST, and then return the verified data.
###Model
A **model** can be many things, but generally is a class that describes a specific data object.
An example may be a collection of logic for converting an XML or JSON response into a
data object, so that a single consumer can be written to consume the model.
###Provider
This is meant to be a convenience facade that performs configuration of clients
and behaviors to provide configuration-based default combinations of different clients and behaviors
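
A minimal, hypothetical sketch of the client/behavior/model pattern described above (`Widget`, `WidgetClient`, and `WidgetBehaviors` are illustrative names only, not part of the CAFE engine; a dict stands in for a real HTTP transport):

```python
class Widget(object):
    """Model: a data object deserialized from an API response."""
    def __init__(self, name):
        self.name = name

    @classmethod
    def from_json(cls, json_dict):
        return cls(json_dict["name"])


class WidgetClient(object):
    """Client: at least one client method per API call."""
    def __init__(self, storage):
        self.storage = storage  # stands in for a real HTTP transport

    def create_widget(self, name):   # maps to POST /widgets
        self.storage[name] = {"name": name}
        return {"name": name}

    def get_widget(self, name):      # maps to GET /widgets/<name>
        return self.storage.get(name)


class WidgetBehaviors(object):
    """Behavior: business logic composed from client methods."""
    def __init__(self, client):
        self.client = client

    def create_and_verify(self, name):
        # POST the content, GET it back to verify, return the verified model
        self.client.create_widget(name)
        return Widget.from_json(self.client.get_widget(name))


behaviors = WidgetBehaviors(WidgetClient(storage={}))
print(behaviors.create_and_verify("sprocket").name)  # sprocket
```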
Basic CAFE Package Anatomy
-------
Below is a short description of the top level CAFE Packages.
##cafe
This is the root package. The wellspring from which the CAFE flows...
##common
Contains modules common to the entire engine. This is the primary namespace for tools, data generators, common reporting classes, etc.
##engine
Contains all the base implementations of clients, behaviors, and models to be used by a CAFE implementation. It also contains supported generic clients,
behaviors, and models. For instance, the engine.clients.remote_instance clients are meant to be used directly by an implementation.
##drivers
The end goal of CAFE is to build an implementation that talks to a particular product or products, plus a repository of automated test cases. The drivers
package is specifically for building CAFE support for various Python-based test runners. There is a default unittest-based driver which
heavily extends the basic unittest functionality. Driver plug-ins can easily be constructed to add CAFE support for most of the popular runners already
available (nose, pytest, lettuce, testr, etc.) or even for 100% custom test case drivers if desired.

22
cafe/__init__.py Normal file
View File

@ -0,0 +1,22 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
__title__ = 'cafe'
__version__ = '0.0.1'
#__build__ = 0x010100
__author__ = 'Rackspace Cloud QE'
__license__ = 'Internal Only'
__copyright__ = 'Copyright 2013 Rackspace Inc.'

16
cafe/common/__init__.py Normal file
View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,5 @@
'''
@summary: Classes and Utilities for adapters that provide low level connectivity to various resources
@note: Most often consumed by a L{cafe.engine.clients} or L{cafe.common.reporting}
@note: Should not be used directly by a test case or process
'''

View File

@ -0,0 +1,13 @@
class BaseDataGenerator(object):
    '''
    Any data generator should extend this class. A subclass should populate
    self.test_records with the list of dictionaries that you want the tests
    to run with.
    '''

    def __init__(self):
        self.test_records = []

    def generate_test_records(self):
        for test_record in self.test_records:
            yield test_record
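
A minimal subclass of the pattern above (the base class is restated so the sketch is self-contained; `UserDataGenerator` and its records are hypothetical):

```python
class BaseDataGenerator(object):
    def __init__(self):
        self.test_records = []

    def generate_test_records(self):
        for test_record in self.test_records:
            yield test_record


class UserDataGenerator(BaseDataGenerator):
    """Hypothetical generator: one record per user the tests should cover."""
    def __init__(self):
        self.test_records = [
            {"username": "alice", "role": "admin"},
            {"username": "bob", "role": "member"},
        ]


records = list(UserDataGenerator().generate_test_records())
print(len(records))  # 2
```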

View File

@ -0,0 +1,3 @@
'''
@summary: Generic data generators for fuzzing
'''

View File

@ -0,0 +1,67 @@
'''
@summary: Generic Data Generators for fuzzing
@copyright: Copyright (c) 2013 Rackspace US, Inc.
'''
import sys
from cafe.common.generators.base import BaseDataGenerator
class SecDataGeneratorString(BaseDataGenerator):
    '''
    @summary: Used for reading data from a file
    '''

    def __init__(self, count=-1, disallow='',
                 filename="../data/fuzz/fuzz_data"):
        '''
        @summary: Data generator that opens a file and sends one line at
            a time
        @param count: number of lines to send
        @type count: int
        @param disallow: characters not allowed
        @type disallow: string
        @param filename: path to the file, starting from the bin directory
        @type filename: string
        @return: None
        @note: ints are stored in two's complement, so negative numbers
            return positive numbers with unexpected results; (-1, 0)
            returns 255
        '''
        # Ensure inputs are valid
        try:
            file_pointer = open(filename)
        except Exception as exception:
            sys.stderr.write("Check filename in data generator "
                             "SecDataGeneratorString.\n")
            raise exception
        if type(count) != int:
            count = -1
        if type(disallow) != str:
            disallow = ''

        # Generate the data
        self.test_records = []
        for line_number, line in enumerate(file_pointer):
            if line_number == count:
                break
            line = line.rstrip('\r\n')
            for char in disallow:
                line = line.replace(char, "")
            self.test_records.append({"fuzz_data": line, "result": "unknown"})
        file_pointer.close()


class SecDataGeneratorCount(BaseDataGenerator):
    '''
    @summary: Used for generating a count
    '''

    def __init__(self, start, stop):
        '''
        @summary: Data generator that yields each int in [start, stop)
        @param start: start int
        @type start: int
        @param stop: stop int
        @type stop: int
        '''
        self.test_records = []
        for i in range(start, stop):
            self.test_records.append({"fuzz_data": i})

View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,120 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import logging
import os
import sys
from cafe.engine.config import EngineConfig
engine_config = EngineConfig()
def get_object_namespace(obj):
    '''Attempts to return a dotted string name representation of the general
    form 'package.module.class.obj' for an object that has an __mro__
    attribute.

    Designed to let you name loggers inside objects in such a way that the
    engine logger organizes them as child loggers of the modules they
    originate from.

    So that logging doesn't cause exceptions, if the namespace cannot be
    extracted from the object's __mro__ attribute, the name is set to a
    probably-unique string (the id() of the passed object) and is then
    further improved by a series of functions until one of them fails.
    The value of the last successful name-setting method is returned.
    '''
    try:
        return parse_class_namespace_string(str(obj.__mro__[0]))
    except Exception:
        pass

    # The mro name wasn't available; generate a unique name.
    # By default, the name is set to the memory address of the passed-in
    # object, since that's guaranteed to work.
    name = str(id(obj))
    try:
        name = "{0}_{1}".format(name, obj.__name__)
    except Exception:
        pass
    return name
def parse_class_namespace_string(class_string):
    '''Parses the dotted namespace out of an object's __mro__ entry.
    Returns a string.
    '''
    class_string = str(class_string)
    class_string = class_string.replace("'>", "")
    class_string = class_string.replace("<class '", "")
    return class_string
def getLogger(log_name, log_level=None):
    '''Convenience function to create a logger and set its log level at the
    same time. The log level defaults to logging.DEBUG.
    '''
    new_log = logging.getLogger(name=log_name)
    new_log.setLevel(log_level or logging.DEBUG)

    if engine_config.use_verbose_logging:
        if logging.getLogger(log_name).handlers == []:
            if log_name == "":
                log_name = engine_config.master_log_file_name
            new_log.addHandler(setup_new_cchandler(log_name))
    return new_log
def setup_new_cchandler(
        log_file_name, log_dir=None, encoding=None, msg_format=None):
    '''Creates a log handler named <log_file_name> configured to save the log
    in <log_dir>, <os environment variable 'CLOUDCAFE_LOG_PATH'>, or
    './logs', in that order of precedence.

    File handler defaults: 'a+', encoding=encoding or "UTF-8", delay=True
    '''
    log_dir = log_dir or engine_config.log_directory
    try:
        log_dir = os.path.expanduser(log_dir)
    except Exception as exception:
        sys.stderr.write(
            "\nUnable to verify log directory: {0}\n".format(exception))

    try:
        if not os.path.exists(log_dir):
            os.mkdir(log_dir)
    except Exception as exception:
        sys.stderr.write(
            "\nError creating log directory: {0}\n".format(exception))

    log_path = os.path.join(log_dir, "{0}.log".format(log_file_name))

    # Set up a handler with encoding and message formatter in the log
    # directory
    log_handler = logging.FileHandler(
        log_path, "a+", encoding=encoding or "UTF-8", delay=True)
    fmt = msg_format or "%(asctime)s: %(levelname)s: %(name)s: %(message)s"
    log_handler.setFormatter(logging.Formatter(fmt=fmt))
    return log_handler
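
The namespace parsing used by this module can be exercised standalone; the helper is restated here so the example is self-contained:

```python
def parse_class_namespace_string(class_string):
    # Strips the "<class '...'>" wrapper off a repr of a class,
    # leaving the dotted namespace.
    class_string = str(class_string)
    class_string = class_string.replace("'>", "")
    class_string = class_string.replace("<class '", "")
    return class_string


print(parse_class_namespace_string("<class 'package.module.Klass'>"))
# package.module.Klass
```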

View File

@ -0,0 +1,158 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
'''
@summary: Generic classes for test statistics
'''
import os
from datetime import datetime
from unittest2.result import TestResult


class TestRunMetrics(object):
    '''
    @summary: Tracks test counts, a timer, and an overall result for a
        test run
    @ivar total_tests: Total number of tests run
    @type total_tests: C{int}
    @ivar total_passed: Number of tests that passed
    @type total_passed: C{int}
    @ivar total_failed: Number of tests that failed
    @type total_failed: C{int}
    @todo: This is a stopgap. It will be necessary to override large portions
        of the runner and the default unittest.TestCase architecture to make
        this auto-magically work with unittest properly.
        This should be a child of unittest.TestResult
    '''

    def __init__(self):
        self.total_tests = 0
        self.total_passed = 0
        self.total_failed = 0
        self.timer = TestTimer()
        self.result = TestResultTypes.UNKNOWN
class TestResultTypes(object):
    '''
    @summary: Types dictating an individual test case result
    @cvar PASSED: Test has passed
    @type PASSED: C{str}
    @cvar FAILED: Test has failed
    @type FAILED: C{str}
    @cvar SKIPPED: Test was skipped
    @type SKIPPED: C{str}
    @cvar TIMEDOUT: Test exceeded pre-defined execution time limit
    @type TIMEDOUT: C{str}
    @note: This is essentially an enumerated type
    '''
    PASSED = "Passed"
    FAILED = "Failed"
    SKIPPED = "Skipped"    # Not supported yet
    TIMEDOUT = "Timedout"  # Not supported yet
    UNKNOWN = "UNKNOWN"
class TestTimer(object):
    '''
    @summary: Generic timer used to track any time span
    @ivar start_time: Timestamp from the start of the timer
    @type start_time: C{datetime}
    @ivar stop_time: Timestamp of the end of the timer
    @type stop_time: C{datetime}
    '''

    def __init__(self):
        self.start_time = None
        self.stop_time = None

    def start(self):
        '''
        @summary: Starts this timer
        @return: None
        @rtype: None
        '''
        self.start_time = datetime.now()

    def stop(self):
        '''
        @summary: Stops this timer
        @return: None
        @rtype: None
        '''
        self.stop_time = datetime.now()

    def get_elapsed_time(self):
        '''
        @summary: Convenience method for total elapsed time
        @rtype: C{timedelta}
        @return: Elapsed time for this timer; a zero timedelta if the timer
            has not started
        '''
        if self.start_time is None:
            # The timer hasn't started; err on the side of caution
            right_now = datetime.now()
            return right_now - right_now
        if self.stop_time is None:
            return datetime.now() - self.start_time
        return self.stop_time - self.start_time
class PBStatisticsLog(object):
    '''
    @summary: PSYCHOTICALLY BASIC statistics logger
    @ivar File: File name of this logger
    @type File: C{str}
    @ivar FileMode: Mode this logger runs in: 'a' or 'w'
    @type FileMode: C{str}
    @todo: Upgrade this astoundingly basic logger to the Python or Twisted
        logging framework
    @attention: THIS LOGGER IS DESIGNED TO BE TEMPORARY. It will be replaced
        in the matured framework.
    '''

    def __init__(self, fileName=None, log_dir='.', startClean=False):
        self.FileMode = 'a'
        if fileName is not None:
            if not os.path.exists(log_dir):
                os.makedirs(log_dir)
            self.File = os.path.normpath(os.path.join(log_dir, fileName))
            if startClean and os.path.exists(self.File):
                # Force the file to be overwritten before any writing
                os.remove(self.File)
        else:
            self.File = None
        if self.File is not None and not os.path.exists(self.File):
            # Write out the header to the stats log
            self.__write(
                "Elapsed Time,Start Time,Stop Time,Result,Errors,Warnings")

    def __write(self, message):
        '''
        @summary: Writes a message to this log file
        @param message: Line to be written
        @type message: C{str}
        @return: None
        @rtype: None
        '''
        if self.File is not None:
            log = open(self.File, self.FileMode)
            log.write("%s\n" % message)
            log.close()

    def report(self, test_result=None):
        test_result = test_result or TestRunMetrics()
        self.__write("{0},{1},{2},{3}".format(
            test_result.timer.get_elapsed_time(),
            test_result.timer.start_time,
            test_result.timer.stop_time,
            test_result.result))
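
The TestTimer above can be exercised on its own; the class is restated here (stdlib only) so the sketch is self-contained:

```python
from datetime import datetime


class TestTimer(object):
    def __init__(self):
        self.start_time = None
        self.stop_time = None

    def start(self):
        self.start_time = datetime.now()

    def stop(self):
        self.stop_time = datetime.now()

    def get_elapsed_time(self):
        # Zero timedelta if never started; running total if not yet stopped
        if self.start_time is None:
            right_now = datetime.now()
            return right_now - right_now
        return (self.stop_time or datetime.now()) - self.start_time


timer = TestTimer()
timer.start()
timer.stop()
elapsed = timer.get_elapsed_time()
print(elapsed.total_seconds() >= 0)  # True
```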

16
cafe/drivers/__init__.py Normal file
View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,25 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
def tags(*tags, **attrs):
    def _decorator(func):
        setattr(func, '__test_tags__', [])
        setattr(func, '__test_attrs__', {})
        func.__test_tags__.extend(tags)
        func.__test_attrs__.update(attrs)
        return func
    return _decorator
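
The decorator above attaches tag and attribute metadata to a test function, which a runner can later filter on. It can be exercised directly (the test function name is illustrative):

```python
def tags(*tags, **attrs):
    def _decorator(func):
        setattr(func, '__test_tags__', [])
        setattr(func, '__test_attrs__', {})
        func.__test_tags__.extend(tags)
        func.__test_attrs__.update(attrs)
        return func
    return _decorator


@tags('smoke', 'positive', priority='high')
def test_create_widget():
    pass


print(test_create_widget.__test_tags__)   # ['smoke', 'positive']
print(test_create_widget.__test_attrs__)  # {'priority': 'high'}
```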

View File

@ -0,0 +1,236 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
'''
@summary: Base classes for test fixtures
@note: Corresponds DIRECTLY TO A unittest.TestCase
@see: http://docs.python.org/library/unittest.html#unittest.TestCase
'''
import unittest2 as unittest
from datetime import datetime

from cafe.engine.config import EngineConfig
from cafe.common.reporting import cclogging
from cafe.common.reporting.metrics import TestRunMetrics
from cafe.common.reporting.metrics import TestResultTypes
from cafe.common.reporting.metrics import PBStatisticsLog

engine_config = EngineConfig()
class BaseTestFixture(unittest.TestCase):
    '''
    @summary: Foundation for a TestRepo test fixture
    @note: This is the base class for ALL test cases in TestRepo. Add new
        functionality carefully.
    @see: http://docs.python.org/library/unittest.html#unittest.TestCase
    '''

    @classmethod
    def assertClassSetupFailure(cls, message):
        '''
        @summary: Use this if you need to fail from a test fixture's
            setUpClass() method
        '''
        cls.fixture_log.error("FATAL: %s:%s" % (cls.__name__, message))
        raise AssertionError("FATAL: %s:%s" % (cls.__name__, message))

    @classmethod
    def assertClassTeardownFailure(cls, message):
        '''
        @summary: Use this if you need to fail from a test fixture's
            tearDownClass() method
        '''
        cls.fixture_log.error("FATAL: %s:%s" % (cls.__name__, message))
        raise AssertionError("FATAL: %s:%s" % (cls.__name__, message))

    def shortDescription(self):
        '''
        @summary: Returns a one-line description of the test
        '''
        if self._testMethodDoc is not None:
            if self._testMethodDoc.startswith("\n"):
                self._testMethodDoc = " ".join(
                    self._testMethodDoc.splitlines()).strip()
        return unittest.TestCase.shortDescription(self)
    @classmethod
    def setUpClass(cls):
        super(BaseTestFixture, cls).setUpClass()

        # Set up the root log handler, but only if the root logger doesn't
        # already have one
        if cclogging.getLogger('').handlers == []:
            cclogging.getLogger('').addHandler(
                cclogging.setup_new_cchandler('cc.master'))

        # Set up the fixture log, which is really just a copy of the master
        # log for the duration of this test fixture
        cls.fixture_log = cclogging.getLogger('')
        cls._fixture_log_handler = cclogging.setup_new_cchandler(
            cclogging.get_object_namespace(cls))
        cls.fixture_log.addHandler(cls._fixture_log_handler)

        # @todo: Upgrade the metrics to be more unittest compatible.
        # Currently the unittest results are not available at the fixture
        # level, only at the test case or the test suite and runner level.

        # Set up the fixture level metrics
        cls.fixture_metrics = TestRunMetrics()
        cls.fixture_metrics.timer.start()

        # Report
        cls.fixture_log.info("{0}".format('=' * 56))
        cls.fixture_log.info("Fixture...: {0}".format(
            str(cclogging.get_object_namespace(cls))))
        cls.fixture_log.info("Created At: {0}".format(
            cls.fixture_metrics.timer.start_time))
        cls.fixture_log.info("{0}".format('=' * 56))
    @classmethod
    def tearDownClass(cls):
        # Stop the timers and calculate the metrics
        cls.fixture_metrics.timer.stop()
        if (cls.fixture_metrics.total_passed ==
                cls.fixture_metrics.total_tests):
            cls.fixture_metrics.result = TestResultTypes.PASSED
        else:
            cls.fixture_metrics.result = TestResultTypes.FAILED

        # Report
        cls.fixture_log.info("{0}".format('=' * 56))
        cls.fixture_log.info("Fixture.....: {0}".format(
            str(cclogging.get_object_namespace(cls))))
        cls.fixture_log.info("Result......: {0}".format(
            cls.fixture_metrics.result))
        cls.fixture_log.info("Start Time..: {0}".format(
            cls.fixture_metrics.timer.start_time))
        cls.fixture_log.info("Elapsed Time: {0}".format(
            cls.fixture_metrics.timer.get_elapsed_time()))
        cls.fixture_log.info("Total Tests.: {0}".format(
            cls.fixture_metrics.total_tests))
        cls.fixture_log.info("Total Passed: {0}".format(
            cls.fixture_metrics.total_passed))
        cls.fixture_log.info("Total Failed: {0}".format(
            cls.fixture_metrics.total_failed))
        cls.fixture_log.info("{0}".format('=' * 56))

        # Remove the fixture log handler from the fixture log
        cls.fixture_log.removeHandler(cls._fixture_log_handler)

        # Call the super teardown after we've finished our additions to
        # teardown
        super(BaseTestFixture, cls).tearDownClass()
    def setUp(self):
        # Set up the timer and other custom init jazz
        self.fixture_metrics.total_tests += 1
        self.test_metrics = TestRunMetrics()
        self.test_metrics.timer.start()

        # Log header information
        self.fixture_log.info("{0}".format('=' * 56))
        self.fixture_log.info("Test Case.: {0}".format(self._testMethodName))
        self.fixture_log.info("Created.At: {0}".format(
            self.test_metrics.timer.start_time))
        if self.shortDescription():
            self.fixture_log.info("{0}".format(self.shortDescription()))
        self.fixture_log.info("{0}".format('=' * 56))

        # @todo: Get rid of this hard-coded value for the statistics
        # Set up the stats log
        self.stats_log = PBStatisticsLog(
            "{0}.statistics.csv".format(self._testMethodName),
            "{0}/../statistics/".format(engine_config.log_directory))

        # Let the base handle whatever hoodoo it needs
        unittest.TestCase.setUp(self)
    def tearDown(self):
        # Stop the timer and other custom destroy jazz
        self.test_metrics.timer.stop()

        # @todo: This MUST be upgraded from resultForDoCleanups into a
        # better pattern or working with the result object directly.
        # This is related to the todo in L{TestRunMetrics}

        # Build metrics
        if self._resultForDoCleanups.wasSuccessful():
            self.fixture_metrics.total_passed += 1
            self.test_metrics.result = TestResultTypes.PASSED
        else:
            self.fixture_metrics.total_failed += 1
            self.test_metrics.result = TestResultTypes.FAILED

        # Report
        self.fixture_log.info("{0}".format('=' * 56))
        self.fixture_log.info("Test Case...: {0}".format(
            self._testMethodName))
        self.fixture_log.info("Result......: {0}".format(
            self.test_metrics.result))
        self.fixture_log.info("Start Time..: {0}".format(
            self.test_metrics.timer.start_time))
        self.fixture_log.info("Elapsed Time: {0}".format(
            self.test_metrics.timer.get_elapsed_time()))
        self.fixture_log.info("{0}".format('=' * 56))

        # Write out our statistics
        self.stats_log.report(self.test_metrics)

        # Let the base handle whatever hoodoo it needs
        super(BaseTestFixture, self).tearDown()
class BaseParameterizedTestFixture(BaseTestFixture):
    """TestCase classes that want to be parameterized should
    inherit from this class.
    """

    def __copy__(self):
        new_copy = self.__class__(self._testMethodName)
        for key in self.__dict__.keys():
            setattr(new_copy, key, self.__dict__[key])
        return new_copy

    def setUp(self):
        super(BaseParameterizedTestFixture, self).setUp()

    def __str__(self):
        if "test_record" in self.__dict__:
            return self._testMethodName + " " + str(self.test_record)
        else:
            return super(BaseParameterizedTestFixture, self).__str__()
class BaseBurnInTestFixture(BaseTestFixture):
    '''
    @summary: Base test fixture that allows for burn-in tests
    '''

    @classmethod
    def setUpClass(cls):
        super(BaseBurnInTestFixture, cls).setUpClass()
        cls.test_list = []
        cls.iterations = 0

    @classmethod
    def addTest(cls, test_case):
        cls.test_list.append(test_case)

    def setUp(self):
        # Let the base handle whatever hoodoo it needs
        super(BaseBurnInTestFixture, self).setUp()

    def tearDown(self):
        # Let the base handle whatever hoodoo it needs
        super(BaseBurnInTestFixture, self).tearDown()
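
A stripped-down sketch of the fixture-metrics pattern above, using only the stdlib unittest and none of the CAFE imports (`TimedFixture` and `SampleTests` are illustrative names, and the pass-counting is simplified; the real fixture inspects the unittest result object):

```python
import unittest


class TimedFixture(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Fixture-level counters, as in BaseTestFixture.setUpClass
        cls.total_tests = 0
        cls.total_passed = 0

    def setUp(self):
        type(self).total_tests += 1

    def tearDown(self):
        # Simplified: count every completed test as passed
        type(self).total_passed += 1


class SampleTests(TimedFixture):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


suite = unittest.TestLoader().loadTestsFromTestCase(SampleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(SampleTests.total_tests, result.wasSuccessful())  # 1 True
```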

View File

@ -0,0 +1,5 @@
'''
@summary: Classes and Utilities for adapters that provide low level connectivity to various resources
@note: Most often consumed by a L{cafe.engine.clients} or L{cafe.common.reporting}
@note: Should not be used directly by a test case or process
'''

View File

@ -0,0 +1,51 @@
from unittest2.suite import TestSuite


class BaseParameterizedLoader(object):
    '''
    Instantiate this class with a data generator object (a DataGenerator
    subclass), then use that instance to add your tests the way you would
    add them to a suite, e.g.:

        data_generator = LavaAPIDataGenerator()
        custom_loader = BaseParameterizedLoader(data_generator)
        custom_loader.addTest(TestClass("test-1"))
        custom_loader.addTest(TestClass("test-2"))
        custom_loader.addTest(TestClass("test-3"))
        custom_loader.getSuite()
    '''

    def __init__(self, data_provider):
        self.data_provider = data_provider
        self.tests = []

    def addTest(self, testcase):
        '''
        Adds a test to this loader. Takes a test case object as a parameter.
        See the example above.
        '''
        self.tests.append(testcase)

    def getSuite(self):
        '''
        Returns a test suite that the unittest runner can use.
        A load_tests function can return this.
        '''
        if len(self.tests) != 0:
            # Port all the test record values to instance variables of the
            # test case
            suite = TestSuite()
            for test_record in self.data_provider.generate_test_records():
                for test in self.tests:
                    if list(test_record.keys())[0] in test.__dict__:
                        test_to_be_mod = test.__copy__()
                    else:
                        test_to_be_mod = test
                    for key in test_record.keys():
                        setattr(test_to_be_mod, key, test_record[key])
                    setattr(test_to_be_mod, "test_record", test_record)
                    suite.addTest(test_to_be_mod)
            return suite
        else:
            raise Exception("No tests added to the parameterized loader")

View File

@ -0,0 +1,141 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import unittest2 as unittest
import xml.etree.ElementTree as ET
class ParseResult(object):
def __init__(self, result_dict, master_testsuite, xml_path, execution_time):
for keys, values in result_dict.items():
setattr(self, keys, values)
self.master_testsuite = master_testsuite
self.xml_path = xml_path
self.execution_time = execution_time
def get_passed_tests(self):
all_tests = []
actual_number_of_tests_run = []
failed_tests = []
skipped_tests = []
errored_tests = []
setup_errored_classes = []
setup_errored_tests = []
passed_obj_list = []
for item in vars(self.master_testsuite).get('_tests'):
all_tests.append(vars(item).get('_tests')[0])
for failed_test in self.failures:
failed_tests.append(failed_test[0])
for skipped_test in self.skipped:
skipped_tests.append(skipped_test[0])
for errored_test in self.errors:
if errored_test[0].__class__.__name__ != '_ErrorHolder':
errored_tests.append(errored_test[0])
else:
setup_errored_classes.append(str(errored_test[0]).split(".")[-1].rstrip(')'))
if len(setup_errored_classes) != 0:
for item_1 in all_tests:
for item_2 in setup_errored_classes:
if item_2 == item_1.__class__.__name__:
setup_errored_tests.append(item_1)
else:
actual_number_of_tests_run = all_tests
passed_set = (set(all_tests) - set(failed_tests) - set(skipped_tests) -
set(errored_tests) - set(setup_errored_tests))
for passed_test in passed_set:
passed_obj = Result(passed_test.__class__.__name__, vars(passed_test).get('_testMethodName'))
passed_obj_list.append(passed_obj)
return passed_obj_list
def get_skipped_tests(self):
skipped_obj_list = []
for item in self.skipped:
skipped_obj = Result(item[0].__class__.__name__, vars(item[0]).get('_testMethodName'), skipped_msg=item[1])
skipped_obj_list.append(skipped_obj)
return skipped_obj_list
def get_errored_tests(self):
errored_obj_list = []
for item in self.errors:
if item[0].__class__.__name__ != '_ErrorHolder':
errored_obj = Result(item[0].__class__.__name__, vars(item[0]).get('_testMethodName'), error_trace=item[1])
else:
errored_obj = Result(str(item[0]).split(" ")[0], str(item[0]).split(".")[-1].rstrip(')'), error_trace=item[1])
errored_obj_list.append(errored_obj)
return errored_obj_list
def parse_failures(self):
failure_obj_list = []
for failure in self.failures:
failure_obj = Result(failure[0].__class__.__name__, vars(failure[0]).get('_testMethodName'), failure[1])
failure_obj_list.append(failure_obj)
return failure_obj_list
def summary_result(self):
summary_res = {'tests': str(self.testsRun),
'errors': str(len(self.errors)),
'failures': str(len(self.failures)),
'skipped': str(len(self.skipped))}
return summary_res
def generate_xml_report(self):
executed_tests = (self.get_passed_tests() + self.parse_failures() +
self.get_errored_tests() + self.get_skipped_tests())
summary_result = self.summary_result()
root = ET.Element("testsuite")
root.attrib['name'] = ''
root.attrib['tests'] = str(len(vars(self.master_testsuite).get('_tests')))
root.attrib['errors'] = summary_result['errors']
root.attrib['failures'] = summary_result['failures']
root.attrib['skips'] = summary_result['skipped']
root.attrib['time'] = str(self.execution_time)
for testcase in executed_tests:
testcase_tag = ET.SubElement(root, 'testcase')
testcase_tag.attrib['classname'] = testcase.test_class_name
testcase_tag.attrib['name'] = testcase.test_method_name
if testcase.failure_trace is not None:
testcase_tag.attrib['result'] = "FAILED"
error_tag = ET.SubElement(testcase_tag, 'failure')
error_tag.attrib['type'] = testcase.failure_trace.split(":")[1].split()[-1]
error_tag.attrib['message'] = testcase.failure_trace.split(":")[-1].strip()
error_tag.text = testcase.failure_trace
else:
if testcase.skipped_msg is not None:
skipped_tag = ET.SubElement(testcase_tag, 'skipped')
testcase_tag.attrib['result'] = "SKIPPED"
skipped_tag.attrib['message'] = testcase.skipped_msg.strip()
elif testcase.error_trace is not None:
testcase_tag.attrib['result'] = "ERROR"
error_tag = ET.SubElement(testcase_tag, 'error')
error_tag.attrib['type'] = testcase.error_trace.split(":")[1].split()[-1]
error_tag.attrib['message'] = testcase.error_trace.split(":")[-1].strip()
error_tag.text = testcase.error_trace
else:
testcase_tag.attrib['result'] = "PASSED"
with open(self.xml_path + "/cc_result.xml", 'wb') as result_file:
ET.ElementTree(root).write(result_file)
class Result(object):
def __init__(self, test_class_name, test_method_name, failure_trace=None, skipped_msg=None, error_trace=None):
self.test_class_name = test_class_name
self.test_method_name = test_method_name
self.failure_trace = failure_trace
self.skipped_msg = skipped_msg
self.error_trace = error_trace
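The XML layout that generate_xml_report emits can be sketched directly with ElementTree; the class and method names below are hypothetical sample values:

```python
import xml.etree.ElementTree as ET

# build a testsuite element shaped like generate_xml_report's output
root = ET.Element("testsuite")
root.attrib.update({"name": "", "tests": "2", "errors": "0",
                    "failures": "1", "skips": "0", "time": "0.1"})

passed = ET.SubElement(root, "testcase")
passed.attrib.update({"classname": "DemoTest", "name": "test_ok",
                      "result": "PASSED"})

failed = ET.SubElement(root, "testcase")
failed.attrib.update({"classname": "DemoTest", "name": "test_bad",
                      "result": "FAILED"})
failure = ET.SubElement(failed, "failure")
failure.attrib.update({"type": "AssertionError", "message": "1 != 2"})
failure.text = "Traceback ...\nAssertionError: 1 != 2"

xml_bytes = ET.tostring(root)
```

Failures and errors become child elements of their testcase, while the summary counts live as attributes on the testsuite root.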

View File

@ -0,0 +1,990 @@
#!/usr/bin/env python
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import imp
import sys
import time
import fnmatch
import inspect
import logging
import argparse
import platform
import threading
import traceback
import unittest2 as unittest
from datetime import datetime
from cafe.drivers.unittest.parsers import ParseResult
'''@todo: This needs to be configurable/dealt with by the install '''
import test_repo
# Default Config Options
if platform.system().lower() == 'windows':
DIR_SEPR = '\\'
else:
DIR_SEPR = '/'
BASE_DIR = "{0}{1}.cloudcafe".format(os.path.expanduser("~"), DIR_SEPR)
DATA_DIR = os.path.expanduser('{0}{1}data'.format(BASE_DIR, DIR_SEPR))
LOG_BASE_PATH = os.path.expanduser('{0}{1}logs'.format(BASE_DIR, DIR_SEPR))
YELLOW = '\033[1;33m'
GREEN = '\033[1;32m'
WHITE = '\033[1;37m'
RED = '\033[0;31m'
HIGHLIGHTED_RED = '\033[1;41m'
END = '\033[1;m'
class _WritelnDecorator:
"""Used to decorate file-like objects with a handy 'writeln' method"""
def __init__(self,stream):
self.stream = stream
def __getattr__(self, attr):
return getattr(self.stream,attr)
def writeln(self, arg=None):
if arg: self.write(arg)
self.write('\n')
class CCParallelTextTestRunner(unittest.TextTestRunner):
def __init__(self, stream=sys.stderr, descriptions=1, verbosity=1):
self.stream = _WritelnDecorator(stream)
self.descriptions = descriptions
self.verbosity = verbosity
def run(self, test):
"Run the given test case or test suite."
result = self._makeResult()
startTime = time.time()
test(result)
stopTime = time.time()
timeTaken = stopTime - startTime
result.printErrors()
return result
class CCRunner(object):
'''
Cloud Cafe Runner
'''
def __init__(self):
self.log = logging.getLogger('RunnerLog')
def get_cl_args(self):
'''
collects and parses the command line args
creates the runner's help msg
'''
export_path = os.path.join(BASE_DIR, 'lib')
help1 = ' '.join(['runner.py', '<product>', '<config>', '-m',
'<module pattern>', '-M', '<test method>', '-t',
'[tag tag...]'])
help2 = ' '.join(['runner.py', '<product>', '<config>', '-p',
'[package package...]', '-M',
'<method name pattern>'])
help3 = ' '.join(['runner.py', '<product>', '<config>', '-p',
'[package package...]', '-M',
'<method name pattern>', '-t', '[tag tag...]'])
help4 = ' '.join(['runner.py', '<product>', '<config>', '-p',
'[package package...]', '-m', '<module pattern>',
'-M', '<test method>', '-t', '[tag tag...]'])
usage_string = """
*run all the tests for a product
runner.py <product> <config>
*run all the tests for a product with matching module name pattern
runner.py <product> <config> -m <module pattern>
*run all the tests for a product with matching the method name pattern
runner.py <product> <config> -M <method name pattern>
*run all the tests for a product with matching tag(s)
runner.py <product> <config> -t [tag tag...]
*run all the tests for a product with matching the method name pattern
and matching tag(s)
runner.py <product> <config> -M <method name pattern> -t [tag tag...]
*run all the tests for a product with matching module name pattern,
method name pattern and matching tag(s)
%s
**run all modules in a package(s) for a product
runner.py <product> <config> -p [package package...]
**run all modules in a package(s) for a product matching a name pattern
runner.py <product> <config> -p [package package...] -m <module pattern>
**run all modules in a package(s) for a product with matching the method
name pattern
%s
**run all modules in a package(s) for a product with matching tag(s)
runner.py <product> <config> -p [package package...] -t [tag tag...]
**run all modules in a package(s) for a product with matching the method
name pattern and matching tag(s)
%s
**run all modules in a package(s) for a product with matching module
name pattern, method name pattern and matching tag(s)
%s
*list tests for a product
runner.py <product> -l <tests>
*list configs for a product
runner.py <product> -l <configs>
*list tests and configs for a product
runner.py <product> -l <tests> <configs>
notes:
SET YOUR PYTHON PATH!
export PYTHONPATH=$PYTHONPATH:%s
TAGS:
format: -t [+] tag1 tag2 tag3 key1=value key2=value
By default, tests that match any of the tags will be returned.
Sending a '+' as the first character in the tag list will
return only the tests that match all of the tags.
the config file and optional module name given on the command
line should not include their .config and .py extensions.
by default, if no packages are specified, all tests under the
product's test repo folder will be run.
the runner will search under the product's test repo folder for
the package, so full dotted package paths are not necessary on
the command line.
if a module is specified, all modules matching the name pattern
will be run under the product's test repo folder or packages (if
specified).
""" % (help1, help2, help3, help4, export_path)
desc = "Cloud Common Automated Engine Framework"
argparser = argparse.ArgumentParser(usage=usage_string,
description=desc)
argparser.add_argument('product',
metavar='<product>',
help='product name')
argparser.add_argument('config',
nargs='?',
default=None,
metavar='<config_file>',
help='product test config')
argparser.add_argument('-q', '--quiet',
default=2,
action='store_const',
const=1,
help="quiet")
argparser.add_argument('-f', '--fail-fast',
default=False,
action='store_true',
help="fail fast")
argparser.add_argument('-s', '--supress-load-tests',
default=False,
action='store_true',
dest='supress_flag',
help="suppress the load_tests method")
argparser.add_argument('-p', '--packages',
nargs='*',
default=None,
metavar='[package(s)]',
help="test package(s) in the product's "
"test repo")
argparser.add_argument('-m', '--module',
default=None,
metavar='<module>',
help="test module regex - defaults to '*.py'")
argparser.add_argument('-M', '--method-regex',
default=None,
metavar='<method>',
help="test method regex defaults to 'test_'")
argparser.add_argument('-t', '--tags',
nargs='*',
default=None,
metavar='tags',
help="tags")
argparser.add_argument('-l', '--list',
nargs='+',
choices=['tests', 'configs'],
metavar='<tests> <configs>',
help='list tests and or configs')
argparser.add_argument('--generateXML',
help="generates an XML report of the test suite run")
argparser.add_argument('--parallel',
action="store_true",
default=False)
argparser.add_argument('--data-directory',
help="directory for tests to get data from")
args = argparser.parse_args()
return args
def log_results(self, result):
'''
@summary: Replicates the printing functionality of unittest's
runner.run() but log's instead of prints
'''
expected_fails = unexpected_successes = skipped = 0
try:
results = map(len, (result.expectedFailures,
result.unexpectedSuccesses,
result.skipped))
expected_fails, unexpected_successes, skipped = results
except AttributeError:
pass
infos = []
if not result.wasSuccessful():
failed, errored = map(len, (result.failures, result.errors))
if failed:
infos.append("failures=%d" % failed)
if errored:
infos.append("errors=%d" % errored)
self.log_errors('ERROR', result, result.errors)
self.log_errors('FAIL', result, result.failures)
self.log.info("Ran %d Tests" % result.testsRun)
self.log.info('FAILED ')
else:
self.log.info("Ran %d Tests" % result.testsRun)
self.log.info("Passing all tests")
if skipped:
infos.append("skipped=%d" % skipped)
if expected_fails:
infos.append("expected failures=%d" % expected_fails)
if unexpected_successes:
infos.append("unexpected successes=%d" % unexpected_successes)
if infos:
self.log.info(" (%s)\n" % (", ".join(infos),))
else:
self.log.info("\n")
# Write out the log dir at the end so it's easy to find
print(self.colorize('=', WHITE) * 150)
print(self.colorize("Detailed logs: {0}".format(os.getenv("CLOUDCAFE_LOG_PATH")), WHITE))
print(self.colorize('-', WHITE) * 150)
def log_errors(self, label, result, errors):
border1 = ''.join(['\n', '=' * 45, '\n'])
border2 = ''.join(['-' * 45, '\n'])
for test, err in errors:
msg = "%s: %s\n" % (label, result.getDescription(test))
self.log.info(''.join([border1, msg, border2, err]))
def tree(self, directory, padding, print_files=False):
'''
creates an ascii tree for listing files or configs
'''
files = []
#dir_token = '+-'
#file_token = '>'
print self.colorize(''.join([padding[:-1], '+-']), WHITE),
print self.colorize(os.path.basename(os.path.abspath(directory)), RED),
print self.colorize('/', WHITE)
padding = ''.join([padding, ' '])
if print_files:
try:
files = os.listdir(directory)
except OSError:
print self.colorize('Config directory: {0} Does Not Exist'.format(directory), HIGHLIGHTED_RED)
else:
files = [x for x in os.listdir(directory) if
os.path.isdir(DIR_SEPR.join([directory, x]))]
count = 0
for file_name in files:
count += 1
path = DIR_SEPR.join([directory, file_name])
if os.path.isdir(path):
if count == len(files):
self.tree(path, ''.join([padding, ' ']), print_files)
else:
self.tree(path, ''.join([padding, '|']), print_files)
else:
if file_name.find('.pyc') == -1 and \
file_name != '__init__.py':
print self.colorize(''.join([padding, file_name]), WHITE)
def set_env(self, config_path, log_path, data_dir):
'''
sets environment variables so the tests can find their respective
product config path
'''
os.environ['CCTNG_CONFIG_FILE'] = "{0}{1}configs{1}engine.config".format(BASE_DIR, DIR_SEPR)
os.environ['OSTNG_CONFIG_FILE'] = config_path
os.environ['CLOUDCAFE_LOG_PATH'] = log_path
os.environ['CLOUDCAFE_DATA_DIRECTORY'] = data_dir
print
print self.colorize('=', WHITE) * 150
print(self.colorize("Percolated Configuration", WHITE))
print self.colorize('-', WHITE) * 150
print(self.colorize("CCTNG_CONFIG_FILE.......: {0}{1}configs{1}engine.config".format(BASE_DIR, DIR_SEPR), WHITE))
print(self.colorize("OSTNG_CONFIG_FILE.......: {0}".format(config_path), WHITE))
print(self.colorize("CLOUDCAFE_DATA_DIRECTORY: {0}".format(data_dir), WHITE))
print(self.colorize("CLOUDCAFE_LOG_PATH......: {0}".format(log_path), WHITE))
print self.colorize('=', WHITE) * 150
def get_safe_file_date(self):
'''
@summary: Builds a date stamp that is safe for use in a file path/name
@return: The safely formatted datetime string
@rtype: C{str}
'''
return(str(datetime.now()).replace(' ', '_').replace(':', '_'))
def get_repo_path(self, product):
'''
returns the base string for the test repo directory
'''
repo_path = ''
if product is not None:
repo_path = os.path.join("{0}".format(test_repo.__path__[0]),
product)
return repo_path
def get_config_path(self, parent_path, product, cfg_file_name):
'''
returns the base string for the config path
'''
cfg_path = ''
if product is not None and cfg_file_name is not None:
if cfg_file_name.find('.config') == -1:
cfg_file_name = '.'.join([cfg_file_name, 'config'])
cfg_path = os.path.join(parent_path,
'configs',
product,
cfg_file_name)
return cfg_path
def get_dotted_path(self, path, split_token):
'''
creates a dotted path for use by unittest's loader
'''
try:
position = len(path.split(split_token)) - 1
temp_path = "{0}{1}".format(split_token, path.
split(split_token)[position])
split_path = temp_path.split(DIR_SEPR)
dotted_path = '.'.join(split_path)
except AttributeError:
return None
except Exception:
return None
return dotted_path
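A self-contained sketch of the path-to-dotted-module conversion get_dotted_path performs, assuming '/' separators (the sample path is hypothetical):

```python
def get_dotted_path(path, split_token, sep='/'):
    # keep the path from the last occurrence of split_token onward
    # and replace directory separators with dots, mirroring the
    # method above
    tail = path.split(split_token)[-1]
    return '.'.join((split_token + tail).split(sep))

dotted = get_dotted_path('/home/user/test_repo/compute/servers', 'test_repo')
```

The resulting dotted path is what unittest's loader expects when loading tests by name.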
def find_root(self, path, target):
'''
walks the path searching for the target root folder.
'''
root_path = None
for root, _, _ in os.walk(path):
if os.path.basename(root).find(target) != -1:
root_path = root
break
else:
continue
return root_path
def find_file(self, path, target):
'''
walks the path searching for the target file. the full path to
the target file is returned
'''
file_path = None
for root, _, files in os.walk(path):
for file_name in files:
if file_name.find(target) != -1 \
and file_name.find('.pyc') == -1:
file_path = DIR_SEPR.join([root, file_name])
break
else:
continue
return file_path
def find_subdir(self, path, target):
'''
walks the path searching for the target subdirectory.
the full path to the target subdirectory is returned
'''
subdir_path = None
for root, dirs, _ in os.walk(path):
for dir_name in dirs:
if dir_name.find(target) != -1:
subdir_path = DIR_SEPR.join([root, dir_name])
break
else:
continue
return subdir_path
#this may be used later
def drill_path(self, path, target):
'''
walks the path searching for the last instance of the target path.
'''
return_path = {}
for root, _, _ in os.walk(path):
if os.path.basename(root).find(target) != -1:
return_path[target] = root
return return_path
def colorize(self, msg, color):
'''
colorizes a string
'''
end = '\033[1;m'
colorized_msg = ''.join([color, str(msg), end])
return colorized_msg
def error_msg(self, e_type, e_msg):
'''
creates an error message
'''
err_msg = ' '.join(['<[ WARNING', str(e_type), 'ERROR:', str(e_msg),
']>'])
return err_msg
def load_module(self, module_path):
'''
uses imp to load a module
'''
loaded_module = None
module_name = os.path.basename(module_path)
package_path = os.path.dirname(module_path)
pkg_name = os.path.basename(package_path)
root_path = os.path.dirname(package_path)
if module_name.find('.py') != -1:
module_name = module_name.split('.')[0]
f, p_path, description = imp.find_module(pkg_name, [root_path])
loaded_pkg = imp.load_module(pkg_name, f, p_path, description)
f, m_path, description = imp.find_module(module_name,
loaded_pkg.__path__)
try:
mod = '.'.join([loaded_pkg.__name__, module_name])
loaded_module = imp.load_module(mod, f, m_path, description)
except ImportError:
raise
return loaded_module
def get_class_names(self, loaded_module):
'''
gets all the class names in an imported module
'''
class_names = []
# This has to be imported here as runner sets an environment variable
# That will be required by the BaseTestFixture
from cafe.drivers.unittest.fixtures import BaseTestFixture
for _, obj in inspect.getmembers(loaded_module, inspect.isclass):
temp_obj = obj
try:
while(temp_obj.__base__ != object):
if (temp_obj.__base__ == unittest.TestCase or
temp_obj.__base__ == BaseTestFixture) and \
temp_obj != obj.__base__:
class_names.append(obj.__name__)
break
else:
temp_obj = temp_obj.__base__
except AttributeError:
continue
return class_names
def get_class(self, loaded_module, test_class_name):
to_return = None
try:
to_return = getattr(loaded_module, test_class_name)
except AttributeError, e:
print e
return to_return
def get_modules(self, rootdir, module_regex):
'''
generator yields modules matching the module_regex
'''
for root, _, files in os.walk(rootdir):
for name in files:
if fnmatch.fnmatch(name, module_regex) \
and name.find('init') == -1 \
and name.find('.pyc') == -1:
file_name = name.split('.')[0]
full_path = '/'.join([root, file_name])
yield full_path
def check_attrs(self, method, attrs, attr_keys, token=None):
'''
checks to see if the method passed in has matching key=value
attributes. if a '+' token is passed, only methods that match
all of the attributes will match
'''
truth_values = []
for attr_key in attr_keys:
if attr_key in method.__dict__:
method_val = method.__dict__[attr_key]
attr_val = attrs[attr_key]
truth_values.append(method_val == attr_val)
else:
truth_values.append(False)
# '+' requires every attribute to match (AND); otherwise any match wins (OR)
if token == '+':
return all(truth_values)
return any(truth_values)
def check_tags(self, method, tags, token):
'''
checks to see if the method passed in has matching tags.
if the tags are (foo, bar), this method will match methods tagged
with foo or bar. if a '+' token is passed, only methods that
carry both foo and bar will match
'''
truth_values = [hasattr(method, tag) for tag in tags]
# '+' requires every tag to match (AND); otherwise any match wins (OR)
if token == '+':
return all(truth_values)
return any(truth_values)
def _parse_tags(self, tags):
'''
tags arrive from the command line as a list of strings.
returns a list of plain tags, a dict of key=value attributes,
and the '+' token if it is present.
'''
token = None
tag_list = []
attrs = {}
if tags[0] == '+':
token = tags[0]
tags = tags[1:]
for tag in tags:
tokens = tag.split('=')
if len(tokens) > 1:
attrs[tokens[0]] = tokens[1]
else:
tag_list[len(tag_list):] = [tag]
return tag_list, attrs, token
def build_suite(self, loaded_module, method_regex, cl_tags, supress_flag):
'''
loads the found tests and builds the test suite
'''
tag_list = []
attrs = {}
loader = unittest.TestLoader()
suite = unittest.TestSuite()
class_names = self.get_class_names(loaded_module)
module_path = os.path.dirname(loaded_module.__file__)
module_name = loaded_module.__name__.split('.')[1]
# base_dotted_path = self.get_dotted_path(module_path, test_repo.__path__[0])
base_dotted_path = self.get_dotted_path(module_path, test_repo.__name__)
if cl_tags is not None:
tag_list, attrs, token = self._parse_tags(cl_tags)
attr_keys = attrs.keys()
a_len = len(attr_keys)
t_len = len(tag_list)
if hasattr(loaded_module, 'load_tests') and \
supress_flag is False and \
method_regex == 'test_*' and cl_tags is None:
load_tests = getattr(loaded_module, 'load_tests')
suite.addTests(load_tests(loader, None, None))
return suite
for test_class_name in class_names:
class_ = self.get_class(loaded_module, test_class_name)
for method_name in dir(class_):
load_test_flag = False
attr_flag = False
tag_flag = False
if fnmatch.fnmatch(method_name, method_regex):
if cl_tags is None:
load_test_flag = True
else:
method = getattr(class_, method_name)
if dict(method.__dict__):
if t_len != 0 and a_len == 0:
tag_flag = self.check_tags(method,
tag_list,
token)
load_test_flag = tag_flag
elif t_len == 0 and a_len != 0:
attr_flag = self.check_attrs(method,
attrs,
attr_keys,
token)
load_test_flag = attr_flag
elif t_len != 0 and a_len != 0:
tag_flag = self.check_tags(method,
tag_list,
token)
attr_flag = self.check_attrs(method,
attrs,
attr_keys,
token)
load_test_flag = attr_flag and tag_flag
else:
continue
if load_test_flag is True:
try:
dotted_path = '.'.join([base_dotted_path,
module_name,
test_class_name,
method_name])
suite.addTest(loader.loadTestsFromName(
dotted_path))
except ImportError:
raise
except AttributeError:
raise
except Exception:
raise
return suite
def print_traceback(self):
'''
formats and prints out a minimal stack trace
'''
info = sys.exc_info()
excp_type, excp_value = info[:2]
err_msg = self.error_msg(excp_type.__name__,
excp_value)
print self.colorize(err_msg, HIGHLIGHTED_RED)
for file_name, lineno, function, text in \
traceback.extract_tb(info[2]):
print ">>>", file_name
print " > line", lineno, "in", function, \
repr(text)
print "-" * 100
def run(self):
'''
loops through all the packages, modules, and methods sent in from
the command line and runs them
'''
test_classes = []
cl_args = self.get_cl_args()
module_regex = None
if not os.path.exists(BASE_DIR):
err_msg = self.error_msg('Path', "{0} does not exist - Exiting".
format(BASE_DIR))
print self.colorize(err_msg, HIGHLIGHTED_RED)
exit(1)
if cl_args.module is None:
module_regex = '*.py'
else:
if cl_args.module.find('.py') != -1:
module_regex = cl_args.module
else:
module_regex = '.'.join([cl_args.module, 'py'])
if cl_args.method_regex is None:
method_regex = 'test_*'
else:
if cl_args.method_regex.find('test_') != -1:
method_regex = cl_args.method_regex
else:
method_regex = ''.join(['test_', cl_args.method_regex])
parent_path = BASE_DIR
config_path = self.get_config_path(parent_path,
cl_args.product,
cl_args.config)
repo_path = self.get_repo_path(cl_args.product)
if os.path.exists(repo_path) is False:
err_msg = self.error_msg('Repo', ' '.join([cl_args.product,
repo_path,
'does not exist - Exiting']))
print self.colorize(err_msg, HIGHLIGHTED_RED)
exit(1)
if cl_args.list is not None:
for arg in cl_args.list:
if arg == 'tests':
banner = ''.join(['\n', '<[TEST REPO]>', '\n'])
path = repo_path
else:
banner = ''.join(['\n', '<[CONFIGS]>', '\n'])
path = os.path.join(parent_path, 'configs', cl_args.product)
print self.colorize(banner, WHITE)
self.tree(path, ' ', print_files=True)
else:
suite = unittest.TestSuite()
master_suite = unittest.TestSuite()
#Use the parallel runner if needed so the console logs look correct
if cl_args.parallel:
test_runner = CCParallelTextTestRunner(verbosity=cl_args.quiet)
else:
test_runner = unittest.TextTestRunner(verbosity=cl_args.quiet)
test_runner.failfast = cl_args.fail_fast
#-----------------------Debug Logger-----------------------------
#this is for the debug logger. it's broken right now
#test_runner = LogCaptureRunner(verbosity=cl_args.quiet)
#-----------------------Debug Logger-----------------------------
try:
stats_log_path = '/'.join(
[LOG_BASE_PATH,
cl_args.product,
cl_args.config,
"statistics"])
product_log_path = '/'.join(
[LOG_BASE_PATH,
cl_args.product,
cl_args.config,
self.get_safe_file_date()])
except TypeError:
print 'Config was not set on command line - Exiting'
exit(1)
if os.path.isdir(stats_log_path) is not True:
os.makedirs(stats_log_path)
if os.path.isdir(product_log_path) is not True:
os.makedirs(product_log_path)
#Get and then ensure the existence of the cc data directory
data_dir = None
user_data_dir = getattr(cl_args, 'data_directory', None)
if user_data_dir is not None:
#Quit if the data directory is user-defined and doesn't exist,
#otherwise it uses the user defined data dir
user_data_dir = os.path.expanduser(user_data_dir)
if os.path.isdir(user_data_dir):
data_dir = user_data_dir
else:
print "Data directory '{0}' does not exist. Exiting."\
.format(user_data_dir)
exit(1)
else:
#Make and use the default directory if it doesn't exist
'''
@TODO: Make this create a sub-directory based on the product
name like the log_dir does (minus timestamps and config
file name)
'''
data_dir = DATA_DIR
if not os.path.isdir(data_dir):
os.makedirs(data_dir)
self.set_env(config_path, product_log_path, data_dir)
if cl_args.packages is None:
for module in self.get_modules(repo_path, module_regex):
try:
loaded_module = self.load_module(module)
except ImportError:
self.print_traceback()
continue
try:
suite = self.build_suite(loaded_module,
method_regex,
cl_args.tags,
cl_args.supress_flag)
master_suite.addTests(suite)
test_classes.append(suite)
except ImportError:
self.print_traceback()
continue
except AttributeError:
self.print_traceback()
continue
except Exception:
self.print_traceback()
continue
else:
for package_name in cl_args.packages:
test_path = self.find_subdir(repo_path, package_name)
if test_path is None:
err_msg = self.error_msg('Package', package_name)
print self.colorize(err_msg, HIGHLIGHTED_RED)
continue
for module_path in self.get_modules(test_path,
module_regex):
try:
loaded_module = self.load_module(module_path)
except ImportError:
self.print_traceback()
continue
try:
suite = self.build_suite(loaded_module,
method_regex,
cl_args.tags,
cl_args.supress_flag)
master_suite.addTests(suite)
test_classes.append(suite)
except ImportError:
self.print_traceback()
continue
except AttributeError:
self.print_traceback()
continue
except Exception:
self.print_traceback()
continue
if cl_args.parallel:
unittest.installHandler()
threads = []
results = []
start = time.time()
for test in test_classes:
t = ThreadedRunner(test_runner, test, results)
t.start()
threads.append(t)
for t in threads:
t.join()
finish = time.time()
print '=' * 71
print 'Tests Complete.'
print '=' * 71
run = 0
errors = 0
failures = 0
for result in results:
run += result.testsRun
errors += len(result.errors)
failures += len(result.failures)
print ("Ran %d test%s in %.3fs" % (run, run != 1 and "s" or "", finish - start))
if failures:
print("Failures=%d" % failures)
if errors:
print("Errors=%d" % errors)
if failures or errors:
exit(1)
else:
unittest.installHandler()
start_time = time.time()
result = test_runner.run(master_suite)
total_execution_time = time.time() - start_time
if cl_args.generateXML is not None:
xml_path = ''.join([parent_path, cl_args.generateXML])
parse_res = ParseResult(vars(result), master_suite, xml_path,
total_execution_time)
parse_res.generate_xml_report()
self.log_results(result)
if not result.wasSuccessful():
exit(1)
class ThreadedRunner(threading.Thread):
def __init__(self, runner, test, results):
super(ThreadedRunner, self).__init__()
self.runner = runner
self.test = test
self.results = results
def run(self):
self.results.append(self.runner.run(self.test))
def entry_point():
print('\n'.join(["\t\t ( (",
"\t\t ) )",
"\t\t ......... ",
"\t\t | |___ ",
"\t\t | |_ |",
"\t\t | :-) |_| |",
"\t\t | |___|",
"\t\t |_______|",
"\t\t === CAFE Runner ==="]))
print("\t\t--------------------------------------------------------")
print("\t\tBrewing from {0}".format(BASE_DIR))
print("\t\t--------------------------------------------------------")
print
runner = CCRunner()
runner.run()
exit(0)
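The -t tag handling described in the runner's usage notes (strip an optional leading '+', split key=value attributes from plain tags, then match with OR or AND semantics) can be sketched as follows; the tag names are hypothetical:

```python
def parse_tags(tags):
    # mirrors CCRunner._parse_tags: detect the leading '+' (AND) token
    # and split plain tags from key=value attributes
    token = None
    if tags and tags[0] == '+':
        token, tags = tags[0], tags[1:]
    tag_list, attrs = [], {}
    for tag in tags:
        parts = tag.split('=')
        if len(parts) > 1:
            attrs[parts[0]] = parts[1]
        else:
            tag_list.append(tag)
    return tag_list, attrs, token

def tags_match(method_tags, wanted, token):
    # OR semantics by default; '+' requires every tag to be present
    hits = [tag in method_tags for tag in wanted]
    return all(hits) if token == '+' else any(hits)

tag_list, attrs, token = parse_tags(['+', 'smoke', 'speed=quick'])
```

Plain tags gate on presence, while key=value pairs additionally compare the stored attribute value.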

15
cafe/engine/__init__.py Normal file
View File

@ -0,0 +1,15 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

76
cafe/engine/behaviors.py Normal file
View File

@ -0,0 +1,76 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import decorator
from cafe.common.reporting import cclogging
class RequiredClientNotDefinedError(Exception):
"""Raised when a behavior method call can't find a required client """
pass
def behavior(*required_clients):
'''Decorator that tags a method as a behavior, and optionally adds
required client objects to an internal attribute. Causes calls to this
method to throw RequiredClientNotDefinedError exception if the containing
class does not have the proper client instances defined.
'''
#@decorator.decorator
def _decorator(func):
#Unused for now
setattr(func, '__is_behavior__', True)
setattr(func, '__required_clients__', [])
for client in required_clients:
func.__required_clients__.append(client)
def _wrap(self, *args, **kwargs):
available_attributes = vars(self)
missing_clients = []
for required_client in required_clients:
    required_client_found = False
    for attr in available_attributes:
        attribute = getattr(self, attr, None)
        if isinstance(attribute, required_client):
            required_client_found = True
            break
    if not required_client_found:
        missing_clients.append(required_client)
if missing_clients:
msg_plurality = ("an instance" if len(missing_clients) <= 1
else "instances")
msg = ("Behavior {0} expected {1} of {2} but couldn't"
" find one".format(
func, msg_plurality, missing_clients))
raise RequiredClientNotDefinedError(msg)
return func(self, *args, **kwargs)
return _wrap
return _decorator
class BaseBehavior(object):
def __init__(self):
self._log = cclogging.getLogger(
cclogging.get_object_namespace(self.__class__))
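The required-clients check above can be exercised in a minimal, self-contained sketch of the same pattern (the `HttpClient` and `ServerBehaviors` classes here are hypothetical stand-ins, not CAFE classes):

```python
class RequiredClientNotDefinedError(Exception):
    pass


def behavior(*required_clients):
    def _decorator(func):
        func.__is_behavior__ = True
        func.__required_clients__ = list(required_clients)

        def _wrap(self, *args, **kwargs):
            # Collect every required client type that has no matching
            # instance among this object's attributes
            missing = [client for client in required_clients
                       if not any(isinstance(value, client)
                                  for value in vars(self).values())]
            if missing:
                raise RequiredClientNotDefinedError(
                    "Behavior {0} is missing clients: {1}".format(
                        func.__name__, missing))
            return func(self, *args, **kwargs)
        return _wrap
    return _decorator


class HttpClient(object):
    pass


class ServerBehaviors(object):
    def __init__(self, client=None):
        if client is not None:
            self.client = client

    @behavior(HttpClient)
    def create_server(self):
        return "created"
```

Calling `ServerBehaviors(HttpClient()).create_server()` succeeds, while `ServerBehaviors().create_server()` raises `RequiredClientNotDefinedError` before the method body runs.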

View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,21 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from cafe.common.reporting import cclogging
class BaseClient(object):
_log = cclogging.getLogger(__name__)

View File

@ -0,0 +1,158 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
'''Provides low level connectivity to the commandline via popen()
@note: Primarily intended to serve as base classes for a specific
command line client Class
'''
import os
import sys
import subprocess
from cafe.engine.models.commandline_response import CommandLineResponse
from cafe.engine.clients.base import BaseClient
class BaseCommandLineClient(BaseClient):
'''Wrapper for driving/parsing a command line program
@ivar base_command: This process's base command string (e.g. 'ls', 'pwd')
@type base_command: C{str}
@note: This class is dependent on a local installation of the wrapped
client process. The thing you run has to be there!
'''
def __init__(self, base_command=None, env_var_dict=None):
'''
@param base_command: This process's base command string
    (e.g. 'ls', 'pwd')
@type base_command: C{str}
'''
super(BaseCommandLineClient, self).__init__()
self.base_command = base_command
self.env_var_dict = env_var_dict or {}
self.set_environment_variables(self.env_var_dict)
def set_environment_variables(self, env_var_dict=None):
'''Sets all os environment variables provided in env_var_dict'''
for key, value in env_var_dict.items():
self._log.debug('setting {0}={1}'.format(key, value))
os.putenv(str(key), str(value))
def unset_environment_variables(self, env_var_list=None):
'''Unsets all os environment variables provided in env_var_dict
by default.
If env_var_list is passed, attempts to unset all environment vars in
list'''
env_var_list = env_var_list or self.env_var_dict.keys() or []
for key in env_var_list:
self._log.debug('unsetting {0}'.format(key))
os.unsetenv(str(key))
def run_command(self, cmd, *args):
'''Sends a command directly to this instance's command line
@param cmd: Command to send to the command line
@type cmd: C{str}
@param args: Optional list of args to be passed with the command
@type args: C{list}
@raise exception: If unable to close process after running the command
@return: The full response details from the command line
@rtype: L{CommandLineResponse}
@note: Can be overridden in a child class
'''
os_process = None
os_response = CommandLineResponse()
#Process command we received
os_response.command = "{0} {1}".format(self.base_command, cmd)
if args and args[0]:
for arg in args[0]:
os_response.command += " {0}".format(arg)
"""@TODO: Turn this into a decorator like the rest client"""
try:
logline = '\n'.join([
    '\n{0}\nCOMMAND LINE REQUEST\n{0}'.format('-' * 4),
    "args..........: {0}".format(args),
    "command.......: {0}".format(os_response.command)])
except Exception as exception:
self._log.exception(exception)
try:
self._log.debug(logline.decode('utf-8', 'replace'))
except Exception as exception:
#Ignore all exceptions that happen in logging, then log them
self._log.debug('\n{0}\nCOMMAND LINE REQUEST INFO\n{0}\n'.format(
'-' * 12))
self._log.exception(exception)
#Run the command
try:
os_process = subprocess.Popen(os_response.command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
shell=True)
except OSError as os_error:
    self._log.exception(
        "Exception running commandline command {0}\n{1}".format(
            str(os_response.command), str(os_error)))
    raise
#Wait for the process to complete and then read the lines.
#for some reason if you read each line as the process is running
#and use os_process.Poll() you don't always get all output
std_out, std_err = os_process.communicate()
os_response.return_code = os_process.returncode
#Pass the full output of the process_command back. It is important to
#not parse, strip or otherwise massage this output in the private send
#since a child class could override and contain actual command
#processing logic.
os_response.standard_out = str(std_out).splitlines()
if std_err is not None:
os_response.standard_error = str(std_err).splitlines()
"""@TODO: Turn this into a decorator like in the rest client"""
try:
logline = '\n'.join([
    '\n{0}\nCOMMAND LINE RESPONSE\n{0}'.format('-' * 4),
    "standard out...: {0}".format(os_response.standard_out),
    "standard error.: {0}".format(os_response.standard_error),
    "return code....: {0}".format(os_response.return_code)])
except Exception as exception:
self._log.exception(exception)
try:
self._log.debug(logline.decode('utf-8', 'replace'))
except Exception as exception:
#Ignore all exceptions that happen in logging, then log them
self._log.debug('\n{0}\nCOMMAND LINE RESPONSE INFO\n{0}\n'.format(
'-' * 12))
self._log.exception(exception)
#Clean up the process to avoid any leakage/wonkiness with stdout/stderr
try:
os_process.kill()
except OSError:
#An OS Error is valid if the process has exited. We only
#need to be concerned about other exceptions
sys.exc_clear()
except Exception as kill_exception:
    raise Exception(
        "Exception forcing {0} process to close: {1}".format(
            self.base_command, kill_exception))
finally:
del os_process
return os_response
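The core of `run_command` above — launch via `Popen` with `shell=True`, wait with `communicate()`, split the captured output into lines — can be sketched on its own. The `run` helper below is illustrative, not part of the engine, and assumes a POSIX shell with `echo` available:

```python
import subprocess


def run(command):
    """Run a shell command; return (return_code, stdout_lines)."""
    process = subprocess.Popen(command,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT,
                               shell=True)
    # communicate() waits for exit and drains the pipes in one call,
    # avoiding the missed-output problem of poll()-and-read loops
    stdout, _ = process.communicate()
    return process.returncode, stdout.decode().splitlines()


return_code, lines = run("echo hello && echo world")
```

`communicate()` is exactly why the class above waits for process completion instead of polling: reading line-by-line while the process runs can drop trailing output.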

View File

@ -0,0 +1,64 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import subprocess
import re
from cloudcafe.common.constants import InstanceClientConstants
class PingClient(object):
"""
@summary: Client to ping windows or linux servers
"""
@classmethod
def ping(cls, ip, ip_address_version_for_ssh):
"""
@summary: Ping a server with a IP
@param ip: IP address to ping
@type ip: string
@return: True if the server was reachable, False otherwise
@rtype: bool
"""
'''
Porting only Linux OS
'''
ping_command = InstanceClientConstants.PING_IPV6_COMMAND_LINUX if ip_address_version_for_ssh == 6 else InstanceClientConstants.PING_IPV4_COMMAND_LINUX
command = ping_command + ip
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
process.wait()
try:
    packet_loss_percent = re.search(
        InstanceClientConstants.PING_PACKET_LOSS_REGEX,
        process.stdout.read()).group(1)
except AttributeError:
    # No packet loss summary found in the output; treat as unreachable
    return False
return packet_loss_percent != '100'
@classmethod
def ping_using_remote_machine(cls, remote_client, ping_ip_address):
"""
@summary: Ping a server using a remote machine
@param remote_client: Client to remote machine
@param ip: IP address to ping
@type ip: string
@return: True if the server was reachable, False otherwise
@rtype: bool
"""
command = InstanceClientConstants.PING_IPV4_COMMAND_LINUX
ping_response = remote_client.exec_command(command + ping_ip_address)
packet_loss_percent = re.search(InstanceClientConstants.PING_PACKET_LOSS_REGEX, ping_response).group(1)
return packet_loss_percent != '100'
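The reachability decision reduces to pulling the packet-loss percentage out of ping's summary line. A standalone sketch against canned ping output, using the same pattern as the `PING_PACKET_LOSS_REGEX` constant:

```python
import re

# Same pattern as InstanceClientConstants.PING_PACKET_LOSS_REGEX
PACKET_LOSS_REGEX = r'(\d{1,3})\.?\d*% packet loss'


def is_reachable(ping_output):
    """Return True unless the output reports 100% packet loss."""
    match = re.search(PACKET_LOSS_REGEX, ping_output)
    if match is None:
        # No summary line found; treat the host as unreachable
        return False
    return match.group(1) != '100'


reachable = is_reachable(
    "3 packets transmitted, 3 received, 0% packet loss, time 2003ms")
unreachable = is_reachable(
    "3 packets transmitted, 0 received, 100% packet loss, time 2014ms")
```

Note the first capture group keeps only the integer part of the percentage, so any loss below 100% (including fractional values like `99.5%`) counts as reachable.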

View File

@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

View File

@ -0,0 +1,156 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from cafe.common.reporting import cclogging
from cafe.engine.clients.remote_instance.linux.linux_instance_client import LinuxClient
from cafe.engine.clients.remote_instance.windows.windows_instance_client import WindowsClient
class InstanceClientFactory(object):
"""
@summary: Factory class which will create appropriate utility object
based on the operating system of the server.
"""
clientList = {'windows': 'WindowsClient', 'linux': 'LinuxClient', 'gentoo': 'LinuxClient',
'arch': 'LinuxClient', 'freebsd': 'FreeBSDClient'}
@classmethod
def get_instance_client(cls, ip_address, username, password, os_distro, server_id):
"""
@summary: Returns utility class based on the OS type of server
@param ip_address: IP Address of the server
@type ip_address: string
@param password: The administrator user password
@type password: string
@param username: The administrator user name
@type username: string
@return: Utility class based on the OS type of server
@rtype: LinuxClient or WindowsClient
"""
# Fall back to the generic Linux client for unrecognized distros
instance_client_name = cls.clientList.get(os_distro.lower(),
                                          'LinuxClient')
instanceClient = globals().get(instance_client_name)
return instanceClient(ip_address=ip_address, username=username,
password=password, os_distro=os_distro,
server_id=server_id)
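The string-keyed lookup in `get_instance_client` can also be written as a direct map from distro name to class object, which avoids resolving class names at call time. A self-contained sketch with hypothetical stand-in client classes:

```python
class LinuxClient(object):
    def __init__(self, ip_address):
        self.ip_address = ip_address


class WindowsClient(LinuxClient):
    pass


class FreeBSDClient(LinuxClient):
    pass


# Distro names map straight to classes, so no name resolution
# (eval/globals lookups) is needed at call time
CLIENT_CLASSES = {'windows': WindowsClient, 'linux': LinuxClient,
                  'gentoo': LinuxClient, 'arch': LinuxClient,
                  'freebsd': FreeBSDClient}


def get_instance_client(os_distro, ip_address):
    # Unknown distros fall back to the generic Linux client
    client_class = CLIENT_CLASSES.get(os_distro.lower(), LinuxClient)
    return client_class(ip_address)


client = get_instance_client('FreeBSD', '10.0.0.1')
```

A dict of classes also fails loudly at import time if a client class is missing, rather than at the first lookup.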
class InstanceClient(object):
"""
@summary: Wrapper class around different operating system utilities.
"""
def __init__(self, ip_address, password, os_distro, username=None,
             server_id=None):
    self._client = InstanceClientFactory.get_instance_client(
        ip_address=ip_address, username=username, password=password,
        os_distro=os_distro, server_id=server_id)
    self.client_log = cclogging.getLogger(
        cclogging.get_object_namespace(self.__class__))
def can_authenticate(self):
"""
@summary: Checks if you can authenticate to the server
@return: True if you can connect, False otherwise
@rtype: bool
"""
return self._client.test_connection_auth()
def get_hostname(self):
"""
@summary: Gets the host name of the server
@return: The host name of the server
@rtype: string
"""
return self._client.get_hostname()
def get_files(self, path):
"""
@summary: Gets the list of filenames from the path
@param path: Path from where to get the filenames
@type path: string
@return: List of filenames
@rtype: List of strings
"""
return self._client.get_files(path)
def get_ram_size_in_mb(self):
"""
@summary: Returns the RAM size in MB
@return: The RAM size in MB
@rtype: string
"""
return self._client.get_ram_size_in_mb()
def get_disk_size_in_gb(self):
"""
@summary: Returns the disk size in GB
@return: The disk size in GB
@rtype: int
"""
return self._client.get_disk_size_in_gb()
def get_number_of_vcpus(self):
"""
@summary: Get the number of vcpus assigned to the server
@return: The number of vcpus assigned to the server
@rtype: int
"""
return self._client.get_number_of_vcpus()
def get_partitions(self):
"""
@summary: Returns the contents of /proc/partitions
@return: The partitions attached to the instance
@rtype: string
"""
return self._client.get_partitions()
def get_uptime(self):
    """
    @summary: Get the uptime of the server in seconds
    @return: The uptime of the server
    @rtype: float
    """
    return self._client.get_uptime()
def create_file(self, filedetails):
    '''
    @summary: Create a new file
    @param filedetails: File details such as content, name
    @type filedetails: FileDetails
    '''
    return self._client.create_file(filedetails)
def get_file_details(self, filepath):
"""
@summary: Get the file details
@param filepath: Path to the file
@type filepath: string
@return: File details including permissions and content
@rtype: FileDetails
"""
return self._client.get_file_details(filepath)
def is_file_present(self, filepath):
"""
@summary: Check if the given file is present
@param filepath: Path to the file
@type filepath: string
@return: True if File exists, False otherwise
"""
return self._client.is_file_present(filepath)

View File

@ -0,0 +1,20 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
class InstanceClientConstants(object):
    LAST_REBOOT_TIME_FORMAT = '%Y-%m-%d %H:%M:%S'
    PING_IPV4_COMMAND_LINUX = 'ping -c 3 '
    PING_IPV6_COMMAND_LINUX = 'ping6 -c 3 '
    PING_PACKET_LOSS_REGEX = r'(\d{1,3})\.?\d*% packet loss'

View File

@ -0,0 +1,103 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import re
import os
from cafe.engine.clients.ssh import SSHBaseClient
class BasePersistentLinuxClient(object):
def __init__(self, ip_address, username, password, ssh_timeout=600, prompt=None):
self.ssh_client = SSHBaseClient(ip_address, username, password, ssh_timeout)
self.prompt = prompt
if not self.ssh_client.test_connection_auth():
    raise Exception(
        "SSH connection/authentication failed for {0}".format(
            ip_address))
def format_disk_device(self, device, fstype):
'''Formats entire device, does not create partitions'''
return self.ssh_client.exec_command("mkfs.%s %s\n" % (str(fstype).lower(), str(device)))
def mount_disk_device(self, device, mountpoint, fstype,
                      create_mountpoint=True):
    '''
    Mounts a disk at a specified mountpoint. If create_mountpoint is
    True, performs 'mkdir mountpoint' before mounting.
    '''
    if create_mountpoint:
        self.ssh_client.exec_command("mkdir %s" % str(mountpoint))
    return self.ssh_client.exec_command(
        "mount -t %s %s %s\n" %
        (str(fstype).lower(), str(device), str(mountpoint)))
def unmount_disk_device(self, mountpoint):
'''
Forces unmounts (umount -f) a disk at a specified mountpoint.
'''
return self.ssh_client.exec_command("umount -f %s\n" % (str(mountpoint)))
def write_random_data_to_disk(self, dir_path, filename, blocksize=1024,
count=1024):
'''Uses dd command to write blocksize*count bytes to dir_path/filename
via ssh on remote machine.
By default writes one mebibyte (2^20 bytes) if blocksize and count
are not defined.
NOTE: 1 MEBIbyte (2^20) != 1 MEGAbyte (10^6) for all contexts
Note: dd if=/dev/urandom
'''
dd_of = os.path.join(dir_path, str(filename))
return self.ssh_client.exec_command(
"dd if=/dev/urandom of=%s bs=%s count=%s\n" %
(str(dd_of), str(blocksize), str(count)))
def write_zeroes_data_to_disk(self, disk_mountpoint, filename, blocksize=1024, count=1024):
'''By default writes one mebibyte (2^20 bytes)'''
of = '%s/%s' % (disk_mountpoint, str(filename))
return self.ssh_client.exec_command(
"dd if=/dev/zero of=%s bs=%s count=%s\n" %
(str(of), str(blocksize), str(count)))
def execute_resource_bomb(self):
'''By default executes :(){ :|:& };:'''
return self.ssh_client.exec_command(":(){ :|:& };:")
def stat_file(self, filepath):
sshresp = self.ssh_client.exec_command("stat %s\n" % str(filepath))
return sshresp
def get_file_size_bytes(self, filepath):
    '''
    Performs wc -c on the path provided and returns the numerical
    byte count from the result, or None if it cannot be parsed
    '''
    sshresp = self.ssh_client.exec_command("wc -c %s\n" % str(filepath))
    result = re.search(r'^\s*(\d+)\s', sshresp)
    return result.group(1) if result else None
def get_file_md5hash(self, filepath):
    '''
    Performs a binary mode md5sum of the file (md5sum -b <file>)
    and returns the hash, or None if it cannot be parsed
    '''
    sshresp = self.ssh_client.exec_command(
        "md5sum -b %s\n" % str(filepath))
    result = re.search(r'^([0-9a-fA-F]{32})\s', sshresp)
    return result.group(1) if result else None
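The two parsing steps above — pulling the byte count out of `wc -c` output and the hash out of `md5sum -b` output — can be exercised against canned command output (the sample strings below are illustrative):

```python
import re


def parse_wc_byte_count(wc_output):
    """Extract the leading byte count from `wc -c <file>` output."""
    match = re.search(r'^\s*(\d+)\s', wc_output)
    return int(match.group(1)) if match else None


def parse_md5_hash(md5_output):
    """Extract the 32-hex-digit hash from `md5sum -b <file>` output."""
    match = re.search(r'^([0-9a-fA-F]{32})\s', md5_output)
    return match.group(1) if match else None


byte_count = parse_wc_byte_count("1048576 /mnt/volume/testfile\n")
md5_hash = parse_md5_hash(
    "d41d8cd98f00b204e9800998ecf8427e *testfile\n")
```

Anchoring on the leading digits (rather than matching everything up to whitespace) keeps the parse correct even when the file path itself contains spaces.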

View File

@ -0,0 +1,51 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import time
import re
from cafe.engine.clients.remote_instance.linux.linux_instance_client import LinuxClient
from cloudcafe.common.constants import InstanceClientConstants
class FreeBSDClient(LinuxClient):
def get_boot_time(self):
"""
@summary: Get the boot time of the server
@return: The boot time of the server
@rtype: time.struct_time
"""
uptime_fields = self.ssh_client.exec_command(
    'uptime').replace('\n', '').split(',')[0].split()
uptime = uptime_fields[2]
uptime_unit = uptime_fields[3]
if uptime_unit == 'mins':
    uptime_unit_format = 'M'
else:
    uptime_unit_format = 'S'
reboot_time = self.ssh_client.exec_command(
    'date -v -{0}{1} "+%Y-%m-%d %H:%M:%S"'.format(
        uptime, uptime_unit_format)).replace('\n', '')
return time.strptime(
    reboot_time, InstanceClientConstants.LAST_REBOOT_TIME_FORMAT)
def get_disk_size_in_gb(self):
"""
@summary: Returns the disk size in GB
@return: The disk size in GB
@rtype: int
"""
output = self.ssh_client.exec_command(
    'gpart show -p | grep "GPT"').replace('\n', '')
disk_size = re.search(r'([0-9]+)G', output).group(1)
return int(disk_size)
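`get_boot_time`'s uptime parsing can be followed with a canned FreeBSD `uptime` line (the sample output and field positions are assumptions about the usual `up N mins` form):

```python
# Canned `uptime` output in the `up N mins` form parsed above
sample = (" 2:30PM  up 15 mins, 1 user, "
          "load averages: 0.01, 0.02, 0.00\n")

fields = sample.replace('\n', '').split(',')[0].split()
uptime_value = fields[2]   # "15"
uptime_unit = fields[3]    # "mins"

# `date -v` takes M to subtract minutes, S to subtract seconds
unit_flag = 'M' if uptime_unit == 'mins' else 'S'
date_command = 'date -v -{0}{1} "+%Y-%m-%d %H:%M:%S"'.format(
    uptime_value, unit_flag)
```

Subtracting the parsed uptime from the current time via `date -v` recovers an approximate boot timestamp in the `LAST_REBOOT_TIME_FORMAT` layout.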

View File

@ -0,0 +1,34 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import time
from cafe.engine.clients.remote_instance.linux.linux_instance_client import LinuxClient
from cloudcafe.common.constants import InstanceClientConstants
class GentooArchClient(LinuxClient):
def get_boot_time(self):
"""
@summary: Get the boot time of the server
@return: The boot time of the server
@rtype: time.struct_time
"""
boot_time_string = self.ssh_client.exec_command(
    'who -b | grep -o "[A-Za-z]* [0-9].*"').replace('\n', ' ')
year = self.ssh_client.exec_command(
    'date | grep -o "[0-9]\{4\}$"').replace('\n', '')
boot_time = boot_time_string + year
return time.strptime(
    boot_time, InstanceClientConstants.LAST_REBOOT_TIME_FORMAT_GENTOO)

View File

@ -0,0 +1,366 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import time
import re
from cafe.engine.clients.ssh import SSHBaseClient
from cafe.common.reporting import cclogging
from cafe.engine.clients.ping import PingClient
from cloudcafe.compute.common.models.file_details import FileDetails
from cloudcafe.compute.common.models.partition import Partition, DiskSize
from cafe.engine.clients.remote_instance.linux.base_client import BasePersistentLinuxClient
from cloudcafe.compute.common.exceptions import FileNotFoundException, ServerUnreachable, SshConnectionException
class LinuxClient(BasePersistentLinuxClient):
def __init__(self, ip_address, server_id, os_distro, username, password):
self.client_log = cclogging.getLogger(
    cclogging.get_object_namespace(self.__class__))
ssh_timeout = 600
if ip_address is None:
raise ServerUnreachable("None")
self.ip_address = ip_address
self.username = username
if self.username is None:
self.username = 'root'
self.password = password
self.server_id = server_id
self.ssh_client = SSHBaseClient(self.ip_address,
self.username,
self.password,
timeout=ssh_timeout)
if not self.ssh_client.test_connection_auth():
    self.client_log.error(
        "SSH connection failed for IP: {0} username: {1}".format(
            self.ip_address, self.username))
    raise SshConnectionException("ssh connection failed")
def can_connect_to_public_ip(self):
"""
@summary: Checks if you can connect to server using public ip
@return: True if you can connect, False otherwise
@rtype: bool
"""
# This returns true since the connection has already been tested in the
# init method
return self.ssh_client is not None
def can_ping_public_ip(self, public_addresses,
                       ip_address_version_for_ssh):
    """
    @summary: Checks if you can ping a public ip
    @param public_addresses: List of public addresses
    @type public_addresses: Address List
    @param ip_address_version_for_ssh: IP version (4 or 6) to ping with
    @type ip_address_version_for_ssh: int
    @return: True if you can ping, False otherwise
    @rtype: bool
    """
    for public_address in public_addresses:
        if (public_address.version == 4 and
                not PingClient.ping(public_address.addr,
                                    ip_address_version_for_ssh)):
            return False
    return True
def can_authenticate(self):
"""
@summary: Checks if you can authenticate to the server
@return: True if you can connect, False otherwise
@rtype: bool
"""
return self.ssh_client.test_connection_auth()
def reboot(self, timeout=100):
    '''
    @summary: Reboots the server and waits for it to come back up
    @param timeout: Max time in seconds to wait for the reboot
    @type timeout: int
    @return: True if the server came back within the timeout,
        False otherwise
    @rtype: bool
    '''
    ssh_connector = SSHConnector(self.ip_address, self.username,
                                 self.password)
    response, prompt = ssh_connector.exec_shell_command("sudo reboot")
    response, prompt = ssh_connector.exec_shell_command(self.password)
    self.client_log.info("Reboot response for {0}: {1}".format(
        self.ip_address, response))
    max_time = time.time() + timeout
    while time.time() < max_time:
        time.sleep(5)
        if self.ssh_client.test_connection_auth():
            self.client_log.info(
                "Reboot successful for {0}".format(self.ip_address))
            return True
    return False
def get_hostname(self):
"""
@summary: Gets the host name of the server
@return: The host name of the server
@rtype: string
"""
return self.ssh_client.exec_command("hostname").rstrip()
def can_remote_ping_private_ip(self, private_addresses):
"""
@summary: Checks if you can ping a private ip from this server.
@param private_addresses: List of private addresses
@type private_addresses: Address List
@return: True if you can ping, False otherwise
@rtype: bool
"""
for private_address in private_addresses:
    if (private_address.version == 4 and
            not PingClient.ping_using_remote_machine(
                self.ssh_client, private_address.addr)):
        return False
return True
def get_files(self, path):
"""
@summary: Gets the list of filenames from the path
@param path: Path from where to get the filenames
@type path: string
@return: List of filenames
@rtype: List of strings
"""
command = "ls -m " + path
return self.ssh_client.exec_command(command).rstrip('\n').split(', ')
def get_ram_size_in_mb(self):
"""
@summary: Returns the RAM size in MB
@return: The RAM size in MB
@rtype: string
"""
output = self.ssh_client.exec_command('free -m | grep Mem')
# TODO (dwalleck): We should handle the failure case here
if output:
return output.split()[1]
def get_swap_size_in_mb(self):
"""
@summary: Returns the Swap size in MB
@return: The Swap size in MB
@rtype: int
"""
output = self.ssh_client.exec_command(
'fdisk -l /dev/xvdc1 2>/dev/null | grep "Disk.*bytes"').rstrip('\n')
if output:
return int(output.split()[2])
def get_disk_size_in_gb(self, disk_path):
"""
@summary: Returns the disk size in GB
@return: The disk size in GB
@rtype: int
"""
command = "df -h | grep '{0}'".format(disk_path)
output = self.ssh_client.exec_command(command)
size = output.split()[1]
def is_decimal(char):
return str.isdigit(char) or char == "."
size = filter(is_decimal, size)
return float(size)
def get_number_of_vcpus(self):
"""
@summary: Get the number of vcpus assigned to the server
@return: The number of vcpus assigned to the server
@rtype: int
"""
command = 'cat /proc/cpuinfo | grep processor | wc -l'
output = self.ssh_client.exec_command(command)
return int(output)
def get_partitions(self):
"""
@summary: Returns the contents of /proc/partitions
@return: The partitions attached to the instance
@rtype: string
"""
command = 'cat /proc/partitions'
output = self.ssh_client.exec_command(command)
return output
def get_uptime(self):
    """
    @summary: Get the uptime of the server in seconds
    @return: The uptime of the server in seconds
    @rtype: float
    """
result = self.ssh_client.exec_command('cat /proc/uptime')
uptime = float(result.split(' ')[0])
return uptime
def create_file(self, file_name, file_content, file_path=None):
'''
@summary: Create a new file
@param file_name: File Name
@type file_name: String
@param file_content: File Content
@type file_content: String
@return filedetails: File details such as content, name and path
@rtype filedetails: FileDetails
'''
if file_path is None:
    file_path = "/root/" + file_name
self.ssh_client.exec_command(
    "echo -n '{0}' >> {1}".format(file_content, file_path))
return FileDetails("644", file_content, file_path)
def get_file_details(self, filepath):
"""
@summary: Get the file details
@param filepath: Path to the file
@type filepath: string
@return: File details including permissions and content
@rtype: FileDetails
"""
output = self.ssh_client.exec_command(
'[ -f ' + filepath + ' ] && echo "File exists" || echo "File does not exist"')
if not output.rstrip('\n') == 'File exists':
raise FileNotFoundException(
"File:" + filepath + " not found on instance.")
file_permissions = self.ssh_client.exec_command(
'stat -c %a ' + filepath).rstrip("\n")
file_contents = self.ssh_client.exec_command('cat ' + filepath)
return FileDetails(file_permissions, file_contents, filepath)
def is_file_present(self, filepath):
"""
@summary: Check if the given file is present
@param filepath: Path to the file
@type filepath: string
@return: True if File exists, False otherwise
"""
output = self.ssh_client.exec_command(
'[ -f ' + filepath + ' ] && echo "File exists" || echo "File does not exist"')
return output.rstrip('\n') == 'File exists'
def get_partition_types(self):
"""
@summary: Return the partition types for all partitions
@return: The partition types for all partitions
@rtype: Dictionary
"""
partitions_list = self.ssh_client.exec_command(
'blkid').rstrip('\n').split('\n')
partition_types = {}
for row in partitions_list:
partition_name = row.split()[0].rstrip(':')
partition_types[partition_name] = re.findall(
r'TYPE="([^"]+)"', row)[0]
return partition_types
def get_partition_details(self):
"""
@summary: Return the partition details
@return: The partition details
@rtype: Partition List
"""
# Return a list of partition objects that each contains the name and
# size of the partition in bytes and the type of the partition
partition_types = self.get_partition_types()
partition_names = ' '.join(partition_types.keys())
partition_size_output = self.ssh_client.exec_command(
'fdisk -l %s 2>/dev/null | grep "Disk.*bytes"' % (partition_names)).rstrip('\n').split('\n')
partitions = []
for row in partition_size_output:
row_details = row.split()
partition_name = row_details[1].rstrip(':')
partition_type = partition_types[partition_name]
if partition_type == 'swap':
partition_size = DiskSize(
float(row_details[2]), row_details[3].rstrip(','))
else:
partition_size = DiskSize(
int(row_details[4]) / 1073741824, 'GB')
partitions.append(
Partition(partition_name, partition_size, partition_type))
return partitions
def verify_partitions(self, expected_disk_size, expected_swap_size, server_status, actual_partitions):
"""
@summary: Verify the partition details of the server
@param expected_disk_size: The expected value of the Disk size in GB
@type expected_disk_size: string
@param expected_swap_size: The expected value of the Swap size in GB
@type expected_swap_size: string
@param server_status: The status of the server
@type server_status: string
@param actual_partitions: The actual partition details of the server
@type actual_partitions: Partition List
@return: The result of verification and the message to be displayed
@rtype: Tuple (bool,string)
"""
expected_partitions = self._get_expected_partitions(
expected_disk_size, expected_swap_size, server_status)
if actual_partitions is None:
actual_partitions = self.get_partition_details()
for partition in expected_partitions:
if partition not in actual_partitions:
return False, self._construct_partition_mismatch_message(expected_partitions, actual_partitions)
return True, "Partitions Matched"
def _get_expected_partitions(self, expected_disk_size, expected_swap_size, server_status):
"""
@summary: Returns the expected partitions for a server based on server status
@param expected_disk_size: The Expected disk size of the server in GB
@type expected_disk_size: string
@param expected_swap_size: The Expected swap size of the server in MB
@type expected_swap_size: string
@param server_status: Status of the server (ACTIVE or RESCUE)
@type server_status: string
@return: The expected partitions
@rtype: Partition List
"""
# ignoring swap until the rescue functionality is clarified
expected_partitions = [Partition(
'/dev/xvda1', DiskSize(expected_disk_size, 'GB'), 'ext3'),
Partition('/dev/xvdc1', DiskSize(expected_swap_size, 'MB'), 'swap')]
        if server_status.upper() == 'RESCUE':
expected_partitions = [Partition(
'/dev/xvdb1', DiskSize(expected_disk_size, 'GB'), 'ext3')]
# expected_partitions.append(Partition('/dev/xvdd1',
# DiskSize(expected_swap_size, 'MB'), 'swap'))
return expected_partitions
def _construct_partition_mismatch_message(self, expected_partitions, actual_partitions):
"""
@summary: Constructs the partition mismatch message based on expected_partitions and actual_partitions
@param expected_partitions: Expected partitions of the server
@type expected_partitions: Partition List
@param actual_partitions: Actual Partitions of the server
@type actual_partitions: Partition List
@return: The partition mismatch message
@rtype: string
"""
message = 'Partitions Mismatch \n Expected Partitions:\n'
for partition in expected_partitions:
message += str(partition) + '\n'
message += ' Actual Partitions:\n'
for partition in actual_partitions:
message += str(partition) + '\n'
return message
def mount_file_to_destination_directory(self, source_path, destination_path):
'''
@summary: Mounts the file to destination directory
@param source_path: Path to file source
@type source_path: String
@param destination_path: Path to mount destination
@type destination_path: String
'''
self.ssh_client.exec_command(
'mount ' + source_path + ' ' + destination_path)


@@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""


@@ -0,0 +1,18 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
class WindowsClient:
pass

304
cafe/engine/clients/rest.py Normal file

@@ -0,0 +1,304 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import requests
from time import time
from cafe.common.reporting import cclogging
from cafe.engine.clients.base import BaseClient
def _log_transaction(log, level=cclogging.logging.DEBUG):
""" Paramaterized decorator
Takes a python Logger object and an optional logging level.
"""
def _decorator(func):
"""Accepts a function and returns wrapped version of that function."""
def _wrapper(*args, **kwargs):
"""Logging wrapper for any method that returns a requests response.
Logs requestslib response objects, and the args and kwargs
sent to the request() method, to the provided log at the provided
log level.
"""
logline = '{0} {1}'.format(args, kwargs)
try:
log.debug(logline.decode('utf-8', 'replace'))
except Exception as exception:
#Ignore all exceptions that happen in logging, then log them
                log.info(
                    'Exception occurred while logging signature of calling '
                    'method in rest connector')
log.exception(exception)
            #Make the request and time its execution
response = None
elapsed = None
try:
start = time()
response = func(*args, **kwargs)
elapsed = time() - start
except Exception as exception:
log.critical('Call to Requests failed due to exception')
log.exception(exception)
raise exception
#requests lib 1.0.0 renamed body to data in the request object
request_body = ''
if 'body' in dir(response.request):
request_body = response.request.body
elif 'data' in dir(response.request):
request_body = response.request.data
else:
log.info(
"Unable to log request body, neither a 'data' nor a "
"'body' object could be found")
#requests lib 1.0.4 removed params from response.request
request_params = ''
request_url = response.request.url
if 'params' in dir(response.request):
request_params = response.request.params
elif '?' in request_url:
request_url, request_params = request_url.split('?')
logline = ''.join([
'\n{0}\nREQUEST SENT\n{0}\n'.format('-' * 12),
'request method..: {0}\n'.format(response.request.method),
'request url.....: {0}\n'.format(request_url),
'request params..: {0}\n'.format(request_params),
'request headers.: {0}\n'.format(response.request.headers),
'request body....: {0}\n'.format(request_body)])
try:
log.log(level, logline.decode('utf-8', 'replace'))
except Exception as exception:
#Ignore all exceptions that happen in logging, then log them
log.log(level, '\n{0}\nREQUEST INFO\n{0}\n'.format('-' * 12))
log.exception(exception)
logline = ''.join([
                '\n{0}\nRESPONSE RECEIVED\n{0}\n'.format('-' * 17),
'response status..: {0}\n'.format(response),
'response time....: {0}\n'.format(elapsed),
'response headers.: {0}\n'.format(response.headers),
'response body....: {0}\n'.format(response.content),
'-' * 79])
try:
log.log(level, logline.decode('utf-8', 'replace'))
except Exception as exception:
#Ignore all exceptions that happen in logging, then log them
log.log(level, '\n{0}\nRESPONSE INFO\n{0}\n'.format('-' * 13))
log.exception(exception)
return response
return _wrapper
return _decorator
def _inject_exception(exception_handlers):
"""Paramaterized decorator takes a list of exception_handler objects"""
def _decorator(func):
"""Accepts a function and returns wrapped version of that function."""
def _wrapper(*args, **kwargs):
"""Wrapper for any function that returns a Requests response.
Allows exception handlers to raise custom exceptions based on
response object attributes such as status_code.
"""
response = func(*args, **kwargs)
if exception_handlers:
for handler in exception_handlers:
handler.check_for_errors(response)
return response
return _wrapper
return _decorator
class BaseRestClient(BaseClient):
"""Re-implementation of Requests' api.py that removes many assumptions.
Adds verbose logging.
Adds support for response-code based exception injection.
(Raising exceptions based on response code)
@see: http://docs.python-requests.org/en/latest/api/#configurations
"""
_exception_handlers = []
_log = cclogging.getLogger(__name__)
def __init__(self):
super(BaseRestClient, self).__init__()
@_inject_exception(_exception_handlers)
@_log_transaction(log=_log)
def request(self, method, url, **kwargs):
""" Performs <method> HTTP request to <url> using the requests lib"""
return requests.request(method, url, **kwargs)
def put(self, url, **kwargs):
""" HTTP PUT request """
return self.request('PUT', url, **kwargs)
def copy(self, url, **kwargs):
""" HTTP COPY request """
return self.request('COPY', url, **kwargs)
def post(self, url, data=None, **kwargs):
""" HTTP POST request """
return self.request('POST', url, data=data, **kwargs)
def get(self, url, **kwargs):
""" HTTP GET request """
return self.request('GET', url, **kwargs)
def head(self, url, **kwargs):
""" HTTP HEAD request """
return self.request('HEAD', url, **kwargs)
def delete(self, url, **kwargs):
""" HTTP DELETE request """
return self.request('DELETE', url, **kwargs)
def options(self, url, **kwargs):
""" HTTP OPTIONS request """
return self.request('OPTIONS', url, **kwargs)
def patch(self, url, **kwargs):
""" HTTP PATCH request """
return self.request('PATCH', url, **kwargs)
@classmethod
def add_exception_handler(cls, handler):
"""Adds a specific L{ExceptionHandler} to the rest connector
@warning: SHOULD ONLY BE CALLED FROM A PROVIDER THROUGH A TEST
FIXTURE
"""
cls._exception_handlers.append(handler)
@classmethod
def delete_exception_handler(cls, handler):
"""Removes a L{ExceptionHandler} from the rest connector
@warning: SHOULD ONLY BE CALLED FROM A PROVIDER THROUGH A TEST
FIXTURE
"""
if handler in cls._exception_handlers:
cls._exception_handlers.remove(handler)
class RestClient(BaseRestClient):
"""
    @summary: Allows clients to inherit all requests-defined RESTful
              verbs. Redefines request() so that keyword args are passed
              through a named dictionary instead of kwargs.
              Client methods can then take parameters that may overload
              request parameters, which allows client method calls to
              override parts of the request with parameters sent directly
              to requests, overriding the client method logic either in
              part or whole on the fly.
@see: http://docs.python-requests.org/en/latest/api/#configurations
"""
def __init__(self):
super(RestClient, self).__init__()
self.default_headers = {}
def request(
self, method, url, headers=None, params=None, data=None,
requestslib_kwargs=None):
#set requestslib_kwargs to an empty dict if None
requestslib_kwargs = requestslib_kwargs if (
requestslib_kwargs is not None) else {}
#Set defaults
params = params if params is not None else {}
verify = False
#If headers are provided by both, headers "wins" over default_headers
headers = dict(self.default_headers, **(headers or {}))
#Override url if present in requestslib_kwargs
if 'url' in requestslib_kwargs.keys():
url = requestslib_kwargs.get('url', None) or url
del requestslib_kwargs['url']
#Override method if present in requestslib_kwargs
if 'method' in requestslib_kwargs.keys():
method = requestslib_kwargs.get('method', None) or method
del requestslib_kwargs['method']
#The requests lib already removes None key/value pairs, but we force it
#here in case that behavior ever changes
for key in requestslib_kwargs.keys():
if requestslib_kwargs[key] is None:
del requestslib_kwargs[key]
        #Create the final parameters for the call to the base request().
        #Wherever a parameter is provided both by the calling method AND
        #the requests_lib kwargs dictionary, requestslib_kwargs "wins"
requestslib_kwargs = dict({'headers': headers,
'params': params,
'verify': verify,
'data': data},
**requestslib_kwargs)
#Make the request
return super(RestClient, self).request(method, url,
**requestslib_kwargs)
class AutoMarshallingRestClient(RestClient):
"""@TODO: Turn serialization and deserialization into decorators so
that we can support serialization and deserialization on a per-method
basis"""
def __init__(self, serialize_format=None, deserialize_format=None):
super(AutoMarshallingRestClient, self).__init__()
self.serialize_format = serialize_format
self.deserialize_format = deserialize_format or self.serialize_format
def request(self, method, url, headers=None, params=None, data=None,
response_entity_type=None, request_entity=None,
requestslib_kwargs=None):
#defaults requestslib_kwargs to a dictionary if it is None
requestslib_kwargs = requestslib_kwargs if (requestslib_kwargs is not
None) else {}
        #set the 'data' parameter of the request to either what's already in
        #requestslib_kwargs, or the serialized output of the request_entity
if request_entity is not None:
requestslib_kwargs = dict(
{'data': request_entity.serialize(self.serialize_format)},
**requestslib_kwargs)
#Make the request
response = super(AutoMarshallingRestClient, self).request(
method, url, headers=headers, params=params, data=data,
requestslib_kwargs=requestslib_kwargs)
#Append the deserialized data object to the response
response.request.__dict__['entity'] = None
response.__dict__['entity'] = None
#If present, append the serialized request data object to
#response.request
if response.request is not None:
response.request.__dict__['entity'] = request_entity
if response_entity_type is not None:
response.__dict__['entity'] = response_entity_type.deserialize(
response.content,
self.deserialize_format)
return response
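The precedence rule in RestClient.request() above (caller-supplied requestslib_kwargs "win" over the client's defaults after None values are dropped) can be sketched in isolation. The function and variable names below are illustrative, not part of the engine:

```python
def merge_request_kwargs(client_defaults, requestslib_kwargs):
    """Mimic RestClient.request(): drop None values from the override
    dict, then let the remaining overrides win over the client defaults."""
    overrides = dict((k, v) for k, v in requestslib_kwargs.items()
                     if v is not None)
    return dict(client_defaults, **overrides)

defaults = {'headers': {'Accept': 'application/json'}, 'verify': False}
merged = merge_request_kwargs(defaults, {'verify': True, 'timeout': None})
print(merged)  # 'verify' is overridden; 'timeout' is dropped since it was None
```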


@@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

285
cafe/engine/clients/ssh.py Normal file

@@ -0,0 +1,285 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import time
import socket
import exceptions
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
from paramiko.resource import ResourceManager
from paramiko.client import SSHClient
import paramiko
from cafe.common.reporting import cclogging
from cafe.engine.clients.base import BaseClient
class SSHBaseClient(BaseClient):
_log = cclogging.getLogger(__name__)
def __init__(self, host, username, password, timeout=20, port=22):
super(SSHBaseClient, self).__init__()
self.host = host
self.port = port
self.username = username
self.password = password
self.timeout = int(timeout)
self._chan = None
def _get_ssh_connection(self):
"""Returns an ssh connection to the specified host"""
_timeout = True
ssh = SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
_start_time = time.time()
saved_exception = exceptions.StandardError()
#doing this because the log file fills up with these messages
#this way it only logs it once
log_attempted = False
socket_error_logged = False
auth_error_logged = False
ssh_error_logged = False
while not self._is_timed_out(self.timeout, _start_time):
try:
if not log_attempted:
self._log.debug('Attempting to SSH connect to: ')
self._log.debug('host: %s, username: %s' %
(self.host, self.username))
log_attempted = True
ssh.connect(hostname=self.host,
username=self.username,
password=self.password,
timeout=self.timeout,
key_filename=[],
look_for_keys=False,
allow_agent=False)
_timeout = False
break
except socket.error as e:
if not socket_error_logged:
self._log.error('Socket Error: %s' % str(e))
socket_error_logged = True
saved_exception = e
continue
except paramiko.AuthenticationException as e:
if not auth_error_logged:
self._log.error('Auth Exception: %s' % str(e))
auth_error_logged = True
saved_exception = e
time.sleep(2)
continue
except paramiko.SSHException as e:
if not ssh_error_logged:
self._log.error('SSH Exception: %s' % str(e))
ssh_error_logged = True
saved_exception = e
time.sleep(2)
continue
#Wait 2 seconds otherwise
time.sleep(2)
if _timeout:
            self._log.error(
                'SSHConnector timed out while trying to establish a '
                'connection')
raise saved_exception
#This MUST be done because the transport gets garbage collected if it
#is not done here, which causes the connection to close on invoke_shell
#which is needed for exec_shell_command
ResourceManager.register(self, ssh.get_transport())
return ssh
def _is_timed_out(self, timeout, start_time):
return (time.time() - timeout) > start_time
def connect_until_closed(self):
"""Connect to the server and wait until connection is lost"""
try:
ssh = self._get_ssh_connection()
_transport = ssh.get_transport()
_start_time = time.time()
_timed_out = self._is_timed_out(self.timeout, _start_time)
while _transport.is_active() and not _timed_out:
time.sleep(5)
_timed_out = self._is_timed_out(self.timeout, _start_time)
ssh.close()
except (EOFError, paramiko.AuthenticationException, socket.error):
return
def exec_command(self, cmd):
"""Execute the specified command on the server.
:returns: data read from standard output of the command
"""
self._log.debug('EXECing: %s' % str(cmd))
ssh = self._get_ssh_connection()
stdin, stdout, stderr = ssh.exec_command(cmd)
output = stdout.read()
ssh.close()
self._log.debug('EXEC-OUTPUT: %s' % str(output))
return output
def test_connection_auth(self):
""" Returns true if ssh can connect to server"""
try:
connection = self._get_ssh_connection()
connection.close()
except paramiko.AuthenticationException:
return False
return True
def start_shell(self):
"""Starts a shell instance of SSH to use with multiple commands."""
#large width and height because of need to parse output of commands
#in exec_shell_command
self._chan = self._get_ssh_connection().invoke_shell(width=9999999,
height=9999999)
#wait until buffer has data
while not self._chan.recv_ready():
time.sleep(1)
#clearing initial buffer, usually login information
while self._chan.recv_ready():
self._chan.recv(1024)
def exec_shell_command(self, cmd, stop_after_send=False):
"""
Executes a command in shell mode and receives all of the response.
Parses the response and returns the output of the command and the
prompt.
"""
if not cmd.endswith('\n'):
cmd = '%s\n' % cmd
self._log.debug('EXEC-SHELLing: %s' % cmd)
if self._chan is None or self._chan.closed:
self.start_shell()
while not self._chan.send_ready():
time.sleep(1)
self._chan.send(cmd)
if stop_after_send:
self._chan.get_transport().set_keepalive(1000)
return None
while not self._chan.recv_ready():
time.sleep(1)
output = ''
while self._chan.recv_ready():
output += self._chan.recv(1024)
self._log.debug('SHELL-COMMAND-RETURN: \n%s' % output)
prompt = output[output.rfind('\r\n') + 2:]
output = output[output.find('\r\n') + 2:output.rfind('\r\n')]
self._chan.get_transport().set_keepalive(1000)
return output, prompt
def exec_shell_command_wait_for_prompt(self, cmd, prompt='#', timeout=300):
"""
Executes a command in shell mode and receives all of the response.
Parses the response and returns the output of the command and the
prompt.
"""
if not cmd.endswith('\n'):
cmd = '%s\n' % cmd
self._log.debug('EXEC-SHELLing: %s' % cmd)
if self._chan is None or self._chan.closed:
self.start_shell()
while not self._chan.send_ready():
time.sleep(1)
self._chan.send(cmd)
while not self._chan.recv_ready():
time.sleep(1)
output = ''
max_time = time.time() + timeout
while time.time() < max_time:
current = self._chan.recv(1024)
output += current
if current.find(prompt) != -1:
self._log.debug('SHELL-PROMPT-FOUND: %s' % prompt)
break
self._log.debug('Current response: %s' % current)
self._log.debug('Looking for prompt: %s. Time remaining until timeout: %s'
% (prompt, max_time - time.time()))
while not self._chan.recv_ready() and time.time() < max_time:
time.sleep(5)
self._chan.get_transport().set_keepalive(1000)
self._log.debug('SHELL-COMMAND-RETURN: %s' % output)
prompt = output[output.rfind('\r\n') + 2:]
output = output[output.find('\r\n') + 2:output.rfind('\r\n')]
return output, prompt
def make_directory(self, directory_name):
self._log.info('Making a Directory')
transport = paramiko.Transport((self.host, self.port))
transport.connect(username=self.username, password=self.password)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
sftp.mkdir(directory_name)
        except IOError as exception:
self._log.warning("Exception in making a directory: %s" % exception)
return False
else:
sftp.close()
transport.close()
return True
def browse_folder(self):
self._log.info('Browsing a folder')
transport = paramiko.Transport((self.host, self.port))
transport.connect(username=self.username, password=self.password)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
sftp.listdir()
        except IOError as exception:
self._log.warning("Exception in browsing folder file: %s" % exception)
return False
else:
sftp.close()
transport.close()
return True
def upload_a_file(self, server_file_path, client_file_path):
self._log.info("uploading file from %s to %s"
% (client_file_path, server_file_path))
transport = paramiko.Transport((self.host, self.port))
transport.connect(username=self.username, password=self.password)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
sftp.put(client_file_path, server_file_path)
        except IOError as exception:
self._log.warning("Exception in uploading file: %s" % exception)
return False
else:
sftp.close()
transport.close()
return True
def download_a_file(self, server_filepath, client_filepath):
transport = paramiko.Transport(self.host)
transport.connect(username=self.username, password=self.password)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
sftp.get(server_filepath, client_filepath)
except IOError:
return False
else:
sftp.close()
transport.close()
return True
def end_shell(self):
if not self._chan.closed:
self._chan.close()
self._chan = None
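The connection retry loop above is bounded by _is_timed_out(); its arithmetic is easy to misread, so here is a pure-function version of the same predicate with an explicit clock argument (the standalone function is for illustration only):

```python
def is_timed_out(timeout, start_time, now):
    """Same predicate as SSHBaseClient._is_timed_out(): true once more
    than `timeout` seconds have elapsed since `start_time`."""
    return (now - timeout) > start_time

# 11 seconds elapsed against a 10-second timeout -> timed out
print(is_timed_out(10, 100.0, 111.0))  # True
# 5 seconds elapsed -> still within the window
print(is_timed_out(10, 100.0, 105.0))  # False
```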

128
cafe/engine/config.py Normal file

@@ -0,0 +1,128 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import ConfigParser
_ENGINE_CONFIG_FILE_ENV_VAR = 'CCTNG_CONFIG_FILE'
class NonExistentConfigPathError(Exception):
pass
class ConfigEnvironmentVariableError(Exception):
pass
class EngineConfig(object):
'''
Config interface for the global engine configuration
'''
SECTION_NAME = 'CCTNG_ENGINE'
def __init__(self, config_file_path=None, section_name=None):
#Support for setting the section name as a class or instance
#constant, as both 'SECTION_NAME' and 'CONFIG_SECTION_NAME'
self._section_name = (section_name or
getattr(self, 'SECTION_NAME', None) or
getattr(self, 'CONFIG_SECTION_NAME', None))
self._datasource = None
config_file_path = config_file_path or self.default_config_file
#Check the path
if not os.path.exists(config_file_path):
msg = 'Could not verify the existence of config file at {0}'\
.format(config_file_path)
raise NonExistentConfigPathError(msg)
#Read the file in and turn it into a SafeConfigParser instance
try:
self._datasource = ConfigParser.SafeConfigParser()
self._datasource.read(config_file_path)
except Exception as e:
raise e
@property
def default_config_file(self):
engine_config_file_path = None
try:
engine_config_file_path = os.environ[_ENGINE_CONFIG_FILE_ENV_VAR]
except KeyError:
msg = "'{0}' environment variable was not set.".format(
_ENGINE_CONFIG_FILE_ENV_VAR)
raise ConfigEnvironmentVariableError(msg)
except Exception as exception:
print ("Unexpected exception while attempting to access '{0}' "
"environment variable.".format(_ENGINE_CONFIG_FILE_ENV_VAR))
raise exception
return(engine_config_file_path)
def get(self, item_name, default=None):
try:
return self._datasource.get(self._section_name, item_name)
except ConfigParser.NoOptionError as no_option_err:
if not default:
raise no_option_err
return default
def get_raw(self, item_name, default=None):
'''Performs a get() on SafeConfigParser object without interpolation
'''
try:
return self._datasource.get(self._section_name, item_name,
raw=True)
except ConfigParser.NoOptionError as no_option_err:
if not default:
raise no_option_err
return default
def get_boolean(self, item_name, default=None):
try:
return self._datasource.getboolean(self._section_name,
item_name)
except ConfigParser.NoOptionError as no_option_err:
if not default:
raise no_option_err
return default
#Provided for implementations of cafe, unused by the engine itself
@property
def data_directory(self):
return self.get_raw("data_directory")
#Provided for implementations of cafe, unused by the engine itself
@property
def temp_directory(self):
return self.get_raw("temp_directory")
#Used by the engine for the output of engine and implementation logs
@property
def log_directory(self):
return os.getenv("CLOUDCAFE_LOG_PATH", self.get_raw("log_directory", default="."))
#Used by the engine for the output of engine and implementation logs
@property
def master_log_file_name(self):
return self.get_raw("master_log_file_name", default="engine-master")
#Used by the engine for the output of engine and implementation logs
@property
def use_verbose_logging(self):
return self.get_boolean("use_verbose_logging", False)


@@ -0,0 +1,16 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

261
cafe/engine/models/base.py Normal file

@@ -0,0 +1,261 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from xml.etree import ElementTree
from cafe.common.reporting import cclogging
class CommonToolsMixin(object):
"""Methods used to make building data models easier, common to all types"""
@staticmethod
def _bool_to_string(value, true_string='true', false_string='false'):
"""Returns a string representation of a boolean value, or the value
provided if the value is not an instance of bool
"""
if isinstance(value, bool):
return true_string if value is True else false_string
return value
@staticmethod
def _remove_empty_values(dictionary):
        '''Returns a new dictionary based on 'dictionary', minus any keys
        whose values are None
        '''
        return dict((k, v) for k, v in dictionary.items() if v is not None)
class JSON_ToolsMixin(object):
"""Methods used to make building json data models easier"""
pass
class XML_ToolsMixin(object):
"""Methods used to make building xml data models easier"""
@staticmethod
def _set_xml_etree_element(
xml_etree, property_dict, exclude_empty_properties=True):
        '''Sets a dictionary of keys and values as properties of the xml etree
        element if value is not None. Optionally, adds all keys and values as
        properties only if exclude_empty_properties == False.
        '''
        if exclude_empty_properties:
            property_dict = CommonToolsMixin._remove_empty_values(
                property_dict)
for key in property_dict:
xml_etree.set(str(key), str(property_dict[key]))
return xml_etree
@staticmethod
def _remove_xml_etree_namespace(doc, namespace):
"""Remove namespace in the passed document in place."""
ns = u'{%s}' % namespace
nsl = len(ns)
for elem in doc.getiterator():
for key in elem.attrib:
if key.startswith(ns):
new_key = key[nsl:]
elem.attrib[new_key] = elem.attrib[key]
del elem.attrib[key]
if elem.tag.startswith(ns):
elem.tag = elem.tag[nsl:]
return doc
class BAD_XML_TOOLS(object):
    '''THESE ARE BAD. DON'T USE THEM. They were created in a more innocent
    age, and are here for backwards compatibility only.
'''
def _auto_value_to_dict(self, value):
ret = None
if isinstance(value, (int, str, unicode, bool)):
ret = value
elif isinstance(value, list):
ret = []
for item in value:
ret.append(self._auto_value_to_dict(item))
elif isinstance(value, dict):
ret = {}
for key in value.keys():
ret[key] = self._auto_value_to_dict(value[key])
elif isinstance(value, BaseMarshallingDomain):
ret = value._obj_to_json()
return ret
def _auto_to_dict(self):
ret = {}
for attr in vars(self).keys():
value = vars(self).get(attr)
if value is not None and attr != '_log':
ret[attr] = self._auto_value_to_dict(value)
if hasattr(self, 'ROOT_TAG'):
return {self.ROOT_TAG: ret}
else:
return ret
def _auto_to_xml(self):
#XML is almost impossible to do without a schema definition because it
#cannot be determined when an instance variable should be an attribute
#of an element or text between that element's tags
ret = ElementTree.Element(self.ROOT_TAG)
for attr in vars(self).keys():
value = vars(self).get(attr)
if value is not None:
assigned = self._auto_value_to_xml(attr, value)
if isinstance(assigned, ElementTree.Element):
ret.append(assigned)
else:
ret.set(attr, str(assigned))
return ret
class BaseModel(object):
__REPR_SEPARATOR__ = '\n'
def __init__(self):
self._log = cclogging.getLogger(
cclogging.get_object_namespace(self.__class__))
def __eq__(self, obj):
try:
if vars(obj) == vars(self):
return True
except:
pass
return False
def __ne__(self, obj):
if obj is None:
return True
if vars(obj) == vars(self):
return False
else:
return True
def __str__(self):
strng = '<{0} object> {1}'.format(
self.__class__.__name__, self.__REPR_SEPARATOR__)
for key in self.__dict__.keys():
if str(key) == '_log':
continue
strng = '{0}{1} = {2}{3}'.format(
strng, str(key), str(self.__dict__[key]),
self.__REPR_SEPARATOR__)
return strng
def __repr__(self):
return self.__str__()
#Splitting the xml and json stuff into mixins cleans up the code but still
#muddies the AutoMarshallingModel namespace. We could create
#tool objects in the AutoMarshallingModel, which would just act as
#sub-namespaces, to keep it clean. --Jose
class AutoMarshallingModel(
BaseModel, CommonToolsMixin, JSON_ToolsMixin, XML_ToolsMixin,
BAD_XML_TOOLS):
"""
@summary: A class used as a base to build and contain the logic necessary
to automatically create serialized requests and automatically
deserialize responses in a format-agnostic way.
"""
_log = cclogging.getLogger(__name__)
def __init__(self):
super(AutoMarshallingModel, self).__init__()
self._log = cclogging.getLogger(
cclogging.get_object_namespace(self.__class__))
def serialize(self, format_type):
serialization_exception = None
try:
serialize_method = '_obj_to_{0}'.format(format_type)
return getattr(self, serialize_method)()
except Exception as serialization_exception:
pass
if serialization_exception:
try:
                self._log.error(
                    'Error occurred during serialization of a data model '
                    'into the "{0}" format: \n{1}'.format(
                        format_type, serialization_exception))
self._log.exception(serialization_exception)
except Exception as exception:
self._log.exception(exception)
self._log.debug(
"Unable to log information regarding the "
"deserialization exception due to '{0}'".format(
serialization_exception))
return None
@classmethod
def deserialize(cls, serialized_str, format_type):
cls._log = cclogging.getLogger(
cclogging.get_object_namespace(cls))
model_object = None
deserialization_exception = None
if serialized_str and len(serialized_str) > 0:
try:
deserialize_method = '_{0}_to_obj'.format(format_type)
model_object = getattr(cls, deserialize_method)(serialized_str)
except Exception as deserialization_exception:
cls._log.exception(deserialization_exception)
#Try to log string and format_type if deserialization broke
if deserialization_exception is not None:
try:
cls._log.debug(
"Deserialization Error: Attempted to deserialize "
"using format: {0}".format(format_type.decode(
encoding='UTF-8', errors='ignore')))
cls._log.debug(
"Deserialization Error: Unable to deserialize the "
"following:\n{0}".format(serialized_str.decode(
encoding='UTF-8', errors='ignore')))
except Exception as exception:
cls._log.exception(exception)
cls._log.debug(
"Unable to log information regarding the "
"deserialization exception")
return model_object
#Serialization Functions
def _obj_to_json(self):
raise NotImplementedError
def _obj_to_xml(self):
raise NotImplementedError
#Deserialization Functions
@classmethod
def _xml_to_obj(cls, serialized_str):
raise NotImplementedError
@classmethod
def _json_to_obj(cls, serialized_str):
raise NotImplementedError
class AutoMarshallingListModel(list, AutoMarshallingModel):
"""List-like AutoMarshallingModel used for some special cases"""
def __str__(self):
return list.__str__(self)
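The format-agnostic dispatch in AutoMarshallingModel can be sketched standalone: `serialize()`/`deserialize()` build a method name from the format string and look it up with `getattr`, while a subclass supplies the per-format `_obj_to_json`/`_json_to_obj` implementations. `ExampleModel` and its fields are hypothetical and independent of the cafe package; error handling and logging are omitted for brevity.

```python
import json


class ExampleModel(object):
    """Minimal sketch of the AutoMarshallingModel dispatch pattern."""

    def __init__(self, name, flavor):
        self.name = name
        self.flavor = flavor

    def serialize(self, format_type):
        # 'json' dispatches to _obj_to_json(), 'xml' to _obj_to_xml(), etc.
        return getattr(self, '_obj_to_{0}'.format(format_type))()

    @classmethod
    def deserialize(cls, serialized_str, format_type):
        # 'json' dispatches to _json_to_obj(), and so on
        return getattr(cls, '_{0}_to_obj'.format(format_type))(serialized_str)

    def _obj_to_json(self):
        return json.dumps({'name': self.name, 'flavor': self.flavor})

    @classmethod
    def _json_to_obj(cls, serialized_str):
        data = json.loads(serialized_str)
        return cls(data['name'], data['flavor'])


coffee = ExampleModel('latte', 'vanilla')
wire = coffee.serialize('json')
roundtrip = ExampleModel.deserialize(wire, 'json')
```

Adding a new wire format to such a subclass is then just a matter of defining another `_obj_to_<format>`/`_<format>_to_obj` pair; no caller changes.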

View File

@ -0,0 +1,37 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
'''
@summary: Responses directly from the command line
'''
class CommandLineResponse(object):
'''Bare bones object for any Command Line Connector response
@ivar Command: The full original command string for this response
@type Command: C{str}
@ivar StandardOut: The Standard Out generated by this command
@type StandardOut: C{list} of C{str}
@ivar StandardError: The Standard Error generated by this command
@type StandardError: C{list} of C{str}
@ivar ReturnCode: The command's return code
@type ReturnCode: C{int}
'''
def __init__(self):
self.command = ""
self.standard_out = []
self.standard_error = []
self.return_code = None
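A sketch of how a connector might populate this model; the `run` helper is hypothetical and not part of this commit, and `subprocess` stands in for whatever transport the real connector uses.

```python
import subprocess
import sys


class CommandLineResponse(object):
    """Standalone copy of the bare-bones response model above."""

    def __init__(self):
        self.command = ""
        self.standard_out = []
        self.standard_error = []
        self.return_code = None


def run(command_args):
    """Run a command and wrap its result in a CommandLineResponse."""
    proc = subprocess.Popen(
        command_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True)
    out, err = proc.communicate()
    response = CommandLineResponse()
    response.command = " ".join(command_args)
    response.standard_out = out.splitlines()
    response.standard_error = err.splitlines()
    response.return_code = proc.returncode
    return response


# Portable example: run the current Python interpreter instead of a shell tool
resp = run([sys.executable, '-c', 'print("hello")'])
```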

View File

@ -0,0 +1,114 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import ConfigParser
from cafe.common.reporting import cclogging
class ConfigDataException(Exception):
pass
class NonExistentConfigPathError(Exception):
pass
class ConfigEnvironmentVariableError(Exception):
pass
#Decorator
def expected_values(*values):
def decorator(fn):
def wrapped(*args, **kwargs):
class UnexpectedConfigOptionValueError(Exception):
pass
value = fn(*args, **kwargs)
if value not in values:
raise UnexpectedConfigOptionValueError(value)
return value
return wrapped
return decorator
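A usage sketch of this validating-decorator pattern. The copy below is standalone (it raises a plain `ValueError` rather than the nested exception class) and the decorated accessor is hypothetical: the wrapper checks the wrapped function's return value against a whitelist and forwards its arguments through.

```python
def expected_values(*values):
    """Reject any return value not in the allowed set."""
    def decorator(fn):
        def wrapped(*args, **kwargs):
            value = fn(*args, **kwargs)
            if value not in values:
                raise ValueError(
                    'unexpected config value: {0}'.format(value))
            return value
        return wrapped
    return decorator


# Hypothetical config accessor guarded by the decorator
@expected_values('json', 'xml')
def serialization_format():
    return 'json'
```

Because validation lives in the wrapper, a config option that drifts outside its allowed values fails loudly at the first read instead of propagating silently into tests.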
class BaseConfigSectionInterface(object):
"""
Base class for building an interface for the data contained in a
SafeConfigParser object, as loaded from the config file as defined
by the engine's config file.
This is meant to be a generic interface so that, in the future,
get() and get_boolean() can be reimplemented to provide data from
a database.
def __init__(self, config_file_path, section_name):
self._log = cclogging.getLogger(
cclogging.get_object_namespace(self.__class__))
self._datasource = ConfigParser.SafeConfigParser()
self._section_name = section_name
#Check the path
if not os.path.exists(config_file_path):
msg = 'Could not verify the existence of config file at {0}'\
.format(config_file_path)
raise NonExistentConfigPathError(msg)
#Read the file in and turn it into a SafeConfigParser instance
try:
self._datasource.read(config_file_path)
except Exception as exception:
self._log.exception(exception)
raise exception
def get(self, item_name, default=None):
try:
return self._datasource.get(self._section_name, item_name)
except ConfigParser.NoOptionError as e:
self._log.error(str(e))
return default
except ConfigParser.NoSectionError as e:
self._log.error(str(e))
return default
def get_raw(self, item_name, default=None):
'''Performs a get() on the SafeConfigParser object without
interpolation'''
try:
return self._datasource.get(self._section_name, item_name,
raw=True)
except ConfigParser.NoOptionError as e:
self._log.error(str(e))
return default
except ConfigParser.NoSectionError as e:
self._log.error(str(e))
return default
def get_boolean(self, item_name, default=None):
try:
return self._datasource.getboolean(self._section_name,
item_name)
except ConfigParser.NoOptionError as e:
self._log.error(str(e))
return default
except ConfigParser.NoSectionError as e:
self._log.error(str(e))
return default
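The get-with-default behavior of this interface can be sketched without the cafe package. The config content and section name below are hypothetical; the compat import covers both Python 2's `ConfigParser` (as used in the source) and Python 3's `configparser`, where `SafeConfigParser` was deprecated and later removed.

```python
try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2, as in the source
import os
import tempfile

# Write a hypothetical engine.config to a temp directory
config_path = os.path.join(tempfile.mkdtemp(), 'engine.config')
with open(config_path, 'w') as f:
    f.write('[CCTNG_ENGINE]\nlog_directory=/tmp/cafe/logs\n')

# SafeConfigParser was removed in Python 3.12; fall back to ConfigParser
Parser = getattr(configparser, 'SafeConfigParser', configparser.ConfigParser)
parser = Parser()
parser.read(config_path)


def get(parser, section, item_name, default=None):
    """Mirror of BaseConfigSectionInterface.get(): default on any miss."""
    try:
        return parser.get(section, item_name)
    except (configparser.NoOptionError, configparser.NoSectionError):
        return default


value = get(parser, 'CCTNG_ENGINE', 'log_directory')
missing = get(parser, 'CCTNG_ENGINE', 'no_such_option', default='fallback')
```

Returning the caller-supplied default on both a missing option and a missing section keeps test code free of try/except blocks around every config read.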

23
cafe/engine/provider.py Normal file
View File

@ -0,0 +1,23 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from cafe.common.reporting import cclogging
class BaseProvider(object):
def __init__(self):
self._log = cclogging.getLogger(
cclogging.get_object_namespace(self.__class__))

3
pip-requires Normal file
View File

@ -0,0 +1,3 @@
#decorator
#paramiko
#requests<1.0

138
setup.py Normal file
View File

@ -0,0 +1,138 @@
"""
Copyright 2013 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import sys
import pwd
import grp
import cafe
try:
from setuptools import setup, find_packages
except ImportError:
from distutils.core import setup, find_packages
if sys.argv[-1] == 'publish':
os.system('python setup.py sdist upload')
sys.exit()
requires = open('pip-requires').readlines()
''' @todo: entry point should be read from a configuration and not hard coded
to the unittest driver's runner '''
setup(
name='cafe',
version=cafe.__version__,
description='The Common Automation Framework Engine',
long_description='{0}\n\n{1}'.format(
open('README.md').read(),
open('HISTORY.md').read()),
author='Rackspace Cloud QE',
author_email='cloud-cafe@lists.rackspace.com',
url='http://rackspace.com',
packages=find_packages(exclude=[]),
package_data={'': ['LICENSE', 'NOTICE']},
package_dir={'cafe': 'cafe'},
include_package_data=True,
install_requires=requires,
license=open('LICENSE').read(),
zip_safe=False,
#https://the-hitchhikers-guide-to-packaging.readthedocs.org/en/latest/specification.html
classifiers=(
'Development Status :: 1 - Planning',
'Intended Audience :: Developers',
'Natural Language :: English',
'License :: Other/Proprietary License',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
),
entry_points = {
'console_scripts':
['cafe-runner = cafe.drivers.unittest.runner:'
'entry_point']}
)
''' @todo: need to clean this up or do it with puppet/chef '''
# Default Config Options
root_dir = "{0}/.cloudcafe".format(os.path.expanduser("~"))
log_dir = "{0}/logs".format(root_dir)
data_dir = "{0}/data".format(root_dir)
temp_dir = "{0}/temp".format(root_dir)
config_dir = "{0}/configs".format(root_dir)
use_verbose_logging = "False"
# Copy over the default configurations
if os.path.exists("~install"):
os.remove("~install")
# Report
print('\n'.join(["\t\t ( (",
"\t\t ) )",
"\t\t ......... ",
"\t\t | |___ ",
"\t\t | |_ |",
"\t\t | :-) |_| |",
"\t\t | |___|",
"\t\t |_______|",
"\t\t === CAFE Core ==="]))
print("========================================================")
print("CAFE Core installed with the options:")
print("Config File: {0}/engine.config".format(config_dir))
print("log_directory={0}".format(log_dir))
print("data_directory={0}".format(data_dir))
print("temp_directory={0}".format(temp_dir))
print("use_verbose_logging={0}".format(use_verbose_logging))
print("========================================================")
else:
# State file
temp = open("~install", "w")
temp.close()
''' @todo: This is Mac/Linux only '''
# get who really executed this
sudo_user = os.getenv("SUDO_USER")
uid = pwd.getpwnam(sudo_user).pw_uid
gid = pwd.getpwnam(sudo_user).pw_gid
# Build Default directories
if not os.path.exists(root_dir):
os.makedirs(root_dir)
os.chown(root_dir, uid, gid)
if not os.path.exists(log_dir):
os.makedirs(log_dir)
os.chown(log_dir, uid, gid)
if not os.path.exists(data_dir):
os.makedirs(data_dir)
os.chown(data_dir, uid, gid)
if not os.path.exists(temp_dir):
os.makedirs(temp_dir)
os.chown(temp_dir, uid, gid)
if not os.path.exists(config_dir):
os.makedirs(config_dir)
os.chown(config_dir, uid, gid)
# Build the default configuration file
if not os.path.exists("{0}/engine.config".format(config_dir)):
config = open("{0}/engine.config".format(config_dir), "w")
config.write("[CCTNG_ENGINE]\n")
config.write("log_directory={0}\n".format(log_dir))
config.write("data_directory={0}\n".format(data_dir))
config.write("temp_directory={0}\n".format(temp_dir))
config.write("use_verbose_logging={0}\n".format(use_verbose_logging))
config.close()
os.chown("{0}/engine.config".format(config_dir), uid, gid)