Merge "Get rid of nose and Proboscis"

This commit is contained in:
Zuul 2024-03-29 08:26:04 +00:00 committed by Gerrit Code Review
commit a2775ec2aa
101 changed files with 7 additions and 24209 deletions


@ -12,6 +12,10 @@ However, Trove team is not going to migrate all the existing functional tests to
Since Victoria, the upstream CI jobs have kept failing because of the poor performance of the CI devstack host (a virtual machine). Trove project contributors should therefore verify by themselves that any proposed patch passes both the functional tests and the Trove tempest tests; the code reviewer may ask for the test results.
.. note::
Since Caracal, functional tests have been removed from the Trove project.
Install DevStack
----------------


@ -1,42 +0,0 @@
#!/usr/bin/env python
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import argparse
import os
import sys
import run_tests
def import_tests():
    from trove.tests.examples import snippets
    snippets.monkey_patch_uuid_and_date()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Generate Example Snippets')
    parser.add_argument('--fix-examples', action='store_true',
                        help='Fix the examples rather than failing tests.')
    args = parser.parse_args()

    if args.fix_examples:
        os.environ['TESTS_FIX_EXAMPLES'] = 'True'
        # Remove the '--fix-examples' argument from sys.argv as it is not a
        # valid argument in the run_tests module.
        sys.argv.pop(sys.argv.index('--fix-examples'))

    run_tests.main(import_tests)


@ -1,230 +0,0 @@
# Trove integration script - trovestack
## Steps to setup environment
Install a fresh Ubuntu 22.04 (jammy) image. We suggest creating a development virtual machine using the image.
1. Login to the machine as root
1. Make sure we have git installed
```
# apt-get update
# apt-get install git-core -y
```
1. Add a user named ubuntu if you do not already have one:
```
# adduser ubuntu
# visudo
```
Add this line to the file below the root user
ubuntu ALL=(ALL:ALL) ALL
Or use this if you don't want to type your password to sudo a command:
ubuntu ALL=(ALL) NOPASSWD: ALL
If /dev/pts/0 does not have read/write permissions for your user, run
# chmod 666 /dev/pts/0
> Note that this number can change; if you cannot connect to the screen session, the corresponding /dev/pts/# needs its mode changed as above.
1. Login with ubuntu and download the Trove code.
```shell
# su ubuntu
$ mkdir -p /opt/stack
$ cd /opt/stack
```
> Note that it is important that you clone the repository
here. This is a change from the earlier trove-integration where
you could clone trove-integration anywhere you wanted (like HOME)
and trove would get cloned for you in the right place. Since
trovestack is now in the trove repository, if you wish to test
changes that you have made to trove, it is advisable for you to
have your trove repository in /opt/stack to avoid another trove
repository being cloned for you.
1. Clone this repo and go into the scripts directory
```
$ git clone https://github.com/openstack/trove.git
$ cd trove/integration/scripts/
```
## Running trovestack
Run this to get the command list with a short description of each
$ ./trovestack
### Install Trove
*This brings up trove services and initializes the trove database.*
$ ./trovestack install
### Connecting to the screen session
$ screen -x stack
If that command fails with the error
Cannot open your terminal '/dev/pts/1'
then chmod the corresponding /dev/pts/#:
$ chmod 660 /dev/pts/1
### Navigate the log screens
To produce the list of screens that you can scroll through and select
ctrl+a then "
An example of screen list:
```
..... (full list omitted)
20 c-vol
21 h-eng
22 h-api
23 h-api-cfn
24 h-api-cw
25 tr-api
26 tr-tmgr
27 tr-cond
```
Alternatively, to go directly to a specific screen window
ctrl+a then '
then enter a number (like 25) or name (like tr-api)
### Detach from the screen session
Allows the services to continue running in the background
ctrl+a then d
### Kick start the build/test-init/build-image commands
*Add mysql as a parameter to set build and add the mysql guest image. This will also populate /etc/trove/test.conf with appropriate values for running the integration tests.*
$ ./trovestack kick-start mysql
### Initialize the test configuration and set up test users (overwrites /etc/trove/test.conf)
$ ./trovestack test-init
### Build guest agent image
The Trove guest agent image can be created using the `trovestack` script
with the following command:
```shell
PATH_DEVSTACK_OUTPUT=/opt/stack \
./trovestack build-image \
${datastore_type} \
${guest_os} \
${guest_os_release} \
${dev_mode}
```
- If the script is running as a part of DevStack, the variable
  `PATH_DEVSTACK_OUTPUT` is set automatically.
- If `dev_mode=false`, the Trove guest agent code is injected into the
  image at build time.
- If `dev_mode=true`, no Trove code is injected into the guest image. The guest
  agent will download the Trove code during service initialization.
For example, to build a MySQL image for the Ubuntu jammy operating system:
```shell
$ ./trovestack build-image mysql ubuntu jammy false
```
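A `dev_mode=true` build differs only in the last argument. The sketch below merely assembles the command line with illustrative values and echoes it rather than running `trovestack`:

```shell
# Illustrative values; with dev_mode=true the guest agent downloads the
# Trove code at service start instead of having it baked into the image.
datastore_type=mysql
guest_os=ubuntu
guest_os_release=jammy
dev_mode=true
# Shown, not executed here:
echo "./trovestack build-image $datastore_type $guest_os $guest_os_release $dev_mode"
```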
### Running Integration Tests
Check the values in /etc/trove/test.conf in case it has been re-initialized prior to running the tests. For example, from the previous mysql steps:
"dbaas_datastore": "%datastore_type%",
"dbaas_datastore_version": "%datastore_version%",
should be:
"dbaas_datastore": "mysql",
"dbaas_datastore_version": "5.5",
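If the placeholders are still present, substituting them in place is enough. A minimal sketch on a scratch copy (the `/tmp` path is illustrative, not the real `/etc/trove/test.conf`):

```shell
# Write a scratch file containing the two placeholder lines shown above.
printf '"dbaas_datastore": "%%datastore_type%%",\n"dbaas_datastore_version": "%%datastore_version%%",\n' \
    > /tmp/test.conf.example
# Substitute the placeholders with the values from the mysql kick-start.
sed -i 's/%datastore_type%/mysql/; s/%datastore_version%/5.5/' /tmp/test.conf.example
cat /tmp/test.conf.example
```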
Once Trove is running on DevStack, you can run the integration tests locally.
$ ./trovestack int-tests
This runs all of the blackbox tests by default. Use the `--group` option to run a different group:
$ ./trovestack int-tests --group=simple_blackbox
You can also specify the `TESTS_USE_INSTANCE_ID` environment variable to have the integration tests use an existing instance rather than creating a new one.
$ TESTS_DO_NOT_DELETE_INSTANCE=True TESTS_USE_INSTANCE_ID=INSTANCE_UUID ./trovestack int-tests --group=simple_blackbox
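Note the shape of that invocation: the variables are one-shot environment assignments prefixed to the command, so they apply only to that run. A self-contained sketch of the pattern (`INSTANCE_UUID` stays a placeholder):

```shell
# Assignments before a command are visible only to that command.
TESTS_DO_NOT_DELETE_INSTANCE=True TESTS_USE_INSTANCE_ID=INSTANCE_UUID \
    sh -c 'echo "instance: $TESTS_USE_INSTANCE_ID (keep: $TESTS_DO_NOT_DELETE_INSTANCE)"'
# Afterwards the calling shell does not have them set.
echo "${TESTS_USE_INSTANCE_ID:-unset}"
```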
## Reset your environment
### Stop all the services running in the screens and refresh the environment
$ killall -9 screen
$ screen -wipe
$ RECLONE=yes ./trovestack install
$ ./trovestack kick-start mysql
or
$ RECLONE=yes ./trovestack install
$ ./trovestack test-init
$ ./trovestack build-image mysql
## Recover after reboot
If the VM was restarted, the process for bringing up OpenStack and Trove is quite simple:
$ ./trovestack start-deps
$ ./trovestack start
Use screen to verify all modules have started without error:
$ screen -r stack
## VMware Fusion 5 speed improvement
Running Ubuntu with KVM or Qemu can be extremely slow without certain optimizations. The following are some VMware settings that can improve performance and may also apply to other virtualization platforms.
1. Shutdown the Ubuntu VM.
2. Go to VM Settings -> Processors & Memory -> Advanced Options.
Check the "Enable hypervisor applications in this virtual machine"
3. Go to VM Settings -> Advanced.
Set the "Troubleshooting" option to "None"
4. After setting these create a snapshot so that in cases where things break down you can revert to a clean snapshot.
5. Boot up the VM and run the `./trovestack install`
6. To verify that KVM is setup properly after the devstack installation you can run these commands.
```
ubuntu@ubuntu:~$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
```
## VMware Workstation performance improvements
In recent versions of VMware, you can get much better performance if you enable the right virtualization options. For example, in VMware Workstation (found in version 10.0.2), click on VM->Settings->Processor.
You should see a box of "Virtualization Engine" options that can be changed only when the VM is shutdown.
Make sure you check "Virtualize Intel VT-x/EPT or AMD-V/RVI" and "Virtualize CPU performance counters". Set the preferred mode to "Automatic".
Then boot the VM and ensure that the proper virtualization is enabled.
```
ubuntu@ubuntu:~$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
```


@ -1,44 +0,0 @@
{
"report_directory":"rdli-test-report",
"white_box":false,
"test_mgmt":false,
"use_local_ovz":false,
"use_venv":false,
"glance_code_root":"/opt/stack/glance",
"glance_api_conf":"/vagrant/conf/glance-api.conf",
"glance_reg_conf":"/vagrant/conf/glance-reg.conf",
"glance_images_directory": "/glance_images",
"glance_image": "fakey_fakerson.tar.gz",
"instance_flavor_name":"m1.rd-tiny",
"instance_bigger_flavor_name":"m1.rd-smaller",
"nova_code_root":"/opt/stack/nova",
"nova_conf":"/home/vagrant/nova.conf",
"keystone_code_root":"/opt/stack/keystone",
"keystone_conf":"/etc/keystone/keystone.conf",
"trove_code_root":"/opt/stack/trove",
"trove_conf":"/tmp/trove.conf",
"trove_version":"v1.0",
"trove_api_updated":"2012-08-01T00:00:00Z",
"trove_must_have_volume":false,
"trove_can_have_volume":true,
"trove_main_instance_has_volume": true,
"trove_max_accepted_volume_size": 1000,
"trove_max_instances_per_user": 55,
"trove_max_volumes_per_user": 100,
"use_reaper":false,
"root_removed_from_instance_api": true,
"root_timestamp_disabled": false,
"openvz_disabled": false,
"management_api_disabled": true,
"dbaas_image": 1,
"dns_driver":"trove.dns.rsdns.driver.RsDnsDriver",
"dns_instance_entry_factory":"trove.dns.rsdns.driver.RsDnsInstanceEntryFactory",
"databases_page_size": 20,
"instances_page_size": 20,
"users_page_size": 20,
"rabbit_runs_locally":false,
"sentinel": null
}


@ -1,245 +0,0 @@
#!/usr/bin/env python
#
# # Copyright (c) 2011 OpenStack, LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Runs the tests.
There are a few initialization issues to deal with.
The first is flags, which must be initialized before any imports. The test
configuration has the same problem (it was based on flags back when the tests
resided outside of the Nova code).
The command line is picked apart so that Nose won't see commands it isn't
compatible with, such as "--flagfile" or "--group".
This script imports all other tests to make them known to Proboscis before
passing control to proboscis.TestProgram, which itself calls nose, which then
calls unittest.TestProgram and exits.
If "repl" is a command line argument, then the original stdout and stderr are
saved and sys.exit is neutralized so that unittest.TestProgram will not exit;
instead sys.stdout and stderr are restored so that interactive mode can
be used.
"""
import atexit
import gettext
import os
import sys
import proboscis
from nose import config
from nose import core
from tests.colorizer import NovaTestRunner
if os.environ.get("PYDEV_DEBUG", "False") == 'True':
    from pydev import pydevd
    pydevd.settrace('10.0.2.2', port=7864, stdoutToServer=True,
                    stderrToServer=True)


def add_support_for_localization():
    """Adds support for localization in the logging.

    If ../nova/__init__.py exists, add ../ to Python search path, so that
    it will override what happens to be installed in
    /usr/(local/)lib/python...
    """
    path = os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)
    possible_topdir = os.path.normpath(path)
    if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
        sys.path.insert(0, possible_topdir)

    gettext.install('nova')
MAIN_RUNNER = None
def initialize_rdl_config(config_file):
    from trove.common import cfg
    from oslo_log import log
    from trove.db import get_db_api
    conf = cfg.CONF
    cfg.parse_args(['int_tests'], default_config_files=[config_file])
    log.setup(conf, None)
    try:
        get_db_api().configure_db(conf)
        conf_file = conf.find_file(conf.api_paste_config)
    except RuntimeError as error:
        import traceback
        print(traceback.format_exc())
        sys.exit("ERROR: %s" % error)
def _clean_up():
    """Shuts down any services this program has started and shows results."""
    from tests.util import report
    report.update()
    if MAIN_RUNNER is not None:
        MAIN_RUNNER.on_exit()
    from tests.util.services import get_running_services
    for service in get_running_services():
        sys.stderr.write("Stopping service ")
        for c in service.cmd:
            sys.stderr.write(c + " ")
        sys.stderr.write("...\n\r")
        service.stop()
def import_tests():
    # The DNS stuff is problematic. Not loading the other tests allows us to
    # run its functional tests only.
    ADD_DOMAINS = os.environ.get("ADD_DOMAINS", "False") == 'True'
    if not ADD_DOMAINS:
        # F401 unused imports needed for tox tests
        from trove.tests.api import backups  # noqa
        from trove.tests.api import configurations  # noqa
        from trove.tests.api import databases  # noqa
        from trove.tests.api import datastores  # noqa
        from trove.tests.api import instances as rd_instances  # noqa
        from trove.tests.api import instances_actions as acts  # noqa
        from trove.tests.api import instances_delete  # noqa
        from trove.tests.api import instances_resize  # noqa
        from trove.tests.api import limits  # noqa
        from trove.tests.api.mgmt import datastore_versions  # noqa
        from trove.tests.api.mgmt import instances_actions as mgmt_acts  # noqa
        from trove.tests.api import replication  # noqa
        from trove.tests.api import root  # noqa
        from trove.tests.api import user_access  # noqa
        from trove.tests.api import users  # noqa
        from trove.tests.api import versions  # noqa
        from trove.tests.db import migrations  # noqa

    # Groups that exist as core int-tests are registered from the
    # trove.tests.int_tests module
    from trove.tests import int_tests
def run_main(test_importer):
    add_support_for_localization()

    # Strip non-nose arguments out before passing this to nosetests
    repl = False
    nose_args = []
    conf_file = "~/test.conf"
    show_elapsed = True
    groups = []
    print("RUNNING TEST ARGS : " + str(sys.argv))
    extra_test_conf_lines = []
    rdl_config_file = None
    nova_flag_file = None
    index = 0
    while index < len(sys.argv):
        arg = sys.argv[index]
        if arg[:2] == "-i" or arg == '--repl':
            repl = True
        elif arg[:7] == "--conf=":
            conf_file = os.path.expanduser(arg[7:])
            print("Setting TEST_CONF to " + conf_file)
            os.environ["TEST_CONF"] = conf_file
        elif arg[:8] == "--group=":
            groups.append(arg[8:])
        elif arg == "--test-config":
            if index >= len(sys.argv) - 1:
                print('Expected an argument to follow "--test-conf".')
                sys.exit()
            conf_line = sys.argv[index + 1]
            extra_test_conf_lines.append(conf_line)
        elif arg[:11] == "--flagfile=":
            pass
        elif arg[:14] == "--config-file=":
            rdl_config_file = arg[14:]
        elif arg[:13] == "--nova-flags=":
            nova_flag_file = arg[13:]
        elif arg.startswith('--hide-elapsed'):
            show_elapsed = False
        else:
            nose_args.append(arg)
        index += 1

    # Many of the test decorators depend on configuration values, so before
    # we start importing modules we have to load the test config followed by
    # the flag files.
    from trove.tests.config import CONFIG

    # Find config file.
    if "TEST_CONF" not in os.environ:
        raise RuntimeError("Please define an environment variable named " +
                           "TEST_CONF with the location to a conf file.")
    file_path = os.path.expanduser(os.environ["TEST_CONF"])
    if not os.path.exists(file_path):
        raise RuntimeError("Could not find TEST_CONF at " + file_path + ".")
    # Load config file and then any lines we read from the arguments.
    CONFIG.load_from_file(file_path)
    for line in extra_test_conf_lines:
        CONFIG.load_from_line(line)
    if CONFIG.white_box:  # If white-box testing, set up the flags.
        # Handle loading up RDL's config file madness.
        initialize_rdl_config(rdl_config_file)

    # Set up the report, and print out how we're running the tests.
    from tests.util import report
    from datetime import datetime
    report.log("Trove Integration Tests, %s" % datetime.now())
    report.log("Invoked via command: " + str(sys.argv))
    report.log("Groups = " + str(groups))
    report.log("Test conf file = %s" % os.environ["TEST_CONF"])
    if CONFIG.white_box:
        report.log("")
        report.log("Test config file = %s" % rdl_config_file)
    report.log("")
    report.log("sys.path:")
    for path in sys.path:
        report.log("\t%s" % path)

    # Now that all configurations are loaded it's time to import everything.
    test_importer()

    atexit.register(_clean_up)

    c = config.Config(stream=sys.stdout,
                      env=os.environ,
                      verbosity=3,
                      plugins=core.DefaultPluginManager())
    runner = NovaTestRunner(stream=c.stream,
                            verbosity=c.verbosity,
                            config=c,
                            show_elapsed=show_elapsed,
                            known_bugs=CONFIG.known_bugs)
    # Assign to the module-level runner so _clean_up can reach it.
    global MAIN_RUNNER
    MAIN_RUNNER = runner

    if repl:
        # Turn off the following "feature" of the unittest module in case
        # we want to start a REPL.
        sys.exit = lambda x: None

    proboscis.TestProgram(argv=nose_args, groups=groups, config=c,
                          testRunner=MAIN_RUNNER).run_and_exit()
    sys.stdout = sys.__stdout__
    sys.stderr = sys.__stderr__


@ -1,95 +0,0 @@
{
"include-files":["core.test.conf"],
"fake_mode": true,
"dbaas_url":"http://localhost:8779/v1.0",
"version_url":"http://localhost:8779",
"nova_auth_url":"http://localhost:8779/v1.0/auth",
"trove_auth_url":"http://localhost:8779/v1.0/auth",
"trove_client_insecure":false,
"auth_strategy":"fake",
"trove_version":"v1.0",
"trove_api_updated":"2012-08-01T00:00:00Z",
"trove_dns_support":false,
"trove_ip_support":false,
"nova_client": null,
"users": [
{
"auth_user":"admin",
"auth_key":"password",
"tenant":"admin-1000",
"requirements": {
"is_admin":true,
"services": ["trove"]
}
},
{
"auth_user":"jsmith",
"auth_key":"password",
"tenant":"2500",
"requirements": {
"is_admin":false,
"services": ["trove"]
}
},
{
"auth_user":"hub_cap",
"auth_key":"password",
"tenant":"3000",
"requirements": {
"is_admin":false,
"services": ["trove"]
}
}
],
"flavors": [
{
"id": 1,
"name": "m1.tiny",
"ram": 512
},
{
"id": 2,
"name": "m1.small",
"ram": 2048
},
{
"id": 3,
"name": "m1.medium",
"ram": 4096
},
{
"id": 4,
"name": "m1.large",
"ram": 8192
},
{
"id": 5,
"name": "m1.xlarge",
"ram": 16384
},
{
"id": 6,
"name": "tinier",
"ram": 506
},
{
"id": 7,
"name": "m1.rd-tiny",
"ram": 512
},
{
"id": 8,
"name": "m1.rd-smaller",
"ram": 768
}
],
"sentinel": null
}


@ -1,25 +0,0 @@
# Copyright (c) 2011 OpenStack, LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`tests` -- Integration / Functional Tests for Nova
===================================
.. automodule:: tests
:platform: Unix
:synopsis: Tests for Nova.
.. moduleauthor:: Nirmal Ranganathan <nirmal.ranganathan@rackspace.com>
.. moduleauthor:: Tim Simpson <tim.simpson@rackspace.com>
"""


@ -1,445 +0,0 @@
#!/usr/bin/env python
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Colorizer Code is borrowed from Twisted:
# Copyright (c) 2001-2010 Twisted Matrix Laboratories.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""Unittest runner for Nova.
To run all tests
python run_tests.py
To run a single test:
python run_tests.py test_compute:ComputeTestCase.test_run_terminate
To run a single test module:
python run_tests.py test_compute
or
python run_tests.py api.test_wsgi
"""
import gettext
import heapq
import logging
import os
import unittest
import sys
import time
gettext.install('nova')
from nose import config
from nose import core
from nose import result
from proboscis import case
from proboscis import SkipTest
class _AnsiColorizer(object):
    """
    A colorizer is an object that loosely wraps around a stream, allowing
    callers to write text to the stream in a particular color.

    Colorizer classes must implement C{supported()} and C{write(text, color)}.
    """
    _colors = dict(black=30, red=31, green=32, yellow=33,
                   blue=34, magenta=35, cyan=36, white=37)

    def __init__(self, stream):
        self.stream = stream

    def supported(cls, stream=sys.stdout):
        """
        A class method that returns True if the current platform supports
        coloring terminal output using this method. Returns False otherwise.
        """
        if not stream.isatty():
            return False  # auto color only on TTYs
        try:
            import curses
        except ImportError:
            return False
        else:
            try:
                try:
                    return curses.tigetnum("colors") > 2
                except curses.error:
                    curses.setupterm()
                    return curses.tigetnum("colors") > 2
            except:
                raise
                # guess false in case of error
                return False
    supported = classmethod(supported)

    def write(self, text, color):
        """
        Write the given text to the stream in the given color.

        @param text: Text to be written to the stream.

        @param color: A string label for a color. e.g. 'red', 'white'.
        """
        color = self._colors[color]
        self.stream.write('\x1b[%s;1m%s\x1b[0m' % (color, text))


class _Win32Colorizer(object):
    """
    See _AnsiColorizer docstring.
    """
    def __init__(self, stream):
        from win32console import GetStdHandle, STD_OUT_HANDLE, \
            FOREGROUND_RED, FOREGROUND_BLUE, FOREGROUND_GREEN, \
            FOREGROUND_INTENSITY
        red, green, blue, bold = (FOREGROUND_RED, FOREGROUND_GREEN,
                                  FOREGROUND_BLUE, FOREGROUND_INTENSITY)
        self.stream = stream
        self.screenBuffer = GetStdHandle(STD_OUT_HANDLE)
        self._colors = {
            'normal': red | green | blue,
            'red': red | bold,
            'green': green | bold,
            'blue': blue | bold,
            'yellow': red | green | bold,
            'magenta': red | blue | bold,
            'cyan': green | blue | bold,
            'white': red | green | blue | bold
        }

    def supported(cls, stream=sys.stdout):
        try:
            import win32console
            screenBuffer = win32console.GetStdHandle(
                win32console.STD_OUT_HANDLE)
        except ImportError:
            return False
        import pywintypes
        try:
            screenBuffer.SetConsoleTextAttribute(
                win32console.FOREGROUND_RED |
                win32console.FOREGROUND_GREEN |
                win32console.FOREGROUND_BLUE)
        except pywintypes.error:
            return False
        else:
            return True
    supported = classmethod(supported)

    def write(self, text, color):
        color = self._colors[color]
        self.screenBuffer.SetConsoleTextAttribute(color)
        self.stream.write(text)
        self.screenBuffer.SetConsoleTextAttribute(self._colors['normal'])


class _NullColorizer(object):
    """
    See _AnsiColorizer docstring.
    """
    def __init__(self, stream):
        self.stream = stream

    def supported(cls, stream=sys.stdout):
        return True
    supported = classmethod(supported)

    def write(self, text, color):
        self.stream.write(text)


def get_elapsed_time_color(elapsed_time):
    if elapsed_time > 1.0:
        return 'yellow'
    elif elapsed_time > 0.25:
        return 'cyan'
    else:
        return 'green'
class NovaTestResult(case.TestResult):
    def __init__(self, *args, **kw):
        self.show_elapsed = kw.pop('show_elapsed')
        self.known_bugs = kw.pop('known_bugs', {})
        super(NovaTestResult, self).__init__(*args, **kw)
        self.num_slow_tests = 5
        self.slow_tests = []  # this is a fixed-sized heap
        self._last_case = None
        self.colorizer = None
        # NOTE(vish): reset stdout for the terminal check
        stdout = sys.stdout
        sys.stdout = sys.__stdout__
        for colorizer in [_Win32Colorizer, _AnsiColorizer, _NullColorizer]:
            if colorizer.supported():
                self.colorizer = colorizer(self.stream)
                break
        sys.stdout = stdout
        # NOTE(lorinh): Initialize start_time in case a sqlalchemy-migrate
        # error results in it failing to be initialized later. Otherwise,
        # _handleElapsedTime will fail, causing the wrong error message to
        # be outputted.
        self.start_time = time.time()

    def _intercept_known_bugs(self, test, err):
        name = str(test)
        excuse = self.known_bugs.get(name, None)
        if excuse:
            tracker_id, error_string = excuse
            if error_string in str(err[1]):
                skip = SkipTest("KNOWN BUG: %s\n%s"
                                % (tracker_id, str(err[1])))
                self.onError(test)
                super(NovaTestResult, self).addSkip(test, skip)
            else:
                result = (RuntimeError, RuntimeError(
                    'Test "%s" contains known bug %s.\n'
                    'Expected the following error string:\n%s\n'
                    'What was seen was the following:\n%s\n'
                    'If the bug is no longer happening, please change '
                    'the test config.'
                    % (name, tracker_id, error_string, str(err))), None)
                self.onError(test)
                super(NovaTestResult, self).addError(test, result)
            return True
        return False

    def getDescription(self, test):
        return str(test)

    def _handleElapsedTime(self, test):
        self.elapsed_time = time.time() - self.start_time
        item = (self.elapsed_time, test)
        # Record only the n-slowest tests using heap
        if len(self.slow_tests) >= self.num_slow_tests:
            heapq.heappushpop(self.slow_tests, item)
        else:
            heapq.heappush(self.slow_tests, item)

    def _writeElapsedTime(self, test):
        color = get_elapsed_time_color(self.elapsed_time)
        self.colorizer.write("  %.2f" % self.elapsed_time, color)

    def _writeResult(self, test, long_result, color, short_result, success):
        if self.showAll:
            self.colorizer.write(long_result, color)
            if self.show_elapsed and success:
                self._writeElapsedTime(test)
            self.stream.writeln()
        elif self.dots:
            self.stream.write(short_result)
            self.stream.flush()

    # NOTE(vish): copied from unittest with edit to add color
    def addSuccess(self, test):
        if self._intercept_known_bugs(test, None):
            return
        unittest.TestResult.addSuccess(self, test)
        self._handleElapsedTime(test)
        self._writeResult(test, 'OK', 'green', '.', True)

    # NOTE(vish): copied from unittest with edit to add color
    def addFailure(self, test, err):
        if self._intercept_known_bugs(test, err):
            return
        self.onError(test)
        unittest.TestResult.addFailure(self, test, err)
        self._handleElapsedTime(test)
        self._writeResult(test, 'FAIL', 'red', 'F', False)

    # NOTE(vish): copied from nose with edit to add color
    def addError(self, test, err):
        """Overrides normal addError to add support for
        errorClasses. If the exception is a registered class, the
        error will be added to the list for that class, not errors.
        """
        if self._intercept_known_bugs(test, err):
            return
        self.onError(test)
        self._handleElapsedTime(test)
        stream = getattr(self, 'stream', None)
        ec, ev, tb = err
        try:
            exc_info = self._exc_info_to_string(err, test)
        except TypeError:
            # 2.3 compat
            exc_info = self._exc_info_to_string(err)
        for cls, (storage, label, isfail) in self.errorClasses.items():
            if result.isclass(ec) and issubclass(ec, cls):
                if isfail:
                    test.passed = False
                storage.append((test, exc_info))
                # Might get patched into a streamless result
                if stream is not None:
                    if self.showAll:
                        message = [label]
                        detail = result._exception_detail(err[1])
                        if detail:
                            message.append(detail)
                        stream.writeln(": ".join(message))
                    elif self.dots:
                        stream.write(label[:1])
                return
        self.errors.append((test, exc_info))
        test.passed = False
        if stream is not None:
            self._writeResult(test, 'ERROR', 'red', 'E', False)

    @staticmethod
    def get_doc(cls_or_func):
        """Grabs the doc abbreviated doc string."""
        try:
            return cls_or_func.__doc__.split("\n")[0].strip()
        except (AttributeError, IndexError):
            return None

    def startTest(self, test):
        unittest.TestResult.startTest(self, test)
        self.start_time = time.time()
        test_name = None
        try:
            entry = test.test.__proboscis_case__.entry
            if entry.method:
                current_class = entry.method.im_class
                test_name = self.get_doc(entry.home) or entry.home.__name__
            else:
                current_class = entry.home
        except AttributeError:
            current_class = test.test.__class__

        if self.showAll:
            if current_class.__name__ != self._last_case:
                self.stream.writeln(current_class.__name__)
                self._last_case = current_class.__name__
                try:
                    doc = self.get_doc(current_class)
                except (AttributeError, IndexError):
                    doc = None
                if doc:
                    self.stream.writeln(' ' + doc)

            if not test_name:
                if hasattr(test.test, 'shortDescription'):
                    test_name = test.test.shortDescription()
                if not test_name:
                    test_name = test.test._testMethodName
            self.stream.write('\t%s' % str(test_name).ljust(60))
            self.stream.flush()
class NovaTestRunner(core.TextTestRunner):
    def __init__(self, *args, **kwargs):
        self.show_elapsed = kwargs.pop('show_elapsed')
        self.known_bugs = kwargs.pop('known_bugs', {})
        self.__result = None
        self.__finished = False
        self.__start_time = None
        super(NovaTestRunner, self).__init__(*args, **kwargs)

    def _makeResult(self):
        self.__result = NovaTestResult(
            self.stream,
            self.descriptions,
            self.verbosity,
            self.config,
            show_elapsed=self.show_elapsed,
            known_bugs=self.known_bugs)
        self.__start_time = time.time()
        return self.__result

    def _writeSlowTests(self, result_):
        # Pare out 'fast' tests
        slow_tests = [item for item in result_.slow_tests
                      if get_elapsed_time_color(item[0]) != 'green']
        if slow_tests:
            slow_total_time = sum(item[0] for item in slow_tests)
            self.stream.writeln("Slowest %i tests took %.2f secs:"
                                % (len(slow_tests), slow_total_time))
            for elapsed_time, test in sorted(slow_tests, reverse=True):
                time_str = "%.2f" % elapsed_time
                self.stream.writeln("    %s %s" % (time_str.ljust(10), test))

    def on_exit(self):
        if self.__result is None:
            print("Exiting before tests even started.")
        else:
            if not self.__finished:
                msg = "Tests aborted, trying to print available results..."
                print(msg)
                stop_time = time.time()
                self.__result.printErrors()
                self.__result.printSummary(self.__start_time, stop_time)
                self.config.plugins.finalize(self.__result)
                if self.show_elapsed:
                    self._writeSlowTests(self.__result)

    def run(self, test):
        result_ = super(NovaTestRunner, self).run(test)
        if self.show_elapsed:
            self._writeSlowTests(result_)
        self.__finished = True
        return result_
if __name__ == '__main__':
logging.setup()
# If any argument looks like a test name but doesn't have "nova.tests" in
# front of it, automatically add that so we don't have to type as much
show_elapsed = True
argv = []
test_fixture = os.getenv("UNITTEST_FIXTURE", "trove")
for x in sys.argv:
if x.startswith('test_'):
argv.append('%s.tests.%s' % (test_fixture, x))
elif x.startswith('--hide-elapsed'):
show_elapsed = False
else:
argv.append(x)
testdir = os.path.abspath(os.path.join(test_fixture, "tests"))
c = config.Config(stream=sys.stdout,
env=os.environ,
verbosity=3,
workingDir=testdir,
plugins=core.DefaultPluginManager())
runner = NovaTestRunner(stream=c.stream,
verbosity=c.verbosity,
config=c,
show_elapsed=show_elapsed)
sys.exit(not core.run(config=c, testRunner=runner, argv=argv))
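The slow-test report in `_writeSlowTests` above boils down to a pare-and-sort over `(elapsed, test)` pairs. A minimal sketch of that logic, with a plain numeric threshold standing in for the `get_elapsed_time_color()` colour cut-off (the threshold value is an assumption for illustration):

```python
def slowest_tests(results, threshold=2.0):
    """Return (elapsed_seconds, test_name) pairs at or above the
    threshold, slowest first -- the same shape as result_.slow_tests
    consumed by _writeSlowTests() above."""
    # Pare out the fast tests, then sort the remainder descending.
    slow = [item for item in results if item[0] >= threshold]
    return sorted(slow, reverse=True)
```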


@@ -1,63 +0,0 @@
# Copyright (c) 2011 OpenStack, LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from proboscis import test
from proboscis.asserts import fail
from tests.util.services import Service
from trove.tests.config import CONFIG
def dbaas_url():
return str(CONFIG.values.get("dbaas_url"))
def nova_url():
return str(CONFIG.values.get("nova_client")['url'])
class Daemon(object):
"""Starts a daemon."""
def __init__(self, alternate_path=None, conf_file_name=None,
extra_cmds=None, service_path_root=None, service_path=None):
# The path to the daemon bin if the other one doesn't work.
self.alternate_path = alternate_path
self.extra_cmds = extra_cmds or []
# The name of a test config value which points to a conf file.
self.conf_file_name = conf_file_name
# The name of a test config value, which is inserted into the service_path.
self.service_path_root = service_path_root
# The first path to the daemon bin we try.
self.service_path = service_path or "%s"
def run(self):
        # Print out everything to make debugging easier.
print("Looking for config value %s..." % self.service_path_root)
print(CONFIG.values[self.service_path_root])
path = self.service_path % CONFIG.values[self.service_path_root]
print("Path = %s" % path)
if not os.path.exists(path):
path = self.alternate_path
if path is None:
fail("Could not find path to %s" % self.service_path_root)
conf_path = str(CONFIG.values[self.conf_file_name])
cmds = CONFIG.python_cmd_list() + [path] + self.extra_cmds + \
[conf_path]
print("Running cmds: %s" % cmds)
self.service = Service(cmds)
if not self.service.is_service_alive():
self.service.start()
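`Daemon.run()` above assembles its argv as the configured Python command, the daemon path, any extra arguments, and finally the conf file path. A self-contained sketch of that assembly, with plain strings standing in for the `CONFIG` lookups (all values below are illustrative):

```python
def build_daemon_cmd(python_cmd, bin_path, extra_cmds, conf_path):
    """Mirror of the cmds expression in Daemon.run():
    CONFIG.python_cmd_list() + [path] + extra_cmds + [conf_path]."""
    return list(python_cmd) + [bin_path] + list(extra_cmds) + [conf_path]
```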


@@ -1,76 +0,0 @@
"""Creates a report for the test.
"""
import os
import shutil
from os import path
from trove.tests.config import CONFIG
USE_LOCAL_OVZ = CONFIG.use_local_ovz
class Reporter(object):
"""Saves the logs from a test run."""
def __init__(self, root_path):
self.root_path = root_path
if not path.exists(self.root_path):
os.mkdir(self.root_path)
for file in os.listdir(self.root_path):
if file.endswith(".log"):
os.remove(path.join(self.root_path, file))
def _find_all_instance_ids(self):
instances = []
if USE_LOCAL_OVZ:
for dir in os.listdir("/var/lib/vz/private"):
instances.append(dir)
return instances
def log(self, msg):
with open("%s/report.log" % self.root_path, 'a') as file:
file.write(str(msg) + "\n")
def _save_syslog(self):
try:
shutil.copyfile("/var/log/syslog", "host-syslog.log")
except (shutil.Error, IOError) as err:
self.log("ERROR logging syslog : %s" % (err))
def _update_instance(self, id):
root = "%s/%s" % (self.root_path, id)
def save_file(path, short_name):
if USE_LOCAL_OVZ:
try:
shutil.copyfile("/var/lib/vz/private/%s/%s" % (id, path),
"%s-%s.log" % (root, short_name))
except (shutil.Error, IOError) as err:
self.log("ERROR logging %s for instance id %s! : %s"
% (path, id, err))
else:
#TODO: Can we somehow capture these (maybe SSH to the VM)?
pass
save_file("/var/log/firstboot", "firstboot")
save_file("/var/log/syslog", "syslog")
save_file("/var/log/nova/guest.log", "nova-guest")
def _update_instances(self):
for id in self._find_all_instance_ids():
self._update_instance(id)
def update(self):
self._update_instances()
self._save_syslog()
REPORTER = Reporter(CONFIG.report_directory)
def log(msg):
REPORTER.log(msg)
def update():
REPORTER.update()


@@ -1,110 +0,0 @@
# Copyright (c) 2012 OpenStack, LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Test utility for RPC checks.
Functionality to check for rabbit here depends on having rabbit running on
the same machine as the tests, so that the rabbitmqctl commands will function.
The functionality is turned on or off by the test config "rabbit_runs_locally".
"""
import re
from trove.tests.config import CONFIG
from services import start_proc
if CONFIG.values.get('rabbit_runs_locally', False):
DIRECT_ACCESS = True
class Rabbit(object):
def declare_queue(self, topic):
"""Call this to declare a queue from Python."""
#from trove.rpc.impl_kombu import Connection
from trove.openstack.common.rpc import create_connection
with create_connection() as conn:
consumer = conn.declare_topic_consumer(topic=topic)
def get_queue_items(self, queue_name):
        """Determines if the queue exists and, if so, the message count.
        If the queue exists the return value is an integer; otherwise it
        is None.
Be careful because queue_name is used in a regex and can't have
any unescaped characters.
"""
proc = start_proc(["/usr/bin/sudo", "rabbitmqctl", "list_queues"],
shell=False)
for line in iter(proc.stdout.readline, ""):
print("LIST QUEUES:" + line)
m = re.search(r"%s\s+([0-9]+)" % queue_name, line)
if m:
return int(m.group(1))
return None
@property
def is_alive(self):
        """Calls list_queues; returns False if the broker reports an error."""
try:
stdout, stderr = self.run(0, "rabbitmqctl", "list_queues")
for lines in stdout, stderr:
for line in lines:
if "no_exists" in line:
return False
return True
except Exception:
return False
def reset(self):
out, err = self.run(0, "rabbitmqctl", "reset")
print(out)
print(err)
def run(self, check_exit_code, *cmd):
cmds = ["/usr/bin/sudo"] + list(cmd)
proc = start_proc(cmds)
lines = proc.stdout.readlines()
err_lines = proc.stderr.readlines()
return lines, err_lines
def start(self):
print("Calling rabbitmqctl start_app")
out = self.run(0, "rabbitmqctl", "start_app")
print(out)
out, err = self.run(0, "rabbitmqctl", "change_password", "guest",
CONFIG.values['rabbit_password'])
print(out)
print(err)
def stop(self):
print("Calling rabbitmqctl stop_app")
out = self.run(0, "rabbitmqctl", "stop_app")
print(out)
else:
DIRECT_ACCESS = False
class Rabbit(object):
def __init__(self):
raise RuntimeError("rabbit_runs_locally is set to False in the "
"test config, so this test cannot be run.")
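`get_queue_items` above scrapes the `rabbitmqctl list_queues` output with a regex. The parsing step in isolation, using the same pattern plus `re.escape` to address the docstring's warning about unescaped characters in the queue name (the sample output below is illustrative):

```python
import re


def parse_queue_count(output, queue_name):
    """Extract the message count for queue_name from ``rabbitmqctl
    list_queues`` output; return None when the queue is absent."""
    for line in output.splitlines():
        # Same pattern as get_queue_items(), with the name escaped so
        # special characters cannot corrupt the regex.
        m = re.search(r"%s\s+([0-9]+)" % re.escape(queue_name), line)
        if m:
            return int(m.group(1))
    return None
```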


@@ -1,280 +0,0 @@
# Copyright (c) 2011 OpenStack, LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Functions to initiate and shut down services needed by the tests."""
import os
import re
import subprocess
import time
from collections import namedtuple
from httplib2 import Http
from nose.plugins.skip import SkipTest
from proboscis import decorators
def _is_web_service_alive(url):
    """Does an HTTP GET request to see if the web service is up."""
client = Http()
try:
resp = client.request(url, 'GET')
        return resp is not None
except Exception:
return False
_running_services = []
def get_running_services():
    """Returns the list of services which this program has started."""
return _running_services
def start_proc(cmd, shell=False):
"""Given a command, starts and returns a process."""
env = os.environ.copy()
proc = subprocess.Popen(
cmd,
shell=shell,
stdin=subprocess.PIPE,
bufsize=0,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env
)
return proc
MemoryInfo = namedtuple("MemoryInfo", ['mapped', 'writeable', 'shared'])
class Service(object):
"""Starts and stops a service under test.
The methods to start and stop the service will not actually do anything
if they detect the service is already running on this machine. This is
because it may be useful for developers to start the services themselves
some other way.
"""
# TODO(tim.simpson): Hard to follow, consider renaming certain attributes.
def __init__(self, cmd):
"""Defines a service to run."""
if not isinstance(cmd, list):
raise TypeError()
self.cmd = cmd
self.do_not_manage_proc = False
self.proc = None
def __del__(self):
if self.is_running:
self.stop()
def ensure_started(self):
"""Starts the service if it is not running."""
if not self.is_running:
self.start()
def find_proc_id(self):
"""Finds and returns the process id."""
if not self.cmd:
return False
# The cmd[1] signifies the executable python script. It gets invoked
# as python /path/to/executable args, so the entry is
# /path/to/executable
actual_command = self.cmd[1].split("/")[-1]
proc_command = ["/usr/bin/pgrep", "-f", actual_command]
proc = start_proc(proc_command, shell=False)
# this is to make sure there is only one pid returned from the pgrep
has_two_lines = False
pid = None
for line in iter(proc.stdout.readline, ""):
if has_two_lines:
raise RuntimeError("Found PID twice.")
pid = int(line)
has_two_lines = True
return pid
def get_memory_info(self):
"""Returns how much memory the process is using according to pmap."""
pid = self.find_proc_id()
if not pid:
raise RuntimeError("Can't find PID, so can't get memory.")
proc = start_proc(["/usr/bin/pmap", "-d", str(pid)],
shell=False)
for line in iter(proc.stdout.readline, ""):
m = re.search(r"mapped\:\s([0-9]+)K\s+"
r"writeable/private:\s([0-9]+)K\s+"
r"shared:\s+([0-9]+)K", line)
if m:
return MemoryInfo(int(m.group(1)), int(m.group(2)),
int(m.group(3)))
raise RuntimeError("Memory info not found.")
def get_fd_count_from_proc_file(self):
"""Returns file descriptors according to /proc/<id>/status."""
pid = self.find_proc_id()
with open("/proc/%d/status" % pid) as status:
for line in status.readlines():
index = line.find(":")
name = line[:index]
value = line[index + 1:]
if name == "FDSize":
return int(value)
raise RuntimeError("FDSize not found!")
    def get_fd_count(self):
        """Returns the open file descriptor count by listing /proc/<pid>/fd."""
        pid = self.find_proc_id()
        print("Finding file descriptors...")
        proc = start_proc(['ls', '-la', '/proc/%d/fd' % pid], shell=False)
        # "ls -la" prints a totals line plus "." and "..", so start at -3.
        count = -3
        for line in iter(proc.stdout.readline, ""):
            print("\t" + line)
            count += 1
        if count <= 0:
            raise RuntimeError("Could not get file descriptors!")
        return count
def kill_proc(self):
"""Kills the process, wherever it may be."""
pid = self.find_proc_id()
if pid:
start_proc("sudo kill -9 " + str(pid), shell=True)
time.sleep(1)
if self.is_service_alive():
                raise RuntimeError('Cannot kill process, PID=' + str(pid))
def is_service_alive(self, proc_name_index=1):
        """Searches for the process to see if it's alive.
This function will return true even if this class has not started
the service (searches using ps).
"""
if not self.cmd:
return False
time.sleep(1)
# The cmd[1] signifies the executable python script. It gets invoked
# as python /path/to/executable args, so the entry is
# /path/to/executable
actual_command = self.cmd[proc_name_index].split("/")[-1]
print(actual_command)
proc_command = ["/usr/bin/pgrep", "-f", actual_command]
print(proc_command)
proc = start_proc(proc_command, shell=False)
line = proc.stdout.readline()
print(line)
# pgrep only returns a pid. if there is no pid, it'll return nothing
return len(line) != 0
@property
def is_running(self):
"""Returns true if the service has already been started.
Returns true if this program has started the service or if it
previously detected it had started. The main use of this property
        is to know if the service was already begun by this program;
        use is_service_alive for a more definitive answer.
"""
return self.proc or self.do_not_manage_proc
def restart(self, extra_args):
if self.do_not_manage_proc:
raise RuntimeError("Can't restart proc as the tests don't own it.")
self.stop()
time.sleep(2)
self.start(extra_args=extra_args)
def start(self, time_out=30, extra_args=None):
"""Starts the service if necessary."""
extra_args = extra_args or []
if self.is_running:
raise RuntimeError("Process is already running.")
if self.is_service_alive():
self.do_not_manage_proc = True
return
self.proc = start_proc(self.cmd + extra_args, shell=False)
if not self._wait_for_start(time_out=time_out):
self.stop()
raise RuntimeError("Issued the command successfully but the "
"service (" + str(self.cmd + extra_args) +
") never seemed to start.")
_running_services.append(self)
def stop(self):
"""Stops the service, but only if this program started it."""
if self.do_not_manage_proc:
return
if not self.proc:
raise RuntimeError("Process was not started.")
self.proc.terminate()
self.proc.kill()
self.proc.wait()
self.proc.stdin.close()
self.kill_proc()
self.proc = None
global _running_services
_running_services = [svc for svc in _running_services if svc != self]
def _wait_for_start(self, time_out):
"""Waits until time_out (in seconds) for service to appear."""
give_up_time = time.time() + time_out
while time.time() < give_up_time:
if self.is_service_alive():
return True
return False
class NativeService(Service):
def is_service_alive(self):
return super(NativeService, self).is_service_alive(proc_name_index=0)
class WebService(Service):
"""Starts and stops a web service under test."""
def __init__(self, cmd, url):
"""Defines a service to run."""
Service.__init__(self, cmd)
        if not isinstance(url, str):
raise TypeError()
self.url = url
self.do_not_manage_proc = self.is_service_alive()
def is_service_alive(self):
        """Checks the service's URL to see if it's alive."""
return _is_web_service_alive(self.url)
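`Service.get_memory_info` above parses the totals line of `pmap -d <pid>`. The parsing step on its own, using the same regex and `MemoryInfo` shape (the sample line in the test is illustrative, not captured from a real pmap run):

```python
import re
from collections import namedtuple

MemoryInfo = namedtuple("MemoryInfo", ["mapped", "writeable", "shared"])


def parse_pmap_totals(line):
    """Parse one line of ``pmap -d`` output into a MemoryInfo, using the
    same pattern as Service.get_memory_info() above; return None when
    the line is not the totals line."""
    m = re.search(r"mapped:\s([0-9]+)K\s+"
                  r"writeable/private:\s([0-9]+)K\s+"
                  r"shared:\s+([0-9]+)K", line)
    if m:
        return MemoryInfo(int(m.group(1)), int(m.group(2)), int(m.group(3)))
    return None
```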


@@ -1,251 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 Rackspace Hosting
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import functools
import gettext
import os
import sys
import traceback
import eventlet
from oslo_log import log as logging
import proboscis
import urllib
import wsgi_intercept
from wsgi_intercept.httplib2_intercept import install as wsgi_install
from trove.common import cfg
from trove.common.rpc import service as rpc_service
from trove.common.rpc import version as rpc_version
from trove.common import utils
from trove import rpc
from trove.tests.config import CONFIG
from trove.tests import root_logger
eventlet.monkey_patch(thread=False)
CONF = cfg.CONF
original_excepthook = sys.excepthook
def add_support_for_localization():
"""Adds support for localization in the logging.
If ../nova/__init__.py exists, add ../ to Python search path, so that
it will override what happens to be installed in
/usr/(local/)lib/python...
"""
path = os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)
possible_topdir = os.path.normpath(path)
if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
sys.path.insert(0, possible_topdir)
gettext.install('nova')
def initialize_trove(config_file):
from trove.common import pastedeploy
root_logger.DefaultRootLogger()
cfg.CONF(args=[],
project='trove',
default_config_files=[config_file])
logging.setup(CONF, None)
topic = CONF.taskmanager_queue
rpc.init(CONF)
taskman_service = rpc_service.RpcService(
CONF.taskmanager_rpc_encr_key, topic=topic,
rpc_api_version=rpc_version.RPC_API_VERSION,
manager='trove.taskmanager.manager.Manager')
taskman_service.start()
return pastedeploy.paste_deploy_app(config_file, 'trove', {})
def datastore_init():
# Adds the datastore for mysql (needed to make most calls work).
from trove.configuration.models import DatastoreConfigurationParameters
from trove.datastore import models
models.DBDatastore.create(
id=CONFIG.dbaas_datastore_id, name=CONFIG.dbaas_datastore,
default_version_id=CONFIG.dbaas_datastore_version_id)
models.DBDatastore.create(id=utils.generate_uuid(),
name=CONFIG.dbaas_datastore_name_no_versions,
default_version_id=None)
main_dsv = models.DBDatastoreVersion.create(
id=CONFIG.dbaas_datastore_version_id,
datastore_id=CONFIG.dbaas_datastore_id,
name=CONFIG.dbaas_datastore_version,
manager="mysql",
image_id='c00000c0-00c0-0c00-00c0-000c000000cc',
packages='test packages',
active=1)
models.DBDatastoreVersion.create(
id="d00000d0-00d0-0d00-00d0-000d000000dd",
datastore_id=CONFIG.dbaas_datastore_id,
name='mysql_inactive_version', manager="mysql",
image_id='c00000c0-00c0-0c00-00c0-000c000000cc',
packages=None, active=0)
def add_parm(name, data_type, max_size, min_size=0, restart_required=0):
DatastoreConfigurationParameters.create(
datastore_version_id=main_dsv.id,
name=name,
restart_required=restart_required,
max_size=max_size,
            min_size=min_size,
data_type=data_type,
deleted=0,
deleted_at=None)
add_parm('key_buffer_size', 'integer', 4294967296)
add_parm('connect_timeout', 'integer', 65535)
add_parm('join_buffer_size', 'integer', 4294967296)
add_parm('local_infile', 'integer', 1)
add_parm('collation_server', 'string', None, None)
add_parm('innodb_buffer_pool_size', 'integer', 57671680,
restart_required=1)
def initialize_database():
from trove.db import get_db_api
from trove.db.sqlalchemy import session
db_api = get_db_api()
db_api.drop_db(CONF) # Destroys the database, if it exists.
db_api.db_sync(CONF)
session.configure_db(CONF)
datastore_init()
db_api.configure_db(CONF)
def initialize_fakes(app):
# Set up WSGI interceptor. This sets up a fake host that responds each
# time httplib tries to communicate to localhost, port 8779.
def wsgi_interceptor(*args, **kwargs):
def call_back(env, start_response):
path_info = env.get('PATH_INFO')
if path_info:
env['PATH_INFO'] = urllib.parse.unquote(path_info)
return app.__call__(env, start_response)
return call_back
wsgi_intercept.add_wsgi_intercept('localhost',
CONF.bind_port,
wsgi_interceptor)
from trove.tests.util import event_simulator
event_simulator.monkey_patch()
from trove.tests.fakes import taskmanager
taskmanager.monkey_patch()
def parse_args_for_test_config():
test_conf = 'etc/tests/localhost.test.conf'
repl = False
new_argv = []
for index in range(len(sys.argv)):
arg = sys.argv[index]
print(arg)
if arg[:14] == "--test-config=":
test_conf = arg[14:]
elif arg == "--repl":
repl = True
else:
new_argv.append(arg)
sys.argv = new_argv
return test_conf, repl
def run_tests(repl):
"""Runs all of the tests."""
if repl:
# Actually show errors in the repl.
sys.excepthook = original_excepthook
def no_thanks(exit_code):
print("Tests finished with exit code %d." % exit_code)
sys.exit = no_thanks
proboscis.TestProgram().run_and_exit()
if repl:
import code
code.interact()
def import_tests():
# F401 unused imports needed for tox tests
from trove.tests.api import backups # noqa
from trove.tests.api import configurations # noqa
from trove.tests.api import databases # noqa
from trove.tests.api import datastores # noqa
from trove.tests.api import instances as rd_instances # noqa
from trove.tests.api import instances_actions as rd_actions # noqa
from trove.tests.api import instances_delete # noqa
from trove.tests.api import instances_resize # noqa
from trove.tests.api import limits # noqa
from trove.tests.api.mgmt import instances_actions as mgmt_actions # noqa
from trove.tests.api import replication # noqa
from trove.tests.api import root # noqa
from trove.tests.api import user_access # noqa
from trove.tests.api import users # noqa
from trove.tests.api import versions # noqa
from trove.tests.db import migrations # noqa
def main(import_func):
try:
wsgi_install()
add_support_for_localization()
# Load Trove app
# Paste file needs absolute path
config_file = os.path.realpath('etc/trove/trove.conf.test')
# 'etc/trove/test-api-paste.ini'
app = initialize_trove(config_file)
# Initialize sqlite database.
initialize_database()
# Swap out WSGI, httplib, and other components with test doubles.
initialize_fakes(app)
# Initialize the test configuration.
test_config_file, repl = parse_args_for_test_config()
CONFIG.load_from_file(test_config_file)
import_func()
from trove.tests.util import event_simulator
event_simulator.run_main(functools.partial(run_tests, repl))
except Exception as e:
# Printing the error manually like this is necessary due to oddities
# with sys.excepthook.
print("Run tests failed: %s" % e)
traceback.print_exc()
raise
if __name__ == "__main__":
main(import_tests)
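`initialize_fakes()` above wraps the WSGI app so that `PATH_INFO` is URL-unquoted before dispatch. That wrapper pattern in isolation, with a stub app standing in for the Trove paste app (the stub and its names are illustrative):

```python
from urllib.parse import unquote


def make_interceptor(app):
    """Build the WSGI callable handed to wsgi_intercept: it unquotes
    PATH_INFO in the environ, then delegates to the wrapped app."""
    def call_back(env, start_response):
        path_info = env.get('PATH_INFO')
        if path_info:
            env['PATH_INFO'] = unquote(path_info)
        return app(env, start_response)
    return call_back
```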


@@ -5,18 +5,10 @@
hacking>=3.0.1,<3.1.0 # Apache-2.0
bandit[baseline]>=1.7.7 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
nose>=1.3.7 # LGPL
nosexcover>=1.0.10 # BSD
openstack.nose-plugin>=0.7 # Apache-2.0
WebTest>=2.0.27 # MIT
wsgi-intercept>=1.4.1 # MIT License
proboscis>=1.2.5.3 # Apache-2.0
python-troveclient>=2.2.0 # Apache-2.0
testtools>=2.2.0 # MIT
pymongo!=3.1,>=3.0.2 # Apache-2.0
redis>=2.10.0 # MIT
cassandra-driver!=3.6.0,>=2.1.4 # Apache-2.0
couchdb>=0.8 # Apache-2.0
stestr>=1.1.0 # Apache-2.0
doc8>=0.8.1 # Apache-2.0
astroid==1.6.5 # LGPLv2.1


@@ -40,14 +40,15 @@ commands = oslo_debug_helper {posargs}
[testenv:cover]
allowlist_externals = sh
rm
setenv =
{[testenv]setenv}
PYTHON=coverage run --source trove
commands =
rm -f trove_test.sqlite
coverage erase
sh -c 'OS_TEST_PATH={toxinidir}/backup/tests/unittests stestr run --serial {posargs}'
sh -c 'OS_TEST_PATH={toxinidir}/trove/tests/unittests stestr run --serial {posargs}'
#coverage run -a run_tests.py
coverage html -d cover
coverage xml -o cover/coverage.xml
coverage report --fail-under=46


@@ -1,507 +0,0 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_not_equal
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis.asserts import fail
from proboscis.decorators import time_out
from proboscis import SkipTest
from proboscis import test
from troveclient.compat import exceptions
from trove.common import cfg
from trove.common import exception
from trove.common.utils import generate_uuid
from trove.common.utils import poll_until
from trove import tests
from trove.tests.api.instances import instance_info
from trove.tests.api.instances import TIMEOUT_INSTANCE_CREATE
from trove.tests.api.instances import TIMEOUT_INSTANCE_DELETE
from trove.tests.api.instances import TIMEOUT_INSTANCE_RESTORE
from trove.tests.api.instances import WaitForGuestInstallationToFinish
from trove.tests.config import CONFIG
from trove.tests.util import create_dbaas_client
from trove.tests.util.users import Requirements
BACKUP_NAME = 'backup_test'
BACKUP_DESC = 'test description'
TIMEOUT_BACKUP_CREATE = 60 * 30
TIMEOUT_BACKUP_DELETE = 120
backup_info = None
incremental_info = None
incremental_db = generate_uuid()
incremental_restore_instance_id = None
total_num_dbs = 0
backup_count_prior_to_create = 0
backup_count_for_instance_prior_to_create = 0
@test(depends_on_groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
groups=[tests.DBAAS_API_BACKUPS],
enabled=CONFIG.swift_enabled)
class CreateBackups(object):
@test
def test_backup_create_instance(self):
"""Test create backup for a given instance."""
# Necessary to test that the count increases.
global backup_count_prior_to_create
backup_count_prior_to_create = len(instance_info.dbaas.backups.list())
global backup_count_for_instance_prior_to_create
backup_count_for_instance_prior_to_create = len(
instance_info.dbaas.instances.backups(instance_info.id))
datastore_version = instance_info.dbaas.datastore_versions.get(
instance_info.dbaas_datastore,
instance_info.dbaas_datastore_version)
result = instance_info.dbaas.backups.create(BACKUP_NAME,
instance_info.id,
BACKUP_DESC)
global backup_info
backup_info = result
assert_equal(BACKUP_NAME, result.name)
assert_equal(BACKUP_DESC, result.description)
assert_equal(instance_info.id, result.instance_id)
assert_equal('NEW', result.status)
instance = instance_info.dbaas.instances.get(instance_info.id)
assert_true(instance.status in ['ACTIVE', 'BACKUP', 'HEALTHY'])
assert_equal(instance_info.dbaas_datastore,
result.datastore['type'])
assert_equal(instance_info.dbaas_datastore_version,
result.datastore['version'])
assert_equal(datastore_version.id, result.datastore['version_id'])
class BackupRestoreMixin(object):
def verify_backup(self, backup_id):
def result_is_active():
backup = instance_info.dbaas.backups.get(backup_id)
if backup.status == "COMPLETED":
return True
else:
assert_not_equal("FAILED", backup.status)
return False
poll_until(result_is_active)
def instance_is_totally_gone(self, instance_id):
def instance_is_gone():
try:
instance_info.dbaas.instances.get(
instance_id)
return False
except exceptions.NotFound:
return True
poll_until(
instance_is_gone, time_out=TIMEOUT_INSTANCE_DELETE)
def backup_is_totally_gone(self, backup_id):
def backup_is_gone():
try:
instance_info.dbaas.backups.get(backup_id)
return False
except exceptions.NotFound:
return True
poll_until(backup_is_gone, time_out=TIMEOUT_BACKUP_DELETE)
def verify_instance_is_active(self, instance_id):
# This version just checks the REST API status.
def result_is_active():
instance = instance_info.dbaas.instances.get(instance_id)
if instance.status in CONFIG.running_status:
return True
else:
                # If it's not ACTIVE, anything but BUILD must be
                # an error.
assert_equal("BUILD", instance.status)
if instance_info.volume is not None:
assert_equal(instance.volume.get('used', None), None)
return False
poll_until(result_is_active, sleep_time=5,
time_out=TIMEOUT_INSTANCE_CREATE)
@test(depends_on_classes=[CreateBackups],
groups=[tests.DBAAS_API_BACKUPS],
enabled=CONFIG.swift_enabled)
class WaitForBackupCreateToFinish(BackupRestoreMixin):
"""Wait until the backup creation is finished."""
@test
@time_out(TIMEOUT_BACKUP_CREATE)
def test_backup_created(self):
"""Wait for the backup to be finished."""
self.verify_backup(backup_info.id)
@test(depends_on=[WaitForBackupCreateToFinish],
groups=[tests.DBAAS_API_BACKUPS],
enabled=CONFIG.swift_enabled)
class ListBackups(object):
@test
def test_backup_list(self):
"""Test list backups."""
result = instance_info.dbaas.backups.list()
assert_equal(backup_count_prior_to_create + 1, len(result))
backup = result[0]
assert_equal(BACKUP_NAME, backup.name)
assert_equal(BACKUP_DESC, backup.description)
assert_not_equal(0.0, backup.size)
assert_equal(instance_info.id, backup.instance_id)
assert_equal('COMPLETED', backup.status)
@test
def test_backup_list_filter_datastore(self):
"""Test list backups and filter by datastore."""
result = instance_info.dbaas.backups.list(
datastore=instance_info.dbaas_datastore)
assert_equal(backup_count_prior_to_create + 1, len(result))
backup = result[0]
assert_equal(BACKUP_NAME, backup.name)
assert_equal(BACKUP_DESC, backup.description)
assert_not_equal(0.0, backup.size)
assert_equal(instance_info.id, backup.instance_id)
assert_equal('COMPLETED', backup.status)
@test
def test_backup_list_filter_different_datastore(self):
"""Test list backups and filter by datastore."""
result = instance_info.dbaas.backups.list(
datastore=CONFIG.dbaas_datastore_name_no_versions)
# There should not be any backups for this datastore
assert_equal(0, len(result))
@test
def test_backup_list_filter_datastore_not_found(self):
"""Test list backups and filter by datastore."""
assert_raises(exceptions.NotFound, instance_info.dbaas.backups.list,
datastore='NOT_FOUND')
@test
def test_backup_list_for_instance(self):
"""Test backup list for instance."""
result = instance_info.dbaas.instances.backups(instance_info.id)
assert_equal(backup_count_for_instance_prior_to_create + 1,
len(result))
backup = result[0]
assert_equal(BACKUP_NAME, backup.name)
assert_equal(BACKUP_DESC, backup.description)
assert_not_equal(0.0, backup.size)
assert_equal(instance_info.id, backup.instance_id)
assert_equal('COMPLETED', backup.status)
@test
def test_backup_get(self):
"""Test get backup."""
backup = instance_info.dbaas.backups.get(backup_info.id)
assert_equal(backup_info.id, backup.id)
assert_equal(backup_info.name, backup.name)
assert_equal(backup_info.description, backup.description)
assert_equal(instance_info.id, backup.instance_id)
assert_not_equal(0.0, backup.size)
assert_equal('COMPLETED', backup.status)
assert_equal(instance_info.dbaas_datastore,
backup.datastore['type'])
assert_equal(instance_info.dbaas_datastore_version,
backup.datastore['version'])
datastore_version = instance_info.dbaas.datastore_versions.get(
instance_info.dbaas_datastore,
instance_info.dbaas_datastore_version)
assert_equal(datastore_version.id, backup.datastore['version_id'])
# Test to make sure that user in other tenant is not able
# to GET this backup
reqs = Requirements(is_admin=False)
other_user = CONFIG.users.find_user(
reqs,
black_list=[instance_info.user.auth_user])
other_client = create_dbaas_client(other_user)
assert_raises(exceptions.NotFound, other_client.backups.get,
backup_info.id)
@test(runs_after=[ListBackups],
depends_on=[WaitForBackupCreateToFinish],
groups=[tests.DBAAS_API_BACKUPS],
enabled=CONFIG.swift_enabled)
class IncrementalBackups(BackupRestoreMixin):
@test
def test_create_db(self):
global total_num_dbs
total_num_dbs = len(instance_info.dbaas.databases.list(
instance_info.id))
databases = [{'name': incremental_db}]
instance_info.dbaas.databases.create(instance_info.id, databases)
assert_equal(202, instance_info.dbaas.last_http_code)
total_num_dbs += 1
@test(runs_after=['test_create_db'])
def test_create_incremental_backup(self):
result = instance_info.dbaas.backups.create("incremental-backup",
backup_info.instance_id,
parent_id=backup_info.id)
global incremental_info
incremental_info = result
assert_equal(202, instance_info.dbaas.last_http_code)
# Wait for the backup to finish
self.verify_backup(incremental_info.id)
assert_equal(backup_info.id, incremental_info.parent_id)
@test(groups=[tests.DBAAS_API_BACKUPS],
depends_on_classes=[IncrementalBackups],
enabled=CONFIG.swift_enabled)
class RestoreUsingBackup(object):
@classmethod
def _restore(cls, backup_ref):
restorePoint = {"backupRef": backup_ref}
result = instance_info.dbaas.instances.create(
instance_info.name + "_restore",
instance_info.dbaas_flavor_href,
instance_info.volume,
datastore=instance_info.dbaas_datastore,
datastore_version=instance_info.dbaas_datastore_version,
nics=instance_info.nics,
restorePoint=restorePoint)
assert_equal(200, instance_info.dbaas.last_http_code)
assert_equal("BUILD", result.status)
return result.id
@test(depends_on=[IncrementalBackups])
def test_restore_incremental(self):
"""Restore from incremental backup."""
global incremental_restore_instance_id
incremental_restore_instance_id = self._restore(incremental_info.id)


@test(depends_on_classes=[RestoreUsingBackup],
groups=[tests.DBAAS_API_BACKUPS],
enabled=CONFIG.swift_enabled)
class WaitForRestoreToFinish(object):
@classmethod
def _poll(cls, instance_id_to_poll):
"""Shared "instance restored" test logic."""
# This version just checks the REST API status.
def result_is_active():
instance = instance_info.dbaas.instances.get(instance_id_to_poll)
if instance.status in CONFIG.running_status:
return True
else:
# If it's not ACTIVE, anything but BUILD must be
# an error.
assert_equal("BUILD", instance.status)
if instance_info.volume is not None:
assert_equal(instance.volume.get('used', None), None)
return False
poll_until(result_is_active, time_out=TIMEOUT_INSTANCE_RESTORE,
sleep_time=10)
@test
def test_instance_restored_incremental(self):
try:
self._poll(incremental_restore_instance_id)
except exception.PollTimeOut:
fail('Timed out')


@test(enabled=(not CONFIG.fake_mode and CONFIG.swift_enabled),
depends_on_classes=[WaitForRestoreToFinish],
groups=[tests.DBAAS_API_BACKUPS])
class VerifyRestore(object):
@classmethod
def _poll(cls, instance_id, db):
def db_is_found():
databases = instance_info.dbaas.databases.list(instance_id)
if db in [d.name for d in databases]:
return True
else:
return False
poll_until(db_is_found, time_out=60 * 10, sleep_time=10)
@test
def test_database_restored_incremental(self):
try:
self._poll(incremental_restore_instance_id, incremental_db)
assert_equal(total_num_dbs, len(instance_info.dbaas.databases.list(
incremental_restore_instance_id)))
except exception.PollTimeOut:
fail('Timed out')


@test(groups=[tests.DBAAS_API_BACKUPS], enabled=CONFIG.swift_enabled,
depends_on_classes=[VerifyRestore])
class DeleteRestoreInstance(object):
@classmethod
def _delete(cls, instance_id):
"""Test delete restored instance."""
instance_info.dbaas.instances.delete(instance_id)
assert_equal(202, instance_info.dbaas.last_http_code)
def instance_is_gone():
try:
instance_info.dbaas.instances.get(instance_id)
return False
except exceptions.NotFound:
return True
poll_until(instance_is_gone, time_out=TIMEOUT_INSTANCE_DELETE)
assert_raises(exceptions.NotFound, instance_info.dbaas.instances.get,
instance_id)
@test
def test_delete_restored_instance_incremental(self):
try:
self._delete(incremental_restore_instance_id)
except exception.PollTimeOut:
fail('Timed out')


@test(depends_on_classes=[DeleteRestoreInstance],
groups=[tests.DBAAS_API_BACKUPS],
enabled=CONFIG.swift_enabled)
class DeleteBackups(object):
@test
def test_backup_delete_not_found(self):
"""Test delete unknown backup."""
assert_raises(exceptions.NotFound, instance_info.dbaas.backups.delete,
'nonexistent_backup')
@test
def test_backup_delete_other(self):
"""Test another user cannot delete backup."""
# Test to make sure that user in other tenant is not able
# to DELETE this backup
reqs = Requirements(is_admin=False)
other_user = CONFIG.users.find_user(
reqs,
black_list=[instance_info.user.auth_user])
other_client = create_dbaas_client(other_user)
assert_raises(exceptions.NotFound, other_client.backups.delete,
backup_info.id)
@test(runs_after=[test_backup_delete_other])
def test_backup_delete(self):
"""Test backup deletion."""
instance_info.dbaas.backups.delete(backup_info.id)
assert_equal(202, instance_info.dbaas.last_http_code)
def backup_is_gone():
try:
instance_info.dbaas.backups.get(backup_info.id)
return False
except exceptions.NotFound:
return True
poll_until(backup_is_gone, time_out=TIMEOUT_BACKUP_DELETE)
@test(runs_after=[test_backup_delete])
def test_incremental_deleted(self):
"""Test backup children are deleted."""
if incremental_info is None:
raise SkipTest("Incremental Backup not created")
assert_raises(exceptions.NotFound, instance_info.dbaas.backups.get,
incremental_info.id)


@test(depends_on=[WaitForGuestInstallationToFinish],
runs_after=[DeleteBackups],
enabled=CONFIG.swift_enabled)
class FakeTestHugeBackupOnSmallInstance(BackupRestoreMixin):
report = CONFIG.get_report()
def tweak_fake_guest(self, size):
from trove.tests.fakes import guestagent
guestagent.BACKUP_SIZE = size
@test
def test_load_mysql_with_data(self):
if not CONFIG.fake_mode:
raise SkipTest("Must run in fake mode.")
self.tweak_fake_guest(1.9)
@test(depends_on=[test_load_mysql_with_data])
def test_create_huge_backup(self):
if not CONFIG.fake_mode:
raise SkipTest("Must run in fake mode.")
self.new_backup = instance_info.dbaas.backups.create(
BACKUP_NAME,
instance_info.id,
BACKUP_DESC)
assert_equal(202, instance_info.dbaas.last_http_code)
@test(depends_on=[test_create_huge_backup])
def test_verify_huge_backup_completed(self):
if not CONFIG.fake_mode:
raise SkipTest("Must run in fake mode.")
self.verify_backup(self.new_backup.id)
@test(depends_on=[test_verify_huge_backup_completed])
def test_try_to_restore_on_small_instance_with_volume(self):
if not CONFIG.fake_mode:
raise SkipTest("Must run in fake mode.")
assert_raises(exceptions.Forbidden,
instance_info.dbaas.instances.create,
instance_info.name + "_restore",
instance_info.dbaas_flavor_href,
{'size': 1},
datastore=instance_info.dbaas_datastore,
datastore_version=(instance_info.
dbaas_datastore_version),
nics=instance_info.nics,
restorePoint={"backupRef": self.new_backup.id})
assert_equal(403, instance_info.dbaas.last_http_code)
@test(depends_on=[test_verify_huge_backup_completed])
def test_try_to_restore_on_small_instance_with_flavor_only(self):
if not CONFIG.fake_mode:
raise SkipTest("Must run in fake mode.")
self.orig_conf_value = cfg.CONF.get(
instance_info.dbaas_datastore).volume_support
cfg.CONF.get(instance_info.dbaas_datastore).volume_support = False
assert_raises(exceptions.Forbidden,
instance_info.dbaas.instances.create,
instance_info.name + "_restore", 11,
datastore=instance_info.dbaas_datastore,
datastore_version=(instance_info.
dbaas_datastore_version),
nics=instance_info.nics,
restorePoint={"backupRef": self.new_backup.id})
assert_equal(403, instance_info.dbaas.last_http_code)
cfg.CONF.get(
instance_info.dbaas_datastore
).volume_support = self.orig_conf_value
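The backup and restore tests above wait on asynchronous state changes with `poll_until` from `trove.common.utils`, passing a predicate plus `time_out` and `sleep_time`. A rough sketch of that polling idiom (a hypothetical standalone version, not Trove's actual implementation):

```python
import time


class PollTimeOut(Exception):
    """Raised when the condition does not become true within time_out."""


def poll_until(retriever, condition=bool, sleep_time=1, time_out=60):
    """Call retriever() until condition(value) is true, or give up."""
    deadline = time.time() + time_out
    while time.time() < deadline:
        value = retriever()
        if condition(value):
            return value
        time.sleep(sleep_time)
    raise PollTimeOut()
```

Each `result_is_active`-style predicate in these tests simply returns True once the REST API reports the desired status, so the default boolean condition is enough.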

@ -1,860 +0,0 @@
# Copyright 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from datetime import datetime
import json
import netaddr
from time import sleep
import uuid
from proboscis import after_class
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_not_equal
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis.asserts import fail
from proboscis import before_class
from proboscis.decorators import time_out
from proboscis import SkipTest
from proboscis import test
from troveclient.compat import exceptions
from trove.common.utils import poll_until
from trove import tests
from trove.tests.api.instances import assert_unprocessable
from trove.tests.api.instances import instance_info
from trove.tests.api.instances import InstanceTestInfo
from trove.tests.api.instances import TIMEOUT_INSTANCE_CREATE
from trove.tests.api.instances import TIMEOUT_INSTANCE_DELETE
from trove.tests.config import CONFIG
from trove.tests.util.check import AttrCheck
from trove.tests.util.check import CollectionCheck
from trove.tests.util.check import TypeCheck
from trove.tests.util import create_dbaas_client
from trove.tests.util.mysql import create_mysql_connection
from trove.tests.util.users import Requirements

CONFIG_NAME = "test_configuration"
CONFIG_DESC = "configuration description"
configuration_default = None
configuration_info = None
configuration_href = None
configuration_instance = InstanceTestInfo()
configuration_instance_id = None
sql_variables = [
'key_buffer_size',
'connect_timeout',
'join_buffer_size',
]


def _is_valid_timestamp(time_string):
try:
datetime.strptime(time_string, "%Y-%m-%dT%H:%M:%S")
except ValueError:
return False
return True
# helper methods to validate that a configuration is applied to an instance
def _execute_query(host, user_name, password, query):
print("Starting to query database, host: %s, user: %s, password: %s, "
"query: %s" % (host, user_name, password, query))
with create_mysql_connection(host, user_name, password) as db:
result = db.execute(query)
return result


def _get_address(instance_id):
result = instance_info.dbaas_admin.mgmt.instances.show(instance_id)
try:
return next(str(ip) for ip in result.ip
if netaddr.valid_ipv4(ip))
except StopIteration:
fail("No IPV4 ip found")
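`_get_address` filters the management API's mixed v4/v6 address list down to the first IPv4 literal via `netaddr.valid_ipv4`. The same filter using only the standard library (a hypothetical helper, not part of this module):

```python
import ipaddress


def first_ipv4(addresses):
    """Return the first IPv4 literal in a mixed address list, or None."""
    for addr in addresses:
        try:
            if isinstance(ipaddress.ip_address(str(addr)), ipaddress.IPv4Address):
                return str(addr)
        except ValueError:
            continue  # not an IP literal at all
    return None


print(first_ipv4(["fe80::1", "10.0.0.5", "192.168.1.2"]))  # 10.0.0.5
print(first_ipv4(["fe80::1"]))                             # None
```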
def _test_configuration_is_applied_to_instance(instance, configuration_id):
if CONFIG.fake_mode:
raise SkipTest("configuration from sql does not work in fake mode")
instance_test = instance_info.dbaas.instances.get(instance.id)
assert_equal(configuration_id, instance_test.configuration['id'])
if configuration_id:
testconfig_info = instance_info.dbaas.configurations.get(
configuration_id)
else:
testconfig_info = instance_info.dbaas.instance.configuration(
instance.id)
testconfig_info['configuration']
conf_instances = instance_info.dbaas.configurations.instances(
configuration_id)
config_instance_ids = [inst.id for inst in conf_instances]
assert_true(instance_test.id in config_instance_ids)
cfg_names = testconfig_info.values.keys()
host = _get_address(instance.id)
for user in instance.users:
username = user['name']
password = user['password']
concat_variables = "','".join(cfg_names)
query = ("show variables where Variable_name "
"in ('%s');" % concat_variables)
actual_values = _execute_query(host, username, password, query)
print("actual_values %s" % actual_values)
print("testconfig_info.values %s" % testconfig_info.values)
assert_true(len(actual_values) == len(cfg_names))
# check the configs exist
attrcheck = AttrCheck()
allowed_attrs = [actual_key for actual_key, actual_value in actual_values]
attrcheck.contains_allowed_attrs(
testconfig_info.values, allowed_attrs,
msg="Configurations parameters")
def _get_parameter_type(name):
instance_info.dbaas.configuration_parameters.get_parameter(
instance_info.dbaas_datastore,
instance_info.dbaas_datastore_version,
name)
resp, body = instance_info.dbaas.client.last_response
print(resp)
print(body)
return json.loads(body.decode())['type']
# check the config values are correct
for key, value in actual_values:
key_type = _get_parameter_type(key)
# mysql returns 'ON' and 'OFF' for True and False respectively
if value == 'ON':
converted_key_value = (str(key), 1)
elif value == 'OFF':
converted_key_value = (str(key), 0)
else:
if key_type == 'integer':
value = int(value)
converted_key_value = (str(key), value)
print("converted_key_value: %s" % str(converted_key_value))
assert_true(converted_key_value in testconfig_info.values.items())
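The branchy conversion above normalizes `SHOW VARIABLES` output, where MySQL reports booleans as `'ON'`/`'OFF'` and every value as a string. Pulled out as a hypothetical helper for clarity:

```python
def normalize_variable(key, value, key_type):
    """Convert one SHOW VARIABLES row into the (name, value) pair form
    used by the configuration's values dict."""
    if value == 'ON':       # MySQL boolean true
        return (str(key), 1)
    if value == 'OFF':      # MySQL boolean false
        return (str(key), 0)
    if key_type == 'integer':
        return (str(key), int(value))
    return (str(key), value)


print(normalize_variable('local_infile', 'ON', 'boolean'))      # ('local_infile', 1)
print(normalize_variable('connect_timeout', '120', 'integer'))  # ('connect_timeout', 120)
```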
class ConfigurationsTestBase(object):
@staticmethod
def expected_instance_datastore_configs(instance_id):
"""Given an instance retrieve the expected test configurations for
instance's datastore.
"""
instance = instance_info.dbaas.instances.get(instance_id)
datastore_type = instance.datastore['type']
datastore_test_configs = CONFIG.get(datastore_type, {})
return datastore_test_configs.get("configurations", {})
@staticmethod
def expected_default_datastore_configs():
"""Returns the expected test configurations for the default datastore
defined in the Test Config as dbaas_datastore.
"""
default_datastore = CONFIG.get('dbaas_datastore', None)
datastore_test_configs = CONFIG.get(default_datastore, {})
return datastore_test_configs.get("configurations", {})


@test(depends_on_groups=[tests.DBAAS_API_BACKUPS],
groups=[tests.DBAAS_API_CONFIGURATIONS])
class CreateConfigurations(ConfigurationsTestBase):
@test
def test_expected_configurations_parameters(self):
"""Test get expected configurations parameters."""
allowed_attrs = ["configuration-parameters"]
instance_info.dbaas.configuration_parameters.parameters(
instance_info.dbaas_datastore,
instance_info.dbaas_datastore_version)
resp, body = instance_info.dbaas.client.last_response
attrcheck = AttrCheck()
config_parameters_dict = json.loads(body.decode())
attrcheck.contains_allowed_attrs(
config_parameters_dict, allowed_attrs,
msg="Configurations parameters")
# sanity check that a few options are in the list
config_params_list = config_parameters_dict['configuration-parameters']
config_param_keys = []
for param in config_params_list:
config_param_keys.append(param['name'])
expected_configs = self.expected_default_datastore_configs()
expected_config_params = expected_configs.get('parameters_list')
# check for duplicate configuration parameters
msg = "check for duplicate configuration parameters"
assert_equal(len(config_param_keys), len(set(config_param_keys)), msg)
for expected_config_item in expected_config_params:
assert_true(expected_config_item in config_param_keys)
@test
def test_expected_get_configuration_parameter(self):
# test GET on a single parameter to verify it has the expected attributes
param_name = 'key_buffer_size'
allowed_config_params = ['name', 'restart_required',
'max', 'min', 'type',
'deleted', 'deleted_at',
'datastore_version_id']
param = instance_info.dbaas.configuration_parameters.get_parameter(
instance_info.dbaas_datastore,
instance_info.dbaas_datastore_version,
param_name)
resp, body = instance_info.dbaas.client.last_response
print("params: %s" % param)
print("resp: %s" % resp)
print("body: %s" % body)
attrcheck = AttrCheck()
config_parameter_dict = json.loads(body.decode())
print("config_parameter_dict: %s" % config_parameter_dict)
attrcheck.contains_allowed_attrs(
config_parameter_dict,
allowed_config_params,
msg="Get Configuration parameter")
assert_equal(param_name, config_parameter_dict['name'])
with TypeCheck('ConfigurationParameter', param) as parameter:
parameter.has_field('name', str)
parameter.has_field('restart_required', bool)
parameter.has_field('max', int)
parameter.has_field('min', int)
parameter.has_field('type', str)
parameter.has_field('datastore_version_id', str)
@test
def test_configurations_create_invalid_values(self):
"""Test create configurations with invalid values."""
values = '{"this_is_invalid": 123}'
try:
instance_info.dbaas.configurations.create(
CONFIG_NAME,
values,
CONFIG_DESC)
except exceptions.UnprocessableEntity:
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 422)
@test
def test_configurations_create_invalid_value_type(self):
"""Test create configuration with invalid value type."""
values = '{"key_buffer_size": "this is a string not int"}'
assert_unprocessable(instance_info.dbaas.configurations.create,
CONFIG_NAME, values, CONFIG_DESC)
@test
def test_configurations_create_value_out_of_bounds(self):
"""Test create configuration with value out of bounds."""
expected_configs = self.expected_default_datastore_configs()
values = json.dumps(expected_configs.get('out_of_bounds_over'))
assert_unprocessable(instance_info.dbaas.configurations.create,
CONFIG_NAME, values, CONFIG_DESC)
values = json.dumps(expected_configs.get('out_of_bounds_under'))
assert_unprocessable(instance_info.dbaas.configurations.create,
CONFIG_NAME, values, CONFIG_DESC)
@test
def test_valid_configurations_create(self):
"""create a configuration with valid parameters from config."""
expected_configs = self.expected_default_datastore_configs()
values = json.dumps(expected_configs.get('valid_values'))
expected_values = json.loads(values)
result = instance_info.dbaas.configurations.create(
CONFIG_NAME,
values,
CONFIG_DESC,
datastore=instance_info.dbaas_datastore,
datastore_version=instance_info.dbaas_datastore_version)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 200)
with TypeCheck('Configuration', result) as configuration:
configuration.has_field('name', str)
configuration.has_field('description', str)
configuration.has_field('values', dict)
configuration.has_field('datastore_name', str)
configuration.has_field('datastore_version_id', str)
configuration.has_field('datastore_version_name', str)
global configuration_info
configuration_info = result
assert_equal(configuration_info.name, CONFIG_NAME)
assert_equal(configuration_info.description, CONFIG_DESC)
assert_equal(configuration_info.values, expected_values)
@test(runs_after=[test_valid_configurations_create])
def test_appending_to_existing_configuration(self):
"""test_appending_to_existing_configuration"""
# test being able to update and insert new parameter name and values
# to an existing configuration
expected_configs = self.expected_default_datastore_configs()
values = json.dumps(expected_configs.get('appending_values'))
# ensure updated timestamp is different than created
if not CONFIG.fake_mode:
sleep(1)
instance_info.dbaas.configurations.edit(configuration_info.id,
values)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 200)


@test(depends_on_classes=[CreateConfigurations],
groups=[tests.DBAAS_API_CONFIGURATIONS])
class AfterConfigurationsCreation(ConfigurationsTestBase):
@test
def test_assign_configuration_to_invalid_instance(self):
"""test assigning to an instance that does not exist"""
invalid_id = "invalid-inst-id"
try:
instance_info.dbaas.instances.modify(invalid_id,
configuration_info.id)
except exceptions.NotFound:
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 404)
@test
def test_assign_configuration_to_valid_instance(self):
"""test assigning a configuration to an instance"""
print("instance_info.id: %s" % instance_info.id)
print("configuration_info: %s" % configuration_info)
print("configuration_info.id: %s" % configuration_info.id)
config_id = configuration_info.id
instance_info.dbaas.instances.modify(instance_info.id,
configuration=config_id)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 202)
@test(depends_on=[test_assign_configuration_to_valid_instance])
def test_assign_configuration_to_instance_with_config(self):
"""test assigning a configuration to an instance conflicts"""
config_id = configuration_info.id
assert_raises(exceptions.BadRequest,
instance_info.dbaas.instances.modify, instance_info.id,
configuration=config_id)
@test(depends_on=[test_assign_configuration_to_valid_instance])
@time_out(30)
def test_get_configuration_details_from_instance_validation(self):
"""validate the configuration after attaching"""
print("instance_info.id: %s" % instance_info.id)
inst = instance_info.dbaas.instances.get(instance_info.id)
configuration_id = inst.configuration['id']
print("configuration_info: %s" % configuration_id)
assert_not_equal(None, configuration_id)
_test_configuration_is_applied_to_instance(instance_info,
configuration_id)
@test(depends_on=[test_get_configuration_details_from_instance_validation])
def test_configurations_get(self):
"""test that the instance shows up on the assigned configuration"""
result = instance_info.dbaas.configurations.get(configuration_info.id)
assert_equal(configuration_info.id, result.id)
assert_equal(configuration_info.name, result.name)
assert_equal(configuration_info.description, result.description)
# check the result field types
with TypeCheck("configuration", result) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("description", str)
check.has_field("values", dict)
check.has_field("created", str)
check.has_field("updated", str)
check.has_field("instance_count", int)
print(result.values)
# check for valid timestamps
assert_true(_is_valid_timestamp(result.created))
assert_true(_is_valid_timestamp(result.updated))
# check that created and updated timestamps differ, since
# test_appending_to_existing_configuration should have changed the
# updated timestamp
if not CONFIG.fake_mode:
assert_not_equal(result.created, result.updated)
assert_equal(result.instance_count, 1)
with CollectionCheck("configuration_values", result.values) as check:
# check each item has the correct type according to the rules
for (item_key, item_val) in result.values.items():
print("item_key: %s" % item_key)
print("item_val: %s" % item_val)
dbaas = instance_info.dbaas
param = dbaas.configuration_parameters.get_parameter(
instance_info.dbaas_datastore,
instance_info.dbaas_datastore_version,
item_key)
if param.type == 'integer':
check.has_element(item_key, int)
if param.type == 'string':
check.has_element(item_key, str)
if param.type == 'boolean':
check.has_element(item_key, bool)
# Test to make sure that another user is not able to GET this config
reqs = Requirements(is_admin=False)
test_auth_user = instance_info.user.auth_user
other_user = CONFIG.users.find_user(reqs, black_list=[test_auth_user])
other_user_tenant_id = other_user.tenant_id
client_tenant_id = instance_info.user.tenant_id
if other_user_tenant_id == client_tenant_id:
other_user = CONFIG.users.find_user(
reqs, black_list=[instance_info.user.auth_user,
other_user])
print(other_user)
print(other_user.__dict__)
other_client = create_dbaas_client(other_user)
assert_raises(exceptions.NotFound, other_client.configurations.get,
configuration_info.id)


@test(depends_on_classes=[AfterConfigurationsCreation],
groups=[tests.DBAAS_API_CONFIGURATIONS])
class ListConfigurations(ConfigurationsTestBase):
@test
def test_configurations_list(self):
# test that created configurations show up in the listing
result = instance_info.dbaas.configurations.list()
for conf in result:
with TypeCheck("Configuration", conf) as check:
check.has_field('id', str)
check.has_field('name', str)
check.has_field('description', str)
check.has_field('datastore_version_id', str)
check.has_field('datastore_version_name', str)
check.has_field('datastore_name', str)
exists = [config for config in result if
config.id == configuration_info.id]
assert_equal(1, len(exists))
configuration = exists[0]
assert_equal(configuration.id, configuration_info.id)
assert_equal(configuration.name, configuration_info.name)
assert_equal(configuration.description, configuration_info.description)
@test
def test_configurations_list_for_instance(self):
# test that getting an instance shows its assigned configuration
instance = instance_info.dbaas.instances.get(instance_info.id)
assert_equal(instance.configuration['id'], configuration_info.id)
assert_equal(instance.configuration['name'], configuration_info.name)
# expecting two things in links, href and bookmark
assert_equal(2, len(instance.configuration['links']))
link = instance.configuration['links'][0]
global configuration_href
configuration_href = link['href']
@test
def test_get_default_configuration_on_instance(self):
# test that the API call to get an instance's default configuration exists
result = instance_info.dbaas.instances.configuration(instance_info.id)
global configuration_default
configuration_default = result
assert_not_equal(None, result.configuration)
@test
def test_changing_configuration_with_nondynamic_parameter(self):
"""test_changing_configuration_with_nondynamic_parameter"""
expected_configs = self.expected_default_datastore_configs()
values = json.dumps(expected_configs.get('nondynamic_parameter'))
instance_info.dbaas.configurations.update(configuration_info.id,
values)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 202)
instance_info.dbaas.configurations.get(configuration_info.id)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 200)
@test(depends_on=[test_changing_configuration_with_nondynamic_parameter])
@time_out(20)
def test_waiting_for_instance_in_restart_required(self):
"""test_waiting_for_instance_in_restart_required"""
def result_is_not_active():
instance = instance_info.dbaas.instances.get(
instance_info.id)
if instance.status in CONFIG.running_status:
return False
else:
return True
poll_until(result_is_not_active)
instance = instance_info.dbaas.instances.get(instance_info.id)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 200)
assert_equal('RESTART_REQUIRED', instance.status)
@test(depends_on=[test_waiting_for_instance_in_restart_required])
def test_restart_service_should_return_active(self):
"""test_restart_service_should_return_active"""
instance_info.dbaas.instances.restart(instance_info.id)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 202)
def result_is_active():
instance = instance_info.dbaas.instances.get(
instance_info.id)
if instance.status in CONFIG.running_status:
return True
else:
assert_true(instance.status in ['REBOOT', 'SHUTDOWN'])
return False
poll_until(result_is_active)
@test(depends_on=[test_restart_service_should_return_active])
@time_out(30)
def test_get_configuration_details_from_instance_validation(self):
"""test_get_configuration_details_from_instance_validation"""
inst = instance_info.dbaas.instances.get(instance_info.id)
configuration_id = inst.configuration['id']
assert_not_equal(None, inst.configuration['id'])
_test_configuration_is_applied_to_instance(instance_info,
configuration_id)
@test(depends_on=[test_configurations_list])
def test_compare_list_and_details_timestamps(self):
# compare config timestamps between list and details calls
result = instance_info.dbaas.configurations.list()
list_config = [config for config in result if
config.id == configuration_info.id]
assert_equal(1, len(list_config))
details_config = instance_info.dbaas.configurations.get(
configuration_info.id)
assert_equal(list_config[0].created, details_config.created)
assert_equal(list_config[0].updated, details_config.updated)


@test(depends_on_classes=[ListConfigurations],
groups=[tests.DBAAS_API_CONFIGURATIONS])
class StartInstanceWithConfiguration(ConfigurationsTestBase):
@test
def test_start_instance_with_configuration(self):
"""test that a new instance will apply the configuration on create"""
global configuration_instance
databases = []
databases.append({"name": "firstdbconfig", "character_set": "latin2",
"collate": "latin2_general_ci"})
databases.append({"name": "db2"})
configuration_instance.databases = databases
users = []
users.append({"name": "liteconf", "password": "liteconfpass",
"databases": [{"name": "firstdbconfig"}]})
configuration_instance.users = users
configuration_instance.name = "TEST_" + str(uuid.uuid4()) + "_config"
flavor_href = instance_info.dbaas_flavor_href
configuration_instance.dbaas_flavor_href = flavor_href
configuration_instance.volume = instance_info.volume
configuration_instance.dbaas_datastore = instance_info.dbaas_datastore
configuration_instance.dbaas_datastore_version = \
instance_info.dbaas_datastore_version
configuration_instance.nics = instance_info.nics
result = instance_info.dbaas.instances.create(
configuration_instance.name,
configuration_instance.dbaas_flavor_href,
configuration_instance.volume,
configuration_instance.databases,
configuration_instance.users,
nics=configuration_instance.nics,
availability_zone="nova",
datastore=configuration_instance.dbaas_datastore,
datastore_version=configuration_instance.dbaas_datastore_version,
configuration=configuration_href)
assert_equal(200, instance_info.dbaas.last_http_code)
assert_equal("BUILD", result.status)
configuration_instance.id = result.id


@test(depends_on_classes=[StartInstanceWithConfiguration],
groups=[tests.DBAAS_API_CONFIGURATIONS])
class WaitForConfigurationInstanceToFinish(ConfigurationsTestBase):
@test
@time_out(TIMEOUT_INSTANCE_CREATE)
def test_instance_with_configuration_active(self):
"""wait for the instance created with configuration"""
def result_is_active():
instance = instance_info.dbaas.instances.get(
configuration_instance.id)
if instance.status in CONFIG.running_status:
return True
else:
assert_equal("BUILD", instance.status)
return False
poll_until(result_is_active)
@test(depends_on=[test_instance_with_configuration_active])
@time_out(30)
def test_get_configuration_details_from_instance_validation(self):
"""Test configuration is applied correctly to the instance."""
inst = instance_info.dbaas.instances.get(configuration_instance.id)
configuration_id = inst.configuration['id']
assert_not_equal(None, configuration_id)
_test_configuration_is_applied_to_instance(configuration_instance,
configuration_id)


@test(depends_on=[WaitForConfigurationInstanceToFinish],
groups=[tests.DBAAS_API_CONFIGURATIONS])
class DeleteConfigurations(ConfigurationsTestBase):
@before_class
def setUp(self):
# need to store the parameter details that will be deleted
config_param_name = sql_variables[1]
instance_info.dbaas.configuration_parameters.get_parameter(
instance_info.dbaas_datastore,
instance_info.dbaas_datastore_version,
config_param_name)
resp, body = instance_info.dbaas.client.last_response
print(resp)
print(body)
self.config_parameter_dict = json.loads(body.decode())
@after_class(always_run=True)
def tearDown(self):
# need to "undelete" the parameter that was deleted from the mgmt call
if instance_info.dbaas:
ds = instance_info.dbaas_datastore
ds_v = instance_info.dbaas_datastore_version
version = instance_info.dbaas.datastore_versions.get(
ds, ds_v)
client = instance_info.dbaas_admin.mgmt_configs
print(self.config_parameter_dict)
client.create(version.id,
self.config_parameter_dict['name'],
self.config_parameter_dict['restart_required'],
self.config_parameter_dict['type'],
self.config_parameter_dict['max'],
self.config_parameter_dict['min'])
@test
def test_delete_invalid_configuration_not_found(self):
# test that deleting a configuration that does not exist raises an exception
invalid_configuration_id = "invalid-config-id"
assert_raises(exceptions.NotFound,
instance_info.dbaas.configurations.delete,
invalid_configuration_id)
@test(depends_on=[test_delete_invalid_configuration_not_found])
def test_delete_configuration_parameter_with_mgmt_api(self):
# test that a parameter assigned to an instance can be deleted
# without affecting a later unassign. So we delete a parameter
# that is used by a test (connect_timeout)
ds = instance_info.dbaas_datastore
ds_v = instance_info.dbaas_datastore_version
version = instance_info.dbaas.datastore_versions.get(
ds, ds_v)
client = instance_info.dbaas_admin.mgmt_configs
config_param_name = self.config_parameter_dict['name']
client.delete(version.id, config_param_name)
assert_raises(
exceptions.NotFound,
instance_info.dbaas.configuration_parameters.get_parameter,
ds,
ds_v,
config_param_name)
@test(depends_on=[test_delete_configuration_parameter_with_mgmt_api])
def test_unable_delete_instance_configurations(self):
# test that deleting a configuration that is assigned to
# an instance is not allowed.
assert_raises(exceptions.BadRequest,
instance_info.dbaas.configurations.delete,
configuration_info.id)
@test(depends_on=[test_unable_delete_instance_configurations])
@time_out(30)
def test_unassign_configuration_from_instances(self):
"""test to unassign configuration from instance"""
instance_info.dbaas.instances.update(configuration_instance.id,
remove_configuration=True)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 202)
instance_info.dbaas.instances.update(instance_info.id,
remove_configuration=True)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 202)
instance_info.dbaas.instances.get(instance_info.id)
        def result_has_no_configuration():
            instance = instance_info.dbaas.instances.get(inst_info.id)
            return not hasattr(instance, 'configuration')
inst_info = instance_info
poll_until(result_has_no_configuration)
inst_info = configuration_instance
poll_until(result_has_no_configuration)
instance = instance_info.dbaas.instances.get(instance_info.id)
assert_equal('RESTART_REQUIRED', instance.status)
@test(depends_on=[test_unassign_configuration_from_instances])
def test_assign_in_wrong_state(self):
        # test assigning a config to an instance in RESTART_REQUIRED state
assert_raises(exceptions.BadRequest,
instance_info.dbaas.instances.modify,
configuration_instance.id,
configuration=configuration_info.id)
@test(depends_on=[test_assign_in_wrong_state])
def test_no_instances_on_configuration(self):
"""test_no_instances_on_configuration"""
result = instance_info.dbaas.configurations.get(configuration_info.id)
assert_equal(configuration_info.id, result.id)
assert_equal(configuration_info.name, result.name)
assert_equal(configuration_info.description, result.description)
assert_equal(result.instance_count, 0)
print(configuration_instance.id)
print(instance_info.id)
@test(depends_on=[test_unassign_configuration_from_instances])
@time_out(120)
def test_restart_service_should_return_active(self):
"""test that after restarting the instance it becomes active"""
instance_info.dbaas.instances.restart(instance_info.id)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 202)
def result_is_active():
instance = instance_info.dbaas.instances.get(
instance_info.id)
if instance.status in CONFIG.running_status:
return True
else:
assert_equal("REBOOT", instance.status)
return False
poll_until(result_is_active)
@test(depends_on=[test_restart_service_should_return_active])
def test_assign_config_and_name_to_instance_using_patch(self):
"""test_assign_config_and_name_to_instance_using_patch"""
new_name = 'new_name'
report = CONFIG.get_report()
report.log("instance_info.id: %s" % instance_info.id)
report.log("configuration_info: %s" % configuration_info)
report.log("configuration_info.id: %s" % configuration_info.id)
report.log("instance name:%s" % instance_info.name)
report.log("instance new name:%s" % new_name)
saved_name = instance_info.name
config_id = configuration_info.id
instance_info.dbaas.instances.update(instance_info.id,
configuration=config_id,
name=new_name)
assert_equal(202, instance_info.dbaas.last_http_code)
check = instance_info.dbaas.instances.get(instance_info.id)
assert_equal(200, instance_info.dbaas.last_http_code)
assert_equal(check.name, new_name)
# restore instance name
instance_info.dbaas.instances.update(instance_info.id,
name=saved_name)
assert_equal(202, instance_info.dbaas.last_http_code)
instance = instance_info.dbaas.instances.get(instance_info.id)
assert_equal('RESTART_REQUIRED', instance.status)
# restart to be sure configuration is applied
instance_info.dbaas.instances.restart(instance_info.id)
assert_equal(202, instance_info.dbaas.last_http_code)
sleep(2)
def result_is_active():
instance = instance_info.dbaas.instances.get(
instance_info.id)
if instance.status in CONFIG.running_status:
return True
else:
assert_equal("REBOOT", instance.status)
return False
poll_until(result_is_active)
# test assigning a configuration to an instance that
# already has an assigned configuration with patch
config_id = configuration_info.id
assert_raises(exceptions.BadRequest,
instance_info.dbaas.instances.update,
instance_info.id, configuration=config_id)
@test(runs_after=[test_assign_config_and_name_to_instance_using_patch])
def test_unassign_configuration_after_patch(self):
"""Remove the configuration from the instance"""
instance_info.dbaas.instances.update(instance_info.id,
remove_configuration=True)
assert_equal(202, instance_info.dbaas.last_http_code)
instance = instance_info.dbaas.instances.get(instance_info.id)
assert_equal('RESTART_REQUIRED', instance.status)
# restart to be sure configuration has been unassigned
instance_info.dbaas.instances.restart(instance_info.id)
assert_equal(202, instance_info.dbaas.last_http_code)
sleep(2)
def result_is_active():
instance = instance_info.dbaas.instances.get(
instance_info.id)
if instance.status in CONFIG.running_status:
return True
else:
assert_equal("REBOOT", instance.status)
return False
poll_until(result_is_active)
result = instance_info.dbaas.configurations.get(configuration_info.id)
assert_equal(result.instance_count, 0)
@test
def test_unassign_configuration_from_invalid_instance_using_patch(self):
# test unassign config group from an invalid instance
invalid_id = "invalid-inst-id"
try:
instance_info.dbaas.instances.update(invalid_id,
remove_configuration=True)
except exceptions.NotFound:
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 404)
@test(runs_after=[test_unassign_configuration_after_patch])
def test_delete_unassigned_configuration(self):
"""test_delete_unassigned_configuration"""
instance_info.dbaas.configurations.delete(configuration_info.id)
resp, body = instance_info.dbaas.client.last_response
assert_equal(resp.status, 202)
@test(depends_on=[test_delete_unassigned_configuration])
@time_out(TIMEOUT_INSTANCE_DELETE)
def test_delete_configuration_instance(self):
"""test_delete_configuration_instance"""
instance_info.dbaas.instances.delete(configuration_instance.id)
assert_equal(202, instance_info.dbaas.last_http_code)
def instance_is_gone():
try:
instance_info.dbaas.instances.get(configuration_instance.id)
return False
except exceptions.NotFound:
return True
poll_until(instance_is_gone)
assert_raises(exceptions.NotFound, instance_info.dbaas.instances.get,
configuration_instance.id)

View File

@ -1,189 +0,0 @@
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_false
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis import before_class
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.api.instances import instance_info
from trove.tests import util
from trove.tests.util import test_config
FAKE = test_config.values['fake_mode']
@test(depends_on_groups=[tests.DBAAS_API_USERS_ACCESS],
groups=[tests.DBAAS_API_DATABASES])
class TestDatabases(object):
"""Test the creation and deletion of additional MySQL databases"""
dbname = "third #?@some_-"
dbname_urlencoded = "third%20%23%3F%40some_-"
dbname2 = "seconddb"
created_dbs = [dbname, dbname2]
system_dbs = ['information_schema', 'mysql', 'lost+found']
@before_class
def setUp(self):
self.dbaas = util.create_dbaas_client(instance_info.user)
self.dbaas_admin = util.create_dbaas_client(instance_info.admin_user)
@test
def test_cannot_create_taboo_database_names(self):
for name in self.system_dbs:
databases = [{"name": name, "character_set": "latin2",
"collate": "latin2_general_ci"}]
assert_raises(exceptions.BadRequest, self.dbaas.databases.create,
instance_info.id, databases)
assert_equal(400, self.dbaas.last_http_code)
@test
def test_create_database(self):
databases = []
databases.append({"name": self.dbname, "character_set": "latin2",
"collate": "latin2_general_ci"})
databases.append({"name": self.dbname2})
self.dbaas.databases.create(instance_info.id, databases)
assert_equal(202, self.dbaas.last_http_code)
if not FAKE:
time.sleep(5)
@test(depends_on=[test_create_database])
def test_create_database_list(self):
databases = self.dbaas.databases.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
        for db in self.created_dbs:
            found = any(result.name == db for result in databases)
            assert_true(found, "Database '%s' not found in result" % db)
@test(depends_on=[test_create_database])
def test_fails_when_creating_a_db_twice(self):
databases = []
databases.append({"name": self.dbname, "character_set": "latin2",
"collate": "latin2_general_ci"})
databases.append({"name": self.dbname2})
assert_raises(exceptions.BadRequest, self.dbaas.databases.create,
instance_info.id, databases)
assert_equal(400, self.dbaas.last_http_code)
@test
def test_create_database_list_system(self):
# Databases that should not be returned in the list
databases = self.dbaas.databases.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
        for db in self.system_dbs:
            found = any(result.name == db for result in databases)
            msg = "Database '%s' SHOULD NOT be found in result" % db
            assert_false(found, msg)
@test
def test_create_database_on_missing_instance(self):
databases = [{"name": "invalid_db", "character_set": "latin2",
"collate": "latin2_general_ci"}]
assert_raises(exceptions.NotFound, self.dbaas.databases.create,
-1, databases)
assert_equal(404, self.dbaas.last_http_code)
@test(runs_after=[test_create_database])
def test_delete_database(self):
self.dbaas.databases.delete(instance_info.id, self.dbname_urlencoded)
assert_equal(202, self.dbaas.last_http_code)
if not FAKE:
time.sleep(5)
dbs = self.dbaas.databases.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
        found = any(result.name == self.dbname for result in dbs)
        assert_false(found, "Database '%s' SHOULD NOT be found in result" %
                     self.dbname)
@test(runs_after=[test_delete_database])
def test_cannot_delete_taboo_database_names(self):
for name in self.system_dbs:
assert_raises(exceptions.BadRequest, self.dbaas.databases.delete,
instance_info.id, name)
assert_equal(400, self.dbaas.last_http_code)
@test(runs_after=[test_delete_database])
def test_delete_database_on_missing_instance(self):
assert_raises(exceptions.NotFound, self.dbaas.databases.delete,
-1, self.dbname_urlencoded)
assert_equal(404, self.dbaas.last_http_code)
@test
def test_database_name_too_long(self):
databases = []
name = ("aasdlkhaglkjhakjdkjgfakjgadgfkajsg"
"34523dfkljgasldkjfglkjadsgflkjagsdd")
databases.append({"name": name})
assert_raises(exceptions.BadRequest, self.dbaas.databases.create,
instance_info.id, databases)
assert_equal(400, self.dbaas.last_http_code)
@test
def test_invalid_database_name(self):
databases = []
databases.append({"name": "sdfsd,"})
assert_raises(exceptions.BadRequest, self.dbaas.databases.create,
instance_info.id, databases)
assert_equal(400, self.dbaas.last_http_code)
@test
def test_pagination(self):
databases = []
databases.append({"name": "Sprockets", "character_set": "latin2",
"collate": "latin2_general_ci"})
databases.append({"name": "Cogs"})
databases.append({"name": "Widgets"})
self.dbaas.databases.create(instance_info.id, databases)
assert_equal(202, self.dbaas.last_http_code)
if not FAKE:
time.sleep(5)
limit = 2
databases = self.dbaas.databases.list(instance_info.id, limit=limit)
assert_equal(200, self.dbaas.last_http_code)
marker = databases.next
# Better get only as many as we asked for
assert_true(len(databases) <= limit)
assert_true(databases.next is not None)
assert_equal(marker, databases[-1].name)
marker = databases.next
# I better get new databases if I use the marker I was handed.
databases = self.dbaas.databases.list(instance_info.id, limit=limit,
marker=marker)
assert_equal(200, self.dbaas.last_http_code)
assert_true(marker not in [database.name for database in databases])
# Now fetch again with a larger limit.
databases = self.dbaas.databases.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
assert_true(databases.next is None)

View File

@ -1,197 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nose.tools import assert_equal
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis import before_class
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.util.check import TypeCheck
from trove.tests.util import create_dbaas_client
from trove.tests.util import test_config
from trove.tests.util.users import Requirements
NAME = "nonexistent"
@test(groups=[tests.DBAAS_API_DATASTORES],
depends_on_groups=[tests.DBAAS_API_VERSIONS])
class Datastores(object):
@before_class
def setUp(self):
rd_user = test_config.users.find_user(
Requirements(is_admin=False, services=["trove"]))
rd_admin = test_config.users.find_user(
Requirements(is_admin=True, services=["trove"]))
self.rd_client = create_dbaas_client(rd_user)
self.rd_admin = create_dbaas_client(rd_admin)
@test
def test_datastore_list_attrs(self):
datastores = self.rd_client.datastores.list()
for datastore in datastores:
with TypeCheck('Datastore', datastore) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("links", list)
check.has_field("versions", list)
@test
def test_datastore_get(self):
# Test get by name
datastore_by_name = self.rd_client.datastores.get(
test_config.dbaas_datastore)
with TypeCheck('Datastore', datastore_by_name) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("links", list)
assert_equal(datastore_by_name.name, test_config.dbaas_datastore)
# test get by id
datastore_by_id = self.rd_client.datastores.get(
datastore_by_name.id)
with TypeCheck('Datastore', datastore_by_id) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("links", list)
check.has_field("versions", list)
assert_equal(datastore_by_id.id, datastore_by_name.id)
@test
def test_datastore_not_found(self):
try:
assert_raises(exceptions.NotFound,
self.rd_client.datastores.get, NAME)
except exceptions.BadRequest as e:
assert_equal(e.message,
"Datastore '%s' cannot be found." % NAME)
@test
def test_create_inactive_datastore_by_admin(self):
datastores = self.rd_admin.datastores.list()
for ds in datastores:
if ds.name == test_config.dbaas_datastore_name_no_versions:
for version in ds.versions:
if version['name'] == 'inactive_version':
return
# Create datastore version for testing
# 'Test_Datastore_1' is also used in other test cases.
# Will be deleted in test_delete_datastore_version
self.rd_admin.mgmt_datastore_versions.create(
"inactive_version", test_config.dbaas_datastore_name_no_versions,
"test_manager", None, image_tags=['trove'],
active='false', default='false'
)
@test(depends_on=[test_create_inactive_datastore_by_admin])
def test_datastore_with_no_active_versions_is_hidden(self):
datastores = self.rd_client.datastores.list()
name_list = [datastore.name for datastore in datastores]
assert_true(
test_config.dbaas_datastore_name_no_versions not in name_list)
@test(depends_on=[test_create_inactive_datastore_by_admin])
def test_datastore_with_no_active_versions_is_visible_for_admin(self):
datastores = self.rd_admin.datastores.list()
name_list = [datastore.name for datastore in datastores]
assert_true(test_config.dbaas_datastore_name_no_versions in name_list)
@test(groups=[tests.DBAAS_API_DATASTORES])
class DatastoreVersions(object):
@before_class
def setUp(self):
rd_user = test_config.users.find_user(
Requirements(is_admin=False, services=["trove"]))
self.rd_client = create_dbaas_client(rd_user)
self.datastore_active = self.rd_client.datastores.get(
test_config.dbaas_datastore)
self.datastore_version_active = self.rd_client.datastore_versions.list(
self.datastore_active.id)[0]
@test
def test_datastore_version_list_attrs(self):
versions = self.rd_client.datastore_versions.list(
self.datastore_active.name)
for version in versions:
with TypeCheck('DatastoreVersion', version) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("links", list)
@test
def test_datastore_version_get_attrs(self):
version = self.rd_client.datastore_versions.get(
self.datastore_active.name, self.datastore_version_active.name)
with TypeCheck('DatastoreVersion', version) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("datastore", str)
check.has_field("links", list)
assert_equal(version.name, self.datastore_version_active.name)
@test
def test_datastore_version_get_by_uuid_attrs(self):
version = self.rd_client.datastore_versions.get_by_uuid(
self.datastore_version_active.id)
with TypeCheck('DatastoreVersion', version) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("datastore", str)
check.has_field("links", list)
assert_equal(version.name, self.datastore_version_active.name)
@test
def test_datastore_version_not_found(self):
assert_raises(exceptions.BadRequest,
self.rd_client.datastore_versions.get,
self.datastore_active.name, NAME)
@test
def test_datastore_version_list_by_uuid(self):
versions = self.rd_client.datastore_versions.list(
self.datastore_active.id)
for version in versions:
with TypeCheck('DatastoreVersion', version) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("links", list)
@test
def test_datastore_version_get_by_uuid(self):
version = self.rd_client.datastore_versions.get(
self.datastore_active.id, self.datastore_version_active.id)
with TypeCheck('DatastoreVersion', version) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("datastore", str)
check.has_field("links", list)
assert_equal(version.name, self.datastore_version_active.name)
@test
def test_datastore_version_invalid_uuid(self):
try:
self.rd_client.datastore_versions.get_by_uuid(
self.datastore_version_active.id)
except exceptions.BadRequest as e:
assert_equal(e.message,
"Datastore version '%s' cannot be found." %
test_config.dbaas_datastore_version)

File diff suppressed because it is too large

View File

@ -1,610 +0,0 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import time
from proboscis import after_class
from proboscis import asserts
from proboscis import before_class
from proboscis.decorators import time_out
from proboscis import SkipTest
from proboscis import test
from troveclient.compat.exceptions import BadRequest
from troveclient.compat.exceptions import HTTPNotImplemented
from trove.common import cfg
from trove.common.utils import poll_until
from trove import tests
from trove.tests.api.instances import assert_unprocessable
from trove.tests.api.instances import EPHEMERAL_SUPPORT
from trove.tests.api.instances import instance_info
from trove.tests.api.instances import VOLUME_SUPPORT
from trove.tests.config import CONFIG
import trove.tests.util as testsutil
from trove.tests.util.check import TypeCheck
from trove.tests.util import LocalSqlClient
from trove.tests.util.server_connection import create_server_connection
MYSQL_USERNAME = "test_user"
MYSQL_PASSWORD = "abcde"
FAKE_MODE = CONFIG.fake_mode
# If true, then we will actually log into the database.
USE_IP = not FAKE_MODE
class MySqlConnection(object):
def __init__(self, host):
self.host = host
def connect(self):
"""Connect to MySQL database."""
print("Connecting to MySQL, mysql --host %s -u %s -p%s"
% (self.host, MYSQL_USERNAME, MYSQL_PASSWORD))
sql_engine = LocalSqlClient.init_engine(MYSQL_USERNAME, MYSQL_PASSWORD,
self.host)
self.client = LocalSqlClient(sql_engine, use_flush=False)
def is_connected(self):
cmd = "SELECT 1;"
try:
with self.client:
self.client.execute(cmd)
return True
except Exception as e:
print(
"Failed to execute command: %s, error: %s" % (cmd, str(e))
)
return False
def execute(self, cmd):
try:
with self.client:
self.client.execute(cmd)
return True
except Exception as e:
print(
"Failed to execute command: %s, error: %s" % (cmd, str(e))
)
return False
# Use the default value from trove.common.cfg; it can be overridden by
# an environment variable when the tests run.
def get_resize_timeout():
value_from_env = os.environ.get("TROVE_RESIZE_TIME_OUT", None)
if value_from_env:
return int(value_from_env)
return cfg.CONF.resize_time_out
TIME_OUT_TIME = get_resize_timeout()
class ActionTestBase(object):
"""Has some helpful functions for testing actions.
The test user must be created for some of these functions to work.
"""
def set_up(self):
"""If you're using this as a base class, call this method first."""
self.dbaas = instance_info.dbaas
if USE_IP:
address = instance_info.get_address()
self.connection = MySqlConnection(address)
@property
def instance(self):
return self.dbaas.instances.get(self.instance_id)
@property
def instance_address(self):
return instance_info.get_address()
@property
def instance_mgmt_address(self):
return instance_info.get_address(mgmt=True)
@property
def instance_id(self):
return instance_info.id
def create_user(self):
"""Create a MySQL user we can use for this test."""
users = [{"name": MYSQL_USERNAME, "password": MYSQL_PASSWORD,
"databases": [{"name": MYSQL_USERNAME}]}]
self.dbaas.users.create(instance_info.id, users)
def has_user():
users = self.dbaas.users.list(instance_info.id)
return any([user.name == MYSQL_USERNAME for user in users])
poll_until(has_user, time_out=30)
if not FAKE_MODE:
time.sleep(5)
def ensure_mysql_is_running(self):
if USE_IP:
self.connection.connect()
asserts.assert_true(self.connection.is_connected(),
"Unable to connect to MySQL.")
self.proc_id = self.find_mysql_proc_on_instance()
        asserts.assert_is_not_none(self.proc_id,
                                   "MySQL process cannot be found.")
"MySQL process can not be found.")
asserts.assert_is_not_none(self.instance)
asserts.assert_true(self.instance.status in CONFIG.running_status)
def find_mysql_proc_on_instance(self):
server = create_server_connection(
self.instance_id,
ip_address=self.instance_mgmt_address
)
container_exist_cmd = 'sudo docker ps -q'
pid_cmd = "sudo docker inspect database -f '{{.State.Pid}}'"
try:
server.execute(container_exist_cmd)
except Exception as err:
asserts.fail("Failed to execute command: %s, error: %s" %
(container_exist_cmd, str(err)))
try:
stdout = server.execute(pid_cmd)
return int(stdout)
except ValueError:
return None
except Exception as err:
asserts.fail("Failed to execute command: %s, error: %s" %
(pid_cmd, str(err)))
def log_current_users(self):
users = self.dbaas.users.list(self.instance_id)
CONFIG.get_report().log("Current user count = %d" % len(users))
for user in users:
CONFIG.get_report().log("\t" + str(user))
def _build_expected_msg(self):
expected = {
'instance_size': instance_info.dbaas_flavor.ram,
'tenant_id': instance_info.user.tenant_id,
'instance_id': instance_info.id,
'instance_name': instance_info.name,
'created_at': testsutil.iso_time(
instance_info.initial_result.created),
'launched_at': testsutil.iso_time(self.instance.updated),
'modify_at': testsutil.iso_time(self.instance.updated)
}
return expected
@test(depends_on_groups=[tests.DBAAS_API_INSTANCES])
def create_user():
"""Create a test user so that subsequent tests can log in."""
helper = ActionTestBase()
helper.set_up()
if USE_IP:
try:
helper.create_user()
except BadRequest:
pass # Ignore this if the user already exists.
helper.connection.connect()
asserts.assert_true(helper.connection.is_connected(),
"Test user must be able to connect to MySQL.")
class RebootTestBase(ActionTestBase):
"""Tests restarting MySQL."""
def call_reboot(self):
raise NotImplementedError()
def wait_for_successful_restart(self):
"""Wait until status becomes running.
Reboot is an async operation, make sure the instance is rebooting
before active.
"""
def _is_rebooting():
instance = self.instance
if instance.status == "REBOOT":
return True
return False
poll_until(_is_rebooting, time_out=TIME_OUT_TIME)
def is_finished_rebooting():
instance = self.instance
asserts.assert_not_equal(instance.status, "ERROR")
if instance.status in CONFIG.running_status:
return True
return False
poll_until(is_finished_rebooting, time_out=TIME_OUT_TIME)
def assert_mysql_proc_is_different(self):
if not USE_IP:
return
new_proc_id = self.find_mysql_proc_on_instance()
asserts.assert_not_equal(new_proc_id, self.proc_id,
"MySQL process ID should be different!")
def successful_restart(self):
"""Restart MySQL via the REST API successfully."""
self.call_reboot()
self.wait_for_successful_restart()
self.assert_mysql_proc_is_different()
def wait_for_failure_status(self):
"""Wait until status becomes running."""
def is_finished_rebooting():
instance = self.instance
if instance.status in ['REBOOT', 'ACTIVE', 'HEALTHY']:
return False
            # The reason we check for BLOCKED as well as SHUTDOWN is because
            # Upstart might try to bring mysql back up after the borked
            # connection, and the guest status can end up as either one.
asserts.assert_true(instance.status in ("SHUTDOWN", "BLOCKED"))
return True
poll_until(is_finished_rebooting, time_out=TIME_OUT_TIME)
def wait_for_status(self, status, timeout=60, sleep_time=5):
def is_status():
instance = self.instance
if instance.status in status:
return True
return False
poll_until(is_status, time_out=timeout, sleep_time=sleep_time)
def wait_for_operating_status(self, status, timeout=60, sleep_time=5):
def is_status():
instance = self.instance
if instance.operating_status in status:
return True
return False
poll_until(is_status, time_out=timeout, sleep_time=sleep_time)
@test(groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
depends_on_groups=[tests.DBAAS_API_DATABASES],
depends_on=[create_user])
class RestartTests(RebootTestBase):
"""Test restarting MySQL."""
def call_reboot(self):
self.instance.restart()
asserts.assert_equal(202, self.dbaas.last_http_code)
@before_class
def test_set_up(self):
self.set_up()
@test
def test_ensure_mysql_is_running(self):
"""Make sure MySQL is accessible before restarting."""
self.ensure_mysql_is_running()
@test(depends_on=[test_ensure_mysql_is_running])
def test_successful_restart(self):
"""Restart MySQL via the REST API successfully."""
self.successful_restart()
@test(groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
depends_on_classes=[RestartTests])
class StopTests(RebootTestBase):
"""Test stopping MySQL."""
def call_reboot(self):
self.instance.restart()
@before_class
def test_set_up(self):
self.set_up()
@test
def test_ensure_mysql_is_running(self):
"""Make sure MySQL is accessible before restarting."""
self.ensure_mysql_is_running()
@test(depends_on=[test_ensure_mysql_is_running])
def test_stop_mysql(self):
"""Stops MySQL by admin."""
instance_info.dbaas_admin.management.stop(self.instance_id)
self.wait_for_operating_status(['SHUTDOWN'], timeout=90, sleep_time=10)
@test(depends_on=[test_stop_mysql])
def test_volume_info_while_mysql_is_down(self):
"""
Confirms the get call behaves appropriately while an instance is
down.
"""
if not VOLUME_SUPPORT:
raise SkipTest("Not testing volumes.")
instance = self.dbaas.instances.get(self.instance_id)
with TypeCheck("instance", instance) as check:
check.has_field("volume", dict)
check.true('size' in instance.volume)
check.true('used' in instance.volume)
check.true(isinstance(instance.volume.get('size', None), int))
check.true(isinstance(instance.volume.get('used', None), float))
@test(depends_on=[test_volume_info_while_mysql_is_down])
def test_successful_restart_from_shutdown(self):
"""Restart MySQL via the REST API successfully when MySQL is down."""
self.successful_restart()
@test(groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
depends_on_classes=[StopTests])
class RebootTests(RebootTestBase):
"""Test restarting instance."""
def call_reboot(self):
instance_info.dbaas_admin.management.reboot(self.instance_id)
@before_class
def test_set_up(self):
self.set_up()
asserts.assert_true(hasattr(self, 'dbaas'))
asserts.assert_true(self.dbaas is not None)
@test
def test_ensure_mysql_is_running(self):
"""Make sure MySQL is accessible before rebooting."""
self.ensure_mysql_is_running()
@after_class(depends_on=[test_ensure_mysql_is_running])
def test_successful_reboot(self):
"""MySQL process is different after rebooting."""
if FAKE_MODE:
raise SkipTest("Cannot run this in fake mode.")
self.successful_restart()
@test(groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
depends_on_classes=[RebootTests])
class ResizeInstanceTest(ActionTestBase):
"""Test resizing instance."""
@property
def flavor_id(self):
return instance_info.dbaas_flavor_href
def wait_for_resize(self):
def is_finished_resizing():
instance = self.instance
if instance.status == "RESIZE":
return False
asserts.assert_true(instance.status in CONFIG.running_status)
return True
poll_until(is_finished_resizing, time_out=TIME_OUT_TIME)
@before_class
def setup(self):
self.set_up()
if USE_IP:
self.connection.connect()
asserts.assert_true(self.connection.is_connected(),
"Should be able to connect before resize.")
@test
def test_instance_resize_same_size_should_fail(self):
asserts.assert_raises(BadRequest, self.dbaas.instances.resize_instance,
self.instance_id, self.flavor_id)
@test(enabled=VOLUME_SUPPORT)
def test_instance_resize_to_ephemeral_in_volume_support_should_fail(self):
flavor_name = CONFIG.values.get('instance_bigger_eph_flavor_name',
'eph.rd-smaller')
flavor_id = None
for item in instance_info.flavors:
if item.name == flavor_name:
flavor_id = item.id
asserts.assert_is_not_none(flavor_id)
def is_active():
return self.instance.status in CONFIG.running_status
poll_until(is_active, time_out=TIME_OUT_TIME)
asserts.assert_true(self.instance.status in CONFIG.running_status)
asserts.assert_raises(HTTPNotImplemented,
self.dbaas.instances.resize_instance,
self.instance_id, flavor_id)
@test(enabled=EPHEMERAL_SUPPORT)
def test_instance_resize_to_non_ephemeral_flavor_should_fail(self):
flavor_name = CONFIG.values.get('instance_bigger_flavor_name',
'm1-small')
flavor_id = None
for item in instance_info.flavors:
if item.name == flavor_name:
flavor_id = item.id
asserts.assert_is_not_none(flavor_id)
asserts.assert_raises(BadRequest, self.dbaas.instances.resize_instance,
self.instance_id, flavor_id)
def obtain_flavor_ids(self):
old_id = self.instance.flavor['id']
self.expected_old_flavor_id = old_id
if EPHEMERAL_SUPPORT:
flavor_name = CONFIG.values.get('instance_bigger_eph_flavor_name',
'eph.rd-smaller')
else:
flavor_name = CONFIG.values.get('instance_bigger_flavor_name',
'm1.small')
new_flavor = None
for item in instance_info.flavors:
if item.name == flavor_name:
new_flavor = item
break
asserts.assert_is_not_none(new_flavor)
self.old_dbaas_flavor = instance_info.dbaas_flavor
instance_info.dbaas_flavor = new_flavor
self.expected_new_flavor_id = new_flavor.id
@test(depends_on=[test_instance_resize_same_size_should_fail])
def test_status_changed_to_resize(self):
"""test_status_changed_to_resize"""
self.log_current_users()
self.obtain_flavor_ids()
self.dbaas.instances.resize_instance(
self.instance_id,
self.expected_new_flavor_id)
asserts.assert_equal(202, self.dbaas.last_http_code)
# (WARNING) IF THE RESIZE IS WAY TOO FAST THIS WILL FAIL
assert_unprocessable(
self.dbaas.instances.resize_instance,
self.instance_id,
self.expected_new_flavor_id)
@test(depends_on=[test_status_changed_to_resize])
@time_out(TIME_OUT_TIME)
def test_instance_returns_to_active_after_resize(self):
"""test_instance_returns_to_active_after_resize"""
self.wait_for_resize()
@test(depends_on=[test_instance_returns_to_active_after_resize,
test_status_changed_to_resize])
def test_resize_instance_usage_event_sent(self):
expected = self._build_expected_msg()
expected['old_instance_size'] = self.old_dbaas_flavor.ram
instance_info.consumer.check_message(instance_info.id,
'trove.instance.modify_flavor',
**expected)
@test(depends_on=[test_instance_returns_to_active_after_resize],
runs_after=[test_resize_instance_usage_event_sent])
def resize_should_not_delete_users(self):
"""Resize should not delete users."""
# Resize has an incredibly weird bug where users are deleted after
# a resize. The code below is an attempt to catch this while proceeding
# with the rest of the test (note the use of runs_after).
if USE_IP:
users = self.dbaas.users.list(self.instance_id)
usernames = [user.name for user in users]
if MYSQL_USERNAME not in usernames:
self.create_user()
asserts.fail("Resize made the test user disappear.")
@test(depends_on=[test_instance_returns_to_active_after_resize],
runs_after=[resize_should_not_delete_users])
def test_make_sure_mysql_is_running_after_resize(self):
self.ensure_mysql_is_running()
@test(depends_on=[test_make_sure_mysql_is_running_after_resize])
def test_instance_has_new_flavor_after_resize(self):
actual = self.instance.flavor['id']
asserts.assert_equal(actual, self.expected_new_flavor_id)
@test(depends_on_classes=[ResizeInstanceTest],
groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
enabled=VOLUME_SUPPORT)
class ResizeInstanceVolumeTest(ActionTestBase):
"""Resize the volume of the instance."""
@before_class
def setUp(self):
self.set_up()
self.old_volume_size = int(instance_info.volume['size'])
self.new_volume_size = self.old_volume_size + 1
self.old_volume_fs_size = instance_info.get_volume_filesystem_size()
# Create some databases to check they still exist after the resize
self.expected_dbs = ['salmon', 'halibut']
databases = []
for name in self.expected_dbs:
databases.append({"name": name})
instance_info.dbaas.databases.create(instance_info.id, databases)
@test
@time_out(60)
def test_volume_resize(self):
"""test_volume_resize"""
instance_info.dbaas.instances.resize_volume(instance_info.id,
self.new_volume_size)
@test(depends_on=[test_volume_resize])
def test_volume_resize_success(self):
"""test_volume_resize_success"""
def check_resize_status():
instance = instance_info.dbaas.instances.get(instance_info.id)
if instance.status in CONFIG.running_status:
return True
elif instance.status in ["RESIZE", "SHUTDOWN"]:
return False
else:
asserts.fail("Status should not be %s" % instance.status)
poll_until(check_resize_status, sleep_time=5, time_out=300,
initial_delay=5)
instance = instance_info.dbaas.instances.get(instance_info.id)
asserts.assert_equal(instance.volume['size'], self.new_volume_size)
@test(depends_on=[test_volume_resize_success])
def test_volume_filesystem_resize_success(self):
"""test_volume_filesystem_resize_success"""
# The get_volume_filesystem_size is a mgmt call through the guestagent
# and the volume resize occurs through the fake nova-volume.
# Currently the guestagent fakes don't have access to the nova fakes so
# it doesn't know that a volume resize happened and to what size so
# we can't fake the filesystem size.
if FAKE_MODE:
raise SkipTest("Cannot run this in fake mode.")
new_volume_fs_size = instance_info.get_volume_filesystem_size()
asserts.assert_true(self.old_volume_fs_size < new_volume_fs_size)
# The total filesystem size is not going to be exactly the same size of
# cinder volume but it should round to it. (e.g. round(1.9) == 2)
asserts.assert_equal(round(new_volume_fs_size), self.new_volume_size)
@test(depends_on=[test_volume_resize_success])
def test_resize_volume_usage_event_sent(self):
"""test_resize_volume_usage_event_sent"""
expected = self._build_expected_msg()
expected['volume_size'] = self.new_volume_size
expected['old_volume_size'] = self.old_volume_size
instance_info.consumer.check_message(instance_info.id,
'trove.instance.modify_volume',
**expected)
@test(depends_on=[test_volume_resize_success])
def test_volume_resize_success_databases(self):
"""test_volume_resize_success_databases"""
databases = instance_info.dbaas.databases.list(instance_info.id)
db_list = []
for database in databases:
db_list.append(database.name)
for name in self.expected_dbs:
if name not in db_list:
asserts.fail(
"Database %s was not found after the volume resize. "
"Returned list: %s" % (name, databases))
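These tests lean heavily on a `poll_until(condition, sleep_time=…, time_out=…)` helper to wait for the instance to settle. A minimal, hypothetical stand-in (the real utility lives in `trove.common.utils` and differs in detail) sketches the pattern:

```python
import time


def poll_until(retriever, condition=lambda value: value,
               sleep_time=1, time_out=60, initial_delay=0):
    """Call retriever() repeatedly until condition(result) is truthy.

    Hypothetical sketch of the polling helper used by the tests above;
    raises TimeoutError if the deadline passes first.
    """
    if initial_delay:
        time.sleep(initial_delay)
    deadline = time.monotonic() + time_out
    while time.monotonic() < deadline:
        value = retriever()
        if condition(value):
            return value
        time.sleep(sleep_time)
    raise TimeoutError("condition not met within %s seconds" % time_out)


# Usage mirroring test_volume_resize_success: poll a boolean probe.
attempts = {"n": 0}


def probe():
    attempts["n"] += 1
    return attempts["n"] >= 3


result = poll_until(probe, sleep_time=0, time_out=5)
```

The `initial_delay` parameter matches how `test_volume_resize_success` skips the first few seconds while the resize request is still being accepted.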


@@ -1,102 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 Rackspace Hosting
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import os
import time
from proboscis import asserts
from proboscis.decorators import time_out
from proboscis import SkipTest
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.api import configurations
from trove.tests.api.instances import instance_info
from trove.tests.config import CONFIG
def do_not_delete_instance():
return os.environ.get("TESTS_DO_NOT_DELETE_INSTANCE", None) is not None
@test(depends_on_groups=[tests.DBAAS_API_REPLICATION],
groups=[tests.DBAAS_API_INSTANCES_DELETE],
enabled=not do_not_delete_instance())
class TestDeleteInstance(object):
@time_out(3 * 60)
@test
def test_delete(self):
"""Delete instance for clean up."""
if not hasattr(instance_info, "initial_result"):
raise SkipTest("Instance was never created, skipping test...")
# Update the report so the logs inside the instance will be saved.
CONFIG.get_report().update()
dbaas = instance_info.dbaas
dbaas.instances.delete(instance_info.id)
attempts = 0
try:
time.sleep(1)
result = True
while result is not None:
attempts += 1
result = dbaas.instances.get(instance_info.id)
asserts.assert_equal(200, dbaas.last_http_code)
asserts.assert_equal("SHUTDOWN", result.status)
time.sleep(1)
except exceptions.NotFound:
pass
except Exception as ex:
asserts.fail("A failure occurred when trying to GET instance %s "
"for the %d time: %s" %
(str(instance_info.id), attempts, str(ex)))
@test(depends_on=[test_delete])
def test_instance_status_deleted_in_db(self):
"""test_instance_status_deleted_in_db"""
dbaas_admin = instance_info.dbaas_admin
mgmt_details = dbaas_admin.management.index(deleted=True)
for instance in mgmt_details:
if instance.id == instance_info.id:
asserts.assert_equal(instance.service_status, 'DELETED')
break
else:
asserts.fail("Could not find instance %s" % instance_info.id)
@test(depends_on=[test_instance_status_deleted_in_db])
def test_delete_datastore(self):
dbaas_admin = instance_info.dbaas_admin
datastore = dbaas_admin.datastores.get(
CONFIG.dbaas_datastore_name_no_versions)
versions = dbaas_admin.datastore_versions.list(datastore.id)
for version in versions:
dbaas_admin.mgmt_datastore_versions.delete(version.id)
# Delete the datastore
dbaas_admin.datastores.delete(datastore.id)
@test(depends_on=[test_instance_status_deleted_in_db])
def test_delete_configuration(self):
"""Delete configurations created during testing."""
dbaas_admin = instance_info.dbaas_admin
configs = dbaas_admin.configurations.list()
for config in configs:
if config.name == configurations.CONFIG_NAME:
dbaas_admin.configurations.delete(config.id)


@@ -1,393 +0,0 @@
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from novaclient.exceptions import BadRequest
from novaclient.v2.servers import Server
from oslo_messaging._drivers.common import RPCException
from proboscis import test
from testtools import TestCase
from trove.common.exception import PollTimeOut
from trove.common.exception import TroveError
from trove.common import template
from trove.common import utils
from trove.datastore.models import DatastoreVersion
from trove.guestagent import api as guest
from trove.instance.models import DBInstance
from trove.instance.models import InstanceServiceStatus
from trove.instance import service_status as srvstatus
from trove.instance.tasks import InstanceTasks
from trove.taskmanager import models
from trove.tests.fakes import nova
from trove.tests.unittests import trove_testtools
from trove.tests.util import test_config
GROUP = 'dbaas.api.instances.resize'
OLD_FLAVOR_ID = 1
NEW_FLAVOR_ID = 2
OLD_FLAVOR = nova.FLAVORS.get(OLD_FLAVOR_ID)
NEW_FLAVOR = nova.FLAVORS.get(NEW_FLAVOR_ID)
class ResizeTestBase(TestCase):
def _init(self):
self.instance_id = 500
context = trove_testtools.TroveTestContext(self)
self.db_info = DBInstance.create(
name="instance",
flavor_id=OLD_FLAVOR_ID,
tenant_id=999,
volume_size=None,
datastore_version_id=test_config.dbaas_datastore_version_id,
task_status=InstanceTasks.RESIZING)
self.server = mock.MagicMock(spec=Server)
self.instance = models.BuiltInstanceTasks(
context,
self.db_info,
self.server,
datastore_status=InstanceServiceStatus.create(
instance_id=self.db_info.id,
status=srvstatus.ServiceStatuses.RUNNING))
self.instance.server.flavor = {'id': OLD_FLAVOR_ID}
self.guest = mock.MagicMock(spec=guest.API)
self.instance._guest = self.guest
self.instance.refresh_compute_server_info = lambda: None
self.instance._refresh_datastore_status = lambda: None
self.instance.update_db = mock.Mock()
self.instance.set_datastore_status_to_paused = mock.Mock()
self.poll_until_side_effects = []
self.action = None
def tearDown(self):
super(ResizeTestBase, self).tearDown()
self.db_info.delete()
def _poll_until(self, *args, **kwargs):
try:
effect = self.poll_until_side_effects.pop(0)
except IndexError:
effect = None
if isinstance(effect, Exception):
raise effect
elif effect is not None:
new_status, new_flavor_id = effect
self.server.status = new_status
self.instance.server.flavor['id'] = new_flavor_id
def _datastore_changes_to(self, new_status):
self.instance.datastore_status.status = new_status
@test(groups=[GROUP, GROUP + '.resize'])
class ResizeTests(ResizeTestBase):
def setUp(self):
super(ResizeTests, self).setUp()
self._init()
# By the time flavor objects pass over amqp to the
# resize action they have been turned into dicts
self.action = models.ResizeAction(self.instance,
OLD_FLAVOR.__dict__,
NEW_FLAVOR.__dict__)
def _start_mysql(self):
datastore = mock.Mock(spec=DatastoreVersion)
datastore.datastore_name = 'mysql'
datastore.name = 'mysql-5.7'
datastore.manager = 'mysql'
config = template.SingleInstanceConfigTemplate(
datastore, NEW_FLAVOR.__dict__, self.instance.id)
self.instance.guest.start_db_with_conf_changes(config.render(),
datastore.name)
def test_guest_wont_stop_mysql(self):
self.guest.stop_db.side_effect = RPCException("Could not stop MySQL!")
self.assertRaises(RPCException, self.action.execute)
self.assertEqual(1, self.guest.stop_db.call_count)
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_nova_wont_resize(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
self.server.resize.side_effect = BadRequest(400)
self.server.status = "ACTIVE"
self.assertRaises(BadRequest, self.action.execute)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.guest.restart.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_nova_resize_timeout(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
self.server.status = "ACTIVE"
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
mock_poll_until.side_effect = [None, PollTimeOut()]
self.assertRaises(PollTimeOut, self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 2
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_nova_doesnt_change_flavor(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
("VERIFY_RESIZE", OLD_FLAVOR_ID),
None,
("ACTIVE", OLD_FLAVOR_ID)])
mock_poll_until.side_effect = self._poll_until
self.assertRaisesRegex(TroveError,
"flavor_id=.* and not .*",
self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 3
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.instance.guest.reset_configuration.assert_called_once_with(
mock.ANY)
self.instance.server.revert_resize.assert_called_once()
self.guest.restart.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_nova_resize_fails(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
None,
("ERROR", OLD_FLAVOR_ID)])
mock_poll_until.side_effect = self._poll_until
self.assertRaisesRegex(TroveError,
"status=ERROR and not VERIFY_RESIZE",
self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 2
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_nova_resizes_in_weird_state(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
None,
("ACTIVE", NEW_FLAVOR_ID)])
mock_poll_until.side_effect = self._poll_until
self.assertRaisesRegex(TroveError,
"status=ACTIVE and not VERIFY_RESIZE",
self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 2
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.guest.restart.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_guest_is_not_okay(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
None,
("VERIFY_RESIZE", NEW_FLAVOR_ID),
None,
PollTimeOut(),
("ACTIVE", OLD_FLAVOR_ID)])
mock_poll_until.side_effect = self._poll_until
self.instance.set_datastore_status_to_paused.side_effect = (
lambda: self._datastore_changes_to(
srvstatus.ServiceStatuses.PAUSED))
self.assertRaises(PollTimeOut, self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 5
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.instance.set_datastore_status_to_paused.assert_called_once()
self.instance.guest.reset_configuration.assert_called_once_with(
mock.ANY)
self.instance.server.revert_resize.assert_called_once()
self.guest.restart.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_mysql_is_not_okay(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
None,
("VERIFY_RESIZE", NEW_FLAVOR_ID),
PollTimeOut(),
("ACTIVE", OLD_FLAVOR_ID)])
mock_poll_until.side_effect = self._poll_until
self.instance.set_datastore_status_to_paused.side_effect = (
lambda: self._datastore_changes_to(
srvstatus.ServiceStatuses.SHUTDOWN))
self._start_mysql()
self.assertRaises(PollTimeOut, self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 4
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.instance.set_datastore_status_to_paused.assert_called_once()
self.instance.guest.reset_configuration.assert_called_once_with(
mock.ANY)
self.instance.server.revert_resize.assert_called_once()
self.guest.restart.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_confirm_resize_fails(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
None,
("VERIFY_RESIZE", NEW_FLAVOR_ID),
None,
None,
("SHUTDOWN", NEW_FLAVOR_ID)])
mock_poll_until.side_effect = self._poll_until
self.instance.set_datastore_status_to_paused.side_effect = (
lambda: self._datastore_changes_to(
srvstatus.ServiceStatuses.RUNNING))
self.server.confirm_resize.side_effect = BadRequest(400)
self._start_mysql()
self.assertRaises(BadRequest, self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 5
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.instance.set_datastore_status_to_paused.assert_called_once()
self.instance.server.confirm_resize.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
def test_revert_nova_fails(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
None,
("VERIFY_RESIZE", NEW_FLAVOR_ID),
None,
PollTimeOut(),
("ERROR", OLD_FLAVOR_ID)])
mock_poll_until.side_effect = self._poll_until
self.instance.set_datastore_status_to_paused.side_effect = (
lambda: self._datastore_changes_to(
srvstatus.ServiceStatuses.PAUSED))
self.assertRaises(PollTimeOut, self.action.execute)
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 5
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.resize.assert_called_once_with(NEW_FLAVOR_ID)
self.instance.set_datastore_status_to_paused.assert_called_once()
self.instance.guest.reset_configuration.assert_called_once_with(
mock.ANY)
self.instance.server.revert_resize.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
@test(groups=[GROUP, GROUP + '.migrate'])
class MigrateTests(ResizeTestBase):
def setUp(self):
super(MigrateTests, self).setUp()
self._init()
self.action = models.MigrateAction(self.instance)
def test_successful_migrate(self):
self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
with mock.patch.object(utils, 'poll_until') as mock_poll_until:
self.poll_until_side_effects.extend([
None,
("VERIFY_RESIZE", NEW_FLAVOR_ID),
None,
None])
mock_poll_until.side_effect = self._poll_until
self.instance.set_datastore_status_to_paused.side_effect = (
lambda: self._datastore_changes_to(
srvstatus.ServiceStatuses.RUNNING))
self.action.execute()
expected_calls = [
mock.call(mock.ANY, sleep_time=2, time_out=120)] * 4
self.assertEqual(expected_calls, mock_poll_until.call_args_list)
# Make sure self.poll_until_side_effects is empty
self.assertFalse(self.poll_until_side_effects)
self.assertEqual(1, self.guest.stop_db.call_count)
self.server.migrate.assert_called_once_with(force_host=None)
self.instance.set_datastore_status_to_paused.assert_called_once()
self.instance.server.confirm_resize.assert_called_once()
self.instance.update_db.assert_called_once_with(
task_status=InstanceTasks.NONE)
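The unit tests above patch `utils.poll_until` with a callable that pops queued state transitions off a list, stepping the fake Nova server through `RESIZE`, `VERIFY_RESIZE`, `ERROR`, and so on. A self-contained, hypothetical reconstruction of that scripting pattern (the names `FakeServer` and `wait_for_verify_resize` are illustrative, not Trove APIs):

```python
from unittest import mock


class FakeServer:
    """Toy stand-in for a Nova server object."""

    def __init__(self):
        self.status = "RESIZE"


def wait_for_verify_resize(server, poll_until):
    """Toy action: block until the server reports VERIFY_RESIZE."""
    poll_until(lambda: server.status == "VERIFY_RESIZE")
    return server.status


server = FakeServer()
# States the scripted poll should step the fake server through,
# mirroring the pop-from-a-list pattern in ResizeTestBase._poll_until.
effects = ["RESIZE", "VERIFY_RESIZE"]


def scripted_poll(predicate, *args, **kwargs):
    # Each call advances the fake server until the predicate holds.
    while effects:
        server.status = effects.pop(0)
        if predicate():
            return
    raise TimeoutError("fake server never reached VERIFY_RESIZE")


poll = mock.Mock(side_effect=scripted_poll)
status = wait_for_verify_resize(server, poll)
```

Driving the fake through an explicit list of states is what lets the tests assert both the final outcome and the exact number of `poll_until` calls.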


@@ -1,176 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 Rackspace Hosting
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from datetime import datetime
from nose.tools import assert_equal
from nose.tools import assert_true
from oslo_utils import timeutils
from proboscis import before_class
from proboscis import test
from troveclient.compat import exceptions
from trove.common import cfg
from trove.tests.fakes import limits as fake_limits
from trove.tests.util import create_dbaas_client
from trove.tests.util.users import Users
CONF = cfg.CONF
GROUP = "dbaas.api.limits"
DEFAULT_RATE = CONF.http_get_rate
DEFAULT_MAX_VOLUMES = CONF.max_volumes_per_tenant
DEFAULT_MAX_INSTANCES = CONF.max_instances_per_tenant
DEFAULT_MAX_BACKUPS = CONF.max_backups_per_tenant
DEFAULT_MAX_RAM = CONF.max_ram_per_tenant
def ensure_limits_are_not_faked(func):
def _cd(*args, **kwargs):
fake_limits.ENABLED = True
try:
return func(*args, **kwargs)
finally:
fake_limits.ENABLED = False
return _cd
@test(groups=[GROUP])
class Limits(object):
@before_class
def setUp(self):
users = [
{
"auth_user": "rate_limit",
"auth_key": "password",
"tenant": "4000",
"requirements": {
"is_admin": False,
"services": ["trove"]
}
},
{
"auth_user": "rate_limit_exceeded",
"auth_key": "password",
"tenant": "4050",
"requirements": {
"is_admin": False,
"services": ["trove"]
}
}]
self._users = Users(users)
rate_user = self._get_user('rate_limit')
self.rd_client = create_dbaas_client(rate_user)
def _get_user(self, name):
return self._users.find_user_by_name(name)
def __is_available(self, next_available):
dt_next = timeutils.parse_isotime(next_available)
dt_now = datetime.now()
return dt_next.time() < dt_now.time()
def _get_limits_as_dict(self, limits):
d = {}
for limit in limits:
d[limit.verb] = limit
return d
# @test
# @ensure_limits_are_not_faked
# def test_limits_index(self):
# """Test_limits_index."""
# limits = self.rd_client.limits.list()
# d = self._get_limits_as_dict(limits)
# # remove the abs_limits from the rate limits
# abs_limits = d.pop("ABSOLUTE", None)
# assert_equal(abs_limits.verb, "ABSOLUTE")
# assert_equal(int(abs_limits.max_instances), DEFAULT_MAX_INSTANCES)
# assert_equal(int(abs_limits.max_backups), DEFAULT_MAX_BACKUPS)
# assert_equal(int(abs_limits.max_volumes), DEFAULT_MAX_VOLUMES)
# assert_equal(int(abs_limits.max_ram), DEFAULT_MAX_RAM)
# for k in d:
# assert_equal(d[k].verb, k)
# assert_equal(d[k].unit, "MINUTE")
# assert_true(int(d[k].remaining) <= DEFAULT_RATE)
# assert_true(d[k].nextAvailable is not None)
@test
@ensure_limits_are_not_faked
def test_limits_get_remaining(self):
"""Test_limits_get_remaining."""
limits = ()
for i in range(5):
limits = self.rd_client.limits.list()
d = self._get_limits_as_dict(limits)
abs_limits = d["ABSOLUTE"]
get = d["GET"]
assert_equal(int(abs_limits.max_instances), DEFAULT_MAX_INSTANCES)
assert_equal(int(abs_limits.max_backups), DEFAULT_MAX_BACKUPS)
assert_equal(int(abs_limits.max_volumes), DEFAULT_MAX_VOLUMES)
assert_equal(int(abs_limits.max_ram), DEFAULT_MAX_RAM)
assert_equal(get.verb, "GET")
assert_equal(get.unit, "MINUTE")
assert_true(int(get.remaining) <= DEFAULT_RATE - 5)
assert_true(get.nextAvailable is not None)
@test
@ensure_limits_are_not_faked
def test_limits_exception(self):
"""Test_limits_exception."""
# use a different user to avoid throttling tests run out of order
rate_user_exceeded = self._get_user('rate_limit_exceeded')
rd_client = create_dbaas_client(rate_user_exceeded)
get = None
encountered = False
for i in range(DEFAULT_RATE + 50):
try:
limits = rd_client.limits.list()
d = self._get_limits_as_dict(limits)
get = d["GET"]
abs_limits = d["ABSOLUTE"]
assert_equal(get.verb, "GET")
assert_equal(get.unit, "MINUTE")
assert_equal(int(abs_limits.max_instances),
DEFAULT_MAX_INSTANCES)
assert_equal(int(abs_limits.max_backups),
DEFAULT_MAX_BACKUPS)
assert_equal(int(abs_limits.max_volumes),
DEFAULT_MAX_VOLUMES)
assert_equal(int(abs_limits.max_ram),
DEFAULT_MAX_RAM)
except exceptions.OverLimit:
encountered = True
assert_true(encountered)
assert_true(int(get.remaining) <= 50)
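The `ensure_limits_are_not_faked` wrapper is a plain try/finally toggle decorator: it flips a module-level flag on for the duration of one test call and restores it even if the test raises. A minimal sketch of the same pattern, with a local `ENABLED` standing in for `trove.tests.fakes.limits.ENABLED`:

```python
ENABLED = False  # stand-in for trove.tests.fakes.limits.ENABLED


def ensure_enabled(func):
    """Turn the flag on for the duration of a single wrapped call."""
    def _wrapper(*args, **kwargs):
        global ENABLED
        ENABLED = True
        try:
            return func(*args, **kwargs)
        finally:
            ENABLED = False  # restored even if the wrapped test raises
    return _wrapper


@ensure_enabled
def flag_during_call():
    # Inside the wrapped call the flag is visible as True.
    return ENABLED


observed = flag_during_call()
```

The try/finally is the important part: without it, a failing test would leave the fake-limits flag set and poison every test that runs after it.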


@@ -1,226 +0,0 @@
# Copyright 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import asserts
from proboscis import before_class
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.util import create_dbaas_client
from trove.tests.util import test_config
from trove.tests.util.users import Requirements
GROUP = "dbaas.api.mgmt.configurations"
@test(groups=[GROUP, tests.DBAAS_API, tests.PRE_INSTANCES])
class ConfigGroupsSetupBeforeInstanceCreation(object):
@before_class
def setUp(self):
self.user = test_config.users.find_user(Requirements(is_admin=True))
self.admin_client = create_dbaas_client(self.user)
self.datastore_version_id = self.admin_client.datastore_versions.get(
"mysql", "5.5").id
@test
def test_valid_config_create_type(self):
name = "testconfig-create"
restart_required = 1
data_type = "string"
max_size = None
min_size = None
client = self.admin_client.mgmt_configs
param_list = client.parameters_by_version(
self.datastore_version_id)
asserts.assert_true(name not in [p.name for p in param_list])
client.create(
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
param_list = client.parameters_by_version(
self.datastore_version_id)
asserts.assert_true(name in [p.name for p in param_list])
param = client.get_parameter_by_version(
self.datastore_version_id, name)
asserts.assert_equal(name, param.name)
asserts.assert_equal(restart_required, param.restart_required)
asserts.assert_equal(data_type, param.type)
# test the modify
restart_required = 0
data_type = "integer"
max_size = "10"
min_size = "1"
client.modify(
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
param = client.get_parameter_by_version(
self.datastore_version_id, name)
asserts.assert_equal(name, param.name)
asserts.assert_equal(restart_required, param.restart_required)
asserts.assert_equal(data_type, param.type)
asserts.assert_equal(max_size, param.max)
asserts.assert_equal(min_size, param.min)
client.delete(self.datastore_version_id, name)
# test show deleted params work
param_list = client.list_all_parameter_by_version(
self.datastore_version_id)
asserts.assert_true(name in [p.name for p in param_list])
param = client.get_any_parameter_by_version(
self.datastore_version_id, name)
asserts.assert_equal(name, param.name)
asserts.assert_equal(restart_required, param.restart_required)
asserts.assert_equal(data_type, param.type)
asserts.assert_equal(int(max_size), int(param.max))
asserts.assert_equal(int(min_size), int(param.min))
asserts.assert_equal(True, param.deleted)
asserts.assert_true(param.deleted_at)
@test
def test_create_config_type_twice_fails(self):
name = "test-delete-config-types"
restart_required = 1
data_type = "string"
max_size = None
min_size = None
client = self.admin_client.mgmt_configs
client.create(
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
asserts.assert_raises(exceptions.BadRequest,
client.create,
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
client.delete(self.datastore_version_id, name)
config_list = client.parameters_by_version(self.datastore_version_id)
asserts.assert_true(name not in [conf.name for conf in config_list])
# testing that recreate of a deleted parameter works.
client.create(
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
config_list = client.parameters_by_version(self.datastore_version_id)
asserts.assert_false(name not in [conf.name for conf in config_list])
@test
def test_delete_config_type(self):
name = "test-delete-config-types"
restart_required = 1
data_type = "string"
max_size = None
min_size = None
client = self.admin_client.mgmt_configs
client.create(
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
client.delete(self.datastore_version_id, name)
config_list = client.parameters_by_version(self.datastore_version_id)
asserts.assert_true(name not in [conf.name for conf in config_list])
@test
def test_delete_config_type_fail(self):
asserts.assert_raises(
exceptions.BadRequest,
self.admin_client.mgmt_configs.delete,
self.datastore_version_id,
"test-delete-config-types")
@test
def test_invalid_config_create_type(self):
name = "testconfig_invalid_type"
restart_required = 1
data_type = "other"
max_size = None
min_size = None
asserts.assert_raises(
exceptions.BadRequest,
self.admin_client.mgmt_configs.create,
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
@test
def test_invalid_config_create_restart_required(self):
name = "testconfig_invalid_restart_required"
restart_required = 5
data_type = "string"
max_size = None
min_size = None
asserts.assert_raises(
exceptions.BadRequest,
self.admin_client.mgmt_configs.create,
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
@test
def test_config_parameter_was_deleted_then_recreate_updates_it(self):
name = "test-delete-and-recreate-param"
restart_required = 1
data_type = "string"
max_size = None
min_size = None
client = self.admin_client.mgmt_configs
client.create(
self.datastore_version_id,
name,
restart_required,
data_type,
max_size,
min_size)
client.delete(self.datastore_version_id, name)
client.create(
self.datastore_version_id,
name,
0,
data_type,
max_size,
min_size)
param_list = client.list_all_parameter_by_version(
self.datastore_version_id)
asserts.assert_true(name in [p.name for p in param_list])
param = client.get_any_parameter_by_version(
self.datastore_version_id, name)
asserts.assert_equal(False, param.deleted)


@@ -1,163 +0,0 @@
# Copyright [2015] Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_false
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis import before_class
from proboscis.check import Check
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.config import CONFIG
from trove.tests.util import create_client
from trove.tests.util import create_dbaas_client
from trove.tests.util import create_glance_client
from trove.tests.util import test_config
from trove.tests.util.users import Requirements
@test(groups=[tests.DBAAS_API_MGMT_DATASTORES],
depends_on_groups=[tests.DBAAS_API_DATASTORES])
class MgmtDataStoreVersion(object):
"""Tests the mgmt datastore version methods."""
@before_class
def setUp(self):
"""Create client for tests."""
reqs = Requirements(is_admin=True)
self.user = CONFIG.users.find_user(reqs)
self.client = create_dbaas_client(self.user)
self.images = []
glance_user = test_config.users.find_user(
Requirements(is_admin=True, services=["glance"]))
self.glance_client = create_glance_client(glance_user)
images = self.glance_client.images.list()
for image in images:
self.images.append(image.id)
def _find_ds_version_by_name(self, ds_version_name):
ds_versions = self.client.mgmt_datastore_versions.list()
for ds_version in ds_versions:
if ds_version_name == ds_version.name:
return ds_version
@test
def test_mgmt_ds_version_list_original_count(self):
"""Tests the mgmt datastore version list method."""
self.ds_versions = self.client.mgmt_datastore_versions.list()
# datastore-versions should exist for a functional Trove deployment.
assert_true(len(self.ds_versions) > 0)
@test
def mgmt_datastore_version_list_requires_admin_account(self):
"""Test admin is required to list datastore versions."""
client = create_client(is_admin=False)
assert_raises(exceptions.Unauthorized,
client.mgmt_datastore_versions.list)
@test(depends_on=[test_mgmt_ds_version_list_original_count])
def test_mgmt_ds_version_list_fields_present(self):
"""Verify that all expected fields are returned by list method."""
expected_fields = [
'id',
'name',
'datastore_id',
'datastore_name',
'datastore_manager',
'image',
'packages',
'active',
'default',
]
for ds_version in self.ds_versions:
with Check() as check:
for field in expected_fields:
check.true(hasattr(ds_version, field),
"List lacks field %s." % field)
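The `Check()` context manager used above comes from Proboscis and accumulates failures instead of aborting on the first one, so a single run reports every missing field. A minimal sketch of that pattern (an illustration only, not the real `proboscis.check.Check` implementation):

```python
# Failure-accumulating checker, sketched after proboscis.check.Check.
# This is a simplified assumption of how it behaves, for illustration.
class Check:
    def __init__(self):
        self.failures = []

    def true(self, condition, message):
        # Record the failure instead of raising immediately.
        if not condition:
            self.failures.append(message)

    def equal(self, expected, actual, message=None):
        if expected != actual:
            self.failures.append(message or "%r != %r" % (expected, actual))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # On a clean exit, raise once listing every recorded failure.
        if exc_type is None and self.failures:
            raise AssertionError("; ".join(self.failures))
        return False
```

With this shape, the field loop above reports all absent fields in one `AssertionError` rather than stopping at the first `hasattr` miss.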
@test(depends_on=[test_mgmt_ds_version_list_original_count])
def test_mgmt_ds_version_get(self):
"""Tests the mgmt datastore version get method."""
test_version = self.ds_versions[0]
found_ds_version = self.client.mgmt_datastore_versions.get(
test_version.id)
assert_equal(test_version.name, found_ds_version.name)
assert_equal(test_version.datastore_id, found_ds_version.datastore_id)
assert_equal(test_version.datastore_name,
found_ds_version.datastore_name)
assert_equal(test_version.datastore_manager,
found_ds_version.datastore_manager)
assert_equal(test_version.image, found_ds_version.image)
assert_equal(test_version.packages, found_ds_version.packages)
assert_equal(test_version.active, found_ds_version.active)
assert_equal(test_version.default, found_ds_version.default)
@test(depends_on=[test_mgmt_ds_version_list_original_count])
def test_mgmt_ds_version_create(self):
"""Tests the mgmt datastore version create method."""
response = self.client.mgmt_datastore_versions.create(
'test_version1', 'test_ds', 'test_mgr',
self.images[0], ['vertica-7.1'])
assert_equal(None, response)
assert_equal(202, self.client.last_http_code)
# Since we created one more ds_version, the total count of
# ds_versions should have increased by 1.
new_ds_versions = self.client.mgmt_datastore_versions.list()
assert_equal(len(self.ds_versions) + 1,
len(new_ds_versions))
# Match the contents of newly created ds_version.
self.created_version = self._find_ds_version_by_name('test_version1')
assert_equal('test_version1', self.created_version.name)
assert_equal('test_ds', self.created_version.datastore_name)
assert_equal('test_mgr', self.created_version.datastore_manager)
assert_equal(self.images[0], self.created_version.image)
assert_equal(['vertica-7.1'], self.created_version.packages)
assert_true(self.created_version.active)
assert_false(self.created_version.default)
@test(depends_on=[test_mgmt_ds_version_create])
def test_mgmt_ds_version_patch(self):
"""Tests the mgmt datastore version edit method."""
self.client.mgmt_datastore_versions.edit(
self.created_version.id, image=self.images[1],
packages=['pkg1'])
assert_equal(202, self.client.last_http_code)
# Let's match the content of the patched datastore version.
patched_ds_version = self._find_ds_version_by_name('test_version1')
assert_equal(self.images[1], patched_ds_version.image)
assert_equal(['pkg1'], patched_ds_version.packages)
@test(depends_on=[test_mgmt_ds_version_patch])
def test_mgmt_ds_version_delete(self):
"""Tests the mgmt datastore version delete method."""
self.client.mgmt_datastore_versions.delete(self.created_version.id)
assert_equal(202, self.client.last_http_code)
# Delete the created datastore as well.
self.client.datastores.delete(self.created_version.datastore_id)
# Let's verify the total count of ds_versions
# is back to the original.
ds_versions = self.client.mgmt_datastore_versions.list()
assert_equal(len(self.ds_versions), len(ds_versions))


@@ -1,209 +0,0 @@
# Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from novaclient.v2.servers import Server
from proboscis import after_class
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_raises
from proboscis import before_class
from proboscis import SkipTest
from proboscis import test
from trove.backup import models as backup_models
from trove.backup import state
from trove.common.context import TroveContext
from trove.common import exception
from trove.extensions.mgmt.instances.models import MgmtInstance
from trove.extensions.mgmt.instances.service import MgmtInstanceController
from trove.instance import models as imodels
from trove.instance.models import DBInstance
from trove.instance import service_status as srvstatus
from trove.instance.tasks import InstanceTasks
from trove.tests.config import CONFIG
from trove.tests.util import create_dbaas_client
from trove.tests.util import test_config
from trove.tests.util.users import Requirements
GROUP = "dbaas.api.mgmt.action.reset-task-status"
class MgmtInstanceBase(object):
def setUp(self):
self._create_instance()
self.controller = MgmtInstanceController()
def tearDown(self):
self.db_info.delete()
def _create_instance(self):
self.context = TroveContext(is_admin=True)
self.tenant_id = 999
self.db_info = DBInstance.create(
id="inst-id-1",
name="instance",
flavor_id=1,
datastore_version_id=test_config.dbaas_datastore_version_id,
tenant_id=self.tenant_id,
volume_size=None,
task_status=InstanceTasks.NONE)
self.server = mock.MagicMock(spec=Server)
self.instance = imodels.Instance(
self.context,
self.db_info,
self.server,
datastore_status=imodels.InstanceServiceStatus(
srvstatus.ServiceStatuses.RUNNING))
def _make_request(self, path='/', context=None, **kwargs):
from webob import Request
path = '/'
print("path: %s" % path)
return Request.blank(path=path, environ={'trove.context': context},
**kwargs)
def _reload_db_info(self):
self.db_info = DBInstance.find_by(id=self.db_info.id, deleted=False)
@test(groups=[GROUP])
class RestartTaskStatusTests(MgmtInstanceBase):
@before_class
def setUp(self):
super(RestartTaskStatusTests, self).setUp()
self.backups_to_clear = []
@after_class
def tearDown(self):
super(RestartTaskStatusTests, self).tearDown()
def _change_task_status_to(self, new_task_status):
self.db_info.task_status = new_task_status
self.db_info.save()
def _make_request(self, path='/', context=None, **kwargs):
req = super(RestartTaskStatusTests, self)._make_request(path, context,
**kwargs)
req.method = 'POST'
body = {'reset-task-status': {}}
return req, body
def reset_task_status(self):
with mock.patch.object(MgmtInstance, 'load') as mock_load:
mock_load.return_value = self.instance
req, body = self._make_request(context=self.context)
self.controller = MgmtInstanceController()
resp = self.controller.action(req, body, self.tenant_id,
self.db_info.id)
mock_load.assert_called_once_with(context=self.context,
id=self.db_info.id)
return resp
@test
def mgmt_restart_task_requires_admin_account(self):
context = TroveContext(is_admin=False)
req, body = self._make_request(context=context)
self.controller = MgmtInstanceController()
assert_raises(exception.Forbidden, self.controller.action,
req, body, self.tenant_id, self.db_info.id)
@test
def mgmt_restart_task_returns_json(self):
resp = self.reset_task_status()
out = resp.data("application/json")
assert_equal(out, None)
@test
def mgmt_restart_task_changes_status_to_none(self):
self._change_task_status_to(InstanceTasks.BUILDING)
self.reset_task_status()
self._reload_db_info()
assert_equal(self.db_info.task_status, InstanceTasks.NONE)
@test
def mgmt_reset_task_status_clears_backups(self):
if CONFIG.fake_mode:
raise SkipTest("Test requires an instance.")
self.reset_task_status()
self._reload_db_info()
assert_equal(self.db_info.task_status, InstanceTasks.NONE)
user = test_config.users.find_user(Requirements(is_admin=False))
dbaas = create_dbaas_client(user)
admin = test_config.users.find_user(Requirements(is_admin=True))
admin_dbaas = create_dbaas_client(admin)
result = dbaas.instances.backups(self.db_info.id)
assert_equal(0, len(result))
# Create some backups.
backup_models.DBBackup.create(
name="forever_new",
description="forever new",
tenant_id=self.tenant_id,
state=state.BackupState.NEW,
instance_id=self.db_info.id,
deleted=False)
backup_models.DBBackup.create(
name="forever_build",
description="forever build",
tenant_id=self.tenant_id,
state=state.BackupState.BUILDING,
instance_id=self.db_info.id,
deleted=False)
backup_models.DBBackup.create(
name="forever_completed",
description="forever completed",
tenant_id=self.tenant_id,
state=state.BackupState.COMPLETED,
instance_id=self.db_info.id,
deleted=False)
# List the backups for this instance.
# There ought to be three in the admin tenant, but
# none in a different user's tenant.
result = dbaas.instances.backups(self.db_info.id)
assert_equal(0, len(result))
result = admin_dbaas.instances.backups(self.db_info.id)
assert_equal(3, len(result))
self.backups_to_clear = result
# Reset the task status.
self.reset_task_status()
self._reload_db_info()
result = admin_dbaas.instances.backups(self.db_info.id)
assert_equal(3, len(result))
for backup in result:
if backup.name == 'forever_completed':
assert_equal(backup.status,
state.BackupState.COMPLETED)
else:
assert_equal(backup.status, state.BackupState.FAILED)
@test(runs_after=[mgmt_reset_task_status_clears_backups])
def clear_test_backups(self):
for backup in self.backups_to_clear:
found_backup = backup_models.DBBackup.find_by(id=backup.id)
found_backup.delete()
admin = test_config.users.find_user(Requirements(is_admin=True))
admin_dbaas = create_dbaas_client(admin)
if not CONFIG.fake_mode:
result = admin_dbaas.instances.backups(self.db_info.id)
assert_equal(0, len(result))


@@ -1,177 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 Rackspace Hosting
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from proboscis import after_class
from proboscis import asserts
from proboscis.asserts import Check
from proboscis import before_class
from proboscis import test
from troveclient.compat import exceptions
from trove.tests.config import CONFIG
from trove.tests.util import create_client
from trove.tests.util import create_dbaas_client
from trove.tests.util import get_standby_instance_flavor
from trove.tests.util.users import Requirements
class QuotasBase(object):
def setUp(self):
self.user1 = CONFIG.users.find_user(Requirements(is_admin=False))
self.user2 = CONFIG.users.find_user(Requirements(is_admin=False))
asserts.assert_not_equal(self.user1.tenant, self.user2.tenant,
"Not enough users to run QuotasTest."
+ " Needs >=2.")
self.client1 = create_dbaas_client(self.user1)
self.client2 = create_dbaas_client(self.user2)
self.mgmt_client = create_client(is_admin=True)
# Original quotas from config:
#   "trove_max_instances_per_tenant": 55,
#   "trove_max_volumes_per_tenant": 100
self.original_quotas1 = self.mgmt_client.quota.show(self.user1.tenant)
self.original_quotas2 = self.mgmt_client.quota.show(self.user2.tenant)
def tearDown(self):
self.mgmt_client.quota.update(self.user1.tenant,
self.original_quotas1)
self.mgmt_client.quota.update(self.user2.tenant,
self.original_quotas2)
@test(groups=["dbaas.api.mgmt.quotas"])
class DefaultQuotasTest(QuotasBase):
@before_class
def setUp(self):
super(DefaultQuotasTest, self).setUp()
@after_class
def tearDown(self):
super(DefaultQuotasTest, self).tearDown()
@test
def check_quotas_are_set_to_defaults(self):
quotas = self.mgmt_client.quota.show(self.user1.tenant)
with Check() as check:
check.equal(CONFIG.trove_max_instances_per_tenant,
quotas["instances"])
check.equal(CONFIG.trove_max_volumes_per_tenant,
quotas["volumes"])
asserts.assert_equal(len(quotas), 2)
@test(groups=["dbaas.api.mgmt.quotas"])
class ChangeInstancesQuota(QuotasBase):
@before_class
def setUp(self):
super(ChangeInstancesQuota, self).setUp()
self.mgmt_client.quota.update(self.user1.tenant, {"instances": 0})
asserts.assert_equal(200, self.mgmt_client.last_http_code)
@after_class
def tearDown(self):
super(ChangeInstancesQuota, self).tearDown()
@test
def check_user2_is_not_affected_on_instances_quota_change(self):
user2_current_quota = self.mgmt_client.quota.show(self.user2.tenant)
asserts.assert_equal(self.original_quotas2, user2_current_quota,
"Changing one user's quota affected another"
+ " user's quota."
+ " Original: %s. After Quota Change: %s" %
(self.original_quotas2, user2_current_quota))
@test
def verify_correct_update(self):
quotas = self.mgmt_client.quota.show(self.user1.tenant)
with Check() as check:
check.equal(0, quotas["instances"])
check.equal(CONFIG.trove_max_volumes_per_tenant,
quotas["volumes"])
asserts.assert_equal(len(quotas), 2)
@test
def create_too_many_instances(self):
flavor, flavor_href = get_standby_instance_flavor(self.client1)
asserts.assert_raises(exceptions.OverLimit,
self.client1.instances.create,
"too_many_instances",
flavor_href, {'size': 1})
asserts.assert_equal(413, self.client1.last_http_code)
@test(groups=["dbaas.api.mgmt.quotas"])
class ChangeVolumesQuota(QuotasBase):
@before_class
def setUp(self):
super(ChangeVolumesQuota, self).setUp()
self.mgmt_client.quota.update(self.user1.tenant, {"volumes": 0})
asserts.assert_equal(200, self.mgmt_client.last_http_code)
@after_class
def tearDown(self):
super(ChangeVolumesQuota, self).tearDown()
@test
def check_volumes_overlimit(self):
flavor, flavor_href = get_standby_instance_flavor(self.client1)
asserts.assert_raises(exceptions.OverLimit,
self.client1.instances.create,
"too_large_volume",
flavor_href,
{'size': CONFIG.trove_max_accepted_volume_size
+ 1})
asserts.assert_equal(413, self.client1.last_http_code)
@test
def check_user2_is_not_affected_on_volumes_quota_change(self):
user2_current_quota = self.mgmt_client.quota.show(self.user2.tenant)
asserts.assert_equal(self.original_quotas2, user2_current_quota,
"Changing one user's quota affected another"
+ " user's quota."
+ " Original: %s. After Quota Change: %s" %
(self.original_quotas2, user2_current_quota))
@test
def verify_correct_update(self):
quotas = self.mgmt_client.quota.show(self.user1.tenant)
with Check() as check:
check.equal(CONFIG.trove_max_instances_per_tenant,
quotas["instances"])
check.equal(0, quotas["volumes"])
asserts.assert_equal(len(quotas), 2)
@test
def create_too_large_volume(self):
flavor, flavor_href = get_standby_instance_flavor(self.client1)
asserts.assert_raises(exceptions.OverLimit,
self.client1.instances.create,
"too_large_volume",
flavor_href,
{'size': CONFIG.trove_max_accepted_volume_size
+ 1})
asserts.assert_equal(413, self.client1.last_http_code)
# TODO: create an instance after the limit is reset, and verify that
# multiple successive quota updates behave as expected.


@@ -1,444 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from time import sleep
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis.asserts import fail
from proboscis.decorators import time_out
from proboscis import SkipTest
from proboscis import test
from troveclient.compat import exceptions
from trove.common.utils import generate_uuid
from trove.common.utils import poll_until
from trove import tests
from trove.tests.api.instances import CheckInstance
from trove.tests.api.instances import instance_info
from trove.tests.api.instances import TIMEOUT_INSTANCE_DELETE
from trove.tests.api.instances import TIMEOUT_INSTANCE_RESTORE
from trove.tests.config import CONFIG
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.util.server_connection import create_server_connection
class SlaveInstanceTestInfo(object):
"""Stores slave instance information."""
def __init__(self):
self.id = None
self.replicated_db = generate_uuid()
slave_instance = SlaveInstanceTestInfo()
existing_db_on_master = generate_uuid()
backup_count = None
def _get_user_count(server_info):
cmd = (
'docker exec -e MYSQL_PWD=$(sudo cat /var/lib/mysql/conf.d/root.cnf | '
'grep password | awk "{print \$3}") database mysql -uroot -N -e '
'"select count(*) from mysql.user where user like \\"slave_%\\""'
)
server = create_server_connection(server_info.id)
try:
stdout = server.execute(cmd)
return int(stdout.rstrip())
except Exception as e:
fail("Failed to execute command: %s, error: %s" % (cmd, str(e)))
def slave_is_running(running=True):
def check_slave_is_running():
server = create_server_connection(slave_instance.id)
cmd = (
'docker exec -e MYSQL_PWD=$(sudo cat '
'/var/lib/mysql/conf.d/root.cnf | grep password '
'| awk "{print \$3}") database mysql -uroot -N -e '
'"SELECT SERVICE_STATE FROM '
'performance_schema.replication_connection_status"'
)
try:
stdout = server.execute(cmd)
stdout = stdout.rstrip()
except Exception as e:
fail("Failed to execute command %s, error: %s" %
(cmd, str(e)))
expected = b"ON" if running else b""
return stdout == expected
return check_slave_is_running
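Closures like `check_slave_is_running` are zero-argument predicates built to be handed to `poll_until`, imported above from `trove.common.utils`. A simplified sketch of such a polling helper (the names and defaults here are assumptions for illustration; the real implementation is richer):

```python
import time


# Simplified sketch of a poll_until-style helper: call the predicate
# repeatedly until it returns a truthy value or the timeout elapses.
def poll_until(predicate, sleep_time=1, time_out=60):
    deadline = time.time() + time_out
    while time.time() < deadline:
        if predicate():
            return
        time.sleep(sleep_time)
    raise RuntimeError("Polling timed out after %s seconds" % time_out)
```

This is why `slave_is_running(running=True)` and `backup_count_matches(count)` return inner functions rather than evaluating immediately: the outer call binds the parameters, and the returned closure is re-evaluated on every poll.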
def backup_count_matches(count):
def check_backup_count_matches():
backup = instance_info.dbaas.instances.backups(instance_info.id)
return count == len(backup)
return check_backup_count_matches
def instance_is_active(id):
instance = instance_info.dbaas.instances.get(id)
if instance.status in CONFIG.running_status:
return True
else:
assert_true(instance.status in ['PROMOTE', 'EJECT', 'BUILD', 'BACKUP'])
return False
def create_slave():
result = instance_info.dbaas.instances.create(
instance_info.name + "_slave",
nics=instance_info.nics,
replica_of=instance_info.id)
assert_equal(200, instance_info.dbaas.last_http_code)
assert_equal("BUILD", result.status)
return result.id
def validate_slave(master, slave):
new_slave = instance_info.dbaas.instances.get(slave.id)
assert_equal(200, instance_info.dbaas.last_http_code)
ns_dict = new_slave._info
CheckInstance(ns_dict).replica_of()
assert_equal(master.id, ns_dict['replica_of']['id'])
def validate_master(master, slaves):
new_master = instance_info.dbaas.instances.get(master.id)
assert_equal(200, instance_info.dbaas.last_http_code)
nm_dict = new_master._info
CheckInstance(nm_dict).slaves()
master_ids = set([replica['id'] for replica in nm_dict['replicas']])
asserted_ids = set([slave.id for slave in slaves])
assert_true(asserted_ids.issubset(master_ids))
@test(depends_on_groups=[tests.DBAAS_API_CONFIGURATIONS],
groups=[tests.DBAAS_API_REPLICATION],
enabled=CONFIG.swift_enabled)
class CreateReplicationSlave(object):
@test
def test_create_db_on_master(self):
"""test_create_db_on_master"""
databases = [{'name': existing_db_on_master}]
# Ensure that the auth_token in the dbaas client is not stale
instance_info.dbaas.authenticate()
instance_info.dbaas.databases.create(instance_info.id, databases)
assert_equal(202, instance_info.dbaas.last_http_code)
@test(runs_after=['test_create_db_on_master'])
def test_create_slave(self):
"""test_create_slave"""
global backup_count
backup_count = len(
instance_info.dbaas.instances.backups(instance_info.id))
slave_instance.id = create_slave()
@test(groups=[tests.DBAAS_API_REPLICATION],
enabled=CONFIG.swift_enabled,
depends_on_classes=[CreateReplicationSlave])
class WaitForCreateSlaveToFinish(object):
"""Wait until the instance is created and set up as slave."""
@test
@time_out(TIMEOUT_INSTANCE_RESTORE)
def test_slave_created(self):
"""Wait for replica to be created."""
poll_until(lambda: instance_is_active(slave_instance.id))
@test(enabled=(not CONFIG.fake_mode and CONFIG.swift_enabled),
depends_on_classes=[WaitForCreateSlaveToFinish],
groups=[tests.DBAAS_API_REPLICATION])
class VerifySlave(object):
def db_is_found(self, database_to_find):
def find_database():
databases = instance_info.dbaas.databases.list(slave_instance.id)
return (database_to_find in [d.name for d in databases])
return find_database
@test
@time_out(10 * 60)
def test_correctly_started_replication(self):
"""test_correctly_started_replication"""
poll_until(slave_is_running())
@test(runs_after=[test_correctly_started_replication])
@time_out(60)
def test_backup_deleted(self):
"""test_backup_deleted"""
poll_until(backup_count_matches(backup_count))
@test(depends_on=[test_correctly_started_replication])
def test_slave_is_read_only(self):
"""test_slave_is_read_only"""
cmd = (
'docker exec -e MYSQL_PWD=$(sudo cat '
'/var/lib/mysql/conf.d/root.cnf | grep password | '
'awk "{print \$3}") database mysql -uroot -NBq -e '
'"select @@read_only"'
)
server = create_server_connection(slave_instance.id)
try:
stdout = server.execute(cmd)
stdout = int(stdout.rstrip())
except Exception as e:
fail("Failed to execute command %s, error: %s" %
(cmd, str(e)))
assert_equal(stdout, 1)
@test(depends_on=[test_slave_is_read_only])
def test_create_db_on_master(self):
"""test_create_db_on_master"""
databases = [{'name': slave_instance.replicated_db}]
instance_info.dbaas.databases.create(instance_info.id, databases)
assert_equal(202, instance_info.dbaas.last_http_code)
@test(depends_on=[test_create_db_on_master])
@time_out(5 * 60)
def test_database_replicated_on_slave(self):
"""test_database_replicated_on_slave"""
poll_until(self.db_is_found(slave_instance.replicated_db))
@test(runs_after=[test_database_replicated_on_slave])
@time_out(5 * 60)
def test_existing_db_exists_on_slave(self):
"""test_existing_db_exists_on_slave"""
poll_until(self.db_is_found(existing_db_on_master))
@test(depends_on=[test_existing_db_exists_on_slave])
def test_slave_user_exists(self):
"""test_slave_user_exists"""
assert_equal(_get_user_count(slave_instance), 1)
assert_equal(_get_user_count(instance_info), 1)
@test(groups=[tests.DBAAS_API_REPLICATION],
depends_on_classes=[VerifySlave],
enabled=CONFIG.swift_enabled)
class TestInstanceListing(object):
"""Test replication information in instance listing."""
@test
def test_get_slave_instance(self):
"""test_get_slave_instance"""
validate_slave(instance_info, slave_instance)
@test
def test_get_master_instance(self):
"""test_get_master_instance"""
validate_master(instance_info, [slave_instance])
@test(groups=[tests.DBAAS_API_REPLICATION],
depends_on_classes=[TestInstanceListing],
enabled=CONFIG.swift_enabled)
class TestReplicationFailover(object):
"""Test replication failover functionality."""
@staticmethod
def promote(master, slave):
if CONFIG.fake_mode:
raise SkipTest("promote_replica_source not supported in fake mode")
instance_info.dbaas.instances.promote_to_replica_source(slave)
assert_equal(202, instance_info.dbaas.last_http_code)
poll_until(lambda: instance_is_active(slave.id))
validate_master(slave, [master])
validate_slave(slave, master)
@test
def test_promote_master(self):
if CONFIG.fake_mode:
raise SkipTest("promote_master not supported in fake mode")
assert_raises(exceptions.BadRequest,
instance_info.dbaas.instances.promote_to_replica_source,
instance_info.id)
@test
def test_eject_slave(self):
if CONFIG.fake_mode:
raise SkipTest("eject_replica_source not supported in fake mode")
assert_raises(exceptions.BadRequest,
instance_info.dbaas.instances.eject_replica_source,
slave_instance.id)
@test
def test_eject_valid_master(self):
if CONFIG.fake_mode:
raise SkipTest("eject_replica_source not supported in fake mode")
# assert_raises(exceptions.BadRequest,
# instance_info.dbaas.instances.eject_replica_source,
# instance_info.id)
# Uncomment once BUG_EJECT_VALID_MASTER is fixed
raise SkipKnownBug(runners.BUG_EJECT_VALID_MASTER)
@test(depends_on=[test_promote_master, test_eject_slave,
test_eject_valid_master])
def test_promote_to_replica_source(self):
"""test_promote_to_replica_source"""
TestReplicationFailover.promote(instance_info, slave_instance)
@test(depends_on=[test_promote_to_replica_source])
def test_promote_back_to_replica_source(self):
"""test_promote_back_to_replica_source"""
TestReplicationFailover.promote(slave_instance, instance_info)
@test(depends_on=[test_promote_back_to_replica_source], enabled=False)
def add_second_slave(self):
"""add_second_slave"""
if CONFIG.fake_mode:
raise SkipTest("three site promote not supported in fake mode")
self._third_slave = SlaveInstanceTestInfo()
self._third_slave.id = create_slave()
poll_until(lambda: instance_is_active(self._third_slave.id))
poll_until(slave_is_running())
sleep(15)
validate_master(instance_info, [slave_instance, self._third_slave])
validate_slave(instance_info, self._third_slave)
@test(depends_on=[add_second_slave], enabled=False)
def test_three_site_promote(self):
"""Promote the second slave"""
if CONFIG.fake_mode:
raise SkipTest("three site promote not supported in fake mode")
TestReplicationFailover.promote(instance_info, self._third_slave)
validate_master(self._third_slave, [slave_instance, instance_info])
validate_slave(self._third_slave, instance_info)
@test(depends_on=[test_three_site_promote], enabled=False)
def disable_master(self):
"""Stop trove-guestagent on master"""
if CONFIG.fake_mode:
raise SkipTest("eject_replica_source not supported in fake mode")
cmd = "sudo systemctl stop guest-agent.service"
server = create_server_connection(self._third_slave.id)
try:
stdout = server.execute(cmd)
stdout = int(stdout.rstrip())
except Exception as e:
fail("Failed to execute command %s, error: %s" %
(cmd, str(e)))
assert_equal(stdout, 1)
@test(depends_on=[disable_master], enabled=False)
def test_eject_replica_master(self):
if CONFIG.fake_mode:
raise SkipTest("eject_replica_source not supported in fake mode")
sleep(70)
instance_info.dbaas.instances.eject_replica_source(self._third_slave)
assert_equal(202, instance_info.dbaas.last_http_code)
poll_until(lambda: instance_is_active(self._third_slave.id))
validate_master(instance_info, [slave_instance])
validate_slave(instance_info, slave_instance)
@test(groups=[tests.DBAAS_API_REPLICATION],
depends_on=[TestReplicationFailover],
enabled=CONFIG.swift_enabled)
class DetachReplica(object):
@test
def delete_before_detach_replica(self):
assert_raises(exceptions.Forbidden,
instance_info.dbaas.instances.delete,
instance_info.id)
@test
@time_out(5 * 60)
def test_detach_replica(self):
"""test_detach_replica"""
if CONFIG.fake_mode:
raise SkipTest("Detach replica not supported in fake mode")
instance_info.dbaas.instances.update(slave_instance.id,
detach_replica_source=True)
assert_equal(202, instance_info.dbaas.last_http_code)
poll_until(slave_is_running(False))
@test(depends_on=[test_detach_replica])
@time_out(5 * 60)
def test_slave_is_not_read_only(self):
"""test_slave_is_not_read_only"""
if CONFIG.fake_mode:
raise SkipTest("Test not_read_only not supported in fake mode")
# wait until replica is no longer read only
def check_not_read_only():
cmd = (
'docker exec -e MYSQL_PWD=$(sudo cat '
'/var/lib/mysql/conf.d/root.cnf | grep password | '
'awk "{print \$3}") database mysql -uroot -NBq -e '
'"select @@read_only"'
)
server = create_server_connection(slave_instance.id)
try:
stdout = server.execute(cmd)
stdout = int(stdout)
except Exception:
return False
return stdout == 0
poll_until(check_not_read_only)
@test(groups=[tests.DBAAS_API_REPLICATION],
depends_on=[DetachReplica],
enabled=CONFIG.swift_enabled)
class DeleteSlaveInstance(object):
@test
@time_out(TIMEOUT_INSTANCE_DELETE)
def test_delete_slave_instance(self):
"""test_delete_slave_instance"""
instance_info.dbaas.instances.delete(slave_instance.id)
assert_equal(202, instance_info.dbaas.last_http_code)
def instance_is_gone():
try:
instance_info.dbaas.instances.get(slave_instance.id)
return False
except exceptions.NotFound:
return True
poll_until(instance_is_gone)
assert_raises(exceptions.NotFound, instance_info.dbaas.instances.get,
slave_instance.id)


@@ -1,176 +0,0 @@
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nose.plugins.skip import SkipTest
import proboscis
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_false
from proboscis.asserts import assert_not_equal
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.api import instances
from trove.tests.util import test_config
@test(groups=[tests.DBAAS_API_USERS_ROOT],
depends_on_groups=[tests.DBAAS_API_INSTANCES])
class TestRoot(object):
root_enabled_timestamp = 'Never'
@proboscis.before_class
def setUp(self):
# Reuse the instance created previously.
self.id = instances.instance_info.id
self.dbaas = instances.instance_info.dbaas
self.dbaas_admin = instances.instance_info.dbaas_admin
def _verify_root_timestamp(self, id):
reh = self.dbaas_admin.management.root_enabled_history(id)
timestamp = reh.enabled
assert_equal(self.root_enabled_timestamp, timestamp)
assert_equal(id, reh.id)
def _root(self):
self.dbaas.root.create(self.id)
assert_equal(200, self.dbaas.last_http_code)
reh = self.dbaas_admin.management.root_enabled_history
self.root_enabled_timestamp = reh(self.id).enabled
@test
def test_root_initially_disabled(self):
"""Test that root is disabled."""
enabled = self.dbaas.root.is_root_enabled(self.id)
assert_equal(200, self.dbaas.last_http_code)
is_enabled = enabled
if hasattr(enabled, 'rootEnabled'):
is_enabled = enabled.rootEnabled
assert_false(is_enabled, "Root SHOULD NOT be enabled.")
@test
def test_create_user_os_admin_failure(self):
users = [{"name": "os_admin", "password": "12345"}]
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
self.id, users)
@test
def test_delete_user_os_admin_failure(self):
assert_raises(exceptions.BadRequest, self.dbaas.users.delete,
self.id, "os_admin")
@test(depends_on=[test_root_initially_disabled],
enabled=not test_config.values['root_removed_from_instance_api'])
def test_root_initially_disabled_details(self):
"""Use instance details to test that root is disabled."""
instance = self.dbaas.instances.get(self.id)
assert_true(hasattr(instance, 'rootEnabled'),
"Instance has no rootEnabled property.")
assert_false(instance.rootEnabled, "Root SHOULD NOT be enabled.")
assert_equal(self.root_enabled_timestamp, 'Never')
@test(depends_on=[test_root_initially_disabled_details])
def test_root_disabled_in_mgmt_api(self):
"""Verifies in the management api that the timestamp exists."""
self._verify_root_timestamp(self.id)
@test(depends_on=[test_root_initially_disabled_details])
def test_root_disable_when_root_not_enabled(self):
reh = self.dbaas_admin.management.root_enabled_history
self.root_enabled_timestamp = reh(self.id).enabled
assert_raises(exceptions.NotFound, self.dbaas.root.delete,
self.id)
self._verify_root_timestamp(self.id)
@test(depends_on=[test_root_disable_when_root_not_enabled])
def test_enable_root(self):
self._root()
@test(depends_on=[test_enable_root])
def test_enabled_timestamp(self):
assert_not_equal(self.root_enabled_timestamp, 'Never')
@test(depends_on=[test_enable_root])
def test_root_not_in_users_list(self):
"""
Tests that despite having enabled root, user root doesn't appear
in the users list for the instance.
"""
users = self.dbaas.users.list(self.id)
usernames = [user.name for user in users]
assert_true('root' not in usernames)
@test(depends_on=[test_enable_root])
def test_root_now_enabled(self):
"""Test that root is now enabled."""
enabled = self.dbaas.root.is_root_enabled(self.id)
assert_equal(200, self.dbaas.last_http_code)
assert_true(enabled, "Root SHOULD be enabled.")
@test(depends_on=[test_root_now_enabled],
enabled=not test_config.values['root_removed_from_instance_api'])
def test_root_now_enabled_details(self):
"""Use instance details to test that root is now enabled."""
instance = self.dbaas.instances.get(self.id)
assert_true(hasattr(instance, 'rootEnabled'),
"Instance has no rootEnabled property.")
assert_true(instance.rootEnabled, "Root SHOULD be enabled.")
assert_not_equal(self.root_enabled_timestamp, 'Never')
self._verify_root_timestamp(self.id)
@test(depends_on=[test_root_now_enabled_details])
def test_reset_root(self):
if test_config.values['root_timestamp_disabled']:
raise SkipTest("Enabled timestamp not enabled yet")
old_ts = self.root_enabled_timestamp
self._root()
assert_not_equal(self.root_enabled_timestamp, 'Never')
assert_equal(self.root_enabled_timestamp, old_ts)
@test(depends_on=[test_reset_root])
def test_root_still_enabled(self):
"""Test that after root was reset it's still enabled."""
enabled = self.dbaas.root.is_root_enabled(self.id)
assert_equal(200, self.dbaas.last_http_code)
assert_true(enabled, "Root SHOULD still be enabled.")
@test(depends_on=[test_root_still_enabled],
enabled=not test_config.values['root_removed_from_instance_api'])
def test_root_still_enabled_details(self):
"""Use instance details to test that after root was reset,
it's still enabled.
"""
instance = self.dbaas.instances.get(self.id)
assert_true(hasattr(instance, 'rootEnabled'),
"Instance has no rootEnabled property.")
assert_true(instance.rootEnabled, "Root SHOULD still be enabled.")
assert_not_equal(self.root_enabled_timestamp, 'Never')
self._verify_root_timestamp(self.id)
@test(depends_on=[test_enable_root])
def test_root_cannot_be_deleted(self):
"""Even if root was enabled, the user root cannot be deleted."""
assert_raises(exceptions.BadRequest, self.dbaas.users.delete,
self.id, "root")
@test(depends_on=[test_root_still_enabled_details])
def test_root_disable(self):
reh = self.dbaas_admin.management.root_enabled_history
self.root_enabled_timestamp = reh(self.id).enabled
self.dbaas.root.delete(self.id)
assert_equal(204, self.dbaas.last_http_code)
self._verify_root_timestamp(self.id)
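`test_reset_root` above asserts that re-enabling root preserves the original enabled timestamp. A toy model of that history behavior can be sketched as follows; this is an assumption about the management API's observed semantics, not Trove's actual implementation:

```python
class RootHistory:
    """Toy model: records only the first time root was enabled."""

    def __init__(self):
        # Matches the tests' sentinel for "root never enabled".
        self.enabled = 'Never'

    def enable_root(self, timestamp):
        # Re-enabling root must not overwrite the original timestamp.
        if self.enabled == 'Never':
            self.enabled = timestamp
        return self.enabled
```

Under this model, a second `enable_root` call returns the timestamp from the first call, which is exactly what `test_reset_root` checks with `assert_equal(self.root_enabled_timestamp, old_ts)`.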


@ -1,503 +0,0 @@
# Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from random import choice
from proboscis import after_class
from proboscis import asserts
from proboscis import before_class
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.api.instances import instance_info
from trove.tests import util
from trove.tests.util import test_config
FAKE = test_config.values['fake_mode']
class UserAccessBase(object):
"""
Base class for Positive and Negative TestUserAccess classes
"""
users = []
databases = []
def set_up(self):
self.dbaas = util.create_dbaas_client(instance_info.user)
self.users = ["test_access_user"]
self.databases = [("test_access_db%02i" % i) for i in range(4)]
def _user_list_from_names(self, usernames):
return [{"name": name,
"password": "password",
"databases": []} for name in usernames]
def _grant_access_singular(self, user, databases, expected_response=202):
"""Grant a single user access to the databases listed.
Potentially, expect an exception in the process.
"""
try:
self.dbaas.users.grant(instance_info.id, user, databases)
except exceptions.BadRequest:
asserts.assert_equal(400, expected_response)
except exceptions.NotFound:
asserts.assert_equal(404, expected_response)
except exceptions.ClientException:
asserts.assert_equal(500, expected_response)
finally:
asserts.assert_equal(expected_response, self.dbaas.last_http_code)
def _grant_access_plural(self, users, databases, expected_response=202):
"""Grant each user in the list access to all the databases listed.
Potentially, expect an exception in the process.
"""
for user in users:
self._grant_access_singular(user, databases, expected_response)
def _revoke_access_singular(self, user, database, expected_response=202):
"""Revoke a user's access to the given database.
Potentially, expect an exception in the process.
"""
try:
self.dbaas.users.revoke(instance_info.id, user, database)
asserts.assert_equal(expected_response, self.dbaas.last_http_code)
except exceptions.BadRequest:
asserts.assert_equal(400, self.dbaas.last_http_code)
except exceptions.NotFound:
asserts.assert_equal(404, self.dbaas.last_http_code)
def _revoke_access_plural(self, users, databases, expected_response=202):
"""Revoke from each user access to each database.
Potentially, expect an exception in the process.
"""
for user in users:
for database in databases:
self._revoke_access_singular(user,
database,
expected_response)
def _test_access(self, users, databases, expected_response=200):
"""Verify that each user in the list has access to each database in
the list.
"""
for user in users:
access = self.dbaas.users.list_access(instance_info.id, user)
asserts.assert_equal(expected_response, self.dbaas.last_http_code)
access = [db.name for db in access]
asserts.assert_equal(set(access), set(databases))
def _test_ignore_access(self, users, databases, expected_response=200):
databases = [d for d in databases if d not in ['lost+found',
'mysql',
'information_schema']]
self._test_access(users, databases, expected_response)
def _reset_access(self):
for user in self.users:
for database in self.databases + self.ghostdbs:
try:
self.dbaas.users.revoke(instance_info.id, user, database)
asserts.assert_true(self.dbaas.last_http_code in [202, 404])
except exceptions.NotFound:
# This is all right here, since we're resetting.
pass
self._test_access(self.users, [])
@test(depends_on_groups=[tests.DBAAS_API_USERS],
groups=[tests.DBAAS_API_USERS_ACCESS])
class TestUserAccessPasswordChange(UserAccessBase):
"""Test that change_password works."""
@before_class
def setUp(self):
super(TestUserAccessPasswordChange, self).set_up()
def _check_mysql_connection(self, username, password, success=True):
# This can only test connections for users with the host %.
# Much more difficult to simulate connection attempts from other hosts.
if FAKE:
# "Fake mode; cannot test mysql connection."
return
conn = util.mysql_connection()
if success:
conn.create(instance_info.get_address(), username, password)
else:
conn.assert_fails(instance_info.get_address(), username, password)
def _pick_a_user(self):
users = self._user_list_from_names(self.users)
return choice(users) # Pick one, it doesn't matter.
@test()
def test_change_password_bogus_user(self):
user = self._pick_a_user()
user["name"] = "thisuserhasanamethatstoolong"
asserts.assert_raises(exceptions.BadRequest,
self.dbaas.users.change_passwords,
instance_info.id, [user])
asserts.assert_equal(400, self.dbaas.last_http_code)
@test()
def test_change_password_nonexistent_user(self):
user = self._pick_a_user()
user["name"] = "thisuserDNE"
asserts.assert_raises(exceptions.NotFound,
self.dbaas.users.change_passwords,
instance_info.id, [user])
asserts.assert_equal(404, self.dbaas.last_http_code)
@test()
def test_create_user_and_dbs(self):
users = self._user_list_from_names(self.users)
# Default password for everyone is 'password'.
self.dbaas.users.create(instance_info.id, users)
asserts.assert_equal(202, self.dbaas.last_http_code)
databases = [{"name": db}
for db in self.databases]
self.dbaas.databases.create(instance_info.id, databases)
asserts.assert_equal(202, self.dbaas.last_http_code)
@test(depends_on=[test_create_user_and_dbs])
def test_initial_connection(self):
user = self._pick_a_user()
self._check_mysql_connection(user["name"], "password")
@test(depends_on=[test_initial_connection])
def test_change_password(self):
# Doesn't actually change anything, just tests that the call doesn't
# have any problems. As an aside, also checks that a user can
# change its password to the same thing again.
user = self._pick_a_user()
password = user["password"]
self.dbaas.users.change_passwords(instance_info.id, [user])
asserts.assert_equal(202, self.dbaas.last_http_code)
self._check_mysql_connection(user["name"], password)
@test(depends_on=[test_change_password])
def test_change_password_back(self):
"""Test change and restore user password."""
user = self._pick_a_user()
old_password = user["password"]
new_password = "NEWPASSWORD"
user["password"] = new_password
self.dbaas.users.change_passwords(instance_info.id, [user])
asserts.assert_equal(202, self.dbaas.last_http_code)
self._check_mysql_connection(user["name"], new_password)
user["password"] = old_password
self.dbaas.users.change_passwords(instance_info.id, [user])
asserts.assert_equal(202, self.dbaas.last_http_code)
self._check_mysql_connection(user["name"], old_password)
@after_class(always_run=True)
def tearDown(self):
for database in self.databases:
self.dbaas.databases.delete(instance_info.id, database)
asserts.assert_equal(202, self.dbaas.last_http_code)
for username in self.users:
self.dbaas.users.delete(instance_info.id, username)
@test(depends_on_classes=[TestUserAccessPasswordChange],
groups=[tests.DBAAS_API_USERS_ACCESS])
class TestUserAccessPositive(UserAccessBase):
"""Test the creation and deletion of user grants."""
@before_class
def setUp(self):
super(TestUserAccessPositive, self).set_up()
# None of the ghosts are real databases or users.
self.ghostdbs = ["test_user_access_ghost_db"]
self.ghostusers = ["test_ghostuser"]
self.revokedbs = self.databases[:1]
self.remainingdbs = self.databases[1:]
def _ensure_nothing_else_created(self):
# Make sure grants and revokes do not create users or databases.
databases = self.dbaas.databases.list(instance_info.id)
database_names = [db.name for db in databases]
for ghost in self.ghostdbs:
asserts.assert_true(ghost not in database_names)
users = self.dbaas.users.list(instance_info.id)
user_names = [user.name for user in users]
for ghost in self.ghostusers:
asserts.assert_true(ghost not in user_names)
@test()
def test_create_user_and_dbs(self):
users = self._user_list_from_names(self.users)
self.dbaas.users.create(instance_info.id, users)
asserts.assert_equal(202, self.dbaas.last_http_code)
databases = [{"name": db}
for db in self.databases]
self.dbaas.databases.create(instance_info.id, databases)
asserts.assert_equal(202, self.dbaas.last_http_code)
@test(depends_on=[test_create_user_and_dbs])
def test_no_access(self):
# No users have any access to any database.
self._reset_access()
self._test_access(self.users, [])
@test(depends_on=[test_no_access])
def test_grant_full_access(self):
# The users are granted access to all test databases.
self._reset_access()
self._grant_access_plural(self.users, self.databases)
self._test_access(self.users, self.databases)
@test(depends_on=[test_no_access])
def test_grant_full_access_ignore_databases(self):
# The users are granted access to all test databases.
all_dbs = []
all_dbs.extend(self.databases)
all_dbs.extend(['lost+found', 'mysql', 'information_schema'])
self._reset_access()
self._grant_access_plural(self.users, self.databases)
self._test_ignore_access(self.users, all_dbs)
@test(depends_on=[test_grant_full_access])
def test_grant_idempotence(self):
# Grant operations can be repeated with no ill effects.
self._reset_access()
for repeat in range(3):
self._grant_access_plural(self.users, self.databases)
self._test_access(self.users, self.databases)
@test(depends_on=[test_grant_full_access])
def test_revoke_one_database(self):
# Revoking permission removes that database from a user's list.
self._reset_access()
self._grant_access_plural(self.users, self.databases)
self._test_access(self.users, self.databases)
self._revoke_access_plural(self.users, self.revokedbs)
self._test_access(self.users, self.remainingdbs)
@test(depends_on=[test_grant_full_access])
def test_revoke_non_idempotence(self):
# Revoking access cannot be repeated.
self._reset_access()
self._grant_access_plural(self.users, self.databases)
self._revoke_access_plural(self.users, self.revokedbs)
self._revoke_access_plural(self.users, self.revokedbs, 404)
self._test_access(self.users, self.remainingdbs)
@test(depends_on=[test_grant_full_access])
def test_revoke_all_access(self):
# Revoking access to all databases will leave their access empty.
self._reset_access()
self._grant_access_plural(self.users, self.databases)
self._revoke_access_plural(self.users, self.revokedbs)
self._test_access(self.users, self.remainingdbs)
@test(depends_on=[test_grant_full_access])
def test_grant_ghostdbs(self):
# Grants to imaginary databases are acceptable, and are honored.
self._reset_access()
self._ensure_nothing_else_created()
self._grant_access_plural(self.users, self.ghostdbs)
self._ensure_nothing_else_created()
@test(depends_on=[test_grant_full_access])
def test_revoke_ghostdbs(self):
# Revokes to imaginary databases are acceptable, and are honored.
self._reset_access()
self._ensure_nothing_else_created()
self._grant_access_plural(self.users, self.ghostdbs)
self._revoke_access_plural(self.users, self.ghostdbs)
self._ensure_nothing_else_created()
@test(depends_on=[test_grant_full_access])
def test_grant_ghostusers(self):
# You cannot grant permissions to imaginary users, as imaginary users
# don't have passwords we can pull from the mysql.user table.
self._reset_access()
self._grant_access_plural(self.ghostusers, self.databases, 404)
@test(depends_on=[test_grant_full_access])
def test_revoke_ghostusers(self):
# You cannot revoke permissions from imaginary users, as imaginary
# users don't have passwords we can pull from the mysql.user table.
self._reset_access()
self._revoke_access_plural(self.ghostusers, self.databases, 404)
@after_class(always_run=True)
def tearDown(self):
self._reset_access()
for database in self.databases:
self.dbaas.databases.delete(instance_info.id, database)
asserts.assert_equal(202, self.dbaas.last_http_code)
for username in self.users:
self.dbaas.users.delete(instance_info.id, username)
@test(depends_on_classes=[TestUserAccessPositive],
groups=[tests.DBAAS_API_USERS_ACCESS])
class TestUserAccessNegative(UserAccessBase):
"""Negative tests for the creation and deletion of user grants."""
@before_class
def setUp(self):
super(TestUserAccessNegative, self).set_up()
self.users = ["qe_user?neg3F", "qe_user#neg23"]
self.databases = [("qe_user_neg_db%02i" % i) for i in range(2)]
self.ghostdbs = []
def _add_users(self, users, expected_response=202):
user_list = self._user_list_from_names(users)
try:
self.dbaas.users.create(instance_info.id, user_list)
asserts.assert_equal(self.dbaas.last_http_code, 202)
except exceptions.BadRequest:
asserts.assert_equal(self.dbaas.last_http_code, 400)
asserts.assert_equal(expected_response, self.dbaas.last_http_code)
@test()
def test_create_duplicate_user_and_dbs(self):
"""
Create the same user to the first DB - allowed, not part of change
"""
users = self._user_list_from_names(self.users)
self.dbaas.users.create(instance_info.id, users)
asserts.assert_equal(202, self.dbaas.last_http_code)
databases = [{"name": db} for db in self.databases]
self.dbaas.databases.create(instance_info.id, databases)
asserts.assert_equal(202, self.dbaas.last_http_code)
@test(depends_on=[test_create_duplicate_user_and_dbs])
def test_neg_duplicate_useraccess(self):
"""
Grant duplicate users access to all databases.
"""
username = "qe_user.neg2E"
self._add_users([username])
self._add_users([username], 400)
for repeat in range(3):
self._grant_access_plural(self.users, self.databases)
self._test_access(self.users, self.databases)
@test()
def test_re_create_user(self):
user_list = ["re_create_user"]
# create, grant, then check a new user
self._add_users(user_list)
self._test_access(user_list, [])
self._grant_access_singular(user_list[0], self.databases)
self._test_access(user_list, self.databases)
# drop the user temporarily
self.dbaas.users.delete(instance_info.id, user_list[0])
# check his access - user should not be found
asserts.assert_raises(exceptions.NotFound,
self.dbaas.users.list_access,
instance_info.id,
user_list[0])
# re-create the user
self._add_users(user_list)
# check his access - should not exist
self._test_access(user_list, [])
# grant user access to all databases.
self._grant_access_singular(user_list[0], self.databases)
# check his access - user should exist
self._test_access(user_list, self.databases)
# revoke users access
self._revoke_access_plural(user_list, self.databases)
def _negative_user_test(self, username, databases,
create_response=202, grant_response=202,
access_response=200, revoke_response=202):
# Try and fail to create the user.
self._add_users([username], create_response)
self._grant_access_singular(username, databases, grant_response)
access = None
try:
access = self.dbaas.users.list_access(instance_info.id, username)
asserts.assert_equal(200, self.dbaas.last_http_code)
except exceptions.BadRequest:
asserts.assert_equal(400, self.dbaas.last_http_code)
except exceptions.NotFound:
asserts.assert_equal(404, self.dbaas.last_http_code)
finally:
asserts.assert_equal(access_response, self.dbaas.last_http_code)
if access is not None:
access = [db.name for db in access]
asserts.assert_equal(set(access), set(self.databases))
self._revoke_access_plural([username], databases, revoke_response)
@test
def test_user_withperiod(self):
# This is actually fine; we escape dots in the user-host pairing.
self._negative_user_test("test.user", self.databases)
@test
def test_user_empty_no_host(self):
# This creates a request to .../<instance-id>/users//databases,
# which is parsed to mean "show me user 'databases'", which in this
# case is a valid username, but not one belonging to an existing user.
self._negative_user_test("", self.databases, 400, 500, 404, 404)
@test
def test_user_empty_with_host(self):
# self._negative_user_test("", self.databases, 400, 400, 400, 400)
# Try and fail to create the user.
empty_user = {"name": "", "host": "%",
"password": "password", "databases": []}
asserts.assert_raises(exceptions.BadRequest,
self.dbaas.users.create,
instance_info.id,
[empty_user])
asserts.assert_equal(400, self.dbaas.last_http_code)
asserts.assert_raises(exceptions.BadRequest, self.dbaas.users.grant,
instance_info.id, "", [], "%")
asserts.assert_equal(400, self.dbaas.last_http_code)
asserts.assert_raises(exceptions.BadRequest,
self.dbaas.users.list_access,
instance_info.id, "", "%")
asserts.assert_equal(400, self.dbaas.last_http_code)
asserts.assert_raises(exceptions.BadRequest, self.dbaas.users.revoke,
instance_info.id, "", "db", "%")
asserts.assert_equal(400, self.dbaas.last_http_code)
@test
def test_user_nametoolong(self):
# You cannot create a user with this name.
# Grant revoke, and access filter this username as invalid.
self._negative_user_test("exceed_limit_user", self.databases,
400, 400, 400, 400)
@test
def test_user_allspaces(self):
self._negative_user_test(" ", self.databases, 400, 400, 400, 400)
@after_class(always_run=True)
def tearDown(self):
self._reset_access()
for database in self.databases:
self.dbaas.databases.delete(instance_info.id, database)
asserts.assert_equal(202, self.dbaas.last_http_code)
for username in self.users:
self.dbaas.users.delete(instance_info.id, username)
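The `_test_ignore_access` helper above filters MySQL system schemas out of the expected access list before comparing. The same filter can be sketched stand-alone (the schema names are taken from the helper; the function name here is illustrative, not Trove's):

```python
# Schemas the access listing is expected to hide, per _test_ignore_access.
SYSTEM_SCHEMAS = frozenset(['lost+found', 'mysql', 'information_schema'])


def visible_databases(databases):
    """Return only the databases a user-access listing should show."""
    return [db for db in databases if db not in SYSTEM_SCHEMAS]
```

Any user-created database passes through unchanged, so comparing `set(access)` against `set(visible_databases(expected))` ignores grants on system schemas.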


@ -1,446 +0,0 @@
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from urllib import parse as urllib_parse
from proboscis import after_class
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_false
from proboscis.asserts import assert_raises
from proboscis.asserts import assert_true
from proboscis.asserts import fail
from proboscis import before_class
from proboscis import test
from troveclient.compat import exceptions
from trove import tests
from trove.tests.api.instances import instance_info
from trove.tests import util
from trove.tests.util import test_config
FAKE = test_config.values['fake_mode']
@test(depends_on_groups=[tests.DBAAS_API_USERS_ROOT],
groups=[tests.DBAAS_API_USERS],
enabled=not test_config.values['fake_mode'])
class TestMysqlAccessNegative(object):
"""Make sure that MySQL server was secured."""
@test
def test_mysql_admin(self):
"""Ensure we aren't allowed access with os_admin and wrong password."""
util.mysql_connection().assert_fails(
instance_info.get_address(), "os_admin", "asdfd-asdf234")
@test
def test_mysql_root(self):
"""Ensure we aren't allowed access with root and wrong password."""
util.mysql_connection().assert_fails(
instance_info.get_address(), "root", "dsfgnear")
@test(depends_on_classes=[TestMysqlAccessNegative],
groups=[tests.DBAAS_API_USERS])
class TestUsers(object):
"""
Test the creation and deletion of users
"""
username = "tes!@#tuser"
password = "testpa$^%ssword"
username1 = "anous*&^er"
password1 = "anopas*?.sword"
db1 = "usersfirstdb"
db2 = "usersseconddb"
created_users = [username, username1]
system_users = ['root', 'debian_sys_maint']
def __init__(self):
self.dbaas = util.create_dbaas_client(instance_info.user)
self.dbaas_admin = util.create_dbaas_client(instance_info.admin_user)
@before_class
def setUp(self):
databases = [{"name": self.db1, "character_set": "latin2",
"collate": "latin2_general_ci"},
{"name": self.db2}]
try:
self.dbaas.databases.create(instance_info.id, databases)
except exceptions.BadRequest as e:
if "Validation error" in str(e):
raise
if not FAKE:
time.sleep(5)
@after_class
def tearDown(self):
self.dbaas.databases.delete(instance_info.id, self.db1)
self.dbaas.databases.delete(instance_info.id, self.db2)
@test()
def test_delete_nonexistent_user(self):
assert_raises(exceptions.NotFound, self.dbaas.users.delete,
instance_info.id, "thisuserDNE")
assert_equal(404, self.dbaas.last_http_code)
@test()
def test_create_users(self):
users = []
users.append({"name": self.username, "password": self.password,
"databases": [{"name": self.db1}]})
users.append({"name": self.username1, "password": self.password1,
"databases": [{"name": self.db1}, {"name": self.db2}]})
self.dbaas.users.create(instance_info.id, users)
assert_equal(202, self.dbaas.last_http_code)
# Do we need this?
if not FAKE:
time.sleep(5)
self.check_database_for_user(self.username, self.password,
[self.db1])
self.check_database_for_user(self.username1, self.password1,
[self.db1, self.db2])
@test(depends_on=[test_create_users])
def test_create_users_list(self):
# tests for users that should be listed
users = self.dbaas.users.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
found = False
for user in self.created_users:
for result in users:
if user == result.name:
found = True
assert_true(found, "User '%s' not found in result" % user)
found = False
@test(depends_on=[test_create_users])
def test_fails_when_creating_user_twice(self):
users = []
users.append({"name": self.username, "password": self.password,
"databases": [{"name": self.db1}]})
users.append({"name": self.username1, "password": self.password1,
"databases": [{"name": self.db1}, {"name": self.db2}]})
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
instance_info.id, users)
assert_equal(400, self.dbaas.last_http_code)
@test(depends_on=[test_create_users_list])
def test_cannot_create_root_user(self):
# Tests that the user root (in Config:ignore_users) cannot be created.
users = [{"name": "root", "password": "12345",
"databases": [{"name": self.db1}]}]
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
instance_info.id, users)
@test(depends_on=[test_create_users_list])
def test_get_one_user(self):
user = self.dbaas.users.get(instance_info.id, username=self.username,
hostname='%')
assert_equal(200, self.dbaas.last_http_code)
assert_equal(user.name, self.username)
assert_equal(1, len(user.databases))
for db in user.databases:
assert_equal(db["name"], self.db1)
self.check_database_for_user(self.username, self.password, [self.db1])
@test(depends_on=[test_create_users_list])
def test_create_users_list_system(self):
# tests for users that should not be listed
users = self.dbaas.users.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
for user in self.system_users:
found = any(result.name == user for result in users)
msg = "User '%s' SHOULD NOT BE found in result" % user
assert_false(found, msg)
@test(depends_on=[test_create_users_list],
runs_after=[test_fails_when_creating_user_twice])
def test_delete_users(self):
self.dbaas.users.delete(instance_info.id, self.username, hostname='%')
assert_equal(202, self.dbaas.last_http_code)
self.dbaas.users.delete(instance_info.id, self.username1, hostname='%')
assert_equal(202, self.dbaas.last_http_code)
if not FAKE:
time.sleep(5)
self._check_connection(self.username, self.password)
self._check_connection(self.username1, self.password1)
@test(depends_on=[test_create_users_list, test_delete_users])
def test_hostnames_default_if_not_present(self):
# These tests rely on test_delete_users having run, as they create
# users used only by these tests.
username = "testuser_nohost"
user = {"name": username, "password": "password", "databases": []}
self.dbaas.users.create(instance_info.id, [user])
user["host"] = "%"
# Can't create the user a second time if it already exists.
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
instance_info.id, [user])
self.dbaas.users.delete(instance_info.id, username)
@test(depends_on=[test_create_users_list, test_delete_users])
def test_hostnames_make_users_unique(self):
"""test_hostnames_make_users_unique."""
username = "testuser_unique"
hostnames = ["192.168.0.1", "192.168.0.2"]
users = [{"name": username, "password": "password", "databases": [],
"host": hostname}
for hostname in hostnames]
# Nothing wrong with creating two users with the same name, so long
# as their hosts are different.
self.dbaas.users.create(instance_info.id, users)
for hostname in hostnames:
self.dbaas.users.delete(instance_info.id, username,
hostname=hostname)
@test()
def test_updateduser_newname_host_unique(self):
# The updated_username@hostname should not exist already
users = []
old_name = "testuser1"
hostname = "192.168.0.1"
users.append({"name": old_name, "password": "password",
"host": hostname, "databases": []})
users.append({"name": "testuser2", "password": "password",
"host": hostname, "databases": []})
self.dbaas.users.create(instance_info.id, users)
user_new = {"name": "testuser2"}
assert_raises(exceptions.BadRequest,
self.dbaas.users.update_attributes, instance_info.id,
old_name, user_new, hostname)
assert_equal(400, self.dbaas.last_http_code)
self.dbaas.users.delete(instance_info.id, old_name, hostname=hostname)
self.dbaas.users.delete(instance_info.id, "testuser2",
hostname=hostname)
@test()
def test_updateduser_name_newhost_unique(self):
# The username@updated_hostname should not exist already
users = []
username = "testuser"
hostname1 = "192.168.0.1"
hostname2 = "192.168.0.2"
users.append({"name": username, "password": "password",
"host": hostname1, "databases": []})
users.append({"name": username, "password": "password",
"host": hostname2, "databases": []})
self.dbaas.users.create(instance_info.id, users)
user_new = {"host": "192.168.0.2"}
assert_raises(exceptions.BadRequest,
self.dbaas.users.update_attributes, instance_info.id,
username, user_new, hostname1)
assert_equal(400, self.dbaas.last_http_code)
self.dbaas.users.delete(instance_info.id, username, hostname=hostname1)
self.dbaas.users.delete(instance_info.id, username, hostname=hostname2)
@test()
def test_updateduser_newname_newhost_unique(self):
# The updated_username@updated_hostname should not exist already
users = []
username = "testuser1"
hostname1 = "192.168.0.1"
hostname2 = "192.168.0.2"
users.append({"name": username, "password": "password",
"host": hostname1, "databases": []})
users.append({"name": "testuser2", "password": "password",
"host": hostname2, "databases": []})
self.dbaas.users.create(instance_info.id, users)
user_new = {"name": "testuser2", "host": "192.168.0.2"}
assert_raises(exceptions.BadRequest,
self.dbaas.users.update_attributes, instance_info.id,
username, user_new, hostname1)
assert_equal(400, self.dbaas.last_http_code)
self.dbaas.users.delete(instance_info.id, username, hostname=hostname1)
self.dbaas.users.delete(instance_info.id, "testuser2",
hostname=hostname2)
@test()
def test_updateduser_newhost_invalid(self):
# Ensure invalid hostnames/usernames aren't allowed to enter the system
users = []
username = "testuser1"
hostname1 = "192.168.0.1"
users.append({"name": username, "password": "password",
"host": hostname1, "databases": []})
self.dbaas.users.create(instance_info.id, users)
hostname1 = hostname1.replace('.', '%2e')
assert_raises(exceptions.BadRequest,
self.dbaas.users.update_attributes, instance_info.id,
username, {"host": "badjuju"}, hostname1)
assert_equal(400, self.dbaas.last_http_code)
assert_raises(exceptions.BadRequest,
self.dbaas.users.update_attributes, instance_info.id,
username, {"name": " bad username "}, hostname1)
assert_equal(400, self.dbaas.last_http_code)
self.dbaas.users.delete(instance_info.id, username, hostname=hostname1)
@test()
def test_cannot_change_rootpassword(self):
# Cannot change password for a root user
user_new = {"password": "12345"}
assert_raises(exceptions.BadRequest,
self.dbaas.users.update_attributes, instance_info.id,
"root", user_new)
@test()
def test_updateuser_emptyhost(self):
# Cannot update the user hostname with an empty string
users = []
username = "testuser1"
hostname = "192.168.0.1"
users.append({"name": username, "password": "password",
"host": hostname, "databases": []})
self.dbaas.users.create(instance_info.id, users)
user_new = {"host": ""}
assert_raises(exceptions.BadRequest,
self.dbaas.users.update_attributes, instance_info.id,
username, user_new, hostname)
assert_equal(400, self.dbaas.last_http_code)
self.dbaas.users.delete(instance_info.id, username, hostname=hostname)
@test(depends_on=[test_create_users])
def test_hostname_ipv4_restriction(self):
# By default, user hostnames are required to be % or IPv4 addresses.
user = {"name": "ipv4_nodice", "password": "password",
"databases": [], "host": "disallowed_host"}
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
instance_info.id, [user])
def show_databases(self, user, password):
print("Going to connect to %s, %s, %s"
% (instance_info.get_address(), user, password))
with util.mysql_connection().create(instance_info.get_address(),
user, password) as db:
print(db)
dbs = db.execute("show databases")
return [row['Database'] for row in dbs]
def check_database_for_user(self, user, password, dbs):
if not FAKE:
# Make the real call to the database to check things.
actual_list = self.show_databases(user, password)
for db in dbs:
assert_true(
db in actual_list,
"No match for db %s in dblist. %s :(" % (db, actual_list))
# Confirm via API list.
result = self.dbaas.users.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
for item in result:
if item.name == user:
break
else:
fail("User %s not added to collection." % user)
# Confirm via API get.
result = self.dbaas.users.get(instance_info.id, user, '%')
assert_equal(200, self.dbaas.last_http_code)
if result.name != user:
fail("User %s not found via get." % user)
@test
def test_username_too_long(self):
users = [{"name": "1233asdwer345tyg56", "password": self.password,
"database": self.db1}]
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
instance_info.id, users)
assert_equal(400, self.dbaas.last_http_code)
@test
def test_invalid_username(self):
users = []
users.append({"name": "user,", "password": self.password,
"database": self.db1})
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
instance_info.id, users)
assert_equal(400, self.dbaas.last_http_code)
@test
def test_invalid_password(self):
users = [{"name": "anouser", "password": "sdf,;",
"database": self.db1}]
assert_raises(exceptions.BadRequest, self.dbaas.users.create,
instance_info.id, users)
assert_equal(400, self.dbaas.last_http_code)
@test
def test_pagination(self):
users = []
users.append({"name": "Jetson", "password": "george",
"databases": [{"name": "Sprockets"}]})
users.append({"name": "Jetson", "password": "george",
"host": "127.0.0.1",
"databases": [{"name": "Sprockets"}]})
users.append({"name": "Spacely", "password": "cosmo",
"databases": [{"name": "Sprockets"}]})
users.append({"name": "Spacely", "password": "cosmo",
"host": "127.0.0.1",
"databases": [{"name": "Sprockets"}]})
users.append({"name": "Uniblab", "password": "fired",
"databases": [{"name": "Sprockets"}]})
users.append({"name": "Uniblab", "password": "fired",
"host": "192.168.0.10",
"databases": [{"name": "Sprockets"}]})
self.dbaas.users.create(instance_info.id, users)
assert_equal(202, self.dbaas.last_http_code)
if not FAKE:
time.sleep(5)
limit = 2
users = self.dbaas.users.list(instance_info.id, limit=limit)
assert_equal(200, self.dbaas.last_http_code)
marker = users.next
# Better get only as many as we asked for
assert_true(len(users) <= limit)
assert_true(users.next is not None)
expected_marker = "%s@%s" % (users[-1].name, users[-1].host)
expected_marker = urllib_parse.quote(expected_marker)
assert_equal(marker, expected_marker)
marker = users.next
# I better get new users if I use the marker I was handed.
users = self.dbaas.users.list(instance_info.id, limit=limit,
marker=marker)
assert_equal(200, self.dbaas.last_http_code)
assert_true(marker not in [user.name for user in users])
# Now fetch again with a larger limit.
users = self.dbaas.users.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
assert_true(users.next is None)
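The pagination marker checked above is simply the last returned user's `name@host`, percent-encoded so it can travel in a query string. A minimal standalone sketch of that construction (illustrative only, outside the test harness):

```python
from urllib import parse as urllib_parse


def expected_user_marker(name, host):
    # The user-list marker is the last user on the page, rendered as
    # "name@host" and percent-encoded ('@' becomes '%40').
    return urllib_parse.quote("%s@%s" % (name, host))


print(expected_user_marker("Jetson", "127.0.0.1"))  # Jetson%40127.0.0.1
```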
def _check_connection(self, username, password):
if not FAKE:
util.mysql_connection().assert_fails(instance_info.get_address(),
username, password)
# Also determine the db is gone via API.
result = self.dbaas.users.list(instance_info.id)
assert_equal(200, self.dbaas.last_http_code)
for item in result:
if item.name == username:
fail("User %s was not deleted." % username)


@@ -1,87 +0,0 @@
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis.asserts import assert_equal
from proboscis import before_class
from proboscis import SkipTest
from proboscis import test
from troveclient.compat.exceptions import ClientException
from trove import tests
from trove.tests.util import create_dbaas_client
from trove.tests.util import test_config
from trove.tests.util.users import Requirements
@test(groups=[tests.DBAAS_API_VERSIONS])
class Versions(object):
"""Test listing all versions and verify the current version."""
@before_class
def setUp(self):
"""Sets up the client."""
user = test_config.users.find_user(Requirements(is_admin=False))
self.client = create_dbaas_client(user)
@test
def test_list_versions_index(self):
"""test_list_versions_index"""
versions = self.client.versions.index(test_config.version_url)
assert_equal(1, len(versions))
assert_equal("CURRENT", versions[0].status,
message="Version status: %s" % versions[0].status)
expected_version = test_config.values['trove_version']
assert_equal(expected_version, versions[0].id,
message="Version ID: %s" % versions[0].id)
expected_api_updated = test_config.values['trove_api_updated']
assert_equal(expected_api_updated, versions[0].updated,
message="Version updated: %s" % versions[0].updated)
def _request(self, url, method='GET', response='200'):
resp, body = None, None
full_url = test_config.version_url + url
try:
resp, body = self.client.client.request(full_url, method)
assert_equal(resp.get('status', ''), response)
except ClientException as ce:
assert_equal(str(ce.http_status), response)
return body
@test
def test_no_slash_no_version(self):
self._request('')
@test
def test_no_slash_with_version(self):
if test_config.auth_strategy == "fake":
raise SkipTest("Skipping this test since auth is faked.")
self._request('/v1.0', response='401')
@test
def test_with_slash_no_version(self):
self._request('/')
@test
def test_with_slash_with_version(self):
if test_config.auth_strategy == "fake":
raise SkipTest("Skipping this test since auth is faked.")
self._request('/v1.0/', response='401')
@test
def test_request_no_version(self):
self._request('/dbaas/instances', response='404')
@test
def test_request_bogus_version(self):
self._request('/0.0/', response='404')


@@ -1,190 +0,0 @@
# Copyright 2014 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests database migration scripts for mysql.
To run the tests, you'll need to set up a database user named
'openstack_citest' with password 'openstack_citest' on localhost. This user
needs database admin rights (i.e. create/drop database).
"""
import glob
import os
import migrate.versioning.api as migration_api
from migrate.versioning import repository
from oslo_concurrency import processutils
from oslo_log import log as logging
from proboscis import after_class
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_true
from proboscis import before_class
from proboscis import SkipTest
from proboscis import test
import sqlalchemy
import sqlalchemy.exc
from trove.common.i18n import _
import trove.db.sqlalchemy.migrate_repo
from trove.tests.util import event_simulator
GROUP = "dbaas.db.migrations"
LOG = logging.getLogger(__name__)
@test(groups=[GROUP])
class ProjectTestCase(object):
"""Test migration scripts integrity."""
@test
def test_all_migrations_have_downgrade(self):
topdir = os.path.normpath(os.path.join(os.path.dirname(__file__),
os.pardir, os.pardir, os.pardir))
py_glob = os.path.join(topdir, "trove", "db", "sqlalchemy",
"migrate_repo", "versions", "*.py")
downgrades_found = []
for path in glob.iglob(py_glob):
has_downgrade = False
with open(path, "r") as f:
for line in f:
if 'def downgrade(' in line:
has_downgrade = True
if has_downgrade:
fname = os.path.basename(path)
downgrades_found.append(fname)
helpful_msg = (_("The following migration scripts have a "
"downgrade implementation:\n\t%s") %
'\n\t'.join(sorted(downgrades_found)))
assert_equal(downgrades_found, [], helpful_msg)
@test(depends_on_classes=[ProjectTestCase],
groups=[GROUP])
class TestTroveMigrations(object):
"""Test sqlalchemy-migrate migrations."""
USER = "openstack_citest"
PASSWD = "openstack_citest"
DATABASE = "openstack_citest"
@before_class
def setUp(self):
event_simulator.allowable_empty_sleeps = 1
@after_class
def tearDown(self):
event_simulator.allowable_empty_sleeps = 0
def __init__(self):
self.MIGRATE_FILE = trove.db.sqlalchemy.migrate_repo.__file__
self.REPOSITORY = repository.Repository(
os.path.abspath(os.path.dirname(self.MIGRATE_FILE)))
self.INIT_VERSION = 0
def _get_connect_string(self, backend, database=None):
"""Get database connection string."""
args = {'backend': backend,
'user': self.USER,
'passwd': self.PASSWD}
template = "%(backend)s://%(user)s:%(passwd)s@localhost"
if database is not None:
args['database'] = database
template += "/%(database)s"
return template % args
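For reference, the template in `_get_connect_string` expands as follows; this is a standalone sketch of the same formatting logic, not Trove code:

```python
def get_connect_string(backend, user, passwd, database=None):
    # Mirrors the method above: the "/database" segment is appended
    # only when a database name is supplied.
    args = {'backend': backend, 'user': user, 'passwd': passwd}
    template = "%(backend)s://%(user)s:%(passwd)s@localhost"
    if database is not None:
        args['database'] = database
        template += "/%(database)s"
    return template % args


print(get_connect_string("mysql+pymysql", "openstack_citest",
                         "openstack_citest", "openstack_citest"))
# mysql+pymysql://openstack_citest:openstack_citest@localhost/openstack_citest
```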
def _is_backend_avail(self, backend):
"""Check database backend availability."""
connect_uri = self._get_connect_string(backend)
engine = sqlalchemy.create_engine(connect_uri)
try:
connection = engine.connect()
except Exception:
# any error here means the database backend is not available
return False
else:
connection.close()
return True
finally:
if engine is not None:
engine.dispose()
def _execute_cmd(self, cmd=None):
"""Shell out and run the given command."""
out, err = processutils.trycmd(cmd, shell=True)
# Until someone rewrites this to avoid the warning, we need to
# handle it for newer versions of MySQL.
valid_err = err == '' or \
err == 'mysql: [Warning] Using a password on the ' \
'command line interface can be insecure.\n'
assert_true(valid_err,
"Failed to run: '%(cmd)s' "
"Output: '%(stdout)s' "
"Error: '%(stderr)s'" %
{'cmd': cmd, 'stdout': out, 'stderr': err})
def _reset_mysql(self):
"""Reset the MySQL test database
Drop the MySQL test database if it already exists and create
a new one.
"""
sql = ("drop database if exists %(database)s; "
"create database %(database)s;" % {'database': self.DATABASE})
cmd = ("mysql -u \"%(user)s\" -p%(password)s -h %(host)s "
"-e \"%(sql)s\"" % {'user': self.USER, 'password': self.PASSWD,
'host': 'localhost', 'sql': sql})
self._execute_cmd(cmd)
@test
def test_mysql_migration(self):
db_backend = "mysql+pymysql"
# Gracefully skip this test if the developer does not have
# MySQL running. MySQL should always be available on
# the infrastructure
if not self._is_backend_avail(db_backend):
raise SkipTest("MySQL is not available.")
self._reset_mysql()
connect_string = self._get_connect_string(db_backend, self.DATABASE)
engine = sqlalchemy.create_engine(connect_string)
self._walk_versions(engine)
engine.dispose()
def _walk_versions(self, engine=None):
"""Walk through and test the migration scripts
Determine latest version script from the repo, then
upgrade from 1 through to the latest.
"""
# Place the database under version control
migration_api.version_control(engine, self.REPOSITORY,
self.INIT_VERSION)
assert_equal(self.INIT_VERSION,
migration_api.db_version(engine, self.REPOSITORY))
LOG.debug('Latest version is %s', self.REPOSITORY.latest)
versions = range(self.INIT_VERSION + 1, self.REPOSITORY.latest + 1)
# Walk from version 1 to the latest, testing the upgrade paths.
for version in versions:
self._migrate_up(engine, version)
def _migrate_up(self, engine, version):
"""Migrate up to a new version of database."""
migration_api.upgrade(engine, self.REPOSITORY, version)
assert_equal(version,
migration_api.db_version(engine, self.REPOSITORY))


@@ -1,85 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_true
from proboscis.asserts import fail
from trove.dns import driver
LOG = logging.getLogger(__name__)
ENTRIES = {}
class FakeDnsDriver(driver.DnsDriver):
def create_entry(self, entry, content):
"""Pretend to create a DNS entry somewhere.
Since nothing else tests that this works, there's nothing more to do
here.
"""
entry.content = content
assert_true(entry.name not in ENTRIES)
LOG.debug("Adding fake DNS entry for hostname %s.", entry.name)
ENTRIES[entry.name] = entry
def delete_entry(self, name, type, dns_zone=None):
LOG.debug("Deleting fake DNS entry for hostname %s", name)
ENTRIES.pop(name, None)
class FakeDnsInstanceEntryFactory(driver.DnsInstanceEntryFactory):
def create_entry(self, instance_id):
# Construct hostname using pig-latin.
hostname = "%s-lay" % instance_id
LOG.debug("Mapping instance_id %(id)s to hostname %(host)s",
{'id': instance_id, 'host': hostname})
return driver.DnsEntry(name=hostname, content=None,
type="A", ttl=42, dns_zone=None)
class FakeDnsChecker(object):
"""Used by tests to make sure a DNS record was written in fake mode."""
def __call__(self, mgmt_instance):
"""
Given an instance ID and ip address, confirm that the proper DNS
record was stored in Designate or some other DNS system.
"""
entry = FakeDnsInstanceEntryFactory().create_entry(mgmt_instance.id)
# Confirm DNS entry shown to user is what we expect.
assert_equal(entry.name, mgmt_instance.hostname)
hostname = entry.name
for i in ENTRIES:
print(i)
print("\t%s" % ENTRIES[i])
assert_true(hostname in ENTRIES,
"Hostname %s not found in DNS entries!" % hostname)
entry = ENTRIES[hostname]
# See if the ip address assigned to the record is what we expect.
# This isn't perfect, but for Fake Mode it's good enough. If we
# really want to know exactly what it should be then we should restore
# the ability to return the IP from the API as well as a hostname,
# since that lines up to the DnsEntry's content field.
ip_addresses = mgmt_instance.server['addresses']
for address in ip_addresses:
if entry.content == address['address']:
return
fail("Couldn't find IP address %s among these values: %s"
% (entry.content, ip_addresses))


@@ -1,338 +0,0 @@
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import proboscis
from trove import tests
from trove.tests.scenario import groups
from trove.tests.scenario.groups import backup_group
from trove.tests.scenario.groups import cluster_group
from trove.tests.scenario.groups import configuration_group
from trove.tests.scenario.groups import database_actions_group
from trove.tests.scenario.groups import guest_log_group
from trove.tests.scenario.groups import instance_actions_group
from trove.tests.scenario.groups import instance_create_group
from trove.tests.scenario.groups import instance_delete_group
from trove.tests.scenario.groups import instance_error_create_group
from trove.tests.scenario.groups import instance_force_delete_group
from trove.tests.scenario.groups import instance_upgrade_group
from trove.tests.scenario.groups import module_group
from trove.tests.scenario.groups import replication_group
from trove.tests.scenario.groups import root_actions_group
from trove.tests.scenario.groups import user_actions_group
def build_group(*groups):
def merge(collection, *items):
for item in items:
if isinstance(item, list):
merge(collection, *item)
else:
if item not in collection:
collection.append(item)
out = []
merge(out, *groups)
return out
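`build_group` recursively flattens arbitrarily nested lists of group labels, keeping the first occurrence of each label in order. A self-contained copy demonstrating the behavior:

```python
def build_group(*groups):
    # Flatten nested lists of group names, preserving first-seen
    # order and dropping duplicates.
    def merge(collection, *items):
        for item in items:
            if isinstance(item, list):
                merge(collection, *item)
            else:
                if item not in collection:
                    collection.append(item)
    out = []
    merge(out, *groups)
    return out


print(build_group("a", ["b", ["a", "c"]], "b"))  # ['a', 'b', 'c']
```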
def register(group_names, *test_groups, **kwargs):
if kwargs:
register(group_names, kwargs.values())
for suffix, grp_set in kwargs.items():
# Recursively call without the kwargs
register([name + '_' + suffix for name in group_names], *grp_set)
return
# Do the actual registration here
proboscis.register(groups=build_group(group_names),
depends_on_groups=build_group(*test_groups))
# Now register the same groups with '-' instead of '_'
proboscis.register(
groups=build_group([name.replace('_', '-') for name in group_names]),
depends_on_groups=build_group(*test_groups))
# Base groups for all other groups
base_groups = [
tests.DBAAS_API_VERSIONS,
]
# Cluster-based groups
cluster_create_groups = list(base_groups)
cluster_create_groups.extend([groups.CLUSTER_DELETE_WAIT])
cluster_actions_groups = list(cluster_create_groups)
cluster_actions_groups.extend([groups.CLUSTER_ACTIONS_SHRINK_WAIT])
cluster_root_groups = list(cluster_create_groups)
cluster_root_groups.extend([groups.CLUSTER_ACTIONS_ROOT_ENABLE])
cluster_root_actions_groups = list(cluster_actions_groups)
cluster_root_actions_groups.extend([groups.CLUSTER_ACTIONS_ROOT_ACTIONS])
cluster_restart_groups = list(cluster_create_groups)
cluster_restart_groups.extend([groups.CLUSTER_ACTIONS_RESTART_WAIT])
cluster_upgrade_groups = list(cluster_create_groups)
cluster_upgrade_groups.extend([groups.CLUSTER_UPGRADE_WAIT])
cluster_config_groups = list(cluster_create_groups)
cluster_config_groups.extend([groups.CLUSTER_CFGGRP_DELETE])
cluster_config_actions_groups = list(cluster_config_groups)
cluster_config_actions_groups.extend([groups.CLUSTER_ACTIONS_CFGGRP_ACTIONS])
cluster_groups = list(cluster_actions_groups)
cluster_groups.extend([cluster_group.GROUP])
# Single-instance based groups
instance_create_groups = list(base_groups)
instance_create_groups.extend([groups.INST_CREATE,
groups.INST_DELETE_WAIT])
instance_error_create_groups = list(base_groups)
instance_error_create_groups.extend([instance_error_create_group.GROUP])
instance_force_delete_groups = list(base_groups)
instance_force_delete_groups.extend([instance_force_delete_group.GROUP])
instance_init_groups = list(base_groups)
instance_init_groups.extend([instance_create_group.GROUP,
instance_delete_group.GROUP])
instance_upgrade_groups = list(instance_create_groups)
instance_upgrade_groups.extend([instance_upgrade_group.GROUP])
backup_groups = list(instance_create_groups)
backup_groups.extend([groups.BACKUP,
groups.BACKUP_INST])
backup_incremental_groups = list(backup_groups)
backup_incremental_groups.extend([backup_group.GROUP])
backup_negative_groups = list(backup_groups)
backup_negative_groups.extend([groups.BACKUP_CREATE_NEGATIVE])
configuration_groups = list(instance_create_groups)
configuration_groups.extend([configuration_group.GROUP])
configuration_create_groups = list(base_groups)
configuration_create_groups.extend([groups.CFGGRP_CREATE,
groups.CFGGRP_DELETE])
database_actions_groups = list(instance_create_groups)
database_actions_groups.extend([database_actions_group.GROUP])
guest_log_groups = list(instance_create_groups)
guest_log_groups.extend([guest_log_group.GROUP])
instance_actions_groups = list(instance_create_groups)
instance_actions_groups.extend([instance_actions_group.GROUP])
instance_groups = list(instance_actions_groups)
instance_groups.extend([instance_error_create_group.GROUP,
instance_force_delete_group.GROUP])
module_groups = list(instance_create_groups)
module_groups.extend([module_group.GROUP])
module_create_groups = list(base_groups)
module_create_groups.extend([groups.MODULE_CREATE,
groups.MODULE_DELETE])
replication_groups = list(instance_create_groups)
replication_groups.extend([groups.REPL_INST_DELETE_WAIT])
replication_promote_groups = list(replication_groups)
replication_promote_groups.extend([replication_group.GROUP])
root_actions_groups = list(instance_create_groups)
root_actions_groups.extend([root_actions_group.GROUP])
user_actions_groups = list(instance_create_groups)
user_actions_groups.extend([user_actions_group.GROUP])
# groups common to all datastores
common_groups = list(instance_create_groups)
# NOTE(lxkong): Remove the module related tests(module_groups) for now because
# of no use case.
common_groups.extend([guest_log_groups, instance_init_groups])
integration_groups = [
tests.DBAAS_API_VERSIONS,
tests.DBAAS_API_DATASTORES,
tests.DBAAS_API_MGMT_DATASTORES,
tests.DBAAS_API_INSTANCES,
tests.DBAAS_API_USERS_ROOT,
tests.DBAAS_API_USERS,
tests.DBAAS_API_USERS_ACCESS,
tests.DBAAS_API_DATABASES,
tests.DBAAS_API_INSTANCE_ACTIONS,
tests.DBAAS_API_BACKUPS,
tests.DBAAS_API_CONFIGURATIONS,
tests.DBAAS_API_REPLICATION,
tests.DBAAS_API_INSTANCES_DELETE
]
# We intentionally make the functional tests run in series and depend
# on each other, so that one test case failure will stop the whole test run.
proboscis.register(groups=["mysql"],
depends_on_groups=integration_groups)
register(
["mysql_supported"],
single=[instance_create_group.GROUP,
backup_group.GROUP,
configuration_group.GROUP,
database_actions_group.GROUP,
guest_log_group.GROUP,
instance_actions_group.GROUP,
instance_error_create_group.GROUP,
instance_force_delete_group.GROUP,
root_actions_group.GROUP,
user_actions_group.GROUP,
instance_delete_group.GROUP],
multi=[replication_group.GROUP,
instance_delete_group.GROUP]
)
register(
["mariadb_supported"],
single=[instance_create_group.GROUP,
backup_group.GROUP,
configuration_group.GROUP,
database_actions_group.GROUP,
guest_log_group.GROUP,
instance_actions_group.GROUP,
instance_error_create_group.GROUP,
instance_force_delete_group.GROUP,
root_actions_group.GROUP,
user_actions_group.GROUP,
instance_delete_group.GROUP],
multi=[replication_group.GROUP,
instance_delete_group.GROUP]
)
register(
["db2_supported"],
single=[common_groups,
configuration_groups,
database_actions_groups,
user_actions_groups, ],
multi=[]
)
register(
["cassandra_supported"],
single=[common_groups,
backup_groups,
database_actions_groups,
configuration_groups,
user_actions_groups, ],
multi=[cluster_actions_groups,
cluster_root_actions_groups,
cluster_config_actions_groups, ]
)
register(
["couchbase_supported"],
single=[common_groups,
backup_groups,
root_actions_groups, ],
multi=[]
)
register(
["couchdb_supported"],
single=[common_groups,
backup_groups,
database_actions_groups,
root_actions_groups,
user_actions_groups, ],
multi=[]
)
register(
["mongodb_supported"],
single=[common_groups,
backup_groups,
configuration_groups,
database_actions_groups,
root_actions_groups,
user_actions_groups, ],
multi=[cluster_actions_groups, ]
)
register(
["percona_supported"],
single=[common_groups,
backup_incremental_groups,
configuration_groups,
database_actions_groups,
instance_upgrade_groups,
root_actions_groups,
user_actions_groups, ],
multi=[replication_promote_groups, ]
)
register(
["postgresql_supported"],
single=[common_groups,
backup_incremental_groups,
database_actions_groups,
configuration_groups,
root_actions_groups,
user_actions_groups, ],
multi=[replication_groups, ]
)
register(
["pxc_supported"],
single=[common_groups,
backup_incremental_groups,
configuration_groups,
database_actions_groups,
root_actions_groups,
user_actions_groups, ],
multi=[]
# multi=[cluster_actions_groups,
# cluster_root_actions_groups, ]
)
# Redis instances do not support inheriting root state from backups,
# so a customized root actions group is created; instance backup
# and restore tests will not be included.
redis_root_actions_groups = list(instance_create_groups)
redis_root_actions_groups.extend([groups.ROOT_ACTION_ENABLE,
groups.ROOT_ACTION_DISABLE])
register(
["redis_supported"],
single=[common_groups,
backup_groups,
configuration_groups,
redis_root_actions_groups, ],
multi=[replication_promote_groups, ]
# multi=[cluster_actions_groups,
# replication_promote_groups, ]
)
register(
["vertica_supported"],
single=[common_groups,
configuration_groups,
root_actions_groups, ],
multi=[cluster_actions_groups,
cluster_root_actions_groups, ]
)


@@ -1,175 +0,0 @@
# Copyright 2016 Tesora Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Labels for all the sub-groups are listed here, so that they can be
# referenced by other groups (thus avoiding circular references when
# loading modules). The main GROUP label is still defined in each
# respective group file.
# Backup Group
BACKUP = "scenario.backup_grp"
BACKUP_CREATE = "scenario.backup_create_grp"
BACKUP_CREATE_NEGATIVE = "scenario.backup_create_negative_grp"
BACKUP_CREATE_WAIT = "scenario.backup_create_wait_grp"
BACKUP_DELETE = "scenario.backup_delete_grp"
BACKUP_INST = "scenario.backup_inst_grp"
BACKUP_INST_CREATE = "scenario.backup_inst_create_grp"
BACKUP_INST_CREATE_WAIT = "scenario.backup_inst_create_wait_grp"
BACKUP_INST_DELETE = "scenario.backup_inst_delete_grp"
BACKUP_INST_DELETE_WAIT = "scenario.backup_inst_delete_wait_grp"
BACKUP_INC = "scenario.backup_inc_grp"
BACKUP_INC_CREATE = "scenario.backup_inc_create_grp"
BACKUP_INC_DELETE = "scenario.backup_inc_delete_grp"
BACKUP_INC_INST = "scenario.backup_inc_inst_grp"
BACKUP_INC_INST_CREATE = "scenario.backup_inc_inst_create_grp"
BACKUP_INC_INST_CREATE_WAIT = "scenario.backup_inc_inst_create_wait_grp"
BACKUP_INC_INST_DELETE = "scenario.backup_inc_inst_delete_grp"
BACKUP_INC_INST_DELETE_WAIT = "scenario.backup_inc_inst_delete_wait_grp"
# Configuration Group
CFGGRP_CREATE = "scenario.cfggrp_create_grp"
CFGGRP_DELETE = "scenario.cfggrp_delete_grp"
CFGGRP_INST = "scenario.cfggrp_inst_grp"
CFGGRP_INST_CREATE = "scenario.cfggrp_inst_create_grp"
CFGGRP_INST_CREATE_WAIT = "scenario.cfggrp_inst_create_wait_grp"
CFGGRP_INST_DELETE = "scenario.cfggrp_inst_delete_grp"
CFGGRP_INST_DELETE_WAIT = "scenario.cfggrp_inst_delete_wait_grp"
# Cluster Actions Group
CLUSTER_CFGGRP_CREATE = "scenario.cluster_actions_cfggrp_create_grp"
CLUSTER_CFGGRP_DELETE = "scenario.cluster_actions_cfggrp_delete_grp"
CLUSTER_ACTIONS = "scenario.cluster_actions_grp"
CLUSTER_ACTIONS_CFGGRP_ACTIONS = "scenario.cluster_actions_cfggrp_actions_grp"
CLUSTER_ACTIONS_ROOT_ENABLE = "scenario.cluster_actions_root_enable_grp"
CLUSTER_ACTIONS_ROOT_ACTIONS = "scenario.cluster_actions_root_actions_grp"
CLUSTER_ACTIONS_ROOT_GROW = "scenario.cluster_actions_root_grow_grp"
CLUSTER_ACTIONS_ROOT_SHRINK = "scenario.cluster_actions_root_shrink_grp"
CLUSTER_ACTIONS_GROW_SHRINK = "scenario.cluster_actions_grow_shrink_grp"
CLUSTER_ACTIONS_GROW = "scenario.cluster_actions_grow_grp"
CLUSTER_ACTIONS_GROW_WAIT = "scenario.cluster_actions_grow_wait_grp"
CLUSTER_ACTIONS_SHRINK = "scenario.cluster_actions_shrink_grp"
CLUSTER_ACTIONS_SHRINK_WAIT = "scenario.cluster_actions_shrink_wait_grp"
CLUSTER_ACTIONS_RESTART = "scenario.cluster_actions_restart_grp"
CLUSTER_ACTIONS_RESTART_WAIT = "scenario.cluster_actions_restart_wait_grp"
# Cluster Create Group (in cluster_actions file)
CLUSTER_CREATE = "scenario.cluster_create_grp"
CLUSTER_CREATE_WAIT = "scenario.cluster_create_wait_grp"
# Cluster Delete Group (in cluster_actions file)
CLUSTER_DELETE = "scenario.cluster_delete_grp"
CLUSTER_DELETE_WAIT = "scenario.cluster_delete_wait_grp"
# Cluster Upgrade Group (in cluster_actions file)
CLUSTER_UPGRADE = "scenario.cluster_upgrade_grp"
CLUSTER_UPGRADE_WAIT = "scenario.cluster_upgrade_wait_grp"
# Database Actions Group
DB_ACTION_CREATE = "scenario.db_action_create_grp"
DB_ACTION_DELETE = "scenario.db_action_delete_grp"
DB_ACTION_INST = "scenario.db_action_inst_grp"
DB_ACTION_INST_CREATE = "scenario.db_action_inst_create_grp"
DB_ACTION_INST_CREATE_WAIT = "scenario.db_action_inst_create_wait_grp"
DB_ACTION_INST_DELETE = "scenario.db_action_inst_delete_grp"
DB_ACTION_INST_DELETE_WAIT = "scenario.db_action_inst_delete_wait_grp"
# Instance Actions Group
INST_ACTIONS = "scenario.inst_actions_grp"
INST_ACTIONS_RESIZE = "scenario.inst_actions_resize_grp"
INST_ACTIONS_RESIZE_WAIT = "scenario.inst_actions_resize_wait_grp"
# Instance Upgrade Group
INST_UPGRADE = "scenario.inst_upgrade_grp"
# Instance Create Group
INST_CREATE = "scenario.inst_create_grp"
INST_CREATE_WAIT = "scenario.inst_create_wait_grp"
INST_INIT_CREATE = "scenario.inst_init_create_grp"
INST_INIT_CREATE_WAIT = "scenario.inst_init_create_wait_grp"
INST_INIT_DELETE = "scenario.inst_init_delete_grp"
INST_INIT_DELETE_WAIT = "scenario.inst_init_delete_wait_grp"
# Instance Delete Group
INST_DELETE = "scenario.inst_delete_grp"
INST_DELETE_WAIT = "scenario.inst_delete_wait_grp"
# Instance Error Create Group
INST_ERROR_CREATE = "scenario.inst_error_create_grp"
INST_ERROR_CREATE_WAIT = "scenario.inst_error_create_wait_grp"
INST_ERROR_DELETE = "scenario.inst_error_delete_grp"
INST_ERROR_DELETE_WAIT = "scenario.inst_error_delete_wait_grp"
# Instance Force Delete Group
INST_FORCE_DELETE = "scenario.inst_force_delete_grp"
INST_FORCE_DELETE_WAIT = "scenario.inst_force_delete_wait_grp"
# Module Group
MODULE_CREATE = "scenario.module_create_grp"
MODULE_DELETE = "scenario.module_delete_grp"
MODULE_INST = "scenario.module_inst_grp"
MODULE_INST_CREATE = "scenario.module_inst_create_grp"
MODULE_INST_CREATE_WAIT = "scenario.module_inst_create_wait_grp"
MODULE_INST_DELETE = "scenario.module_inst_delete_grp"
MODULE_INST_DELETE_WAIT = "scenario.module_inst_delete_wait_grp"
# Replication Group
REPL_INST = "scenario.repl_inst_grp"
REPL_INST_CREATE = "scenario.repl_inst_create_grp"
REPL_INST_CREATE_WAIT = "scenario.repl_inst_create_wait_grp"
REPL_INST_MULTI_CREATE = "scenario.repl_inst_multi_create_grp"
REPL_INST_DELETE_NON_AFFINITY_WAIT = "scenario.repl_inst_delete_noaff_wait_grp"
REPL_INST_MULTI_CREATE_WAIT = "scenario.repl_inst_multi_create_wait_grp"
REPL_INST_MULTI_PROMOTE = "scenario.repl_inst_multi_promote_grp"
REPL_INST_DELETE = "scenario.repl_inst_delete_grp"
REPL_INST_DELETE_WAIT = "scenario.repl_inst_delete_wait_grp"
# Root Actions Group
ROOT_ACTION_ENABLE = "scenario.root_action_enable_grp"
ROOT_ACTION_DISABLE = "scenario.root_action_disable_grp"
ROOT_ACTION_INST = "scenario.root_action_inst_grp"
ROOT_ACTION_INST_CREATE = "scenario.root_action_inst_create_grp"
ROOT_ACTION_INST_CREATE_WAIT = "scenario.root_action_inst_create_wait_grp"
ROOT_ACTION_INST_DELETE = "scenario.root_action_inst_delete_grp"
ROOT_ACTION_INST_DELETE_WAIT = "scenario.root_action_inst_delete_wait_grp"
# User Actions Group
USER_ACTION_CREATE = "scenario.user_action_create_grp"
USER_ACTION_DELETE = "scenario.user_action_delete_grp"
USER_ACTION_INST = "scenario.user_action_inst_grp"
USER_ACTION_INST_CREATE = "scenario.user_action_inst_create_grp"
USER_ACTION_INST_CREATE_WAIT = "scenario.user_action_inst_create_wait_grp"
USER_ACTION_INST_DELETE = "scenario.user_action_inst_delete_grp"
USER_ACTION_INST_DELETE_WAIT = "scenario.user_action_inst_delete_wait_grp"
# Instance Log Group
INST_LOG = "scenario.inst_log_grp"


@@ -1,407 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from proboscis import test

from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners


GROUP = "scenario.backup_restore_group"


class BackupRunnerFactory(test_runners.RunnerFactory):

    _runner_ns = 'backup_runners'
    _runner_cls = 'BackupRunner'


@test(depends_on_groups=[groups.INST_CREATE],
      groups=[GROUP, groups.BACKUP, groups.BACKUP_CREATE])
class BackupCreateGroup(TestGroup):
    """Test Backup Create functionality."""

    def __init__(self):
        super(BackupCreateGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def add_data_for_backup(self):
        """Add data to instance for restore verification."""
        self.test_runner.run_add_data_for_backup()

    @test(runs_after=[add_data_for_backup])
    def verify_data_for_backup(self):
        """Verify data in instance."""
        self.test_runner.run_verify_data_for_backup()

    @test(runs_after=[verify_data_for_backup])
    def save_backup_counts(self):
        """Store the existing backup counts."""
        self.test_runner.run_save_backup_counts()

    @test(runs_after=[save_backup_counts])
    def backup_create(self):
        """Check that create backup is started successfully."""
        self.test_runner.run_backup_create()


@test(depends_on_classes=[BackupCreateGroup],
      groups=[GROUP, groups.BACKUP_CREATE_NEGATIVE])
class BackupCreateNegativeGroup(TestGroup):
    """Test Backup Create Negative functionality."""

    def __init__(self):
        super(BackupCreateNegativeGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def backup_delete_while_backup_running(self):
        """Ensure delete backup fails while it is running."""
        self.test_runner.run_backup_delete_while_backup_running()

    @test(runs_after=[backup_delete_while_backup_running])
    def restore_instance_from_not_completed_backup(self):
        """Ensure a restore fails while the backup is running."""
        self.test_runner.run_restore_instance_from_not_completed_backup()

    @test(runs_after=[restore_instance_from_not_completed_backup])
    def backup_create_another_backup_running(self):
        """Ensure create backup fails when another backup is running."""
        self.test_runner.run_backup_create_another_backup_running()

    @test(runs_after=[backup_create_another_backup_running])
    def instance_action_right_after_backup_create(self):
        """Ensure any instance action fails while backup is running."""
        self.test_runner.run_instance_action_right_after_backup_create()

    @test(runs_after=[instance_action_right_after_backup_create])
    def delete_unknown_backup(self):
        """Ensure deleting an unknown backup fails."""
        self.test_runner.run_delete_unknown_backup()

    @test(runs_after=[instance_action_right_after_backup_create])
    def backup_create_instance_invalid(self):
        """Ensure create backup fails with invalid instance id."""
        self.test_runner.run_backup_create_instance_invalid()

    @test(runs_after=[instance_action_right_after_backup_create])
    def backup_create_instance_not_found(self):
        """Ensure create backup fails with unknown instance id."""
        self.test_runner.run_backup_create_instance_not_found()


@test(depends_on_classes=[BackupCreateNegativeGroup],
      groups=[GROUP, groups.BACKUP, groups.BACKUP_CREATE_WAIT])
class BackupCreateWaitGroup(TestGroup):
    """Wait for Backup Create to Complete."""

    def __init__(self):
        super(BackupCreateWaitGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def backup_create_completed(self):
        """Check that the backup completes successfully."""
        self.test_runner.run_backup_create_completed()

    @test(depends_on=[backup_create_completed])
    def instance_goes_active(self):
        """Check that the instance goes active after the backup."""
        self.test_runner.run_instance_goes_active()

    @test(depends_on=[backup_create_completed])
    def backup_list(self):
        """Test list backups."""
        self.test_runner.run_backup_list()

    @test(depends_on=[backup_create_completed])
    def backup_list_filter_datastore(self):
        """Test list backups and filter by datastore."""
        self.test_runner.run_backup_list_filter_datastore()

    @test(depends_on=[backup_create_completed])
    def backup_list_filter_datastore_not_found(self):
        """Test list backups and filter by unknown datastore."""
        self.test_runner.run_backup_list_filter_datastore_not_found()

    @test(depends_on=[backup_create_completed])
    def backup_list_for_instance(self):
        """Test backup list for instance."""
        self.test_runner.run_backup_list_for_instance()

    @test(depends_on=[backup_create_completed])
    def backup_get(self):
        """Test backup show."""
        self.test_runner.run_backup_get()

    @test(depends_on=[backup_create_completed])
    def backup_get_unauthorized_user(self):
        """Ensure backup show fails for an unauthorized user."""
        self.test_runner.run_backup_get_unauthorized_user()


@test(depends_on_classes=[BackupCreateWaitGroup],
      groups=[GROUP, groups.BACKUP_INC, groups.BACKUP_INC_CREATE])
class BackupIncCreateGroup(TestGroup):
    """Test Backup Incremental Create functionality."""

    def __init__(self):
        super(BackupIncCreateGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def add_data_for_inc_backup_1(self):
        """Add data to instance for inc backup 1."""
        self.test_runner.run_add_data_for_inc_backup_1()

    @test(depends_on=[add_data_for_inc_backup_1])
    def verify_data_for_inc_backup_1(self):
        """Verify data in instance for inc backup 1."""
        self.test_runner.run_verify_data_for_inc_backup_1()

    @test(depends_on=[verify_data_for_inc_backup_1])
    def inc_backup_1(self):
        """Run incremental backup 1."""
        self.test_runner.run_inc_backup_1()

    @test(depends_on=[inc_backup_1])
    def wait_for_inc_backup_1(self):
        """Check that inc backup 1 completes successfully."""
        self.test_runner.run_wait_for_inc_backup_1()

    @test(depends_on=[wait_for_inc_backup_1])
    def add_data_for_inc_backup_2(self):
        """Add data to instance for inc backup 2."""
        self.test_runner.run_add_data_for_inc_backup_2()

    @test(depends_on=[add_data_for_inc_backup_2])
    def verify_data_for_inc_backup_2(self):
        """Verify data in instance for inc backup 2."""
        self.test_runner.run_verify_data_for_inc_backup_2()

    @test(depends_on=[wait_for_inc_backup_1],
          runs_after=[verify_data_for_inc_backup_2])
    def instance_goes_active_inc_1(self):
        """Check that the instance goes active after the inc 1 backup."""
        self.test_runner.run_instance_goes_active()

    @test(depends_on=[verify_data_for_inc_backup_2],
          runs_after=[instance_goes_active_inc_1])
    def inc_backup_2(self):
        """Run incremental backup 2."""
        self.test_runner.run_inc_backup_2()

    @test(depends_on=[inc_backup_2])
    def wait_for_inc_backup_2(self):
        """Check that inc backup 2 completes successfully."""
        self.test_runner.run_wait_for_inc_backup_2()

    @test(depends_on=[wait_for_inc_backup_2])
    def instance_goes_active_inc_2(self):
        """Check that the instance goes active after the inc 2 backup."""
        self.test_runner.run_instance_goes_active()


@test(depends_on_classes=[BackupIncCreateGroup],
      groups=[GROUP, groups.BACKUP_INST, groups.BACKUP_INST_CREATE])
class BackupInstCreateGroup(TestGroup):
    """Test Backup Instance Create functionality."""

    def __init__(self):
        super(BackupInstCreateGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def restore_from_backup(self):
        """Check that restoring an instance from a backup starts."""
        self.test_runner.run_restore_from_backup()


@test(depends_on_classes=[BackupInstCreateGroup],
      groups=[GROUP, groups.BACKUP_INST, groups.BACKUP_INST_CREATE_WAIT])
class BackupInstCreateWaitGroup(TestGroup):
    """Test Backup Instance Create completes."""

    def __init__(self):
        super(BackupInstCreateWaitGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def restore_from_backup_completed(self):
        """Wait until restoring an instance from a backup completes."""
        self.test_runner.run_restore_from_backup_completed()

    @test(depends_on=[restore_from_backup_completed])
    def verify_data_in_restored_instance(self):
        """Verify data in restored instance."""
        self.test_runner.run_verify_data_in_restored_instance()

    @test(depends_on=[restore_from_backup_completed])
    def verify_databases_in_restored_instance(self):
        """Verify databases in restored instance."""
        self.test_runner.run_verify_databases_in_restored_instance()


@test(depends_on_classes=[BackupInstCreateWaitGroup],
      groups=[GROUP, groups.BACKUP_INST, groups.BACKUP_INST_DELETE])
class BackupInstDeleteGroup(TestGroup):
    """Test Backup Instance Delete functionality."""

    def __init__(self):
        super(BackupInstDeleteGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def delete_restored_instance(self):
        """Test deleting the restored instance."""
        self.test_runner.run_delete_restored_instance()


@test(depends_on_classes=[BackupInstDeleteGroup],
      groups=[GROUP, groups.BACKUP_INST, groups.BACKUP_INST_DELETE_WAIT])
class BackupInstDeleteWaitGroup(TestGroup):
    """Test Backup Instance Delete completes."""

    def __init__(self):
        super(BackupInstDeleteWaitGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def wait_for_restored_instance_delete(self):
        """Wait until deleting the restored instance completes."""
        self.test_runner.run_wait_for_restored_instance_delete()


@test(depends_on_classes=[BackupInstDeleteWaitGroup],
      groups=[GROUP, groups.BACKUP_INC_INST,
              groups.BACKUP_INC_INST_CREATE])
class BackupIncInstCreateGroup(TestGroup):
    """Test Backup Incremental Instance Create functionality."""

    def __init__(self):
        super(BackupIncInstCreateGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def restore_from_inc_1_backup(self):
        """Check that restoring an instance from inc 1 backup starts."""
        self.test_runner.run_restore_from_inc_1_backup()


@test(depends_on_classes=[BackupIncInstCreateGroup],
      groups=[GROUP, groups.BACKUP_INC_INST,
              groups.BACKUP_INC_INST_CREATE_WAIT])
class BackupIncInstCreateWaitGroup(TestGroup):
    """Test Backup Incremental Instance Create completes."""

    def __init__(self):
        super(BackupIncInstCreateWaitGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def restore_from_inc_1_backup_completed(self):
        """Wait until restoring an inst from inc 1 backup completes."""
        self.test_runner.run_restore_from_inc_1_backup_completed()

    @test(depends_on=[restore_from_inc_1_backup_completed])
    def verify_data_in_restored_inc_1_instance(self):
        """Verify data in restored inc 1 instance."""
        self.test_runner.run_verify_data_in_restored_inc_1_instance()

    @test(depends_on=[restore_from_inc_1_backup_completed])
    def verify_databases_in_restored_inc_1_instance(self):
        """Verify databases in restored inc 1 instance."""
        self.test_runner.run_verify_databases_in_restored_inc_1_instance()


@test(depends_on_classes=[BackupIncInstCreateWaitGroup],
      groups=[GROUP, groups.BACKUP_INC_INST,
              groups.BACKUP_INC_INST_DELETE])
class BackupIncInstDeleteGroup(TestGroup):
    """Test Backup Incremental Instance Delete functionality."""

    def __init__(self):
        super(BackupIncInstDeleteGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def delete_restored_inc_1_instance(self):
        """Test deleting the restored inc 1 instance."""
        self.test_runner.run_delete_restored_inc_1_instance()


@test(depends_on_classes=[BackupIncInstDeleteGroup],
      groups=[GROUP, groups.BACKUP_INC_INST,
              groups.BACKUP_INC_INST_DELETE_WAIT])
class BackupIncInstDeleteWaitGroup(TestGroup):
    """Test Backup Incremental Instance Delete completes."""

    def __init__(self):
        super(BackupIncInstDeleteWaitGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def wait_for_restored_inc_1_instance_delete(self):
        """Wait until deleting the restored inc 1 instance completes."""
        self.test_runner.run_wait_for_restored_inc_1_instance_delete()


@test(depends_on_classes=[BackupIncInstDeleteWaitGroup],
      groups=[GROUP, groups.BACKUP_INC, groups.BACKUP_INC_DELETE])
class BackupIncDeleteGroup(TestGroup):
    """Test Backup Incremental Delete functionality."""

    def __init__(self):
        super(BackupIncDeleteGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def delete_inc_2_backup(self):
        """Test deleting the inc 2 backup."""
        # We only delete the inc 2 backup, as the inc 1 should be deleted
        # by the full backup delete that runs after.
        self.test_runner.run_delete_inc_2_backup()


@test(depends_on_classes=[BackupIncDeleteGroup],
      groups=[GROUP, groups.BACKUP, groups.BACKUP_DELETE])
class BackupDeleteGroup(TestGroup):
    """Test Backup Delete functionality."""

    def __init__(self):
        super(BackupDeleteGroup, self).__init__(
            BackupRunnerFactory.instance())

    @test
    def delete_backup_unauthorized_user(self):
        """Ensure deleting backup by an unauthorized user fails."""
        self.test_runner.run_delete_backup_unauthorized_user()

    @test(runs_after=[delete_backup_unauthorized_user])
    def delete_backup(self):
        """Test deleting the backup."""
        self.test_runner.run_delete_backup()

    @test(depends_on=[delete_backup])
    def check_for_incremental_backup(self):
        """Test that backup children are deleted."""
        self.test_runner.run_check_for_incremental_backup()

    @test
    def remove_backup_data_from_instance(self):
        """Remove the backup data from the original instance."""
        self.test_runner.run_remove_backup_data_from_instance()


@@ -1,517 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.cluster_group"
class ClusterRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'cluster_runners'
_runner_cls = 'ClusterRunner'
@test(groups=[GROUP, groups.CLUSTER_CFGGRP_CREATE],
runs_after_groups=[groups.MODULE_DELETE,
groups.CFGGRP_INST_DELETE,
groups.INST_ACTIONS_RESIZE_WAIT,
groups.DB_ACTION_INST_DELETE,
groups.USER_ACTION_DELETE,
groups.USER_ACTION_INST_DELETE,
groups.ROOT_ACTION_INST_DELETE,
groups.REPL_INST_DELETE_WAIT,
groups.INST_DELETE])
class ClusterConfigurationCreateGroup(TestGroup):
def __init__(self):
super(ClusterConfigurationCreateGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def create_initial_configuration(self):
"""Create a configuration group for a new cluster."""
self.test_runner.run_initial_configuration_create()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS, groups.CLUSTER_CREATE],
runs_after_groups=[groups.CLUSTER_CFGGRP_CREATE])
class ClusterCreateGroup(TestGroup):
def __init__(self):
super(ClusterCreateGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_create(self):
"""Create a cluster."""
self.test_runner.run_cluster_create()
@test(groups=[GROUP, groups.CLUSTER_CREATE_WAIT],
depends_on_groups=[groups.CLUSTER_CREATE],
runs_after_groups=[groups.MODULE_INST_DELETE_WAIT,
groups.CFGGRP_INST_DELETE_WAIT,
groups.DB_ACTION_INST_DELETE_WAIT,
groups.USER_ACTION_INST_DELETE_WAIT,
groups.ROOT_ACTION_INST_DELETE_WAIT,
groups.INST_DELETE_WAIT])
class ClusterCreateWaitGroup(TestGroup):
def __init__(self):
super(ClusterCreateWaitGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_create_wait(self):
"""Wait for cluster create to complete."""
self.test_runner.run_cluster_create_wait()
@test(depends_on=[cluster_create_wait])
def verify_initial_configuration(self):
"""Verify initial configuration values on the cluster."""
self.test_runner.run_verify_initial_configuration()
@test(depends_on=[cluster_create_wait])
def add_initial_cluster_data(self):
"""Add data to cluster."""
self.test_runner.run_add_initial_cluster_data()
@test(depends_on=[add_initial_cluster_data])
def verify_initial_cluster_data(self):
"""Verify the initial data exists on cluster."""
self.test_runner.run_verify_initial_cluster_data()
@test(depends_on=[cluster_create_wait])
def cluster_list(self):
"""List the clusters."""
self.test_runner.run_cluster_list()
@test(depends_on=[cluster_create_wait])
def cluster_show(self):
"""Show a cluster."""
self.test_runner.run_cluster_show()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_RESTART],
depends_on_groups=[groups.CLUSTER_CREATE_WAIT])
class ClusterRestartGroup(TestGroup):
def __init__(self):
super(ClusterRestartGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_restart(self):
"""Restart the cluster."""
self.test_runner.run_cluster_restart()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_RESTART_WAIT],
depends_on_groups=[groups.CLUSTER_ACTIONS_RESTART])
class ClusterRestartWaitGroup(TestGroup):
def __init__(self):
super(ClusterRestartWaitGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_restart_wait(self):
"""Wait for cluster restart to complete."""
self.test_runner.run_cluster_restart_wait()
@test(depends_on=[cluster_restart_wait])
def verify_initial_cluster_data(self):
"""Verify the initial data still exists after cluster restart."""
self.test_runner.run_verify_initial_cluster_data()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_ROOT_ENABLE],
depends_on_groups=[groups.CLUSTER_CREATE_WAIT],
runs_after_groups=[groups.CLUSTER_ACTIONS_RESTART_WAIT])
class ClusterRootEnableGroup(TestGroup):
def __init__(self):
super(ClusterRootEnableGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_root_enable(self):
"""Root Enable."""
self.test_runner.run_cluster_root_enable()
@test(depends_on=[cluster_root_enable])
def verify_cluster_root_enable(self):
"""Verify Root Enable."""
self.test_runner.run_verify_cluster_root_enable()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_GROW_SHRINK,
groups.CLUSTER_ACTIONS_GROW],
depends_on_groups=[groups.CLUSTER_CREATE_WAIT],
runs_after_groups=[groups.CLUSTER_ACTIONS_ROOT_ENABLE])
class ClusterGrowGroup(TestGroup):
def __init__(self):
super(ClusterGrowGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_grow(self):
"""Grow cluster."""
self.test_runner.run_cluster_grow()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_GROW_SHRINK,
groups.CLUSTER_ACTIONS_GROW_WAIT],
depends_on_groups=[groups.CLUSTER_ACTIONS_GROW])
class ClusterGrowWaitGroup(TestGroup):
def __init__(self):
super(ClusterGrowWaitGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_grow_wait(self):
"""Wait for cluster grow to complete."""
self.test_runner.run_cluster_grow_wait()
@test(depends_on=[cluster_grow_wait])
def verify_initial_configuration(self):
"""Verify initial configuration values on the cluster."""
self.test_runner.run_verify_initial_configuration()
@test(depends_on=[cluster_grow_wait])
def verify_initial_cluster_data_after_grow(self):
"""Verify the initial data still exists after cluster grow."""
self.test_runner.run_verify_initial_cluster_data()
@test(depends_on=[cluster_grow_wait],
runs_after=[verify_initial_cluster_data_after_grow])
def add_grow_cluster_data(self):
"""Add more data to cluster after grow."""
self.test_runner.run_add_grow_cluster_data()
@test(depends_on=[add_grow_cluster_data])
def verify_grow_cluster_data(self):
"""Verify the data added after cluster grow."""
self.test_runner.run_verify_grow_cluster_data()
@test(depends_on=[add_grow_cluster_data],
runs_after=[verify_grow_cluster_data])
def remove_grow_cluster_data(self):
"""Remove the data added after cluster grow."""
self.test_runner.run_remove_grow_cluster_data()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_ROOT_ACTIONS,
groups.CLUSTER_ACTIONS_ROOT_GROW],
depends_on_groups=[groups.CLUSTER_ACTIONS_GROW_WAIT])
class ClusterRootEnableGrowGroup(TestGroup):
def __init__(self):
super(ClusterRootEnableGrowGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def verify_cluster_root_enable_after_grow(self):
"""Verify Root Enabled after grow."""
self.test_runner.run_verify_cluster_root_enable()
@test(groups=[GROUP, groups.CLUSTER_UPGRADE],
depends_on_groups=[groups.CLUSTER_CREATE_WAIT],
runs_after_groups=[groups.CLUSTER_ACTIONS_GROW_WAIT,
groups.CLUSTER_ACTIONS_ROOT_GROW])
class ClusterUpgradeGroup(TestGroup):
def __init__(self):
super(ClusterUpgradeGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_upgrade(self):
"""Upgrade cluster."""
self.test_runner.run_cluster_upgrade()
@test(groups=[GROUP, groups.CLUSTER_UPGRADE_WAIT],
depends_on_groups=[groups.CLUSTER_UPGRADE])
class ClusterUpgradeWaitGroup(TestGroup):
def __init__(self):
super(ClusterUpgradeWaitGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_upgrade_wait(self):
"""Wait for cluster upgrade to complete."""
self.test_runner.run_cluster_upgrade_wait()
@test(depends_on=[cluster_upgrade_wait])
def verify_initial_configuration(self):
"""Verify initial configuration values on the cluster."""
self.test_runner.run_verify_initial_configuration()
@test(depends_on=[cluster_upgrade_wait])
def verify_initial_cluster_data_after_upgrade(self):
"""Verify the initial data still exists after cluster upgrade."""
self.test_runner.run_verify_initial_cluster_data()
@test(depends_on=[cluster_upgrade_wait],
runs_after=[verify_initial_cluster_data_after_upgrade])
def add_upgrade_cluster_data_after_upgrade(self):
"""Add more data to cluster after upgrade."""
self.test_runner.run_add_upgrade_cluster_data()
@test(depends_on=[add_upgrade_cluster_data_after_upgrade])
def verify_upgrade_cluster_data_after_upgrade(self):
"""Verify the data added after cluster upgrade."""
self.test_runner.run_verify_upgrade_cluster_data()
@test(depends_on=[add_upgrade_cluster_data_after_upgrade],
runs_after=[verify_upgrade_cluster_data_after_upgrade])
def remove_upgrade_cluster_data_after_upgrade(self):
"""Remove the data added after cluster upgrade."""
self.test_runner.run_remove_upgrade_cluster_data()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_GROW_SHRINK,
groups.CLUSTER_ACTIONS_SHRINK],
depends_on_groups=[groups.CLUSTER_ACTIONS_GROW_WAIT],
runs_after_groups=[groups.CLUSTER_UPGRADE_WAIT])
class ClusterShrinkGroup(TestGroup):
def __init__(self):
super(ClusterShrinkGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_shrink(self):
"""Shrink cluster."""
self.test_runner.run_cluster_shrink()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_SHRINK_WAIT],
depends_on_groups=[groups.CLUSTER_ACTIONS_SHRINK])
class ClusterShrinkWaitGroup(TestGroup):
def __init__(self):
super(ClusterShrinkWaitGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_shrink_wait(self):
"""Wait for the cluster shrink to complete."""
self.test_runner.run_cluster_shrink_wait()
@test(depends_on=[cluster_shrink_wait])
def verify_initial_configuration(self):
"""Verify initial configuration values on the cluster."""
self.test_runner.run_verify_initial_configuration()
@test(depends_on=[cluster_shrink_wait])
def verify_initial_cluster_data_after_shrink(self):
"""Verify the initial data still exists after cluster shrink."""
self.test_runner.run_verify_initial_cluster_data()
@test(runs_after=[verify_initial_cluster_data_after_shrink])
def add_shrink_cluster_data(self):
"""Add more data to cluster after shrink."""
self.test_runner.run_add_shrink_cluster_data()
@test(depends_on=[add_shrink_cluster_data])
def verify_shrink_cluster_data(self):
"""Verify the data added after cluster shrink."""
self.test_runner.run_verify_shrink_cluster_data()
@test(depends_on=[add_shrink_cluster_data],
runs_after=[verify_shrink_cluster_data])
def remove_shrink_cluster_data(self):
"""Remove the data added after cluster shrink."""
self.test_runner.run_remove_shrink_cluster_data()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_ROOT_ACTIONS,
groups.CLUSTER_ACTIONS_ROOT_SHRINK],
depends_on_groups=[groups.CLUSTER_ACTIONS_SHRINK_WAIT])
class ClusterRootEnableShrinkGroup(TestGroup):
def __init__(self):
super(ClusterRootEnableShrinkGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def verify_cluster_root_enable_after_shrink(self):
"""Verify Root Enable after shrink."""
self.test_runner.run_verify_cluster_root_enable()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_ACTIONS_CFGGRP_ACTIONS],
depends_on_groups=[groups.CLUSTER_CREATE_WAIT],
runs_after_groups=[groups.CLUSTER_ACTIONS_ROOT_SHRINK])
class ClusterConfigurationActionsGroup(TestGroup):
def __init__(self):
super(ClusterConfigurationActionsGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def detach_initial_configuration(self):
"""Detach initial configuration group."""
self.test_runner.run_detach_initial_configuration()
@test(depends_on=[detach_initial_configuration])
def restart_cluster_after_detach(self):
"""Restarting cluster after configuration change."""
self.test_runner.restart_after_configuration_change()
@test
def create_dynamic_configuration(self):
"""Create a configuration group with only dynamic entries."""
self.test_runner.run_create_dynamic_configuration()
@test
def create_non_dynamic_configuration(self):
"""Create a configuration group with only non-dynamic entries."""
self.test_runner.run_create_non_dynamic_configuration()
@test(depends_on=[create_dynamic_configuration,
restart_cluster_after_detach])
def attach_dynamic_configuration(self):
"""Test attach dynamic group."""
self.test_runner.run_attach_dynamic_configuration()
@test(depends_on=[attach_dynamic_configuration])
def verify_dynamic_configuration(self):
"""Verify dynamic values on the cluster."""
self.test_runner.run_verify_dynamic_configuration()
@test(depends_on=[attach_dynamic_configuration],
runs_after=[verify_dynamic_configuration])
def detach_dynamic_configuration(self):
"""Test detach dynamic group."""
self.test_runner.run_detach_dynamic_configuration()
@test(depends_on=[create_non_dynamic_configuration,
detach_initial_configuration],
runs_after=[detach_dynamic_configuration])
def attach_non_dynamic_configuration(self):
"""Test attach non-dynamic group."""
self.test_runner.run_attach_non_dynamic_configuration()
@test(depends_on=[attach_non_dynamic_configuration])
def restart_cluster_after_attach(self):
"""Restarting cluster after configuration change."""
self.test_runner.restart_after_configuration_change()
@test(depends_on=[restart_cluster_after_attach])
def verify_non_dynamic_configuration(self):
"""Verify non-dynamic values on the cluster."""
self.test_runner.run_verify_non_dynamic_configuration()
@test(depends_on=[attach_non_dynamic_configuration],
runs_after=[verify_non_dynamic_configuration])
def detach_non_dynamic_configuration(self):
"""Test detach non-dynamic group."""
self.test_runner.run_detach_non_dynamic_configuration()
@test(runs_after=[detach_dynamic_configuration,
detach_non_dynamic_configuration])
def verify_initial_cluster_data(self):
"""Verify the initial data still exists."""
self.test_runner.run_verify_initial_cluster_data()
@test(depends_on=[detach_dynamic_configuration])
def delete_dynamic_configuration(self):
"""Test delete dynamic configuration group."""
self.test_runner.run_delete_dynamic_configuration()
@test(depends_on=[detach_non_dynamic_configuration])
def delete_non_dynamic_configuration(self):
"""Test delete non-dynamic configuration group."""
self.test_runner.run_delete_non_dynamic_configuration()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_DELETE],
depends_on_groups=[groups.CLUSTER_CREATE_WAIT],
runs_after_groups=[groups.CLUSTER_ACTIONS_ROOT_ENABLE,
groups.CLUSTER_ACTIONS_ROOT_GROW,
groups.CLUSTER_ACTIONS_ROOT_SHRINK,
groups.CLUSTER_ACTIONS_GROW_WAIT,
groups.CLUSTER_ACTIONS_SHRINK_WAIT,
groups.CLUSTER_UPGRADE_WAIT,
groups.CLUSTER_ACTIONS_RESTART_WAIT,
groups.CLUSTER_CFGGRP_CREATE,
groups.CLUSTER_ACTIONS_CFGGRP_ACTIONS])
class ClusterDeleteGroup(TestGroup):
def __init__(self):
super(ClusterDeleteGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def remove_initial_cluster_data(self):
"""Remove the initial data from cluster."""
self.test_runner.run_remove_initial_cluster_data()
@test(runs_after=[remove_initial_cluster_data])
def cluster_delete(self):
"""Delete an existing cluster."""
self.test_runner.run_cluster_delete()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_DELETE_WAIT],
depends_on_groups=[groups.CLUSTER_DELETE])
class ClusterDeleteWaitGroup(TestGroup):
def __init__(self):
super(ClusterDeleteWaitGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def cluster_delete_wait(self):
"""Wait for the existing cluster to be gone."""
self.test_runner.run_cluster_delete_wait()
@test(groups=[GROUP, groups.CLUSTER_ACTIONS,
groups.CLUSTER_CFGGRP_DELETE],
depends_on_groups=[groups.CLUSTER_CFGGRP_CREATE],
runs_after_groups=[groups.CLUSTER_DELETE_WAIT])
class ClusterConfigurationDeleteGroup(TestGroup):
def __init__(self):
super(ClusterConfigurationDeleteGroup, self).__init__(
ClusterRunnerFactory.instance())
@test
def delete_initial_configuration(self):
"""Delete initial configuration group."""
self.test_runner.run_delete_initial_configuration()
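
The `@test(depends_on=..., runs_after=...)` decorators used throughout these deleted groups let Proboscis compute a single global execution order before any test runs. Conceptually, both keywords contribute "run X before Y" edges to a dependency graph that is then topologically sorted (with `depends_on` additionally skipping dependents when a predecessor fails). A minimal stdlib-only sketch of that ordering, using a hypothetical subset of the cluster-group names above:

```python
from graphlib import TopologicalSorter

# Both depends_on and runs_after mean "these predecessors run first";
# depends_on additionally skips the test if a predecessor failed.
# Hypothetical subset of the dependency edges declared above:
predecessors = {
    'verify_initial_cluster_data': {'detach_dynamic_configuration',
                                    'detach_non_dynamic_configuration'},
    'delete_dynamic_configuration': {'detach_dynamic_configuration'},
    'delete_non_dynamic_configuration': {'detach_non_dynamic_configuration'},
}

# static_order() yields every test with its predecessors first.
order = list(TopologicalSorter(predecessors).static_order())
```

This is only an illustration of the scheduling model, not the Proboscis internals; the real runner also groups tests by class and by named group.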


@@ -1,301 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.configuration_group"
class ConfigurationRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'configuration_runners'
_runner_cls = 'ConfigurationRunner'
@test(groups=[GROUP, groups.CFGGRP_CREATE],
depends_on_groups=[groups.BACKUP_DELETE])
class ConfigurationCreateGroup(TestGroup):
"""Test Configuration Group functionality."""
def __init__(self):
super(ConfigurationCreateGroup, self).__init__(
ConfigurationRunnerFactory.instance())
@test
def create_bad_group(self):
"""Ensure a group with bad entries fails create."""
self.test_runner.run_create_bad_group()
@test
def create_invalid_groups(self):
"""Ensure a group with invalid entries fails create."""
self.test_runner.run_create_invalid_groups()
@test
def delete_non_existent_group(self):
"""Ensure delete non-existent group fails."""
self.test_runner.run_delete_non_existent_group()
@test
def delete_bad_group_id(self):
"""Ensure delete bad group fails."""
self.test_runner.run_delete_bad_group_id()
@test
def create_dynamic_group(self):
"""Create a group with only dynamic entries."""
self.test_runner.run_create_dynamic_group()
@test
def create_non_dynamic_group(self):
"""Create a group with only non-dynamic entries."""
self.test_runner.run_create_non_dynamic_group()
@test(depends_on=[create_dynamic_group, create_non_dynamic_group])
def list_configuration_groups(self):
"""Test list configuration groups."""
self.test_runner.run_list_configuration_groups()
@test(depends_on=[create_dynamic_group])
def dynamic_configuration_show(self):
"""Test show on dynamic group."""
self.test_runner.run_dynamic_configuration_show()
@test(depends_on=[create_non_dynamic_group])
def non_dynamic_configuration_show(self):
"""Test show on non-dynamic group."""
self.test_runner.run_non_dynamic_configuration_show()
@test(depends_on=[create_dynamic_group])
def dynamic_conf_get_unauthorized_user(self):
"""Ensure show dynamic fails with unauthorized user."""
self.test_runner.run_dynamic_conf_get_unauthorized_user()
@test(depends_on=[create_non_dynamic_group])
def non_dynamic_conf_get_unauthorized_user(self):
"""Ensure show non-dynamic fails with unauthorized user."""
self.test_runner.run_non_dynamic_conf_get_unauthorized_user()
@test(depends_on_classes=[ConfigurationCreateGroup],
groups=[GROUP, groups.CFGGRP_INST,
groups.CFGGRP_INST_CREATE])
class ConfigurationInstCreateGroup(TestGroup):
"""Test Instance Configuration Group Create functionality."""
def __init__(self):
super(ConfigurationInstCreateGroup, self).__init__(
ConfigurationRunnerFactory.instance())
@test
def attach_non_existent_group(self):
"""Ensure attach non-existent group fails."""
self.test_runner.run_attach_non_existent_group()
@test
def attach_non_existent_group_to_non_existent_inst(self):
"""Ensure attach non-existent group to non-existent inst fails."""
self.test_runner.run_attach_non_existent_group_to_non_existent_inst()
@test
def detach_group_with_none_attached(self):
"""Test detach with none attached."""
self.test_runner.run_detach_group_with_none_attached()
@test
def attach_dynamic_group_to_non_existent_inst(self):
"""Ensure attach dynamic group to non-existent inst fails."""
self.test_runner.run_attach_dynamic_group_to_non_existent_inst()
@test
def attach_non_dynamic_group_to_non_existent_inst(self):
"""Ensure attach non-dynamic group to non-existent inst fails."""
self.test_runner.run_attach_non_dynamic_group_to_non_existent_inst()
@test
def list_dynamic_inst_conf_groups_before(self):
"""Count list instances for dynamic group before attach."""
self.test_runner.run_list_dynamic_inst_conf_groups_before()
@test(depends_on=[list_dynamic_inst_conf_groups_before],
runs_after=[attach_non_existent_group,
detach_group_with_none_attached])
def attach_dynamic_group(self):
"""Test attach dynamic group."""
self.test_runner.run_attach_dynamic_group()
@test(depends_on=[attach_dynamic_group])
def verify_dynamic_values(self):
"""Verify dynamic values on the instance."""
self.test_runner.run_verify_dynamic_values()
@test(depends_on=[attach_dynamic_group],
runs_after=[verify_dynamic_values])
def list_dynamic_inst_conf_groups_after(self):
"""Test list instances for dynamic group after attach."""
self.test_runner.run_list_dynamic_inst_conf_groups_after()
@test(depends_on=[attach_dynamic_group],
runs_after=[list_dynamic_inst_conf_groups_after])
def attach_dynamic_group_again(self):
"""Ensure attaching dynamic group again fails."""
self.test_runner.run_attach_dynamic_group_again()
@test(depends_on=[attach_dynamic_group],
runs_after=[attach_dynamic_group_again])
def delete_attached_dynamic_group(self):
"""Ensure deleting attached dynamic group fails."""
self.test_runner.run_delete_attached_dynamic_group()
@test(depends_on=[attach_dynamic_group],
runs_after=[delete_attached_dynamic_group])
def update_dynamic_group(self):
"""Test update dynamic group."""
self.test_runner.run_update_dynamic_group()
@test(depends_on=[attach_dynamic_group],
runs_after=[update_dynamic_group])
def detach_dynamic_group(self):
"""Test detach dynamic group."""
self.test_runner.run_detach_dynamic_group()
@test(runs_after=[detach_dynamic_group])
def list_non_dynamic_inst_conf_groups_before(self):
"""Count list instances for non-dynamic group before attach."""
self.test_runner.run_list_non_dynamic_inst_conf_groups_before()
@test(runs_after=[list_non_dynamic_inst_conf_groups_before,
attach_non_existent_group])
def attach_non_dynamic_group(self):
"""Test attach non-dynamic group."""
self.test_runner.run_attach_non_dynamic_group()
@test(depends_on=[attach_non_dynamic_group])
def verify_non_dynamic_values(self):
"""Verify non-dynamic values on the instance."""
self.test_runner.run_verify_non_dynamic_values()
@test(depends_on=[attach_non_dynamic_group],
runs_after=[verify_non_dynamic_values])
def list_non_dynamic_inst_conf_groups_after(self):
"""Test list instances for non-dynamic group after attach."""
self.test_runner.run_list_non_dynamic_inst_conf_groups_after()
@test(depends_on=[attach_non_dynamic_group],
runs_after=[list_non_dynamic_inst_conf_groups_after])
def attach_non_dynamic_group_again(self):
"""Ensure attaching non-dynamic group again fails."""
self.test_runner.run_attach_non_dynamic_group_again()
@test(depends_on=[attach_non_dynamic_group],
runs_after=[attach_non_dynamic_group_again])
def delete_attached_non_dynamic_group(self):
"""Ensure deleting attached non-dynamic group fails."""
self.test_runner.run_delete_attached_non_dynamic_group()
@test(depends_on=[attach_non_dynamic_group],
runs_after=[delete_attached_non_dynamic_group])
def update_non_dynamic_group(self):
"""Test update non-dynamic group."""
self.test_runner.run_update_non_dynamic_group()
@test(depends_on=[attach_non_dynamic_group],
runs_after=[update_non_dynamic_group])
def detach_non_dynamic_group(self):
"""Test detach non-dynamic group."""
self.test_runner.run_detach_non_dynamic_group()
@test(runs_after=[detach_non_dynamic_group])
def create_instance_with_conf(self):
"""Test create instance with conf group."""
self.test_runner.run_create_instance_with_conf()
@test(depends_on_classes=[ConfigurationInstCreateGroup],
groups=[GROUP, groups.CFGGRP_INST,
groups.CFGGRP_INST_CREATE_WAIT])
class ConfigurationInstCreateWaitGroup(TestGroup):
"""Test that Instance Configuration Group Create Completes."""
def __init__(self):
super(ConfigurationInstCreateWaitGroup, self).__init__(
ConfigurationRunnerFactory.instance())
@test
def wait_for_conf_instance(self):
"""Test create instance with conf group completes."""
self.test_runner.run_wait_for_conf_instance()
@test(depends_on=[wait_for_conf_instance])
def verify_instance_values(self):
"""Verify configuration values on the instance."""
self.test_runner.run_verify_instance_values()
@test(depends_on_classes=[ConfigurationInstCreateWaitGroup],
groups=[GROUP, groups.CFGGRP_INST,
groups.CFGGRP_INST_DELETE])
class ConfigurationInstDeleteGroup(TestGroup):
"""Test Instance Configuration Group Delete functionality."""
def __init__(self):
super(ConfigurationInstDeleteGroup, self).__init__(
ConfigurationRunnerFactory.instance())
@test
def delete_conf_instance(self):
"""Test delete instance with conf group."""
self.test_runner.run_delete_conf_instance()
@test(depends_on_classes=[ConfigurationInstDeleteGroup],
groups=[GROUP, groups.CFGGRP_INST,
groups.CFGGRP_INST_DELETE_WAIT])
class ConfigurationInstDeleteWaitGroup(TestGroup):
"""Test that Instance Configuration Group Delete Completes."""
def __init__(self):
super(ConfigurationInstDeleteWaitGroup, self).__init__(
ConfigurationRunnerFactory.instance())
@test
def wait_for_delete_conf_instance(self):
"""Wait for delete instance with conf group to complete."""
self.test_runner.run_wait_for_delete_conf_instance()
@test(depends_on_classes=[ConfigurationInstDeleteWaitGroup],
groups=[GROUP, groups.CFGGRP_DELETE])
class ConfigurationDeleteGroup(TestGroup):
"""Test Configuration Group Delete functionality."""
def __init__(self):
super(ConfigurationDeleteGroup, self).__init__(
ConfigurationRunnerFactory.instance())
@test
def delete_dynamic_group(self):
"""Test delete dynamic group."""
self.test_runner.run_delete_dynamic_group()
@test
def delete_non_dynamic_group(self):
"""Test delete non-dynamic group."""
self.test_runner.run_delete_non_dynamic_group()
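
Each deleted scenario file pairs its test groups with a small factory (here `ConfigurationRunnerFactory`) that resolves `_runner_ns`/`_runner_cls` to a concrete runner class and hands every group in the file the same instance. A rough sketch of that lookup, assuming the base class does a lazy dynamic import and caches a singleton (the stdlib module and class below are stand-ins, not real Trove runner names):

```python
import importlib


class RunnerFactory:
    """Sketch of the dynamic-lookup pattern behind the deleted
    *RunnerFactory classes; not the actual Trove implementation."""
    _runner_ns = None   # module to import, e.g. 'configuration_runners'
    _runner_cls = None  # class name to look up inside that module
    _instance = None

    @classmethod
    def instance(cls):
        # Import lazily and cache on the subclass, so every test
        # group in a file shares one runner object.
        if cls._instance is None:
            module = importlib.import_module(cls._runner_ns)
            cls._instance = getattr(module, cls._runner_cls)()
        return cls._instance


class DemoRunnerFactory(RunnerFactory):
    # Stand-in target: any importable module/class behaves the same.
    _runner_ns = 'collections'
    _runner_cls = 'OrderedDict'
```

Because only the two string attributes vary per file, adding a new datastore scenario meant declaring a two-line factory subclass rather than wiring runners by hand.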


@@ -1,179 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.database_actions_group"
class DatabaseActionsRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'database_actions_runners'
_runner_cls = 'DatabaseActionsRunner'
class InstanceCreateRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_create_runners'
_runner_cls = 'InstanceCreateRunner'
@test(depends_on_groups=[groups.CFGGRP_DELETE],
groups=[GROUP, groups.DB_ACTION_CREATE])
class DatabaseActionsCreateGroup(TestGroup):
"""Test Database Actions Create functionality."""
def __init__(self):
super(DatabaseActionsCreateGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
@test
def create_databases(self):
"""Create databases on an existing instance."""
self.test_runner.run_databases_create()
@test(depends_on=[create_databases])
def list_databases(self):
"""List the created databases."""
self.test_runner.run_databases_list()
@test(depends_on=[create_databases],
runs_after=[list_databases])
def create_database_with_no_attributes(self):
"""Ensure creating a database with blank specification fails."""
self.test_runner.run_database_create_with_no_attributes()
@test(depends_on=[create_databases],
runs_after=[create_database_with_no_attributes])
def create_database_with_blank_name(self):
"""Ensure creating a database with blank name fails."""
self.test_runner.run_database_create_with_blank_name()
@test(depends_on=[create_databases],
runs_after=[create_database_with_blank_name])
def create_existing_database(self):
"""Ensure creating an existing database fails."""
self.test_runner.run_existing_database_create()
@test(depends_on_classes=[DatabaseActionsCreateGroup],
groups=[GROUP, groups.DB_ACTION_DELETE])
class DatabaseActionsDeleteGroup(TestGroup):
"""Test Database Actions Delete functionality."""
def __init__(self):
super(DatabaseActionsDeleteGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
@test
def delete_database(self):
"""Delete the created databases."""
self.test_runner.run_database_delete()
@test(runs_after=[delete_database])
def delete_nonexisting_database(self):
"""Delete non-existing databases."""
self.test_runner.run_nonexisting_database_delete()
@test(runs_after=[delete_nonexisting_database])
def create_system_database(self):
"""Ensure creating a system database fails."""
self.test_runner.run_system_database_create()
@test(runs_after=[create_system_database])
def delete_system_database(self):
"""Ensure deleting a system database fails."""
self.test_runner.run_system_database_delete()
@test(depends_on_classes=[DatabaseActionsDeleteGroup],
groups=[GROUP, groups.DB_ACTION_INST, groups.DB_ACTION_INST_CREATE])
class DatabaseActionsInstCreateGroup(TestGroup):
"""Test Database Actions Instance Create functionality."""
def __init__(self):
super(DatabaseActionsInstCreateGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def create_initialized_instance(self):
"""Create an instance with initial databases."""
self.instance_create_runner.run_initialized_instance_create(
with_dbs=True, with_users=False, configuration_id=None,
name_suffix='_db')
@test(depends_on_classes=[DatabaseActionsInstCreateGroup],
groups=[GROUP, groups.DB_ACTION_INST, groups.DB_ACTION_INST_CREATE_WAIT])
class DatabaseActionsInstCreateWaitGroup(TestGroup):
"""Wait for Database Actions Instance Create to complete."""
def __init__(self):
super(DatabaseActionsInstCreateWaitGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def wait_for_instances(self):
"""Waiting for database instance to become active."""
self.instance_create_runner.run_wait_for_init_instance()
@test(depends_on=[wait_for_instances])
def add_initialized_instance_data(self):
"""Add data to the database instance."""
self.instance_create_runner.run_add_initialized_instance_data()
@test(runs_after=[add_initialized_instance_data])
def validate_initialized_instance(self):
"""Validate the database instance data and properties."""
self.instance_create_runner.run_validate_initialized_instance()
@test(depends_on_classes=[DatabaseActionsInstCreateWaitGroup],
groups=[GROUP, groups.DB_ACTION_INST, groups.DB_ACTION_INST_DELETE])
class DatabaseActionsInstDeleteGroup(TestGroup):
"""Test Database Actions Instance Delete functionality."""
def __init__(self):
super(DatabaseActionsInstDeleteGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def delete_initialized_instance(self):
"""Delete the database instance."""
self.instance_create_runner.run_initialized_instance_delete()
@test(depends_on_classes=[DatabaseActionsInstDeleteGroup],
groups=[GROUP, groups.DB_ACTION_INST, groups.DB_ACTION_INST_DELETE_WAIT])
class DatabaseActionsInstDeleteWaitGroup(TestGroup):
"""Wait for Database Actions Instance Delete to complete."""
def __init__(self):
super(DatabaseActionsInstDeleteWaitGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def wait_for_delete_initialized_instance(self):
"""Wait for the database instance to delete."""
self.instance_create_runner.run_wait_for_init_delete()


@@ -1,324 +0,0 @@
# Copyright 2015 Tesora Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.guest_log_group"
class GuestLogRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'guest_log_runners'
_runner_cls = 'GuestLogRunner'
@test(depends_on_groups=[groups.DB_ACTION_INST_DELETE_WAIT],
groups=[GROUP, groups.INST_LOG])
class GuestLogGroup(TestGroup):
"""Test Guest Log functionality."""
def __init__(self):
super(GuestLogGroup, self).__init__(
GuestLogRunnerFactory.instance())
@test
def test_log_list(self):
"""Test that log-list works."""
self.test_runner.run_test_log_list()
@test
def test_admin_log_list(self):
"""Test that log-list works for admin user."""
self.test_runner.run_test_admin_log_list()
@test
def test_log_enable_sys(self):
"""Ensure log-enable on SYS log fails."""
self.test_runner.run_test_log_enable_sys()
@test
def test_log_disable_sys(self):
"""Ensure log-disable on SYS log fails."""
self.test_runner.run_test_log_disable_sys()
@test
def test_log_show_unauth_user(self):
"""Ensure log-show by unauth client on USER log fails."""
self.test_runner.run_test_log_show_unauth_user()
@test
def test_log_list_unauth_user(self):
"""Ensure log-list by unauth client on USER log fails."""
self.test_runner.run_test_log_list_unauth_user()
@test
def test_log_generator_unauth_user(self):
"""Ensure log-generator by unauth client on USER log fails."""
self.test_runner.run_test_log_generator_unauth_user()
@test
def test_log_generator_publish_unauth_user(self):
"""Ensure log-generator by unauth client with publish fails."""
self.test_runner.run_test_log_generator_publish_unauth_user()
@test
def test_log_show_unexposed_user(self):
"""Ensure log-show on unexposed log fails for auth client."""
self.test_runner.run_test_log_show_unexposed_user()
@test
def test_log_enable_unexposed_user(self):
"""Ensure log-enable on unexposed log fails for auth client."""
self.test_runner.run_test_log_enable_unexposed_user()
@test
def test_log_disable_unexposed_user(self):
"""Ensure log-disable on unexposed log fails for auth client."""
self.test_runner.run_test_log_disable_unexposed_user()
@test
def test_log_publish_unexposed_user(self):
"""Ensure log-publish on unexposed log fails for auth client."""
self.test_runner.run_test_log_publish_unexposed_user()
@test
def test_log_discard_unexposed_user(self):
"""Ensure log-discard on unexposed log fails for auth client."""
self.test_runner.run_test_log_discard_unexposed_user()
# USER log tests
@test(runs_after=[test_log_list, test_admin_log_list])
def test_log_show(self):
"""Test that log-show works on USER log."""
self.test_runner.run_test_log_show()
@test(runs_after=[test_log_show])
def test_log_enable_user(self):
"""Test log-enable on USER log."""
self.test_runner.run_test_log_enable_user()
@test(runs_after=[test_log_enable_user])
def test_log_enable_flip_user(self):
"""Test that flipping restart-required log-enable works."""
self.test_runner.run_test_log_enable_flip_user()
@test(runs_after=[test_log_enable_flip_user])
def test_restart_datastore(self):
"""Test restart datastore if required."""
self.test_runner.run_test_restart_datastore()
@test(runs_after=[test_restart_datastore])
def test_wait_for_restart(self):
"""Wait for restart to complete."""
self.test_runner.run_test_wait_for_restart()
@test(runs_after=[test_wait_for_restart])
def test_log_publish_user(self):
"""Test log-publish on USER log."""
self.test_runner.run_test_log_publish_user()
@test(runs_after=[test_log_publish_user])
def test_add_data(self):
"""Add data for second log-publish on USER log."""
self.test_runner.run_test_add_data()
@test(runs_after=[test_add_data])
def test_verify_data(self):
"""Verify data for second log-publish on USER log."""
self.test_runner.run_test_verify_data()
@test(runs_after=[test_verify_data])
def test_log_publish_again_user(self):
"""Test log-publish again on USER log."""
self.test_runner.run_test_log_publish_again_user()
@test(runs_after=[test_log_publish_again_user])
def test_log_generator_user(self):
"""Test log-generator on USER log."""
self.test_runner.run_test_log_generator_user()
@test(runs_after=[test_log_generator_user])
def test_log_generator_publish_user(self):
"""Test log-generator with publish on USER log."""
self.test_runner.run_test_log_generator_publish_user()
@test(runs_after=[test_log_generator_publish_user])
def test_log_generator_swift_client_user(self):
"""Test log-generator on USER log with passed-in Swift client."""
self.test_runner.run_test_log_generator_swift_client_user()
@test(runs_after=[test_log_generator_swift_client_user])
def test_add_data_again(self):
"""Add more data for log-generator row-by-row test on USER log."""
self.test_runner.run_test_add_data_again()
@test(runs_after=[test_add_data_again])
def test_verify_data_again(self):
"""Verify data for log-generator row-by-row test on USER log."""
self.test_runner.run_test_verify_data_again()
@test(runs_after=[test_verify_data_again])
def test_log_generator_user_by_row(self):
"""Test log-generator on USER log row-by-row."""
self.test_runner.run_test_log_generator_user_by_row()
@test(depends_on=[test_log_publish_user],
runs_after=[test_log_generator_user_by_row])
def test_log_save_user(self):
"""Test log-save on USER log."""
self.test_runner.run_test_log_save_user()
@test(depends_on=[test_log_publish_user],
runs_after=[test_log_save_user])
def test_log_save_publish_user(self):
"""Test log-save on USER log with publish."""
self.test_runner.run_test_log_save_publish_user()
@test(runs_after=[test_log_save_publish_user])
def test_log_discard_user(self):
"""Test log-discard on USER log."""
self.test_runner.run_test_log_discard_user()
@test(runs_after=[test_log_discard_user])
def test_log_disable_user(self):
"""Test log-disable on USER log."""
self.test_runner.run_test_log_disable_user()
@test(runs_after=[test_log_disable_user])
def test_restart_datastore_again(self):
"""Test restart datastore again if required."""
self.test_runner.run_test_restart_datastore()
@test(runs_after=[test_restart_datastore_again])
def test_wait_for_restart_again(self):
"""Wait for restart to complete again."""
self.test_runner.run_test_wait_for_restart()
@test(runs_after=[test_wait_for_restart_again])
def test_log_show_after_stop_details(self):
"""Get log-show details before adding data."""
self.test_runner.run_test_log_show_after_stop_details()
@test(runs_after=[test_log_show_after_stop_details])
def test_add_data_again_after_stop(self):
"""Add more data to ensure logging has stopped on USER log."""
self.test_runner.run_test_add_data_again_after_stop()
@test(runs_after=[test_add_data_again_after_stop])
def test_verify_data_again_after_stop(self):
"""Verify data for stopped logging on USER log."""
self.test_runner.run_test_verify_data_again_after_stop()
@test(runs_after=[test_verify_data_again_after_stop])
def test_log_show_after_stop(self):
"""Test that log-show has same values on USER log."""
self.test_runner.run_test_log_show_after_stop()
@test(runs_after=[test_log_show_after_stop])
def test_log_enable_user_after_stop(self):
"""Test log-enable still works on USER log."""
self.test_runner.run_test_log_enable_user_after_stop()
@test(runs_after=[test_log_enable_user_after_stop])
def test_restart_datastore_after_stop_start(self):
"""Test restart datastore after stop/start if required."""
self.test_runner.run_test_restart_datastore()
@test(runs_after=[test_restart_datastore_after_stop_start])
def test_wait_for_restart_after_stop_start(self):
"""Wait for restart to complete again after stop/start."""
self.test_runner.run_test_wait_for_restart()
@test(runs_after=[test_wait_for_restart_after_stop_start])
def test_add_data_again_after_stop_start(self):
"""Add more data to ensure logging works again on USER log."""
self.test_runner.run_test_add_data_again_after_stop_start()
@test(runs_after=[test_add_data_again_after_stop_start])
def test_verify_data_again_after_stop_start(self):
"""Verify data for re-enabled logging on USER log."""
self.test_runner.run_test_verify_data_again_after_stop_start()
@test(runs_after=[test_verify_data_again_after_stop_start])
def test_log_publish_after_stop_start(self):
"""Test log-publish after stop/start on USER log."""
self.test_runner.run_test_log_publish_after_stop_start()
@test(runs_after=[test_log_publish_after_stop_start])
def test_log_disable_user_after_stop_start(self):
"""Test log-disable on USER log after stop/start."""
self.test_runner.run_test_log_disable_user_after_stop_start()
@test(runs_after=[test_log_disable_user_after_stop_start])
def test_restart_datastore_after_final_stop(self):
"""Test restart datastore again if required after final stop."""
self.test_runner.run_test_restart_datastore()
@test(runs_after=[test_restart_datastore_after_final_stop])
def test_wait_for_restart_after_final_stop(self):
"""Wait for restart to complete again after final stop."""
self.test_runner.run_test_wait_for_restart()
# SYS log tests
@test
def test_log_show_sys(self):
"""Test that log-show works for SYS log."""
self.test_runner.run_test_log_show_sys()
@test(runs_after=[test_log_show_sys])
def test_log_publish_sys(self):
"""Test log-publish on SYS log."""
self.test_runner.run_test_log_publish_sys()
@test(runs_after=[test_log_publish_sys])
def test_log_publish_again_sys(self):
"""Test log-publish again on SYS log."""
self.test_runner.run_test_log_publish_again_sys()
@test(depends_on=[test_log_publish_again_sys])
def test_log_generator_sys(self):
"""Test log-generator on SYS log."""
self.test_runner.run_test_log_generator_sys()
@test(runs_after=[test_log_generator_sys])
def test_log_generator_publish_sys(self):
"""Test log-generator with publish on SYS log."""
self.test_runner.run_test_log_generator_publish_sys()
@test(depends_on=[test_log_publish_sys],
runs_after=[test_log_generator_publish_sys])
def test_log_generator_swift_client_sys(self):
"""Test log-generator on SYS log with passed-in Swift client."""
self.test_runner.run_test_log_generator_swift_client_sys()
@test(depends_on=[test_log_publish_sys],
runs_after=[test_log_generator_swift_client_sys])
def test_log_save_sys(self):
"""Test log-save on SYS log."""
self.test_runner.run_test_log_save_sys()
@test(runs_after=[test_log_save_sys])
def test_log_save_publish_sys(self):
"""Test log-save on SYS log with publish."""
self.test_runner.run_test_log_save_publish_sys()
@test(runs_after=[test_log_save_publish_sys])
def test_log_discard_sys(self):
"""Test log-discard on SYS log."""
self.test_runner.run_test_log_discard_sys()


@@ -1,126 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.instance_actions_group"
class InstanceActionsRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_actions_runners'
_runner_cls = 'InstanceActionsRunner'
@test(depends_on_groups=[groups.INST_LOG],
groups=[GROUP, groups.INST_ACTIONS])
class InstanceActionsGroup(TestGroup):
"""Test Instance Actions functionality."""
def __init__(self):
super(InstanceActionsGroup, self).__init__(
InstanceActionsRunnerFactory.instance())
@test
def add_test_data(self):
"""Add test data."""
self.test_runner.run_add_test_data()
@test(depends_on=[add_test_data])
def verify_test_data(self):
"""Verify test data."""
self.test_runner.run_verify_test_data()
@test(runs_after=[verify_test_data])
def instance_restart(self):
"""Restart an existing instance."""
self.test_runner.run_instance_restart()
@test(depends_on=[verify_test_data, instance_restart])
def verify_test_data_after_restart(self):
"""Verify test data after restart."""
self.test_runner.run_verify_test_data()
@test(depends_on=[instance_restart],
runs_after=[verify_test_data_after_restart])
def instance_resize_volume(self):
"""Resize attached volume."""
self.test_runner.run_instance_resize_volume()
@test(depends_on=[verify_test_data, instance_resize_volume])
def verify_test_data_after_volume_resize(self):
"""Verify test data after volume resize."""
self.test_runner.run_verify_test_data()
@test(depends_on=[add_test_data],
runs_after=[verify_test_data_after_volume_resize])
def remove_test_data(self):
"""Remove test data."""
self.test_runner.run_remove_test_data()
@test(depends_on_classes=[InstanceActionsGroup],
groups=[GROUP, groups.INST_ACTIONS_RESIZE])
class InstanceActionsResizeGroup(TestGroup):
"""Test Instance Actions Resize functionality."""
def __init__(self):
super(InstanceActionsResizeGroup, self).__init__(
InstanceActionsRunnerFactory.instance())
@test
def add_test_data(self):
"""Add test data."""
self.test_runner.run_add_test_data()
@test(depends_on=[add_test_data])
def verify_test_data(self):
"""Verify test data."""
self.test_runner.run_verify_test_data()
@test(runs_after=[verify_test_data])
def instance_resize_flavor(self):
"""Resize instance flavor."""
self.test_runner.run_instance_resize_flavor()
@test(depends_on_classes=[InstanceActionsResizeGroup],
groups=[GROUP, groups.INST_ACTIONS_RESIZE_WAIT])
class InstanceActionsResizeWaitGroup(TestGroup):
"""Test that Instance Actions Resize Completes."""
def __init__(self):
super(InstanceActionsResizeWaitGroup, self).__init__(
InstanceActionsRunnerFactory.instance())
@test
def wait_for_instance_resize_flavor(self):
"""Wait for resize instance flavor to complete."""
self.test_runner.run_wait_for_instance_resize_flavor()
@test(depends_on=[wait_for_instance_resize_flavor])
def verify_test_data_after_flavor_resize(self):
"""Verify test data after flavor resize."""
self.test_runner.run_verify_test_data()
@test(runs_after=[verify_test_data_after_flavor_resize])
def remove_test_data(self):
"""Remove test data."""
self.test_runner.run_remove_test_data()


@@ -1,138 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.instance_create_group"
class InstanceCreateRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_create_runners'
_runner_cls = 'InstanceCreateRunner'
@test(groups=[GROUP, groups.INST_CREATE])
class InstanceCreateGroup(TestGroup):
"""Test Instance Create functionality."""
def __init__(self):
super(InstanceCreateGroup, self).__init__(
InstanceCreateRunnerFactory.instance())
@test
def create_empty_instance(self):
"""Create an empty instance."""
self.test_runner.run_empty_instance_create()
@test(depends_on_classes=[InstanceCreateGroup],
groups=[GROUP, groups.INST_INIT_CREATE])
class InstanceInitCreateGroup(TestGroup):
"""Test Instance Init Create functionality."""
def __init__(self):
super(InstanceInitCreateGroup, self).__init__(
InstanceCreateRunnerFactory.instance())
@test
def create_initial_configuration(self):
"""Create a configuration group for a new initialized instance."""
self.test_runner.run_initial_configuration_create()
@test(runs_after=[create_initial_configuration])
def create_initialized_instance(self):
"""Create an instance with initial properties."""
self.test_runner.run_initialized_instance_create()
@test(depends_on_classes=[InstanceCreateGroup],
groups=[GROUP, groups.INST_CREATE])
class InstanceCreateWaitGroup(TestGroup):
"""Test that Instance Create Completes."""
def __init__(self):
super(InstanceCreateWaitGroup, self).__init__(
InstanceCreateRunnerFactory.instance())
@test
def wait_for_instance(self):
"""Waiting for main instance to become active."""
self.test_runner.run_wait_for_instance()
@test(depends_on_classes=[InstanceCreateWaitGroup],
groups=[GROUP, groups.INST_INIT_CREATE_WAIT])
class InstanceInitCreateWaitGroup(TestGroup):
"""Test that Instance Init Create Completes."""
def __init__(self):
super(InstanceInitCreateWaitGroup, self).__init__(
InstanceCreateRunnerFactory.instance())
@test
def wait_for_init_instance(self):
"""Waiting for init instance to become active."""
self.test_runner.run_wait_for_init_instance()
@test(depends_on=[wait_for_init_instance])
def add_initialized_instance_data(self):
"""Add data to the initialized instance."""
self.test_runner.run_add_initialized_instance_data()
@test(runs_after=[add_initialized_instance_data])
def validate_initialized_instance(self):
"""Validate the initialized instance data and properties."""
self.test_runner.run_validate_initialized_instance()
@test(depends_on_classes=[InstanceInitCreateWaitGroup],
groups=[GROUP, groups.INST_INIT_DELETE])
class InstanceInitDeleteGroup(TestGroup):
"""Test Initialized Instance Delete functionality."""
def __init__(self):
super(InstanceInitDeleteGroup, self).__init__(
InstanceCreateRunnerFactory.instance())
@test
def delete_initialized_instance(self):
"""Delete the initialized instance."""
self.test_runner.run_initialized_instance_delete()
@test(depends_on_classes=[InstanceInitDeleteGroup],
groups=[GROUP, groups.INST_INIT_DELETE_WAIT])
class InstanceInitDeleteWaitGroup(TestGroup):
"""Test that Initialized Instance Delete Completes."""
def __init__(self):
super(InstanceInitDeleteWaitGroup, self).__init__(
InstanceCreateRunnerFactory.instance())
@test
def wait_for_init_delete(self):
"""Wait for the initialized instance to be gone."""
self.test_runner.run_wait_for_init_delete()
@test(runs_after=[wait_for_init_delete])
def delete_initial_configuration(self):
"""Delete the initial configuration group."""
self.test_runner.run_initial_configuration_delete()
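
The deleted groups above are ordered through the Proboscis `depends_on`/`runs_after` decorator arguments: both force a test to run after the named tests, and `depends_on` additionally skips the dependent when a prerequisite fails. A minimal sketch of the ordering semantics these files relied on (illustrative only; `order_tests` is a hypothetical helper, not part of Proboscis):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def order_tests(after):
    """Return test names so every test runs after the names it must follow.

    `after` maps a test name to the list of names it runs after
    (the union of its depends_on and runs_after declarations).
    """
    return list(TopologicalSorter(after).static_order())

# The chain declared by InstanceInitCreateWaitGroup above:
plan = order_tests({
    "wait_for_init_instance": [],
    "add_initialized_instance_data": ["wait_for_init_instance"],
    "validate_initialized_instance": ["add_initialized_instance_data"],
})
```

With Proboscis gone, equivalent ordering in the remaining test suites has to come from the test framework itself (for example, explicit sequencing inside a single test method) rather than from decorator-driven dependency resolution.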


@ -1,61 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.instance_delete_group"
class InstanceDeleteRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_delete_runners'
_runner_cls = 'InstanceDeleteRunner'
@test(depends_on_groups=[groups.INST_CREATE],
groups=[GROUP, groups.INST_DELETE],
runs_after_groups=[groups.USER_ACTION_INST_DELETE_WAIT,
groups.REPL_INST_DELETE_WAIT])
class InstanceDeleteGroup(TestGroup):
"""Test Instance Delete functionality."""
def __init__(self):
super(InstanceDeleteGroup, self).__init__(
InstanceDeleteRunnerFactory.instance())
@test
def instance_delete(self):
"""Delete an existing instance."""
self.test_runner.run_instance_delete()
@test(depends_on_classes=[InstanceDeleteGroup],
groups=[GROUP, groups.INST_DELETE_WAIT])
class InstanceDeleteWaitGroup(TestGroup):
"""Test that Instance Delete Completes."""
def __init__(self):
super(InstanceDeleteWaitGroup, self).__init__(
InstanceDeleteRunnerFactory.instance())
@test
def instance_delete_wait(self):
"""Wait for existing instance to be gone."""
self.test_runner.run_instance_delete_wait()


@ -1,105 +0,0 @@
# Copyright 2016 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.instance_error_create_group"
class InstanceErrorCreateRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_error_create_runners'
_runner_cls = 'InstanceErrorCreateRunner'
@test(depends_on_groups=[groups.INST_CREATE],
groups=[GROUP, groups.INST_ERROR_CREATE])
class InstanceErrorCreateGroup(TestGroup):
"""Test Instance Error Create functionality."""
def __init__(self):
super(InstanceErrorCreateGroup, self).__init__(
InstanceErrorCreateRunnerFactory.instance())
@test
def create_error_instance(self):
"""Create an instance in error state."""
self.test_runner.run_create_error_instance()
@test(runs_after=[create_error_instance])
def create_error2_instance(self):
"""Create another instance in error state."""
self.test_runner.run_create_error2_instance()
@test(depends_on_classes=[InstanceErrorCreateGroup],
groups=[GROUP, groups.INST_ERROR_CREATE_WAIT])
class InstanceErrorCreateWaitGroup(TestGroup):
"""Test that Instance Error Create Completes."""
def __init__(self):
super(InstanceErrorCreateWaitGroup, self).__init__(
InstanceErrorCreateRunnerFactory.instance())
@test
def wait_for_error_instances(self):
"""Wait for the error instances to fail."""
self.test_runner.run_wait_for_error_instances()
@test(depends_on=[wait_for_error_instances])
def validate_error_instance(self):
"""Validate the error instance fault message."""
self.test_runner.run_validate_error_instance()
@test(depends_on=[wait_for_error_instances],
runs_after=[validate_error_instance])
def validate_error2_instance(self):
"""Validate the error2 instance fault message as admin."""
self.test_runner.run_validate_error2_instance()
@test(depends_on_classes=[InstanceErrorCreateWaitGroup],
groups=[GROUP, groups.INST_ERROR_DELETE])
class InstanceErrorDeleteGroup(TestGroup):
"""Test Instance Error Delete functionality."""
def __init__(self):
super(InstanceErrorDeleteGroup, self).__init__(
InstanceErrorCreateRunnerFactory.instance())
@test
def delete_error_instances(self):
"""Delete the error instances."""
self.test_runner.run_delete_error_instances()
@test(depends_on_classes=[InstanceErrorDeleteGroup],
groups=[GROUP, groups.INST_ERROR_DELETE_WAIT])
class InstanceErrorDeleteWaitGroup(TestGroup):
"""Test that Instance Error Delete Completes."""
def __init__(self):
super(InstanceErrorDeleteWaitGroup, self).__init__(
InstanceErrorCreateRunnerFactory.instance())
@test
def wait_for_error_delete(self):
"""Wait for the error instances to be gone."""
self.test_runner.run_wait_for_error_delete()


@ -1,64 +0,0 @@
# Copyright 2016 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.instance_force_delete_group"
class InstanceForceDeleteRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_force_delete_runners'
_runner_cls = 'InstanceForceDeleteRunner'
@test(depends_on_groups=[groups.INST_ERROR_DELETE_WAIT],
groups=[GROUP, groups.INST_FORCE_DELETE])
class InstanceForceDeleteGroup(TestGroup):
"""Test Instance Force Delete functionality."""
def __init__(self):
super(InstanceForceDeleteGroup, self).__init__(
InstanceForceDeleteRunnerFactory.instance())
@test
def create_build_instance(self):
"""Create an instance in BUILD state."""
self.test_runner.run_create_build_instance()
    @test(depends_on=[create_build_instance])
def delete_build_instance(self):
"""Make sure the instance in BUILD state deletes."""
self.test_runner.run_delete_build_instance()
@test(depends_on_classes=[InstanceForceDeleteGroup],
groups=[GROUP, groups.INST_FORCE_DELETE_WAIT])
class InstanceForceDeleteWaitGroup(TestGroup):
"""Make sure the Force Delete instance goes away."""
def __init__(self):
super(InstanceForceDeleteWaitGroup, self).__init__(
InstanceForceDeleteRunnerFactory.instance())
@test
def wait_for_force_delete(self):
"""Wait for the Force Delete instance to be gone."""
self.test_runner.run_wait_for_force_delete()


@ -1,120 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.instance_upgrade_group"
class InstanceUpgradeRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_upgrade_runners'
_runner_cls = 'InstanceUpgradeRunner'
class UserActionsRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'user_actions_runners'
_runner_cls = 'UserActionsRunner'
class DatabaseActionsRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'database_actions_runners'
_runner_cls = 'DatabaseActionsRunner'
@test(depends_on_groups=[groups.INST_CREATE_WAIT],
groups=[GROUP, groups.INST_UPGRADE],
runs_after_groups=[groups.INST_ACTIONS])
class InstanceUpgradeGroup(TestGroup):
def __init__(self):
super(InstanceUpgradeGroup, self).__init__(
InstanceUpgradeRunnerFactory.instance())
self.database_actions_runner = DatabaseActionsRunnerFactory.instance()
self.user_actions_runner = UserActionsRunnerFactory.instance()
@test
def create_user_databases(self):
"""Create user databases on an existing instance."""
# These databases may be referenced by the users (below) so we need to
# create them first.
self.database_actions_runner.run_databases_create()
@test(runs_after=[create_user_databases])
def create_users(self):
"""Create users on an existing instance."""
self.user_actions_runner.run_users_create()
@test(runs_after=[create_users])
def add_test_data(self):
"""Add test data."""
self.test_runner.run_add_test_data()
@test(depends_on=[add_test_data])
def verify_test_data(self):
"""Verify test data."""
self.test_runner.run_verify_test_data()
@test(depends_on=[verify_test_data])
def list_users_before_upgrade(self):
"""List the created users before upgrade."""
self.user_actions_runner.run_users_list()
@test(depends_on=[list_users_before_upgrade])
def instance_upgrade(self):
"""Upgrade an existing instance."""
raise SkipTest("Skip the instance upgrade integration test "
"temporarily because of not stable in CI")
# self.test_runner.run_instance_upgrade()
@test(depends_on=[list_users_before_upgrade])
def show_user(self):
"""Show created users."""
self.user_actions_runner.run_user_show()
@test(depends_on=[create_users],
runs_after=[show_user])
def list_users(self):
"""List the created users."""
self.user_actions_runner.run_users_list()
@test(depends_on=[verify_test_data, instance_upgrade])
def verify_test_data_after_upgrade(self):
"""Verify test data after upgrade."""
self.test_runner.run_verify_test_data()
@test(depends_on=[add_test_data],
runs_after=[verify_test_data_after_upgrade])
def remove_test_data(self):
"""Remove test data."""
self.test_runner.run_remove_test_data()
@test(depends_on=[create_users],
runs_after=[list_users])
def delete_user(self):
"""Delete the created users."""
self.user_actions_runner.run_user_delete()
@test(depends_on=[create_user_databases], runs_after=[delete_user])
def delete_user_databases(self):
"""Delete the user databases."""
self.database_actions_runner.run_database_delete()


@ -1,722 +0,0 @@
# Copyright 2016 Tesora, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.module_group"
class ModuleRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'module_runners'
_runner_cls = 'ModuleRunner'
@test(groups=[GROUP, groups.MODULE_CREATE])
class ModuleCreateGroup(TestGroup):
"""Test Module Create functionality."""
def __init__(self):
super(ModuleCreateGroup, self).__init__(
ModuleRunnerFactory.instance())
@test
def module_delete_existing(self):
"""Delete all previous test modules."""
self.test_runner.run_module_delete_existing()
@test
def module_create_bad_type(self):
"""Ensure create module with invalid type fails."""
self.test_runner.run_module_create_bad_type()
@test
def module_create_non_admin_auto(self):
"""Ensure create auto_apply module for non-admin fails."""
self.test_runner.run_module_create_non_admin_auto()
@test
def module_create_non_admin_all_tenant(self):
"""Ensure create all tenant module for non-admin fails."""
self.test_runner.run_module_create_non_admin_all_tenant()
@test
def module_create_non_admin_hidden(self):
"""Ensure create hidden module for non-admin fails."""
self.test_runner.run_module_create_non_admin_hidden()
@test
def module_create_non_admin_priority(self):
"""Ensure create priority module for non-admin fails."""
self.test_runner.run_module_create_non_admin_priority()
@test
def module_create_non_admin_no_full_access(self):
"""Ensure create no full access module for non-admin fails."""
self.test_runner.run_module_create_non_admin_no_full_access()
@test
def module_create_full_access_with_admin_opt(self):
"""Ensure create full access module with admin opts fails."""
self.test_runner.run_module_create_full_access_with_admin_opt()
@test
def module_create_bad_datastore(self):
"""Ensure create module with invalid datastore fails."""
self.test_runner.run_module_create_bad_datastore()
@test
def module_create_bad_datastore_version(self):
"""Ensure create module with invalid datastore_version fails."""
self.test_runner.run_module_create_bad_datastore_version()
@test
def module_create_missing_datastore(self):
"""Ensure create module with missing datastore fails."""
self.test_runner.run_module_create_missing_datastore()
@test(runs_after=[module_delete_existing])
def module_create(self):
"""Check that create module works."""
self.test_runner.run_module_create()
@test(runs_after=[module_create])
def module_create_for_update(self):
"""Check that create module for update works."""
self.test_runner.run_module_create_for_update()
@test(depends_on=[module_create])
def module_create_dupe(self):
"""Ensure create with duplicate info fails."""
self.test_runner.run_module_create_dupe()
@test(depends_on=[module_create_for_update])
def module_update_missing_datastore(self):
"""Ensure update module with missing datastore fails."""
self.test_runner.run_module_update_missing_datastore()
@test(runs_after=[module_create_for_update])
def module_create_bin(self):
"""Check that create module with binary contents works."""
self.test_runner.run_module_create_bin()
@test(runs_after=[module_create_bin])
def module_create_bin2(self):
"""Check that create module with other binary contents works."""
self.test_runner.run_module_create_bin2()
@test(depends_on=[module_create])
def module_show(self):
"""Check that show module works."""
self.test_runner.run_module_show()
@test(depends_on=[module_create])
def module_show_unauth_user(self):
"""Ensure that show module for unauth user fails."""
self.test_runner.run_module_show_unauth_user()
@test(depends_on=[module_create, module_create_bin, module_create_bin2])
def module_list(self):
"""Check that list modules works."""
self.test_runner.run_module_list()
@test(depends_on=[module_create, module_create_bin, module_create_bin2])
def module_list_unauth_user(self):
"""Ensure that list module for unauth user fails."""
self.test_runner.run_module_list_unauth_user()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_list])
def module_create_admin_all(self):
"""Check that create module works with all admin options."""
self.test_runner.run_module_create_admin_all()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_admin_all])
def module_create_admin_hidden(self):
"""Check that create module works with hidden option."""
self.test_runner.run_module_create_admin_hidden()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_admin_hidden])
def module_create_admin_auto(self):
"""Check that create module works with auto option."""
self.test_runner.run_module_create_admin_auto()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_admin_auto])
def module_create_admin_live_update(self):
"""Check that create module works with live-update option."""
self.test_runner.run_module_create_admin_live_update()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_admin_live_update])
def module_create_admin_priority_apply(self):
"""Check that create module works with priority-apply option."""
self.test_runner.run_module_create_admin_priority_apply()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_admin_priority_apply])
def module_create_datastore(self):
"""Check that create module with datastore works."""
self.test_runner.run_module_create_datastore()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_datastore])
def module_create_different_datastore(self):
"""Check that create module with different datastore works."""
self.test_runner.run_module_create_different_datastore()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_different_datastore])
def module_create_ds_version(self):
"""Check that create module with ds version works."""
self.test_runner.run_module_create_ds_version()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_ds_version])
def module_create_all_tenant(self):
"""Check that create 'all' tenants with datastore module works."""
self.test_runner.run_module_create_all_tenant()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_all_tenant, module_list_unauth_user])
def module_create_different_tenant(self):
"""Check that create with same name on different tenant works."""
self.test_runner.run_module_create_different_tenant()
@test(depends_on=[module_create, module_create_bin, module_create_bin2],
runs_after=[module_create_different_tenant])
def module_create_full_access(self):
"""Check that create by admin with full access works."""
self.test_runner.run_module_create_full_access()
@test(depends_on=[module_create_all_tenant],
runs_after=[module_create_full_access])
def module_full_access_toggle(self):
"""Check that toggling full access works."""
self.test_runner.run_module_full_access_toggle()
@test(depends_on=[module_create_all_tenant],
runs_after=[module_full_access_toggle])
def module_list_again(self):
"""Check that list modules skips invisible modules."""
self.test_runner.run_module_list_again()
@test(depends_on=[module_create_ds_version],
runs_after=[module_list_again])
def module_list_ds(self):
"""Check that list modules by datastore works."""
self.test_runner.run_module_list_ds()
@test(depends_on=[module_create_ds_version],
runs_after=[module_list_ds])
def module_list_ds_all(self):
"""Check that list modules by all datastores works."""
self.test_runner.run_module_list_ds_all()
@test(depends_on=[module_create_admin_hidden])
def module_show_invisible(self):
"""Ensure that show invisible module for non-admin fails."""
self.test_runner.run_module_show_invisible()
@test(depends_on=[module_create_all_tenant],
runs_after=[module_create_different_tenant])
def module_list_admin(self):
"""Check that list modules for admin works."""
self.test_runner.run_module_list_admin()
@test(depends_on=[module_create],
runs_after=[module_show])
def module_update(self):
"""Check that update module works."""
self.test_runner.run_module_update()
@test(depends_on=[module_update])
def module_update_same_contents(self):
"""Check that update module with same contents works."""
self.test_runner.run_module_update_same_contents()
@test(depends_on=[module_update],
runs_after=[module_update_same_contents])
def module_update_auto_toggle(self):
"""Check that update module works for auto apply toggle."""
self.test_runner.run_module_update_auto_toggle()
@test(depends_on=[module_update],
runs_after=[module_update_auto_toggle])
def module_update_all_tenant_toggle(self):
"""Check that update module works for all tenant toggle."""
self.test_runner.run_module_update_all_tenant_toggle()
@test(depends_on=[module_update],
runs_after=[module_update_all_tenant_toggle])
def module_update_invisible_toggle(self):
"""Check that update module works for invisible toggle."""
self.test_runner.run_module_update_invisible_toggle()
@test(depends_on=[module_update],
runs_after=[module_update_invisible_toggle])
def module_update_priority_toggle(self):
"""Check that update module works for priority toggle."""
self.test_runner.run_module_update_priority_toggle()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_unauth(self):
"""Ensure update module for unauth user fails."""
self.test_runner.run_module_update_unauth()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_auto(self):
"""Ensure update module to auto_apply for non-admin fails."""
self.test_runner.run_module_update_non_admin_auto()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_auto_off(self):
"""Ensure update module to auto_apply off for non-admin fails."""
self.test_runner.run_module_update_non_admin_auto_off()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_auto_any(self):
"""Ensure any update module to auto_apply for non-admin fails."""
self.test_runner.run_module_update_non_admin_auto_any()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_all_tenant(self):
"""Ensure update module to all tenant for non-admin fails."""
self.test_runner.run_module_update_non_admin_all_tenant()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_all_tenant_off(self):
"""Ensure update module to all tenant off for non-admin fails."""
self.test_runner.run_module_update_non_admin_all_tenant_off()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_all_tenant_any(self):
"""Ensure any update module to all tenant for non-admin fails."""
self.test_runner.run_module_update_non_admin_all_tenant_any()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_invisible(self):
"""Ensure update module to invisible for non-admin fails."""
self.test_runner.run_module_update_non_admin_invisible()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_invisible_off(self):
"""Ensure update module to invisible off for non-admin fails."""
self.test_runner.run_module_update_non_admin_invisible_off()
@test(depends_on=[module_update],
runs_after=[module_update_priority_toggle])
def module_update_non_admin_invisible_any(self):
"""Ensure any update module to invisible for non-admin fails."""
self.test_runner.run_module_update_non_admin_invisible_any()
@test(depends_on_groups=[groups.INST_CREATE_WAIT, groups.MODULE_CREATE],
runs_after_groups=[groups.INST_ERROR_DELETE, groups.INST_FORCE_DELETE],
groups=[GROUP, groups.MODULE_INST, groups.MODULE_INST_CREATE])
class ModuleInstCreateGroup(TestGroup):
"""Test Module Instance Create functionality."""
def __init__(self):
super(ModuleInstCreateGroup, self).__init__(
ModuleRunnerFactory.instance())
@test
def module_list_instance_empty(self):
"""Check that the instance has no modules associated."""
self.test_runner.run_module_list_instance_empty()
@test(runs_after=[module_list_instance_empty])
def module_instances_empty(self):
"""Check that the module hasn't been applied to any instances."""
self.test_runner.run_module_instances_empty()
@test(runs_after=[module_instances_empty])
def module_instance_count_empty(self):
"""Check that no instance count exists."""
self.test_runner.run_module_instance_count_empty()
@test(runs_after=[module_instance_count_empty])
def module_query_empty(self):
"""Check that the instance has no modules applied."""
self.test_runner.run_module_query_empty()
@test(runs_after=[module_query_empty])
def module_apply(self):
"""Check that module-apply works."""
self.test_runner.run_module_apply()
@test(runs_after=[module_apply])
def module_apply_wrong_module(self):
"""Ensure that module-apply for wrong module fails."""
self.test_runner.run_module_apply_wrong_module()
@test(depends_on=[module_apply_wrong_module])
def module_update_not_live(self):
"""Ensure updating a non live_update module fails."""
self.test_runner.run_module_update_not_live()
@test(depends_on=[module_apply],
runs_after=[module_update_not_live])
def module_list_instance_after_apply(self):
"""Check that the instance has the modules associated."""
self.test_runner.run_module_list_instance_after_apply()
@test(runs_after=[module_list_instance_after_apply])
def module_apply_live_update(self):
"""Check that module-apply works for live_update."""
self.test_runner.run_module_apply_live_update()
@test(depends_on=[module_apply_live_update])
def module_list_instance_after_apply_live(self):
"""Check that the instance has the right modules."""
self.test_runner.run_module_list_instance_after_apply_live()
@test(runs_after=[module_list_instance_after_apply_live])
def module_instances_after_apply(self):
"""Check that the instance shows up in the list."""
self.test_runner.run_module_instances_after_apply()
@test(runs_after=[module_instances_after_apply])
def module_instance_count_after_apply(self):
"""Check that the instance count is right after apply."""
self.test_runner.run_module_instance_count_after_apply()
@test(runs_after=[module_instance_count_after_apply])
def module_query_after_apply(self):
"""Check that module-query works."""
self.test_runner.run_module_query_after_apply()
@test(runs_after=[module_query_after_apply])
def module_update_live_update(self):
"""Check that update module works on 'live' applied module."""
self.test_runner.run_module_update_live_update()
@test(runs_after=[module_update_live_update])
def module_apply_another(self):
"""Check that module-apply works for another module."""
self.test_runner.run_module_apply_another()
@test(depends_on=[module_apply_another])
def module_list_instance_after_apply_another(self):
"""Check that the instance has the right modules again."""
self.test_runner.run_module_list_instance_after_apply_another()
@test(runs_after=[module_list_instance_after_apply_another])
def module_instances_after_apply_another(self):
"""Check that the instance shows up in the list still."""
self.test_runner.run_module_instances_after_apply_another()
@test(runs_after=[module_instances_after_apply_another])
def module_instance_count_after_apply_another(self):
"""Check that the instance count is right after another apply."""
self.test_runner.run_module_instance_count_after_apply_another()
@test(depends_on=[module_apply_another],
runs_after=[module_instance_count_after_apply_another])
def module_query_after_apply_another(self):
"""Check that module-query works after another apply."""
self.test_runner.run_module_query_after_apply_another()
@test(depends_on=[module_apply],
runs_after=[module_query_after_apply_another])
def create_inst_with_mods(self):
"""Check that creating an instance with modules works."""
self.test_runner.run_create_inst_with_mods()
@test(runs_after=[create_inst_with_mods])
def create_inst_with_wrong_module(self):
"""Ensure that creating an inst with wrong ds mod fails."""
self.test_runner.run_create_inst_with_wrong_module()
@test(depends_on=[module_apply],
runs_after=[create_inst_with_wrong_module])
def module_delete_applied(self):
"""Ensure that deleting an applied module fails."""
self.test_runner.run_module_delete_applied()
@test(depends_on=[module_apply],
runs_after=[module_delete_applied])
def module_remove(self):
"""Check that module-remove works."""
self.test_runner.run_module_remove()
@test(depends_on=[module_remove])
def module_query_after_remove(self):
"""Check that the instance has modules applied after remove."""
self.test_runner.run_module_query_after_remove()
@test(depends_on=[module_remove],
runs_after=[module_query_after_remove])
def module_update_after_remove(self):
"""Check that update module after remove works."""
self.test_runner.run_module_update_after_remove()
@test(depends_on=[module_remove],
runs_after=[module_update_after_remove])
def module_apply_another_again(self):
"""Check that module-apply another works a second time."""
self.test_runner.run_module_apply_another()
@test(depends_on=[module_apply],
runs_after=[module_apply_another_again])
def module_query_after_apply_another2(self):
"""Check that module-query works still."""
self.test_runner.run_module_query_after_apply_another()
@test(depends_on=[module_apply_another_again],
runs_after=[module_query_after_apply_another2])
def module_remove_again(self):
"""Check that module-remove works again."""
self.test_runner.run_module_remove()
@test(depends_on=[module_remove_again])
def module_query_empty_after_again(self):
"""Check that the inst has right mod applied after 2nd remove."""
self.test_runner.run_module_query_after_remove()
@test(depends_on=[module_remove_again],
runs_after=[module_query_empty_after_again])
def module_update_after_remove_again(self):
"""Check that update module after remove again works."""
self.test_runner.run_module_update_after_remove_again()
@test(depends_on_groups=[groups.MODULE_INST_CREATE],
groups=[GROUP, groups.MODULE_INST, groups.MODULE_INST_CREATE_WAIT],
runs_after_groups=[groups.INST_ACTIONS, groups.INST_UPGRADE])
class ModuleInstCreateWaitGroup(TestGroup):
"""Test that Module Instance Create Completes."""
def __init__(self):
super(ModuleInstCreateWaitGroup, self).__init__(
ModuleRunnerFactory.instance())
@test
def wait_for_inst_with_mods(self):
"""Wait for create instance with modules to finish."""
self.test_runner.run_wait_for_inst_with_mods()
@test(depends_on=[wait_for_inst_with_mods])
def module_query_after_inst_create(self):
"""Check that module-query works on new instance."""
self.test_runner.run_module_query_after_inst_create()
@test(depends_on=[wait_for_inst_with_mods],
runs_after=[module_query_after_inst_create])
def module_retrieve_after_inst_create(self):
"""Check that module-retrieve works on new instance."""
self.test_runner.run_module_retrieve_after_inst_create()
@test(depends_on=[wait_for_inst_with_mods],
runs_after=[module_retrieve_after_inst_create])
def module_query_after_inst_create_admin(self):
"""Check that module-query works for admin."""
self.test_runner.run_module_query_after_inst_create_admin()
@test(depends_on=[wait_for_inst_with_mods],
runs_after=[module_query_after_inst_create_admin])
def module_retrieve_after_inst_create_admin(self):
"""Check that module-retrieve works for admin."""
self.test_runner.run_module_retrieve_after_inst_create_admin()
@test(depends_on=[wait_for_inst_with_mods],
runs_after=[module_retrieve_after_inst_create_admin])
def module_delete_auto_applied(self):
"""Ensure that module-delete on auto-applied module fails."""
self.test_runner.run_module_delete_auto_applied()
@test(runs_after=[module_delete_auto_applied])
def module_list_instance_after_mod_inst(self):
"""Check that the new instance has the right modules."""
self.test_runner.run_module_list_instance_after_mod_inst()
@test(runs_after=[module_list_instance_after_mod_inst])
def module_instances_after_mod_inst(self):
"""Check that the new instance shows up in the list."""
self.test_runner.run_module_instances_after_mod_inst()
@test(runs_after=[module_instances_after_mod_inst])
def module_instance_count_after_mod_inst(self):
"""Check that the new instance count is right."""
self.test_runner.run_module_instance_count_after_mod_inst()
@test(runs_after=[module_instance_count_after_mod_inst])
def module_reapply_with_md5(self):
"""Check that module reapply with md5 works."""
self.test_runner.run_module_reapply_with_md5()
@test(runs_after=[module_reapply_with_md5])
def module_reapply_with_md5_verify(self):
"""Verify the dates after md5 reapply (no-op)."""
self.test_runner.run_module_reapply_with_md5_verify()
@test(runs_after=[module_reapply_with_md5_verify])
def module_list_instance_after_reapply_md5(self):
"""Check that the instance's modules haven't changed."""
self.test_runner.run_module_list_instance_after_reapply_md5()
@test(runs_after=[module_list_instance_after_reapply_md5])
def module_instances_after_reapply_md5(self):
"""Check that the new instance still shows up in the list."""
self.test_runner.run_module_instances_after_reapply_md5()
@test(runs_after=[module_instances_after_reapply_md5])
def module_instance_count_after_reapply_md5(self):
"""Check that the instance count hasn't changed."""
self.test_runner.run_module_instance_count_after_reapply_md5()
@test(runs_after=[module_instance_count_after_reapply_md5])
def module_reapply_all(self):
"""Check that module reapply works."""
self.test_runner.run_module_reapply_all()
@test(runs_after=[module_reapply_all])
def module_reapply_all_wait(self):
"""Wait for module reapply to complete."""
self.test_runner.run_module_reapply_all_wait()
@test(runs_after=[module_reapply_all_wait])
def module_instance_count_after_reapply(self):
"""Check that the reapply instance count is right."""
self.test_runner.run_module_instance_count_after_reapply()
@test(runs_after=[module_instance_count_after_reapply])
def module_reapply_with_force(self):
"""Check that module reapply with force works."""
self.test_runner.run_module_reapply_with_force()
@test(runs_after=[module_reapply_with_force])
def module_reapply_with_force_wait(self):
"""Wait for module reapply with force to complete."""
self.test_runner.run_module_reapply_with_force_wait()
@test(runs_after=[module_reapply_with_force_wait])
def module_list_instance_after_reapply_force(self):
"""Check that the new instance still has the right modules."""
self.test_runner.run_module_list_instance_after_reapply()
@test(runs_after=[module_list_instance_after_reapply_force])
def module_instances_after_reapply_force(self):
"""Check that the new instance still shows up in the list."""
self.test_runner.run_module_instances_after_reapply()
@test(runs_after=[module_instances_after_reapply_force])
def module_instance_count_after_reapply_force(self):
"""Check that the instance count is right after reapply force."""
self.test_runner.run_module_instance_count_after_reapply()
@test(depends_on_groups=[groups.MODULE_INST_CREATE_WAIT],
groups=[GROUP, groups.MODULE_INST, groups.MODULE_INST_DELETE])
class ModuleInstDeleteGroup(TestGroup):
"""Test Module Instance Delete functionality."""
def __init__(self):
super(ModuleInstDeleteGroup, self).__init__(
ModuleRunnerFactory.instance())
@test
def delete_inst_with_mods(self):
"""Check that instance with module can be deleted."""
self.test_runner.run_delete_inst_with_mods()
@test(runs_after=[delete_inst_with_mods])
def remove_mods_from_main_inst(self):
"""Check that modules can be removed from the main instance."""
self.test_runner.run_remove_mods_from_main_inst()
@test(depends_on_groups=[groups.MODULE_INST_DELETE],
groups=[GROUP, groups.MODULE_INST, groups.MODULE_INST_DELETE_WAIT],
runs_after_groups=[groups.INST_DELETE])
class ModuleInstDeleteWaitGroup(TestGroup):
"""Test that Module Instance Delete Completes."""
def __init__(self):
super(ModuleInstDeleteWaitGroup, self).__init__(
ModuleRunnerFactory.instance())
@test
def wait_for_delete_inst_with_mods(self):
"""Wait until the instance with module is gone."""
self.test_runner.run_wait_for_delete_inst_with_mods()
@test(depends_on_groups=[groups.MODULE_CREATE],
runs_after_groups=[groups.MODULE_INST_DELETE_WAIT],
groups=[GROUP, groups.MODULE_DELETE])
class ModuleDeleteGroup(TestGroup):
"""Test Module Delete functionality."""
def __init__(self):
super(ModuleDeleteGroup, self).__init__(
ModuleRunnerFactory.instance())
    @test
    def module_delete_non_existent(self):
        """Ensure that deleting a non-existent module fails."""
        self.test_runner.run_module_delete_non_existent()
    @test
    def module_delete_unauth_user(self):
        """Ensure that module deletion by an unauthorized user fails."""
        self.test_runner.run_module_delete_unauth_user()
@test(runs_after=[module_delete_unauth_user,
module_delete_non_existent])
def module_delete_hidden_by_non_admin(self):
"""Ensure delete hidden module by non-admin user fails."""
self.test_runner.run_module_delete_hidden_by_non_admin()
@test(runs_after=[module_delete_hidden_by_non_admin])
def module_delete_all_tenant_by_non_admin(self):
"""Ensure delete all tenant module by non-admin user fails."""
self.test_runner.run_module_delete_all_tenant_by_non_admin()
@test(runs_after=[module_delete_all_tenant_by_non_admin])
def module_delete_auto_by_non_admin(self):
"""Ensure delete auto-apply module by non-admin user fails."""
self.test_runner.run_module_delete_auto_by_non_admin()
@test(runs_after=[module_delete_auto_by_non_admin])
def module_delete(self):
"""Check that delete module works."""
self.test_runner.run_module_delete()
@test(runs_after=[module_delete])
def module_delete_admin(self):
"""Check that delete module works for admin."""
self.test_runner.run_module_delete_admin()
@test(runs_after=[module_delete_admin])
def module_delete_remaining(self):
"""Delete all remaining test modules."""
self.test_runner.run_module_delete_existing()


@ -1,347 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.replication_group"
class ReplicationRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'replication_runners'
_runner_cls = 'ReplicationRunner'
class BackupRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'backup_runners'
_runner_cls = 'BackupRunner'
@test(depends_on_groups=[groups.INST_CREATE],
groups=[GROUP, groups.REPL_INST_CREATE])
class ReplicationInstCreateGroup(TestGroup):
"""Test Replication Instance Create functionality."""
def __init__(self):
super(ReplicationInstCreateGroup, self).__init__(
ReplicationRunnerFactory.instance())
@test
def add_data_for_replication(self):
"""Add data to master for initial replica setup."""
self.test_runner.run_add_data_for_replication()
@test(depends_on=[add_data_for_replication])
def verify_data_for_replication(self):
"""Verify initial data exists on master."""
self.test_runner.run_verify_data_for_replication()
@test(runs_after=[verify_data_for_replication])
def create_non_affinity_master(self):
"""Test creating a non-affinity master."""
self.test_runner.run_create_non_affinity_master()
@test(runs_after=[create_non_affinity_master])
def create_single_replica(self):
"""Test creating a single replica."""
self.test_runner.run_create_single_replica()
@test(depends_on_classes=[ReplicationInstCreateGroup],
groups=[GROUP, groups.REPL_INST_CREATE_WAIT])
class ReplicationInstCreateWaitGroup(TestGroup):
"""Wait for Replication Instance Create to complete."""
def __init__(self):
super(ReplicationInstCreateWaitGroup, self).__init__(
ReplicationRunnerFactory.instance())
@test
def wait_for_non_affinity_master(self):
"""Wait for non-affinity master to complete."""
self.test_runner.run_wait_for_non_affinity_master()
@test(depends_on=[wait_for_non_affinity_master])
def create_non_affinity_replica(self):
"""Test creating a non-affinity replica."""
self.test_runner.run_create_non_affinity_replica()
@test(depends_on=[create_non_affinity_replica])
def wait_for_non_affinity_replica_fail(self):
"""Wait for non-affinity replica to fail."""
self.test_runner.run_wait_for_non_affinity_replica_fail()
@test(runs_after=[wait_for_non_affinity_replica_fail])
def delete_non_affinity_repl(self):
"""Test deleting non-affinity replica."""
self.test_runner.run_delete_non_affinity_repl()
@test(runs_after=[delete_non_affinity_repl])
def wait_for_single_replica(self):
"""Wait for single replica to complete."""
self.test_runner.run_wait_for_single_replica()
@test(depends_on=[wait_for_single_replica])
def add_data_after_replica(self):
"""Add data to master after initial replica is setup"""
self.test_runner.run_add_data_after_replica()
@test(depends_on=[add_data_after_replica])
def verify_replica_data_after_single(self):
"""Verify data exists on single replica"""
self.test_runner.run_verify_replica_data_after_single()
@test(depends_on_classes=[ReplicationInstCreateWaitGroup],
groups=[GROUP, groups.REPL_INST_MULTI_CREATE])
class ReplicationInstMultiCreateGroup(TestGroup):
"""Test Replication Instance Multi-Create functionality."""
def __init__(self):
super(ReplicationInstMultiCreateGroup, self).__init__(
ReplicationRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
@test
def backup_master_instance(self):
"""Backup the master instance."""
self.backup_runner.run_backup_create()
self.backup_runner.run_backup_create_completed()
self.test_runner.master_backup_count += 1
@test(depends_on=[backup_master_instance])
def create_multiple_replicas(self):
"""Test creating multiple replicas."""
self.test_runner.run_create_multiple_replicas()
@test(depends_on=[create_multiple_replicas])
def check_has_incremental_backup(self):
"""Test that creating multiple replicas uses incr backup."""
self.backup_runner.run_check_has_incremental()
@test(depends_on_classes=[ReplicationInstMultiCreateGroup],
groups=[GROUP, groups.REPL_INST_DELETE_NON_AFFINITY_WAIT])
class ReplicationInstDeleteNonAffReplWaitGroup(TestGroup):
"""Wait for Replication Instance Non-Affinity repl to be gone."""
def __init__(self):
super(ReplicationInstDeleteNonAffReplWaitGroup, self).__init__(
ReplicationRunnerFactory.instance())
@test
def wait_for_delete_non_affinity_repl(self):
"""Wait for the non-affinity replica to delete."""
self.test_runner.run_wait_for_delete_non_affinity_repl()
@test(depends_on=[wait_for_delete_non_affinity_repl])
def delete_non_affinity_master(self):
"""Test deleting non-affinity master."""
self.test_runner.run_delete_non_affinity_master()
@test(depends_on_classes=[ReplicationInstDeleteNonAffReplWaitGroup],
groups=[GROUP, groups.REPL_INST_MULTI_CREATE_WAIT])
class ReplicationInstMultiCreateWaitGroup(TestGroup):
"""Wait for Replication Instance Multi-Create to complete."""
def __init__(self):
super(ReplicationInstMultiCreateWaitGroup, self).__init__(
ReplicationRunnerFactory.instance())
@test
def wait_for_delete_non_affinity_master(self):
"""Wait for the non-affinity master to delete."""
self.test_runner.run_wait_for_delete_non_affinity_master()
@test(runs_after=[wait_for_delete_non_affinity_master])
def wait_for_multiple_replicas(self):
"""Wait for multiple replicas to complete."""
self.test_runner.run_wait_for_multiple_replicas()
@test(depends_on=[wait_for_multiple_replicas])
def verify_replica_data_orig(self):
"""Verify original data was transferred to replicas."""
self.test_runner.run_verify_replica_data_orig()
@test(depends_on=[wait_for_multiple_replicas],
runs_after=[verify_replica_data_orig])
def add_data_to_replicate(self):
"""Add new data to master to verify replication."""
self.test_runner.run_add_data_to_replicate()
@test(depends_on=[add_data_to_replicate])
def verify_data_to_replicate(self):
"""Verify new data exists on master."""
self.test_runner.run_verify_data_to_replicate()
@test(depends_on=[add_data_to_replicate],
runs_after=[verify_data_to_replicate])
def verify_replica_data_orig2(self):
"""Verify original data was transferred to replicas."""
self.test_runner.run_verify_replica_data_orig()
@test(depends_on=[add_data_to_replicate],
runs_after=[verify_replica_data_orig2])
def verify_replica_data_new(self):
"""Verify new data was transferred to replicas."""
self.test_runner.run_verify_replica_data_new()
@test(depends_on=[wait_for_multiple_replicas],
runs_after=[verify_replica_data_new])
def promote_master(self):
"""Ensure promoting master fails."""
self.test_runner.run_promote_master()
@test(depends_on=[wait_for_multiple_replicas],
runs_after=[promote_master])
def eject_replica(self):
"""Ensure ejecting non master fails."""
self.test_runner.run_eject_replica()
@test(depends_on=[wait_for_multiple_replicas],
runs_after=[eject_replica])
def eject_valid_master(self):
"""Ensure ejecting valid master fails."""
self.test_runner.run_eject_valid_master()
@test(depends_on=[wait_for_multiple_replicas],
runs_after=[eject_valid_master])
def delete_valid_master(self):
"""Ensure deleting valid master fails."""
self.test_runner.run_delete_valid_master()
@test(depends_on_classes=[ReplicationInstMultiCreateWaitGroup],
groups=[GROUP, groups.REPL_INST_MULTI_PROMOTE])
class ReplicationInstMultiPromoteGroup(TestGroup):
"""Test Replication Instance Multi-Promote functionality."""
def __init__(self):
super(ReplicationInstMultiPromoteGroup, self).__init__(
ReplicationRunnerFactory.instance())
@test
def promote_to_replica_source(self):
"""Test promoting a replica to replica source (master)."""
self.test_runner.run_promote_to_replica_source()
@test(depends_on=[promote_to_replica_source])
def verify_replica_data_new_master(self):
"""Verify data is still on new master."""
self.test_runner.run_verify_replica_data_new_master()
@test(depends_on=[promote_to_replica_source],
runs_after=[verify_replica_data_new_master])
def add_data_to_replicate2(self):
"""Add data to new master to verify replication."""
self.test_runner.run_add_data_to_replicate2()
@test(depends_on=[add_data_to_replicate2])
def verify_data_to_replicate2(self):
"""Verify data exists on new master."""
self.test_runner.run_verify_data_to_replicate2()
@test(depends_on=[add_data_to_replicate2],
runs_after=[verify_data_to_replicate2])
def verify_replica_data_new2(self):
"""Verify data was transferred to new replicas."""
self.test_runner.run_verify_replica_data_new2()
@test(depends_on=[promote_to_replica_source],
runs_after=[verify_replica_data_new2])
def promote_original_source(self):
"""Test promoting back the original replica source."""
self.test_runner.run_promote_original_source()
@test(depends_on=[promote_original_source])
def add_final_data_to_replicate(self):
"""Add final data to original master to verify switch."""
self.test_runner.run_add_final_data_to_replicate()
@test(depends_on=[add_final_data_to_replicate])
def verify_data_to_replicate_final(self):
"""Verify final data exists on master."""
self.test_runner.run_verify_data_to_replicate_final()
@test(depends_on=[verify_data_to_replicate_final])
def verify_final_data_replicated(self):
"""Verify final data was transferred to all replicas."""
self.test_runner.run_verify_final_data_replicated()
@test(depends_on_classes=[ReplicationInstMultiPromoteGroup],
groups=[GROUP, groups.REPL_INST_DELETE])
class ReplicationInstDeleteGroup(TestGroup):
"""Test Replication Instance Delete functionality."""
def __init__(self):
super(ReplicationInstDeleteGroup, self).__init__(
ReplicationRunnerFactory.instance())
@test
def remove_replicated_data(self):
"""Remove replication data."""
self.test_runner.run_remove_replicated_data()
@test(runs_after=[remove_replicated_data])
def detach_replica_from_source(self):
"""Test detaching a replica from the master."""
self.test_runner.run_detach_replica_from_source()
@test(runs_after=[detach_replica_from_source])
def delete_detached_replica(self):
"""Test deleting the detached replica."""
self.test_runner.run_delete_detached_replica()
@test(runs_after=[delete_detached_replica])
def delete_all_replicas(self):
"""Test deleting all the remaining replicas."""
self.test_runner.run_delete_all_replicas()
@test(depends_on_classes=[ReplicationInstDeleteGroup],
groups=[GROUP, groups.REPL_INST_DELETE_WAIT])
class ReplicationInstDeleteWaitGroup(TestGroup):
"""Wait for Replication Instance Delete to complete."""
def __init__(self):
super(ReplicationInstDeleteWaitGroup, self).__init__(
ReplicationRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
@test
def wait_for_delete_replicas(self):
"""Wait for all the replicas to delete."""
self.test_runner.run_wait_for_delete_replicas()
@test(runs_after=[wait_for_delete_replicas])
def test_backup_deleted(self):
"""Remove the full backup and test that the created backup
is now gone.
"""
self.test_runner.run_test_backup_deleted()
self.backup_runner.run_delete_backup()
@test(runs_after=[test_backup_deleted])
def cleanup_master_instance(self):
"""Remove slave users from master instance."""
self.test_runner.run_cleanup_master_instance()
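Each group obtains its runner through a factory's `instance()` call, so distinct groups (for example, the replication create and delete groups above) share one runner and its accumulated state, such as `master_backup_count`. A minimal sketch of such a cached-instance factory — the class names below are illustrative, and the real `test_runners.RunnerFactory` resolves `_runner_ns`/`_runner_cls` dynamically:

```python
class RunnerFactory:
    """Cached-instance factory: every call to instance() on a given
    factory class returns the same runner object, so state set by one
    test group is visible to groups that run later."""
    _runner_cls = object  # real factories name a concrete runner class
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls._runner_cls()
        return cls._instance

class ReplicationRunner:
    """Hypothetical stand-in for the real replication runner."""
    def __init__(self):
        self.master_backup_count = 0

class ReplicationRunnerFactory(RunnerFactory):
    _runner_cls = ReplicationRunner

first = ReplicationRunnerFactory.instance()
first.master_backup_count += 1
second = ReplicationRunnerFactory.instance()
```

Because the assignment lands on the subclass, each factory subclass keeps its own singleton, which is why `BackupRunnerFactory` and `BackupRunnerFactory2` above yield two independent backup runners despite naming the same runner class.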


@ -1,251 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.root_actions_group"
class RootActionsRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'root_actions_runners'
_runner_cls = 'RootActionsRunner'
class BackupRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'backup_runners'
_runner_cls = 'BackupRunner'
class BackupRunnerFactory2(test_runners.RunnerFactory):
_runner_ns = 'backup_runners'
_runner_cls = 'BackupRunner'
@test(depends_on_groups=[groups.INST_FORCE_DELETE_WAIT],
groups=[GROUP, groups.ROOT_ACTION_ENABLE])
class RootActionsEnableGroup(TestGroup):
"""Test Root Actions Enable functionality."""
def __init__(self):
super(RootActionsEnableGroup, self).__init__(
RootActionsRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
self.backup_runner2 = BackupRunnerFactory2.instance()
@test
def check_root_never_enabled(self):
"""Check the root has never been enabled on the instance."""
self.test_runner.run_check_root_never_enabled()
@test(depends_on=[check_root_never_enabled])
def disable_root_before_enabled(self):
"""Ensure disable fails if root was never enabled."""
self.test_runner.check_root_disable_supported()
self.test_runner.run_disable_root_before_enabled()
@test(depends_on=[check_root_never_enabled],
runs_after=[disable_root_before_enabled])
def enable_root_no_password(self):
"""Enable root (without specifying a password)."""
self.test_runner.run_enable_root_no_password()
@test(depends_on=[enable_root_no_password])
def check_root_enabled(self):
"""Check the root is now enabled."""
self.test_runner.run_check_root_enabled()
@test(depends_on=[check_root_enabled])
def backup_root_enabled_instance(self):
"""Backup the root-enabled instance."""
self.test_runner.check_inherit_root_state_supported()
self.backup_runner.run_backup_create()
self.backup_runner.run_backup_create_completed()
@test(depends_on=[check_root_enabled],
runs_after=[backup_root_enabled_instance])
def delete_root(self):
"""Ensure an attempt to delete the root user fails."""
self.test_runner.run_delete_root()
@test(depends_on=[check_root_never_enabled],
runs_after=[delete_root])
def enable_root_with_password(self):
"""Enable root (with a given password)."""
self.test_runner.run_enable_root_with_password()
@test(depends_on=[enable_root_with_password])
def check_root_still_enabled(self):
"""Check the root is still enabled."""
self.test_runner.run_check_root_enabled()
@test(depends_on_classes=[RootActionsEnableGroup],
groups=[GROUP, groups.ROOT_ACTION_DISABLE])
class RootActionsDisableGroup(TestGroup):
"""Test Root Actions Disable functionality."""
def __init__(self):
super(RootActionsDisableGroup, self).__init__(
RootActionsRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
self.backup_runner2 = BackupRunnerFactory2.instance()
@test
def disable_root(self):
"""Disable root."""
self.test_runner.check_root_disable_supported()
self.test_runner.run_disable_root()
@test(depends_on=[disable_root])
def check_root_still_enabled_after_disable(self):
"""Check the root is still marked as enabled after disable."""
self.test_runner.check_root_disable_supported()
self.test_runner.run_check_root_still_enabled_after_disable()
@test(depends_on=[check_root_still_enabled_after_disable])
def backup_root_disabled_instance(self):
"""Backup the root-disabled instance."""
self.test_runner.check_root_disable_supported()
self.test_runner.check_inherit_root_state_supported()
self.backup_runner2.run_backup_create()
self.backup_runner2.run_backup_create_completed()
@test(depends_on_classes=[RootActionsDisableGroup],
groups=[GROUP, groups.ROOT_ACTION_INST, groups.ROOT_ACTION_INST_CREATE])
class RootActionsInstCreateGroup(TestGroup):
"""Test Root Actions Instance Create functionality."""
def __init__(self):
super(RootActionsInstCreateGroup, self).__init__(
RootActionsRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
self.backup_runner2 = BackupRunnerFactory2.instance()
@test
def restore_root_enabled_instance(self):
"""Restore the root-enabled instance."""
self.backup_runner.run_restore_from_backup(suffix='_root_enable')
@test
def restore_root_disabled_instance(self):
"""Restore the root-disabled instance."""
self.test_runner.check_root_disable_supported()
self.backup_runner2.run_restore_from_backup(suffix='_root_disable')
@test(depends_on_classes=[RootActionsInstCreateGroup],
groups=[GROUP, groups.ROOT_ACTION_INST,
groups.ROOT_ACTION_INST_CREATE_WAIT])
class RootActionsInstCreateWaitGroup(TestGroup):
"""Wait for Root Actions Instance Create to complete."""
def __init__(self):
super(RootActionsInstCreateWaitGroup, self).__init__(
RootActionsRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
self.backup_runner2 = BackupRunnerFactory2.instance()
@test
def wait_for_restored_instance(self):
"""Wait until restoring a root-enabled instance completes."""
self.backup_runner.run_restore_from_backup_completed()
@test(depends_on=[wait_for_restored_instance])
def check_root_enabled_after_restore(self):
"""Check the root is also enabled on the restored instance."""
instance_id = self.backup_runner.restore_instance_id
root_creds = self.test_runner.restored_root_creds
self.test_runner.run_check_root_enabled_after_restore(
instance_id, root_creds)
@test
def wait_for_restored_instance2(self):
"""Wait until restoring a root-disabled instance completes."""
self.test_runner.check_root_disable_supported()
self.backup_runner2.run_restore_from_backup_completed()
@test(depends_on=[wait_for_restored_instance2])
def check_root_enabled_after_restore2(self):
"""Check the root is also enabled on the restored instance."""
instance_id = self.backup_runner2.restore_instance_id
root_creds = self.test_runner.restored_root_creds2
self.test_runner.run_check_root_enabled_after_restore2(
instance_id, root_creds)
@test(depends_on_classes=[RootActionsInstCreateWaitGroup],
groups=[GROUP, groups.ROOT_ACTION_INST, groups.ROOT_ACTION_INST_DELETE])
class RootActionsInstDeleteGroup(TestGroup):
"""Test Root Actions Instance Delete functionality."""
def __init__(self):
super(RootActionsInstDeleteGroup, self).__init__(
RootActionsRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
self.backup_runner2 = BackupRunnerFactory2.instance()
@test
def delete_restored_instance(self):
"""Delete the restored root-enabled instance."""
self.backup_runner.run_delete_restored_instance()
@test
def delete_instance_backup(self):
"""Delete the root-enabled instance backup."""
self.backup_runner.run_delete_backup()
@test
def delete_restored_instance2(self):
"""Delete the restored root-disabled instance."""
self.test_runner.check_root_disable_supported()
self.backup_runner2.run_delete_restored_instance()
@test
def delete_instance_backup2(self):
"""Delete the root-disabled instance backup."""
self.test_runner.check_root_disable_supported()
self.backup_runner2.run_delete_backup()
@test(depends_on_classes=[RootActionsInstDeleteGroup],
groups=[GROUP, groups.ROOT_ACTION_INST,
groups.ROOT_ACTION_INST_DELETE_WAIT])
class RootActionsInstDeleteWaitGroup(TestGroup):
"""Wait for Root Actions Instance Delete to complete."""
def __init__(self):
super(RootActionsInstDeleteWaitGroup, self).__init__(
RootActionsRunnerFactory.instance())
self.backup_runner = BackupRunnerFactory.instance()
self.backup_runner2 = BackupRunnerFactory2.instance()
@test
def wait_for_restored_instance_delete(self):
"""Wait for the root-enabled instance to be deleted."""
self.backup_runner.run_wait_for_restored_instance_delete()
@test
def wait_for_restored_instance2_delete(self):
"""Wait for the root-disabled instance to be deleted."""
self.backup_runner2.run_wait_for_restored_instance_delete()


@ -1,26 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
class TestGroup(metaclass=abc.ABCMeta):
def __init__(self, test_runner):
self._test_runner = test_runner
@property
def test_runner(self):
return self._test_runner
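The `TestGroup` base above is deliberately thin: it holds a runner and exposes it read-only through a property. A sketch of how a concrete group uses it — `DummyRunner` and `ExampleGroup` are hypothetical stand-ins for the real scenario runners and groups:

```python
import abc

class TestGroup(metaclass=abc.ABCMeta):
    """Mirror of the base class above: stores the runner and
    exposes it through a read-only property."""
    def __init__(self, test_runner):
        self._test_runner = test_runner

    @property
    def test_runner(self):
        return self._test_runner

class DummyRunner:
    """Hypothetical stand-in for a scenario runner."""
    def run_check(self):
        return "checked"

class ExampleGroup(TestGroup):
    def check(self):
        # Delegate to the shared runner, as the real groups do.
        return self.test_runner.run_check()

result = ExampleGroup(DummyRunner()).check()
```

Since the property defines no setter, a group cannot accidentally replace its runner after construction.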


@ -1,269 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import test
from trove.tests.scenario import groups
from trove.tests.scenario.groups.test_group import TestGroup
from trove.tests.scenario.runners import test_runners
GROUP = "scenario.user_actions_group"
class UserActionsRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'user_actions_runners'
_runner_cls = 'UserActionsRunner'
class InstanceCreateRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'instance_create_runners'
_runner_cls = 'InstanceCreateRunner'
class DatabaseActionsRunnerFactory(test_runners.RunnerFactory):
_runner_ns = 'database_actions_runners'
_runner_cls = 'DatabaseActionsRunner'
@test(depends_on_groups=[groups.ROOT_ACTION_INST_DELETE_WAIT],
groups=[GROUP, groups.USER_ACTION_CREATE])
class UserActionsCreateGroup(TestGroup):
"""Test User Actions Create functionality."""
def __init__(self):
super(UserActionsCreateGroup, self).__init__(
UserActionsRunnerFactory.instance())
self.database_actions_runner = DatabaseActionsRunnerFactory.instance()
@test
def create_user_databases(self):
"""Create user databases on an existing instance."""
# These databases may be referenced by the users (below) so we need to
# create them first.
self.database_actions_runner.run_databases_create()
@test(runs_after=[create_user_databases])
def create_users(self):
"""Create users on an existing instance."""
self.test_runner.run_users_create()
@test(depends_on=[create_users])
def show_user(self):
"""Show created users."""
self.test_runner.run_user_show()
@test(depends_on=[create_users],
runs_after=[show_user])
def list_users(self):
"""List the created users."""
self.test_runner.run_users_list()
@test(depends_on=[create_users],
runs_after=[list_users])
def show_user_access(self):
"""Show user access list."""
self.test_runner.run_user_access_show()
@test(depends_on=[create_users],
runs_after=[show_user_access])
def revoke_user_access(self):
"""Revoke user database access."""
self.test_runner.run_user_access_revoke()
@test(depends_on=[create_users],
runs_after=[revoke_user_access])
def grant_user_access(self):
"""Grant user database access."""
self.test_runner.run_user_access_grant()
@test(depends_on=[create_users],
runs_after=[grant_user_access])
def create_user_with_no_attributes(self):
"""Ensure creating a user with blank specification fails."""
self.test_runner.run_user_create_with_no_attributes()
@test(depends_on=[create_users],
runs_after=[create_user_with_no_attributes])
def create_user_with_blank_name(self):
"""Ensure creating a user with blank name fails."""
self.test_runner.run_user_create_with_blank_name()
@test(depends_on=[create_users],
runs_after=[create_user_with_blank_name])
def create_user_with_blank_password(self):
"""Ensure creating a user with blank password fails."""
self.test_runner.run_user_create_with_blank_password()
@test(depends_on=[create_users],
runs_after=[create_user_with_blank_password])
def create_existing_user(self):
"""Ensure creating an existing user fails."""
self.test_runner.run_existing_user_create()
@test(depends_on=[create_users],
runs_after=[create_existing_user])
def update_user_with_blank_name(self):
"""Ensure updating a user with blank name fails."""
self.test_runner.run_user_update_with_blank_name()
@test(depends_on=[create_users],
runs_after=[update_user_with_blank_name])
def update_user_with_existing_name(self):
"""Ensure updating a user with an existing name fails."""
self.test_runner.run_user_update_with_existing_name()
@test(depends_on=[create_users],
runs_after=[update_user_with_existing_name])
def update_user_attributes(self):
"""Update an existing user."""
self.test_runner.run_user_attribute_update()
@test(depends_on=[update_user_attributes])
def recreate_user_with_no_access(self):
"""Re-create a renamed user with no access rights."""
self.test_runner.run_user_recreate_with_no_access()
@test
def show_nonexisting_user(self):
"""Ensure show on non-existing user fails."""
self.test_runner.run_nonexisting_user_show()
@test
def update_nonexisting_user(self):
"""Ensure updating a non-existing user fails."""
self.test_runner.run_nonexisting_user_update()
@test
def delete_nonexisting_user(self):
"""Ensure deleting a non-existing user fails."""
self.test_runner.run_nonexisting_user_delete()
@test
def create_system_user(self):
"""Ensure creating a system user fails."""
self.test_runner.run_system_user_create()
@test
def show_system_user(self):
"""Ensure showing a system user fails."""
self.test_runner.run_system_user_show()
@test
def update_system_user(self):
"""Ensure updating a system user fails."""
self.test_runner.run_system_user_attribute_update()
@test(depends_on_classes=[UserActionsCreateGroup],
groups=[GROUP, groups.USER_ACTION_DELETE])
class UserActionsDeleteGroup(TestGroup):
"""Test User Actions Delete functionality."""
def __init__(self):
super(UserActionsDeleteGroup, self).__init__(
UserActionsRunnerFactory.instance())
self.database_actions_runner = DatabaseActionsRunnerFactory.instance()
@test
def delete_user(self):
"""Delete the created users."""
self.test_runner.run_user_delete()
@test
def delete_system_user(self):
"""Ensure deleting a system user fails."""
self.test_runner.run_system_user_delete()
@test
def delete_user_databases(self):
"""Delete the user databases."""
self.database_actions_runner.run_database_delete()
@test(groups=[GROUP, groups.USER_ACTION_INST, groups.USER_ACTION_INST_CREATE],
depends_on_classes=[UserActionsDeleteGroup])
class UserActionsInstCreateGroup(TestGroup):
"""Test User Actions Instance Create functionality."""
def __init__(self):
super(UserActionsInstCreateGroup, self).__init__(
UserActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def create_initialized_instance(self):
"""Create an instance with initial users."""
self.instance_create_runner.run_initialized_instance_create(
with_dbs=False, with_users=True, configuration_id=None,
create_helper_user=False, name_suffix='_user')
@test(depends_on_classes=[UserActionsInstCreateGroup],
groups=[GROUP, groups.USER_ACTION_INST,
groups.USER_ACTION_INST_CREATE_WAIT])
class UserActionsInstCreateWaitGroup(TestGroup):
"""Wait for User Actions Instance Create to complete."""
def __init__(self):
super(UserActionsInstCreateWaitGroup, self).__init__(
UserActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def wait_for_instances(self):
"""Waiting for user instance to become active."""
self.instance_create_runner.run_wait_for_init_instance()
@test(depends_on=[wait_for_instances])
def validate_initialized_instance(self):
"""Validate the user instance data and properties."""
self.instance_create_runner.run_validate_initialized_instance()
@test(depends_on_classes=[UserActionsInstCreateWaitGroup],
groups=[GROUP, groups.USER_ACTION_INST, groups.USER_ACTION_INST_DELETE])
class UserActionsInstDeleteGroup(TestGroup):
"""Test User Actions Instance Delete functionality."""
def __init__(self):
super(UserActionsInstDeleteGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def delete_initialized_instance(self):
"""Delete the user instance."""
self.instance_create_runner.run_initialized_instance_delete()
@test(depends_on_classes=[UserActionsInstDeleteGroup],
groups=[GROUP, groups.USER_ACTION_INST,
groups.USER_ACTION_INST_DELETE_WAIT])
class UserActionsInstDeleteWaitGroup(TestGroup):
"""Wait for User Actions Instance Delete to complete."""
def __init__(self):
super(UserActionsInstDeleteWaitGroup, self).__init__(
DatabaseActionsRunnerFactory.instance())
self.instance_create_runner = InstanceCreateRunnerFactory.instance()
@test
def wait_for_delete_initialized_instance(self):
"""Wait for the user instance to delete."""
self.instance_create_runner.run_wait_for_init_delete()
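The ordering contract used throughout this file — `depends_on`/`depends_on_classes` mark hard prerequisites (dependents are skipped on failure), while `runs_after` only constrains ordering — reduces, for scheduling purposes, to a topological sort of a dependency DAG. A minimal sketch using Python's standard-library `graphlib` (the test names below are illustrative, not the full set above):

```python
from graphlib import TopologicalSorter

# Both 'depends_on' and 'runs_after' edges contribute to one DAG;
# a topological sort then yields a valid execution order.
# Mapping: test name -> set of tests that must run before it.
dependencies = {
    "create_users": {"create_user_databases"},   # runs_after-style edge
    "show_user": {"create_users"},               # depends_on-style edge
    "list_users": {"create_users", "show_user"},
    "delete_user": {"list_users"},
}

order = list(TopologicalSorter(dependencies).static_order())

# Every test appears after all of its prerequisites.
for test, prereqs in dependencies.items():
    assert all(order.index(p) < order.index(test) for p in prereqs)
```

This only models the ordering; Proboscis additionally propagates failures along `depends_on` edges, which a plain topological sort does not capture.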


@ -1,162 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
from trove.tests.scenario.helpers.test_helper import TestHelper
from trove.tests.scenario.runners.test_runners import TestRunner
class CassandraClient(object):
# Cassandra 2.1 only supports protocol versions 3 and lower.
NATIVE_PROTOCOL_VERSION = 3
def __init__(self, contact_points, user, password, keyspace):
super(CassandraClient, self).__init__()
self._cluster = None
self._session = None
self._cluster = Cluster(
contact_points=contact_points,
auth_provider=PlainTextAuthProvider(user, password),
protocol_version=self.NATIVE_PROTOCOL_VERSION)
self._session = self._connect(keyspace)
def _connect(self, keyspace):
if not self._cluster.is_shutdown:
return self._cluster.connect(keyspace)
else:
raise Exception("Cannot perform this operation on a terminated "
"cluster.")
@property
def session(self):
return self._session
def __del__(self):
if self._cluster is not None:
self._cluster.shutdown()
if self._session is not None:
self._session.shutdown()
class CassandraHelper(TestHelper):
DATA_COLUMN_NAME = 'value'
def __init__(self, expected_override_name, report):
super(CassandraHelper, self).__init__(expected_override_name, report)
self._data_cache = dict()
def create_client(self, host, *args, **kwargs):
user = self.get_helper_credentials()
username = kwargs.get('username', user['name'])
password = kwargs.get('password', user['password'])
database = kwargs.get('database', user['database'])
return CassandraClient([host], username, password, database)
def add_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host, *args, **kwargs)
self._create_data_table(client, data_label)
stmt = client.session.prepare("INSERT INTO %s (%s) VALUES (?)"
% (data_label, self.DATA_COLUMN_NAME))
count = self._count_data_rows(client, data_label)
if count == 0:
for value in self._get_dataset(data_size):
client.session.execute(stmt, [value])
def _create_data_table(self, client, table_name):
client.session.execute('CREATE TABLE IF NOT EXISTS %s '
'(%s INT PRIMARY KEY)'
% (table_name, self.DATA_COLUMN_NAME))
def _count_data_rows(self, client, table_name):
rows = client.session.execute('SELECT COUNT(*) FROM %s' % table_name)
if rows:
return rows[0][0]
return 0
def _get_dataset(self, data_size):
cache_key = str(data_size)
if cache_key in self._data_cache:
return self._data_cache.get(cache_key)
data = self._generate_dataset(data_size)
self._data_cache[cache_key] = data
return data
def _generate_dataset(self, data_size):
return range(1, data_size + 1)
def remove_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host, *args, **kwargs)
self._drop_table(client, data_label)
def _drop_table(self, client, table_name):
client.session.execute('DROP TABLE %s' % table_name)
def verify_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
expected_data = self._get_dataset(data_size)
client = self.get_client(host, *args, **kwargs)
actual_data = self._select_data_rows(client, data_label)
TestRunner.assert_equal(len(expected_data), len(actual_data),
"Unexpected number of result rows.")
for expected_row in expected_data:
TestRunner.assert_true(expected_row in actual_data,
"Row not found in the result set: %s"
% expected_row)
def _select_data_rows(self, client, table_name):
rows = client.session.execute('SELECT %s FROM %s'
% (self.DATA_COLUMN_NAME, table_name))
return [value[0] for value in rows]
def get_helper_credentials(self):
return {'name': 'lite', 'password': 'litepass', 'database': 'firstdb'}
def ping(self, host, *args, **kwargs):
try:
self.get_client(host, *args, **kwargs)
return True
except Exception:
return False
def get_valid_database_definitions(self):
return [{"name": 'db1'}, {"name": 'db2'}, {"name": 'db3'}]
def get_valid_user_definitions(self):
return [{'name': 'user1', 'password': 'password1',
'databases': []},
{'name': 'user2', 'password': 'password1',
'databases': [{'name': 'db1'}]},
{'name': 'user3', 'password': 'password1',
'databases': [{'name': 'db1'}, {'name': 'db2'}]}]
def get_non_dynamic_group(self):
return {'sstable_preemptive_open_interval_in_mb': 40}
def get_invalid_groups(self):
return [{'sstable_preemptive_open_interval_in_mb': -1},
{'sstable_preemptive_open_interval_in_mb': 'string_value'}]
def get_exposed_user_log_names(self):
return ['system']


@ -1,100 +0,0 @@
# Copyright 2016 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from couchbase.bucket import Bucket
from couchbase import exceptions as cb_except
from trove.tests.scenario.helpers.test_helper import TestHelper
from trove.tests.scenario.runners.test_runners import TestRunner
from trove.tests.util import utils
class CouchbaseHelper(TestHelper):
def __init__(self, expected_override_name, report):
super(CouchbaseHelper, self).__init__(expected_override_name, report)
self._data_cache = dict()
def get_helper_credentials(self):
return {'name': 'lite', 'password': 'litepass'}
def create_client(self, host, *args, **kwargs):
user = self.get_helper_credentials()
return self._create_test_bucket(host, user['name'], user['password'])
def _create_test_bucket(self, host, bucket_name, password):
return Bucket('couchbase://%s/%s' % (host, bucket_name),
password=password)
# Add data overrides
def add_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host, *args, **kwargs)
if not self._key_exists(client, data_label, *args, **kwargs):
self._set_data_point(client, data_label,
self._get_dataset(data_start, data_size))
@utils.retry((cb_except.TemporaryFailError, cb_except.BusyError))
def _key_exists(self, client, key, *args, **kwargs):
return client.get(key, quiet=True).success
@utils.retry((cb_except.TemporaryFailError, cb_except.BusyError))
def _set_data_point(self, client, key, value, *args, **kwargs):
client.insert(key, value)
def _get_dataset(self, data_start, data_size):
cache_key = str(data_size)
if cache_key in self._data_cache:
return self._data_cache.get(cache_key)
data = range(data_start, data_start + data_size)
self._data_cache[cache_key] = data
return data
# Remove data overrides
def remove_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host, *args, **kwargs)
if self._key_exists(client, data_label, *args, **kwargs):
self._remove_data_point(client, data_label, *args, **kwargs)
@utils.retry((cb_except.TemporaryFailError, cb_except.BusyError))
def _remove_data_point(self, client, key, *args, **kwargs):
client.remove(key)
# Verify data overrides
def verify_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host, *args, **kwargs)
expected_value = self._get_dataset(data_start, data_size)
self._verify_data_point(client, data_label, expected_value)
def _verify_data_point(self, client, key, expected_value, *args, **kwargs):
value = self._get_data_point(client, key, *args, **kwargs)
TestRunner.assert_equal(expected_value, value,
"Unexpected value '%s' returned from "
"Couchbase key '%s'" % (value, key))
@utils.retry((cb_except.TemporaryFailError, cb_except.BusyError))
def _get_data_point(self, client, key, *args, **kwargs):
return client.get(key).value
def ping(self, host, *args, **kwargs):
try:
self.create_client(host, *args, **kwargs)
return True
except Exception:
return False
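The `@utils.retry(...)` decorator used above re-invokes operations that raise transient Couchbase errors (`TemporaryFailError`, `BusyError`). The actual `trove.tests.util.utils.retry` implementation is not shown in this diff; the following is a minimal sketch of the same pattern, where the signature, retry count, and delay are assumptions:

```python
import functools
import time

def retry(expected_exceptions, retries=3, delay_sec=0.01):
    """Minimal stand-in for a retry decorator: re-invoke the wrapped
    function when it raises one of the expected (transient) exception
    types, giving up and re-raising after a fixed number of attempts."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except expected_exceptions:
                    if attempt == retries - 1:
                        raise
                    time.sleep(delay_sec)
        return wrapper
    return decorator

class TransientError(Exception):
    """Illustrative stand-in for a transient datastore error."""

calls = []

@retry((TransientError,), retries=3)
def flaky():
    # Fails twice, then succeeds -- the decorator absorbs the failures.
    calls.append(1)
    if len(calls) < 3:
        raise TransientError()
    return "ok"

result = flaky()
```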


@ -1,111 +0,0 @@
# Copyright 2016 IBM Corporation
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import couchdb
from trove.tests.scenario.helpers.test_helper import TestHelper
from trove.tests.scenario.runners.test_runners import TestRunner
class CouchdbHelper(TestHelper):
def __init__(self, expected_override_name, report):
super(CouchdbHelper, self).__init__(expected_override_name, report)
self._data_cache = dict()
self.field_name = 'ff-%s'
self.database = 'firstdb'
def create_client(self, host, *args, **kwargs):
username = self.get_helper_credentials()['name']
password = self.get_helper_credentials()["password"]
url = 'http://%(username)s:%(password)s@%(host)s:5984/' % {
'username': username,
'password': password,
'host': host,
}
server = couchdb.Server(url)
return server
def add_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host, *args, **kwargs)
db = client[self.database]
doc = {}
doc_id, doc_rev = db.save(doc)
data = self._get_dataset(data_size)
doc = db.get(doc_id)
for value in data:
key = self.field_name % value
doc[key] = value
db.save(doc)
def _get_dataset(self, data_size):
cache_key = str(data_size)
if cache_key in self._data_cache:
return self._data_cache.get(cache_key)
data = self._generate_dataset(data_size)
self._data_cache[cache_key] = data
return data
def _generate_dataset(self, data_size):
return range(1, data_size + 1)
def remove_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host)
db = client[self.database + "_" + data_label]
client.delete(db)
def verify_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
expected_data = self._get_dataset(data_size)
client = self.get_client(host, *args, **kwargs)
db = client[self.database]
actual_data = []
TestRunner.assert_equal(len(db), 1)
for i in db:
items = db[i].items()
actual_data = ([value for key, value in items
if key not in ['_id', '_rev']])
TestRunner.assert_equal(len(expected_data),
len(actual_data),
"Unexpected number of result rows.")
for expected_row in expected_data:
TestRunner.assert_true(expected_row in actual_data,
"Row not found in the result set: %s"
% expected_row)
def get_helper_credentials(self):
return {'name': 'lite', 'password': 'litepass',
'database': self.database}
def get_helper_credentials_root(self):
return {'name': 'root', 'password': 'rootpass'}
def get_valid_database_definitions(self):
return [{'name': 'db1'}, {'name': 'db2'}, {"name": 'db3'}]
def get_valid_user_definitions(self):
return [{'name': 'user1', 'password': 'password1', 'databases': [],
'host': '127.0.0.1'},
{'name': 'user2', 'password': 'password1',
'databases': [{'name': 'db1'}], 'host': '0.0.0.0'},
{'name': 'user3', 'password': 'password1',
'databases': [{'name': 'db1'}, {'name': 'db2'}]}]
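`verify_actual_data` above reconstructs the stored values by dropping CouchDB's bookkeeping fields (`_id`, `_rev`) from each document before comparing against the expected dataset. That filtering step in isolation (a sketch; the function name is illustrative):

```python
def user_fields(doc):
    # Drop CouchDB bookkeeping fields, keeping only the user data
    # values, mirroring the comprehension in verify_actual_data.
    return [value for key, value in doc.items()
            if key not in ('_id', '_rev')]

doc = {'_id': 'abc', '_rev': '1-def', 'ff-1': 1, 'ff-2': 2}
values = user_fields(doc)
```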


@ -1,44 +0,0 @@
# Copyright 2016 IBM Corporation
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.tests.scenario.helpers.test_helper import TestHelper
class Db2Helper(TestHelper):
def __init__(self, expected_override_name, report):
super(Db2Helper, self).__init__(expected_override_name, report)
def get_helper_credentials(self):
return {'name': 'lite', 'password': 'litepass', 'database': 'lite'}
def get_valid_user_definitions(self):
return [{'name': 'user1', 'password': 'password1', 'databases': []},
{'name': 'user2', 'password': 'password1',
'databases': [{'name': 'db1'}]},
{'name': 'user3', 'password': 'password1',
'databases': [{'name': 'db1'}, {'name': 'db2'}]}]
def get_dynamic_group(self):
return {'MON_HEAP_SZ': 40}
def get_non_dynamic_group(self):
return {'NUMDB': 30}
def get_invalid_groups(self):
return [{'timezone': 997},
{"max_worker_processes": 'string_value'},
{"standard_conforming_strings": 'string_value'}]


@ -1,22 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.tests.scenario.helpers.mysql_helper import MysqlHelper
class MariadbHelper(MysqlHelper):
def __init__(self, expected_override_name, report):
super(MariadbHelper, self).__init__(expected_override_name, report)


@ -1,41 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.tests.scenario.helpers.test_helper import TestHelper
class MongodbHelper(TestHelper):
def __init__(self, expected_override_name, report):
super(MongodbHelper, self).__init__(expected_override_name, report)
def get_valid_database_definitions(self):
return [{"name": 'db1'}, {"name": 'db2'}, {'name': 'db3'}]
def get_valid_user_definitions(self):
return [{'name': 'db0.user1', 'password': 'password1',
'databases': []},
{'name': 'db0.user2', 'password': 'password1',
'databases': [{'name': 'db1'}]},
{'name': 'db1.user3', 'password': 'password1',
'databases': [{'name': 'db1'}, {'name': 'db2'}]}]
def get_non_dynamic_group(self):
return {'systemLog.verbosity': 4}
def get_invalid_groups(self):
return [{'net.maxIncomingConnections': -1},
{'storage.mmapv1.nsSize': 4096},
{'storage.journal.enabled': 'string_value'}]


@ -1,59 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.tests.scenario.helpers.sql_helper import SqlHelper
class MysqlHelper(SqlHelper):
def __init__(self, expected_override_name, report):
super(MysqlHelper, self).__init__(expected_override_name, report,
'mysql+pymysql')
def get_helper_credentials(self):
return {'name': 'lite', 'password': 'litepass', 'database': 'firstdb'}
def get_helper_credentials_root(self):
return {'name': 'root', 'password': 'rootpass'}
def get_valid_database_definitions(self):
return [{'name': 'db1', 'character_set': 'latin2',
'collate': 'latin2_general_ci'},
{'name': 'db2'}, {"name": 'db3'}]
def get_valid_user_definitions(self):
return [{'name': 'a_user1', 'password': 'password1', 'databases': [],
'host': '127.0.0.1'},
{'name': 'a_user2', 'password': 'password1',
'databases': [{'name': 'db1'}], 'host': '0.0.0.0'},
{'name': 'a_user3', 'password': 'password1',
'databases': [{'name': 'db1'}, {'name': 'db2'}]}]
def get_dynamic_group(self):
return {'key_buffer_size': 10485760,
'join_buffer_size': 10485760}
def get_non_dynamic_group(self):
return {'innodb_buffer_pool_size': 10485760,
'long_query_time': 59.1}
def get_invalid_groups(self):
return [{'key_buffer_size': -1}, {"join_buffer_size": 'string_value'}]
def get_exposed_user_log_names(self):
return ['general', 'slow_query']
def get_unexposed_sys_log_names(self):
return ['guest', 'error']


@ -1,22 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.tests.scenario.helpers.mysql_helper import MysqlHelper
class PerconaHelper(MysqlHelper):
def __init__(self, expected_override_name, report):
super(PerconaHelper, self).__init__(expected_override_name, report)


@ -1,69 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.tests.scenario.helpers.sql_helper import SqlHelper
class PostgresqlHelper(SqlHelper):
def __init__(self, expected_override_name, report, port=5432):
super(PostgresqlHelper, self).__init__(expected_override_name, report,
'postgresql', port=port)
@property
def test_schema(self):
return 'public'
def get_helper_credentials(self):
# There must be a database with the same name as the user in order
# for the user to be able to log in.
return {'name': 'lite', 'password': 'litepass', 'database': 'lite'}
def get_helper_credentials_root(self):
return {'name': 'postgres', 'password': 'rootpass'}
def get_valid_database_definitions(self):
return [{'name': 'db1'}, {'name': 'db2'}, {'name': 'db3'}]
def get_valid_user_definitions(self):
return [{'name': 'user1', 'password': 'password1', 'databases': []},
{'name': 'user2', 'password': 'password1',
'databases': [{'name': 'db1'}]},
{'name': 'user3', 'password': 'password1',
'databases': [{'name': 'db1'}, {'name': 'db2'}]}]
def get_dynamic_group(self):
return {'effective_cache_size': '528MB'}
def get_non_dynamic_group(self):
return {'max_connections': 113,
'log_min_duration_statement': '257ms'}
def get_invalid_groups(self):
return [{'timezone': 997},
{"vacuum_cost_delay": 'string_value'},
{"standard_conforming_strings": 'string_value'}]
def get_configuration_value(self, property_name, host, *args, **kwargs):
client = self.get_client(host, *args, **kwargs)
cmd = "SHOW %s;" % property_name
row = client.execute(cmd).fetchone()
return row[0]
def get_exposed_user_log_names(self):
return ['general']
def log_enable_requires_restart(self):
return True


@ -1,22 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.tests.scenario.helpers.mysql_helper import MysqlHelper
class PxcHelper(MysqlHelper):
def __init__(self, expected_override_name, report):
super(PxcHelper, self).__init__(expected_override_name, report)


@ -1,214 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
import redis
from trove.tests.scenario.helpers.test_helper import TestHelper
from trove.tests.scenario.runners.test_runners import TestRunner
class RedisHelper(TestHelper):
def __init__(self, expected_override_name, report):
super(RedisHelper, self).__init__(expected_override_name, report)
self.key_patterns = ['user_a:%s', 'user_b:%s']
self.value_pattern = 'id:%s'
self.label_value = 'value_set'
self._ds_client_cache = dict()
def get_helper_credentials_root(self):
return {'name': '-', 'password': 'rootpass'}
def get_client(self, host, *args, **kwargs):
# We need to cache the Redis client in order to prevent Error 99
# (Cannot assign requested address) when working with large data sets.
# A new client may be created frequently due to how the redirection
# works (see '_execute_with_redirection').
# The old (now closed) connections, however, have to wait for about
# 60s (TIME_WAIT) before the port can be released.
# This is a feature of the operating system that helps it deal with
# packets that arrive after the connection is closed.
#
# NOTE(zhaochao): when connecting to a Redis server with a password,
# the cached client may not be updated to use the same password;
# the connection_kwargs of the client's ConnectionPool object should
# be checked, and if the new password is different, a new client
# instance will be created.
recreate_client = True
# NOTE(zhaochao): Another problem with caching clients is that, when
# the 'requirepass' parameter of the Redis server is changed, an
# already connected client can still issue commands. If we want to
# make sure old passwords cannot be used to connect to the server,
# cached clients shouldn't be used; a new one should be created
# instead. We cannot easily tell whether the 'requirepass' parameter
# has changed, so we always recreate a client when a password is
# explicitly specified. The cached client is only used when no
# password is specified (i.e. we're going to use the default
# password) and the cached password is the same as the default one.
if (host in self._ds_client_cache and 'password' not in kwargs):
default_password = self.get_helper_credentials()['password']
cached_password = (self._ds_client_cache[host]
.connection_pool
.connection_kwargs.get('password'))
if cached_password == default_password:
recreate_client = False
if recreate_client:
self._ds_client_cache[host] = (
self.create_client(host, *args, **kwargs))
return self._ds_client_cache[host]
def create_client(self, host, *args, **kwargs):
user = self.get_helper_credentials()
password = kwargs.get('password', user['password'])
client = redis.Redis(password=password, host=host)
return client
# Add data overrides
# We use multiple keys to make the Redis backup take longer
def add_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
test_set = self._get_data_point(host, data_label, *args, **kwargs)
if not test_set:
for num in range(data_start, data_start + data_size):
for key_pattern in self.key_patterns:
self._set_data_point(
host,
key_pattern % str(num), self.value_pattern % str(num),
*args, **kwargs)
# now that the data is there, add the label
self._set_data_point(
host,
data_label, self.label_value,
*args, **kwargs)
def _set_data_point(self, host, key, value, *args, **kwargs):
def set_point(client, key, value):
return client.set(key, value)
self._execute_with_redirection(
host, set_point, [key, value], *args, **kwargs)
def _get_data_point(self, host, key, *args, **kwargs):
def get_point(client, key):
return client.get(key)
return self._execute_with_redirection(
host, get_point, [key], *args, **kwargs)
def _execute_with_redirection(self, host, callback, callback_args,
*args, **kwargs):
"""Redis clustering is a relatively new feature still not supported
in a fully transparent way by all clients.
The application itself is responsible for connecting to the right node
when accessing a key in a Redis cluster instead.
Clients may be redirected to other nodes by redirection errors:
redis.exceptions.ResponseError: MOVED 10778 10.64.0.2:6379
This method tries to execute a given callback on a given host.
If it gets a redirection error it parses the new host from the response
and issues the same callback on this new host.
"""
client = self.get_client(host, *args, **kwargs)
try:
return callback(client, *callback_args)
except redis.exceptions.ResponseError as ex:
response = str(ex)
if response:
tokens = response.split()
if tokens[0] == 'MOVED':
redirected_host = tokens[2].split(':')[0]
if redirected_host:
return self._execute_with_redirection(
redirected_host, callback, callback_args,
*args, **kwargs)
raise ex
# Remove data overrides
# We use multiple keys to make the Redis backup take longer
def remove_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
test_set = self._get_data_point(host, data_label, *args, **kwargs)
if test_set:
for num in range(data_start, data_start + data_size):
for key_pattern in self.key_patterns:
self._expire_data_point(host, key_pattern % str(num),
*args, **kwargs)
# now that the data is gone, remove the label
self._expire_data_point(host, data_label, *args, **kwargs)
def _expire_data_point(self, host, key, *args, **kwargs):
def expire_point(client, key):
return client.expire(key, 0)
self._execute_with_redirection(
host, expire_point, [key], *args, **kwargs)
# Verify data overrides
# We use multiple keys to make the Redis backup take longer
def verify_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
# make sure the data is there - tests edge cases and a random one
self._verify_data_point(host, data_label, self.label_value,
*args, **kwargs)
midway_num = data_start + int(data_size / 2)
random_num = random.randint(data_start + 2,
data_start + data_size - 3)
for num in [data_start,
data_start + 1,
midway_num,
random_num,
data_start + data_size - 2,
data_start + data_size - 1]:
for key_pattern in self.key_patterns:
self._verify_data_point(host,
key_pattern % num,
self.value_pattern % num,
*args, **kwargs)
# negative tests
for num in [data_start - 1,
data_start + data_size]:
for key_pattern in self.key_patterns:
self._verify_data_point(host, key_pattern % num, None,
*args, **kwargs)
def _verify_data_point(self, host, key, expected_value, *args, **kwargs):
value = self._get_data_point(host, key, *args, **kwargs)
TestRunner.assert_equal(expected_value, value,
"Unexpected value '%s' returned from Redis "
"key '%s'" % (value, key))
def get_dynamic_group(self):
return {'hz': 15}
def get_non_dynamic_group(self):
return {'databases': 24}
def get_invalid_groups(self):
return [{'hz': 600}, {'databases': -1}, {'databases': 'string_value'}]
def ping(self, host, *args, **kwargs):
try:
client = self.get_client(host, *args, **kwargs)
return client.ping()
except Exception:
return False
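The MOVED-redirection handling documented in `_execute_with_redirection` above can be sketched on its own. A minimal sketch, assuming the error string format `MOVED <slot> <host>:<port>` shown in that docstring (the helper name `parse_moved_host` is illustrative, not part of the original file):

```python
def parse_moved_host(response):
    """Return the redirected host from a Redis MOVED error string, or None."""
    tokens = response.split()
    # A MOVED response looks like: "MOVED 10778 10.64.0.2:6379"
    if len(tokens) == 3 and tokens[0] == 'MOVED':
        return tokens[2].split(':')[0]
    return None

print(parse_moved_host('MOVED 10778 10.64.0.2:6379'))  # 10.64.0.2
```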


@ -1,150 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from sqlalchemy import MetaData, Table, Column, Integer
from trove.tests.scenario.helpers.test_helper import TestHelper
from trove.tests.scenario.runners.test_runners import TestRunner
class SqlHelper(TestHelper):
"""This mixin provides data handling helper functions for SQL datastores.
"""
DATA_COLUMN_NAME = 'value'
def __init__(self, expected_override_name, report,
protocol="mysql+pymysql", port=None):
super(SqlHelper, self).__init__(expected_override_name, report)
self.protocol = protocol
self.port = port
self.credentials = self.get_helper_credentials()
self.credentials_root = self.get_helper_credentials_root()
self._schema_metadata = MetaData()
self._data_cache = dict()
@property
def test_schema(self):
return self.credentials['database']
def create_client(self, host, *args, **kwargs):
username = kwargs.get('username', self.credentials['name'])
password = kwargs.get('password', self.credentials['password'])
database = kwargs.get('database', self.credentials['database'])
creds = {"name": username, "password": password, "database": database}
return sqlalchemy.create_engine(
self._build_connection_string(host, creds))
def _build_connection_string(self, host, creds):
if self.port:
host = "%s:%d" % (host, self.port)
credentials = {'protocol': self.protocol,
'host': host,
'user': creds.get('name', ''),
'password': creds.get('password', ''),
'database': creds.get('database', '')}
return ('%(protocol)s://%(user)s:%(password)s@%(host)s/%(database)s'
% credentials)
# Add data overrides
def add_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host, *args, **kwargs)
self._create_data_table(client, self.test_schema, data_label)
count = self._count_data_rows(client, self.test_schema, data_label)
if count == 0:
self._insert_data_rows(client, self.test_schema, data_label,
data_size)
def _create_data_table(self, client, schema_name, table_name):
Table(
table_name, self._schema_metadata,
Column(self.DATA_COLUMN_NAME, Integer(),
nullable=False, default=0),
keep_existing=True, schema=schema_name
).create(client, checkfirst=True)
def _count_data_rows(self, client, schema_name, table_name):
data_table = self._get_schema_table(schema_name, table_name)
return client.execute(data_table.count()).scalar()
def _insert_data_rows(self, client, schema_name, table_name, data_size):
data_table = self._get_schema_table(schema_name, table_name)
client.execute(data_table.insert(), self._get_dataset(data_size))
def _get_schema_table(self, schema_name, table_name):
qualified_table_name = '%s.%s' % (schema_name, table_name)
return self._schema_metadata.tables.get(qualified_table_name)
def _get_dataset(self, data_size):
cache_key = str(data_size)
if cache_key in self._data_cache:
return self._data_cache.get(cache_key)
data = self._generate_dataset(data_size)
self._data_cache[cache_key] = data
return data
def _generate_dataset(self, data_size):
return [{self.DATA_COLUMN_NAME: value}
for value in range(1, data_size + 1)]
# Remove data overrides
def remove_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
client = self.get_client(host)
self._drop_table(client, self.test_schema, data_label)
def _drop_table(self, client, schema_name, table_name):
data_table = self._get_schema_table(schema_name, table_name)
data_table.drop(client, checkfirst=True)
# Verify data overrides
def verify_actual_data(self, data_label, data_start, data_size, host,
*args, **kwargs):
expected_data = [(item[self.DATA_COLUMN_NAME],)
for item in self._get_dataset(data_size)]
client = self.get_client(host, *args, **kwargs)
actual_data = self._select_data_rows(client, self.test_schema,
data_label)
TestRunner.assert_equal(len(expected_data), len(actual_data),
"Unexpected number of result rows.")
TestRunner.assert_list_elements_equal(
expected_data, actual_data, "Unexpected rows in the result set.")
def _select_data_rows(self, client, schema_name, table_name):
data_table = self._get_schema_table(schema_name, table_name)
return client.execute(data_table.select()).fetchall()
def ping(self, host, *args, **kwargs):
try:
root_client = self.get_client(host, *args, **kwargs)
root_client.execute("SELECT 1;")
return True
except Exception as e:
print("Failed to execute sql command, error: %s" % str(e))
return False
def get_configuration_value(self, property_name, host, *args, **kwargs):
client = self.get_client(host, *args, **kwargs)
cmd = "SHOW GLOBAL VARIABLES LIKE '%s';" % property_name
row = client.execute(cmd).fetchone()
return row['Value']
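The connection URL assembled by `SqlHelper._build_connection_string` above follows the standard SQLAlchemy form `protocol://user:password@host[:port]/database`. A minimal standalone sketch of the same assembly (the function name here is illustrative):

```python
def build_connection_string(protocol, host, user, password, database,
                            port=None):
    # Append the port to the host portion only when one is given,
    # mirroring the port handling in SqlHelper._build_connection_string.
    if port:
        host = "%s:%d" % (host, port)
    return '%(protocol)s://%(user)s:%(password)s@%(host)s/%(database)s' % {
        'protocol': protocol, 'user': user, 'password': password,
        'host': host, 'database': database}

url = build_connection_string('mysql+pymysql', '10.0.0.5',
                              'lite', 'litepass', 'lite', port=3306)
# mysql+pymysql://lite:litepass@10.0.0.5:3306/lite
```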


@ -1,510 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from enum import Enum
import inspect
from proboscis import SkipTest
from time import sleep
class DataType(Enum):
"""
Represent the type of data to add to a datastore. This allows for
multiple 'states' of data that can be verified after actions are
performed by Trove.
If new entries are added here, sane values should be added to the
_fn_data dictionary defined in TestHelper.
"""
# micro amount of data, useful for testing datastore logging, etc.
micro = 1
# another micro dataset (also for datastore logging)
micro2 = 2
# another micro dataset (also for datastore logging)
micro3 = 3
# another micro dataset (also for datastore logging)
micro4 = 4
# very tiny amount of data, useful for testing replication
# propagation, etc.
tiny = 5
# another tiny dataset (also for replication propagation)
tiny2 = 6
# a third tiny dataset (also for replication propagation)
tiny3 = 7
# a fourth tiny dataset (for cluster propagation)
tiny4 = 8
# small amount of data (this can be added to each instance
# after creation, for example).
small = 9
# large data, enough to make creating a backup take 20s or more.
large = 10
class TestHelper(object):
"""
Base class for all 'Helper' classes.
The Helper classes are designed to do datastore specific work
that can be used by multiple runner classes. Things like adding
data to datastores and verifying data or internal database states,
etc. should be handled by these classes.
"""
# Define the actions that can be done on each DataType. When adding
# a new action, remember to modify _data_fns
FN_ADD = 'add'
FN_REMOVE = 'remove'
FN_VERIFY = 'verify'
FN_TYPES = [FN_ADD, FN_REMOVE, FN_VERIFY]
# Artificial 'DataType' name to use for the methods that do the
# actual data manipulation work.
DT_ACTUAL = 'actual'
def __init__(self, expected_override_name, report):
"""Initialize the helper class by creating a number of stub
functions that each datastore-specific class can choose to
override. Basically, the functions are of the form:
{FN_TYPE}_{DataType.name}_data
For example:
add_tiny_data
add_small_data
remove_small_data
verify_large_data
and so on. Add and remove actions throw a SkipTest if not
implemented, and verify actions by default do nothing.
These methods, by default, call the corresponding *_actual_data()
passing in 'data_label', 'data_start' and 'data_size' as defined
for each DataType in the dictionary below.
"""
super(TestHelper, self).__init__()
self._expected_override_name = expected_override_name
self.report = report
# For building data access functions
# name/fn pairs for each action
self._data_fns = {self.FN_ADD: {},
self.FN_REMOVE: {},
self.FN_VERIFY: {}}
# Pattern used to create the data functions. The first parameter
# is the function type (FN_TYPE), the second is the DataType
# or DT_ACTUAL.
self.data_fn_pattern = '%s_%s_data'
# Values to distinguish between the different DataTypes. If these
# values don't work for a datastore, it will need to override
# the auto-generated {FN_TYPE}_{DataType.name}_data method.
self.DATA_START = 'start'
self.DATA_SIZE = 'size'
self._fn_data = {
DataType.micro.name: {
self.DATA_START: 100,
self.DATA_SIZE: 10},
DataType.micro2.name: {
self.DATA_START: 200,
self.DATA_SIZE: 10},
DataType.micro3.name: {
self.DATA_START: 300,
self.DATA_SIZE: 10},
DataType.micro4.name: {
self.DATA_START: 400,
self.DATA_SIZE: 10},
DataType.tiny.name: {
self.DATA_START: 1000,
self.DATA_SIZE: 100},
DataType.tiny2.name: {
self.DATA_START: 2000,
self.DATA_SIZE: 100},
DataType.tiny3.name: {
self.DATA_START: 3000,
self.DATA_SIZE: 100},
DataType.tiny4.name: {
self.DATA_START: 4000,
self.DATA_SIZE: 100},
DataType.small.name: {
self.DATA_START: 10000,
self.DATA_SIZE: 1000},
DataType.large.name: {
self.DATA_START: 100000,
self.DATA_SIZE: 100000},
}
self._build_data_fns()
#################
# Utility methods
#################
def get_class_name(self):
"""Builds a string of the expected class name, plus the actual one
being used if it's not the same.
"""
class_name_str = "'%s'" % self._expected_override_name
if self._expected_override_name != self.__class__.__name__:
class_name_str += ' (using %s)' % self.__class__.__name__
return class_name_str
################
# Client related
################
def get_client(self, host, *args, **kwargs):
"""Gets the datastore client. This isn't cached as the
database may be restarted in between calls, causing
lost connection errors.
"""
return self.create_client(host, *args, **kwargs)
def create_client(self, host, *args, **kwargs):
"""Create a datastore client. This is datastore specific, so this
method should be overridden if datastore access is desired.
"""
raise SkipTest('No client defined')
def get_helper_credentials(self):
"""Return the credentials that the client will be using to
access the database.
"""
return {'name': None, 'password': None, 'database': None}
def ping(self, host, *args, **kwargs):
"""Try to connect to a given host and perform a simple read-only
action.
Return True on success or False otherwise.
"""
pass
##############
# Root related
##############
def get_helper_credentials_root(self):
"""Return the credentials that the client will be using to
access the database as root.
"""
return {'name': None, 'password': None}
##############
# Data related
##############
def add_data(self, data_type, host, *args, **kwargs):
"""Adds data of type 'data_type' to the database. Descendant
classes should implement a function 'add_actual_data' that has the
following signature:
def add_actual_data(
self, # standard self reference
data_label, # label used to identify the 'type' to add
data_start, # a start count
data_size, # a size to use
host, # the host to add the data to
*args, # for possible future expansion
**kwargs # for possible future expansion
):
The data_label could be used to create a database or a table if the
datastore supports that. The data_start and data_size values are
designed not to overlap, such that all the data could be stored
in a single namespace (for example, creating ids from data_start
to data_start + data_size).
Since this method may be called multiple times, the
'add_actual_data' function should be idempotent.
"""
self._perform_data_action(self.FN_ADD, data_type.name,
host, *args, **kwargs)
def remove_data(self, data_type, host, *args, **kwargs):
"""Removes all data associated with 'data_type'. See
instructions for 'add_data' for implementation guidance.
"""
self._perform_data_action(self.FN_REMOVE, data_type.name,
host, *args, **kwargs)
def verify_data(self, data_type, host, *args, **kwargs):
"""Verify that the data of type 'data_type' exists in the
datastore. This can be done by testing edge cases, and possibly
some random elements within the set. See
instructions for 'add_data' for implementation guidance.
"""
self._perform_data_action(self.FN_VERIFY, data_type.name,
host, *args, **kwargs)
def _perform_data_action(self, fn_type, fn_name, host,
*args, **kwargs):
"""By default, the action is attempted 10 times, sleeping for 3
seconds between each attempt. This can be controlled by the
retry_count and retry_sleep kwarg values.
"""
retry_count = kwargs.pop('retry_count', 10) or 0
retry_sleep = kwargs.pop('retry_sleep', 3) or 0
fns = self._data_fns[fn_type]
data_fn_name = self.data_fn_pattern % (fn_type, fn_name)
attempts = -1
while True:
attempts += 1
try:
fns[data_fn_name](self, host, *args, **kwargs)
break
except SkipTest:
raise
except Exception as ex:
self.report.log("Attempt %d to %s data type %s failed\n%s"
% (attempts, fn_type, fn_name, ex))
if attempts > retry_count:
raise RuntimeError("Error calling %s from class %s - %s" %
(data_fn_name, self.__class__.__name__,
ex))
self.report.log("Trying again (after %d second sleep)" %
retry_sleep)
sleep(retry_sleep)
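The retry loop in `_perform_data_action` above can be sketched generically. A minimal sketch under assumed names (`retry` and `action` are illustrative, not part of the original class):

```python
import time

def retry(action, retry_count=10, retry_sleep=3):
    # Try the action up to retry_count + 1 times, sleeping between
    # failed attempts; the final failure is re-raised to the caller.
    for attempt in range(retry_count + 1):
        try:
            return action()
        except Exception:
            if attempt == retry_count:
                raise
            time.sleep(retry_sleep)
```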
def _build_data_fns(self):
"""Build the base data functions specified by FN_TYPE_*
for each of the types defined in the DataType class. For example,
'add_small_data' and 'verify_large_data'. These
functions are set to call '*_actual_data' and will pass in
sane values for label, start and size. The '*_actual_data'
methods should be overridden by a descendant class, and are the
ones that do the actual work.
The original 'add_small_data', etc. methods can also be overridden
if needed, and those overriding functions will be bound before
calling any data functions such as 'add_data' or 'remove_data'.
"""
for fn_type in self.FN_TYPES:
fn_dict = self._data_fns[fn_type]
for data_type in DataType:
self._data_fn_builder(fn_type, data_type.name, fn_dict)
self._data_fn_builder(fn_type, self.DT_ACTUAL, fn_dict)
self._override_data_fns()
def _data_fn_builder(self, fn_type, fn_name, fn_dict):
"""Builds the actual function with a SkipTest exception,
and changes the name to reflect the pattern.
"""
data_fn_name = self.data_fn_pattern % (fn_type, fn_name)
# Build the overridable 'actual' Data Manipulation methods
if fn_name == self.DT_ACTUAL:
def data_fn(self, data_label, data_start, data_size, host,
*args, **kwargs):
# default action is to skip the test
cls_str = ''
if self._expected_override_name != self.__class__.__name__:
cls_str = (' (%s not loaded)' %
self._expected_override_name)
raise SkipTest("Data function '%s' not found in '%s'%s" % (
data_fn_name, self.__class__.__name__, cls_str))
else:
def data_fn(self, host, *args, **kwargs):
# call the corresponding 'actual' method
fns = self._data_fns[fn_type]
var_dict = self._fn_data[fn_name]
data_start = var_dict[self.DATA_START]
data_size = var_dict[self.DATA_SIZE]
actual_fn_name = self.data_fn_pattern % (
fn_type, self.DT_ACTUAL)
try:
fns[actual_fn_name](self, fn_name, data_start, data_size,
host, *args, **kwargs)
except SkipTest:
raise
except Exception as ex:
raise RuntimeError("Error calling %s from class %s: %s" % (
data_fn_name, self.__class__.__name__, ex))
data_fn.__name__ = data_fn_name
fn_dict[data_fn_name] = data_fn
def _override_data_fns(self):
"""Bind the override methods to the dict."""
members = inspect.getmembers(self.__class__,
predicate=inspect.ismethod)
for fn_type in self.FN_TYPES:
fns = self._data_fns[fn_type]
for name, fn in members:
if name in fns:
fns[name] = fn
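The `'%s_%s_data'` naming scheme driving `_build_data_fns` above can be illustrated standalone (the module-level names below are illustrative, and only a subset of the `DataType` values is shown):

```python
FN_TYPES = ['add', 'remove', 'verify']
DATA_TYPES = ['micro', 'tiny', 'small', 'large']
PATTERN = '%s_%s_data'

# One generated method name per (action, data type) pair,
# e.g. 'add_small_data' or 'verify_large_data'.
generated = [PATTERN % (fn, dt) for fn in FN_TYPES for dt in DATA_TYPES]
print(generated[0])  # add_micro_data
```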
#######################
# Database/User related
#######################
def get_valid_database_definitions(self):
"""Return a list of valid database JSON definitions.
These definitions will be used by tests that create databases.
Return an empty list if the datastore does not support databases.
"""
return list()
def get_valid_user_definitions(self):
"""Return a list of valid user JSON definitions.
These definitions will be used by tests that create users.
Return an empty list if the datastore does not support users.
"""
return list()
def get_non_existing_database_definition(self):
"""Return a valid JSON definition for a non-existing database.
This definition will be used by negative database tests.
The database will not be created by any of the tests.
Return None if the datastore does not support databases.
"""
valid_defs = self.get_valid_database_definitions()
return self._get_non_existing_definition(valid_defs)
def get_non_existing_user_definition(self):
"""Return a valid JSON definition for a non-existing user.
This definition will be used by negative user tests.
The user will not be created by any of the tests.
Return None if the datastore does not support users.
"""
valid_defs = self.get_valid_user_definitions()
return self._get_non_existing_definition(valid_defs)
def _get_non_existing_definition(self, existing_defs):
"""This will create a unique definition for a non-existing object
by randomizing one of an existing object.
"""
if existing_defs:
non_existing_def = dict(existing_defs[0])
while non_existing_def in existing_defs:
non_existing_def = self._randomize_on_name(non_existing_def)
return non_existing_def
return None
def _randomize_on_name(self, definition):
def_copy = dict(definition)
def_copy['name'] = ''.join([def_copy['name'], 'rnd'])
return def_copy
#############################
# Configuration Group related
#############################
def get_dynamic_group(self):
"""Return a definition of a dynamic configuration group.
A dynamic group should contain only properties that do not require
database restart.
Return an empty dict if the datastore does not have any.
"""
return dict()
def get_non_dynamic_group(self):
"""Return a definition of a non-dynamic configuration group.
A non-dynamic group has to include at least one property that requires
database restart.
Return an empty dict if the datastore does not have any.
"""
return dict()
def get_invalid_groups(self):
"""Return a list of configuration groups with invalid values.
An empty list indicates that no 'invalid' tests should be run.
"""
return []
def get_configuration_value(self, property_name, host, *args, **kwargs):
"""Use the client to retrieve the value of a given configuration
property.
"""
raise SkipTest("Runtime configuration retrieval not implemented in %s"
% self.get_class_name())
###################
# Guest Log related
###################
def get_exposed_log_list(self):
"""Return the list of exposed logs for the datastore. This
method shouldn't need to be overridden.
"""
logs = []
try:
logs.extend(self.get_exposed_user_log_names())
except SkipTest:
pass
try:
logs.extend(self.get_exposed_sys_log_names())
except SkipTest:
pass
return logs
def get_full_log_list(self):
"""Return the full list of all logs for the datastore. This
method shouldn't need to be overridden.
"""
logs = self.get_exposed_log_list()
try:
logs.extend(self.get_unexposed_user_log_names())
except SkipTest:
pass
try:
logs.extend(self.get_unexposed_sys_log_names())
except SkipTest:
pass
return logs
# Override these guest log methods if needed
def get_exposed_user_log_names(self):
"""Return the names of the user logs that are visible to all users.
The first log name will be used for tests.
"""
raise SkipTest("No exposed user log names defined.")
def get_unexposed_user_log_names(self):
"""Return the names of the user logs that not visible to all users.
The first log name will be used for tests.
"""
raise SkipTest("No unexposed user log names defined.")
def get_exposed_sys_log_names(self):
"""Return the names of SYS logs that are visible to all users.
The first log name will be used for tests.
"""
raise SkipTest("No exposed sys log names defined.")
def get_unexposed_sys_log_names(self):
"""Return the names of the sys logs that not visible to all users.
The first log name will be used for tests.
"""
return ['guest']
def log_enable_requires_restart(self):
"""Returns whether enabling or disabling a USER log requires a
restart of the datastore.
"""
return False
################
# Module related
################
def get_valid_module_type(self):
"""Return a valid module type."""
return "Ping"
#################
# Cluster related
#################
def get_cluster_types(self):
"""Returns a list of cluster type lists to use when creating instances.
The list should be the same size as the number of cluster instances
that will be created. If not specified, no types are sent to
cluster-create. Cluster grow uses the first type in the list for the
first instance, and doesn't use anything for the second instance
(i.e. doesn't pass in anything for 'type').
An example for this method would be:
return [['data', 'other_type'], ['third_type']]
"""
return None


@ -1,55 +0,0 @@
# Copyright 2016 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from trove.tests.scenario.helpers.sql_helper import SqlHelper
class VerticaHelper(SqlHelper):
def __init__(self, expected_override_name, report):
super(VerticaHelper, self).__init__(expected_override_name, report,
'vertica')
def get_helper_credentials(self):
return {'name': 'lite', 'password': 'litepass', 'database': 'lite'}
def get_valid_user_definitions(self):
return [{'name': 'user1', 'password': 'password1', 'databases': []},
{'name': 'user2', 'password': 'password1',
'databases': [{'name': 'db1'}]},
{'name': 'user3', 'password': 'password1',
'databases': [{'name': 'db1'}, {'name': 'db2'}]}]
def add_actual_data(self, *args, **kwargs):
raise SkipTest("Adding data to Vertica is not implemented")
def verify_actual_data(self, *args, **kwargs):
raise SkipTest("Verifying data in Vertica is not implemented")
def remove_actual_data(self, *args, **kwargs):
raise SkipTest("Removing data from Vertica is not implemented")
def get_dynamic_group(self):
return {'ActivePartitionCount': 3}
def get_non_dynamic_group(self):
return {'BlockCacheSize': 1024}
def get_invalid_groups(self):
return [{'timezone': 997},
{"max_worker_processes": 'string_value'},
{"standard_conforming_strings": 'string_value'}]


@ -1,5 +0,0 @@
BUG_EJECT_VALID_MASTER = 1622014
BUG_WRONG_API_VALIDATION = 1498573
BUG_STOP_DB_IN_CLUSTER = 1645096
BUG_UNAUTH_TEST_WRONG = 1653614
BUG_FORCE_DELETE_FAILS = 1656422


@ -1,479 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from troveclient.compat import exceptions
from trove.common.utils import generate_uuid
from trove.common.utils import poll_until
from trove.tests.scenario.helpers.test_helper import DataType
from trove.tests.scenario.runners.test_runners import TestRunner
class BackupRunner(TestRunner):
def __init__(self):
self.TIMEOUT_BACKUP_CREATE = 60 * 60
self.TIMEOUT_BACKUP_DELETE = 120
super(BackupRunner, self).__init__(timeout=self.TIMEOUT_BACKUP_CREATE)
self.BACKUP_NAME = 'backup_test'
self.BACKUP_DESC = 'test description'
self.backup_host = None
self.backup_info = None
self.backup_count_prior_to_create = 0
self.backup_count_for_ds_prior_to_create = 0
self.backup_count_for_instance_prior_to_create = 0
self.databases_before_backup = None
self.backup_inc_1_info = None
self.backup_inc_2_info = None
self.data_types_added = []
self.restore_instance_id = None
self.restore_host = None
self.restore_inc_1_instance_id = None
self.restore_inc_1_host = None
def run_backup_create_instance_invalid(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
invalid_inst_id = 'invalid-inst-id'
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.backups.create,
self.BACKUP_NAME, invalid_inst_id, self.BACKUP_DESC)
def run_backup_create_instance_not_found(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.backups.create,
self.BACKUP_NAME, generate_uuid(), self.BACKUP_DESC)
def run_add_data_for_backup(self):
self.backup_host = self.get_instance_host()
self.assert_add_data_for_backup(self.backup_host, DataType.large)
def assert_add_data_for_backup(self, host, data_type):
"""In order for this to work, the corresponding datastore
'helper' class should implement the 'add_actual_data' method.
"""
self.test_helper.add_data(data_type, host)
self.data_types_added.append(data_type)
def run_verify_data_for_backup(self):
self.assert_verify_backup_data(self.backup_host, DataType.large)
def assert_verify_backup_data(self, host, data_type):
"""In order for this to work, the corresponding datastore
'helper' class should implement the 'verify_actual_data' method.
"""
self.test_helper.verify_data(data_type, host)
def run_save_backup_counts(self):
# Necessary to test that the count increases.
self.backup_count_prior_to_create = len(
self.auth_client.backups.list())
self.backup_count_for_ds_prior_to_create = len(
self.auth_client.backups.list(
datastore=self.instance_info.dbaas_datastore))
self.backup_count_for_instance_prior_to_create = len(
self.auth_client.instances.backups(self.instance_info.id))
def run_backup_create(self):
if self.test_helper.get_valid_database_definitions():
self.databases_before_backup = self._get_databases(
self.instance_info.id)
self.backup_info = self.assert_backup_create(
self.BACKUP_NAME, self.BACKUP_DESC, self.instance_info.id)
def _get_databases(self, instance_id):
return [database.name for database in
self.auth_client.databases.list(instance_id)]
def assert_backup_create(self, name, desc, instance_id, parent_id=None,
incremental=False):
client = self.auth_client
datastore_version = client.datastore_versions.get(
self.instance_info.dbaas_datastore,
self.instance_info.dbaas_datastore_version)
if incremental:
result = client.backups.create(
name, instance_id, desc, incremental=incremental)
else:
result = client.backups.create(
name, instance_id, desc, parent_id=parent_id)
self.assert_equal(name, result.name,
'Unexpected backup name')
self.assert_equal(desc, result.description,
'Unexpected backup description')
self.assert_equal(instance_id, result.instance_id,
'Unexpected instance ID for backup')
self.assert_equal('NEW', result.status,
'Unexpected status for backup')
if parent_id:
self.assert_equal(parent_id, result.parent_id,
'Unexpected parent ID for backup')
instance = client.instances.get(instance_id)
self.assert_equal('BACKUP', instance.status,
'Unexpected instance status')
self.assert_equal(self.instance_info.dbaas_datastore,
result.datastore['type'],
'Unexpected datastore')
self.assert_equal(self.instance_info.dbaas_datastore_version,
result.datastore['version'],
'Unexpected datastore version')
self.assert_equal(datastore_version.id, result.datastore['version_id'],
'Unexpected datastore version id')
return result
def run_restore_instance_from_not_completed_backup(
self, expected_exception=exceptions.Conflict,
expected_http_code=409):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
None, self._restore_from_backup, client, self.backup_info.id)
self.assert_client_code(client, expected_http_code)
def run_instance_action_right_after_backup_create(
self, expected_exception=exceptions.UnprocessableEntity,
expected_http_code=422):
client = self.auth_client
self.assert_raises(expected_exception, expected_http_code,
client, client.instances.resize_instance,
self.instance_info.id, 1)
def run_backup_create_another_backup_running(
self, expected_exception=exceptions.UnprocessableEntity,
expected_http_code=422):
client = self.auth_client
self.assert_raises(expected_exception, expected_http_code,
client, client.backups.create,
'backup_test2', self.instance_info.id,
'test description2')
def run_backup_delete_while_backup_running(
self, expected_exception=exceptions.UnprocessableEntity,
expected_http_code=422):
client = self.auth_client
result = client.backups.list()
backup = result[0]
self.assert_raises(expected_exception, expected_http_code,
client, client.backups.delete, backup.id)
def run_backup_create_completed(self):
self._verify_backup(self.backup_info.id)
def _verify_backup(self, backup_id):
def _result_is_active():
backup = self.auth_client.backups.get(backup_id)
if backup.status == 'COMPLETED':
return True
else:
                self.assert_not_equal('FAILED', backup.status,
                                      'Backup status should not be FAILED')
return False
poll_until(_result_is_active, time_out=self.TIMEOUT_BACKUP_CREATE)
def run_instance_goes_active(self, expected_states=['BACKUP', 'HEALTHY']):
self._assert_instance_states(self.instance_info.id, expected_states)
def run_backup_list(self):
backup_list = self.auth_client.backups.list()
self.assert_backup_list(
backup_list, self.backup_count_prior_to_create + 1)
def assert_backup_list(self, backup_list, expected_count):
self.assert_equal(expected_count, len(backup_list),
'Unexpected number of backups found')
if expected_count:
backup = backup_list[0]
self.assert_equal(self.BACKUP_NAME, backup.name,
'Unexpected backup name')
self.assert_equal(self.BACKUP_DESC, backup.description,
'Unexpected backup description')
self.assert_not_equal(0.0, backup.size, 'Unexpected backup size')
self.assert_equal(self.instance_info.id, backup.instance_id,
'Unexpected instance id')
self.assert_equal('COMPLETED', backup.status,
'Unexpected backup status')
def run_backup_list_filter_datastore(self):
backup_list = self.auth_client.backups.list(
datastore=self.instance_info.dbaas_datastore)
self.assert_backup_list(
backup_list, self.backup_count_for_ds_prior_to_create + 1)
def run_backup_list_filter_datastore_not_found(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.backups.list,
datastore='NOT_FOUND')
def run_backup_list_for_instance(self):
backup_list = self.auth_client.instances.backups(
self.instance_info.id)
self.assert_backup_list(
backup_list, self.backup_count_for_instance_prior_to_create + 1)
def run_backup_get(self):
backup = self.auth_client.backups.get(self.backup_info.id)
self.assert_backup_list([backup], 1)
self.assert_equal(self.instance_info.dbaas_datastore,
backup.datastore['type'],
'Unexpected datastore type')
self.assert_equal(self.instance_info.dbaas_datastore_version,
backup.datastore['version'],
'Unexpected datastore version')
datastore_version = self.auth_client.datastore_versions.get(
self.instance_info.dbaas_datastore,
self.instance_info.dbaas_datastore_version)
self.assert_equal(datastore_version.id, backup.datastore['version_id'])
def run_backup_get_unauthorized_user(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
client = self.unauth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.backups.get, self.backup_info.id)
def run_add_data_for_inc_backup_1(self):
self.backup_host = self.get_instance_host()
self.assert_add_data_for_backup(self.backup_host, DataType.tiny)
def run_verify_data_for_inc_backup_1(self):
self.assert_verify_backup_data(self.backup_host, DataType.tiny)
def run_inc_backup_1(self):
suffix = '_inc_1'
self.backup_inc_1_info = self.assert_backup_create(
self.BACKUP_NAME + suffix, self.BACKUP_DESC + suffix,
self.instance_info.id, parent_id=self.backup_info.id)
def run_wait_for_inc_backup_1(self):
self._verify_backup(self.backup_inc_1_info.id)
def run_add_data_for_inc_backup_2(self):
self.backup_host = self.get_instance_host()
self.assert_add_data_for_backup(self.backup_host, DataType.tiny2)
def run_verify_data_for_inc_backup_2(self):
self.assert_verify_backup_data(self.backup_host, DataType.tiny2)
def run_inc_backup_2(self):
suffix = '_inc_2'
self.backup_inc_2_info = self.assert_backup_create(
self.BACKUP_NAME + suffix, self.BACKUP_DESC + suffix,
self.instance_info.id, parent_id=self.backup_inc_1_info.id,
incremental=True)
def run_wait_for_inc_backup_2(self):
self._verify_backup(self.backup_inc_2_info.id)
def run_restore_from_backup(self, expected_http_code=200, suffix=''):
self.restore_instance_id = self.assert_restore_from_backup(
self.backup_info.id, suffix=suffix,
expected_http_code=expected_http_code)
def assert_restore_from_backup(self, backup_ref, suffix='',
expected_http_code=200):
client = self.auth_client
result = self._restore_from_backup(client, backup_ref, suffix=suffix)
self.assert_client_code(client, expected_http_code)
self.assert_equal('BUILD', result.status,
'Unexpected instance status')
self.register_debug_inst_ids(result.id)
return result.id
def _restore_from_backup(self, client, backup_ref, suffix=''):
restore_point = {'backupRef': backup_ref}
result = client.instances.create(
self.instance_info.name + '_restore' + suffix,
self.instance_info.dbaas_flavor_href,
self.instance_info.volume,
nics=self.instance_info.nics,
restorePoint=restore_point,
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version)
return result
def run_restore_from_inc_1_backup(self, expected_http_code=200):
self.restore_inc_1_instance_id = self.assert_restore_from_backup(
self.backup_inc_1_info.id, suffix='_inc_1',
expected_http_code=expected_http_code)
def run_restore_from_backup_completed(
self, expected_states=['BUILD', 'HEALTHY']):
self.assert_restore_from_backup_completed(
self.restore_instance_id, expected_states)
self.restore_host = self.get_instance_host(self.restore_instance_id)
def assert_restore_from_backup_completed(
self, instance_id, expected_states):
self._assert_instance_states(instance_id, expected_states)
def run_restore_from_inc_1_backup_completed(
self, expected_states=['BUILD', 'HEALTHY']):
self.assert_restore_from_backup_completed(
self.restore_inc_1_instance_id, expected_states)
self.restore_inc_1_host = self.get_instance_host(
self.restore_inc_1_instance_id)
def run_verify_data_in_restored_instance(self):
self.assert_verify_backup_data(self.restore_host, DataType.large)
def run_verify_databases_in_restored_instance(self):
self.assert_verify_backup_databases(self.restore_instance_id,
self.databases_before_backup)
def run_verify_data_in_restored_inc_1_instance(self):
self.assert_verify_backup_data(self.restore_inc_1_host, DataType.large)
self.assert_verify_backup_data(self.restore_inc_1_host, DataType.tiny)
def run_verify_databases_in_restored_inc_1_instance(self):
self.assert_verify_backup_databases(self.restore_inc_1_instance_id,
self.databases_before_backup)
def assert_verify_backup_databases(self, instance_id, expected_databases):
if expected_databases is not None:
actual = self._get_databases(instance_id)
self.assert_list_elements_equal(
expected_databases, actual,
"Unexpected databases on the restored instance.")
else:
raise SkipTest("Datastore does not support databases.")
def run_delete_restored_instance(self, expected_http_code=202):
self.assert_delete_restored_instance(
self.restore_instance_id, expected_http_code)
def assert_delete_restored_instance(
self, instance_id, expected_http_code):
client = self.auth_client
client.instances.delete(instance_id)
self.assert_client_code(client, expected_http_code)
def run_delete_restored_inc_1_instance(self, expected_http_code=202):
self.assert_delete_restored_instance(
self.restore_inc_1_instance_id, expected_http_code)
def run_wait_for_restored_instance_delete(self, expected_state='SHUTDOWN'):
self.assert_restored_instance_deleted(
self.restore_instance_id, expected_state)
self.restore_instance_id = None
self.restore_host = None
def assert_restored_instance_deleted(self, instance_id, expected_state):
self.assert_all_gone(instance_id, expected_state)
def run_wait_for_restored_inc_1_instance_delete(
self, expected_state='SHUTDOWN'):
self.assert_restored_instance_deleted(
self.restore_inc_1_instance_id, expected_state)
self.restore_inc_1_instance_id = None
self.restore_inc_1_host = None
def run_delete_unknown_backup(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.backups.delete,
'unknown_backup')
def run_delete_backup_unauthorized_user(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
client = self.unauth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.backups.delete, self.backup_info.id)
def run_delete_inc_2_backup(self, expected_http_code=202):
self.assert_delete_backup(
self.backup_inc_2_info.id, expected_http_code)
self.backup_inc_2_info = None
def assert_delete_backup(
self, backup_id, expected_http_code):
client = self.auth_client
client.backups.delete(backup_id)
self.assert_client_code(client, expected_http_code)
self._wait_until_backup_is_gone(client, backup_id)
def _wait_until_backup_is_gone(self, client, backup_id):
def _backup_is_gone():
try:
client.backups.get(backup_id)
return False
except exceptions.NotFound:
return True
poll_until(_backup_is_gone,
time_out=self.TIMEOUT_BACKUP_DELETE)
def run_delete_backup(self, expected_http_code=202):
self.assert_delete_backup(self.backup_info.id, expected_http_code)
def run_check_for_incremental_backup(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
if self.backup_inc_1_info is None:
raise SkipTest("Incremental Backup not created")
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.backups.get,
self.backup_inc_1_info.id)
self.backup_inc_1_info = None
def run_remove_backup_data_from_instance(self):
for data_type in self.data_types_added:
self.test_helper.remove_data(data_type, self.backup_host)
self.data_types_added = []
def run_check_has_incremental(self):
self.assert_incremental_exists(self.backup_info.id)
def assert_incremental_exists(self, parent_id):
def _backup_with_parent_found():
backup_list = self.auth_client.backups.list()
for bkup in backup_list:
if bkup.parent_id == parent_id:
return True
return False
poll_until(_backup_with_parent_found, time_out=30)
class RedisBackupRunner(BackupRunner):
def run_check_has_incremental(self):
pass
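The backup runners above lean heavily on `poll_until` from `trove.common.utils` to wait for asynchronous state changes (backup completion, backup deletion, incremental-parent discovery). A minimal self-contained sketch of that polling loop, assuming only the `retriever`, `sleep_time`, and `time_out` parameters used in these runners (the real helper supports more options):

```python
import time


class PollTimeOut(Exception):
    """Raised when the condition is not met within time_out seconds."""


def poll_until(retriever, sleep_time=1, time_out=30):
    # Repeatedly call retriever until it returns a truthy value,
    # raising PollTimeOut once time_out seconds have elapsed.
    start = time.time()
    while True:
        if retriever():
            return
        if time.time() - start > time_out:
            raise PollTimeOut()
        time.sleep(sleep_time)


# Usage mirroring _wait_until_backup_is_gone: poll a condition that
# flips to True once the (simulated) backup record disappears.
calls = {'count': 0}

def _backup_is_gone():
    calls['count'] += 1
    return calls['count'] >= 3

poll_until(_backup_is_gone, sleep_time=0.01, time_out=5)
```

The same pattern backs `_verify_backup` and `assert_incremental_exists`; only the retriever closure changes.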

View File

@ -1,808 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import os
from proboscis import SkipTest
import time as timer
from trove.common import exception
from trove.common.utils import poll_until
from trove.tests.scenario.helpers.test_helper import DataType
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.scenario.runners.test_runners import TestRunner
from trove.tests.util.check import TypeCheck
from troveclient.compat import exceptions
class ClusterRunner(TestRunner):
USE_CLUSTER_ID_FLAG = 'TESTS_USE_CLUSTER_ID'
DO_NOT_DELETE_CLUSTER_FLAG = 'TESTS_DO_NOT_DELETE_CLUSTER'
EXTRA_INSTANCE_NAME = "named_instance"
def __init__(self):
super(ClusterRunner, self).__init__()
self.cluster_name = 'test_cluster'
self.cluster_id = 0
self.cluster_inst_ids = None
self.cluster_count_before_create = None
self.srv_grp_id = None
self.current_root_creds = None
self.locality = 'affinity'
self.initial_instance_count = None
self.cluster_instances = None
self.cluster_removed_instances = None
self.active_config_group_id = None
self.config_requires_restart = False
self.initial_group_id = None
self.dynamic_group_id = None
self.non_dynamic_group_id = None
@property
def is_using_existing_cluster(self):
return self.has_env_flag(self.USE_CLUSTER_ID_FLAG)
@property
def has_do_not_delete_cluster(self):
return self.has_env_flag(self.DO_NOT_DELETE_CLUSTER_FLAG)
@property
def min_cluster_node_count(self):
return 2
def run_initial_configuration_create(self, expected_http_code=200):
group_id, requires_restart = self.create_initial_configuration(
expected_http_code)
if group_id:
self.initial_group_id = group_id
self.config_requires_restart = requires_restart
else:
raise SkipTest("No groups defined.")
def run_cluster_create(self, num_nodes=None, expected_task_name='BUILDING',
expected_http_code=200):
self.cluster_count_before_create = len(
self.auth_client.clusters.list())
if not num_nodes:
num_nodes = self.min_cluster_node_count
instance_flavor = self.get_instance_flavor()
        instance_defs = [
            self.build_flavor(
                flavor_id=self.get_flavor_href(instance_flavor),
                volume_size=self.instance_info.volume['size'])
            for _ in range(num_nodes)]
types = self.test_helper.get_cluster_types()
for index, instance_def in enumerate(instance_defs):
instance_def['nics'] = self.instance_info.nics
if types and index < len(types):
instance_def['type'] = types[index]
self.cluster_id = self.assert_cluster_create(
self.cluster_name, instance_defs, self.locality,
self.initial_group_id, expected_task_name, expected_http_code)
def assert_cluster_create(
self, cluster_name, instances_def, locality, configuration,
expected_task_name, expected_http_code):
self.report.log("Testing cluster create: %s" % cluster_name)
client = self.auth_client
cluster = self.get_existing_cluster()
if cluster:
self.report.log("Using an existing cluster: %s" % cluster.id)
else:
cluster = client.clusters.create(
cluster_name, self.instance_info.dbaas_datastore,
self.instance_info.dbaas_datastore_version,
instances=instances_def, locality=locality,
configuration=configuration)
self.assert_client_code(client, expected_http_code)
self.active_config_group_id = configuration
self._assert_cluster_values(cluster, expected_task_name)
for instance in cluster.instances:
self.register_debug_inst_ids(instance['id'])
return cluster.id
def run_cluster_create_wait(self,
expected_instance_states=['BUILD', 'HEALTHY']):
self.assert_cluster_create_wait(
self.cluster_id, expected_instance_states=expected_instance_states)
def assert_cluster_create_wait(
self, cluster_id, expected_instance_states):
client = self.auth_client
cluster_instances = self._get_cluster_instances(client, cluster_id)
self.assert_all_instance_states(
cluster_instances, expected_instance_states)
# Create the helper user/database on the first node.
# The cluster should handle the replication itself.
if not self.get_existing_cluster():
self.create_test_helper_on_instance(cluster_instances[0])
# Although all instances have already acquired the expected state,
# we still need to poll for the final cluster task, because
# it may take up to the periodic task interval until the task name
# gets updated in the Trove database.
self._assert_cluster_states(client, cluster_id, ['NONE'])
# make sure the server_group was created
self.cluster_inst_ids = [inst.id for inst in cluster_instances]
for id in self.cluster_inst_ids:
srv_grp_id = self.assert_server_group_exists(id)
if self.srv_grp_id and self.srv_grp_id != srv_grp_id:
self.fail("Found multiple server groups for cluster")
self.srv_grp_id = srv_grp_id
def get_existing_cluster(self):
if self.is_using_existing_cluster:
cluster_id = os.environ.get(self.USE_CLUSTER_ID_FLAG)
return self.auth_client.clusters.get(cluster_id)
def run_cluster_list(self, expected_http_code=200):
self.assert_cluster_list(
self.cluster_count_before_create + 1,
expected_http_code)
def assert_cluster_list(self, expected_count, expected_http_code):
client = self.auth_client
count = len(client.clusters.list())
self.assert_client_code(client, expected_http_code)
self.assert_equal(expected_count, count, "Unexpected cluster count")
def run_cluster_show(self, expected_http_code=200,
expected_task_name='NONE'):
self.assert_cluster_show(
self.cluster_id, expected_task_name, expected_http_code)
def run_cluster_restart(self, expected_http_code=202,
expected_task_name='RESTARTING_CLUSTER'):
self.assert_cluster_restart(
self.cluster_id, expected_task_name, expected_http_code)
def assert_cluster_restart(
self, cluster_id, expected_task_name, expected_http_code):
client = self.auth_client
client.clusters.restart(cluster_id)
self.assert_client_code(client, expected_http_code)
self._assert_cluster_response(
client, cluster_id, expected_task_name)
def run_cluster_restart_wait(self):
self.assert_cluster_restart_wait(self.cluster_id)
def assert_cluster_restart_wait(self, cluster_id):
client = self.auth_client
cluster_instances = self._get_cluster_instances(
client, cluster_id)
self.assert_all_instance_states(
cluster_instances, ['REBOOT', 'HEALTHY'])
self._assert_cluster_states(
client, cluster_id, ['NONE'])
self._assert_cluster_response(
client, cluster_id, 'NONE')
def assert_cluster_show(self, cluster_id, expected_task_name,
expected_http_code):
self._assert_cluster_response(self.auth_client,
cluster_id, expected_task_name)
def run_cluster_root_enable(self, expected_task_name=None,
expected_http_code=200):
root_credentials = self.test_helper.get_helper_credentials_root()
if not root_credentials or not root_credentials.get('name'):
raise SkipTest("No root credentials provided.")
client = self.auth_client
self.current_root_creds = client.root.create_cluster_root(
self.cluster_id, root_credentials['password'])
self.assert_client_code(client, expected_http_code)
self._assert_cluster_response(
client, self.cluster_id, expected_task_name)
self.assert_equal(root_credentials['name'],
self.current_root_creds[0])
self.assert_equal(root_credentials['password'],
self.current_root_creds[1])
def run_verify_cluster_root_enable(self):
if not self.current_root_creds:
raise SkipTest("Root not enabled.")
cluster = self.auth_client.clusters.get(self.cluster_id)
for instance in cluster.instances:
root_enabled_test = self.auth_client.root.is_instance_root_enabled(
instance['id'])
self.assert_true(root_enabled_test.rootEnabled)
for ipv4 in self.extract_ipv4s(cluster.ip):
self.report.log("Pinging cluster as superuser via node: %s" % ipv4)
ping_response = self.test_helper.ping(
ipv4,
username=self.current_root_creds[0],
password=self.current_root_creds[1])
self.assert_true(ping_response)
def run_add_initial_cluster_data(self, data_type=DataType.tiny):
self.assert_add_cluster_data(data_type, self.cluster_id)
def assert_add_cluster_data(self, data_type, cluster_id):
cluster = self.auth_client.clusters.get(cluster_id)
self.test_helper.add_data(data_type, self.extract_ipv4s(cluster.ip)[0])
def run_verify_initial_cluster_data(self, data_type=DataType.tiny):
self.assert_verify_cluster_data(data_type, self.cluster_id)
def assert_verify_cluster_data(self, data_type, cluster_id):
cluster = self.auth_client.clusters.get(cluster_id)
for ipv4 in self.extract_ipv4s(cluster.ip):
self.report.log("Verifying cluster data via node: %s" % ipv4)
self.test_helper.verify_data(data_type, ipv4)
def run_remove_initial_cluster_data(self, data_type=DataType.tiny):
self.assert_remove_cluster_data(data_type, self.cluster_id)
def assert_remove_cluster_data(self, data_type, cluster_id):
cluster = self.auth_client.clusters.get(cluster_id)
self.test_helper.remove_data(
data_type, self.extract_ipv4s(cluster.ip)[0])
def run_cluster_grow(self, expected_task_name='GROWING_CLUSTER',
expected_http_code=202):
# Add two instances. One with an explicit name.
flavor_href = self.get_flavor_href(self.get_instance_flavor())
added_instance_defs = [
self._build_instance_def(flavor_href,
self.instance_info.volume['size']),
self._build_instance_def(flavor_href,
self.instance_info.volume['size'],
self.EXTRA_INSTANCE_NAME)]
types = self.test_helper.get_cluster_types()
if types and types[0]:
added_instance_defs[0]['type'] = types[0]
self.assert_cluster_grow(
self.cluster_id, added_instance_defs, expected_task_name,
expected_http_code)
def _build_instance_def(self, flavor_id, volume_size, name=None):
instance_def = self.build_flavor(
flavor_id=flavor_id, volume_size=volume_size)
if name:
instance_def.update({'name': name})
instance_def.update({'nics': self.instance_info.nics})
return instance_def
def assert_cluster_grow(self, cluster_id, added_instance_defs,
expected_task_name, expected_http_code):
client = self.auth_client
cluster = client.clusters.get(cluster_id)
initial_instance_count = len(cluster.instances)
cluster = client.clusters.grow(cluster_id, added_instance_defs)
self.assert_client_code(client, expected_http_code)
self._assert_cluster_response(client, cluster_id, expected_task_name)
self.assert_equal(len(added_instance_defs),
len(cluster.instances) - initial_instance_count,
"Unexpected number of added nodes.")
def run_cluster_grow_wait(self):
self.assert_cluster_grow_wait(self.cluster_id)
def assert_cluster_grow_wait(self, cluster_id):
client = self.auth_client
cluster_instances = self._get_cluster_instances(client, cluster_id)
self.assert_all_instance_states(cluster_instances, ['HEALTHY'])
self._assert_cluster_states(client, cluster_id, ['NONE'])
self._assert_cluster_response(client, cluster_id, 'NONE')
def run_add_grow_cluster_data(self, data_type=DataType.tiny2):
self.assert_add_cluster_data(data_type, self.cluster_id)
def run_verify_grow_cluster_data(self, data_type=DataType.tiny2):
self.assert_verify_cluster_data(data_type, self.cluster_id)
def run_remove_grow_cluster_data(self, data_type=DataType.tiny2):
self.assert_remove_cluster_data(data_type, self.cluster_id)
def run_cluster_upgrade(self, expected_task_name='UPGRADING_CLUSTER',
expected_http_code=202):
self.assert_cluster_upgrade(self.cluster_id,
expected_task_name, expected_http_code)
def assert_cluster_upgrade(self, cluster_id,
expected_task_name, expected_http_code):
client = self.auth_client
cluster = client.clusters.get(cluster_id)
self.initial_instance_count = len(cluster.instances)
client.clusters.upgrade(
cluster_id, self.instance_info.dbaas_datastore_version)
self.assert_client_code(client, expected_http_code)
self._assert_cluster_response(client, cluster_id, expected_task_name)
def run_cluster_upgrade_wait(self):
self.assert_cluster_upgrade_wait(
self.cluster_id,
expected_last_instance_states=['HEALTHY']
)
def assert_cluster_upgrade_wait(self, cluster_id,
expected_last_instance_states):
client = self.auth_client
self._assert_cluster_states(client, cluster_id, ['NONE'])
cluster_instances = self._get_cluster_instances(client, cluster_id)
self.assert_equal(
self.initial_instance_count,
len(cluster_instances),
"Unexpected number of instances after upgrade.")
self.assert_all_instance_states(cluster_instances,
expected_last_instance_states)
self._assert_cluster_response(client, cluster_id, 'NONE')
def run_add_upgrade_cluster_data(self, data_type=DataType.tiny3):
self.assert_add_cluster_data(data_type, self.cluster_id)
def run_verify_upgrade_cluster_data(self, data_type=DataType.tiny3):
self.assert_verify_cluster_data(data_type, self.cluster_id)
def run_remove_upgrade_cluster_data(self, data_type=DataType.tiny3):
self.assert_remove_cluster_data(data_type, self.cluster_id)
def run_cluster_shrink(self, expected_task_name='SHRINKING_CLUSTER',
expected_http_code=202):
self.assert_cluster_shrink(self.auth_client,
self.cluster_id, [self.EXTRA_INSTANCE_NAME],
expected_task_name, expected_http_code)
def assert_cluster_shrink(self, client, cluster_id, removed_instance_names,
expected_task_name, expected_http_code):
cluster = client.clusters.get(cluster_id)
self.initial_instance_count = len(cluster.instances)
self.cluster_removed_instances = (
self._find_cluster_instances_by_name(
cluster, removed_instance_names))
client.clusters.shrink(
cluster_id, [{'id': instance.id}
for instance in self.cluster_removed_instances])
self.assert_client_code(client, expected_http_code)
self._assert_cluster_response(client, cluster_id, expected_task_name)
def _find_cluster_instances_by_name(self, cluster, instance_names):
return [self.auth_client.instances.get(instance['id'])
for instance in cluster.instances
if instance['name'] in instance_names]
def run_cluster_shrink_wait(self):
self.assert_cluster_shrink_wait(
self.cluster_id, expected_last_instance_state='SHUTDOWN')
def assert_cluster_shrink_wait(self, cluster_id,
expected_last_instance_state):
client = self.auth_client
self._assert_cluster_states(client, cluster_id, ['NONE'])
cluster = client.clusters.get(cluster_id)
self.assert_equal(
len(self.cluster_removed_instances),
self.initial_instance_count - len(cluster.instances),
"Unexpected number of removed nodes.")
cluster_instances = self._get_cluster_instances(client, cluster_id)
self.assert_all_instance_states(cluster_instances, ['HEALTHY'])
self.assert_all_gone(self.cluster_removed_instances,
expected_last_instance_state)
self._assert_cluster_response(client, cluster_id, 'NONE')
def run_add_shrink_cluster_data(self, data_type=DataType.tiny4):
self.assert_add_cluster_data(data_type, self.cluster_id)
def run_verify_shrink_cluster_data(self, data_type=DataType.tiny4):
self.assert_verify_cluster_data(data_type, self.cluster_id)
def run_remove_shrink_cluster_data(self, data_type=DataType.tiny4):
self.assert_remove_cluster_data(data_type, self.cluster_id)
def run_cluster_delete(
self, expected_task_name='DELETING', expected_http_code=202):
if self.has_do_not_delete_cluster:
self.report.log("TESTS_DO_NOT_DELETE_CLUSTER=True was "
"specified, skipping delete...")
raise SkipTest("TESTS_DO_NOT_DELETE_CLUSTER was specified.")
self.assert_cluster_delete(
self.cluster_id, expected_http_code)
def assert_cluster_delete(self, cluster_id, expected_http_code):
self.report.log("Testing cluster delete: %s" % cluster_id)
client = self.auth_client
self.cluster_instances = self._get_cluster_instances(client,
cluster_id)
client.clusters.delete(cluster_id)
self.assert_client_code(client, expected_http_code)
def _get_cluster_instances(self, client, cluster_id):
cluster = client.clusters.get(cluster_id)
return [client.instances.get(instance['id'])
for instance in cluster.instances]
def run_cluster_delete_wait(
self, expected_task_name='DELETING',
expected_last_instance_state='SHUTDOWN'):
if self.has_do_not_delete_cluster:
self.report.log("TESTS_DO_NOT_DELETE_CLUSTER=True was "
"specified, skipping delete wait...")
raise SkipTest("TESTS_DO_NOT_DELETE_CLUSTER was specified.")
self.assert_cluster_delete_wait(
self.cluster_id, expected_task_name, expected_last_instance_state)
def assert_cluster_delete_wait(
self, cluster_id, expected_task_name,
expected_last_instance_state):
client = self.auth_client
# Since the server_group is removed right at the beginning of the
# cluster delete process we can't check for locality anymore.
self._assert_cluster_response(client, cluster_id, expected_task_name,
check_locality=False)
self.assert_all_gone(self.cluster_instances,
expected_last_instance_state)
self._assert_cluster_gone(client, cluster_id)
# make sure the server group is gone too
self.assert_server_group_gone(self.srv_grp_id)
def _assert_cluster_states(self, client, cluster_id, expected_states,
fast_fail_status=None):
for status in expected_states:
start_time = timer.time()
try:
poll_until(
lambda: self._has_task(
client, cluster_id, status,
fast_fail_status=fast_fail_status),
sleep_time=self.def_sleep_time,
time_out=self.def_timeout)
self.report.log("Cluster has gone '%s' in %s." %
(status, self._time_since(start_time)))
except exception.PollTimeOut:
self.report.log(
"Status of cluster '%s' did not change to '%s' after %s."
% (cluster_id, status, self._time_since(start_time)))
return False
return True
def _has_task(self, client, cluster_id, task, fast_fail_status=None):
cluster = client.clusters.get(cluster_id)
task_name = cluster.task['name']
self.report.log("Waiting for cluster '%s' to become '%s': %s"
% (cluster_id, task, task_name))
if fast_fail_status and task_name == fast_fail_status:
            raise RuntimeError("Cluster '%s' acquired a fast-fail task: %s"
                               % (cluster_id, task_name))
return task_name == task
def _assert_cluster_response(self, client, cluster_id, expected_task_name,
check_locality=True):
cluster = client.clusters.get(cluster_id)
self._assert_cluster_values(cluster, expected_task_name,
check_locality=check_locality)
def _assert_cluster_values(self, cluster, expected_task_name,
check_locality=True):
with TypeCheck('Cluster', cluster) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("datastore", dict)
check.has_field("instances", list)
check.has_field("links", list)
check.has_field("created", str)
check.has_field("updated", str)
if check_locality:
check.has_field("locality", str)
if self.active_config_group_id:
check.has_field("configuration", str)
for instance in cluster.instances:
            self.assert_true(isinstance(instance, dict))
self.assert_is_not_none(instance['id'])
self.assert_is_not_none(instance['links'])
self.assert_is_not_none(instance['name'])
self.assert_equal(expected_task_name, cluster.task['name'],
'Unexpected cluster task name')
if check_locality:
self.assert_equal(self.locality, cluster.locality,
"Unexpected cluster locality")
def _assert_cluster_gone(self, client, cluster_id):
t0 = timer.time()
try:
# This will poll until the cluster goes away.
self._assert_cluster_states(client, cluster_id, ['NONE'])
self.fail(
"Cluster '%s' still existed after %s seconds."
% (cluster_id, self._time_since(t0)))
except exceptions.NotFound:
self.assert_client_code(client, 404)
def restart_after_configuration_change(self):
if self.config_requires_restart:
self.run_cluster_restart()
self.run_cluster_restart_wait()
self.config_requires_restart = False
else:
raise SkipTest("Not required.")
def run_create_dynamic_configuration(self, expected_http_code=200):
values = self.test_helper.get_dynamic_group()
if values:
self.dynamic_group_id = self.assert_create_group(
'dynamic_cluster_test_group',
'a fully dynamic group should not require restart',
values, expected_http_code)
elif values is None:
raise SkipTest("No dynamic group defined in %s." %
self.test_helper.get_class_name())
else:
raise SkipTest("Datastore has no dynamic configuration values.")
def assert_create_group(self, name, description, values,
expected_http_code):
json_def = json.dumps(values)
client = self.auth_client
result = client.configurations.create(
name,
json_def,
description,
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version)
self.assert_client_code(client, expected_http_code)
return result.id
def run_create_non_dynamic_configuration(self, expected_http_code=200):
values = self.test_helper.get_non_dynamic_group()
if values:
self.non_dynamic_group_id = self.assert_create_group(
'non_dynamic_cluster_test_group',
'a group containing non-dynamic properties should always '
'require restart',
values, expected_http_code)
elif values is None:
raise SkipTest("No non-dynamic group defined in %s." %
self.test_helper.get_class_name())
else:
raise SkipTest("Datastore has no non-dynamic configuration "
"values.")
def run_attach_dynamic_configuration(
self, expected_states=['NONE'],
expected_http_code=202):
if self.dynamic_group_id:
self.assert_attach_configuration(
self.cluster_id, self.dynamic_group_id, expected_states,
expected_http_code)
def assert_attach_configuration(
self, cluster_id, group_id, expected_states, expected_http_code,
restart_inst=False):
client = self.auth_client
client.clusters.configuration_attach(cluster_id, group_id)
self.assert_client_code(client, expected_http_code)
self.active_config_group_id = group_id
self._assert_cluster_states(client, cluster_id, expected_states)
self.assert_configuration_group(client, cluster_id, group_id)
if restart_inst:
self.config_requires_restart = True
            cluster_instances = self._get_cluster_instances(client,
                                                            cluster_id)
for node in cluster_instances:
self.assert_equal(
'RESTART_REQUIRED', node.status,
"Node '%s' should be in 'RESTART_REQUIRED' state."
% node.id)
def assert_configuration_group(
self, client, cluster_id, expected_group_id):
cluster = client.clusters.get(cluster_id)
self.assert_equal(
expected_group_id, cluster.configuration,
"Attached group does not have the expected ID.")
        cluster_instances = self._get_cluster_instances(client, cluster_id)
        for node in cluster_instances:
            # Check the group reported by each node, not the cluster object.
            self.assert_equal(
                expected_group_id, node.configuration['id'],
                "Attached group does not have the expected ID on "
                "cluster node: %s" % node.id)
def run_attach_non_dynamic_configuration(
self, expected_states=['NONE'],
expected_http_code=202):
if self.non_dynamic_group_id:
self.assert_attach_configuration(
self.cluster_id, self.non_dynamic_group_id,
expected_states, expected_http_code, restart_inst=True)
def run_verify_initial_configuration(self):
if self.initial_group_id:
self.verify_configuration(self.cluster_id, self.initial_group_id)
    def verify_configuration(self, cluster_id, expected_group_id):
        client = self.auth_client
        self.assert_configuration_group(client, cluster_id,
                                        expected_group_id)
        self.assert_configuration_values(cluster_id, expected_group_id)
def assert_configuration_values(self, cluster_id, group_id):
if group_id == self.initial_group_id:
if not self.config_requires_restart:
expected_configs = self.test_helper.get_dynamic_group()
else:
expected_configs = self.test_helper.get_non_dynamic_group()
if group_id == self.dynamic_group_id:
expected_configs = self.test_helper.get_dynamic_group()
elif group_id == self.non_dynamic_group_id:
expected_configs = self.test_helper.get_non_dynamic_group()
self._assert_configuration_values(cluster_id, expected_configs)
    def _assert_configuration_values(self, cluster_id, expected_configs):
        client = self.auth_client
        cluster_instances = self._get_cluster_instances(client, cluster_id)
for node in cluster_instances:
host = self.get_instance_host(node)
self.report.log(
"Verifying cluster configuration via node: %s" % host)
for name, value in expected_configs.items():
actual = self.test_helper.get_configuration_value(name, host)
self.assert_equal(str(value), str(actual),
"Unexpected value of property '%s'" % name)
def run_verify_dynamic_configuration(self):
if self.dynamic_group_id:
self.verify_configuration(self.cluster_id, self.dynamic_group_id)
def run_verify_non_dynamic_configuration(self):
if self.non_dynamic_group_id:
self.verify_configuration(
self.cluster_id, self.non_dynamic_group_id)
def run_detach_initial_configuration(self, expected_states=['NONE'],
expected_http_code=202):
if self.initial_group_id:
self.assert_detach_configuration(
self.cluster_id, expected_states, expected_http_code,
restart_inst=self.config_requires_restart)
def run_detach_dynamic_configuration(self, expected_states=['NONE'],
expected_http_code=202):
if self.dynamic_group_id:
self.assert_detach_configuration(
self.cluster_id, expected_states, expected_http_code)
def assert_detach_configuration(
self, cluster_id, expected_states, expected_http_code,
restart_inst=False):
client = self.auth_client
client.clusters.configuration_detach(cluster_id)
self.assert_client_code(client, expected_http_code)
self.active_config_group_id = None
self._assert_cluster_states(client, cluster_id, expected_states)
cluster = client.clusters.get(cluster_id)
self.assert_false(
hasattr(cluster, 'configuration'),
"Configuration group was not detached from the cluster.")
cluster_instances = self._get_cluster_instances(client, cluster_id)
for node in cluster_instances:
self.assert_false(
hasattr(node, 'configuration'),
"Configuration group was not detached from cluster node: %s"
% node.id)
if restart_inst:
self.config_requires_restart = True
cluster_instances = self._get_cluster_instances(client, cluster_id)
for node in cluster_instances:
self.assert_equal(
'RESTART_REQUIRED', node.status,
"Node '%s' should be in 'RESTART_REQUIRED' state."
% node.id)
def run_detach_non_dynamic_configuration(
self, expected_states=['NONE'],
expected_http_code=202):
if self.non_dynamic_group_id:
self.assert_detach_configuration(
self.cluster_id, expected_states, expected_http_code,
restart_inst=True)
def run_delete_initial_configuration(self, expected_http_code=202):
if self.initial_group_id:
self.assert_group_delete(self.initial_group_id, expected_http_code)
def assert_group_delete(self, group_id, expected_http_code):
client = self.auth_client
client.configurations.delete(group_id)
self.assert_client_code(client, expected_http_code)
def run_delete_dynamic_configuration(self, expected_http_code=202):
if self.dynamic_group_id:
self.assert_group_delete(self.dynamic_group_id, expected_http_code)
def run_delete_non_dynamic_configuration(self, expected_http_code=202):
if self.non_dynamic_group_id:
self.assert_group_delete(self.non_dynamic_group_id,
expected_http_code)


class CassandraClusterRunner(ClusterRunner):
def run_cluster_root_enable(self):
raise SkipTest("Operation is currently not supported.")


class MariadbClusterRunner(ClusterRunner):
@property
def min_cluster_node_count(self):
return self.get_datastore_config_property('min_cluster_member_count')


class MongodbClusterRunner(ClusterRunner):
@property
def min_cluster_node_count(self):
return 3
def run_cluster_delete(self, expected_task_name='NONE',
expected_http_code=202):
raise SkipKnownBug(runners.BUG_STOP_DB_IN_CLUSTER)


class PxcClusterRunner(ClusterRunner):
@property
def min_cluster_node_count(self):
return self.get_datastore_config_property('min_cluster_member_count')


class RedisClusterRunner(ClusterRunner):
# Since Redis runs all the shrink code in the API server, the call
# will not return until the task name has been set back to 'NONE' so
# we can't check it.
def run_cluster_shrink(self, expected_task_name='NONE',
expected_http_code=202):
return super(RedisClusterRunner, self).run_cluster_shrink(
expected_task_name=expected_task_name,
expected_http_code=expected_http_code)


class VerticaClusterRunner(ClusterRunner):
@property
def min_cluster_node_count(self):
return self.get_datastore_config_property('cluster_member_count')
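The `values` mapping handed to `assert_create_group` above is serialized with `json.dumps` before being sent to the configurations API. A minimal sketch of that payload shape, assuming a flat name-to-value dict (the property name `max_connections` is illustrative; real names come from the datastore test helper):

```python
import json

# A flat name -> value mapping, as the datastore test helper would
# build it. 'max_connections' is an illustrative property name.
values = {'max_connections': 100}

# This is exactly what assert_create_group passes as json_def.
json_def = json.dumps(values)
print(json_def)  # {"max_connections": 100}
```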


@@ -1,580 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from datetime import datetime
import json
from proboscis import SkipTest
from trove.common.utils import generate_uuid
from trove.tests.scenario.runners.test_runners import TestRunner
from trove.tests.util.check import CollectionCheck
from trove.tests.util.check import TypeCheck
from troveclient.compat import exceptions


class ConfigurationRunner(TestRunner):
def __init__(self):
super(ConfigurationRunner, self).__init__(sleep_time=10)
self.dynamic_group_name = 'dynamic_test_group'
self.dynamic_group_id = None
self.dynamic_inst_count = 0
self.non_dynamic_group_name = 'non_dynamic_test_group'
self.non_dynamic_group_id = None
self.non_dynamic_inst_count = 0
self.initial_group_count = 0
self.additional_group_count = 0
self.config_id_for_inst = None
self.config_inst_id = None
def run_create_bad_group(self,
expected_exception=exceptions.UnprocessableEntity,
expected_http_code=422):
bad_group = {'unknown_datastore_key': 'bad_value'}
self.assert_action_on_conf_group_failure(
bad_group, expected_exception, expected_http_code)
def assert_action_on_conf_group_failure(
self, group_values, expected_exception, expected_http_code):
json_def = json.dumps(group_values)
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.configurations.create,
'conf_group',
json_def,
'Group with Bad or Invalid entries',
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version)
def run_create_invalid_groups(
self, expected_exception=exceptions.UnprocessableEntity,
expected_http_code=422):
invalid_groups = self.test_helper.get_invalid_groups()
if invalid_groups:
for invalid_group in invalid_groups:
self.assert_action_on_conf_group_failure(
invalid_group,
expected_exception, expected_http_code)
elif invalid_groups is None:
raise SkipTest("No invalid configuration values defined in %s." %
self.test_helper.get_class_name())
else:
raise SkipTest("Datastore has no invalid configuration values.")
def run_delete_non_existent_group(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_group_delete_failure(
None, expected_exception, expected_http_code)
def run_delete_bad_group_id(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_group_delete_failure(
generate_uuid(), expected_exception, expected_http_code)
def run_attach_non_existent_group(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_instance_modify_failure(
self.instance_info.id, generate_uuid(),
expected_exception, expected_http_code)
def run_attach_non_existent_group_to_non_existent_inst(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_instance_modify_failure(
generate_uuid(), generate_uuid(),
expected_exception, expected_http_code)
def run_detach_group_with_none_attached(self,
expected_states=['HEALTHY'],
expected_http_code=202):
self.assert_instance_modify(
self.instance_info.id, None,
expected_states, expected_http_code)
# run again, just to make sure
self.assert_instance_modify(
self.instance_info.id, None,
expected_states, expected_http_code)
def run_create_dynamic_group(self, expected_http_code=200):
self.initial_group_count = len(self.auth_client.configurations.list())
values = self.test_helper.get_dynamic_group()
if values:
self.dynamic_group_id = self.assert_create_group(
self.dynamic_group_name,
'a fully dynamic group should not require restart',
values, expected_http_code)
self.additional_group_count += 1
elif values is None:
raise SkipTest("No dynamic group defined in %s." %
self.test_helper.get_class_name())
else:
raise SkipTest("Datastore has no dynamic configuration values.")
def assert_create_group(self, name, description, values,
expected_http_code):
json_def = json.dumps(values)
client = self.auth_client
result = client.configurations.create(
name,
json_def,
description,
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version)
self.assert_client_code(client, expected_http_code)
with TypeCheck('Configuration', result) as configuration:
configuration.has_field('name', str)
configuration.has_field('description', str)
configuration.has_field('values', dict)
configuration.has_field('datastore_name', str)
configuration.has_field('datastore_version_id', str)
configuration.has_field('datastore_version_name', str)
self.assert_equal(name, result.name)
self.assert_equal(description, result.description)
self.assert_equal(values, result.values)
return result.id
def run_create_non_dynamic_group(self, expected_http_code=200):
values = self.test_helper.get_non_dynamic_group()
if values:
self.non_dynamic_group_id = self.assert_create_group(
self.non_dynamic_group_name,
'a group containing non-dynamic properties should always '
'require restart',
values, expected_http_code)
self.additional_group_count += 1
elif values is None:
raise SkipTest("No non-dynamic group defined in %s." %
self.test_helper.get_class_name())
else:
raise SkipTest("Datastore has no non-dynamic configuration "
"values.")
def run_attach_dynamic_group_to_non_existent_inst(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
if self.dynamic_group_id:
self.assert_instance_modify_failure(
generate_uuid(), self.dynamic_group_id,
expected_exception, expected_http_code)
def run_attach_non_dynamic_group_to_non_existent_inst(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
if self.non_dynamic_group_id:
self.assert_instance_modify_failure(
generate_uuid(), self.non_dynamic_group_id,
expected_exception, expected_http_code)
def run_list_configuration_groups(self):
configuration_list = self.auth_client.configurations.list()
self.assert_configuration_list(
configuration_list,
self.initial_group_count + self.additional_group_count)
def assert_configuration_list(self, configuration_list, expected_count):
self.assert_equal(expected_count, len(configuration_list),
'Unexpected number of configurations found')
if expected_count:
configuration_names = [conf.name for conf in configuration_list]
if self.dynamic_group_id:
self.assert_true(
self.dynamic_group_name in configuration_names)
if self.non_dynamic_group_id:
self.assert_true(
self.non_dynamic_group_name in configuration_names)
def run_dynamic_configuration_show(self):
if self.dynamic_group_id:
self.assert_configuration_show(self.dynamic_group_id,
self.dynamic_group_name)
else:
raise SkipTest("No dynamic group created.")
def assert_configuration_show(self, config_id, config_name):
result = self.auth_client.configurations.get(config_id)
self.assert_equal(config_id, result.id, "Unexpected config id")
self.assert_equal(config_name, result.name, "Unexpected config name")
# check the result field types
with TypeCheck("configuration", result) as check:
check.has_field("id", str)
check.has_field("name", str)
check.has_field("description", str)
check.has_field("values", dict)
check.has_field("created", str)
check.has_field("updated", str)
check.has_field("instance_count", int)
# check for valid timestamps
self.assert_true(self._is_valid_timestamp(result.created),
'Created timestamp %s is invalid' % result.created)
self.assert_true(self._is_valid_timestamp(result.updated),
'Updated timestamp %s is invalid' % result.updated)
with CollectionCheck("configuration_values", result.values) as check:
# check each item has the correct type according to the rules
for (item_key, item_val) in result.values.items():
print("item_key: %s" % item_key)
print("item_val: %s" % item_val)
param = (
self.auth_client.configuration_parameters.get_parameter(
self.instance_info.dbaas_datastore,
self.instance_info.dbaas_datastore_version,
item_key))
if param.type == 'integer':
check.has_element(item_key, int)
if param.type == 'string':
check.has_element(item_key, str)
if param.type == 'boolean':
check.has_element(item_key, bool)
def _is_valid_timestamp(self, time_string):
try:
datetime.strptime(time_string, "%Y-%m-%dT%H:%M:%S")
except ValueError:
return False
return True
def run_non_dynamic_configuration_show(self):
if self.non_dynamic_group_id:
self.assert_configuration_show(self.non_dynamic_group_id,
self.non_dynamic_group_name)
else:
raise SkipTest("No non-dynamic group created.")
def run_dynamic_conf_get_unauthorized_user(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_conf_get_unauthorized_user(self.dynamic_group_id,
expected_exception,
expected_http_code)
def assert_conf_get_unauthorized_user(
self, config_id, expected_exception=exceptions.NotFound,
expected_http_code=404):
client = self.unauth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.configurations.get, config_id)
    def run_non_dynamic_conf_get_unauthorized_user(
            self, expected_exception=exceptions.NotFound,
            expected_http_code=404):
        self.assert_conf_get_unauthorized_user(self.non_dynamic_group_id,
                                               expected_exception,
                                               expected_http_code)
def run_list_dynamic_inst_conf_groups_before(self):
if self.dynamic_group_id:
self.dynamic_inst_count = len(
self.auth_client.configurations.instances(
self.dynamic_group_id))
def assert_conf_instance_list(self, group_id, expected_count):
conf_instance_list = self.auth_client.configurations.instances(
group_id)
self.assert_equal(expected_count, len(conf_instance_list),
'Unexpected number of configurations found')
if expected_count:
conf_instance_ids = [inst.id for inst in conf_instance_list]
self.assert_true(
self.instance_info.id in conf_instance_ids)
def run_attach_dynamic_group(
self, expected_states=['HEALTHY'], expected_http_code=202):
if self.dynamic_group_id:
self.assert_instance_modify(
self.instance_info.id, self.dynamic_group_id,
expected_states, expected_http_code)
def run_verify_dynamic_values(self):
if self.dynamic_group_id:
self.assert_configuration_values(self.instance_info.id,
self.dynamic_group_id)
def assert_configuration_values(self, instance_id, group_id):
if group_id == self.dynamic_group_id:
expected_configs = self.test_helper.get_dynamic_group()
elif group_id == self.non_dynamic_group_id:
expected_configs = self.test_helper.get_non_dynamic_group()
self._assert_configuration_values(instance_id, expected_configs)
def _assert_configuration_values(self, instance_id, expected_configs):
host = self.get_instance_host(instance_id)
for name, value in expected_configs.items():
actual = self.test_helper.get_configuration_value(name, host)
# Compare floating point numbers as floats to avoid rounding
# and precision issues.
try:
expected_value = float(value)
actual_value = float(actual)
except ValueError:
expected_value = str(value)
actual_value = str(actual)
self.assert_equal(expected_value, actual_value,
"Unexpected value of property '%s'" % name)
def run_list_dynamic_inst_conf_groups_after(self):
if self.dynamic_group_id:
self.assert_conf_instance_list(self.dynamic_group_id,
self.dynamic_inst_count + 1)
def run_attach_dynamic_group_again(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# The exception here should probably be UnprocessableEntity or
# something else other than BadRequest as the request really is
# valid.
if self.dynamic_group_id:
self.assert_instance_modify_failure(
self.instance_info.id, self.dynamic_group_id,
expected_exception, expected_http_code)
def run_delete_attached_dynamic_group(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# The exception here should probably be UnprocessableEntity or
# something else other than BadRequest as the request really is
# valid.
if self.dynamic_group_id:
self.assert_group_delete_failure(
self.dynamic_group_id, expected_exception, expected_http_code)
def run_update_dynamic_group(self, expected_states=['HEALTHY'],
expected_http_code=202):
if self.dynamic_group_id:
values = json.dumps(self.test_helper.get_dynamic_group())
self.assert_update_group(
self.instance_info.id, self.dynamic_group_id, values,
expected_states, expected_http_code)
def assert_update_group(
self, instance_id, group_id, values,
expected_states, expected_http_code, restart_inst=False):
client = self.auth_client
client.configurations.update(group_id, values)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(instance_id, expected_states)
if restart_inst:
self._restart_instance(instance_id)
def run_detach_dynamic_group(
self, expected_states=['HEALTHY'], expected_http_code=202):
if self.dynamic_group_id:
self.assert_instance_modify(
self.instance_info.id, None,
expected_states, expected_http_code)
def run_list_non_dynamic_inst_conf_groups_before(self):
if self.non_dynamic_group_id:
self.non_dynamic_inst_count = len(
self.auth_client.configurations.instances(
self.non_dynamic_group_id))
def run_attach_non_dynamic_group(
self, expected_states=['RESTART_REQUIRED'],
expected_http_code=202):
if self.non_dynamic_group_id:
self.assert_instance_modify(
self.instance_info.id, self.non_dynamic_group_id,
expected_states, expected_http_code, restart_inst=True)
def run_verify_non_dynamic_values(self):
if self.non_dynamic_group_id:
self.assert_configuration_values(self.instance_info.id,
self.non_dynamic_group_id)
def run_list_non_dynamic_inst_conf_groups_after(self):
if self.non_dynamic_group_id:
self.assert_conf_instance_list(self.non_dynamic_group_id,
self.non_dynamic_inst_count + 1)
def run_attach_non_dynamic_group_again(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
if self.non_dynamic_group_id:
self.assert_instance_modify_failure(
self.instance_info.id, self.non_dynamic_group_id,
expected_exception, expected_http_code)
def run_delete_attached_non_dynamic_group(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
if self.non_dynamic_group_id:
self.assert_group_delete_failure(
self.non_dynamic_group_id, expected_exception,
expected_http_code)
def run_update_non_dynamic_group(
self, expected_states=['RESTART_REQUIRED'],
expected_http_code=202):
if self.non_dynamic_group_id:
values = json.dumps(self.test_helper.get_non_dynamic_group())
self.assert_update_group(
self.instance_info.id, self.non_dynamic_group_id, values,
expected_states, expected_http_code, restart_inst=True)
def run_detach_non_dynamic_group(
self, expected_states=['RESTART_REQUIRED'],
expected_http_code=202):
if self.non_dynamic_group_id:
self.assert_instance_modify(
self.instance_info.id, None, expected_states,
expected_http_code, restart_inst=True)
def assert_instance_modify(
self, instance_id, group_id, expected_states, expected_http_code,
restart_inst=False):
client = self.auth_client
params = {}
if group_id:
params['configuration'] = group_id
else:
params['remove_configuration'] = True
client.instances.update(instance_id, **params)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(instance_id, expected_states)
# Verify the group has been attached.
instance = self.get_instance(instance_id)
if group_id:
            group = client.configurations.get(group_id)
self.assert_equal(
group.id, instance.configuration['id'],
"Attached group does not have the expected ID")
self.assert_equal(
group.name, instance.configuration['name'],
"Attached group does not have the expected name")
else:
self.assert_false(
hasattr(instance, 'configuration'),
"The configuration group was not detached from the instance.")
if restart_inst:
self._restart_instance(instance_id)
def assert_instance_modify_failure(
self, instance_id, group_id, expected_exception,
expected_http_code):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.instances.modify,
instance_id, configuration=group_id)
def run_delete_dynamic_group(self, expected_http_code=202):
if self.dynamic_group_id:
self.assert_group_delete(self.dynamic_group_id,
expected_http_code)
def assert_group_delete(self, group_id, expected_http_code):
client = self.auth_client
client.configurations.delete(group_id)
self.assert_client_code(client, expected_http_code)
def run_delete_non_dynamic_group(self, expected_http_code=202):
if self.non_dynamic_group_id:
self.assert_group_delete(self.non_dynamic_group_id,
expected_http_code)
def assert_group_delete_failure(self, group_id, expected_exception,
expected_http_code):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.configurations.delete, group_id)
def _restart_instance(
self, instance_id, expected_states=['REBOOT', 'HEALTHY'],
expected_http_code=202):
client = self.auth_client
client.instances.restart(instance_id)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(instance_id, expected_states)
def run_create_instance_with_conf(self):
self.config_id_for_inst = (
self.dynamic_group_id or self.non_dynamic_group_id)
if self.config_id_for_inst:
self.config_inst_id = self.assert_create_instance_with_conf(
self.config_id_for_inst)
else:
raise SkipTest("No groups (dynamic or non-dynamic) defined in %s."
% self.test_helper.get_class_name())
def assert_create_instance_with_conf(self, config_id):
# test that a new instance will apply the configuration on create
client = self.auth_client
result = client.instances.create(
self.instance_info.name + "_config",
self.instance_info.dbaas_flavor_href,
self.instance_info.volume,
[], [],
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version,
nics=self.instance_info.nics,
availability_zone="nova",
configuration=config_id)
self.assert_client_code(client, 200)
self.assert_equal("BUILD", result.status, 'Unexpected inst status')
self.register_debug_inst_ids(result.id)
return result.id
def run_wait_for_conf_instance(
self, expected_states=['BUILD', 'HEALTHY']):
if self.config_inst_id:
self.assert_instance_action(self.config_inst_id, expected_states)
self.create_test_helper_on_instance(self.config_inst_id)
inst = self.auth_client.instances.get(self.config_inst_id)
self.assert_equal(self.config_id_for_inst,
inst.configuration['id'])
else:
raise SkipTest("No instance created with a configuration group.")
def run_verify_instance_values(self):
if self.config_id_for_inst:
self.assert_configuration_values(self.config_inst_id,
self.config_id_for_inst)
else:
raise SkipTest("No instance created with a configuration group.")
def run_delete_conf_instance(self, expected_http_code=202):
if self.config_inst_id:
self.assert_delete_conf_instance(
self.config_inst_id, expected_http_code)
else:
raise SkipTest("No instance created with a configuration group.")
def assert_delete_conf_instance(self, instance_id, expected_http_code):
client = self.auth_client
client.instances.delete(instance_id)
self.assert_client_code(client, expected_http_code)
def run_wait_for_delete_conf_instance(
self, expected_last_state=['SHUTDOWN']):
if self.config_inst_id:
self.assert_all_gone(self.config_inst_id, expected_last_state)
else:
raise SkipTest("No instance created with a configuration group.")
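Two small checks in the runner above are worth illustrating. `_assert_configuration_values` falls back to float comparison because the guest may render a numeric property differently from how the group defined it, and `_is_valid_timestamp` accepts only the 'T'-separated format. A minimal standalone sketch of both behaviours (the helper below mirrors `_is_valid_timestamp`, it is not the runner's own method):

```python
from datetime import datetime

# Numeric comparison: "0.50" and "0.5" differ as strings but are the
# same value once parsed as floats, which is what the runner relies on.
assert float("0.50") == float("0.5")
assert "0.50" != "0.5"


def is_valid_timestamp(time_string):
    # Same format string as _is_valid_timestamp: the 'T' separator
    # is required, a space-separated timestamp is rejected.
    try:
        datetime.strptime(time_string, "%Y-%m-%dT%H:%M:%S")
    except ValueError:
        return False
    return True


assert is_valid_timestamp("2015-07-01T12:30:45")
assert not is_valid_timestamp("2015-07-01 12:30:45")
```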


@@ -1,228 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from trove.common import exception
from trove.common.utils import poll_until
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.scenario.runners.test_runners import TestRunner
from troveclient.compat import exceptions


class DatabaseActionsRunner(TestRunner):
def __init__(self):
super(DatabaseActionsRunner, self).__init__()
self.db_defs = []
@property
def first_db_def(self):
if self.db_defs:
return self.db_defs[0]
raise SkipTest("No valid database definitions provided.")
@property
def non_existing_db_def(self):
db_def = self.test_helper.get_non_existing_database_definition()
if db_def:
return db_def
raise SkipTest("No valid database definitions provided.")
def run_databases_create(self, expected_http_code=202):
databases = self.test_helper.get_valid_database_definitions()
if databases:
self.db_defs = self.assert_databases_create(
self.instance_info.id, databases, expected_http_code)
else:
raise SkipTest("No valid database definitions provided.")
def assert_databases_create(self, instance_id, serial_databases_def,
expected_http_code):
client = self.auth_client
client.databases.create(instance_id, serial_databases_def)
self.assert_client_code(client, expected_http_code)
self.wait_for_database_create(client,
instance_id, serial_databases_def)
return serial_databases_def
def run_databases_list(self, expected_http_code=200):
self.assert_databases_list(
self.instance_info.id, self.db_defs, expected_http_code)
def assert_databases_list(self, instance_id, expected_database_defs,
expected_http_code, limit=2):
client = self.auth_client
full_list = client.databases.list(instance_id)
self.assert_client_code(client, expected_http_code)
listed_databases = {database.name: database for database in full_list}
self.assert_is_none(full_list.next,
"Unexpected pagination in the list.")
for database_def in expected_database_defs:
database_name = database_def['name']
self.assert_true(
database_name in listed_databases,
"Database not included in the 'database-list' output: %s" %
database_name)
# Check that the system (ignored) databases are not included in the
# output.
system_databases = self.get_system_databases()
self.assert_false(
any(name in listed_databases for name in system_databases),
"System databases should not be included in the 'database-list' "
"output.")
# Test list pagination.
list_page = client.databases.list(instance_id, limit=limit)
self.assert_client_code(client, expected_http_code)
self.assert_true(len(list_page) <= limit)
if len(full_list) > limit:
self.assert_is_not_none(list_page.next, "List page is missing.")
else:
self.assert_is_none(list_page.next, "An extra page in the list.")
marker = list_page.next
self.assert_pagination_match(list_page, full_list, 0, limit)
if marker:
last_database = list_page[-1]
expected_marker = last_database.name
self.assert_equal(expected_marker, marker,
"Pagination marker should be the last element "
"in the page.")
list_page = client.databases.list(
instance_id, marker=marker)
self.assert_client_code(client, expected_http_code)
self.assert_pagination_match(
list_page, full_list, limit, len(full_list))
def run_database_create_with_no_attributes(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_databases_create_failure(
self.instance_info.id, {}, expected_exception, expected_http_code)
def run_database_create_with_blank_name(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_databases_create_failure(
self.instance_info.id, {'name': ''},
expected_exception, expected_http_code)
def run_existing_database_create(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_databases_create_failure(
self.instance_info.id, self.first_db_def,
expected_exception, expected_http_code)
def assert_databases_create_failure(
self, instance_id, serial_databases_def,
expected_exception, expected_http_code):
client = self.auth_client
self.assert_raises(
expected_exception,
expected_http_code,
client, client.databases.create,
instance_id,
serial_databases_def)
def run_system_database_create(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# TODO(pmalik): Actions on system users and databases should probably
# return Forbidden 403 instead. The current error messages are
# confusing (talking about a malformed request).
system_databases = self.get_system_databases()
database_defs = [{'name': name} for name in system_databases]
if system_databases:
self.assert_databases_create_failure(
self.instance_info.id, database_defs,
expected_exception, expected_http_code)
def run_database_delete(self, expected_http_code=202):
for database_def in self.db_defs:
self.assert_database_delete(
self.instance_info.id, database_def['name'],
expected_http_code)
def assert_database_delete(
self,
instance_id,
database_name,
expected_http_code):
client = self.auth_client
client.databases.delete(instance_id, database_name)
self.assert_client_code(client, expected_http_code)
self._wait_for_database_delete(client, instance_id, database_name)
def _wait_for_database_delete(self, client,
instance_id, deleted_database_name):
self.report.log("Waiting for deleted database to disappear from the "
"listing: %s" % deleted_database_name)
def _db_is_gone():
all_dbs = self.get_db_names(client, instance_id)
return deleted_database_name not in all_dbs
try:
poll_until(_db_is_gone, time_out=self.GUEST_CAST_WAIT_TIMEOUT_SEC)
self.report.log("Database is now gone from the instance.")
except exception.PollTimeOut:
self.fail("Database still listed after the poll timeout: %ds" %
self.GUEST_CAST_WAIT_TIMEOUT_SEC)
def run_nonexisting_database_delete(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_database_delete_failure(
self.instance_info.id, self.non_existing_db_def['name'],
expected_exception, expected_http_code)
def run_system_database_delete(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# TODO(pmalik): Actions on system users and databases should probably
# return Forbidden 403 instead. The current error messages are
# confusing (talking about a malformed request).
system_databases = self.get_system_databases()
if system_databases:
for name in system_databases:
self.assert_database_delete_failure(
self.instance_info.id, name,
expected_exception, expected_http_code)
def assert_database_delete_failure(
self, instance_id, database_name,
expected_exception, expected_http_code):
client = self.auth_client
self.assert_raises(expected_exception, expected_http_code,
client, client.databases.delete,
instance_id, database_name)
    def get_system_databases(self):
        return self.get_datastore_config_property('ignore_dbs')


class PostgresqlDatabaseActionsRunner(DatabaseActionsRunner):

    def run_system_database_create(self):
        raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)

    def run_system_database_delete(self):
        raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)
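The `_wait_for_database_delete` helper above drives `poll_until` from Trove's common utilities. A minimal self-contained sketch of that polling pattern (an illustration under assumed names, not the actual Trove implementation) could look like:

```python
import time


class PollTimeOut(Exception):
    """Raised when the condition does not come true within time_out."""


def poll_until(retriever, sleep_time=1, time_out=60):
    """Call retriever() every sleep_time seconds until it returns a
    truthy value; raise PollTimeOut once time_out seconds have elapsed.
    """
    deadline = time.monotonic() + time_out
    while True:
        if retriever():
            return
        if time.monotonic() >= deadline:
            raise PollTimeOut()
        time.sleep(sleep_time)
```

A predicate such as `_db_is_gone` above would be passed as `retriever`, so the test blocks until the database disappears from the listing or the timeout fires.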

View File

@ -1,813 +0,0 @@
# Copyright 2015 Tesora Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tempfile

from swiftclient.client import ClientException
from troveclient.compat import exceptions

from trove.common import cfg
from trove.guestagent.common import operating_system
from trove.guestagent import guest_log
from trove.tests.config import CONFIG
from trove.tests.scenario.helpers.test_helper import DataType
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.scenario.runners.test_runners import TestRunner

CONF = cfg.CONF


class GuestLogRunner(TestRunner):
def __init__(self):
super(GuestLogRunner, self).__init__()
self.container = CONF.guest_log_container_name
self.prefix_pattern = '%(instance_id)s/%(datastore)s-%(log)s/'
self.stopped_log_details = None
self._last_log_published = {}
self._last_log_contents = {}
def _get_last_log_published(self, log_name):
return self._last_log_published.get(log_name, None)
def _set_last_log_published(self, log_name, published):
self._last_log_published[log_name] = published
def _get_last_log_contents(self, log_name):
return self._last_log_contents.get(log_name, [])
    def _set_last_log_contents(self, log_name, contents):
        self._last_log_contents[log_name] = contents
def _get_exposed_user_log_names(self):
"""Returns the full list of exposed user logs."""
return self.test_helper.get_exposed_user_log_names()
def _get_exposed_user_log_name(self):
"""Return the first exposed user log name."""
return self.test_helper.get_exposed_user_log_names()[0]
def _get_unexposed_sys_log_name(self):
"""Return the first unexposed sys log name."""
return self.test_helper.get_unexposed_sys_log_names()[0]
def run_test_log_list(self):
self.assert_log_list(self.auth_client,
self.test_helper.get_exposed_log_list())
def assert_log_list(self, client, expected_list):
log_list = list(client.instances.log_list(self.instance_info.id))
log_names = list(ll.name for ll in log_list)
self.assert_list_elements_equal(expected_list, log_names)
self.register_debug_inst_ids(self.instance_info.id)
def run_test_admin_log_list(self):
self.assert_log_list(self.admin_client,
self.test_helper.get_full_log_list())
def run_test_log_show(self):
log_pending = self._set_zero_or_none()
log_name = self._get_exposed_user_log_name()
self.assert_log_show(self.auth_client,
log_name,
expected_published=0,
expected_pending=log_pending)
    def _set_zero_or_none(self):
        """Handle the case where an existing instance is reused: counters
        that would normally be '0' may not be, and must be ignored
        (by passing None).
        """
value = 0
if self.is_using_existing_instance:
value = None
return value
def assert_log_show(self, client, log_name,
expected_http_code=200,
expected_type=guest_log.LogType.USER.name,
expected_status=guest_log.LogStatus.Disabled.name,
expected_published=None, expected_pending=None,
is_admin=False):
self.report.log("Executing log_show for log '%s'" % log_name)
log_details = client.instances.log_show(
self.instance_info.id, log_name)
self.assert_client_code(client, expected_http_code)
self.assert_log_details(
log_details, log_name,
expected_type=expected_type,
expected_status=expected_status,
expected_published=expected_published,
expected_pending=expected_pending,
is_admin=is_admin)
def assert_log_details(self, log_details, expected_log_name,
expected_type=guest_log.LogType.USER.name,
expected_status=guest_log.LogStatus.Disabled.name,
expected_published=None, expected_pending=None,
is_admin=False):
"""Check that the action generates the proper response data.
For log_published and log_pending, setting the value to 'None'
will skip that check (useful when using an existing instance,
as there may be pending things in user logs right from the get-go)
and setting it to a value other than '0' will verify that the actual
value is '>=value' (since it's impossible to know what the actual
value will be at any given time). '0' will still match exclusively.
"""
self.report.log("Validating log details for log '%s'" %
expected_log_name)
self._set_last_log_published(expected_log_name, log_details.published)
self.assert_equal(expected_log_name, log_details.name,
"Wrong log name for '%s' log" % expected_log_name)
self.assert_equal(expected_type, log_details.type,
"Wrong log type for '%s' log" % expected_log_name)
current_status = log_details.status.replace(' ', '_')
if not isinstance(expected_status, list):
expected_status = [expected_status]
self.assert_is_sublist([current_status], expected_status,
"Wrong log status for '%s' log" %
expected_log_name)
if expected_published is None:
pass
elif expected_published == 0:
self.assert_equal(0, log_details.published,
"Wrong log published for '%s' log" %
expected_log_name)
else:
self.assert_true(log_details.published >= expected_published,
"Missing log published for '%s' log: "
"expected %d, got %d" %
(expected_log_name, expected_published,
log_details.published))
if expected_pending is None:
pass
elif expected_pending == 0:
self.assert_equal(0, log_details.pending,
"Wrong log pending for '%s' log" %
expected_log_name)
else:
self.assert_true(log_details.pending >= expected_pending,
"Missing log pending for '%s' log: "
"expected %d, got %d" %
(expected_log_name, expected_pending,
log_details.pending))
container = self.container
prefix = self.prefix_pattern % {
'instance_id': self.instance_info.id,
'datastore': CONFIG.dbaas_datastore,
'log': expected_log_name}
metafile = prefix.rstrip('/') + '_metafile'
if expected_published == 0:
self.assert_storage_gone(container, prefix, metafile,
is_admin=is_admin)
container = 'None'
prefix = 'None'
else:
self.assert_storage_exists(container, prefix, metafile,
is_admin=is_admin)
self.assert_equal(container, log_details.container,
"Wrong log container for '%s' log" %
expected_log_name)
self.assert_equal(prefix, log_details.prefix,
"Wrong log prefix for '%s' log" % expected_log_name)
self.assert_equal(metafile, log_details.metafile,
"Wrong log metafile for '%s' log" %
expected_log_name)
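The container/prefix/metafile naming that `assert_log_details` checks can be factored into a small helper. This is a sketch mirroring `GuestLogRunner.prefix_pattern`; the default container name is an assumption (the real value comes from `CONF.guest_log_container_name`):

```python
def swift_log_locations(instance_id, datastore, log_name,
                        container='database_logs'):
    """Build the Swift container, object prefix, and metafile names
    that published guest logs are expected to use. The prefix pattern
    mirrors GuestLogRunner.prefix_pattern; the default container name
    is an assumption for illustration only.
    """
    prefix = '%(instance_id)s/%(datastore)s-%(log)s/' % {
        'instance_id': instance_id,
        'datastore': datastore,
        'log': log_name}
    metafile = prefix.rstrip('/') + '_metafile'
    return container, prefix, metafile
```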
def assert_log_enable(self, client, log_name,
expected_http_code=200,
expected_type=guest_log.LogType.USER.name,
expected_status=guest_log.LogStatus.Disabled.name,
expected_published=None, expected_pending=None):
self.report.log("Executing log_enable for log '%s'" % log_name)
log_details = client.instances.log_action(
self.instance_info.id, log_name, enable=True)
self.assert_client_code(client, expected_http_code)
self.assert_log_details(
log_details, log_name,
expected_type=expected_type,
expected_status=expected_status,
expected_published=expected_published,
expected_pending=expected_pending)
def assert_log_disable(self, client, log_name, discard=None,
expected_http_code=200,
expected_type=guest_log.LogType.USER.name,
expected_status=guest_log.LogStatus.Disabled.name,
expected_published=None, expected_pending=None):
self.report.log("Executing log_disable for log '%s' (discard: %s)" %
(log_name, discard))
log_details = client.instances.log_action(
self.instance_info.id, log_name, disable=True, discard=discard)
self.assert_client_code(client, expected_http_code)
self.assert_log_details(
log_details, log_name,
expected_type=expected_type,
expected_status=expected_status,
expected_published=expected_published,
expected_pending=expected_pending)
def assert_log_publish(self, client, log_name, disable=None, discard=None,
expected_http_code=200,
expected_type=guest_log.LogType.USER.name,
expected_status=guest_log.LogStatus.Disabled.name,
expected_published=None, expected_pending=None,
is_admin=False):
self.report.log("Executing log_publish for log '%s' (disable: %s "
"discard: %s)" %
(log_name, disable, discard))
log_details = client.instances.log_action(
self.instance_info.id, log_name, publish=True, disable=disable,
discard=discard)
self.assert_client_code(client, expected_http_code)
self.assert_log_details(
log_details, log_name,
expected_type=expected_type,
expected_status=expected_status,
expected_published=expected_published,
expected_pending=expected_pending,
is_admin=is_admin)
def assert_log_discard(self, client, log_name,
expected_http_code=200,
expected_type=guest_log.LogType.USER.name,
expected_status=guest_log.LogStatus.Disabled.name,
expected_published=None, expected_pending=None):
self.report.log("Executing log_discard for log '%s'" % log_name)
log_details = client.instances.log_action(
self.instance_info.id, log_name, discard=True)
self.assert_client_code(client, expected_http_code)
self.assert_log_details(
log_details, log_name,
expected_type=expected_type,
expected_status=expected_status,
expected_published=expected_published,
expected_pending=expected_pending)
def assert_storage_gone(self, container, prefix, metafile, is_admin=False):
if is_admin:
swift_client = self.admin_swift_client
else:
swift_client = self.swift_client
try:
headers, container_files = swift_client.get_container(
container, prefix=prefix)
self.assert_equal(0, len(container_files),
"Found files in %s/%s: %s" %
(container, prefix, container_files))
except ClientException as ex:
if ex.http_status == 404:
self.report.log("Container '%s' does not exist" %
container)
else:
raise
try:
swift_client.get_object(container, metafile)
self.fail("Found metafile after discard: %s" % metafile)
except ClientException as ex:
if ex.http_status == 404:
self.report.log("Metafile '%s' gone as expected" %
metafile)
else:
raise
def assert_storage_exists(self, container, prefix, metafile,
is_admin=False):
if is_admin:
swift_client = self.admin_swift_client
else:
swift_client = self.swift_client
try:
headers, container_files = swift_client.get_container(
container, prefix=prefix)
self.assert_true(len(container_files) > 0,
"No files found in %s/%s" %
(container, prefix))
except ClientException as ex:
if ex.http_status == 404:
self.fail("Container '%s' does not exist" % container)
else:
raise
try:
swift_client.get_object(container, metafile)
except ClientException as ex:
if ex.http_status == 404:
self.fail("Missing metafile: %s" % metafile)
else:
raise
def run_test_log_enable_sys(self,
expected_exception=exceptions.BadRequest,
expected_http_code=400):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_enable_fails(
self.admin_client,
expected_exception, expected_http_code,
log_name)
def assert_log_enable_fails(self, client,
expected_exception, expected_http_code,
log_name):
self.assert_raises(expected_exception, expected_http_code,
client, client.instances.log_action,
self.instance_info.id, log_name, enable=True)
def run_test_log_disable_sys(self,
expected_exception=exceptions.BadRequest,
expected_http_code=400):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_disable_fails(
self.admin_client,
expected_exception, expected_http_code,
log_name)
def assert_log_disable_fails(self, client,
expected_exception, expected_http_code,
log_name, discard=None):
self.assert_raises(expected_exception, expected_http_code,
client, client.instances.log_action,
self.instance_info.id, log_name, disable=True,
discard=discard)
def run_test_log_show_unauth_user(self,
expected_exception=exceptions.NotFound,
expected_http_code=404):
log_name = self._get_exposed_user_log_name()
self.assert_log_show_fails(
self.unauth_client,
expected_exception, expected_http_code,
log_name)
def assert_log_show_fails(self, client,
expected_exception, expected_http_code,
log_name):
self.assert_raises(expected_exception, expected_http_code,
client, client.instances.log_show,
self.instance_info.id, log_name)
def run_test_log_list_unauth_user(self,
expected_exception=exceptions.NotFound,
expected_http_code=404):
client = self.unauth_client
self.assert_raises(expected_exception, expected_http_code,
client, client.instances.log_list,
self.instance_info.id)
def run_test_log_generator_unauth_user(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
log_name = self._get_exposed_user_log_name()
self.assert_log_generator_unauth_user(
self.unauth_client, log_name,
expected_exception, expected_http_code)
def assert_log_generator_unauth_user(self, client, log_name,
expected_exception,
expected_http_code,
publish=None):
raise SkipKnownBug(runners.BUG_UNAUTH_TEST_WRONG)
# self.assert_raises(expected_exception, expected_http_code,
# client, client.instances.log_generator,
# self.instance_info.id, log_name, publish=publish)
def run_test_log_generator_publish_unauth_user(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
log_name = self._get_exposed_user_log_name()
self.assert_log_generator_unauth_user(
self.unauth_client, log_name,
expected_exception, expected_http_code,
publish=True)
def run_test_log_show_unexposed_user(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_show_fails(
self.auth_client,
expected_exception, expected_http_code,
log_name)
def run_test_log_enable_unexposed_user(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_enable_fails(
self.auth_client,
expected_exception, expected_http_code,
log_name)
def run_test_log_disable_unexposed_user(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_disable_fails(
self.auth_client,
expected_exception, expected_http_code,
log_name)
def run_test_log_publish_unexposed_user(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_publish_fails(
self.auth_client,
expected_exception, expected_http_code,
log_name)
def assert_log_publish_fails(self, client,
expected_exception, expected_http_code,
log_name,
disable=None, discard=None):
self.assert_raises(expected_exception, expected_http_code,
client, client.instances.log_action,
self.instance_info.id, log_name, publish=True,
disable=disable, discard=discard)
def run_test_log_discard_unexposed_user(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_discard_fails(
self.auth_client,
expected_exception, expected_http_code,
log_name)
def assert_log_discard_fails(self, client,
expected_exception, expected_http_code,
log_name):
self.assert_raises(expected_exception, expected_http_code,
client, client.instances.log_action,
self.instance_info.id, log_name, discard=True)
def run_test_log_enable_user(self):
expected_status = guest_log.LogStatus.Ready.name
expected_pending = 1
if self.test_helper.log_enable_requires_restart():
expected_status = guest_log.LogStatus.Restart_Required.name
# if using an existing instance, there may already be something
expected_pending = self._set_zero_or_none()
for log_name in self._get_exposed_user_log_names():
self.assert_log_enable(
self.auth_client,
log_name,
expected_status=expected_status,
expected_published=0, expected_pending=expected_pending)
def run_test_log_enable_flip_user(self):
# for restart required datastores, test that flipping them
# back to disabled returns the status to 'Disabled'
# from 'Restart_Required'
if self.test_helper.log_enable_requires_restart():
# if using an existing instance, there may already be something
expected_pending = self._set_zero_or_none()
for log_name in self._get_exposed_user_log_names():
self.assert_log_disable(
self.auth_client,
log_name,
expected_status=guest_log.LogStatus.Disabled.name,
expected_published=0, expected_pending=expected_pending)
self.assert_log_enable(
self.auth_client,
log_name,
expected_status=guest_log.LogStatus.Restart_Required.name,
expected_published=0, expected_pending=expected_pending)
def run_test_restart_datastore(self, expected_http_code=202):
if self.test_helper.log_enable_requires_restart():
instance_id = self.instance_info.id
# we need to wait until the heartbeat flips the instance
# back into 'ACTIVE' before we issue the restart command
expected_states = ['RESTART_REQUIRED', 'HEALTHY']
self.assert_instance_action(instance_id, expected_states)
client = self.auth_client
client.instances.restart(instance_id)
self.assert_client_code(client, expected_http_code)
def run_test_wait_for_restart(self, expected_states=['REBOOT', 'HEALTHY']):
if self.test_helper.log_enable_requires_restart():
self.assert_instance_action(self.instance_info.id, expected_states)
def run_test_log_publish_user(self):
for log_name in self._get_exposed_user_log_names():
self.assert_log_publish(
self.auth_client,
log_name,
expected_status=[guest_log.LogStatus.Published.name,
guest_log.LogStatus.Partial.name],
expected_published=1, expected_pending=None)
def run_test_add_data(self):
self.test_helper.add_data(DataType.micro, self.get_instance_host())
def run_test_verify_data(self):
self.test_helper.verify_data(DataType.micro, self.get_instance_host())
def run_test_log_publish_again_user(self):
for log_name in self._get_exposed_user_log_names():
self.assert_log_publish(
self.auth_client,
log_name,
expected_status=[guest_log.LogStatus.Published.name,
guest_log.LogStatus.Partial.name],
expected_published=self._get_last_log_published(log_name),
expected_pending=None)
def run_test_log_generator_user(self):
for log_name in self._get_exposed_user_log_names():
self.assert_log_generator(
self.auth_client,
log_name,
lines=2, expected_lines=2)
def assert_log_generator(self, client, log_name, publish=False,
lines=4, expected_lines=None,
swift_client=None):
self.report.log("Executing log_generator for log '%s' (publish: %s)" %
(log_name, publish))
if publish:
client.instances.log_action(self.instance_info.id, log_name,
publish=True)
log_gen = client.instances.log_generator(
self.instance_info.id, log_name,
lines=lines, swift=swift_client)
log_contents = "".join([chunk for chunk in log_gen()])
self.report.log("Returned %d lines for log '%s': %s" % (
len(log_contents.splitlines()), log_name, log_contents))
self._set_last_log_contents(log_name, log_contents)
if expected_lines:
self.assert_equal(expected_lines,
len(log_contents.splitlines()),
"Wrong line count for '%s' log" % log_name)
else:
self.assert_true(len(log_contents.splitlines()) <= lines,
"More than %d lines found for '%s' log" %
(lines, log_name))
def run_test_log_generator_publish_user(self):
for log_name in self._get_exposed_user_log_names():
self.assert_log_generator(
self.auth_client,
log_name, publish=True,
lines=3, expected_lines=3)
def run_test_log_generator_swift_client_user(self):
swift_client = self.swift_client
for log_name in self._get_exposed_user_log_names():
self.assert_log_generator(
self.auth_client,
log_name, publish=True,
lines=3, expected_lines=3,
swift_client=swift_client)
def run_test_add_data_again(self):
# Add some more data so we have at least 3 log data files
self.test_helper.add_data(DataType.micro2, self.get_instance_host())
def run_test_verify_data_again(self):
self.test_helper.verify_data(DataType.micro2, self.get_instance_host())
def run_test_log_generator_user_by_row(self):
log_name = self._get_exposed_user_log_name()
self.assert_log_publish(
self.auth_client,
log_name,
expected_status=[guest_log.LogStatus.Published.name,
guest_log.LogStatus.Partial.name],
expected_published=self._get_last_log_published(log_name),
expected_pending=None)
# Now get the full contents of the log
self.assert_log_generator(self.auth_client, log_name, lines=100000)
log_lines = len(self._get_last_log_contents(log_name).splitlines())
# cap at 100, so the test can't run away if something goes wrong
log_lines = min(log_lines, 100)
# Make sure we get the right number of log lines back each time
for lines in range(1, log_lines):
self.assert_log_generator(
self.auth_client,
log_name, lines=lines, expected_lines=lines)
def run_test_log_save_user(self):
for log_name in self._get_exposed_user_log_names():
self.assert_test_log_save(self.auth_client, log_name)
def run_test_log_save_publish_user(self):
for log_name in self._get_exposed_user_log_names():
self.assert_test_log_save(self.auth_client, log_name, publish=True)
def assert_test_log_save(self, client, log_name, publish=False):
# generate the file
self.report.log("Executing log_save for log '%s' (publish: %s)" %
(log_name, publish))
if publish:
client.instances.log_action(self.instance_info.id,
log_name=log_name,
publish=True)
with tempfile.NamedTemporaryFile() as temp_file:
client.instances.log_save(self.instance_info.id,
log_name=log_name,
filename=temp_file.name)
file_contents = operating_system.read_file(temp_file.name)
# now grab the contents ourselves
self.assert_log_generator(client, log_name, lines=100000)
# and compare them
self.assert_equal(self._get_last_log_contents(log_name),
file_contents)
def run_test_log_discard_user(self):
for log_name in self._get_exposed_user_log_names():
self.assert_log_discard(
self.auth_client,
log_name,
expected_status=guest_log.LogStatus.Ready.name,
expected_published=0, expected_pending=1)
def run_test_log_disable_user(self):
expected_status = guest_log.LogStatus.Disabled.name
if self.test_helper.log_enable_requires_restart():
expected_status = guest_log.LogStatus.Restart_Required.name
for log_name in self._get_exposed_user_log_names():
self.assert_log_disable(
self.auth_client,
log_name,
expected_status=expected_status,
expected_published=0, expected_pending=1)
def run_test_log_show_after_stop_details(self):
log_name = self._get_exposed_user_log_name()
self.stopped_log_details = self.auth_client.instances.log_show(
self.instance_info.id, log_name)
self.assert_is_not_none(self.stopped_log_details)
def run_test_add_data_again_after_stop(self):
# Add some more data to make sure logging has stopped
self.test_helper.add_data(DataType.micro3, self.get_instance_host())
def run_test_verify_data_again_after_stop(self):
self.test_helper.verify_data(DataType.micro3, self.get_instance_host())
def run_test_log_show_after_stop(self):
log_name = self._get_exposed_user_log_name()
self.assert_log_show(
self.auth_client, log_name,
expected_published=self.stopped_log_details.published,
expected_pending=self.stopped_log_details.pending)
def run_test_log_enable_user_after_stop(self):
expected_status = guest_log.LogStatus.Ready.name
expected_pending = 1
if self.test_helper.log_enable_requires_restart():
expected_status = guest_log.LogStatus.Restart_Required.name
log_name = self._get_exposed_user_log_name()
self.assert_log_enable(
self.auth_client,
log_name,
expected_status=expected_status,
expected_published=0, expected_pending=expected_pending)
def run_test_add_data_again_after_stop_start(self):
# Add some more data to make sure logging has started again
self.test_helper.add_data(DataType.micro4, self.get_instance_host())
def run_test_verify_data_again_after_stop_start(self):
self.test_helper.verify_data(DataType.micro4, self.get_instance_host())
def run_test_log_publish_after_stop_start(self):
log_name = self._get_exposed_user_log_name()
self.assert_log_publish(
self.auth_client,
log_name,
expected_status=[guest_log.LogStatus.Published.name,
guest_log.LogStatus.Partial.name],
expected_published=self._get_last_log_published(log_name) + 1,
expected_pending=None)
def run_test_log_disable_user_after_stop_start(self):
expected_status = guest_log.LogStatus.Disabled.name
if self.test_helper.log_enable_requires_restart():
expected_status = guest_log.LogStatus.Restart_Required.name
log_name = self._get_exposed_user_log_name()
self.assert_log_disable(
self.auth_client,
log_name, discard=True,
expected_status=expected_status,
expected_published=0, expected_pending=1)
def run_test_log_show_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_show(
self.admin_client,
log_name,
expected_type=guest_log.LogType.SYS.name,
expected_status=[guest_log.LogStatus.Ready.name,
guest_log.LogStatus.Partial.name],
expected_published=0, expected_pending=1,
is_admin=True
)
def run_test_log_publish_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_publish(
self.admin_client,
log_name,
expected_type=guest_log.LogType.SYS.name,
expected_status=[guest_log.LogStatus.Partial.name,
guest_log.LogStatus.Published.name],
expected_published=1, expected_pending=None,
is_admin=True)
def run_test_log_publish_again_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_publish(
self.admin_client,
log_name,
expected_type=guest_log.LogType.SYS.name,
expected_status=[guest_log.LogStatus.Partial.name,
guest_log.LogStatus.Published.name],
expected_published=self._get_last_log_published(log_name) + 1,
expected_pending=None,
is_admin=True)
def run_test_log_generator_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_generator(
self.admin_client,
log_name,
lines=4, expected_lines=4)
def run_test_log_generator_publish_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_generator(
self.admin_client,
log_name, publish=True,
lines=4, expected_lines=4)
def run_test_log_generator_swift_client_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_generator(
self.admin_client,
log_name, publish=True,
lines=4, expected_lines=4,
swift_client=self.admin_swift_client)
def run_test_log_save_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_test_log_save(
self.admin_client,
log_name)
def run_test_log_save_publish_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_test_log_save(
self.admin_client,
log_name,
publish=True)
def run_test_log_discard_sys(self):
log_name = self._get_unexposed_sys_log_name()
self.assert_log_discard(
self.admin_client,
log_name,
expected_type=guest_log.LogType.SYS.name,
expected_status=guest_log.LogStatus.Ready.name,
expected_published=0, expected_pending=1)
class CassandraGuestLogRunner(GuestLogRunner):

    def run_test_log_show(self):
        log_name = self._get_exposed_user_log_name()
        self.assert_log_show(self.auth_client,
                             log_name,
                             expected_published=0,
                             expected_pending=None)
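The published/pending comparison rules that `assert_log_details` applies throughout this runner can be summarized as a small predicate (a sketch for illustration, not part of the original file):

```python
def counter_matches(actual, expected):
    """Apply the published/pending comparison rules used by
    assert_log_details: None skips the check, 0 must match exactly,
    and any other value is treated as a lower bound.
    """
    if expected is None:
        return True
    if expected == 0:
        return actual == 0
    return actual >= expected
```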

View File

@ -1,115 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest

from trove.tests.config import CONFIG
from trove.tests.scenario.helpers.test_helper import DataType
from trove.tests.scenario.runners.test_runners import TestRunner


class InstanceActionsRunner(TestRunner):
def __init__(self):
super(InstanceActionsRunner, self).__init__()
self.resize_flavor_id = self._get_resize_flavor().id
def _get_resize_flavor(self):
if self.EPHEMERAL_SUPPORT:
flavor_name = CONFIG.values.get(
'instance_bigger_eph_flavor_name', 'eph.rd-smaller')
else:
flavor_name = CONFIG.values.get(
'instance_bigger_flavor_name', 'm1.rd-smaller')
return self.get_flavor(flavor_name)
def run_add_test_data(self):
host = self.get_instance_host(self.instance_info.id)
self.test_helper.add_data(DataType.small, host)
def run_verify_test_data(self):
host = self.get_instance_host(self.instance_info.id)
self.test_helper.verify_data(DataType.small, host)
def run_remove_test_data(self):
host = self.get_instance_host(self.instance_info.id)
self.test_helper.remove_data(DataType.small, host)
def run_instance_restart(
self, expected_states=['REBOOT', 'HEALTHY'],
expected_http_code=202):
self.assert_instance_restart(self.instance_info.id, expected_states,
expected_http_code)
def assert_instance_restart(self, instance_id, expected_states,
expected_http_code):
self.report.log("Testing restart on instance: %s" % instance_id)
client = self.auth_client
client.instances.restart(instance_id)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(instance_id, expected_states)
def run_instance_resize_volume(
self, resize_amount=1,
expected_states=['RESIZE', 'HEALTHY'],
expected_http_code=202):
if self.VOLUME_SUPPORT:
self.assert_instance_resize_volume(
self.instance_info.id, resize_amount, expected_states,
expected_http_code)
else:
raise SkipTest("Volume support is disabled.")
def assert_instance_resize_volume(self, instance_id, resize_amount,
expected_states, expected_http_code):
self.report.log("Testing volume resize by '%d' on instance: %s"
% (resize_amount, instance_id))
instance = self.get_instance(instance_id)
old_volume_size = int(instance.volume['size'])
new_volume_size = old_volume_size + resize_amount
client = self.auth_client
client.instances.resize_volume(instance_id, new_volume_size)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(instance_id, expected_states)
instance = self.get_instance(instance_id)
self.assert_equal(new_volume_size, instance.volume['size'],
'Unexpected new volume size')
def run_instance_resize_flavor(self, expected_http_code=202):
self.assert_instance_resize_flavor(
self.instance_info.id, self.resize_flavor_id, expected_http_code)
def assert_instance_resize_flavor(self, instance_id, resize_flavor_id,
expected_http_code):
self.report.log("Testing resize to '%s' on instance: %s" %
(resize_flavor_id, instance_id))
client = self.auth_client
client.instances.resize_instance(instance_id, resize_flavor_id)
self.assert_client_code(client, expected_http_code)
def run_wait_for_instance_resize_flavor(
self, expected_states=['RESIZE', 'HEALTHY']):
self.report.log("Waiting for resize to '%s' on instance: %s" %
(self.resize_flavor_id, self.instance_info.id))
self._assert_instance_states(self.instance_info.id, expected_states)
instance = self.get_instance(self.instance_info.id)
self.assert_equal(self.resize_flavor_id, instance.flavor['id'],
'Unexpected resize flavor_id')

View File

@ -1,339 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from trove.tests.config import CONFIG
from trove.tests.scenario.helpers.test_helper import DataType
from trove.tests.scenario.runners.test_runners import CheckInstance
from trove.tests.scenario.runners.test_runners import InstanceTestInfo
from trove.tests.scenario.runners.test_runners import TestRunner
class InstanceCreateRunner(TestRunner):
def __init__(self):
super(InstanceCreateRunner, self).__init__()
self.init_inst_info = None
self.init_inst_dbs = None
self.init_inst_users = None
self.init_inst_host = None
self.init_inst_data = None
self.init_inst_config_group_id = None
self.config_group_id = None
def run_empty_instance_create(self, expected_states=['BUILD', 'HEALTHY'],
expected_http_code=200):
name = self.instance_info.name
flavor = self.get_instance_flavor()
volume_size = self.instance_info.volume_size
instance_info = self.assert_instance_create(
name, flavor, volume_size, [], [], None, None,
CONFIG.dbaas_datastore, CONFIG.dbaas_datastore_version,
expected_states, expected_http_code, create_helper_user=True,
locality='affinity')
# Update the shared instance info.
self.instance_info.id = instance_info.id
self.instance_info.name = instance_info.name
self.instance_info.databases = instance_info.databases
self.instance_info.users = instance_info.users
self.instance_info.dbaas_datastore = instance_info.dbaas_datastore
self.instance_info.dbaas_datastore_version = (
instance_info.dbaas_datastore_version)
self.instance_info.dbaas_flavor_href = instance_info.dbaas_flavor_href
self.instance_info.volume = instance_info.volume
self.instance_info.helper_user = instance_info.helper_user
self.instance_info.helper_database = instance_info.helper_database
def run_initial_configuration_create(self, expected_http_code=200):
group_id, _ = self.create_initial_configuration(expected_http_code)
if group_id:
self.config_group_id = group_id
else:
raise SkipTest("No groups defined.")
def run_initialized_instance_create(
self, with_dbs=True, with_users=True, configuration_id=None,
expected_states=['BUILD', 'HEALTHY'], expected_http_code=200,
create_helper_user=True, name_suffix='_init'):
if self.is_using_existing_instance:
# The user requested to run the tests using an existing instance.
# We therefore skip any scenarios that involve creating new
# test instances.
raise SkipTest("Using an existing instance.")
configuration_id = configuration_id or self.config_group_id
name = self.instance_info.name + name_suffix
flavor = self.get_instance_flavor()
volume_size = self.instance_info.volume_size
self.init_inst_dbs = (self.test_helper.get_valid_database_definitions()
if with_dbs else [])
self.init_inst_users = (self.test_helper.get_valid_user_definitions()
if with_users else [])
self.init_inst_config_group_id = configuration_id
if (self.init_inst_dbs or self.init_inst_users or configuration_id):
info = self.assert_instance_create(
name, flavor, volume_size,
self.init_inst_dbs, self.init_inst_users,
configuration_id, None,
CONFIG.dbaas_datastore, CONFIG.dbaas_datastore_version,
expected_states, expected_http_code,
create_helper_user=create_helper_user)
self.init_inst_info = info
else:
# There is no need to run this test as it's effectively the same as
# the empty instance test.
raise SkipTest("No testable initial properties provided.")
def assert_instance_create(
self, name, flavor, trove_volume_size,
database_definitions, user_definitions,
configuration_id, root_password, datastore, datastore_version,
expected_states, expected_http_code, create_helper_user=False,
locality=None):
"""This assert method executes a 'create' call and verifies the server
        response. It neither waits for the instance to become available
        nor performs any other validations itself.
It has been designed this way to increase test granularity
(other tests may run while the instance is building) and also to allow
its reuse in other runners.
"""
databases = database_definitions
users = [{'name': item['name'], 'password': item['password']}
for item in user_definitions]
instance_info = InstanceTestInfo()
# Here we add helper user/database if any.
if create_helper_user:
helper_db_def, helper_user_def, root_def = self.build_helper_defs()
if helper_db_def:
self.report.log(
"Appending a helper database '%s' to the instance "
"definition." % helper_db_def['name'])
databases.append(helper_db_def)
instance_info.helper_database = helper_db_def
if helper_user_def:
self.report.log(
"Appending a helper user '%s:%s' to the instance "
"definition."
% (helper_user_def['name'], helper_user_def['password']))
users.append(helper_user_def)
instance_info.helper_user = helper_user_def
instance_info.name = name
instance_info.databases = databases
instance_info.users = users
instance_info.dbaas_datastore = CONFIG.dbaas_datastore
instance_info.dbaas_datastore_version = CONFIG.dbaas_datastore_version
instance_info.dbaas_flavor_href = self.get_flavor_href(flavor)
if self.VOLUME_SUPPORT:
instance_info.volume = {'size': trove_volume_size}
else:
instance_info.volume = None
instance_info.nics = self.instance_info.nics
self.report.log("Testing create instance: %s"
% {'name': name,
'flavor': flavor.id,
'volume': trove_volume_size,
'nics': instance_info.nics,
'databases': databases,
'users': users,
'configuration': configuration_id,
'root password': root_password,
'datastore': datastore,
'datastore version': datastore_version})
instance = self.get_existing_instance()
if instance:
self.report.log("Using an existing instance: %s" % instance.id)
self.assert_equal(expected_states[-1], instance.status,
"Given instance is in a bad state.")
instance_info.name = instance.name
else:
self.report.log("Creating a new instance.")
client = self.auth_client
instance = client.instances.create(
instance_info.name,
instance_info.dbaas_flavor_href,
instance_info.volume,
instance_info.databases,
instance_info.users,
nics=instance_info.nics,
configuration=configuration_id,
availability_zone="nova",
datastore=instance_info.dbaas_datastore,
datastore_version=instance_info.dbaas_datastore_version,
locality=locality)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(instance.id, expected_states[0:1])
self.register_debug_inst_ids(instance.id)
instance_info.id = instance.id
with CheckInstance(instance._info) as check:
check.flavor()
check.datastore()
check.links(instance._info['links'])
if self.VOLUME_SUPPORT:
check.volume()
self.assert_equal(trove_volume_size,
instance._info['volume']['size'],
"Unexpected Trove volume size")
self.assert_equal(instance_info.name, instance._info['name'],
"Unexpected instance name")
self.assert_equal(str(flavor.id),
str(instance._info['flavor']['id']),
"Unexpected instance flavor")
self.assert_equal(instance_info.dbaas_datastore,
instance._info['datastore']['type'],
"Unexpected instance datastore version")
self.assert_equal(instance_info.dbaas_datastore_version,
instance._info['datastore']['version'],
"Unexpected instance datastore version")
self.assert_configuration_group(instance_info.id, configuration_id)
if locality:
self.assert_equal(locality, instance._info['locality'],
"Unexpected locality")
return instance_info
def run_wait_for_instance(self, expected_states=['BUILD', 'HEALTHY']):
instances = [self.instance_info.id]
self.assert_all_instance_states(instances, expected_states)
self.instance_info.srv_grp_id = self.assert_server_group_exists(
self.instance_info.id)
self.wait_for_test_helpers(self.instance_info)
def run_wait_for_init_instance(self, expected_states=['BUILD', 'HEALTHY']):
if self.init_inst_info:
instances = [self.init_inst_info.id]
self.assert_all_instance_states(instances, expected_states)
self.wait_for_test_helpers(self.init_inst_info)
def wait_for_test_helpers(self, inst_info):
self.report.log("Waiting for helper users and databases to be "
"created on instance: %s" % inst_info.id)
client = self.auth_client
if inst_info.helper_user:
self.wait_for_user_create(client, inst_info.id,
[inst_info.helper_user])
if inst_info.helper_database:
self.wait_for_database_create(client, inst_info.id,
[inst_info.helper_database])
self.report.log("Test helpers are ready.")
def run_add_initialized_instance_data(self):
if self.init_inst_info:
self.init_inst_data = DataType.small
self.init_inst_host = self.get_instance_host(
self.init_inst_info.id)
self.test_helper.add_data(self.init_inst_data, self.init_inst_host)
def run_validate_initialized_instance(self):
if self.init_inst_info:
self.assert_instance_properties(
self.init_inst_info.id, self.init_inst_dbs,
self.init_inst_users, self.init_inst_config_group_id,
self.init_inst_data)
def assert_instance_properties(
self, instance_id, expected_dbs_definitions,
expected_user_definitions, expected_config_group_id,
expected_data_type):
if expected_dbs_definitions:
self.assert_database_list(instance_id, expected_dbs_definitions)
else:
self.report.log("No databases to validate for instance: %s"
% instance_id)
if expected_user_definitions:
self.assert_user_list(instance_id, expected_user_definitions)
else:
self.report.log("No users to validate for instance: %s"
% instance_id)
self.assert_configuration_group(instance_id, expected_config_group_id)
if self.init_inst_host:
self.test_helper.verify_data(
expected_data_type, self.init_inst_host)
else:
self.report.log("No data to validate for instance: %s"
% instance_id)
def assert_configuration_group(self, instance_id, expected_group_id):
instance = self.get_instance(instance_id)
if expected_group_id:
self.assert_equal(expected_group_id, instance.configuration['id'],
"Wrong configuration group attached")
else:
self.assert_false(hasattr(instance, 'configuration'),
"No configuration group expected")
def assert_database_list(self, instance_id, expected_databases):
self.wait_for_database_create(self.auth_client,
instance_id, expected_databases)
def _get_names(self, definitions):
return [item['name'] for item in definitions]
def assert_user_list(self, instance_id, expected_users):
self.wait_for_user_create(self.auth_client,
instance_id, expected_users)
# Verify that user definitions include only created databases.
all_databases = self._get_names(
self.test_helper.get_valid_database_definitions())
for user in expected_users:
if 'databases' in user:
self.assert_is_sublist(
self._get_names(user['databases']), all_databases,
"Definition of user '%s' specifies databases not included "
"in the list of initial databases." % user['name'])
def run_initialized_instance_delete(self, expected_http_code=202):
if self.init_inst_info:
client = self.auth_client
client.instances.delete(self.init_inst_info.id)
self.assert_client_code(client, expected_http_code)
else:
raise SkipTest("Cleanup is not required.")
def run_wait_for_init_delete(self, expected_states=['SHUTDOWN']):
delete_ids = []
if self.init_inst_info:
delete_ids.append(self.init_inst_info.id)
if delete_ids:
self.assert_all_gone(delete_ids, expected_states[-1])
else:
raise SkipTest("Cleanup is not required.")
self.init_inst_info = None
self.init_inst_dbs = None
self.init_inst_users = None
self.init_inst_host = None
self.init_inst_data = None
self.init_inst_config_group_id = None
def run_initial_configuration_delete(self, expected_http_code=202):
if self.config_group_id:
client = self.auth_client
client.configurations.delete(self.config_group_id)
self.assert_client_code(client, expected_http_code)
else:
raise SkipTest("Cleanup is not required.")
self.config_group_id = None
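As its docstring notes, the create runner above deliberately issues the `create` call and returns without waiting, so other scenario tests can run while the instance builds; a separate step polls for the final state later. A minimal plain-Python sketch of that decoupled create-then-poll pattern (the `client` object and status names here are illustrative stand-ins, not the deleted runners' API):

```python
import time


def create_instance(client, name, flavor, volume_size):
    # Issue the create call and return immediately; waiting is a
    # separate step so other tests can run while the instance builds.
    return client.instances.create(name, flavor, {'size': volume_size})


def wait_for_status(client, instance_id, target='HEALTHY',
                    fail_on=('ERROR',), interval=3, timeout=300):
    # Poll the instance until it reaches the target status, fast-fail
    # on a known-bad status, or give up after the timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = client.instances.get(instance_id).status
        if status == target:
            return status
        if status in fail_on:
            raise RuntimeError('instance %s went to %s'
                               % (instance_id, status))
        time.sleep(interval)
    raise TimeoutError('instance %s never reached %s'
                       % (instance_id, target))
```

Keeping the create call and the wait as separate steps is what lets the scenario framework interleave other tests during the build.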

View File

@@ -1,49 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import proboscis
from trove.tests.scenario.runners.test_runners import TestRunner
class InstanceDeleteRunner(TestRunner):
def __init__(self):
super(InstanceDeleteRunner, self).__init__()
def run_instance_delete(self, expected_http_code=202):
if self.has_do_not_delete_instance:
self.report.log("TESTS_DO_NOT_DELETE_INSTANCE=True was "
"specified, skipping delete...")
raise proboscis.SkipTest("TESTS_DO_NOT_DELETE_INSTANCE "
"was specified.")
self.assert_instance_delete(self.instance_info.id, expected_http_code)
def assert_instance_delete(self, instance_id, expected_http_code):
self.report.log("Testing delete on instance: %s" % instance_id)
client = self.auth_client
client.instances.delete(instance_id)
self.assert_client_code(client, expected_http_code)
def run_instance_delete_wait(self, expected_states=['SHUTDOWN']):
if self.has_do_not_delete_instance:
self.report.log("TESTS_DO_NOT_DELETE_INSTANCE=True was "
"specified, skipping delete wait...")
raise proboscis.SkipTest("TESTS_DO_NOT_DELETE_INSTANCE "
"was specified.")
self.assert_all_gone(self.instance_info.id, expected_states[-1])
self.assert_server_group_gone(self.instance_info.srv_grp_id)

View File

@@ -1,129 +0,0 @@
# Copyright 2016 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from trove.tests.scenario.runners.test_runners import CheckInstance
from trove.tests.scenario.runners.test_runners import TestRunner
class InstanceErrorCreateRunner(TestRunner):
def __init__(self):
super(InstanceErrorCreateRunner, self).__init__(sleep_time=1)
self.error_inst_id = None
self.error2_inst_id = None
def run_create_error_instance(self, expected_http_code=200):
if self.is_using_existing_instance:
raise SkipTest("Using an existing instance.")
name = self.instance_info.name + '_error'
flavor = self.get_instance_flavor(fault_num=1)
client = self.auth_client
inst = client.instances.create(
name,
self.get_flavor_href(flavor),
self.instance_info.volume,
nics=self.instance_info.nics,
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version)
self.assert_client_code(client, expected_http_code)
self.error_inst_id = inst.id
def run_create_error2_instance(self, expected_http_code=200):
if self.is_using_existing_instance:
raise SkipTest("Using an existing instance.")
name = self.instance_info.name + '_error2'
flavor = self.get_instance_flavor(fault_num=2)
client = self.auth_client
inst = client.instances.create(
name,
self.get_flavor_href(flavor),
self.instance_info.volume,
nics=self.instance_info.nics,
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version)
self.assert_client_code(client, expected_http_code)
self.error2_inst_id = inst.id
def run_wait_for_error_instances(self, expected_states=['ERROR']):
error_ids = []
if self.error_inst_id:
error_ids.append(self.error_inst_id)
if self.error2_inst_id:
error_ids.append(self.error2_inst_id)
if error_ids:
self.assert_all_instance_states(
error_ids, expected_states, fast_fail_status=[])
def run_validate_error_instance(self):
if not self.error_inst_id:
raise SkipTest("No error instance created.")
instance = self.get_instance(
self.error_inst_id, self.auth_client)
with CheckInstance(instance._info) as check:
check.fault()
err_msg = "disk is too small for requested image"
self.assert_true(err_msg in instance.fault['message'],
"Message '%s' does not contain '%s'" %
(instance.fault['message'], err_msg))
def run_validate_error2_instance(self):
if not self.error2_inst_id:
raise SkipTest("No error2 instance created.")
instance = self.get_instance(
self.error2_inst_id, client=self.admin_client)
with CheckInstance(instance._info) as check:
check.fault(is_admin=True)
err_msg = "Quota exceeded for ram"
self.assert_true(err_msg in instance.fault['message'],
"Message '%s' does not contain '%s'" %
(instance.fault['message'], err_msg))
def run_delete_error_instances(self, expected_http_code=202):
client = self.auth_client
if self.error_inst_id:
client.instances.delete(self.error_inst_id)
self.assert_client_code(client, expected_http_code)
if self.error2_inst_id:
client.instances.delete(self.error2_inst_id)
self.assert_client_code(client, expected_http_code)
def run_wait_for_error_delete(self, expected_states=['SHUTDOWN']):
delete_ids = []
if self.error_inst_id:
delete_ids.append(self.error_inst_id)
if self.error2_inst_id:
delete_ids.append(self.error2_inst_id)
if delete_ids:
self.assert_all_gone(delete_ids, expected_states[-1])
else:
raise SkipTest("Cleanup is not required.")
# All the neutron ports should be removed.
if self.error_inst_id:
ports = self.neutron_client.list_ports(
name='trove-%s' % self.error_inst_id
)
self.assert_equal(0, len(ports.get("ports", [])))
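The error-create runner above verifies failures by checking that the instance's fault message contains an expected fragment, and reports both strings when it does not. A tiny sketch of that containment assertion (the function name is illustrative, not part of the deleted runner's API):

```python
def assert_fault_contains(fault_message, expected_fragment):
    # Fail with a descriptive error unless the expected fragment
    # appears somewhere in the instance's fault message.
    if expected_fragment not in fault_message:
        raise AssertionError(
            "Message '%s' does not contain '%s'"
            % (fault_message, expected_fragment))
    return True
```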

View File

@@ -1,59 +0,0 @@
# Copyright 2016 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.scenario.runners.test_runners import TestRunner
class InstanceForceDeleteRunner(TestRunner):
def __init__(self):
super(InstanceForceDeleteRunner, self).__init__(sleep_time=1)
self.build_inst_id = None
def run_create_build_instance(self, expected_states=['NEW', 'BUILD'],
expected_http_code=200):
if self.is_using_existing_instance:
raise SkipTest("Using an existing instance.")
name = self.instance_info.name + '_build'
flavor = self.get_instance_flavor()
client = self.auth_client
inst = client.instances.create(
name,
self.get_flavor_href(flavor),
self.instance_info.volume,
nics=self.instance_info.nics,
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action([inst.id], expected_states)
self.build_inst_id = inst.id
def run_delete_build_instance(self, expected_http_code=202):
if self.build_inst_id:
client = self.admin_client
client.instances.force_delete(self.build_inst_id)
self.assert_client_code(client, expected_http_code)
def run_wait_for_force_delete(self):
raise SkipKnownBug(runners.BUG_FORCE_DELETE_FAILS)
# if self.build_inst_id:
# self.assert_all_gone([self.build_inst_id], ['SHUTDOWN'])

View File

@@ -1,68 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_service import loopingcall
from trove.tests.scenario.helpers.test_helper import DataType
from trove.tests.scenario.runners.test_runners import TestRunner
class InstanceUpgradeRunner(TestRunner):
def __init__(self):
super(InstanceUpgradeRunner, self).__init__()
def run_add_test_data(self):
host = self.get_instance_host(self.instance_info.id)
self.test_helper.add_data(DataType.small, host)
def run_verify_test_data(self):
host = self.get_instance_host(self.instance_info.id)
self.test_helper.verify_data(DataType.small, host)
def run_remove_test_data(self):
host = self.get_instance_host(self.instance_info.id)
self.test_helper.remove_data(DataType.small, host)
def run_instance_upgrade(self, expected_states=['UPGRADE', 'HEALTHY'],
expected_http_code=202):
instance_id = self.instance_info.id
self.report.log("Testing upgrade on instance: %s" % instance_id)
target_version = self.instance_info.dbaas_datastore_version
client = self.auth_client
client.instances.upgrade(instance_id, target_version)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(instance_id, expected_states)
def _wait_for_user_list():
try:
all_users = self.get_user_names(client, instance_id)
self.report.log("Users in the db instance %s: %s" %
(instance_id, all_users))
except Exception as e:
self.report.log(
"Failed to list users in db instance %s(will continue), "
"error: %s" % (instance_id, str(e))
)
else:
raise loopingcall.LoopingCallDone()
timer = loopingcall.FixedIntervalWithTimeoutLoopingCall(
_wait_for_user_list)
try:
timer.start(interval=3, timeout=120).wait()
except loopingcall.LoopingCallTimeOut:
self.fail("Timed out: Cannot list users in the db instance %s"
% instance_id)
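`run_instance_upgrade` above polls with oslo_service's `FixedIntervalWithTimeoutLoopingCall`, treating exceptions from the user listing as "not ready yet" and retrying until the call succeeds or the timeout expires. The same retry-until-success-with-timeout logic can be sketched in plain Python without the oslo dependency (names here are illustrative):

```python
import time


class LoopingCallTimeOut(Exception):
    """Raised when the polled call never succeeds before the timeout."""


def retry_until(func, interval=3, timeout=120):
    # Call func() repeatedly; any exception means "not ready yet".
    # Return func()'s result on the first success, or raise
    # LoopingCallTimeOut once the deadline has passed.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return func()
        except Exception:
            if time.monotonic() >= deadline:
                raise LoopingCallTimeOut()
            time.sleep(interval)
```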

File diff suppressed because it is too large

View File

@@ -1,101 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from trove.tests.scenario.runners.test_runners import TestRunner
from troveclient.compat import exceptions
class NegativeClusterActionsRunner(TestRunner):
def __init__(self):
super(NegativeClusterActionsRunner, self).__init__()
def run_create_constrained_size_cluster(self, min_nodes=2, max_nodes=None,
expected_http_code=400):
self.assert_create_constrained_size_cluster('negative_cluster',
min_nodes, max_nodes,
expected_http_code)
def assert_create_constrained_size_cluster(self, cluster_name,
min_nodes, max_nodes,
expected_http_code):
# Create a cluster with less than 'min_nodes'.
if min_nodes:
instances_def = [self.build_flavor()] * (min_nodes - 1)
self._assert_cluster_create_raises(cluster_name, instances_def,
expected_http_code)
        # Create a cluster with more than 'max_nodes'.
if max_nodes:
instances_def = [self.build_flavor()] * (max_nodes + 1)
self._assert_cluster_create_raises(cluster_name, instances_def,
expected_http_code)
def run_create_heterogeneous_cluster(self, expected_http_code=400):
# Create a cluster with different node flavors.
instances_def = [self.build_flavor(flavor_id=2, volume_size=1),
self.build_flavor(flavor_id=3, volume_size=1)]
self._assert_cluster_create_raises('heterocluster',
instances_def, expected_http_code)
# Create a cluster with different volume sizes.
instances_def = [self.build_flavor(flavor_id=2, volume_size=1),
self.build_flavor(flavor_id=2, volume_size=2)]
self._assert_cluster_create_raises('heterocluster',
instances_def, expected_http_code)
def _assert_cluster_create_raises(self, cluster_name, instances_def,
expected_http_code):
client = self.auth_client
self.assert_raises(exceptions.BadRequest, expected_http_code,
client, client.clusters.create,
cluster_name,
self.instance_info.dbaas_datastore,
self.instance_info.dbaas_datastore_version,
instances=instances_def)
class MongodbNegativeClusterActionsRunner(NegativeClusterActionsRunner):
def run_create_constrained_size_cluster(self):
super(MongodbNegativeClusterActionsRunner,
self).run_create_constrained_size_cluster(min_nodes=3,
max_nodes=3)
class CassandraNegativeClusterActionsRunner(NegativeClusterActionsRunner):
def run_create_constrained_size_cluster(self):
raise SkipTest("No constraints apply to the number of cluster nodes.")
def run_create_heterogeneous_cluster(self):
raise SkipTest("No constraints apply to the size of cluster nodes.")
class RedisNegativeClusterActionsRunner(NegativeClusterActionsRunner):
def run_create_constrained_size_cluster(self):
raise SkipTest("No constraints apply to the number of cluster nodes.")
def run_create_heterogeneous_cluster(self):
raise SkipTest("No constraints apply to the size of cluster nodes.")
class PxcNegativeClusterActionsRunner(NegativeClusterActionsRunner):
def run_create_constrained_size_cluster(self):
raise SkipTest("No constraints apply to the number of cluster nodes.")

View File

@@ -1,464 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from trove.common import utils
from trove.tests.scenario.helpers.test_helper import DataType
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import CheckInstance
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.scenario.runners.test_runners import TestRunner
from troveclient.compat import exceptions
class ReplicationRunner(TestRunner):
def __init__(self):
super(ReplicationRunner, self).__init__()
self.master_id = self.instance_info.id
self.replica_1_id = 0
self.master_host = self.get_instance_host(self.master_id)
self.replica_1_host = None
self.master_backup_count = None
self.used_data_sets = set()
self.non_affinity_master_id = None
self.non_affinity_srv_grp_id = None
self.non_affinity_repl_id = None
self.locality = 'affinity'
def run_add_data_for_replication(self, data_type=DataType.small):
self.assert_add_replication_data(data_type, self.master_host)
def assert_add_replication_data(self, data_type, host):
"""In order for this to work, the corresponding datastore
'helper' class should implement the 'add_actual_data' method.
"""
self.test_helper.add_data(data_type, host)
self.used_data_sets.add(data_type)
def run_add_data_after_replica(self, data_type=DataType.micro):
self.assert_add_replication_data(data_type, self.master_host)
def run_verify_data_for_replication(self, data_type=DataType.small):
self.assert_verify_replication_data(data_type, self.master_host)
def assert_verify_replication_data(self, data_type, host):
"""In order for this to work, the corresponding datastore
'helper' class should implement the 'verify_actual_data' method.
"""
self.test_helper.verify_data(data_type, host)
def run_create_non_affinity_master(self, expected_http_code=200):
client = self.auth_client
self.non_affinity_master_id = client.instances.create(
self.instance_info.name + '_non-affinity',
self.instance_info.dbaas_flavor_href,
self.instance_info.volume,
datastore=self.instance_info.dbaas_datastore,
datastore_version=self.instance_info.dbaas_datastore_version,
nics=self.instance_info.nics,
locality='anti-affinity').id
self.assert_client_code(client, expected_http_code)
self.register_debug_inst_ids(self.non_affinity_master_id)
def run_create_single_replica(self, expected_http_code=200):
self.master_backup_count = len(
self.auth_client.instances.backups(self.master_id))
self.replica_1_id = self.assert_replica_create(
self.master_id, 'replica1', 1, expected_http_code)[0]
def assert_replica_create(
self, master_id, replica_name, replica_count, expected_http_code):
# When creating multiple replicas, only one replica info will be
# returned, so we should compare the replica set members before and
# after the creation to get the correct new replica ids.
original_replicas = self._get_replica_set(master_id)
client = self.auth_client
client.instances.create(
self.instance_info.name + '_' + replica_name,
replica_of=master_id,
nics=self.instance_info.nics,
replica_count=replica_count)
self.assert_client_code(client, expected_http_code)
new_replicas = self._get_replica_set(master_id) - original_replicas
self.register_debug_inst_ids(new_replicas)
return list(new_replicas)
def run_wait_for_single_replica(self,
expected_states=['BUILD', 'HEALTHY']):
self.assert_instance_action(self.replica_1_id, expected_states)
self._assert_is_master(self.master_id, [self.replica_1_id])
self._assert_is_replica(self.replica_1_id, self.master_id)
self._assert_locality(self.master_id)
self.replica_1_host = self.get_instance_host(self.replica_1_id)
def _assert_is_master(self, instance_id, replica_ids):
client = self.admin_client
instance = self.get_instance(instance_id, client=client)
self.assert_client_code(client, 200)
CheckInstance(instance._info).slaves()
self.assert_true(
set(replica_ids).issubset(self._get_replica_set(instance_id)))
self._validate_master(instance_id)
def _get_replica_set(self, master_id):
instance = self.get_instance(master_id)
        # Return an empty set before the first replica is created.
return set([replica['id']
for replica in instance._info.get('replicas', [])])
def _assert_is_replica(self, instance_id, master_id):
client = self.admin_client
instance = self.get_instance(instance_id, client=client)
self.assert_client_code(client, 200)
CheckInstance(instance._info).replica_of()
self.assert_equal(master_id, instance._info['replica_of']['id'],
'Unexpected replication master ID')
self._validate_replica(instance_id)
def _assert_locality(self, instance_id):
replica_ids = self._get_replica_set(instance_id)
instance = self.get_instance(instance_id)
self.assert_equal(self.locality, instance.locality,
"Unexpected locality for instance '%s'" %
instance_id)
for replica_id in replica_ids:
replica = self.get_instance(replica_id)
self.assert_equal(self.locality, replica.locality,
"Unexpected locality for instance '%s'" %
replica_id)
def run_wait_for_non_affinity_master(self,
expected_states=['BUILD', 'HEALTHY']):
self._assert_instance_states(self.non_affinity_master_id,
expected_states)
self.non_affinity_srv_grp_id = self.assert_server_group_exists(
self.non_affinity_master_id)
def run_create_non_affinity_replica(self, expected_http_code=200):
client = self.auth_client
self.non_affinity_repl_id = client.instances.create(
self.instance_info.name + '_non-affinity-repl',
nics=self.instance_info.nics,
replica_of=self.non_affinity_master_id,
replica_count=1).id
self.assert_client_code(client, expected_http_code)
self.register_debug_inst_ids(self.non_affinity_repl_id)
def run_create_multiple_replicas(self, expected_http_code=200):
self.assert_replica_create(self.master_id,
'replica2', 2, expected_http_code)
def run_wait_for_multiple_replicas(
self, expected_states=['BUILD', 'HEALTHY']):
replica_ids = self._get_replica_set(self.master_id)
self.report.log("Waiting for replicas: %s" % replica_ids)
self.assert_instance_action(replica_ids, expected_states)
self._assert_is_master(self.master_id, replica_ids)
for replica_id in replica_ids:
self._assert_is_replica(replica_id, self.master_id)
self._assert_locality(self.master_id)
def run_wait_for_non_affinity_replica_fail(
self, expected_states=['BUILD', 'ERROR']):
self._assert_instance_states(self.non_affinity_repl_id,
expected_states,
fast_fail_status=['HEALTHY'])
def run_delete_non_affinity_repl(self, expected_http_code=202):
self.assert_delete_instances(
self.non_affinity_repl_id, expected_http_code=expected_http_code)
def assert_delete_instances(self, instance_ids, expected_http_code):
instance_ids = (instance_ids if utils.is_collection(instance_ids)
else [instance_ids])
client = self.auth_client
for instance_id in instance_ids:
client.instances.delete(instance_id)
self.assert_client_code(client, expected_http_code)
def run_wait_for_delete_non_affinity_repl(
self, expected_last_status=['SHUTDOWN']):
self.assert_all_gone([self.non_affinity_repl_id],
expected_last_status=expected_last_status)
def run_delete_non_affinity_master(self, expected_http_code=202):
self.assert_delete_instances(
self.non_affinity_master_id, expected_http_code=expected_http_code)
def run_wait_for_delete_non_affinity_master(
self, expected_last_status=['SHUTDOWN']):
self.assert_all_gone([self.non_affinity_master_id],
expected_last_status=expected_last_status)
self.assert_server_group_gone(self.non_affinity_srv_grp_id)
def run_add_data_to_replicate(self):
self.assert_add_replication_data(DataType.tiny, self.master_host)
def run_verify_data_to_replicate(self):
self.assert_verify_replication_data(DataType.tiny, self.master_host)
def run_verify_replica_data_orig(self):
self.assert_verify_replica_data(self.instance_info.id, DataType.small)
def assert_verify_replica_data(self, master_id, data_type):
replica_ids = self._get_replica_set(master_id)
for replica_id in replica_ids:
host = self.get_instance_host(replica_id)
self.report.log("Checking data on host %s" % host)
self.assert_verify_replication_data(data_type, host)
def run_verify_replica_data_after_single(self):
self.assert_verify_replica_data(self.instance_info.id, DataType.micro)
def run_verify_replica_data_new(self):
self.assert_verify_replica_data(self.instance_info.id, DataType.tiny)
def run_promote_master(self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.instances.promote_to_replica_source,
self.instance_info.id)
def run_eject_replica(self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.instances.eject_replica_source,
self.replica_1_id)
def run_eject_valid_master(self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# client = self.auth_client
# self.assert_raises(
# expected_exception, expected_http_code,
# client, client.instances.eject_replica_source,
# self.instance_info.id)
# Uncomment once BUG_EJECT_VALID_MASTER is fixed
raise SkipKnownBug(runners.BUG_EJECT_VALID_MASTER)
def run_delete_valid_master(self, expected_exception=exceptions.Forbidden,
expected_http_code=403):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.instances.delete,
self.instance_info.id)
def run_promote_to_replica_source(self,
expected_states=['PROMOTE', 'HEALTHY'],
expected_http_code=202):
self.assert_promote_to_replica_source(
self.replica_1_id, self.instance_info.id, expected_states,
expected_http_code)
def assert_promote_to_replica_source(
self, new_master_id, old_master_id,
expected_states, expected_http_code):
original_replica_ids = self._get_replica_set(old_master_id)
other_replica_ids = list(original_replica_ids)
other_replica_ids.remove(new_master_id)
# Promote replica
self.assert_replica_promote(new_master_id, expected_states,
expected_http_code)
current_replica_ids = list(other_replica_ids)
current_replica_ids.append(old_master_id)
self._assert_is_master(new_master_id, current_replica_ids)
self._assert_is_replica(old_master_id, new_master_id)
def assert_replica_promote(
self, new_master_id, expected_states, expected_http_code):
client = self.auth_client
client.instances.promote_to_replica_source(new_master_id)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(new_master_id, expected_states)
def run_verify_replica_data_new_master(self):
self.assert_verify_replication_data(
DataType.small, self.replica_1_host)
self.assert_verify_replication_data(
DataType.tiny, self.replica_1_host)
def run_add_data_to_replicate2(self):
self.assert_add_replication_data(DataType.tiny2, self.replica_1_host)
def run_verify_data_to_replicate2(self):
self.assert_verify_replication_data(DataType.tiny2,
self.replica_1_host)
def run_verify_replica_data_new2(self):
self.assert_verify_replica_data(self.replica_1_id, DataType.tiny2)
def run_promote_original_source(self,
expected_states=['PROMOTE', 'HEALTHY'],
expected_http_code=202):
self.assert_promote_to_replica_source(
self.instance_info.id, self.replica_1_id, expected_states,
expected_http_code)
def run_add_final_data_to_replicate(self):
self.assert_add_replication_data(DataType.tiny3, self.master_host)
def run_verify_data_to_replicate_final(self):
self.assert_verify_replication_data(DataType.tiny3, self.master_host)
def run_verify_final_data_replicated(self):
self.assert_verify_replica_data(self.master_id, DataType.tiny3)
def run_remove_replicated_data(self):
self.assert_remove_replicated_data(self.master_host)
def assert_remove_replicated_data(self, host):
"""In order for this to work, the corresponding datastore
'helper' class should implement the 'remove_actual_data' method.
"""
for data_set in self.used_data_sets:
self.report.log("Removing replicated data set: %s" % data_set)
self.test_helper.remove_data(data_set, host)
def run_detach_replica_from_source(self,
expected_states=['DETACH', 'HEALTHY'],
expected_http_code=202):
self.assert_detach_replica_from_source(
self.instance_info.id, self.replica_1_id,
expected_states, expected_http_code)
def assert_detach_replica_from_source(
self, master_id, replica_id, expected_states,
expected_http_code):
other_replica_ids = self._get_replica_set(master_id)
other_replica_ids.remove(replica_id)
self.assert_detach_replica(
replica_id, expected_states, expected_http_code)
self._assert_is_master(master_id, other_replica_ids)
self._assert_is_not_replica(replica_id)
def assert_detach_replica(
self, replica_id, expected_states, expected_http_code):
client = self.auth_client
client.instances.update(replica_id, detach_replica_source=True)
self.assert_client_code(client, expected_http_code)
self.assert_instance_action(replica_id, expected_states)
def _assert_is_not_replica(self, instance_id):
client = self.admin_client
instance = self.get_instance(instance_id, client=client)
self.assert_client_code(client, 200)
if 'replica_of' not in instance._info:
try:
self._validate_replica(instance_id)
self.fail("The instance is still configured as a replica "
"after detached: %s" % instance_id)
except AssertionError:
pass
else:
self.fail("Unexpected replica_of ID.")
def run_delete_detached_replica(self, expected_http_code=202):
self.assert_delete_instances(
self.replica_1_id, expected_http_code=expected_http_code)
def run_delete_all_replicas(self, expected_http_code=202):
self.assert_delete_all_replicas(
self.instance_info.id, expected_http_code)
def assert_delete_all_replicas(
self, master_id, expected_http_code):
self.report.log("Deleting a replication set: %s" % master_id)
replica_ids = self._get_replica_set(master_id)
self.assert_delete_instances(replica_ids, expected_http_code)
def run_wait_for_delete_replicas(
self, expected_last_status=['SHUTDOWN']):
replica_ids = self._get_replica_set(self.master_id)
replica_ids.add(self.replica_1_id)
self.assert_all_gone(replica_ids,
expected_last_status=expected_last_status)
def run_test_backup_deleted(self):
backups = self.auth_client.instances.backups(self.master_id)
self.assert_equal(self.master_backup_count, len(backups))
def run_cleanup_master_instance(self):
pass
def _validate_master(self, instance_id):
"""This method is intended to be overridden by each
datastore as needed. It is to be used for any database
specific master instance validation.
"""
pass
def _validate_replica(self, instance_id):
"""This method is intended to be overridden by each
datastore as needed. It is to be used for any database
specific replica instance validation.
"""
pass
class MysqlReplicationRunner(ReplicationRunner):
def run_cleanup_master_instance(self):
for user in self.auth_client.users.list(self.master_id):
if user.name.startswith("slave_"):
self.auth_client.users.delete(self.master_id, user.name,
user.host)
def _validate_master(self, instance_id):
"""For Mysql validate that the master has its
binlog_format set to MIXED.
"""
host = self.get_instance_host(instance_id)
self._validate_binlog_fmt(instance_id, host)
def _validate_replica(self, instance_id):
"""For Mysql validate that any replica has its
binlog_format set to MIXED and it is in read_only
mode.
"""
host = self.get_instance_host(instance_id)
self._validate_binlog_fmt(instance_id, host)
self._validate_read_only(instance_id, host)
def _validate_binlog_fmt(self, instance_id, host):
binlog_fmt = self.test_helper.get_configuration_value('binlog_format',
host)
self.assert_equal(self._get_expected_binlog_format(), binlog_fmt,
'Wrong binlog format detected for %s' % instance_id)
def _get_expected_binlog_format(self):
return 'MIXED'
def _validate_read_only(self, instance_id, host):
read_only = self.test_helper.get_configuration_value('read_only',
host)
self.assert_equal('ON', read_only, 'Wrong read only mode detected '
'for %s' % instance_id)
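The master/replica invariants checked by `_validate_binlog_fmt` and `_validate_read_only` reduce to simple predicates over the server's global variables. A minimal standalone sketch (not part of Trove; the keys mirror MySQL's `binlog_format` and `read_only` globals):

```python
def validate_master(server_vars, expected_binlog_format='MIXED'):
    """Master invariant: binlog_format must match the expected value."""
    return server_vars.get('binlog_format') == expected_binlog_format


def validate_replica(server_vars, expected_binlog_format='MIXED'):
    """Replica invariant: correct binlog_format *and* read-only mode on."""
    return (validate_master(server_vars, expected_binlog_format)
            and server_vars.get('read_only') == 'ON')
```

In the runners above, the equivalent of `server_vars` is obtained through the datastore helper's `get_configuration_value` for the instance host.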
class PerconaReplicationRunner(MysqlReplicationRunner):
pass
class MariadbReplicationRunner(MysqlReplicationRunner):
pass
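`assert_promote_to_replica_source` derives the expected post-promotion replica set with plain set arithmetic: the promoted instance leaves the set and the old master joins it. A minimal sketch of that bookkeeping, with hypothetical instance IDs:

```python
def replicas_after_promotion(replica_ids, new_master_id, old_master_id):
    # The promoted replica becomes the master (leaves the set);
    # the demoted master becomes a replica (joins the set).
    remaining = set(replica_ids) - {new_master_id}
    remaining.add(old_master_id)
    return remaining
```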


@ -1,249 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from proboscis import SkipTest
from trove.common import utils
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.scenario.runners.test_runners import TestRunner
from troveclient.compat import exceptions
class RootActionsRunner(TestRunner):
def __init__(self):
self.current_root_creds = None
self.restored_root_creds = None
self.restored_root_creds2 = None
super(RootActionsRunner, self).__init__()
def run_check_root_never_enabled(self, expected_http_code=200):
self.assert_root_disabled(self.instance_info.id, expected_http_code)
def assert_root_disabled(self, instance_id, expected_http_code):
self._assert_root_state(instance_id, False, expected_http_code,
"The root has already been enabled on the "
"instance.")
def _assert_root_state(self, instance_id, expected_state,
expected_http_code, message):
# The call returns a nameless user object with 'rootEnabled' attribute.
client = self.auth_client
response = client.root.is_root_enabled(instance_id)
self.assert_client_code(client, expected_http_code)
actual_state = getattr(response, 'rootEnabled', None)
self.assert_equal(expected_state, actual_state, message)
def run_disable_root_before_enabled(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_root_disable_failure(
self.instance_info.id, expected_exception, expected_http_code)
def assert_root_disable_failure(self, instance_id, expected_exception,
expected_http_code):
client = self.auth_client
self.assert_raises(expected_exception, expected_http_code,
client, client.root.delete, instance_id)
def run_enable_root_no_password(self, expected_http_code=200):
root_credentials = self.test_helper.get_helper_credentials_root()
self.current_root_creds = self.assert_root_create(
self.instance_info.id, None, root_credentials['name'],
expected_http_code)
self.restored_root_creds = list(self.current_root_creds)
def assert_root_create(self, instance_id, root_password,
expected_root_name, expected_http_code):
client = self.auth_client
if root_password is not None:
root_creds = client.root.create_instance_root(
instance_id, root_password)
self.assert_equal(root_password, root_creds[1])
else:
root_creds = client.root.create(instance_id)
self.assert_client_code(client, expected_http_code)
if expected_root_name is not None:
self.assert_equal(expected_root_name, root_creds[0])
self.assert_can_connect(instance_id, root_creds)
return root_creds
def assert_can_connect(self, instance_id, test_connect_creds):
self._assert_connect(instance_id, True, test_connect_creds)
def _assert_connect(self, instance_id, expected_response,
test_connect_creds):
host = self.get_instance_host(instance_id=instance_id)
self.report.log(
"Pinging instance %s with credentials: %s, database: %s" %
(instance_id, test_connect_creds,
self.test_helper.credentials.get("database"))
)
ping_response = self.test_helper.ping(
host,
username=test_connect_creds[0],
password=test_connect_creds[1],
database=self.test_helper.credentials.get("database")
)
self.assert_equal(expected_response, ping_response)
def run_check_root_enabled(self, expected_http_code=200):
self.assert_root_enabled(self.instance_info.id, expected_http_code)
def assert_root_enabled(self, instance_id, expected_http_code):
self._assert_root_state(instance_id, True, expected_http_code,
"The root has not been enabled on the "
"instance yet.")
def run_enable_root_with_password(self, expected_http_code=200):
root_credentials = self.test_helper.get_helper_credentials_root()
password = root_credentials['password']
if password is not None:
self.current_root_creds = self.assert_root_create(
self.instance_info.id,
password, root_credentials['name'],
expected_http_code)
else:
raise SkipTest("No valid root password defined in %s."
% self.test_helper.get_class_name())
def run_disable_root(self, expected_http_code=204):
self.restored_root_creds2 = list(self.current_root_creds)
self.assert_root_disable(self.instance_info.id, expected_http_code)
def assert_root_disable(self, instance_id, expected_http_code):
client = self.auth_client
client.root.delete(instance_id)
self.assert_client_code(client, expected_http_code)
self.assert_cannot_connect(self.instance_info.id,
self.current_root_creds)
def assert_cannot_connect(self, instance_id, test_connect_creds):
self._assert_connect(instance_id, False, test_connect_creds)
def run_check_root_still_enabled_after_disable(
self, expected_http_code=200):
self.assert_root_enabled(self.instance_info.id, expected_http_code)
def run_delete_root(self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_root_delete_failure(
self.instance_info.id, expected_exception, expected_http_code)
def assert_root_delete_failure(self, instance_id, expected_exception,
expected_http_code):
root_user_name = self.current_root_creds[0]
client = self.auth_client
self.assert_raises(expected_exception, expected_http_code,
client, client.users.delete,
instance_id, root_user_name)
def run_check_root_enabled_after_restore(
self, restored_instance_id, restored_creds,
expected_http_code=200):
self.assert_root_enabled_after_restore(
restored_instance_id, restored_creds, True, expected_http_code)
def run_check_root_enabled_after_restore2(
self, restored_instance_id, restored_creds,
expected_http_code=200):
self.assert_root_enabled_after_restore(
restored_instance_id, restored_creds, False, expected_http_code)
def assert_root_enabled_after_restore(
self, restored_instance_id, restored_creds,
expected_connect_response, expected_http_code):
if restored_instance_id:
self.assert_root_enabled(restored_instance_id, expected_http_code)
self._assert_connect(restored_instance_id,
expected_connect_response, restored_creds)
else:
raise SkipTest("No restored instance.")
def check_root_disable_supported(self):
"""Raise SkipTest if root-disable is not supported."""
pass
def check_inherit_root_state_supported(self):
"""Raise SkipTest if inheriting root state is not supported."""
pass
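`_assert_root_state` keys off a single `rootEnabled` attribute on an otherwise nameless user object returned by the client. A standalone sketch of that check, using a hypothetical stand-in for the troveclient response:

```python
def root_enabled_state(response):
    # A missing attribute is read as "root was never enabled" (None),
    # which compares unequal to both True and False, as intended.
    return getattr(response, 'rootEnabled', None)


class FakeRootResponse:
    """Hypothetical stand-in for the troveclient is_root_enabled() result."""

    def __init__(self, enabled=None):
        if enabled is not None:
            self.rootEnabled = enabled
```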
class PerconaRootActionsRunner(RootActionsRunner):
def check_root_disable_supported(self):
raise SkipTest("Operation is currently not supported.")
class MariadbRootActionsRunner(RootActionsRunner):
def check_root_disable_supported(self):
raise SkipTest("Operation is currently not supported.")
class PxcRootActionsRunner(RootActionsRunner):
def check_root_disable_supported(self):
raise SkipTest("Operation is currently not supported.")
class PostgresqlRootActionsRunner(RootActionsRunner):
def check_root_disable_supported(self):
raise SkipTest("Operation is currently not supported.")
def run_enable_root_with_password(self):
raise SkipTest("Operation is currently not supported.")
def run_delete_root(self):
raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)
class CouchbaseRootActionsRunner(RootActionsRunner):
def _assert_connect(
self, instance_id, expected_response, test_connect_creds):
host = self.get_instance_host(instance_id=instance_id)
self.report.log("Pinging instance %s with credentials: %s"
% (instance_id, test_connect_creds))
mgmt_port = 8091
mgmt_creds = '%s:%s' % (test_connect_creds[0], test_connect_creds[1])
rest_endpoint = ('http://%s:%d/pools/nodes'
% (host, mgmt_port))
out, err = utils.execute_with_timeout(
'curl', '-u', mgmt_creds, rest_endpoint)
self.assert_equal(expected_response, out and len(out) > 0)
def check_root_disable_supported(self):
raise SkipTest("Operation is currently not supported.")
def run_enable_root_with_password(self):
raise SkipTest("Operation is currently not supported.")
def run_delete_root(self):
raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)
class RedisRootActionsRunner(RootActionsRunner):
def check_inherit_root_state_supported(self):
raise SkipTest("Redis instances do not inherit root state "
"from backups.")

File diff suppressed because it is too large


@ -1,527 +0,0 @@
# Copyright 2015 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from urllib import parse as urllib_parse
from proboscis import SkipTest
from trove.common import exception
from trove.common.utils import poll_until
from trove.tests.scenario import runners
from trove.tests.scenario.runners.test_runners import SkipKnownBug
from trove.tests.scenario.runners.test_runners import TestRunner
from troveclient.compat import exceptions
class UserActionsRunner(TestRunner):
# TODO(pmalik): I believe the 202 (Accepted) should be replaced by
# 200 (OK) as the actions are generally very fast and their results
# available immediately upon execution of the request. This would
# likely require replacing GA casts with calls which I believe are
# more appropriate anyways.
def __init__(self):
super(UserActionsRunner, self).__init__()
self.user_defs = []
self.renamed_user_orig_def = None
@property
def first_user_def(self):
if self.user_defs:
# Try to use the first user with databases if any.
for user_def in self.user_defs:
if 'databases' in user_def and user_def['databases']:
return user_def
return self.user_defs[0]
raise SkipTest("No valid user definitions provided.")
@property
def non_existing_user_def(self):
user_def = self.test_helper.get_non_existing_user_definition()
if user_def:
return user_def
raise SkipTest("No valid user definitions provided.")
def run_users_create(self, expected_http_code=202):
users = self.test_helper.get_valid_user_definitions()
if users:
self.user_defs = self.assert_users_create(
self.instance_info.id, users, expected_http_code)
else:
raise SkipTest("No valid user definitions provided.")
def assert_users_create(self, instance_id, serial_users_def,
expected_http_code):
client = self.auth_client
client.users.create(instance_id, serial_users_def)
self.assert_client_code(client, expected_http_code)
self.wait_for_user_create(client, instance_id, serial_users_def)
return serial_users_def
def run_user_show(self, expected_http_code=200):
for user_def in self.user_defs:
self.assert_user_show(
self.instance_info.id, user_def, expected_http_code)
def assert_user_show(self, instance_id, expected_user_def,
expected_http_code):
user_name = expected_user_def['name']
user_host = expected_user_def.get('host')
client = self.auth_client
queried_user = client.users.get(
instance_id, user_name, user_host)
self.assert_client_code(client, expected_http_code)
self._assert_user_matches(queried_user, expected_user_def)
def _assert_user_matches(self, user, expected_user_def):
user_name = expected_user_def['name']
self.assert_equal(expected_user_def['name'], user.name,
"Mismatch of names for user: %s" % user_name)
self.assert_list_elements_equal(
expected_user_def['databases'], user.databases,
"Mismatch of databases for user: %s" % user_name)
def run_users_list(self, expected_http_code=200):
self.assert_users_list(
self.instance_info.id, self.user_defs, expected_http_code)
def assert_users_list(self, instance_id, expected_user_defs,
expected_http_code, limit=2):
client = self.auth_client
full_list = client.users.list(instance_id)
self.assert_client_code(client, expected_http_code)
listed_users = {user.name: user for user in full_list}
self.assert_is_none(full_list.next,
"Unexpected pagination in the list.")
for user_def in expected_user_defs:
user_name = user_def['name']
self.assert_true(
user_name in listed_users,
"User not included in the 'user-list' output: %s" %
user_name)
self._assert_user_matches(listed_users[user_name], user_def)
# Check that the system (ignored) users are not included in the output.
system_users = self.get_system_users()
self.assert_false(
any(name in listed_users for name in system_users),
"System users should not be included in the 'user-list' output.")
# Test list pagination.
list_page = client.users.list(instance_id, limit=limit)
self.assert_client_code(client, expected_http_code)
self.assert_true(len(list_page) <= limit)
if len(full_list) > limit:
self.assert_is_not_none(list_page.next, "List page is missing.")
else:
self.assert_is_none(list_page.next, "An extra page in the list.")
marker = list_page.next
self.assert_pagination_match(list_page, full_list, 0, limit)
if marker:
last_user = list_page[-1]
expected_marker = self.as_pagination_marker(last_user)
self.assert_equal(expected_marker, marker,
"Pagination marker should be the last element "
"in the page.")
list_page = client.users.list(instance_id, marker=marker)
self.assert_client_code(client, expected_http_code)
self.assert_pagination_match(
list_page, full_list, limit, len(full_list))
def as_pagination_marker(self, user):
return urllib_parse.quote(user.name)
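`as_pagination_marker` URL-quotes the user name, and `assert_users_list` expects the server's `next` marker to equal the quoted name of the page's last element. That expectation can be sketched in isolation:

```python
from urllib.parse import quote


def expected_marker(page):
    # Mirror of as_pagination_marker: the marker for a page is the
    # URL-quoted name of its last element.
    return quote(page[-1])
```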
def run_user_access_show(self, expected_http_code=200):
for user_def in self.user_defs:
self.assert_user_access_show(
self.instance_info.id, user_def, expected_http_code)
def assert_user_access_show(self, instance_id, user_def,
expected_http_code):
user_name, user_host = self._get_user_name_host_pair(user_def)
client = self.auth_client
user_dbs = client.users.list_access(
instance_id, user_name, hostname=user_host)
self.assert_client_code(client, expected_http_code)
expected_dbs = {db_def['name'] for db_def in user_def['databases']}
listed_dbs = [db.name for db in user_dbs]
self.assert_equal(len(expected_dbs), len(listed_dbs),
"Unexpected number of databases on the user access "
"list.")
for database in expected_dbs:
self.assert_true(
database in listed_dbs,
"Database not found in the user access list: %s" % database)
def run_user_access_revoke(self, expected_http_code=202):
self._apply_on_all_databases(
self.instance_info.id, self.assert_user_access_revoke,
expected_http_code)
def _apply_on_all_databases(self, instance_id, action, expected_http_code):
if any(user_def['databases'] for user_def in self.user_defs):
for user_def in self.user_defs:
user_name, user_host = self._get_user_name_host_pair(user_def)
db_defs = user_def['databases']
for db_def in db_defs:
db_name = db_def['name']
action(instance_id, user_name, user_host,
db_name, expected_http_code)
else:
raise SkipTest("No user databases defined.")
def assert_user_access_revoke(self, instance_id, user_name, user_host,
database, expected_http_code):
client = self.auth_client
client.users.revoke(
instance_id, user_name, database, hostname=user_host)
self.assert_client_code(client, expected_http_code)
user_dbs = client.users.list_access(
instance_id, user_name, hostname=user_host)
self.assert_false(any(db.name == database for db in user_dbs),
"Database should no longer be included in the user "
"access list after revoke: %s" % database)
def run_user_access_grant(self, expected_http_code=202):
self._apply_on_all_databases(
self.instance_info.id, self.assert_user_access_grant,
expected_http_code)
def assert_user_access_grant(self, instance_id, user_name, user_host,
database, expected_http_code):
client = self.auth_client
client.users.grant(
instance_id, user_name, [database], hostname=user_host)
self.assert_client_code(client, expected_http_code)
user_dbs = client.users.list_access(
instance_id, user_name, hostname=user_host)
self.assert_true(any(db.name == database for db in user_dbs),
"Database should be included in the user "
"access list after granting access: %s" % database)
def run_user_create_with_no_attributes(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_users_create_failure(
self.instance_info.id, {}, expected_exception, expected_http_code)
def run_user_create_with_blank_name(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# Test with missing user name attribute.
no_name_usr_def = self.copy_dict(self.non_existing_user_def,
ignored_keys=['name'])
self.assert_users_create_failure(
self.instance_info.id, no_name_usr_def,
expected_exception, expected_http_code)
# Test with empty user name attribute.
blank_name_usr_def = self.copy_dict(self.non_existing_user_def)
blank_name_usr_def.update({'name': ''})
self.assert_users_create_failure(
self.instance_info.id, blank_name_usr_def,
expected_exception, expected_http_code)
def run_user_create_with_blank_password(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# Test with missing password attribute.
no_pass_usr_def = self.copy_dict(self.non_existing_user_def,
ignored_keys=['password'])
self.assert_users_create_failure(
self.instance_info.id, no_pass_usr_def,
expected_exception, expected_http_code)
# Test with missing databases attribute.
no_db_usr_def = self.copy_dict(self.non_existing_user_def,
ignored_keys=['databases'])
self.assert_users_create_failure(
self.instance_info.id, no_db_usr_def,
expected_exception, expected_http_code)
def run_existing_user_create(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_users_create_failure(
self.instance_info.id, self.first_user_def,
expected_exception, expected_http_code)
def run_system_user_create(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# TODO(pmalik): Actions on system users and databases should probably
# return Forbidden 403 instead. The current error messages are
# confusing (talking about a malformed request).
system_users = self.get_system_users()
if system_users:
user_defs = [{'name': name, 'password': 'password1',
'databases': []} for name in system_users]
self.assert_users_create_failure(
self.instance_info.id, user_defs,
expected_exception, expected_http_code)
def assert_users_create_failure(
self, instance_id, serial_users_def,
expected_exception, expected_http_code):
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.users.create, instance_id, serial_users_def)
def run_user_update_with_blank_name(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_user_attribute_update_failure(
self.instance_info.id, self.first_user_def, {'name': ''},
expected_exception, expected_http_code)
def run_user_update_with_existing_name(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
self.assert_user_attribute_update_failure(
self.instance_info.id, self.first_user_def,
{'name': self.first_user_def['name']},
expected_exception, expected_http_code)
def assert_user_attribute_update_failure(
self, instance_id, user_def, update_attributes,
expected_exception, expected_http_code):
user_name, user_host = self._get_user_name_host_pair(user_def)
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.users.update_attributes, instance_id,
user_name, update_attributes, user_host)
def _get_user_name_host_pair(self, user_def):
return user_def['name'], user_def.get('host')
def run_system_user_attribute_update(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# TODO(pmalik): Actions on system users and databases should probably
# return Forbidden 403 instead. The current error messages are
# confusing (talking about a malformed request).
system_users = self.get_system_users()
if system_users:
for name in system_users:
user_def = {'name': name, 'password': 'password2'}
self.assert_user_attribute_update_failure(
self.instance_info.id, user_def, user_def,
expected_exception, expected_http_code)
def run_user_attribute_update(self, expected_http_code=202):
updated_def = self.first_user_def
# Update the name by appending a suffix to it.
updated_name = ''.join([updated_def['name'], 'upd'])
update_attributes = {'name': updated_name,
'password': 'password2'}
self.assert_user_attribute_update(
self.instance_info.id, updated_def,
update_attributes, expected_http_code)
def assert_user_attribute_update(self, instance_id, user_def,
update_attributes, expected_http_code):
user_name, user_host = self._get_user_name_host_pair(user_def)
client = self.auth_client
client.users.update_attributes(
instance_id, user_name, update_attributes, user_host)
self.assert_client_code(client, expected_http_code)
# Update the stored definitions with the new value.
expected_def = None
for user_def in self.user_defs:
if user_def['name'] == user_name:
self.renamed_user_orig_def = dict(user_def)
user_def.update(update_attributes)
expected_def = user_def
self.wait_for_user_create(client, instance_id, self.user_defs)
# Verify using 'user-show' and 'user-list'.
self.assert_user_show(instance_id, expected_def, 200)
self.assert_users_list(instance_id, self.user_defs, 200)
def run_user_recreate_with_no_access(self, expected_http_code=202):
if (self.renamed_user_orig_def and
self.renamed_user_orig_def['databases']):
self.assert_user_recreate_with_no_access(
self.instance_info.id, self.renamed_user_orig_def,
expected_http_code)
else:
raise SkipTest("No renamed users with databases.")
def assert_user_recreate_with_no_access(self, instance_id, original_def,
expected_http_code=202):
# Recreate a previously renamed user without assigning any access
# rights to it.
recreated_user_def = dict(original_def)
recreated_user_def.update({'databases': []})
user_def = self.assert_users_create(
instance_id, [recreated_user_def], expected_http_code)
# Append the new user to defs for cleanup.
self.user_defs.extend(user_def)
# Assert empty user access.
self.assert_user_access_show(instance_id, recreated_user_def, 200)
def run_user_delete(self, expected_http_code=202):
for user_def in self.user_defs:
self.assert_user_delete(
self.instance_info.id, user_def, expected_http_code)
def assert_user_delete(self, instance_id, user_def, expected_http_code):
user_name, user_host = self._get_user_name_host_pair(user_def)
client = self.auth_client
client.users.delete(instance_id, user_name, user_host)
self.assert_client_code(client, expected_http_code)
self._wait_for_user_delete(client, instance_id, user_name)
def _wait_for_user_delete(self, client, instance_id, deleted_user_name):
self.report.log("Waiting for deleted user to disappear from the "
"listing: %s" % deleted_user_name)
def _user_is_gone():
all_users = self.get_user_names(client, instance_id)
return deleted_user_name not in all_users
try:
poll_until(_user_is_gone, time_out=self.GUEST_CAST_WAIT_TIMEOUT_SEC)
self.report.log("User is now gone from the instance.")
except exception.PollTimeOut:
self.fail("User still listed after the poll timeout: %ds" %
self.GUEST_CAST_WAIT_TIMEOUT_SEC)
def run_nonexisting_user_show(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_user_show_failure(
self.instance_info.id,
{'name': self.non_existing_user_def['name']},
expected_exception, expected_http_code)
def assert_user_show_failure(self, instance_id, user_def,
expected_exception, expected_http_code):
user_name, user_host = self._get_user_name_host_pair(user_def)
client = self.auth_client
self.assert_raises(
expected_exception, expected_http_code,
client, client.users.get, instance_id, user_name, user_host)
def run_system_user_show(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# TODO(pmalik): Actions on system users and databases should probably
# return Forbidden 403 instead. The current error messages are
# confusing (talking about a malformed request).
system_users = self.get_system_users()
if system_users:
for name in system_users:
self.assert_user_show_failure(
self.instance_info.id, {'name': name},
expected_exception, expected_http_code)
def run_nonexisting_user_update(self, expected_http_code=404):
# Test valid update on a non-existing user.
update_def = {'name': self.non_existing_user_def['name']}
self.assert_user_attribute_update_failure(
self.instance_info.id, update_def, update_def,
exceptions.NotFound, expected_http_code)
def run_nonexisting_user_delete(
self, expected_exception=exceptions.NotFound,
expected_http_code=404):
self.assert_user_delete_failure(
self.instance_info.id,
{'name': self.non_existing_user_def['name']},
expected_exception, expected_http_code)
def assert_user_delete_failure(
self, instance_id, user_def,
expected_exception, expected_http_code):
user_name, user_host = self._get_user_name_host_pair(user_def)
client = self.auth_client
self.assert_raises(expected_exception, expected_http_code,
client, client.users.delete,
instance_id, user_name, user_host)
def run_system_user_delete(
self, expected_exception=exceptions.BadRequest,
expected_http_code=400):
# TODO(pmalik): Actions on system users and databases should probably
# return Forbidden 403 instead. The current error messages are
# confusing (talking about a malformed request).
system_users = self.get_system_users()
if system_users:
for name in system_users:
self.assert_user_delete_failure(
self.instance_info.id, {'name': name},
expected_exception, expected_http_code)
def get_system_users(self):
return self.get_datastore_config_property('ignore_users')
class MysqlUserActionsRunner(UserActionsRunner):
def as_pagination_marker(self, user):
return urllib_parse.quote('%s@%s' % (user.name, user.host))
class MariadbUserActionsRunner(MysqlUserActionsRunner):
def __init__(self):
super(MariadbUserActionsRunner, self).__init__()
class PerconaUserActionsRunner(MysqlUserActionsRunner):
def __init__(self):
super(PerconaUserActionsRunner, self).__init__()
class PxcUserActionsRunner(MysqlUserActionsRunner):
def __init__(self):
super(PxcUserActionsRunner, self).__init__()
class PostgresqlUserActionsRunner(UserActionsRunner):
def run_user_update_with_existing_name(self):
raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)
def run_system_user_show(self):
raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)
def run_system_user_attribute_update(self):
raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)
def run_system_user_delete(self):
raise SkipKnownBug(runners.BUG_WRONG_API_VALIDATION)

View File

@ -15,7 +15,6 @@
# under the License.
from unittest.mock import MagicMock, Mock, patch, PropertyMock
from proboscis.asserts import assert_equal
from trove.backup.models import Backup
from trove.common.exception import TroveError, ReplicationSlaveAttachError
@ -61,7 +60,7 @@ class TestManager(trove_testtools.TestCase):
with patch.object(self.manager, '_get_replica_txns',
return_value=txn_list):
result = self.manager._most_current_replica(master, None)
assert_equal(result, selected_master)
self.assertEqual(result, selected_master)
with self.assertRaisesRegex(TroveError,
'not all replicating from same'):

View File

@ -1,205 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Matcher classes to be used inside of the testtools assertThat framework."""
import pprint
class DictKeysMismatch(object):
def __init__(self, d1only, d2only):
self.d1only = d1only
self.d2only = d2only
def describe(self):
return ('Keys in d1 and not d2: %(d1only)s.'
' Keys in d2 and not d1: %(d2only)s' % self.__dict__)
def get_details(self):
return {}
class DictMismatch(object):
def __init__(self, key, d1_value, d2_value):
self.key = key
self.d1_value = d1_value
self.d2_value = d2_value
def describe(self):
return ("Dictionaries do not match at %(key)s."
" d1: %(d1_value)s d2: %(d2_value)s" % self.__dict__)
def get_details(self):
return {}
class DictMatches(object):
def __init__(self, d1, approx_equal=False, tolerance=0.001):
self.d1 = d1
self.approx_equal = approx_equal
self.tolerance = tolerance
def __str__(self):
return 'DictMatches(%s)' % (pprint.pformat(self.d1))
# Useful assertions
def match(self, d2):
"""Assert two dicts are equivalent.
This is a 'deep' match in the sense that it handles nested
dictionaries appropriately.
NOTE:
If you don't care (or don't know) a given value, you can specify
the string DONTCARE as the value. This will cause that dict-item
to be skipped.
"""
d1keys = set(self.d1.keys())
d2keys = set(d2.keys())
if d1keys != d2keys:
d1only = d1keys - d2keys
d2only = d2keys - d1keys
return DictKeysMismatch(d1only, d2only)
for key in d1keys:
d1value = self.d1[key]
d2value = d2[key]
try:
error = abs(float(d1value) - float(d2value))
within_tolerance = error <= self.tolerance
except (ValueError, TypeError):
# If both values aren't convertible to float, just ignore
# ValueError if arg is a str, TypeError if it's something else
# (like None)
within_tolerance = False
if hasattr(d1value, 'keys') and hasattr(d2value, 'keys'):
matcher = DictMatches(d1value)
did_match = matcher.match(d2value)
if did_match is not None:
return did_match
elif 'DONTCARE' in (d1value, d2value):
continue
elif self.approx_equal and within_tolerance:
continue
elif d1value != d2value:
return DictMismatch(key, d1value, d2value)
class ListLengthMismatch(object):
def __init__(self, len1, len2):
self.len1 = len1
self.len2 = len2
def describe(self):
return ('Length mismatch: len(L1)=%(len1)d != '
'len(L2)=%(len2)d' % self.__dict__)
def get_details(self):
return {}
class DictListMatches(object):
def __init__(self, l1, approx_equal=False, tolerance=0.001):
self.l1 = l1
self.approx_equal = approx_equal
self.tolerance = tolerance
def __str__(self):
return 'DictListMatches(%s)' % (pprint.pformat(self.l1))
# Useful assertions
def match(self, l2):
"""Assert a list of dicts are equivalent."""
l1count = len(self.l1)
l2count = len(l2)
if l1count != l2count:
return ListLengthMismatch(l1count, l2count)
for d1, d2 in zip(self.l1, l2):
matcher = DictMatches(d2,
approx_equal=self.approx_equal,
tolerance=self.tolerance)
did_match = matcher.match(d1)
if did_match:
return did_match
class SubDictMismatch(object):
def __init__(self,
key=None,
sub_value=None,
super_value=None,
keys=False):
self.key = key
self.sub_value = sub_value
self.super_value = super_value
self.keys = keys
def describe(self):
if self.keys:
return "Keys between dictionaries did not match"
else:
return ("Dictionaries do not match at %s. d1: %s d2: %s"
% (self.key,
self.super_value,
self.sub_value))
def get_details(self):
return {}
class IsSubDictOf(object):
def __init__(self, super_dict):
self.super_dict = super_dict
def __str__(self):
return 'IsSubDictOf(%s)' % (self.super_dict)
def match(self, sub_dict):
"""Assert a sub_dict is subset of super_dict."""
if not set(sub_dict.keys()).issubset(set(self.super_dict.keys())):
return SubDictMismatch(keys=True)
for k, sub_value in sub_dict.items():
super_value = self.super_dict[k]
if isinstance(sub_value, dict):
matcher = IsSubDictOf(super_value)
did_match = matcher.match(sub_value)
if did_match is not None:
return did_match
elif 'DONTCARE' in (sub_value, super_value):
continue
else:
if sub_value != super_value:
return SubDictMismatch(k, sub_value, super_value)
class FunctionCallMatcher(object):
def __init__(self, expected_func_calls):
self.expected_func_calls = expected_func_calls
self.actual_func_calls = []
def call(self, *args, **kwargs):
func_call = {'args': args, 'kwargs': kwargs}
self.actual_func_calls.append(func_call)
def match(self):
dict_list_matcher = DictListMatches(self.expected_func_calls)
return dict_list_matcher.match(self.actual_func_calls)
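For context on what is being deleted here: `DictMatches` implemented a deep dictionary comparison with a `DONTCARE` wildcard, used with testtools' `assertThat`. A minimal standalone sketch of those semantics (the function name `dict_matches` is illustrative, not from the original file):

```python
# Minimal sketch of the DictMatches semantics removed above:
# 'DONTCARE' values are skipped, nested dicts are compared recursively.
def dict_matches(d1, d2):
    """Return None on a match, or a string describing the first mismatch."""
    if set(d1) != set(d2):
        return 'keys differ: %s vs %s' % (sorted(d1), sorted(d2))
    for key, v1 in d1.items():
        v2 = d2[key]
        if 'DONTCARE' in (v1, v2):
            continue  # caller opted out of comparing this item
        if isinstance(v1, dict) and isinstance(v2, dict):
            sub = dict_matches(v1, v2)
            if sub is not None:
                return sub
        elif v1 != v2:
            return 'mismatch at %r: %r != %r' % (key, v1, v2)
    return None


print(dict_matches({'a': 1, 'b': {'c': 'DONTCARE'}},
                   {'a': 1, 'b': {'c': 99}}))  # None (matches)
```

Like the original matcher, this returns `None` on success and a mismatch description otherwise, which is the contract testtools expects from a matcher's `match()`.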

View File

@ -1,324 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`tests` -- Utility methods for tests.
===================================
.. automodule:: utils
:platform: Unix
:synopsis: Tests for Nova.
"""
import subprocess
from urllib.parse import unquote
try:
EVENT_AVAILABLE = True
except ImportError:
EVENT_AVAILABLE = False
import glanceclient
from keystoneauth1.identity import v3
from keystoneauth1 import session
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client
from proboscis.asserts import assert_true
from proboscis.asserts import Check
from proboscis.asserts import fail
from proboscis import SkipTest
from sqlalchemy import create_engine
from sqlalchemy.sql.expression import text
import tenacity
from troveclient.compat import Dbaas
from trove.common import cfg
from trove.common.utils import import_class
from trove.common.utils import import_object
from trove.tests.config import CONFIG as test_config
from trove.tests.util.client import TestClient
from trove.tests.util import mysql
from trove.tests.util import test_config as CONFIG
from trove.tests.util.users import Requirements
WHITE_BOX = test_config.white_box
FLUSH = text("FLUSH PRIVILEGES;")
CONF = cfg.CONF
def create_client(*args, **kwargs):
"""
Using the User Requirements as arguments, finds a user and grabs a new
DBAAS client.
"""
reqs = Requirements(*args, **kwargs)
user = test_config.users.find_user(reqs)
return create_dbaas_client(user)
def create_dbaas_client(user):
"""Creates a rich client for the Trove API using the test config."""
auth_strategy = None
kwargs = {
'service_type': 'database',
'insecure': test_config.values['trove_client_insecure'],
}
def set_optional(kwargs_name, test_conf_name):
value = test_config.values.get(test_conf_name, None)
if value is not None:
kwargs[kwargs_name] = value
force_url = 'override_trove_api_url' in test_config.values
service_url = test_config.get('override_trove_api_url', None)
if user.requirements.is_admin:
service_url = test_config.get('override_admin_trove_api_url',
service_url)
if service_url:
kwargs['service_url'] = service_url
auth_strategy = None
if user.requirements.is_admin:
auth_strategy = test_config.get('admin_auth_strategy',
test_config.auth_strategy)
else:
auth_strategy = test_config.auth_strategy
set_optional('region_name', 'trove_client_region_name')
if test_config.values.get('override_trove_api_url_append_tenant',
False):
kwargs['service_url'] += "/" + user.tenant
if auth_strategy == 'fake':
from troveclient.compat import auth
class FakeAuth(auth.Authenticator):
def authenticate(self):
class FakeCatalog(object):
def __init__(self, auth):
self.auth = auth
def get_public_url(self):
return "%s/%s" % (test_config.dbaas_url,
self.auth.tenant)
def get_token(self):
return self.auth.tenant
return FakeCatalog(self)
auth_strategy = FakeAuth
if auth_strategy:
kwargs['auth_strategy'] = auth_strategy
if not user.requirements.is_admin:
auth_url = test_config.trove_auth_url
else:
auth_url = test_config.values.get('trove_admin_auth_url',
test_config.trove_auth_url)
if test_config.values.get('trove_client_cls'):
cls_name = test_config.trove_client_cls
kwargs['client_cls'] = import_class(cls_name)
dbaas = Dbaas(user.auth_user, user.auth_key, tenant=user.tenant,
auth_url=auth_url, **kwargs)
dbaas.authenticate()
with Check() as check:
check.is_not_none(dbaas.client.auth_token, "Auth token not set!")
if not force_url and user.requirements.is_admin:
expected_prefix = test_config.dbaas_url
actual = dbaas.client.service_url
msg = "Dbaas management url was expected to start with %s, but " \
"was %s." % (expected_prefix, actual)
check.true(actual.startswith(expected_prefix), msg)
return TestClient(dbaas)
def create_keystone_session(user):
auth = v3.Password(username=user.auth_user,
password=user.auth_key,
project_id=user.tenant_id,
user_domain_name='Default',
project_domain_name='Default',
auth_url=test_config.auth_url)
return session.Session(auth=auth)
def create_nova_client(user, service_type=None):
if not service_type:
service_type = CONF.nova_compute_service_type
openstack = nova_client.Client(
CONF.nova_client_version,
username=user.auth_user,
password=user.auth_key,
user_domain_name='Default',
project_id=user.tenant_id,
auth_url=CONFIG.auth_url,
service_type=service_type, os_cache=False,
cacert=test_config.values.get('cacert', None)
)
return TestClient(openstack)
def create_neutron_client(user):
sess = create_keystone_session(user)
client = neutron_client.Client(
session=sess,
service_type=CONF.neutron_service_type,
region_name=CONFIG.trove_client_region_name,
insecure=CONF.neutron_api_insecure,
endpoint_type=CONF.neutron_endpoint_type
)
return TestClient(client)
def create_glance_client(user):
sess = create_keystone_session(user)
glance = glanceclient.Client(CONF.glance_client_version, session=sess)
return TestClient(glance)
def dns_checker(mgmt_instance):
"""Given a MGMT instance, ensures DNS provisioning worked.
Uses a helper class which, given a mgmt instance (returned by the mgmt
API) can confirm that the DNS record provisioned correctly.
"""
if CONFIG.values.get('trove_dns_checker') is not None:
checker = import_class(CONFIG.trove_dns_checker)
checker()(mgmt_instance)
else:
raise SkipTest("Can't access DNS system to check if DNS provisioned.")
def process(cmd):
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output
def string_in_list(str, substr_list):
"""Returns True if any of the substrings appears in the string."""
return any([str.find(x) >= 0 for x in substr_list])
def unquote_user_host(user_hostname):
unquoted = unquote(user_hostname)
if '@' not in unquoted:
return unquoted, '%'
if unquoted.endswith('@'):
return unquoted, '%'
splitup = unquoted.split('@')
host = splitup[-1]
user = '@'.join(splitup[:-1])
return user, host
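The `user@host` parsing above percent-decodes the name and defaults the host to the MySQL wildcard `'%'`; a self-contained sketch mirroring the deleted helper:

```python
from urllib.parse import unquote


def unquote_user_host(user_hostname):
    # Same logic as the deleted helper: decode, then split on the
    # last '@', defaulting the host to the MySQL wildcard '%'.
    unquoted = unquote(user_hostname)
    if '@' not in unquoted or unquoted.endswith('@'):
        return unquoted, '%'
    user, _, host = unquoted.rpartition('@')
    return user, host


print(unquote_user_host('alice'))            # ('alice', '%')
print(unquote_user_host('alice@10.0.0.5'))   # ('alice', '10.0.0.5')
print(unquote_user_host('a%40b@localhost'))  # ('a@b', 'localhost')
```

Splitting on the *last* `'@'` matters because the user name itself may contain `'@'` once percent-decoded.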
def iso_time(time_string):
"""Return an ISO-formatted datetime: 2013-04-15T19:50:23Z."""
ts = time_string.replace(' ', 'T')
try:
micro = ts.rindex('.')
ts = ts[:micro]
except ValueError:
pass
return '%sZ' % ts
def assert_contains(exception_message, substrings):
for substring in substrings:
assert_true(substring in exception_message,
message="'%s' not in '%s'"
% (substring, exception_message))
# TODO(dukhlov): Still required by trove integration
# Should be removed after trove integration fix
# https://bugs.launchpad.net/trove-integration/+bug/1228306
# TODO(cp16net): DO NOT USE needs to be removed
def mysql_connection():
cls = CONFIG.get('mysql_connection',
"local.MySqlConnection")
if cls == "local.MySqlConnection":
return MySqlConnection()
return import_object(cls)()
class MySqlConnection(object):
def assert_fails(self, ip, user_name, password):
try:
with mysql.create_mysql_connection(ip, user_name, password):
pass
fail("Should have failed to connect: mysql --host %s -u %s -p%s"
% (ip, user_name, password))
except mysql.MySqlPermissionsFailure:
return # Good, this is what we wanted.
except mysql.MySqlConnectionFailure as mcf:
fail("Expected to see permissions failure. Instead got message:"
"%s" % mcf.message)
@tenacity.retry(
wait=tenacity.wait_fixed(3),
stop=tenacity.stop_after_attempt(5),
reraise=True
)
def create(self, ip, user_name, password):
print("Connecting mysql, host: %s, user: %s, password: %s" %
(ip, user_name, password))
return mysql.create_mysql_connection(ip, user_name, password)
class LocalSqlClient(object):
"""A sqlalchemy wrapper to manage transactions."""
def __init__(self, engine, use_flush=True):
self.engine = engine
self.use_flush = use_flush
def __enter__(self):
self.conn = self.engine.connect()
self.trans = self.conn.begin()
return self.conn
def __exit__(self, type, value, traceback):
if self.trans:
if type is not None: # An error occurred
self.trans.rollback()
else:
if self.use_flush:
self.conn.execute(FLUSH)
self.trans.commit()
self.conn.close()
def execute(self, t, **kwargs):
try:
return self.conn.execute(t, kwargs)
except Exception:
self.trans.rollback()
self.trans = None
raise
@staticmethod
def init_engine(user, password, host):
return create_engine("mysql+pymysql://%s:%s@%s:3306" %
(user, password, host),
pool_recycle=1800, echo=True)
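The `LocalSqlClient` above is a transaction wrapper: commit on clean exit, roll back when the block raises. A generic sketch of that pattern, using stdlib `sqlite3` instead of SQLAlchemy so it is self-contained (the `TxnClient` name is illustrative):

```python
import sqlite3


class TxnClient:
    """Context manager: commit on success, roll back on exception."""

    def __init__(self, conn):
        self.conn = conn

    def __enter__(self):
        return self.conn

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            self.conn.rollback()  # undo the block's writes
        else:
            self.conn.commit()
        return False  # never swallow the exception


conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')
conn.commit()
with TxnClient(conn) as c:
    c.execute('INSERT INTO t VALUES (1)')
```

The original additionally issued `FLUSH PRIVILEGES;` before committing (`use_flush`), a MySQL-specific step with no sqlite analogue.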

View File

@ -1,204 +0,0 @@
# Copyright (c) 2012 OpenStack
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Like asserts, but does not raise an exception until the end of a block."""
import traceback
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_false
from proboscis.asserts import assert_not_equal
from proboscis.asserts import assert_true
from proboscis.asserts import ASSERTION_ERROR
from proboscis.asserts import Check
def get_stack_trace_of_caller(level_up):
"""Gets the stack trace at the point of the caller."""
level_up += 1
st = traceback.extract_stack()
caller_index = len(st) - level_up
if caller_index < 0:
caller_index = 0
new_st = st[0:caller_index]
return new_st
def raise_blame_caller(level_up, ex):
"""Raises an exception, changing the stack trace to point to the caller."""
new_st = get_stack_trace_of_caller(level_up + 2)
raise ex.with_traceback(new_st)
class Checker(object):
def __init__(self):
self.messages = []
self.odd = True
self.protected = False
def _add_exception(self, _type, value, tb):
"""Takes an exception, and adds it as a string."""
if self.odd:
prefix = "* "
else:
prefix = "- "
start = "Check failure! Traceback:"
middle = prefix.join(traceback.format_list(tb))
end = '\n'.join(traceback.format_exception_only(_type, value))
msg = '\n'.join([start, middle, end])
self.messages.append(msg)
self.odd = not self.odd
def equal(self, *args, **kwargs):
self._run_assertion(assert_equal, *args, **kwargs)
def false(self, *args, **kwargs):
self._run_assertion(assert_false, *args, **kwargs)
def not_equal(self, *args, **kwargs):
self._run_assertion(assert_not_equal, *args, **kwargs)
def _run_assertion(self, assert_func, *args, **kwargs):
"""
Runs an assertion method, but catches any failure and adds it as a
string to the messages list.
"""
if self.protected:
try:
assert_func(*args, **kwargs)
except ASSERTION_ERROR as ae:
st = get_stack_trace_of_caller(2)
self._add_exception(ASSERTION_ERROR, ae, st)
else:
assert_func(*args, **kwargs)
def __enter__(self):
self.protected = True
return self
def __exit__(self, _type, value, tb):
self.protected = False
if _type is not None:
# An error occurred other than an assertion failure.
# Return False to allow the Exception to be raised
return False
if len(self.messages) != 0:
final_message = '\n'.join(self.messages)
raise ASSERTION_ERROR(final_message)
def true(self, *args, **kwargs):
self._run_assertion(assert_true, *args, **kwargs)
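The `Checker` class above implements deferred assertions: failures inside a `with` block are collected and raised together at block exit. That pattern can be sketched without proboscis, against plain `AssertionError` (class name `DeferredChecks` is illustrative):

```python
class DeferredChecks:
    """Collect assertion failures; raise them all when the block ends."""

    def __init__(self):
        self.failures = []

    def equal(self, actual, expected, msg=''):
        if actual != expected:
            self.failures.append(
                '%s: %r != %r' % (msg or 'equal', actual, expected))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            return False  # real errors propagate unchanged
        if self.failures:
            raise AssertionError('\n'.join(self.failures))
```

Usage mirrors the original: `with DeferredChecks() as check: check.equal(...)` reports every mismatch in one message instead of stopping at the first.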
class AttrCheck(Check):
"""Class for attr checks, links and other common items."""
def __init__(self):
super(AttrCheck, self).__init__()
def fail(self, msg):
self.true(False, msg)
def contains_allowed_attrs(self, list, allowed_attrs, msg=None):
# Check these attrs only are returned in create response
for attr in list:
if attr not in allowed_attrs:
self.fail("%s should not contain '%s'" % (msg, attr))
def links(self, links):
allowed_attrs = ['href', 'rel']
for link in links:
self.contains_allowed_attrs(link, allowed_attrs, msg="Links")
class CollectionCheck(Check):
"""Checks for elements in a dictionary."""
def __init__(self, name, collection):
self.name = name
self.collection = collection
super(CollectionCheck, self).__init__()
def element_equals(self, key, expected_value):
if key not in self.collection:
message = 'Element "%s.%s" does not exist.' % (self.name, key)
self.fail(message)
else:
value = self.collection[key]
self.equal(value, expected_value)
def has_element(self, key, element_type):
if key not in self.collection:
message = 'Element "%s.%s" does not exist.' % (self.name, key)
self.fail(message)
else:
value = self.collection[key]
match = False
if not isinstance(element_type, tuple):
type_list = [element_type]
else:
type_list = element_type
for possible_type in type_list:
if possible_type is None:
if value is None:
match = True
else:
if isinstance(value, possible_type):
match = True
if not match:
self.fail('Element "%s.%s" does not match any of these '
'expected types: %s' % (self.name, key, type_list))
class TypeCheck(Check):
"""Checks for attributes in an object."""
def __init__(self, name, instance):
self.name = name
self.instance = instance
super(TypeCheck, self).__init__()
def _check_type(self, value, attribute_name, attribute_type):
# Note: the original referenced an undefined self.attribute_name;
# the name is passed in explicitly here instead.
if not isinstance(value, attribute_type):
self.fail("%s attribute %s is of type %s (expected %s)."
% (self.name, attribute_name, type(value),
attribute_type))
def has_field(self, attribute_name, attribute_type,
additional_checks=None):
if not hasattr(self.instance, attribute_name):
self.fail("%s missing attribute %s." % (self.name, attribute_name))
else:
value = getattr(self.instance, attribute_name)
match = False
if isinstance(attribute_type, tuple):
type_list = attribute_type
else:
type_list = [attribute_type]
for possible_type in type_list:
if possible_type is None:
if value is None:
match = True
else:
if isinstance(value, possible_type):
match = True
if not match:
self.fail("%s attribute %s is of type %s (expected one of "
"the following: %s)." % (self.name, attribute_name,
type(value),
attribute_type))
if match and additional_checks:
additional_checks(value)

View File

@ -1,75 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`tests` -- Utility methods for tests.
===================================
.. automodule:: utils
:platform: Unix
:synopsis: Tests for Nova.
"""
from proboscis import asserts
from trove.tests.config import CONFIG
def add_report_event_to(home, name):
"""Takes a module, class, etc, and an attribute name to decorate."""
func = getattr(home, name)
def __cb(*args, **kwargs):
# %s would also turn a value into a string, but in some rare cases
# an explicit repr() is less likely to raise an exception.
arg_strs = [repr(arg) for arg in args]
arg_strs += ['%s=%s' % (repr(key), repr(value))
for (key, value) in kwargs.items()]
CONFIG.get_reporter().log("[RDC] Calling : %s(%s)..."
% (name, ','.join(arg_strs)))
value = func(*args, **kwargs)
CONFIG.get_reporter().log("[RDC] returned %s." % str(value))
return value
setattr(home, name, __cb)
class TestClient(object):
"""Decorates the rich clients with some extra methods.
These methods are filled with test asserts, meaning if you use this you
get the tests for free.
"""
def __init__(self, real_client):
"""Accepts a normal client."""
self.real_client = real_client
def assert_http_code(self, expected_http_code):
resp, body = self.real_client.client.last_response
asserts.assert_equal(resp.status, expected_http_code)
@property
def last_http_code(self):
resp, body = self.real_client.client.last_response
return resp.status
def __getattr__(self, item):
if item == "__setstate__":
raise AttributeError(item)
if hasattr(self.real_client, item):
return getattr(self.real_client, item)
raise AttributeError(item)

View File

@ -1,336 +0,0 @@
# Copyright 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""
Simulates time itself to make the fake mode tests run even faster.
Specifically, this forces all various threads of execution to run one at a time
based on when they would have been scheduled using the various eventlet spawn
functions. Because only one thing is running at a given time, it eliminates
race conditions that would normally be present from testing multi-threaded
scenarios. It also means that the simulated time.sleep does not actually have
to sit around for the designated time, which greatly speeds up the time it
takes to run the tests.
Event Simulator Overview
========================
We use this to simulate all the threads of Trove running.
i.e. (api,taskmanager,proboscis tests). All the services end
up sleeping and having to wait for something to happen at times.
Monkey Patching Methods
-----------------------
We monkey patch a few methods to make this happen.
A few sleep methods with a fake_sleep.
* time.sleep
* eventlet.sleep
* greenthread.sleep
A few spawn methods with a fake_spawn
* eventlet.spawn_after
* eventlet.spawn_n
Raise an error if you try this one.
* eventlet.spawn
Replace the poll_until with a fake_poll_until.
Coroutine Object
----------------
There is a Coroutine object here that mimics the behavior of a thread.
It takes in a function with args and kwargs and executes it. If at any
point that method calls time.sleep(seconds) then the event simulator will
put that method on the stack of threads and run the fake_sleep method
that will then iterate over all the threads in the stack updating the time
they still need to sleep. Then as the threads hit the end of their sleep
time period they will continue to execute.
fake_threads
------------
One thing to note here is the idea of a stack of threads being kept in
fake_threads list. Any new thread created is added to this stack.
A fake_thread attributes:
fake_thread = {
'sleep': time_from_now_in_seconds,
'greenlet': Coroutine(method_to_execute),
'name': str(func)
}
'sleep' is the time it should wait to execute this method.
'greenlet' is the thread object
'name' is the unique name of the thread to track
main_loop Method
----------------
The main_loop method is a loop that runs forever waiting on all the
threads to complete while running pulse every 0.1 seconds. This is the
key to simulating the threads quickly. We pulse every 0.1
seconds to make sure no thread is left just waiting around for
no reason rather than waiting a full second to respond.
pulse Method
------------
The pulse method goes through the stack (list) of threads looking for
the next thread to execute while updating each 'sleep' time; if a
'sleep' time is <= 0 it runs that thread until it calls
time.sleep again.
If the running method/thread calls time.sleep for whatever reason,
the thread's 'sleep' parameter is updated to the new 'next_sleep_time'.
If the running method/thread completes without calling time.sleep
because it finished all of its work, then 'next_sleep_time' is set to
None and the method/thread is deleted from the stack (list) of threads.
"""
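The pulse/fake_threads scheduling described in the docstring can be sketched in isolation (this is illustrative, not the Trove code; the `'run'` callback stands in for resuming the real coroutine):

```python
# Sketch of the "pulse" idea: decrement each fake thread's sleep budget,
# resume whichever thread is due, and drop it once it finishes.
def pulse(fake_threads, seconds):
    for ft in list(fake_threads):
        ft['sleep'] -= seconds
        if ft['sleep'] <= 0:
            ft['next_sleep_time'] = None
            ft['run']()  # resumes the thread until its next time.sleep
            if ft['next_sleep_time'] is None:
                fake_threads.remove(ft)  # thread finished all its work
            else:
                ft['sleep'] = ft['next_sleep_time']
```

Because only one thread runs at a time and sleeps never block for real, a multi-second scenario completes in a tight loop of 0.1-second pulses.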
import eventlet
from eventlet.event import Event
from eventlet.semaphore import Semaphore
from eventlet import spawn as true_spawn
class Coroutine(object):
"""
This class simulates a coroutine, which is ironic, as greenlet actually
*is* a coroutine. But trying to use greenlet here gives nasty results
since eventlet thoroughly monkey-patches things, making it difficult
to run greenlet on its own.
Essentially think of this as a wrapper for eventlet's threads which has a
run and sleep function similar to old school coroutines, meaning it won't
start until told and when asked to sleep it won't wake back up without
permission.
"""
ALL = []
def __init__(self, func, *args, **kwargs):
self.my_sem = Semaphore(0) # This is held by the thread as it runs.
self.caller_sem = None
self.dead = False
started = Event()
self.id = 5
self.ALL.append(self)
def go():
self.id = eventlet.corolocal.get_ident()
started.send(True)
self.my_sem.acquire(blocking=True, timeout=None)
try:
func(*args, **kwargs)
# except Exception as e:
# print("Exception in coroutine! %s" % e)
finally:
self.dead = True
self.caller_sem.release() # Relinquish control back to caller.
for i in range(len(self.ALL)):
if self.ALL[i].id == self.id:
del self.ALL[i]
break
true_spawn(go)
started.wait()
@classmethod
def get_current(cls):
"""Finds the coroutine associated with the thread which calls it."""
return cls.get_by_id(eventlet.corolocal.get_ident())
@classmethod
def get_by_id(cls, id):
for cr in cls.ALL:
if cr.id == id:
return cr
raise RuntimeError("Coroutine with id %s not found!" % id)
def sleep(self):
"""Puts the coroutine to sleep until run is called again.
This should only be called by the thread which owns this object.
"""
# Only call this from its own thread.
assert eventlet.corolocal.get_ident() == self.id
self.caller_sem.release() # Relinquish control back to caller.
self.my_sem.acquire(blocking=True, timeout=None)
def run(self):
"""Starts up the thread. Should be called from a different thread."""
# Don't call this from the thread which it represents.
assert eventlet.corolocal.get_ident() != self.id
self.caller_sem = Semaphore(0)
self.my_sem.release()
self.caller_sem.acquire() # Wait for it to finish.
# Main global thread to run.
main_greenlet = None
# Stack of threads currently running or sleeping
fake_threads = []
# Allow a sleep method to be called at least this number of times before
# raising an error that there are no other active threads waiting to run.
allowable_empty_sleeps = 1
sleep_allowance = allowable_empty_sleeps
def other_threads_are_active():
"""Returns True if concurrent activity is being simulated.
Specifically, this means there is a fake thread in action other than the
"pulse" thread and the main test thread.
"""
return len(fake_threads) >= 2
def fake_sleep(time_to_sleep):
"""Simulates sleep.
Puts the coroutine which calls it to sleep. If a coroutine object is not
associated with the caller this will fail.
"""
if time_to_sleep:
global sleep_allowance
sleep_allowance -= 1
if not other_threads_are_active():
if sleep_allowance < -1:
raise RuntimeError("Sleeping for no reason.")
else:
return  # Forgive the thread for calling this one time.
sleep_allowance = allowable_empty_sleeps
cr = Coroutine.get_current()
for ft in fake_threads:
if ft['greenlet'].id == cr.id:
ft['next_sleep_time'] = time_to_sleep
cr.sleep()
def fake_poll_until(retriever, condition=lambda value: value,
sleep_time=1, time_out=0):
"""Fakes out poll until."""
from trove.common import exception
slept_time = 0
while True:
resource = retriever()
if condition(resource):
return resource
fake_sleep(sleep_time)
slept_time += sleep_time
if time_out and slept_time >= time_out:
raise exception.PollTimeOut()
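fake_poll_until above mirrors the contract of trove.common.utils.poll_until. A self-contained sketch of that contract (illustrative names only; the real fake calls fake_sleep and raises trove's exception.PollTimeOut):

```python
class PollTimeOut(Exception):
    """Stand-in for trove.common.exception.PollTimeOut."""


def poll_until(retriever, condition=lambda value: value,
               sleep_time=1, time_out=0, sleep=lambda seconds: None):
    """Call retriever() until condition(result) is truthy.

    The sleep callable is injectable so a fake clock (such as fake_sleep)
    can be substituted; time_out=0 means poll forever.
    """
    slept_time = 0
    while True:
        resource = retriever()
        if condition(resource):
            return resource
        sleep(sleep_time)
        slept_time += sleep_time
        if time_out and slept_time >= time_out:
            raise PollTimeOut()


counter = iter(range(10))
result = poll_until(lambda: next(counter), lambda v: v >= 3)  # returns 3
```

Injecting the sleep callable keeps the helper testable without patching time.sleep globally, which is exactly what monkey_patch() does for the real code.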
def run_main(func):
"""Runs the given function as the initial thread of the event simulator."""
global main_greenlet
main_greenlet = Coroutine(main_loop)
fake_spawn(0, func)
main_greenlet.run()
def main_loop():
"""The coroutine responsible for calling each "fake thread."
The Coroutine which calls this is the only one that won't end up being
associated with the fake_threads list. The reason is this loop needs to
wait on whatever thread is running, meaning it has to be a Coroutine as
well.
"""
while len(fake_threads) > 0:
pulse(0.1)
def fake_spawn_n(func, *args, **kw):
fake_spawn(0, func, *args, **kw)
def fake_spawn(time_from_now_in_seconds, func, *args, **kw):
"""Fakes eventlet's spawn function by adding a fake thread."""
def thread_start():
# fake_sleep(time_from_now_in_seconds)
return func(*args, **kw)
cr = Coroutine(thread_start)
fake_threads.append({'sleep': time_from_now_in_seconds,
'greenlet': cr,
'name': str(func)})
def pulse(seconds):
"""
Runs the event simulator for the amount of simulated time denoted by
"seconds".
"""
index = 0
while index < len(fake_threads):
t = fake_threads[index]
t['sleep'] -= seconds
if t['sleep'] <= 0:
t['sleep'] = 0
t['next_sleep_time'] = None
t['greenlet'].run()
sleep_time = t['next_sleep_time']
if sleep_time is None or isinstance(sleep_time, tuple):
del fake_threads[index]
index -= 1
else:
t['sleep'] = sleep_time
index += 1
def wait_until_all_activity_stops():
"""In fake mode, wait for all simulated events to chill out.
This can be useful in situations where you need simulated activity (such
as calls running in TaskManager) to "bleed out" and finish before running
another test.
"""
if main_greenlet is None:
return
while other_threads_are_active():
fake_sleep(1)
def monkey_patch():
"""
Changes global functions such as time.sleep, eventlet.spawn* and others
to their event_simulator equivalents.
"""
import time
time.sleep = fake_sleep
import eventlet
from eventlet import greenthread
eventlet.sleep = fake_sleep
greenthread.sleep = fake_sleep
eventlet.spawn_after = fake_spawn
def raise_error():
raise RuntimeError("Illegal operation!")
eventlet.spawn_n = fake_spawn_n
eventlet.spawn = raise_error
from trove.common import utils
utils.poll_until = fake_poll_until


@ -1,180 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 Rackspace Hosting
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import re
from oslo_db.sqlalchemy import engines
import pexpect
from sqlalchemy.exc import OperationalError
try:
from sqlalchemy.exc import ResourceClosedError
except ImportError:
ResourceClosedError = Exception
from trove import tests
from trove.tests.config import CONFIG
def create_mysql_connection(host, user, password):
connection = CONFIG.mysql_connection_method
if connection['type'] == "direct":
return SqlAlchemyConnection(host, user, password)
elif connection['type'] == "tunnel":
if 'ssh' not in connection:
raise RuntimeError("If connection type is 'tunnel' then a "
"property 'ssh' is expected.")
return PexpectMySqlConnection(connection['ssh'], host, user, password)
else:
raise RuntimeError("Unknown connection type in test configuration "
"for mysql_connection_method")
class MySqlConnectionFailure(RuntimeError):
def __init__(self, msg):
super(MySqlConnectionFailure, self).__init__(msg)
class MySqlPermissionsFailure(RuntimeError):
def __init__(self, msg):
super(MySqlPermissionsFailure, self).__init__(msg)
class SqlAlchemyConnection(object):
def __init__(self, host, user, password):
self.host = host
self.user = user
self.password = password
try:
self.engine = self._init_engine(user, password, host)
except OperationalError as oe:
if self._exception_is_permissions_issue(str(oe)):
raise MySqlPermissionsFailure(oe)
else:
raise MySqlConnectionFailure(oe)
@staticmethod
def _exception_is_permissions_issue(msg):
"""Check whether the message cites a permissions issue, not something else."""
pos_error = re.compile(r".*Host '[\w\.]*' is not allowed to connect "
"to this MySQL server.*")
pos_error1 = re.compile(".*Access denied for user "
r"'[\w\*\!\@\#\^\&]*'@'[\w\.]*'.*")
if (pos_error.match(msg) or pos_error1.match(msg)):
return True
def __enter__(self):
try:
self.conn = self.engine.connect()
except OperationalError as oe:
if self._exception_is_permissions_issue(str(oe)):
raise MySqlPermissionsFailure(oe)
else:
raise MySqlConnectionFailure(oe)
self.trans = self.conn.begin()
return self
def execute(self, cmd):
"""Execute some code."""
cmd = cmd.replace("%", "%%")
try:
return self.conn.execute(cmd).fetchall()
except Exception:
self.trans.rollback()
self.trans = None
try:
raise
except ResourceClosedError:
return []
def __exit__(self, type, value, traceback):
if self.trans:
if type is not None: # An error occurred
self.trans.rollback()
else:
self.trans.commit()
self.conn.close()
@staticmethod
def _init_engine(user, password, host):
return engines.create_engine(
"mysql+pymysql://%s:%s@%s:3306" % (user, password, host))
class PexpectMySqlConnection(object):
TIME_OUT = 30
def __init__(self, ssh_args, host, user, password):
self.host = host
self.user = user
self.password = password
cmd = '%s %s' % (tests.SSH_CMD, ssh_args)
self.proc = pexpect.spawn(cmd)
print(cmd)
self.proc.expect(r":~\$", timeout=self.TIME_OUT)
cmd2 = "mysql --host '%s' -u '%s' '-p%s'\n" % \
(self.host, self.user, self.password)
print(cmd2)
self.proc.send(cmd2)
result = self.proc.expect([
'mysql>',
'Access denied',
"Can't connect to MySQL server"],
timeout=self.TIME_OUT)
if result == 1:
raise MySqlPermissionsFailure(self.proc.before)
elif result == 2:
raise MySqlConnectionFailure(self.proc.before)
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.proc.close()
def execute(self, cmd):
self.proc.send(cmd + "\\G\n")
outcome = self.proc.expect(['Empty set', 'mysql>'],
timeout=self.TIME_OUT)
if outcome == 0:
return []
else:
# This next line might be invaluable for long test runs.
print("Interpreting output: %s" % self.proc.before)
lines = self.proc.before.split("\r\n")
result = []
row = None
for line in lines:
plural_s = "s" if len(result) != 0 else ""
end_line = "%d row%s in set" % ((len(result) + 1), plural_s)
if len(result) == 0:
end_line = "1 row in set"
if (line.startswith("***************************") or
line.startswith(end_line)):
if row is not None:
result.append(row)
row = {}
elif row is not None:
colon = line.find(": ")
field = line[:colon]
value = line[colon + 2:]
row[field] = value
return result
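The loop at the end of execute reconstructs rows from MySQL's \G vertical output. The same parsing idea in isolation, simplified so that any "in set" footer closes the last row instead of computing the exact "N rows in set" string (sample data is made up):

```python
def parse_vertical_output(text):
    """Parse MySQL ``\\G`` vertical output into a list of row dicts."""
    result = []
    row = None
    for line in text.split("\r\n"):
        # A '*****' banner starts a new row; the footer closes the last one.
        if line.startswith("***************") or " in set" in line:
            if row is not None:
                result.append(row)
            row = {}
        elif row is not None:
            colon = line.find(": ")
            if colon >= 0:
                row[line[:colon].strip()] = line[colon + 2:]
    return result


sample = ("*************************** 1. row ***************************\r\n"
          "Host: %\r\n"
          "User: admin\r\n"
          "*************************** 2. row ***************************\r\n"
          "Host: localhost\r\n"
          "User: root\r\n"
          "2 rows in set (0.00 sec)")
rows = parse_vertical_output(sample)
# rows == [{'Host': '%', 'User': 'admin'}, {'Host': 'localhost', 'User': 'root'}]
```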


@ -1,86 +0,0 @@
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import subprocess
from proboscis.asserts import fail
import tenacity
from trove import tests
from trove.tests.config import CONFIG
from trove.tests import util
from trove.tests.util.users import Requirements
def create_server_connection(instance_id, ip_address=None):
if util.test_config.use_local_ovz:
return OpenVZServerConnection(instance_id)
return ServerSSHConnection(instance_id, ip_address=ip_address)
class ServerSSHConnection(object):
def __init__(self, instance_id, ip_address=None):
if not ip_address:
req_admin = Requirements(is_admin=True)
user = util.test_config.users.find_user(req_admin)
dbaas_admin = util.create_dbaas_client(user)
instance = dbaas_admin.management.show(instance_id)
mgmt_interfaces = instance.server["addresses"].get(
CONFIG.trove_mgmt_network, []
)
mgmt_addresses = [str(inf["addr"]) for inf in mgmt_interfaces
if inf["version"] == 4]
if len(mgmt_addresses) == 0:
fail("No IPV4 ip found for management network.")
else:
self.ip_address = mgmt_addresses[0]
else:
self.ip_address = ip_address
TROVE_TEST_SSH_USER = os.environ.get('TROVE_TEST_SSH_USER')
if TROVE_TEST_SSH_USER and '@' not in self.ip_address:
self.ip_address = TROVE_TEST_SSH_USER + '@' + self.ip_address
@tenacity.retry(
wait=tenacity.wait_fixed(5),
stop=tenacity.stop_after_attempt(3),
retry=tenacity.retry_if_exception_type(subprocess.CalledProcessError)
)
def execute(self, cmd):
exe_cmd = "%s %s '%s'" % (tests.SSH_CMD, self.ip_address, cmd)
print("RUNNING COMMAND: %s" % exe_cmd)
output = util.process(exe_cmd)
print("OUTPUT: %s" % output)
return output
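The tenacity decorator above retries the SSH command up to three times, waiting five seconds between attempts. The same fixed-wait policy can be sketched with only the standard library (illustrative, not a drop-in replacement for tenacity):

```python
import functools
import time


def retry(attempts=3, wait=5, exc=Exception, sleep=time.sleep):
    """Re-run the wrapped function on exc, up to attempts tries."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exc:
                    if attempt == attempts:
                        raise       # out of attempts: propagate
                    sleep(wait)     # fixed wait between attempts
        return wrapper
    return decorator


calls = []


@retry(attempts=3, wait=0, exc=RuntimeError)
def flaky_command():
    """Hypothetical command that succeeds on the third try."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"
```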
class OpenVZServerConnection(object):
def __init__(self, instance_id):
self.instance_id = instance_id
req_admin = Requirements(is_admin=True)
self.user = util.test_config.users.find_user(req_admin)
self.dbaas_admin = util.create_dbaas_client(self.user)
self.instance = self.dbaas_admin.management.show(self.instance_id)
self.instance_local_id = self.instance.server["local_id"]
def execute(self, cmd):
exe_cmd = "sudo vzctl exec %s %s" % (self.instance_local_id, cmd)
print("RUNNING COMMAND: %s" % exe_cmd)
return util.process(exe_cmd)


@ -1,84 +0,0 @@
# Copyright (c) 2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from collections import defaultdict
from oslo_log import log as logging
import proboscis.asserts as asserts
from proboscis.dependencies import SkipTest
from trove.common import utils
from trove.tests.config import CONFIG
LOG = logging.getLogger(__name__)
MESSAGE_QUEUE = defaultdict(list)
def create_usage_verifier():
return utils.import_object(CONFIG.usage_endpoint)
class UsageVerifier(object):
def clear_events(self):
"""Hook that is called to allow endpoints to clean up."""
pass
def check_message(self, resource_id, event_type, **attrs):
messages = utils.poll_until(lambda: self.get_messages(resource_id),
lambda x: len(x) > 0, time_out=30)
found = None
for message in messages:
if message['event_type'] == event_type:
found = message
asserts.assert_is_not_none(found,
"No message type %s for resource %s" %
(event_type, resource_id))
with asserts.Check() as check:
for key, value in attrs.items():
check.equal(found[key], value)
def get_messages(self, resource_id, expected_messages=None):
global MESSAGE_QUEUE
msgs = MESSAGE_QUEUE.get(resource_id, [])
if expected_messages is not None:
asserts.assert_equal(len(msgs), expected_messages)
return msgs
class FakeVerifier(object):
"""This is the default handler in fake mode; it is basically a no-op."""
def clear_events(self):
pass
def check_message(self, *args, **kwargs):
raise SkipTest("Notifications not available")
def get_messages(self, *args, **kwargs):
pass
def notify(event_type, payload):
"""Simple test notify function which saves the messages to the global list."""
payload['event_type'] = event_type
if 'instance_id' in payload and 'server_type' not in payload:
LOG.debug('Received Usage Notification: %s', event_type)
resource_id = payload['instance_id']
global MESSAGE_QUEUE
MESSAGE_QUEUE[resource_id].append(payload)
LOG.debug('Message Queue for %(id)s now has %(msg_count)d messages',
{'id': resource_id,
'msg_count': len(MESSAGE_QUEUE[resource_id])})
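The notify/check pattern above — producers append payloads keyed by instance_id, assertions read them back — works the same without proboscis. A minimal standalone sketch using plain assert in place of proboscis.asserts:

```python
from collections import defaultdict

MESSAGE_QUEUE = defaultdict(list)


def notify(event_type, payload):
    """File the payload under its instance_id, as the test notifier does."""
    payload['event_type'] = event_type
    if 'instance_id' in payload and 'server_type' not in payload:
        MESSAGE_QUEUE[payload['instance_id']].append(payload)


def check_message(resource_id, event_type, **attrs):
    """Assert that a message of event_type with matching attrs was sent."""
    for message in MESSAGE_QUEUE.get(resource_id, []):
        if message['event_type'] == event_type:
            for key, value in attrs.items():
                assert message[key] == value, (key, message[key], value)
            return message
    raise AssertionError("No message type %s for resource %s"
                         % (event_type, resource_id))


notify('trove.instance.create', {'instance_id': 'abc', 'flavor': 'm1'})
message = check_message('abc', 'trove.instance.create', flavor='m1')
```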

Some files were not shown because too many files have changed in this diff.