OpenStack Database As A Service (Trove)
Sushil Kumar c61a3bf5f9 Implement clustering for Vertica datastore
A specification for this change was submitted for review in
https://review.openstack.org/#/c/151279

- HP Vertica Community Edition supports up to a 3-node cluster.
- HP Vertica requires a minimum of 3 nodes to achieve fault tolerance.
- This patchset provides the ability to launch an HP Vertica 3-node cluster.
- The cluster-show API also lists the IPs of the underlying instances.
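As an illustration of the cluster-show behavior described above, the snippet below walks a hypothetical response body and collects the member IPs. The field names and values are assumptions for the sketch, not the exact Trove API schema.

```python
# Hypothetical cluster-show response; field names and values are
# illustrative only, not the exact Trove API schema.
cluster = {
    "cluster": {
        "name": "vertica-cluster-1",
        "datastore": {"type": "vertica"},
        "instances": [
            {"name": "vertica-cluster-1-member-1", "ip": ["10.0.0.2"]},
            {"name": "vertica-cluster-1-member-2", "ip": ["10.0.0.3"]},
            {"name": "vertica-cluster-1-member-3", "ip": ["10.0.0.4"]},
        ],
    }
}

# Collect the IPs that cluster-show exposes for the underlying instances.
ips = [ip for inst in cluster["cluster"]["instances"] for ip in inst["ip"]]
print(ips)  # ['10.0.0.2', '10.0.0.3', '10.0.0.4']
```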

Code Added:
- Added API strategy, taskmanager strategy, and guestagent strategy.
- Included unit tests.

The workflow for building a Vertica cluster is as follows:
- Guest instances are booted using the new API strategy, which then
  hands control to the taskmanager strategy for further communication
  and guestagent API execution.
- Once the guest instances are active in nova,
  they receive the "prepare" message and the following steps are performed:
    - Mount the data disk on device_path.
    - Check that the vertica packages have been installed, install_if_needed().
    - Run the Vertica pre-install test, prepare_for_install_vertica().
    - Move to the BUILD_PENDING status.
- The cluster taskmanager strategy waits for all instances
  in the cluster to reach the BUILD_PENDING state.
- Once all instances in the cluster reach the BUILD_PENDING state,
  the taskmanager first configures passwordless ssh for the OS users
  (root, dbadmin) with the help of the guestagent APIs get_keys
  and authorize_keys.
- Once passwordless ssh has been configured, the taskmanager calls the
  install_cluster guestagent API, which installs the cluster on the
  member instances and creates a database on the cluster.
- Once this method finishes, the taskmanager calls
  another guestagent API, cluster_complete, to
  notify each cluster member that cluster creation is finished.
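The steps above can be sketched as a small driver loop. Note this is a minimal illustration under stated assumptions: FakeGuest, build_cluster, and the method names on the guest stand-in are hypothetical stand-ins for the real taskmanager and guestagent classes, not Trove's actual implementation.

```python
# Minimal sketch of the cluster-build workflow described above.
# FakeGuest and build_cluster are hypothetical stand-ins, not the
# actual Trove taskmanager/guestagent code.
import time

BUILD_PENDING = "BUILD_PENDING"


class FakeGuest:
    """Stands in for the guestagent API client of one member instance."""

    def __init__(self, name):
        self.name = name
        self.status = BUILD_PENDING  # set at the end of "prepare"

    def get_keys(self, user):
        # Would return the instance's public ssh key for the given OS user.
        return "ssh-rsa AAAA... %s@%s" % (user, self.name)

    def authorize_keys(self, user, keys):
        # Would append the keys to ~user/.ssh/authorized_keys.
        pass

    def install_cluster(self, members):
        # Would install Vertica across the members and create a database.
        pass

    def cluster_complete(self):
        # Would notify the member that cluster creation has finished.
        pass


def build_cluster(guests):
    # 1. Wait until every member has finished "prepare" (BUILD_PENDING).
    while not all(g.status == BUILD_PENDING for g in guests):
        time.sleep(1)

    # 2. Configure passwordless ssh for root and dbadmin by exchanging
    #    every member's key with every other member.
    for user in ("root", "dbadmin"):
        keys = [g.get_keys(user) for g in guests]
        for g in guests:
            g.authorize_keys(user, keys)

    # 3. Install the cluster (and create the database), then tell each
    #    member that cluster creation is complete.
    members = [g.name for g in guests]
    guests[0].install_cluster(members)
    for g in guests:
        g.cluster_complete()
    return members


print(build_cluster([FakeGuest("node-%d" % i) for i in range(1, 4)]))
```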

New Files:
- A new directory, vertica, has been created for the api, taskmanager,
  and guestagent strategies under
  trove/common/strategies/cluster/experimental.
- Unit tests for the cluster-controller, api, and taskmanager code.

DocImpact
Change-Id: Ide30d1d2a136c7e638532a115db5ff5ab2a75e72
Implements: blueprint implement-vertica-cluster
2015-03-24 06:22:55 +00:00

README.rst

Trove

Trove is Database as a Service for OpenStack.

Usage for integration testing

If you'd like to start up a fake Trove API daemon for integration testing with your own tool, run:

$ ./tools/start-fake-mode.sh

Stop the server with:

$ ./tools/stop-fake-mode.sh

Tests

To run all tests and PEP8, run tox, like so:

$ tox

To run just the tests for Python 2.7, run:

$ tox -epy27

To run just PEP8, run:

$ tox -epep8

To generate a coverage report, run:

$ tox -ecover

(Note: on some machines, the results may not be accurate unless you run it twice.)

If you want to run only the tests in one file, you can use testtools, e.g.:

$ python -m testtools.run trove.tests.unittests.python.module.path