Docs update for integration tests

* The description of the tests was revised and expanded;
* A list of tests for each plugin and version was added.

Change-Id: I8de087d745cb83dfc6da67d1bc5036cacd1b6b80
(cherry picked from commit 3a00dd05e3)
Yaroslav Lobankov 2014-04-03 19:14:02 +04:00 committed by Sergey Lukjanov
parent 1dfb772f50
commit 66b314dd55
1 changed file with 150 additions and 58 deletions


Integration tests for Sahara project
====================================
How to run
----------
Create the config file for integration tests: ``/sahara/tests/integration/configs/itest.conf``.
You can take a look at the sample config files ``/sahara/tests/integration/configs/itest.conf.sample``
or ``/sahara/tests/integration/configs/itest.conf.sample-full``.
All values used in the ``/sahara/tests/integration/configs/config.py`` file are
defaults, so if they are applicable to your environment, you can skip creating
the config file.
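If you do need a custom config, one simple way to start (a convenience, not a
required step) is to copy a sample file and edit only the values you need to
change:
.. sourcecode:: console
$ cp /sahara/tests/integration/configs/itest.conf.sample \
     /sahara/tests/integration/configs/itest.conf
..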
To run all integration tests you should use the corresponding tox env:
.. sourcecode:: console
$ tox -e integration
..
In this case all tests will be launched except disabled tests.
Tests can be disabled in the ``/sahara/tests/integration/configs/config.py``
file or in the ``/sahara/tests/integration/configs/itest.conf`` file.
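For example, a particular check can be skipped by setting the corresponding
flag in the config file. The section and option names below are only
illustrative placeholders; the real names are defined in ``config.py``:
.. sourcecode:: console
$ cat >> /sahara/tests/integration/configs/itest.conf << EOF
[VANILLA]
# illustrative option name -- see config.py for the actual flags
SKIP_CINDER_TEST = True
EOF
..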
If you want to run integration tests for one plugin or a few plugins you should use
the corresponding tox env: `tox -e integration -- <plugin_name>` or
`tox -e integration -- <plugin_name_1> <plugin_name_2>`.
.. note::
Both ``OS_TENANT_ID`` and ``OS_TENANT_NAME`` must be specified in the
config file.
..
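For illustration, a minimal sketch of adding these values to the config file,
assuming a ``COMMON`` section as in the sample config files (adjust if your
sample differs); the values are placeholders to replace with your own:
.. sourcecode:: console
$ cat >> /sahara/tests/integration/configs/itest.conf << EOF
[COMMON]
# placeholder values -- substitute your own tenant name and id
OS_TENANT_NAME = <your tenant name>
OS_TENANT_ID = <your tenant id>
EOF
..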
If you want to run integration tests for one plugin, you should use the
corresponding tox env:
.. sourcecode:: console
$ tox -e integration -- <tag>
..
<tag> may have the following values: ``transient``, ``vanilla1``, ``vanilla2``,
``hdp``, ``idh2`` and ``idh3``.
For example, if you want to run tests for the Vanilla plugin with the Hadoop
version 1.2.1, you should use the following tox env:
.. sourcecode:: console
$ tox -e integration -- vanilla1
..
If you want to run integration tests for a few plugins or their versions, you
should use the corresponding tox env:
.. sourcecode:: console
$ tox -e integration -- <tag1> <tag2> ...
..
For example, if you want to run tests for the Vanilla plugin with the Hadoop
version 2.3.0 and for the IDH plugin with the Intel Hadoop version 3.0.2, you
should use the following tox env:
.. sourcecode:: console
$ tox -e integration -- vanilla2 idh3
..
Here are a few more examples.
* ``tox -e integration -- transient`` will run the test for a transient cluster.
  In this case the cluster will be created via the Vanilla plugin with the
  Hadoop version 1.2.1. For more info about transient clusters see the
  ``Contents`` section.
* ``tox -e integration -- hdp`` will run tests for the HDP plugin.
* ``tox -e integration -- transient vanilla2 idh2`` will run the test for a
  transient cluster, tests for the Vanilla plugin with the Hadoop version 2.3.0
  and tests for the IDH plugin with the Intel Hadoop version 2.5.1.
Contents
--------
The general checks performed by the integration tests are described below, and
for each plugin the applicable checks are listed.
1. Proper cluster creation. This test creates node group templates, a cluster
template and a cluster. All other test checks are executed on the created
cluster.
2. Cinder support. When the cluster is built, Cinder volumes are attached to
some cluster nodes (two 2 GB volumes per node). When the cluster state is
"Active", an SSH connection is established to the nodes that have volumes. On
each node the bash command ``mount | grep <volume_mount_prefix> | wc -l`` is
executed and the actual result is compared to the expected result.
3. Cluster configs. When the cluster is created, the bash script
``sahara/tests/integration/tests/resources/cluster_config_test_script.sh`` is
copied to all cluster nodes. On each node the script checks that the cluster
configs were properly applied.
4. Map Reduce. When the cluster is created, the bash script
``sahara/tests/integration/tests/resources/map_reduce_test_script.sh`` is
copied to all cluster nodes. On the master node this script runs the Map Reduce
jobs "PI estimator" and "Word count". The input file for the "Word count" job
is generated with the bash command ``dmesg``. On the other nodes this script
searches the Hadoop logs for the completed jobs.
5. Swift availability. When the cluster is created, the bash script
``sahara/tests/integration/tests/resources/map_reduce_test_script.sh`` is
copied to the master node. The script generates a 1 MB file (we'll call it
"file1") with the bash command ``dd if=/dev/urandom of=/tmp/test-file bs=1048576 count=1``.
The file is copied from local storage to HDFS storage, then it is uploaded from
HDFS storage to Swift (with the ``distcp`` command). Then the file is downloaded
back to HDFS storage from Swift. The file is copied from HDFS storage to local
storage (we'll call it "file2"). The script checks that the md5 sums of file1
and file2 are equal (a rough console sketch of these steps is shown after this
list).
6. Elastic Data Processing (EDP). This test launches four types of EDP jobs on
the cluster: "Pig", "MapReduce", "MapReduce.Streaming" and "Java".
7. Cluster scaling. This test adds 2 new node groups to the cluster (each node
group has 1 node), reduces the node count in 2 node groups from 1 node to 0
nodes (which deletes those 2 node groups) and increases the node count in 1
node group from 3 nodes to 4 nodes. All steps are executed in the same API
request.
8. Transient cluster. In this test the cluster is created as a transient
cluster. No jobs are launched on the cluster, so the test checks that the
cluster is automatically deleted by Sahara after a while.
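For illustration, the Swift availability check (check 5) roughly corresponds to
the following manual steps. This is only a sketch: the
``swift://<container>.sahara/`` URL form and container name are placeholders,
and the real test drives these steps through the copied script.
.. sourcecode:: console
$ # generate a 1 MB file and copy it from local storage to HDFS
$ dd if=/dev/urandom of=/tmp/test-file bs=1048576 count=1
$ hadoop fs -put /tmp/test-file /tmp/test-file
$ # round-trip the file through Swift with distcp
$ # (swift://<container>.sahara/ is an illustrative placeholder URL)
$ hadoop distcp /tmp/test-file swift://<container>.sahara/test-file
$ hadoop distcp swift://<container>.sahara/test-file /tmp/test-file-from-swift
$ # copy back to local storage and compare md5 sums
$ hadoop fs -get /tmp/test-file-from-swift /tmp/test-file-copy
$ md5sum /tmp/test-file /tmp/test-file-copy
..
If the two md5 sums match, the round trip through Swift preserved the file.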
The Vanilla plugin with the Hadoop version 1.2.1 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. Proper cluster creation.
2. Cinder support.
3. Cluster configs.
4. Map Reduce.
5. Elastic Data Processing (EDP).
6. Swift availability.
7. Cluster scaling.
8. Transient cluster.
The Vanilla plugin with the Hadoop version 2.3.0 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. Proper cluster creation.
2. Cinder support.
3. Map Reduce.
4. Elastic Data Processing (EDP).
5. Swift availability.
6. Cluster scaling.
The HDP plugin has the following checks:
++++++++++++++++++++++++++++++++++++++++
1. Proper cluster creation.
2. Cinder support.
3. Map Reduce.
4. Elastic Data Processing (EDP).
5. Swift availability.
6. Cluster scaling.
The IDH plugin with the Intel Hadoop version 2.5.1 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. Proper cluster creation.
2. Map Reduce.
3. Swift availability.
4. Cluster scaling.
The IDH plugin with the Intel Hadoop version 3.0.2 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. Proper cluster creation.
2. Swift availability.
3. Cluster scaling.