Remove old integration tests from the sahara codebase
The old integration tests are no longer gated, so we can remove this
'dead' code from the sahara repository.

Change-Id: Ief0984d34898c134e831d9949e0f16617dd62004
This commit is contained in:
parent a670d330c0
commit 02412ce50b
@@ -18,7 +18,6 @@ eggs
 etc/sahara.conf
 etc/sahara/*.conf
 etc/sahara/*.topology
-sahara/tests/integration/configs/itest.conf
 sdist
 target
 tools/lintstack.head.py
@@ -9,14 +9,6 @@ Unit Tests
 In most Sahara sub repositories we have `_package_/tests/unit` or
 `_package_/tests` that contains Python unit tests.
 
-Integration tests
-+++++++++++++++++
-
-We have integration tests for the main Sahara service and they are located in
-`sahara/tests/integration`. The main purpose of these integration tests is to
-run some kind of scenarios to test Sahara using all plugins. You can find more
-info about it in `sahara/tests/integration/README.rst`.
-
 Scenario integration tests
 ++++++++++++++++++++++++++
 
@@ -1,164 +0,0 @@
Integration tests for Sahara project
====================================

How to run
----------

Create the config file for integration tests: ``/sahara/tests/integration/configs/itest.conf``.
You can take a look at the sample config files ``/sahara/tests/integration/configs/itest.conf.sample``
and ``/sahara/tests/integration/configs/itest.conf.sample-full``.
All values used in the ``/sahara/tests/integration/configs/config.py`` file are
defaults, so if they are applicable to your environment you can skip creating
the config file.

To run all integration tests, use the corresponding tox env:

.. sourcecode:: console

    $ tox -e integration
..

In this case all tests except the disabled ones will be launched.
Tests can be disabled in the ``/sahara/tests/integration/configs/config.py``
file or in ``/sahara/tests/integration/configs/itest.conf``.

If you want to run integration tests for one plugin, pass the corresponding
tag:

.. sourcecode:: console

    $ tox -e integration -- <tag>
..

<tag> may have the following values: ``transient``, ``vanilla1``, ``vanilla2``,
``hdp``.

For example, to run tests for the Vanilla plugin with Hadoop version 1.2.1,
use the following tox env:

.. sourcecode:: console

    $ tox -e integration -- vanilla1
..

If you want to run integration tests for several plugins or plugin versions,
pass several tags:

.. sourcecode:: console

    $ tox -e integration -- <tag1> <tag2> ...
..

For example, to run tests for the Vanilla plugin with Hadoop version 2.6.0 and
for the HDP plugin with Hortonworks Data Platform version 1.3.2, use the
following tox env:

.. sourcecode:: console

    $ tox -e integration -- vanilla2 hdp
..

Here are a few more examples.

``tox -e integration -- transient`` will run the test for a transient cluster.
In this case the cluster will be created via the Vanilla plugin with Hadoop
version 1.2.1. For more info about transient clusters see the section
``Contents``.

``tox -e integration -- hdp`` will run tests for the HDP plugin.

``tox -e integration -- transient vanilla2 hdp`` will run the test for a
transient cluster, tests for the Vanilla plugin with Hadoop version 2.6.0 and
tests for the HDP plugin with Hortonworks Data Platform version 1.3.2.

Contents
--------

The general checks performed by the integration tests are described below, and
for each plugin the applicable checks are listed.

1. Proper cluster creation. This test creates node group templates, a cluster
template and a cluster. All other test checks are executed on the created
cluster.

2. Cinder support. When the cluster is built, Cinder volumes are attached to
some cluster nodes (two 2 GB volumes per node). When the cluster state is
"Active", an SSH connection is established to the nodes which have volumes. On
each node the bash command ``mount | grep <volume_mount_prefix> | wc -l`` is
executed and the actual result is compared to the expected result.
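
For illustration only: assuming the ``/volumes/disk`` mount prefix (which the
log-directory defaults in ``itest.conf.sample-full`` suggest) and the two
volumes per node described above, the check amounts to something like:

.. sourcecode:: console

    $ mount | grep /volumes/disk | wc -l
    2
..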

3. Cluster configs. When the cluster is created, the bash script
``sahara/tests/integration/tests/resources/cluster_config_test_script.sh`` is
copied to all cluster nodes. On all nodes the script checks that the cluster
configs were properly applied.

4. Map Reduce. When the cluster is created, the bash script
``sahara/tests/integration/tests/resources/map_reduce_test_script.sh`` is
copied to all cluster nodes. On the master node this script runs the Map
Reduce jobs "PI estimator" and "Word count". The input file for the "Word
count" job is generated with the bash command ``dmesg``. On the other nodes
this script searches the Hadoop logs of completed jobs.
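
As a rough sketch of what the script submits (the examples jar path varies by
distribution, so treat it as an assumption):

.. sourcecode:: console

    $ hadoop jar $HADOOP_HOME/hadoop-examples-*.jar pi 10 100
    $ dmesg > /tmp/input
    $ hadoop dfs -copyFromLocal /tmp/input /map-reduce-input
    $ hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount /map-reduce-input /map-reduce-output
..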

5. Swift availability. When the cluster is created, the bash script
``sahara/tests/integration/tests/resources/map_reduce_test_script.sh`` is
copied to the master node. The script generates a 1 MB file (we'll call it
"file1") with the bash command ``dd if=/dev/urandom of=/tmp/test-file bs=1048576 count=1``.
The file is copied from local storage to HDFS storage, then it is uploaded from
HDFS storage to Swift (with the ``distcp`` command). Then the file is
downloaded back to HDFS storage from Swift. The file is copied from HDFS
storage to local storage (we'll call it "file2"). The script checks that the
md5 sums of file1 and file2 are equal.
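
The round trip, roughly (the container and path names are illustrative, not
the script's actual values):

.. sourcecode:: console

    $ dd if=/dev/urandom of=/tmp/test-file bs=1048576 count=1
    $ hadoop dfs -copyFromLocal /tmp/test-file /swift-test/file1
    $ hadoop distcp /swift-test/file1 swift://some-container.sahara/file1
    $ hadoop distcp swift://some-container.sahara/file1 /swift-test/file2
    $ hadoop dfs -copyToLocal /swift-test/file2 /tmp/test-file-copy
    $ md5sum /tmp/test-file /tmp/test-file-copy
..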

6. Elastic Data Processing (EDP). This test launches four types of EDP jobs on
the cluster: "Pig", "MapReduce", "MapReduce.Streaming" and "Java".

7. Cluster scaling. This test adds 2 new node groups to the cluster (each node
group has 1 node), reduces the node count in 2 node groups from 1 node to 0
nodes (which deletes those 2 node groups) and increases the node count in 1
node group from 3 nodes to 4 nodes. All steps are executed in the same API
request.
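
For illustration, all of those steps travel in a single update of the cluster
resource. A hypothetical request (the endpoint shape follows the Sahara v1.1
API; all names and IDs below are made up) could look like:

.. sourcecode:: console

    $ curl -X PUT http://127.0.0.1:8386/v1.1/$TENANT_ID/clusters/$CLUSTER_ID \
        -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
        -d '{"add_node_groups": [{"node_group_template_id": "<id>",
              "name": "extra-worker", "count": 1}],
             "resize_node_groups": [{"name": "worker", "count": 4}]}'
..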

8. Transient cluster. In this test the cluster is created as a transient
cluster. No jobs are launched on the cluster, so the test checks that the
cluster is automatically deleted by Sahara after a while.

The Vanilla plugin with the Hadoop version 1.2.1 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1. Proper cluster creation.
2. Cinder support.
3. Cluster configs.
4. Map Reduce.
5. Elastic Data Processing (EDP).
6. Swift availability.
7. Cluster scaling.
8. Transient cluster.

The Vanilla plugin with the Hadoop version 2.6.0 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1. Proper cluster creation.
2. Cinder support.
3. Map Reduce.
4. Elastic Data Processing (EDP).
5. Swift availability.
6. Cluster scaling.

The HDP plugin has the following checks:
++++++++++++++++++++++++++++++++++++++++

1. Proper cluster creation.
2. Cinder support.
3. Map Reduce.
4. Elastic Data Processing (EDP).
5. Swift availability.
6. Cluster scaling.

The CDH plugin has the following checks:
++++++++++++++++++++++++++++++++++++++++

1. Proper cluster creation.
2. Cinder support.
3. Map Reduce.
4. Elastic Data Processing (EDP).
5. Swift availability.
6. Cluster scaling.
File diff suppressed because it is too large
@@ -1,32 +0,0 @@
[COMMON]

OS_USERNAME = 'admin'
OS_PASSWORD = 'admin'
OS_TENANT_NAME = 'admin'
OS_AUTH_URL = 'http://127.0.0.1:5000/v2.0'
SAHARA_HOST = '127.0.0.1'
FLAVOR_ID = 2
USER_KEYPAIR_ID = 'sahara-key-pair'
PATH_TO_SSH_KEY = '/home/ubuntu/.ssh/id_rsa'
FLOATING_IP_POOL = 'net04_ext'
NEUTRON_ENABLED = True
INTERNAL_NEUTRON_NETWORK = 'net04'

[VANILLA]

IMAGE_NAME = 'sahara-vanilla-image'
SKIP_CLUSTER_CONFIG_TEST = True

[CDH]

IMAGE_NAME = 'sahara-cdh-image'
SKIP_CHECK_SERVICES_TEST = True

[HDP]

IMAGE_ID = 'f7de0ea9-eb4d-4b63-8ed0-abcf11cfaff8'
SKIP_ALL_TESTS_FOR_PLUGIN = False

[MAPR]
IMAGE_NAME = 'sahara-mapr-image'
SKIP_ALL_TESTS_FOR_PLUGIN = False
@@ -1,527 +0,0 @@
[COMMON]

# Username for OpenStack (string value)
#OS_USERNAME = 'admin'

# Password for OpenStack (string value)
#OS_PASSWORD = 'admin'

# Tenant name for OpenStack (string value)
#OS_TENANT_NAME = 'admin'

# URL for OpenStack (string value)
#OS_AUTH_URL = 'http://127.0.0.1:5000/v2.0'


# OpenStack auth version for Swift (string value)
#SWIFT_AUTH_VERSION = 2


# Host for Sahara API (string value)
#SAHARA_HOST = '127.0.0.1'

# Port for Sahara API (integer value)
#SAHARA_PORT = '8386'

# API version for Sahara (string value)
#SAHARA_API_VERSION = '1.1'


# OpenStack flavor ID for virtual machines. If you leave the default value of
# this parameter then a flavor will be created automatically, using the nova
# client. The created flavor will have the following parameters:
# name=i-test-flavor-<id>, ram=1024, vcpus=1, disk=10, ephemeral=10. <id> is
# an ID of 8 characters (letters and/or digits) which is added to the name of
# the flavor for its uniqueness (string value)
#FLAVOR_ID = <None>


# Cluster creation timeout (in minutes); minimal value is 1 (integer value)
#CLUSTER_CREATION_TIMEOUT = 30

# Timeout for node process deployment on cluster nodes (in minutes);
# minimal value is 1 (integer value)
#TELNET_TIMEOUT = 5

# Timeout for HDFS initialization (in minutes); minimal value is 1
# (integer value)
#HDFS_INITIALIZATION_TIMEOUT = 5

# Timeout for job creation (in minutes); minimal value is 1 (integer value)
#JOB_LAUNCH_TIMEOUT = 5

# Timeout for polling the state of a transient cluster (in minutes);
# minimal value is 1 (integer value)
#TRANSIENT_CLUSTER_TIMEOUT = 3


# Name for cluster (string value)
#CLUSTER_NAME = 'test-cluster'


# OpenStack key pair ID of your SSH public key. Sahara transfers this key to
# cluster nodes so that users can access the virtual machines of the cluster
# via SSH. You can export your id_rsa.pub public key to OpenStack and specify
# its key pair ID in the configuration file of the tests. If you already have
# a key pair in OpenStack, just specify its key pair ID in the configuration
# file of the tests. If you have no key pair in OpenStack, or you do not want
# to export (create) one, specify any key pair ID you like (for example,
# "king-kong"), but then you must leave the default value of the
# PATH_TO_SSH_KEY parameter. In this case a key pair will be created
# automatically, and a short ID (8 letters and/or digits) will be appended to
# the key pair ID for its uniqueness. At the end of the tests the key pair
# will be deleted (string value)
#USER_KEYPAIR_ID = 'sahara-i-test-key-pair'

# Path to the id_rsa key which is used by the tests for remote command
# execution. If you specify a wrong path to the key you will get the error
# "Private key file is encrypted", so make sure you specified the right path.
# If this parameter is not specified, a key pair (private and public SSH keys)
# will be generated automatically, using the nova client (string value)
#PATH_TO_SSH_KEY = <None>


# Pool name for floating IPs. If Sahara uses the Nova management network and
# auto assignment of IPs is enabled then you should leave the default value of
# this parameter. If auto assignment is not enabled then you should specify a
# floating IP pool name. If Sahara uses a Neutron management network then you
# should always specify a floating IP pool name (string value)
#FLOATING_IP_POOL = <None>


# If Sahara uses the Nova management network then you should leave the default
# value of this flag. If Sahara uses a Neutron management network then you
# should set this flag to True and specify values for the following
# parameters: FLOATING_IP_POOL and INTERNAL_NEUTRON_NETWORK (boolean value)
#NEUTRON_ENABLED = False

# Name for internal Neutron network (string value)
#INTERNAL_NEUTRON_NETWORK = 'private'

# If this flag is True, do not delete the cluster after the test.
# This is a debugging aid for instances when errors are logged
# on the cluster nodes but the cause of the failure is not
# evident from the integration test logs, e.g. an Oozie exception.
# It is intended for use on local hosts, not the official CI host.
#RETAIN_CLUSTER_AFTER_TEST = False

# If this flag is True, do not delete the EDP binaries, data
# source, job, and job executions after the test.
# This is a debugging aid for instances when errors are logged
# on the cluster nodes but the cause of the failure is not
# evident from the integration test logs, e.g. an Oozie exception.
# It is intended for use on local hosts, not the official CI host.
#RETAIN_EDP_AFTER_TEST = False

[VANILLA]

# Name of plugin (string value)
#PLUGIN_NAME = 'vanilla'


# ID of the image which is used for cluster creation. You can also specify an
# image name or image tag instead of the image ID. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_ID = <None>

# Name of the image which is used for cluster creation. You can also specify
# an image ID or image tag instead of the image name. If you do not specify
# any image-related parameters then the image for cluster creation will be
# chosen by the tag "sahara_i_tests" (string value)
#IMAGE_NAME = <None>

# Tag of the image which is used for cluster creation. You can also specify an
# image ID or image name instead of the image tag. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_TAG = <None>


# Version of Hadoop (string value)
#HADOOP_VERSION = '1.2.1'

# Username which is used for access to Hadoop services (string value)
#HADOOP_USER = 'hadoop'

# Directory where logs of completed jobs are located (string value)
#HADOOP_LOG_DIRECTORY = '/mnt/log/hadoop/hadoop/userlogs'

# Directory where logs of completed jobs on the volume mounted to the node are
# located (string value)
#HADOOP_LOG_DIRECTORY_ON_VOLUME = '/volumes/disk1/log/hadoop/hadoop/userlogs'

# (dictionary value)
#HADOOP_PROCESSES_WITH_PORTS = jobtracker: 50030, namenode: 50070, tasktracker: 50060, datanode: 50075, secondarynamenode: 50090, oozie: 11000


# (dictionary value)
#PROCESS_NAMES = nn: namenode, tt: tasktracker, dn: datanode


#SKIP_ALL_TESTS_FOR_PLUGIN = False
#SKIP_CINDER_TEST = False
#SKIP_CLUSTER_CONFIG_TEST = False
#SKIP_EDP_TEST = False
#SKIP_MAP_REDUCE_TEST = False
#SKIP_SWIFT_TEST = False
#SKIP_SCALING_TEST = False

[CDH]

# Name of plugin (string value)
#PLUGIN_NAME = 'cdh'


# ID of the image which is used for cluster creation. You can also specify an
# image name or image tag instead of the image ID. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_ID = <None>

# Name of the image which is used for cluster creation. You can also specify
# an image ID or image tag instead of the image name. If you do not specify
# any image-related parameters then the image for cluster creation will be
# chosen by the tag "sahara_i_tests" (string value)
#IMAGE_NAME = <None>

# Tag of the image which is used for cluster creation. You can also specify an
# image ID or image name instead of the image tag. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_TAG = <None>


# Version of Hadoop (string value)
#HADOOP_VERSION = '5'

# Username which is used for access to Hadoop services (string value)
#HADOOP_USER = 'hdfs'

# Directory where logs of completed jobs are located (string value)
#HADOOP_LOG_DIRECTORY = ''

# Directory where logs of completed jobs on the volume mounted to the node are
# located (string value)
#HADOOP_LOG_DIRECTORY_ON_VOLUME = ''

# (dictionary value)
#HADOOP_PROCESSES_WITH_PORTS = YARN_RESOURCEMANAGER: 8088, HDFS_NAMENODE: 50070, HDFS_SECONDARYNAMENODE: 50090, YARN_NODEMANAGER: 8042, HDFS_DATANODE: 50075
#CLOUDERA_MANAGER: 7180, YARN_JOBHISTORY: 19888, OOZIE_SERVER: 11000


# (dictionary value)
#PROCESS_NAMES = nn: HDFS_NAMENODE, tt: YARN_NODEMANAGER, dn: HDFS_DATANODE

# (string value)
#CDH_REPO_LIST_URL = 'http://archive-primary.cloudera.com/cdh5/ubuntu/precise/amd64/cdh/cloudera.list'
#CM_REPO_LIST_URL = 'http://archive-primary.cloudera.com/cm5/ubuntu/precise/amd64/cm/cloudera.list'
#CDH_APT_KEY_URL = 'http://archive-primary.cloudera.com/cdh5/ubuntu/precise/amd64/cdh/archive.key'
#CM_APT_KEY_URL = 'http://archive-primary.cloudera.com/cm5/ubuntu/precise/amd64/cm/archive.key'

# Flavor for the manager node
#MANAGERNODE_FLAVOR = 3

# Larger flavor for the services tests node
#LARGE_FLAVOR = 4

#SKIP_ALL_TESTS_FOR_PLUGIN = False
#SKIP_CINDER_TEST = False
#SKIP_CLUSTER_CONFIG_TEST = False
#SKIP_EDP_TEST = False
#SKIP_MAP_REDUCE_TEST = False
#SKIP_SWIFT_TEST = False
#SKIP_SCALING_TEST = False
#SKIP_CHECK_SERVICES_TEST = False

[HDP]

# Name of plugin (string value)
#PLUGIN_NAME = 'hdp'


# ID of the image which is used for cluster creation. You can also specify an
# image name or image tag instead of the image ID. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_ID = <None>

# Name of the image which is used for cluster creation. You can also specify
# an image ID or image tag instead of the image name. If you do not specify
# any image-related parameters then the image for cluster creation will be
# chosen by the tag "sahara_i_tests" (string value)
#IMAGE_NAME = <None>

# Tag of the image which is used for cluster creation. You can also specify an
# image ID or image name instead of the image tag. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_TAG = <None>


# A list of processes that will be launched on master node (list value)
#MASTER_NODE_PROCESSES = JOBTRACKER, NAMENODE, SECONDARY_NAMENODE, GANGLIA_SERVER, NAGIOS_SERVER, AMBARI_SERVER, OOZIE_SERVER

# A list of processes that will be launched on worker nodes (list value)
#WORKER_NODE_PROCESSES = TASKTRACKER, DATANODE, HDFS_CLIENT, MAPREDUCE_CLIENT, OOZIE_CLIENT, PIG

# Version of Hadoop (string value)
#HADOOP_VERSION = '1.3.2'

# Username which is used for access to Hadoop services (string value)
#HADOOP_USER = 'hdfs'

# Directory where logs of completed jobs are located (string value)
#HADOOP_LOG_DIRECTORY = '/mnt/hadoop/mapred/userlogs'

# Directory where logs of completed jobs on the volume mounted to the node are
# located (string value)
#HADOOP_LOG_DIRECTORY_ON_VOLUME = '/volumes/disk1/hadoop/mapred/userlogs'

# The number of hosts to add while scaling an existing node group
#SCALE_EXISTING_NG_COUNT = 1

# The number of hosts to add while scaling a new node group
#SCALE_NEW_NG_COUNT = 1

# (dictionary value)
#HADOOP_PROCESSES_WITH_PORTS = JOBTRACKER: 50030, NAMENODE: 50070, TASKTRACKER: 50060, DATANODE: 50075, SECONDARY_NAMENODE: 50090


# (dictionary value)
#PROCESS_NAMES = nn: NAMENODE, tt: TASKTRACKER, dn: DATANODE


#SKIP_ALL_TESTS_FOR_PLUGIN = False
#SKIP_CINDER_TEST = False
#SKIP_MAP_REDUCE_TEST = False
#SKIP_SWIFT_TEST = False
#SKIP_SCALING_TEST = False

[HDP2]

# Name of plugin (string value)
#PLUGIN_NAME = 'hdp2'

# ID of the image which is used for cluster creation. You can also specify an
# image name or image tag instead of the image ID. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_ID = <None>

# Name of the image which is used for cluster creation. You can also specify
# an image ID or image tag instead of the image name. If you do not specify
# any image-related parameters then the image for cluster creation will be
# chosen by the tag "sahara_i_tests" (string value)
#IMAGE_NAME = <None>

# Tag of the image which is used for cluster creation. You can also specify an
# image ID or image name instead of the image tag. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_TAG = <None>

# A list of processes that will be launched on master node (list value)
#MASTER_NODE_PROCESSES = NAMENODE, SECONDARY_NAMENODE, ZOOKEEPER_SERVER, AMBARI_SERVER, HISTORYSERVER, RESOURCEMANAGER, GANGLIA_SERVER, NAGIOS_SERVER, OOZIE_SERVER

# A list of processes that will be launched on worker nodes (list value)
#WORKER_NODE_PROCESSES = HDFS_CLIENT, DATANODE, ZOOKEEPER_CLIENT, MAPREDUCE2_CLIENT, YARN_CLIENT, NODEMANAGER, PIG, OOZIE_CLIENT

# Version of Hadoop (string value)
#HADOOP_VERSION = '2.0.6'

# Username which is used for access to Hadoop services (string value)
#HADOOP_USER = 'hdfs'

# The number of hosts to add while scaling an existing node group
#SCALE_EXISTING_NG_COUNT = 1

# The number of hosts to add while scaling a new node group
#SCALE_NEW_NG_COUNT = 1

# (dictionary value)
#HADOOP_PROCESSES_WITH_PORTS = RESOURCEMANAGER: 8088, NAMENODE: 8020, HISTORYSERVER: 19888, SECONDARY_NAMENODE: 50090


# (dictionary value)
#PROCESS_NAMES = nn: NAMENODE, tt: NODEMANAGER, dn: DATANODE


#SKIP_ALL_TESTS_FOR_PLUGIN = False
#SKIP_EDP_TEST = False
#SKIP_SWIFT_TEST = False
#SKIP_SCALING_TEST = False


[MAPR]

# Name of plugin (string value)
#PLUGIN_NAME = 'mapr'


# ID of the image which is used for cluster creation. You can also specify an
# image name or image tag instead of the image ID. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_ID = <None>

# Name of the image which is used for cluster creation. You can also specify
# an image ID or image tag instead of the image name. If you do not specify
# any image-related parameters then the image for cluster creation will be
# chosen by the tag "sahara_i_tests" (string value)
#IMAGE_NAME = <None>

# Tag of the image which is used for cluster creation. You can also specify an
# image ID or image name instead of the image tag. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_TAG = <None>


# A list of processes that will be launched on master node (list value)
#MASTER_NODE_PROCESSES = CLDB, FileServer, ZooKeeper, TaskTracker, JobTracker, Oozie

# A list of processes that will be launched on worker nodes (list value)
#WORKER_NODE_PROCESSES = FileServer, TaskTracker, Pig

# Version of Hadoop (string value)
#HADOOP_VERSION = '1.0.3'

# Username which is used for access to Hadoop services (string value)
#HADOOP_USER = 'mapr'

# Directory where logs of completed jobs are located (string value)
#HADOOP_LOG_DIRECTORY = '/opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs'

# Directory where logs of completed jobs on the volume mounted to the node are
# located (string value)
#HADOOP_LOG_DIRECTORY_ON_VOLUME = '/volumes/disk1/mapr/hadoop/hadoop-0.20.2/logs/userlogs'

# The number of hosts to add while scaling an existing node group
#SCALE_EXISTING_NG_COUNT = 1

# The number of hosts to add while scaling a new node group
#SCALE_NEW_NG_COUNT = 1

# (dictionary value)
#HADOOP_PROCESSES_WITH_PORTS = JobTracker: 50030, CLDB: 7222, TaskTracker: 50060


# (dictionary value)
#PROCESS_NAMES = nn: CLDB, tt: TaskTracker, dn: FileServer


#SKIP_ALL_TESTS_FOR_PLUGIN = False
#SKIP_CINDER_TEST = False
#SKIP_MAP_REDUCE_TEST = False
#SKIP_SWIFT_TEST = False
#SKIP_SCALING_TEST = False

[MAPR4_1]

# Name of plugin (string value)
#PLUGIN_NAME = 'mapr4_1'

# ID of the image which is used for cluster creation. You can also specify an
# image name or image tag instead of the image ID. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_ID = <None>

# Name of the image which is used for cluster creation. You can also specify
# an image ID or image tag instead of the image name. If you do not specify
# any image-related parameters then the image for cluster creation will be
# chosen by the tag "sahara_i_tests" (string value)
#IMAGE_NAME = <None>

# Tag of the image which is used for cluster creation. You can also specify an
# image ID or image name instead of the image tag. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_TAG = <None>

# A list of processes that will be launched on master node (list value)
#MASTER_NODE_PROCESSES = CLDB, FileServer, ZooKeeper, TaskTracker, JobTracker, Oozie

# A list of processes that will be launched on worker nodes (list value)
#WORKER_NODE_PROCESSES = FileServer, TaskTracker, Pig

# Version of Hadoop (string value)
#HADOOP_VERSION = '2.4.0'

# Username which is used for access to Hadoop services (string value)
#HADOOP_USER = 'mapr'

# The number of hosts to add while scaling an existing node group
#SCALE_EXISTING_NG_COUNT = 1

# The number of hosts to add while scaling a new node group
#SCALE_NEW_NG_COUNT = 1

# (dictionary value)
#HADOOP_PROCESSES_WITH_PORTS = JobTracker: 50030, CLDB: 7222, TaskTracker: 50060


# (dictionary value)
#PROCESS_NAMES = nn: CLDB, tt: TaskTracker, dn: FileServer


#SKIP_ALL_TESTS_FOR_PLUGIN = False
#SKIP_EDP_TEST = False
#SKIP_SWIFT_TEST = False
#SKIP_SCALING_TEST = False

[MAPR4_2]

# Name of plugin (string value)
#PLUGIN_NAME = 'mapr4_2'

# ID of the image which is used for cluster creation. You can also specify an
# image name or image tag instead of the image ID. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_ID = <None>

# Name of the image which is used for cluster creation. You can also specify
# an image ID or image tag instead of the image name. If you do not specify
# any image-related parameters then the image for cluster creation will be
# chosen by the tag "sahara_i_tests" (string value)
#IMAGE_NAME = <None>

# Tag of the image which is used for cluster creation. You can also specify an
# image ID or image name instead of the image tag. If you do not specify any
# image-related parameters then the image for cluster creation will be chosen
# by the tag "sahara_i_tests" (string value)
#IMAGE_TAG = <None>

# A list of processes that will be launched on master node (list value)
#MASTER_NODE_PROCESSES = CLDB, FileServer, ZooKeeper, NodeManager, ResourceManager, HistoryServer, Oozie

# A list of processes that will be launched on worker nodes (list value)
#WORKER_NODE_PROCESSES = FileServer, NodeManager, Pig

# Version of Hadoop (string value)
#HADOOP_VERSION = '2.4.0'

# Username which is used for access to Hadoop services (string value)
#HADOOP_USER = 'mapr'

# The number of hosts to add while scaling an existing node group
#SCALE_EXISTING_NG_COUNT = 1

# The number of hosts to add while scaling a new node group
#SCALE_NEW_NG_COUNT = 1

# (dictionary value)
#HADOOP_PROCESSES_WITH_PORTS = ResourceManager: 8032, CLDB: 7222, HistoryServer: 19888


# (dictionary value)
#PROCESS_NAMES = nn: CLDB, tt: NodeManager, dn: FileServer


#SKIP_ALL_TESTS_FOR_PLUGIN = False
#SKIP_EDP_TEST = False
#SKIP_SWIFT_TEST = False
#SKIP_SCALING_TEST = False
@@ -1,730 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import telnetlib
import time

import fixtures
from keystoneclient.v2_0 import client as keystone_client
from neutronclient.v2_0 import client as neutron_client
from novaclient.v2 import client as nova_client
from oslo_utils import excutils
from oslo_utils import uuidutils
from oslotest import base
from saharaclient.api import base as client_base
import saharaclient.client as sahara_client
import six
from swiftclient import client as swift_client
from testtools import testcase

from sahara.tests.integration.configs import config as cfg
import sahara.utils.openstack.images as imgs
from sahara.utils import ssh_remote


logger = logging.getLogger('swiftclient')
logger.setLevel(logging.WARNING)
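

# Decorator: run the wrapped test step and, on failure, print ``message``
# together with the exception via ITestCase.print_error_log, then re-raise.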
def errormsg(message):
    def decorator(fct):
        def wrapper(*args, **kwargs):
            try:
                fct(*args, **kwargs)
            except Exception as e:
                with excutils.save_and_reraise_exception():
                    ITestCase.print_error_log(message, e)

        return wrapper
    return decorator
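

# Decorator factory: skip the wrapped check (printing ``message``) when the
# boolean config attribute named by ``config_name`` is set on the test case.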
def skip_test(config_name, message=''):
    def handle(func):
        def call(self, *args, **kwargs):
            if getattr(self, config_name):
                print(
                    '\n======================================================='
                )
                print('INFO: ' + message)
                print(
                    '=======================================================\n'
                )

            else:
                return func(self, *args, **kwargs)
        return call
    return handle


class ITestCase(testcase.WithAttributes, base.BaseTestCase):
    def setUp(self):
        super(ITestCase, self).setUp()
        self.common_config = cfg.ITConfig().common_config
        self.plugin_config = self.get_plugin_config()
        self._setup_clients()
        self._setup_networks()
        self._setup_volume_params()
        self._setup_flavor()
        self._setup_ssh_access()
        self.TEST_EVENT_LOG = True

        self._image_id, self._ssh_username = (
            self.get_image_id_and_ssh_username())

        telnetlib.Telnet(
            self.common_config.SAHARA_HOST, self.common_config.SAHARA_PORT
        )

    def get_plugin_config(self):
        raise NotImplementedError

    def _setup_ssh_access(self):
        if not self.common_config.PATH_TO_SSH_KEY:
            self.user_keypair_id = self.rand_name(
                self.common_config.USER_KEYPAIR_ID)
            self.private_key = self.nova.keypairs.create(
                self.user_keypair_id).private_key

        else:
            self.user_keypair_id = self.common_config.USER_KEYPAIR_ID
            self.private_key = open(self.common_config.PATH_TO_SSH_KEY).read()

    def _setup_flavor(self):
        if not self.common_config.FLAVOR_ID:
            self.flavor_id = self.nova.flavors.create(
                name=self.rand_name('i-test-flavor'),
                ram=1024,
                vcpus=1,
                disk=10,
                ephemeral=10).id

        else:
            self.flavor_id = self.common_config.FLAVOR_ID

    def _setup_networks(self):
        self.floating_ip_pool = self.common_config.FLOATING_IP_POOL
        self.internal_neutron_net = None
        if self.common_config.NEUTRON_ENABLED:
            self.internal_neutron_net = self.get_internal_neutron_net_id()
            self.floating_ip_pool = (
                self.get_floating_ip_pool_id_for_neutron_net())

    def _setup_volume_params(self):
        self.volumes_per_node = 0
        self.volumes_size = 0
        if not getattr(self.plugin_config, 'SKIP_CINDER_TEST', False):
            self.volumes_per_node = 2
            self.volumes_size = 2

    def _setup_clients(self):
        keystone = keystone_client.Client(
            username=self.common_config.OS_USERNAME,
            password=self.common_config.OS_PASSWORD,
            tenant_name=self.common_config.OS_TENANT_NAME,
            auth_url=self.common_config.OS_AUTH_URL)

        keystone.management_url = self.common_config.OS_AUTH_URL

        tenant_id = [tenant.id for tenant in keystone.tenants.list()
                     if tenant.name == self.common_config.OS_TENANT_NAME][0]

        self.sahara = sahara_client.Client(
            version=self.common_config.SAHARA_API_VERSION,
            username=self.common_config.OS_USERNAME,
            api_key=self.common_config.OS_PASSWORD,
            project_name=self.common_config.OS_TENANT_NAME,
            auth_url=self.common_config.OS_AUTH_URL,
            sahara_url='http://%s:%s/v%s/%s' % (
                self.common_config.SAHARA_HOST,
                self.common_config.SAHARA_PORT,
                self.common_config.SAHARA_API_VERSION,
                tenant_id
            ))

        self.nova = nova_client.Client(
            username=self.common_config.OS_USERNAME,
            api_key=self.common_config.OS_PASSWORD,
            project_id=self.common_config.OS_TENANT_NAME,
            auth_url=self.common_config.OS_AUTH_URL)

        self.neutron = neutron_client.Client(
            username=self.common_config.OS_USERNAME,
            password=self.common_config.OS_PASSWORD,
            tenant_name=self.common_config.OS_TENANT_NAME,
            auth_url=self.common_config.OS_AUTH_URL)

# ------------------------Methods for object creation--------------------------

    def create_node_group_template(self, name, plugin_config, description,
                                   node_processes, node_configs,
                                   volumes_per_node=0, volumes_size=0,
                                   floating_ip_pool=None, flavor_id=None,
                                   **kwargs):
        if not flavor_id:
            flavor_id = self.flavor_id
        data = self.sahara.node_group_templates.create(
            name, plugin_config.PLUGIN_NAME, plugin_config.HADOOP_VERSION,
            flavor_id, description, volumes_per_node, volumes_size,
            node_processes, node_configs, floating_ip_pool, **kwargs)
        node_group_template_id = data.id
        return node_group_template_id

    def create_cluster_template(self, name, plugin_config, description,
                                cluster_configs, node_groups,
                                anti_affinity=None, net_id=None):
        for node_group in node_groups:
            for key, value in node_group.items():
                if value is None:
                    del node_group[key]
        data = self.sahara.cluster_templates.create(
            name, plugin_config.PLUGIN_NAME, plugin_config.HADOOP_VERSION,
            description, cluster_configs, node_groups, anti_affinity, net_id)
        cluster_template_id = data.id
        return cluster_template_id

    def create_cluster(self, name, plugin_config, cluster_template_id,
                       description, cluster_configs,
                       node_groups=None, anti_affinity=None,
                       net_id=None, is_transient=False):
        self.cluster_id = None

        data = self.sahara.clusters.create(
            name, plugin_config.PLUGIN_NAME, plugin_config.HADOOP_VERSION,
            cluster_template_id, self._image_id, is_transient,
            description, cluster_configs, node_groups,
            self.user_keypair_id, anti_affinity, net_id)
        self.cluster_id = data.id
        return self.cluster_id

    def get_cluster_info(self, plugin_config):
        node_ip_list_with_node_processes = (
            self.get_cluster_node_ip_list_with_node_processes(self.cluster_id))
        try:
            node_info = self.get_node_info(node_ip_list_with_node_processes,
                                           plugin_config)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(
                    '\nFailure during check of node process deployment '
                    'on cluster node: ' + six.text_type(e)
                )

        # For example: method "create_cluster_and_get_info" returns
        # {
        #       'node_info': {
        #               'tasktracker_count': 3,
        #               'node_count': 6,
        #               'namenode_ip': '172.18.168.242',
        #               'datanode_count': 3
        #       },
        #       'cluster_id': 'bee5c6a1-411a-4e88-95fc-d1fbdff2bb9d',
        #       'node_ip_list': {
        #               '172.18.168.153': ['tasktracker', 'datanode'],
        #               '172.18.168.208': ['secondarynamenode', 'oozie'],
        #               '172.18.168.93': ['tasktracker'],
        #               '172.18.168.101': ['tasktracker', 'datanode'],
        #               '172.18.168.242': ['namenode', 'jobtracker'],
        #               '172.18.168.167': ['datanode']
        #       },
        #       'plugin_config': <oslo_config.cfg.GroupAttr object at 0x215d9d>
        # }
        return {
            'cluster_id': self.cluster_id,
            'node_ip_list': node_ip_list_with_node_processes,
            'node_info': node_info,
            'plugin_config': plugin_config
        }

# --------Helper methods for cluster info obtaining and its processing---------

    def poll_cluster_state(self, cluster_id):
        data = self.sahara.clusters.get(cluster_id)
        timeout = self.common_config.CLUSTER_CREATION_TIMEOUT * 60

        try:
            with fixtures.Timeout(timeout, gentle=True):
                while True:
                    status = str(data.status)
                    if status == 'Active':
                        break
                    if status == 'Error':
                        self.fail('Cluster state == \'Error\'.')

                    time.sleep(10)
                    data = self.sahara.clusters.get(cluster_id)

        except fixtures.TimeoutException:
            self.fail("Cluster did not return to 'Active' state "
                      "within %d minutes." %
                      self.common_config.CLUSTER_CREATION_TIMEOUT)

        return status

    def get_cluster_node_ip_list_with_node_processes(self, cluster_id):
        data = self.sahara.clusters.get(cluster_id)
        node_groups = data.node_groups
        node_ip_list_with_node_processes = {}
        for node_group in node_groups:
            instances = node_group['instances']
            for instance in instances:
                node_ip = instance['management_ip']
                node_ip_list_with_node_processes[node_ip] = node_group[
                    'node_processes']
        # For example:
        # node_ip_list_with_node_processes = {
        #       '172.18.168.181': ['tasktracker'],
        #       '172.18.168.94': ['secondarynamenode'],
        #       '172.18.168.208': ['namenode', 'jobtracker'],
        #       '172.18.168.93': ['tasktracker', 'datanode'],
        #       '172.18.168.44': ['tasktracker', 'datanode'],
        #       '172.18.168.233': ['datanode']
        # }
        return node_ip_list_with_node_processes

    def put_file_to_hdfs(self, namenode_ip, remote_path, data):
        tmp_file_path = '/tmp/%s' % uuidutils.generate_uuid()[:8]
        self.open_ssh_connection(namenode_ip)
        self.write_file_to(tmp_file_path, data)
        self.execute_command(
            'sudo su - -c "hadoop dfs -copyFromLocal %s %s" %s' % (
                tmp_file_path, remote_path, self.plugin_config.HADOOP_USER))
        self.execute_command('rm -fr %s' % tmp_file_path)
        self.close_ssh_connection()

    def try_telnet(self, host, port):
        try:
            telnetlib.Telnet(host, port)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(
                    '\nTelnet has failed: ' + six.text_type(e) +
                    ' NODE IP: %s, PORT: %s. Passed %s minute(s).'
                    % (host, port, self.common_config.TELNET_TIMEOUT)
                )

    def get_node_info(self, node_ip_list_with_node_processes, plugin_config):
        tasktracker_count = 0
        datanode_count = 0
        timeout = self.common_config.TELNET_TIMEOUT * 60
        with fixtures.Timeout(timeout, gentle=True):
            accessible = False
            proc_with_ports = plugin_config.HADOOP_PROCESSES_WITH_PORTS
            while not accessible:
                accessible = True
                for node_ip, processes in six.iteritems(
                        node_ip_list_with_node_processes):
                    try:
                        self.try_telnet(node_ip, '22')
                    except Exception:
                        accessible = False

                    for process in processes:
                        if process in proc_with_ports:
                            try:
                                self.try_telnet(node_ip,
                                                proc_with_ports[process])
                            except Exception:
                                print('Connection attempt. NODE PROCESS: %s, '
                                      'PORT: %s.' % (
                                          process, proc_with_ports[process]))
                                accessible = False

                if not accessible:
                    time.sleep(1)

        for node_ip, processes in six.iteritems(
                node_ip_list_with_node_processes):
            if plugin_config.PROCESS_NAMES['tt'] in processes:
                tasktracker_count += 1
            if plugin_config.PROCESS_NAMES['dn'] in processes:
                datanode_count += 1
            if plugin_config.PROCESS_NAMES['nn'] in processes:
                namenode_ip = node_ip

        return {
            'namenode_ip': namenode_ip,
            'tasktracker_count': tasktracker_count,
            'datanode_count': datanode_count,
            'node_count': len(node_ip_list_with_node_processes)
        }
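
    # Poll the namenode (within HDFS_INITIALIZATION_TIMEOUT) until the live
    # tasktracker and datanode counts reported by Hadoop match the expected
    # counts in node_info.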
    def await_active_workers_for_namenode(self, node_info, plugin_config):
        self.open_ssh_connection(node_info['namenode_ip'])
        timeout = self.common_config.HDFS_INITIALIZATION_TIMEOUT * 60
        try:
            with fixtures.Timeout(timeout, gentle=True):
                while True:
                    active_tasktracker_count = self.execute_command(
                        'sudo -u %s bash -lc "hadoop job -list-active-trackers'
                        '" | grep "^tracker_" | wc -l'
                        % plugin_config.HADOOP_USER)[1]
                    try:
                        active_tasktracker_count = int(
                            active_tasktracker_count)
                    except ValueError:
                        active_tasktracker_count = -1

                    active_datanode_count = self.execute_command(
                        'sudo -u %s bash -lc "hadoop dfsadmin -report" | '
                        'grep -e "Datanodes available:.*" '
                        '-e "Live datanodes.*" | grep -o "[0-9]*" | head -1'
                        % plugin_config.HADOOP_USER)[1]
                    try:
                        active_datanode_count = int(active_datanode_count)
                    except ValueError:
                        active_datanode_count = -1

                    if (active_tasktracker_count ==
                            node_info['tasktracker_count'] and
                            active_datanode_count ==
                            node_info['datanode_count']):
                        break

                    time.sleep(10)

        except fixtures.TimeoutException:
            self.fail(
                'Tasktracker or datanode cannot be started within '
                '%s minute(s) for namenode.'
                % self.common_config.HDFS_INITIALIZATION_TIMEOUT
            )
        finally:
            self.close_ssh_connection()

    def await_active_tasktracker(self, node_info, plugin_config):
        self.open_ssh_connection(node_info['namenode_ip'])
        for i in range(self.common_config.HDFS_INITIALIZATION_TIMEOUT * 6):
            time.sleep(10)
            active_tasktracker_count = self.execute_command(
                'sudo -u %s bash -lc "hadoop job -list-active-trackers" '
                '| grep "^tracker_" | wc -l'
                % plugin_config.HADOOP_USER)[1]
            active_tasktracker_count = int(active_tasktracker_count)
            if (active_tasktracker_count == node_info['tasktracker_count']):
                break
        else:
            self.fail(
                'Tasktracker or datanode cannot be started within '
                '%s minute(s) for namenode.'
                % self.common_config.HDFS_INITIALIZATION_TIMEOUT)
        self.close_ssh_connection()

    @skip_test('TEST_EVENT_LOG',
               'Testing event log was skipped until 0.7.8 client release')
    @errormsg("Failure while event log testing: ")
    def _test_event_log(self, cluster_id):
        cluster = self.sahara.clusters.get(cluster_id)
        events = self.sahara.events.list(cluster_id)

        invalid_steps = []
        if not events:
            events = []

        for step in cluster.provision_progress:
            if not step['successful']:
                invalid_steps.append(step)

        if len(invalid_steps) > 0 or len(events) > 0:
            events_info = "\n".join(six.text_type(e) for e in events)
            invalid_steps_info = "\n".join(six.text_type(e)
                                           for e in invalid_steps)
            steps_info = "\n".join(six.text_type(e)
                                   for e in cluster.provision_progress)
            self.fail(
                "Issues with event log work: "
                "\n Not removed events: \n\n {events}"
                "\n Incomplete steps: \n\n {invalid_steps}"
                "\n All steps: \n\n {steps}".format(
                    events=events_info,
                    steps=steps_info,
                    invalid_steps=invalid_steps_info))

# --------------------------------Remote---------------------------------------

    def connect_to_swift(self):
        return swift_client.Connection(
            authurl=self.common_config.OS_AUTH_URL,
            user=self.common_config.OS_USERNAME,
            key=self.common_config.OS_PASSWORD,
            tenant_name=self.common_config.OS_TENANT_NAME,
            auth_version=self.common_config.SWIFT_AUTH_VERSION
        )

    def open_ssh_connection(self, host):
        ssh_remote._connect(host, self._ssh_username, self.private_key)

    @staticmethod
    def execute_command(cmd):
        return ssh_remote._execute_command(cmd, get_stderr=True)

    @staticmethod
    def write_file_to(remote_file, data):
        ssh_remote._write_file_to(remote_file, data)

    @staticmethod
    def read_file_from(remote_file):
        return ssh_remote._read_file_from(remote_file)

    @staticmethod
    def close_ssh_connection():
        ssh_remote._cleanup()

    def transfer_helper_conf_file_to_node(self, file_name):
        file = open('sahara/tests/integration/tests/resources/%s' % file_name
                    ).read()
        try:
            self.write_file_to(file_name, file)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(
                    '\nFailure while conf file transferring '
                    'to cluster node: ' + six.text_type(e)
                )

    def transfer_helper_script_to_node(self, script_name, parameter_list=None):
        script = open('sahara/tests/integration/tests/resources/%s'
                      % script_name).read()
        if parameter_list:
            for parameter, value in parameter_list.items():
                script = script.replace(
                    '%s=""' % parameter, '%s=%s' % (parameter, value))
        try:
            self.write_file_to('script.sh', script)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(
                    '\nFailure while helper script transferring '
                    'to cluster node: ' + six.text_type(e)
                )
        self.execute_command('chmod 777 script.sh')

    def transfer_helper_script_to_nodes(self, node_ip_list, script_name,
                                        parameter_list=None):
        for node_ip in node_ip_list:
            self.open_ssh_connection(node_ip)
            self.transfer_helper_script_to_node(script_name, parameter_list)
            self.close_ssh_connection()

# -------------------------------Helper methods--------------------------------

    def get_image_id_and_ssh_username(self):
        def print_error_log(parameter, value):
            print(
                '\nImage with %s "%s" was found in image list but it was '
                'possibly not registered for Sahara. Please, make sure image '
                'was correctly registered.' % (parameter, value)
            )

        def try_get_image_id_and_ssh_username(parameter, value):
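            # NOTE: relies on ``image`` from the enclosing loop; the
            # ``parameter`` and ``value`` arguments are used only for the
            # error message.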
try:
|
||||
return image.id, image.metadata[imgs.PROP_USERNAME]
|
||||
|
||||
except KeyError:
|
||||
with excutils.save_and_reraise_exception():
|
||||
print_error_log(parameter, value)
|
||||
|
||||
images = self.nova.images.list()
|
||||
# If plugin_config.IMAGE_ID is not None then find corresponding image
|
||||
# and return its ID and username. If image not found then handle error
|
||||
if self.plugin_config.IMAGE_ID:
|
||||
for image in images:
|
||||
if image.id == self.plugin_config.IMAGE_ID:
|
||||
return try_get_image_id_and_ssh_username(
|
||||
'ID', self.plugin_config.IMAGE_ID)
|
||||
self.fail(
|
||||
'\n\nImage with ID "%s" not found in image list. Please, make '
|
||||
'sure you specified right image ID.\n' %
|
||||
self.plugin_config.IMAGE_ID)
|
||||
# If plugin_config.IMAGE_NAME is not None then find corresponding image
|
||||
# and return its ID and username. If image not found then handle error
|
||||
if self.plugin_config.IMAGE_NAME:
|
||||
for image in images:
|
||||
if image.name == self.plugin_config.IMAGE_NAME:
|
||||
return try_get_image_id_and_ssh_username(
|
||||
'name', self.plugin_config.IMAGE_NAME)
|
||||
self.fail(
|
||||
'\n\nImage with name "%s" not found in image list. Please, '
|
||||
'make sure you specified right image name.\n'
|
||||
% self.plugin_config.IMAGE_NAME)
|
||||
# If plugin_config.IMAGE_TAG is not None then find corresponding image
|
||||
# and return its ID and username. If image not found then handle error
|
||||
if self.plugin_config.IMAGE_TAG:
|
||||
for image in images:
|
||||
if (image.metadata.get(imgs.PROP_TAG + '%s'
|
||||
% self.plugin_config.IMAGE_TAG)) and (
|
||||
image.metadata.get(imgs.PROP_TAG + str(
|
||||
self.plugin_config.PLUGIN_NAME))):
|
||||
return try_get_image_id_and_ssh_username(
|
||||
'tag', self.plugin_config.IMAGE_TAG
|
||||
)
|
||||
self.fail(
|
||||
'\n\nImage with tag "%s" not found in list of registered '
|
||||
'images for Sahara. Please, make sure tag "%s" was added to '
|
||||
'image and image was correctly registered.\n'
|
||||
% (self.plugin_config.IMAGE_TAG, self.plugin_config.IMAGE_TAG)
|
||||
)
|
||||
# If plugin_config.IMAGE_ID, plugin_config.IMAGE_NAME and
|
||||
# plugin_config.IMAGE_TAG are None then image is chosen
|
||||
# by tag "sahara_i_tests". If image has tag "sahara_i_tests"
|
||||
# (at the same time image ID, image name and image tag were not
|
||||
# specified in configuration file of integration tests) then return
|
||||
# its ID and username. Found image will be chosen as image for tests.
|
||||
# If image with tag "sahara_i_tests" not found then handle error
|
||||
for image in images:
|
||||
if (image.metadata.get(imgs.PROP_TAG + 'sahara_i_tests')) and (
|
||||
image.metadata.get(imgs.PROP_TAG + str(
|
||||
self.plugin_config.PLUGIN_NAME))):
|
||||
try:
|
||||
return image.id, image.metadata[imgs.PROP_USERNAME]
|
||||
|
||||
except KeyError:
|
||||
with excutils.save_and_reraise_exception():
|
||||
print(
|
||||
'\nNone of parameters of image (ID, name, tag)'
|
||||
' was specified in configuration file of '
|
||||
'integration tests. That is why there was '
|
||||
'attempt to choose image by tag '
|
||||
'"sahara_i_tests" and image with such tag '
|
||||
'was found in image list but it was possibly '
|
||||
'not registered for Sahara. Please, make '
|
||||
'sure image was correctly registered.'
|
||||
)
|
||||
self.fail(
|
||||
'\n\nNone of parameters of image (ID, name, tag) was specified in '
|
||||
'configuration file of integration tests. That is why there was '
|
||||
'attempt to choose image by tag "sahara_i_tests" but image with '
|
||||
'such tag not found in list of registered images for Sahara. '
|
||||
'Please, make sure image was correctly registered. Please, '
|
||||
'specify one of parameters of image (ID, name or tag) in '
|
||||
'configuration file of integration tests.\n'
|
||||
)
|
||||
|
||||
def get_floating_ip_pool_id_for_neutron_net(self):
|
||||
# Find corresponding floating IP pool by its name and get its ID.
|
||||
# If pool not found then handle error
|
||||
try:
|
||||
floating_ip_pool = self.neutron.list_networks(
|
||||
name=self.common_config.FLOATING_IP_POOL)
|
||||
floating_ip_pool_id = floating_ip_pool['networks'][0]['id']
|
||||
return floating_ip_pool_id
|
||||
|
||||
except IndexError:
|
||||
with excutils.save_and_reraise_exception():
|
||||
raise Exception(
|
||||
'\nFloating IP pool \'%s\' not found in pool list. '
|
||||
'Please, make sure you specified right floating IP pool.'
|
||||
% self.common_config.FLOATING_IP_POOL
|
||||
)
|
||||
|
||||
def get_internal_neutron_net_id(self):
|
||||
# Find corresponding internal Neutron network by its name and get
|
||||
# its ID. If network not found then handle error
|
||||
try:
|
||||
internal_neutron_net = self.neutron.list_networks(
|
||||
name=self.common_config.INTERNAL_NEUTRON_NETWORK)
|
||||
internal_neutron_net_id = internal_neutron_net['networks'][0]['id']
|
||||
return internal_neutron_net_id
|
||||
|
||||
except IndexError:
|
||||
with excutils.save_and_reraise_exception():
|
||||
raise Exception(
|
||||
'\nInternal Neutron network \'%s\' not found in network '
|
||||
'list. Please, make sure you specified right network name.'
|
||||
% self.common_config.INTERNAL_NEUTRON_NETWORK
|
||||
)

    def delete_cluster(self, cluster_id):
        if not self.common_config.RETAIN_CLUSTER_AFTER_TEST:
            return self.delete_resource(self.sahara.clusters.delete,
                                        cluster_id)

    def delete_cluster_template(self, cluster_template_id):
        if not self.common_config.RETAIN_CLUSTER_AFTER_TEST:
            return self.delete_resource(self.sahara.cluster_templates.delete,
                                        cluster_template_id)

    def delete_node_group_template(self, node_group_template_id):
        if not self.common_config.RETAIN_CLUSTER_AFTER_TEST:
            return self.delete_resource(
                self.sahara.node_group_templates.delete,
                node_group_template_id)
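
    # The two helpers below implement delete-and-poll: delete_resource()
    # keeps re-issuing the delete call until is_resource_deleted() sees the
    # API answer with 404, i.e. until the resource is actually gone.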
    def is_resource_deleted(self, method, *args, **kwargs):
        try:
            method(*args, **kwargs)
        except client_base.APIException as ex:
            return ex.error_code == 404

        return False

    def delete_resource(self, method, *args, **kwargs):
        with fixtures.Timeout(self.common_config.DELETE_RESOURCE_TIMEOUT * 60,
                              gentle=True):
            while True:
                if self.is_resource_deleted(method, *args, **kwargs):
                    break
                time.sleep(5)

    @staticmethod
    def delete_swift_container(swift, container):
        objects = [obj['name'] for obj in swift.get_container(container)[1]]
        for obj in objects:
            swift.delete_object(container, obj)
        swift.delete_container(container)

    @staticmethod
    def print_error_log(message, exception=None):
        print(
            '\n\n!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!* '
            'ERROR LOG *!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*'
            '!*!\n'
        )
        print(message + six.text_type(exception))
        print(
            '\n!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!* END OF '
            'ERROR LOG *!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*'
            '!*!\n\n'
        )

    def capture_error_log_from_cluster_node(self, log_file):
        print(
            '\n\n!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!* CAPTURED ERROR '
            'LOG FROM CLUSTER NODE *!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*'
            '!*!\n'
        )
        print(self.read_file_from(log_file))
        print(
            '\n!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!* END OF CAPTURED ERROR '
            'LOG FROM CLUSTER NODE *!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*'
            '!*!\n\n'
        )

    @staticmethod
    def rand_name(name):
        return '%s-%s' % (name, uuidutils.generate_uuid()[:8])

    def tearDown(self):
        if not self.common_config.PATH_TO_SSH_KEY:
            self.nova.keypairs.delete(self.user_keypair_id)
        if not self.common_config.FLAVOR_ID:
            self.nova.flavors.delete(self.flavor_id)

        super(ITestCase, self).tearDown()

@@ -1,102 +0,0 @@
# Copyright (c) 2015 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_utils import excutils
import six

from sahara.tests.integration.tests import base


class CheckServicesTest(base.ITestCase):
    @base.skip_test('SKIP_CHECK_SERVICES_TEST', message='Test for Services'
                    ' checking was skipped.')
    def check_hbase_availability(self, cluster_info):
        parameters = ['create_data', 'check_get_data', 'check_delete_data']
        self._check_service_availability(cluster_info, 'hbase_service_test.sh',
                                         script_parameters=parameters,
                                         conf_files=[])

    @base.skip_test('SKIP_CHECK_SERVICES_TEST', message='Test for Services'
                    ' checking was skipped.')
    def check_flume_availability(self, cluster_info):
        self._check_service_availability(cluster_info, 'flume_service_test.sh',
                                         script_parameters=[],
                                         conf_files=['flume.data',
                                                     'flume.conf'])

    @base.skip_test('SKIP_CHECK_SERVICES_TEST', message='Test for Services'
                    ' checking was skipped.')
    def check_sqoop2_availability(self, cluster_info):
        self._check_service_availability(cluster_info,
                                         'sqoop2_service_test.sh')

    @base.skip_test('SKIP_CHECK_SERVICES_TEST', message='Test for Services'
                    ' checking was skipped.')
    def check_key_value_store_availability(self, cluster_info):
        namenode_ip = cluster_info['node_info']['namenode_ip']
        para_create_table = 'create_table -ip %s' % namenode_ip
        para_create_solr = 'create_solr_collection -ip %s' % namenode_ip
        para_add_indexer = 'add_indexer -ip %s' % namenode_ip
        para_create_data = 'create_data -ip %s' % namenode_ip
        para_check_solr = 'check_solr -ip %s' % namenode_ip
        para_remove_data = 'remove_data -ip %s' % namenode_ip
        parameters = [para_create_table, para_create_solr, para_add_indexer,
                      para_create_data, para_check_solr, para_remove_data]
        self._check_service_availability(cluster_info,
                                         'key_value_store_service_test.sh',
                                         script_parameters=parameters,
                                         conf_files=['key_value_'
                                                     'store_indexer.xml'])

    @base.skip_test('SKIP_CHECK_SERVICES_TEST', message='Test for Services'
                    ' checking was skipped.')
    def check_solr_availability(self, cluster_info):
        self._check_service_availability(cluster_info, 'solr_service_test.sh')

    @base.skip_test('SKIP_CHECK_SERVICES_TEST', message='Test for Services'
                    ' checking was skipped.')
    def check_sentry_availability(self, cluster_info):
        self._check_service_availability(cluster_info,
                                         'sentry_service_test.sh')

    @base.skip_test('SKIP_CHECK_SERVICES_TEST', message='Test for Services'
                    ' checking was skipped.')
    def check_impala_services(self, cluster_info):
        namenode_ip = cluster_info['node_info']['namenode_ip']
        parameter = 'query -ip %s' % namenode_ip
        self._check_service_availability(cluster_info, 'impala_test_script.sh',
                                         script_parameters=[parameter],
                                         conf_files=[])
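
    # Shared driver for the service checks above. The helper script is
    # copied to the namenode (it is assumed transfer_helper_script_to_node()
    # saves it there as ./script.sh) and run once per script parameter.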
    def _check_service_availability(self, cluster_info, helper_script,
                                    script_parameters=[], conf_files=[]):
        namenode_ip = cluster_info['node_info']['namenode_ip']
        self.open_ssh_connection(namenode_ip)
        try:
            self.transfer_helper_script_to_node(helper_script)
            if conf_files:
                for conf_file in conf_files:
                    self.transfer_helper_conf_file_to_node(conf_file)
            if script_parameters:
                for parameter in script_parameters:
                    script_command = './script.sh %s' % parameter
                    self.execute_command(script_command)
            else:
                self.execute_command('./script.sh')
        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(six.text_type(e))
        finally:
            self.close_ssh_connection()

@@ -1,66 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from sahara.tests.integration.tests import base


class CinderVolumeTest(base.ITestCase):
    def _get_node_list_with_volumes(self, cluster_info):
        data = self.sahara.clusters.get(cluster_info['cluster_id'])
        node_groups = data.node_groups
        node_list_with_volumes = []
        for node_group in node_groups:
            if node_group['volumes_per_node'] != 0:
                for instance in node_group['instances']:
                    node_with_volume = dict()
                    node_with_volume['node_ip'] = instance['management_ip']
                    node_with_volume['volume_count'] = node_group[
                        'volumes_per_node']
                    node_with_volume['volume_mount_prefix'] = node_group[
                        'volume_mount_prefix']
                    node_list_with_volumes.append(node_with_volume)
        # For example:
        # node_list_with_volumes = [
        #     {
        #         'volume_mount_prefix': '/volumes/disk',
        #         'volume_count': 2,
        #         'node_ip': '172.18.168.168'
        #     },
        #     {
        #         'volume_mount_prefix': '/volumes/disk',
        #         'volume_count': 2,
        #         'node_ip': '172.18.168.138'
        #     }
        # ]
        return node_list_with_volumes
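
    # Mounted volumes are counted on each node with
    # 'mount | grep <prefix> | wc -l' and compared against the
    # volumes_per_node value declared by the node group.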
    @base.skip_test('SKIP_CINDER_TEST', message='Test for Cinder was skipped.')
    def cinder_volume_testing(self, cluster_info):
        node_list_with_volumes = self._get_node_list_with_volumes(cluster_info)
        for node_with_volumes in node_list_with_volumes:
            self.open_ssh_connection(node_with_volumes['node_ip'])
            volume_count_on_node = int(
                self.execute_command(
                    'mount | grep %s | wc -l' % node_with_volumes[
                        'volume_mount_prefix']
                )[1])
            self.assertEqual(
                node_with_volumes['volume_count'], volume_count_on_node,
                'Some volumes were not mounted on the node.\n'
                'Expected count of volumes mounted on the node: %s.\n'
                'Actual count of volumes mounted on the node: %s.'
                % (node_with_volumes['volume_count'], volume_count_on_node)
            )
            self.close_ssh_connection()

@@ -1,150 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_utils import excutils

from sahara.tests.integration.tests import base


NN_CONFIG = {'Name Node Heap Size': 512}
SNN_CONFIG = {'Secondary Name Node Heap Size': 521}
JT_CONFIG = {'Job Tracker Heap Size': 514}

DN_CONFIG = {'Data Node Heap Size': 513}
TT_CONFIG = {'Task Tracker Heap Size': 515}

OOZIE_CONFIG = {'Oozie Heap Size': 520,
                'oozie.notification.url.connection.timeout': 10001}

CLUSTER_HDFS_CONFIG = {'dfs.replication': 1}
CLUSTER_MR_CONFIG = {'mapred.map.tasks.speculative.execution': False,
                     'mapred.child.java.opts': '-Xmx500m'}


CONFIG_MAP = {
    'namenode': {
        'service': 'HDFS',
        'config': NN_CONFIG
    },
    'secondarynamenode': {
        'service': 'HDFS',
        'config': SNN_CONFIG
    },
    'jobtracker': {
        'service': 'MapReduce',
        'config': JT_CONFIG
    },
    'datanode': {
        'service': 'HDFS',
        'config': DN_CONFIG
    },
    'tasktracker': {
        'service': 'MapReduce',
        'config': TT_CONFIG
    },
    'oozie': {
        'service': 'JobFlow',
        'config': OOZIE_CONFIG
    }
}


class ClusterConfigTest(base.ITestCase):
    @staticmethod
    def _get_node_configs(node_group, process):
        return node_group['node_configs'][CONFIG_MAP[process]['service']]

    @staticmethod
    def _get_config_from_config_map(process):
        return CONFIG_MAP[process]['config']

    def _compare_configs(self, expected_config, actual_config):
        self.assertEqual(
            expected_config, actual_config,
            'Config comparison failed.\n'
            'Expected config: %s.\n'
            'Actual config: %s.'
            % (str(expected_config), str(actual_config))
        )
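
    # On-node check: the helper script (uploaded as ./script.sh by
    # cluster_config_testing() below) is assumed to read the named config
    # on the node and fail when its live value differs from the expected
    # one.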
    def _compare_configs_on_cluster_node(self, config, value):
        config = config.replace(' ', '')
        try:
            self.execute_command('./script.sh %s -value %s' % (config, value))

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(
                    '\nFailure while config comparison on cluster node: '
                    + str(e)
                )
                self.capture_error_log_from_cluster_node(
                    '/tmp/config-test-log.txt'
                )

    def _check_configs_for_node_groups(self, node_groups):
        for node_group in node_groups:
            for process in node_group['node_processes']:
                if process in CONFIG_MAP:
                    self._compare_configs(
                        self._get_config_from_config_map(process),
                        self._get_node_configs(node_group, process)
                    )

    def _check_config_application_on_cluster_nodes(
            self, node_ip_list_with_node_processes):
        for node_ip, processes in node_ip_list_with_node_processes.items():
            self.open_ssh_connection(node_ip)
            for config, value in CLUSTER_MR_CONFIG.items():
                self._compare_configs_on_cluster_node(config, value)
            for config, value in CLUSTER_HDFS_CONFIG.items():
                self._compare_configs_on_cluster_node(config, value)
            for process in processes:
                if process in CONFIG_MAP:
                    for config, value in self._get_config_from_config_map(
                            process).items():
                        self._compare_configs_on_cluster_node(config, value)
            self.close_ssh_connection()

    @base.skip_test('SKIP_CLUSTER_CONFIG_TEST',
                    message='Test for cluster configs was skipped.')
    def cluster_config_testing(self, cluster_info):
        cluster_id = cluster_info['cluster_id']
        data = self.sahara.clusters.get(cluster_id)
        self._compare_configs(
            {'Enable Swift': True}, data.cluster_configs['general']
        )
        self._compare_configs(
            CLUSTER_HDFS_CONFIG, data.cluster_configs['HDFS']
        )
        self._compare_configs(
            CLUSTER_MR_CONFIG, data.cluster_configs['MapReduce']
        )
        node_groups = data.node_groups
        self._check_configs_for_node_groups(node_groups)
        node_ip_list_with_node_processes = (
            self.get_cluster_node_ip_list_with_node_processes(cluster_id))
        try:
            self.transfer_helper_script_to_nodes(
                node_ip_list_with_node_processes,
                'cluster_config_test_script.sh'
            )

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(str(e))
        self._check_config_application_on_cluster_nodes(
            node_ip_list_with_node_processes
        )

@@ -1,358 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random
import string
import time
import uuid

import fixtures
import six

from sahara.service.edp import job_utils
from sahara.tests.integration.tests import base
from sahara.utils import edp


class EDPJobInfo(object):
    PIG_PATH = 'etc/edp-examples/edp-pig/trim-spaces/'
    JAVA_PATH = 'etc/edp-examples/edp-java/'
    MAPREDUCE_PATH = 'etc/edp-examples/edp-mapreduce/'
    SPARK_PATH = 'etc/edp-examples/edp-spark/'
    HIVE_PATH = 'etc/edp-examples/edp-hive/'
    SHELL_PATH = 'etc/edp-examples/edp-shell/'

    HADOOP2_JAVA_PATH = 'etc/edp-examples/hadoop2/edp-java/'

    def read_hive_example_script(self):
        return open(self.HIVE_PATH + 'script.q').read()

    def read_hive_example_input(self):
        return open(self.HIVE_PATH + 'input.csv').read()

    def read_pig_example_script(self):
        return open(self.PIG_PATH + 'example.pig').read()

    def read_pig_example_jar(self):
        return open(self.PIG_PATH + 'udf.jar').read()

    def read_java_example_lib(self, hadoop_vers=1):
        if hadoop_vers == 1:
            return open(self.JAVA_PATH + 'edp-java.jar').read()
        return open(self.HADOOP2_JAVA_PATH + (
            'hadoop-mapreduce-examples-2.4.1.jar')).read()

    def java_example_configs(self, hadoop_vers=1):
        if hadoop_vers == 1:
            return {
                'configs': {
                    'edp.java.main_class':
                        'org.openstack.sahara.examples.WordCount'
                }
            }

        return {
            'configs': {
                'edp.java.main_class':
                    'org.apache.hadoop.examples.QuasiMonteCarlo'
            },
            'args': ['10', '10']
        }

    def read_mapreduce_example_jar(self):
        return open(self.MAPREDUCE_PATH + 'edp-mapreduce.jar').read()

    def mapreduce_example_configs(self):
        return {
            'configs': {
                'dfs.replication': '1',  # for Hadoop 1 only
                'mapred.mapper.class': 'org.apache.oozie.example.SampleMapper',
                'mapred.reducer.class':
                    'org.apache.oozie.example.SampleReducer'
            }
        }

    def pig_example_configs(self):
        return {
            'configs': {
                'dfs.replication': '1'  # for Hadoop 1 only
            }
        }

    def mapreduce_streaming_configs(self):
        return {
            "configs": {
                "edp.streaming.mapper": "/bin/cat",
                "edp.streaming.reducer": "/usr/bin/wc"
            }
        }

    def read_shell_example_script(self):
        return open(self.SHELL_PATH + 'shell-example.sh').read()

    def read_shell_example_text_file(self):
        return open(self.SHELL_PATH + 'shell-example.txt').read()

    def shell_example_configs(self):
        return {
            "params": {
                "EXTRA_FILE": "*text"
            },
            "args": ["/tmp/edp-integration-shell-output.txt"]
        }

    def read_spark_example_jar(self):
        return open(self.SPARK_PATH + 'spark-example.jar').read()

    def spark_example_configs(self):
        return {
            'configs': {
                'edp.java.main_class':
                    'org.apache.spark.examples.SparkPi'
            },
            'args': ['4']
        }


class EDPTest(base.ITestCase):
    def setUp(self):
        super(EDPTest, self).setUp()
        self.edp_info = EDPJobInfo()

    def _create_data_source(self, name, data_type, url, description=''):
        source_id = self.sahara.data_sources.create(
            name, description, data_type, url, self.common_config.OS_USERNAME,
            self.common_config.OS_PASSWORD).id
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.sahara.data_sources.delete, source_id)
        return source_id

    def _create_job_binary_internals(self, name, data):
        job_binary_id = self.sahara.job_binary_internals.create(name, data).id
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.sahara.job_binary_internals.delete,
                            job_binary_id)
        return job_binary_id

    def _create_job_binary(self, name, url, extra=None, description=None):
        job_binary_id = self.sahara.job_binaries.create(
            name, url, description or '', extra or {}).id
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.sahara.job_binaries.delete, job_binary_id)
        return job_binary_id

    def _create_job(self, name, job_type, mains, libs):
        job_id = self.sahara.jobs.create(name, job_type, mains, libs,
                                         description='').id
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.sahara.jobs.delete, job_id)
        return job_id

    def _get_job_status(self, job_id):
        return self.sahara.job_executions.get(job_id).info['status']
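
    # Polls every job execution in job_ids until all report SUCCEEDED.
    # Any FAILED, KILLED or DONEWITHERROR status fails the test immediately,
    # and the overall timeout scales with the number of jobs.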
    def poll_jobs_status(self, job_ids):
        timeout = self.common_config.JOB_LAUNCH_TIMEOUT * 60 * len(job_ids)
        try:
            with fixtures.Timeout(timeout, gentle=True):
                success = False
                while not success:
                    success = True
                    for job_id in job_ids:
                        status = self._get_job_status(job_id)
                        if status in [edp.JOB_STATUS_FAILED,
                                      edp.JOB_STATUS_KILLED,
                                      edp.JOB_STATUS_DONEWITHERROR]:
                            self.fail(
                                'Job \'%s\' has status "%s".'
                                % (job_id, status))
                        if status != edp.JOB_STATUS_SUCCEEDED:
                            success = False

                    time.sleep(5)
        except fixtures.TimeoutException:
            self.fail(
                "Jobs did not reach the '{0}' status within {1:d} minute(s)."
                .format(edp.JOB_STATUS_SUCCEEDED, timeout / 60))
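
    # One job binary is created per payload: either uploaded to the given
    # Swift container (swift:// URL plus credentials) or stored in Sahara's
    # internal DB (internal-db:// URL).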
    def _create_job_binaries(self, job_data_list, job_binary_internal_list,
                             job_binary_list, swift_connection=None,
                             container_name=None):
        for job_data in job_data_list:
            name = 'binary-job-%s' % str(uuid.uuid4())[:8]
            if isinstance(job_data, dict):
                for key, value in job_data.items():
                    name = 'binary-job-%s.%s' % (
                        str(uuid.uuid4())[:8], key)
                    data = value
            else:
                data = job_data

            if swift_connection:
                swift_connection.put_object(container_name, name, data)
                job_binary = self._create_job_binary(
                    name, 'swift://%s.sahara/%s' % (container_name, name),
                    extra={
                        'user': self.common_config.OS_USERNAME,
                        'password': self.common_config.OS_PASSWORD
                    }
                )
                job_binary_list.append(job_binary)
            else:
                job_binary_internal_list.append(
                    self._create_job_binary_internals(name, data)
                )
                job_binary_list.append(
                    self._create_job_binary(
                        name, 'internal-db://%s' % job_binary_internal_list[-1]
                    )
                )

    def _enable_substitution(self, configs):
        if "configs" not in configs:
            configs["configs"] = {}

        configs['configs'][job_utils.DATA_SOURCE_SUBST_NAME] = True
        configs['configs'][job_utils.DATA_SOURCE_SUBST_UUID] = True
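
    # End-to-end Hive check: the example input is put into HDFS, input and
    # output data sources and the Hive script are registered, and a Hive
    # job execution is launched; its ID is returned for status polling.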
    @base.skip_test('SKIP_EDP_TEST', 'Test for EDP was skipped.')
    def check_edp_hive(self):
        hdfs_input_path = '/user/hive/warehouse/input.csv'
        # put input data to HDFS
        self.put_file_to_hdfs(
            self.cluster_info['node_info']['namenode_ip'],
            hdfs_input_path, self.edp_info.read_hive_example_input())

        input_id = self._create_data_source(self.rand_name('hive-input'),
                                            'hdfs', hdfs_input_path)
        output_id = self._create_data_source(self.rand_name('hive-output'),
                                             'hdfs',
                                             '/user/hive/warehouse/output')
        script_id = self._create_job_binary_internals(
            self.rand_name('hive-script'),
            self.edp_info.read_hive_example_script())
        job_binary_id = self._create_job_binary(self.rand_name('hive-edp'),
                                                'internal-db://%s' % script_id)
        job_id = self._create_job(self.rand_name('edp-test-hive'),
                                  edp.JOB_TYPE_HIVE,
                                  [job_binary_id], [])
        job_execution_id = self.sahara.job_executions.create(
            job_id, self.cluster_id, input_id, output_id, {}).id
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.sahara.job_executions.delete,
                            job_execution_id)
        return job_execution_id
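
    # Generic EDP runner: generates random input in a fresh Swift container,
    # registers data sources and job binaries, launches a job execution of
    # the given type and returns its ID for later status polling.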
    @base.skip_test('SKIP_EDP_TEST', 'Test for EDP was skipped.')
    def edp_testing(self, job_type, job_data_list, lib_data_list=None,
                    configs=None, pass_input_output_args=False,
                    swift_binaries=False, hdfs_local_output=False):
        job_data_list = job_data_list or []
        lib_data_list = lib_data_list or []
        configs = configs or {}

        swift = self.connect_to_swift()
        container_name = 'Edp-test-%s' % str(uuid.uuid4())[:8]
        swift.put_container(container_name)
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.delete_swift_container, swift, container_name)
        swift.put_object(
            container_name, 'input', ''.join(
                random.choice(':' + ' ' + '\n' + string.ascii_lowercase)
                for x in six.moves.range(10000)
            )
        )

        input_id = None
        output_id = None
        job_id = None
        job_execution = None
        job_binary_list = []
        lib_binary_list = []
        job_binary_internal_list = []

        swift_input_url = 'swift://%s.sahara/input' % container_name
        if hdfs_local_output:
            # This will create a file in HDFS under the user
            # executing the job (i.e. /user/hadoop/Edp-test-xxxx-out)
            output_type = "hdfs"
            output_url = container_name + "-out"
        else:
            output_type = "swift"
            output_url = 'swift://%s.sahara/output' % container_name

        input_name = 'input-%s' % str(uuid.uuid4())[:8]
        input_id = self._create_data_source(input_name,
                                            'swift', swift_input_url)

        output_name = 'output-%s' % str(uuid.uuid4())[:8]
        output_id = self._create_data_source(output_name,
                                             output_type,
                                             output_url)

        if job_data_list:
            if swift_binaries:
                self._create_job_binaries(job_data_list,
                                          job_binary_internal_list,
                                          job_binary_list,
                                          swift_connection=swift,
                                          container_name=container_name)
            else:
                self._create_job_binaries(job_data_list,
                                          job_binary_internal_list,
                                          job_binary_list)

        if lib_data_list:
            if swift_binaries:
                self._create_job_binaries(lib_data_list,
                                          job_binary_internal_list,
                                          lib_binary_list,
                                          swift_connection=swift,
                                          container_name=container_name)
            else:
                self._create_job_binaries(lib_data_list,
                                          job_binary_internal_list,
                                          lib_binary_list)

        job_id = self._create_job(
            'Edp-test-job-%s' % str(uuid.uuid4())[:8], job_type,
            job_binary_list, lib_binary_list)
        if not configs:
            configs = {}

        # TODO(tmckay): for spark we don't have support for swift
        # yet. When we do, we'll need something here to set up
        # swift paths and we can use a spark wordcount job

        # Append the input/output paths with the swift configs
        # if the caller has requested it...
        if edp.compare_job_type(
                job_type, edp.JOB_TYPE_JAVA) and pass_input_output_args:
            self._enable_substitution(configs)
            input_arg = job_utils.DATA_SOURCE_PREFIX + input_name
            output_arg = output_id
            if "args" in configs:
                configs["args"].extend([input_arg, output_arg])
            else:
                configs["args"] = [input_arg, output_arg]

        job_execution = self.sahara.job_executions.create(
            job_id, self.cluster_id, input_id, output_id,
            configs=configs)
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.sahara.job_executions.delete,
                            job_execution.id)

        return job_execution.id

@@ -1,450 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from testtools import testcase

from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import base as b
from sahara.tests.integration.tests import check_services
from sahara.tests.integration.tests import cinder
from sahara.tests.integration.tests import cluster_configs
from sahara.tests.integration.tests import edp
from sahara.tests.integration.tests import map_reduce
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
from sahara.utils import edp as utils_edp


class CDHGatingTest(check_services.CheckServicesTest,
                    cluster_configs.ClusterConfigTest,
                    map_reduce.MapReduceTest, swift.SwiftTest,
                    scaling.ScalingTest, cinder.CinderVolumeTest, edp.EDPTest):

    cdh_config = cfg.ITConfig().cdh_config
    SKIP_MAP_REDUCE_TEST = cdh_config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = cdh_config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = cdh_config.SKIP_SCALING_TEST
    SKIP_CINDER_TEST = cdh_config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = cdh_config.SKIP_EDP_TEST
    SKIP_CHECK_SERVICES_TEST = cdh_config.SKIP_CHECK_SERVICES_TEST

    def setUp(self):
        super(CDHGatingTest, self).setUp()
        self.cluster_id = None
        self.cluster_template_id = None
        self.services_cluster_template_id = None

    def get_plugin_config(self):
        return cfg.ITConfig().cdh_config

    @b.errormsg("Failure while 'nm-dn' node group template creation: ")
    def _create_nm_dn_ng_template(self):
        template = {
            'name': 'test-node-group-template-cdh-nm-dn',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for CDH plugin',
            'node_processes': ['YARN_NODEMANAGER', 'HDFS_DATANODE'],
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': {},
            'flavor_id': self.plugin_config.LARGE_FLAVOR
        }
        self.ng_tmpl_nm_dn_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_nm_dn_id)

    @b.errormsg("Failure while cluster template creation: ")
    def _create_cluster_template(self):
        cl_config = {
            'general': {
                'CDH5 repo list URL': self.plugin_config.CDH_REPO_LIST_URL,
                'CM5 repo list URL': self.plugin_config.CM_REPO_LIST_URL,
                'CDH5 repo key URL (for debian-based only)':
                    self.plugin_config.CDH_APT_KEY_URL,
                'CM5 repo key URL (for debian-based only)':
                    self.plugin_config.CM_APT_KEY_URL,
                'Enable Swift': True
            }
        }
        template = {
            'name': 'test-cluster-template-cdh',
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for CDH plugin',
            'cluster_configs': cl_config,
            'node_groups': [
                {
                    'name': 'manager-node',
                    'flavor_id': self.plugin_config.MANAGERNODE_FLAVOR,
                    'node_processes': ['CLOUDERA_MANAGER'],
                    'floating_ip_pool': self.floating_ip_pool,
                    'auto_security_group': True,
                    'count': 1
                },
                {
                    'name': 'master-node-rm-nn',
                    'flavor_id': self.plugin_config.MANAGERNODE_FLAVOR,
                    'node_processes': ['HDFS_NAMENODE',
                                       'YARN_RESOURCEMANAGER'],
                    'floating_ip_pool': self.floating_ip_pool,
                    'auto_security_group': True,
                    'count': 1
                },
                {
                    'name': 'master-node-oo-hs-snn-hm-hs2',
                    'flavor_id': self.plugin_config.MANAGERNODE_FLAVOR,
                    'node_processes': ['OOZIE_SERVER', 'YARN_JOBHISTORY',
                                       'HDFS_SECONDARYNAMENODE',
                                       'HIVE_METASTORE', 'HIVE_SERVER2'],
                    'floating_ip_pool': self.floating_ip_pool,
                    'auto_security_group': True,
                    'count': 1
                },
                {
                    'name': 'worker-node-nm-dn',
                    'node_group_template_id': self.ng_tmpl_nm_dn_id,
                    'count': 1
                },
            ],
            'net_id': self.internal_neutron_net
        }
        self.cluster_template_id = self.create_cluster_template(**template)
        self.addCleanup(self.delete_cluster_template, self.cluster_template_id)

    @b.errormsg("Failure while cluster creation: ")
    def _create_cluster(self):
        cluster_name = '%s-%s' % (self.common_config.CLUSTER_NAME,
                                  self.plugin_config.PLUGIN_NAME)
        cluster = {
            'name': cluster_name,
            'plugin_config': self.plugin_config,
            'cluster_template_id': self.cluster_template_id,
            'description': 'test cluster',
            'cluster_configs': {
                'HDFS': {
                    'dfs_replication': 1
                }
            }
        }
        self.cluster_id = self.create_cluster(**cluster)
        self.addCleanup(self.delete_cluster, self.cluster_id)
        self.poll_cluster_state(self.cluster_id)
        self.cluster_info = self.get_cluster_info(self.plugin_config)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while 's-nn' node group template creation: ")
    def _create_s_nn_ng_template(self):
        template = {
            'name': 'test-node-group-template-cdh-s-nn',
            'plugin_config': self.cdh_config,
            'description': 'test node group template for CDH plugin',
            'node_processes': ['HDFS_NAMENODE', 'YARN_RESOURCEMANAGER',
                               'YARN_NODEMANAGER', 'HDFS_DATANODE',
                               'HBASE_MASTER', 'CLOUDERA_MANAGER',
                               'ZOOKEEPER_SERVER', 'HBASE_REGIONSERVER',
                               'YARN_JOBHISTORY', 'OOZIE_SERVER',
                               'FLUME_AGENT', 'HIVE_METASTORE',
                               'HIVE_SERVER2', 'HUE_SERVER', 'SENTRY_SERVER',
                               'SOLR_SERVER', 'SQOOP_SERVER',
                               'KEY_VALUE_STORE_INDEXER', 'HIVE_WEBHCAT',
                               'IMPALA_CATALOGSERVER',
                               'SPARK_YARN_HISTORY_SERVER',
                               'IMPALA_STATESTORE', 'IMPALAD'],
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': {}
        }
        self.ng_tmpl_s_nn_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_s_nn_id)

    @b.errormsg("Failure while 's-dn' node group template creation: ")
    def _create_s_dn_ng_template(self):
        template = {
            'name': 'test-node-group-template-cdh-s-dn',
            'plugin_config': self.cdh_config,
            'description': 'test node group template for CDH plugin',
            'node_processes': ['HDFS_SECONDARYNAMENODE', 'HDFS_DATANODE',
                               'HBASE_REGIONSERVER', 'FLUME_AGENT',
                               'IMPALAD'],
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': {}
        }
        self.ng_tmpl_s_dn_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template,
                        self.ng_tmpl_s_dn_id)

    @b.errormsg("Failure while services cluster template creation: ")
    def _create_services_cluster_template(self):
        s_cl_config = {
            'general': {
                'CDH5 repo list URL': self.plugin_config.CDH_REPO_LIST_URL,
                'CM5 repo list URL': self.plugin_config.CM_REPO_LIST_URL,
                'CDH5 repo key URL (for debian-based only)':
                    self.plugin_config.CDH_APT_KEY_URL,
                'CM5 repo key URL (for debian-based only)':
                    self.plugin_config.CM_APT_KEY_URL,
                'Enable Swift': True
            }
        }
        template = {
            'name': 'test-services-cluster-template-cdh',
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for CDH plugin',
            'cluster_configs': s_cl_config,
            'node_groups': [
                {
                    'name': 'worker-node-s-nn',
                    'node_group_template_id': self.ng_tmpl_s_nn_id,
                    'count': 1
                },
                {
                    'name': 'worker-node-s-dn',
                    'node_group_template_id': self.ng_tmpl_s_dn_id,
                    'count': 1
                }
            ],
            'net_id': self.internal_neutron_net
        }
        self.services_cluster_template_id = self.create_cluster_template(
            **template)
        self.addCleanup(self.delete_cluster_template,
                        self.services_cluster_template_id)

    @b.errormsg("Failure while services cluster creation: ")
    def _create_services_cluster(self):
        cluster_name = '%s-%s' % (self.common_config.CLUSTER_NAME,
                                  self.plugin_config.PLUGIN_NAME)
        cluster = {
            'name': cluster_name,
            'plugin_config': self.plugin_config,
            'cluster_template_id': self.services_cluster_template_id,
            'description': 'services test cluster',
            'cluster_configs': {
                'HDFS': {
                    'dfs_replication': 1
                }
            }
        }
        self.cluster_id = self.create_cluster(**cluster)
        self.poll_cluster_state(self.cluster_id)
        self.cluster_info = self.get_cluster_info(self.plugin_config)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)
        self.addCleanup(self.delete_cluster, self.cluster_id)

    @b.errormsg("Failure while Cinder testing: ")
    def _check_cinder(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing: ")
    def _check_mapreduce(self):
        self.map_reduce_testing(self.cluster_info, check_log=False)

    @b.errormsg("Failure during check of Swift availability: ")
    def _check_swift(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing: ")
    def _check_edp(self):
        self.poll_jobs_status(list(self._run_edp_test()))
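
    # _run_edp_test() is a generator: each yield is a job execution ID,
    # which _check_edp() collects into a list and polls to completion.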
    def _run_edp_test(self):
        # check pig
        pig_job = self.edp_info.read_pig_example_script()
        pig_lib = self.edp_info.read_pig_example_jar()
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_PIG,
            job_data_list=[{'pig': pig_job}],
            lib_data_list=[{'jar': pig_lib}],
            swift_binaries=False,
            hdfs_local_output=True)

        # check mapreduce
        mapreduce_jar = self.edp_info.read_mapreduce_example_jar()
        mapreduce_configs = self.edp_info.mapreduce_example_configs()
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE,
            job_data_list=[],
            lib_data_list=[{'jar': mapreduce_jar}],
            configs=mapreduce_configs,
            swift_binaries=False,
            hdfs_local_output=True)

        # check mapreduce streaming
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE_STREAMING,
            job_data_list=[],
            lib_data_list=[],
            configs=self.edp_info.mapreduce_streaming_configs(),
            swift_binaries=False,
            hdfs_local_output=True)

        # check hive
        yield self.check_edp_hive()

        # check Java
        java_jar = self.edp_info.read_java_example_lib(2)
        java_configs = self.edp_info.java_example_configs(2)
        yield self.edp_testing(
            utils_edp.JOB_TYPE_JAVA,
            job_data_list=[],
            lib_data_list=[{'jar': java_jar}],
            configs=java_configs)

        # check Shell
        shell_script_data = self.edp_info.read_shell_example_script()
        shell_file_data = self.edp_info.read_shell_example_text_file()
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_SHELL,
            job_data_list=[{'script': shell_script_data}],
            lib_data_list=[{'text': shell_file_data}],
            configs=self.edp_info.shell_example_configs())

    @b.errormsg("Failure while check services testing: ")
    def _check_services(self):
        # check HBase
        self.check_hbase_availability(self.cluster_info)
        # check flume
        self.check_flume_availability(self.cluster_info)
        # check sqoop2
        self.check_sqoop2_availability(self.cluster_info)
        # check key value store
        self.check_key_value_store_availability(self.cluster_info)
        # check solr
        self.check_solr_availability(self.cluster_info)
        # check Impala
        self.check_impala_services(self.cluster_info)
        # check sentry
        self.check_sentry_availability(self.cluster_info)
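
    # Scaling plan format: 'resize' entries set an existing node group to
    # the given instance count; 'add' entries create a new node group from
    # a node group template ID.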
    @b.errormsg("Failure while cluster scaling: ")
    def _check_scaling(self):
        change_list = [
            {
                'operation': 'resize',
                'info': ['worker-node-nm-dn', 1]
            },
            {
                'operation': 'resize',
                'info': ['worker-node-dn', 0]
            },
            {
                'operation': 'resize',
                'info': ['worker-node-nm', 0]
            },
            {
                'operation': 'add',
                'info': [
                    'new-worker-node-nm', 1, '%s' % self.ng_tmpl_nm_id
                ]
            },
            {
                'operation': 'add',
                'info': [
                    'new-worker-node-dn', 1, '%s' % self.ng_tmpl_dn_id
                ]
            }
        ]

        self.cluster_info = self.cluster_scaling(self.cluster_info,
                                                 change_list)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while Cinder testing after cluster scaling: ")
    def _check_cinder_after_scaling(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing after cluster scaling: ")
    def _check_mapreduce_after_scaling(self):
        self.map_reduce_testing(self.cluster_info, check_log=False)

    @b.errormsg(
        "Failure during check of Swift availability after cluster scaling: ")
    def _check_swift_after_scaling(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing after cluster scaling: ")
    def _check_edp_after_scaling(self):
        self._check_edp()

    @testcase.skipIf(
        cfg.ITConfig().cdh_config.SKIP_ALL_TESTS_FOR_PLUGIN,
        "All tests for CDH plugin were skipped")
    @testcase.attr('cdh')
    def test_cdh_plugin_gating(self):
        self._success = False
        self._create_nm_dn_ng_template()
        self._create_cluster_template()
        self._create_cluster()
        self._test_event_log(self.cluster_id)

        self._check_cinder()
        self._check_mapreduce()
        self._check_swift()
        self._check_edp()

        if not self.plugin_config.SKIP_SCALING_TEST:
            self._check_scaling()
            self._test_event_log(self.cluster_id)
            self._check_cinder_after_scaling()
            self._check_edp_after_scaling()
            self._check_mapreduce_after_scaling()
            self._check_swift_after_scaling()

        self._success = True

    @testcase.skipIf(
        cfg.ITConfig().cdh_config.SKIP_CHECK_SERVICES_TEST,
        "All services tests for CDH plugin were skipped")
    @testcase.attr('cdh')
    def test_cdh_plugin_services_gating(self):
        self._success = False
        self._create_s_nn_ng_template()
        self._create_s_dn_ng_template()
        self._create_services_cluster_template()
        self._create_services_cluster()
        self._check_services()
        self._success = True

    def print_manager_log(self):
        if not self.cluster_id:
            return

        manager_node = None
        for ng in self.sahara.clusters.get(self.cluster_id).node_groups:
            if 'CLOUDERA_MANAGER' in ng['node_processes']:
                manager_node = ng['instances'][0]['management_ip']
                break

        if not manager_node:
            print("Cloudera Manager node not found")
            return

        self.open_ssh_connection(manager_node)
        try:
            log = self.execute_command('sudo cat /var/log/cloudera-scm-server/'
                                       'cloudera-scm-server.log')[1]
        finally:
            self.close_ssh_connection()

        print("\n\nCLOUDERA MANAGER LOGS\n\n")
        print(log)
        print("\n\nEND OF CLOUDERA MANAGER LOGS\n\n")

    def tearDown(self):
        if not self._success:
            self.print_manager_log()
        super(CDHGatingTest, self).tearDown()

@@ -1,223 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from testtools import testcase

from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import base as b
from sahara.tests.integration.tests import edp
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
from sahara.utils import edp as utils_edp


class HDP2GatingTest(swift.SwiftTest, scaling.ScalingTest,
                     edp.EDPTest):

    config = cfg.ITConfig().hdp2_config
    SKIP_EDP_TEST = config.SKIP_EDP_TEST
    SKIP_SWIFT_TEST = config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = config.SKIP_SCALING_TEST

    def setUp(self):
        super(HDP2GatingTest, self).setUp()
        self.cluster_id = None
        self.cluster_template_id = None

    def get_plugin_config(self):
        return cfg.ITConfig().hdp2_config

    @b.errormsg("Failure while 'rm-nn' node group template creation: ")
    def _create_rm_nn_ng_template(self):
        template = {
            'name': 'test-node-group-template-hdp2-rm-nn',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for HDP plugin',
            'node_processes': self.plugin_config.MASTER_NODE_PROCESSES,
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': {}
        }
        self.ng_tmpl_rm_nn_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_rm_nn_id)

    @b.errormsg("Failure while 'nm-dn' node group template creation: ")
    def _create_nm_dn_ng_template(self):
        template = {
            'name': 'test-node-group-template-hdp2-nm-dn',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for HDP plugin',
            'node_processes': self.plugin_config.WORKER_NODE_PROCESSES,
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': {}
        }
        self.ng_tmpl_nm_dn_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_nm_dn_id)

    @b.errormsg("Failure while cluster template creation: ")
    def _create_cluster_template(self):
        template = {
            'name': 'test-cluster-template-hdp2',
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for HDP plugin',
            'cluster_configs': {
                'YARN': {
                    'yarn.log-aggregation-enable': False
                }
            },
            'node_groups': [
                {
                    'name': 'master-node-dn',
                    'node_group_template_id': self.ng_tmpl_rm_nn_id,
                    'count': 1
                },
                {
                    'name': 'worker-node-nm',
                    'node_group_template_id': self.ng_tmpl_nm_dn_id,
                    'count': 3
                }
            ],
            'net_id': self.internal_neutron_net
        }
        self.cluster_template_id = self.create_cluster_template(**template)
        self.addCleanup(self.delete_cluster_template, self.cluster_template_id)

    @b.errormsg("Failure while cluster creation: ")
    def _create_cluster(self):
        cluster_name = '%s-%s-v2' % (self.common_config.CLUSTER_NAME,
                                     self.plugin_config.PLUGIN_NAME)
        cluster = {
            'name': cluster_name,
            'plugin_config': self.plugin_config,
            'cluster_template_id': self.cluster_template_id,
            'description': 'test cluster',
            'cluster_configs': {}
        }
        self.cluster_id = self.create_cluster(**cluster)
        self.addCleanup(self.delete_cluster, self.cluster_id)
        self.poll_cluster_state(self.cluster_id)
        self.cluster_info = self.get_cluster_info(self.plugin_config)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure during check of Swift availability: ")
    def _check_swift(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing: ")
    def _check_edp(self):
        self.poll_jobs_status(list(self._run_edp_test()))

    def _run_edp_test(self):
        # check pig
        pig_job = self.edp_info.read_pig_example_script()
        pig_lib = self.edp_info.read_pig_example_jar()
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_PIG,
            job_data_list=[{'pig': pig_job}],
            lib_data_list=[{'jar': pig_lib}],
            swift_binaries=True,
            hdfs_local_output=True)

        # check mapreduce
        mapreduce_jar = self.edp_info.read_mapreduce_example_jar()
        mapreduce_configs = self.edp_info.mapreduce_example_configs()
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE,
            job_data_list=[],
            lib_data_list=[{'jar': mapreduce_jar}],
            configs=mapreduce_configs,
            swift_binaries=True,
            hdfs_local_output=True)

        # check mapreduce streaming
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE_STREAMING,
            job_data_list=[],
            lib_data_list=[],
            configs=self.edp_info.mapreduce_streaming_configs())

        # check java
        java_jar = self.edp_info.read_java_example_lib(2)
        java_configs = self.edp_info.java_example_configs(2)
        yield self.edp_testing(
            utils_edp.JOB_TYPE_JAVA,
            job_data_list=[],
            lib_data_list=[{'jar': java_jar}],
            configs=java_configs)

        # check shell
        shell_script_data = self.edp_info.read_shell_example_script()
        shell_file_data = self.edp_info.read_shell_example_text_file()
        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_SHELL,
            job_data_list=[{'script': shell_script_data}],
            lib_data_list=[{'text': shell_file_data}],
            configs=self.edp_info.shell_example_configs())

    @b.errormsg("Failure while cluster scaling: ")
    def _check_scaling(self):
        datanode_count_after_resizing = (
            self.cluster_info['node_info']['datanode_count']
            + self.plugin_config.SCALE_EXISTING_NG_COUNT)
        change_list = [
            {
                'operation': 'resize',
                'info': ['worker-node-nm',
                         datanode_count_after_resizing]
            },
            {
                'operation': 'add',
                'info': ['new-worker-node-tt-dn',
                         self.plugin_config.SCALE_NEW_NG_COUNT,
                         '%s' % self.ng_tmpl_nm_dn_id]
            }
        ]

        self.cluster_info = self.cluster_scaling(self.cluster_info,
                                                 change_list)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg(
        "Failure during check of Swift availability after cluster scaling: ")
    def _check_swift_after_scaling(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing after cluster scaling: ")
    def _check_edp_after_scaling(self):
        self._check_edp()

    @testcase.attr('hdp2')
    @testcase.skipIf(config.SKIP_ALL_TESTS_FOR_PLUGIN,
                     'All tests for HDP2 plugin were skipped')
    def test_hdp2_plugin_gating(self):
        self._create_rm_nn_ng_template()
        self._create_nm_dn_ng_template()
        self._create_cluster_template()
        self._create_cluster()
        self._test_event_log(self.cluster_id)
        self._check_swift()
        self._check_edp()

        if not self.plugin_config.SKIP_SCALING_TEST:
            self._check_scaling()
            self._test_event_log(self.cluster_id)
            self._check_swift_after_scaling()
            self._check_edp_after_scaling()

    def tearDown(self):
        super(HDP2GatingTest, self).tearDown()

@@ -1,220 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from testtools import testcase

from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import cinder
from sahara.tests.integration.tests import edp
from sahara.tests.integration.tests import map_reduce
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
from sahara.utils import edp as utils_edp


class HDPGatingTest(cinder.CinderVolumeTest, edp.EDPTest,
                    map_reduce.MapReduceTest, swift.SwiftTest,
                    scaling.ScalingTest):
    config = cfg.ITConfig().hdp_config
    SKIP_CINDER_TEST = config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = config.SKIP_EDP_TEST
    SKIP_MAP_REDUCE_TEST = config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = config.SKIP_SCALING_TEST

    def get_plugin_config(self):
        return cfg.ITConfig().hdp_config

    @testcase.skipIf(config.SKIP_ALL_TESTS_FOR_PLUGIN,
                     'All tests for HDP plugin were skipped')
    @testcase.attr('hdp1')
    def test_hdp_plugin_gating(self):

        # --------------------------CLUSTER CREATION---------------------------

        # ------------------"tt-dn" node group template creation---------------

        node_group_template_tt_dn_id = self.create_node_group_template(
            name='test-node-group-template-hdp-tt-dn',
            plugin_config=self.plugin_config,
            description='test node group template for HDP plugin',
            volumes_per_node=self.volumes_per_node,
            volumes_size=self.volumes_size,
            node_processes=self.plugin_config.WORKER_NODE_PROCESSES,
            node_configs={},
            floating_ip_pool=self.floating_ip_pool,
            auto_security_group=True
        )
        self.addCleanup(self.delete_node_group_template,
                        node_group_template_tt_dn_id)

        # --------------------------Cluster template creation------------------

        cluster_template_id = self.create_cluster_template(
            name='test-cluster-template-hdp',
            plugin_config=self.plugin_config,
            description='test cluster template for HDP plugin',
            cluster_configs={},
            node_groups=[
                dict(
                    name='master-node-jt-nn',
                    flavor_id=self.flavor_id,
                    node_processes=(
                        self.plugin_config.MASTER_NODE_PROCESSES),
                    node_configs={},
                    floating_ip_pool=self.floating_ip_pool,
                    count=1,
                    auto_security_group=True
                ),
                dict(
                    name='worker-node-tt-dn',
                    node_group_template_id=node_group_template_tt_dn_id,
                    count=3)
            ],
            net_id=self.internal_neutron_net
        )
        self.addCleanup(self.delete_cluster_template, cluster_template_id)

        # ------------------------------Cluster creation-----------------------

        cluster_name = (self.common_config.CLUSTER_NAME + '-' +
                        self.plugin_config.PLUGIN_NAME)

        cluster_id = self.create_cluster(
            name=cluster_name,
            plugin_config=self.plugin_config,
            cluster_template_id=cluster_template_id,
            description='test cluster',
            cluster_configs={}
        )
        self.poll_cluster_state(cluster_id)
        cluster_info = self.get_cluster_info(self.plugin_config)
        self.await_active_workers_for_namenode(cluster_info['node_info'],
                                               self.plugin_config)

        self.addCleanup(self.delete_cluster, cluster_id)
# --------------------------------EVENT LOG TESTING---------------------------
|
||||
|
||||
self._test_event_log(cluster_id)
|
||||
|
||||
# --------------------------------CINDER TESTING-------------------------------
|
||||
|
||||
self.cinder_volume_testing(cluster_info)
|
||||
|
||||
# ---------------------------------EDP TESTING---------------------------------
|
||||
|
||||
pig_job_data = self.edp_info.read_pig_example_script()
|
||||
pig_lib_data = self.edp_info.read_pig_example_jar()
|
||||
|
||||
mapreduce_jar_data = self.edp_info.read_mapreduce_example_jar()
|
||||
|
||||
shell_script_data = self.edp_info.read_shell_example_script()
|
||||
shell_file_data = self.edp_info.read_shell_example_text_file()
|
||||
|
||||
# This is a modified version of WordCount that takes swift configs
|
||||
java_lib_data = self.edp_info.read_java_example_lib()
|
||||
|
||||
job_ids = []
|
||||
job_id = self.edp_testing(
|
||||
job_type=utils_edp.JOB_TYPE_PIG,
|
||||
job_data_list=[{'pig': pig_job_data}],
|
||||
lib_data_list=[{'jar': pig_lib_data}],
|
||||
swift_binaries=True,
|
||||
hdfs_local_output=True)
|
||||
job_ids.append(job_id)
|
||||
|
||||
job_id = self.edp_testing(
|
||||
job_type=utils_edp.JOB_TYPE_MAPREDUCE,
|
||||
job_data_list=[],
|
||||
lib_data_list=[{'jar': mapreduce_jar_data}],
|
||||
configs=self.edp_info.mapreduce_example_configs(),
|
||||
swift_binaries=True,
|
||||
hdfs_local_output=True)
|
||||
job_ids.append(job_id)
|
||||
|
||||
job_id = self.edp_testing(
|
||||
job_type=utils_edp.JOB_TYPE_MAPREDUCE_STREAMING,
|
||||
job_data_list=[],
|
||||
lib_data_list=[],
|
||||
configs=self.edp_info.mapreduce_streaming_configs())
|
||||
job_ids.append(job_id)
|
||||
|
||||
job_id = self.edp_testing(
|
||||
job_type=utils_edp.JOB_TYPE_JAVA,
|
||||
job_data_list=[],
|
||||
lib_data_list=[{'jar': java_lib_data}],
|
||||
configs=self.edp_info.java_example_configs(),
|
||||
pass_input_output_args=True)
|
||||
job_ids.append(job_id)
|
||||
|
||||
job_id = self.edp_testing(
|
||||
job_type=utils_edp.JOB_TYPE_SHELL,
|
||||
job_data_list=[{'script': shell_script_data}],
|
||||
lib_data_list=[{'text': shell_file_data}],
|
||||
configs=self.edp_info.shell_example_configs())
|
||||
job_ids.append(job_id)
|
||||
|
||||
self.poll_jobs_status(job_ids)
|
||||
|
||||
|
||||
# -----------------------------MAP REDUCE TESTING------------------------------
|
||||
|
||||
self.map_reduce_testing(cluster_info)
|
||||
|
||||
# --------------------------CHECK SWIFT AVAILABILITY---------------------------
|
||||
|
||||
self.check_swift_availability(cluster_info)
|
||||
|
||||
# -------------------------------CLUSTER SCALING-------------------------------
|
||||
|
||||
if not self.plugin_config.SKIP_SCALING_TEST:
|
||||
datanode_count_after_resizing = (
|
||||
cluster_info['node_info']['datanode_count']
|
||||
+ self.plugin_config.SCALE_EXISTING_NG_COUNT)
|
||||
change_list = [
|
||||
{
|
||||
'operation': 'resize',
|
||||
'info': ['worker-node-tt-dn',
|
||||
datanode_count_after_resizing]
|
||||
},
|
||||
{
|
||||
'operation': 'add',
|
||||
'info': [
|
||||
'new-worker-node-tt-dn',
|
||||
self.plugin_config.SCALE_NEW_NG_COUNT,
|
||||
'%s' % node_group_template_tt_dn_id
|
||||
]
|
||||
}
|
||||
]
|
||||
new_cluster_info = self.cluster_scaling(cluster_info,
|
||||
change_list)
|
||||
self.await_active_workers_for_namenode(
|
||||
new_cluster_info['node_info'], self.plugin_config)
|
||||
|
||||
# --------------------------------EVENT LOG TESTING---------------------------
|
||||
self._test_event_log(cluster_id)
|
||||
|
||||
# -------------------------CINDER TESTING AFTER SCALING-----------------------
|
||||
|
||||
self.cinder_volume_testing(new_cluster_info)
|
||||
|
||||
# ----------------------MAP REDUCE TESTING AFTER SCALING-----------------------
|
||||
|
||||
self.map_reduce_testing(new_cluster_info)
|
||||
|
||||
# -------------------CHECK SWIFT AVAILABILITY AFTER SCALING--------------------
|
||||
|
||||
self.check_swift_availability(new_cluster_info)
|
|

@@ -1,70 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from testtools import testcase

from sahara.tests.integration.configs import config as cfg
import sahara.tests.integration.tests.gating.test_mapr_gating as mapr_test


class MapR311GatingTest(mapr_test.MapRGatingTest):
    mapr_config = cfg.ITConfig().mapr_311_config
    SKIP_MAP_REDUCE_TEST = mapr_config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = mapr_config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = mapr_config.SKIP_SCALING_TEST
    SKIP_CINDER_TEST = mapr_config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = mapr_config.SKIP_EDP_TEST

    def get_plugin_config(self):
        return cfg.ITConfig().mapr_311_config

    def setUp(self):
        super(MapR311GatingTest, self).setUp()
        self._tt_name = 'tasktracker'
        self._mr_version = 1
        self._mkdir_cmd = 'sudo -u %(user)s hadoop fs -mkdir %(path)s'
        self._node_processes = [
            'TaskTracker',
            'JobTracker',
            'FileServer',
            'CLDB',
            'ZooKeeper',
            'Oozie',
            'Webserver'
        ]
        self._master_node_processes = [
            'Metrics',
            'Webserver',
            'ZooKeeper',
            'HTTPFS',
            'TaskTracker',
            'JobTracker',
            'Oozie',
            'FileServer',
            'CLDB',
        ]
        self._worker_node_processes = [
            'TaskTracker',
            'HiveServer2',
            'HiveMetastore',
            'FileServer',
        ]

    @testcase.skipIf(
        cfg.ITConfig().mapr_311_config.SKIP_ALL_TESTS_FOR_PLUGIN,
        "All tests for MapR plugin were skipped")
    @testcase.attr('mapr311')
    def test_mapr_plugin_gating(self):
        super(MapR311GatingTest, self).test_mapr_plugin_gating()

@@ -1,66 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from testtools import testcase

from sahara.tests.integration.configs import config as cfg
import sahara.tests.integration.tests.gating.test_mapr_gating as mapr_test


class MapR401MRv1GatingTest(mapr_test.MapRGatingTest):
    mapr_config = cfg.ITConfig().mapr_401mrv1_config
    SKIP_MAP_REDUCE_TEST = mapr_config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = mapr_config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = mapr_config.SKIP_SCALING_TEST
    SKIP_CINDER_TEST = mapr_config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = mapr_config.SKIP_EDP_TEST

    def get_plugin_config(self):
        return MapR401MRv1GatingTest.mapr_config

    def setUp(self):
        super(MapR401MRv1GatingTest, self).setUp()
        self._tt_name = 'tasktracker'
        self._mr_version = 1
        self._node_processes = [
            'TaskTracker',
            'JobTracker',
            'FileServer',
            'CLDB',
            'ZooKeeper',
            'Oozie',
            'Webserver'
        ]
        self._master_node_processes = [
            'Metrics',
            'Webserver',
            'ZooKeeper',
            'TaskTracker',
            'JobTracker',
            'Oozie',
            'FileServer',
            'CLDB',
        ]
        self._worker_node_processes = [
            'TaskTracker',
            'FileServer',
        ]

    @testcase.skipIf(
        cfg.ITConfig().mapr_401mrv1_config.SKIP_ALL_TESTS_FOR_PLUGIN,
        "All tests for MapR plugin were skipped")
    @testcase.attr('mapr401mrv1')
    def test_mapr_plugin_gating(self):
        super(MapR401MRv1GatingTest, self).test_mapr_plugin_gating()

@@ -1,53 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from testtools import testcase

from sahara.tests.integration.configs import config as cfg
import sahara.tests.integration.tests.gating.test_mapr_gating as mapr_test


class MapR401MRv2GatingTest(mapr_test.MapRGatingTest):
    mapr_config = cfg.ITConfig().mapr_401mrv2_config
    SKIP_MAP_REDUCE_TEST = mapr_config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = mapr_config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = mapr_config.SKIP_SCALING_TEST
    SKIP_CINDER_TEST = mapr_config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = mapr_config.SKIP_EDP_TEST

    def get_plugin_config(self):
        return MapR401MRv2GatingTest.mapr_config

    def setUp(self):
        super(MapR401MRv2GatingTest, self).setUp()
        self._tt_name = 'nodemanager'
        self._mr_version = 2
        self._node_processes = [
            'NodeManager',
            'ResourceManager',
            'HistoryServer',
            'FileServer',
            'CLDB',
            'ZooKeeper',
            'Oozie',
            'Webserver'
        ]

    @testcase.skipIf(
        cfg.ITConfig().mapr_401mrv2_config.SKIP_ALL_TESTS_FOR_PLUGIN,
        "All tests for MapR plugin were skipped")
    @testcase.attr('mapr401mrv2')
    def test_mapr_plugin_gating(self):
        super(MapR401MRv2GatingTest, self).test_mapr_plugin_gating()

@@ -1,85 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from testtools import testcase

from sahara.tests.integration.configs import config as cfg
import sahara.tests.integration.tests.gating.test_mapr_gating as mapr_test


class MapR402MRv1GatingTest(mapr_test.MapRGatingTest):
    mapr_config = cfg.ITConfig().mapr_402mrv1_config
    SKIP_MAP_REDUCE_TEST = mapr_config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = mapr_config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = mapr_config.SKIP_SCALING_TEST
    SKIP_CINDER_TEST = mapr_config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = mapr_config.SKIP_EDP_TEST

    def get_plugin_config(self):
        return MapR402MRv1GatingTest.mapr_config

    def setUp(self):
        super(MapR402MRv1GatingTest, self).setUp()
        self._tt_name = 'tasktracker'
        self._mr_version = 1
        self._node_processes = [
            'TaskTracker',
            'JobTracker',
            'FileServer',
            'CLDB',
            'ZooKeeper',
            'Oozie',
            'Webserver',
            'Metrics',

            'Sqoop2-Server',
            'Sqoop2-Client',
            'Pig',
            'Mahout',
            'Hue',
            'HTTPFS',
            'HiveMetastore',
            'HiveServer2',
            'Flume',
            'Drill'
        ]
        self._master_node_processes = [
            'Flume',
            'Hue',
            'Metrics',
            'Webserver',
            'ZooKeeper',
            'HTTPFS',
            'TaskTracker',
            'JobTracker',
            'Oozie',
            'FileServer',
            'CLDB',
        ]
        self._worker_node_processes = [
            'TaskTracker',
            'HiveServer2',
            'HiveMetastore',
            'FileServer',
            'Sqoop2-Client',
            'Sqoop2-Server',
        ]

    @testcase.skipIf(
        cfg.ITConfig().mapr_402mrv1_config.SKIP_ALL_TESTS_FOR_PLUGIN,
        "All tests for MapR plugin were skipped")
    @testcase.attr('mapr402mrv1')
    def test_mapr_plugin_gating(self):
        super(MapR402MRv1GatingTest, self).test_mapr_plugin_gating()

@@ -1,87 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from testtools import testcase

from sahara.tests.integration.configs import config as cfg
import sahara.tests.integration.tests.gating.test_mapr_gating as mapr_test


class MapR402MRv2GatingTest(mapr_test.MapRGatingTest):
    mapr_config = cfg.ITConfig().mapr_402mrv2_config
    SKIP_MAP_REDUCE_TEST = mapr_config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = mapr_config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = mapr_config.SKIP_SCALING_TEST
    SKIP_CINDER_TEST = mapr_config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = mapr_config.SKIP_EDP_TEST

    def get_plugin_config(self):
        return MapR402MRv2GatingTest.mapr_config

    def setUp(self):
        super(MapR402MRv2GatingTest, self).setUp()
        self._tt_name = 'nodemanager'
        self._mr_version = 2
        self._node_processes = [
            'NodeManager',
            'ResourceManager',
            'HistoryServer',
            'FileServer',
            'CLDB',
            'ZooKeeper',
            'Oozie',
            'Webserver',
            'Metrics',

            'Sqoop2-Server',
            'Sqoop2-Client',
            'Pig',
            'Mahout',
            'Hue',
            'HTTPFS',
            'HiveMetastore',
            'HiveServer2',
            'Flume',
            'Drill'
        ]
        self._master_node_processes = [
            'Flume',
            'Hue',
            'Metrics',
            'Webserver',
            'ZooKeeper',
            'HTTPFS',
            'NodeManager',
            'HistoryServer',
            'ResourceManager',
            'Oozie',
            'FileServer',
            'CLDB',
        ]
        self._worker_node_processes = [
            'NodeManager',
            'HiveServer2',
            'HiveMetastore',
            'FileServer',
            'Sqoop2-Client',
            'Sqoop2-Server',
        ]

    @testcase.skipIf(
        cfg.ITConfig().mapr_402mrv2_config.SKIP_ALL_TESTS_FOR_PLUGIN,
        "All tests for MapR plugin were skipped")
    @testcase.attr('mapr402mrv2')
    def test_mapr_plugin_gating(self):
        super(MapR402MRv2GatingTest, self).test_mapr_plugin_gating()

@@ -1,487 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import random
import string
import time
import uuid

import fixtures
from oslo_serialization import jsonutils as json
import six

from sahara.tests.integration.tests import base as b
from sahara.tests.integration.tests import cinder
from sahara.tests.integration.tests import edp
from sahara.tests.integration.tests import map_reduce
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
from sahara.utils import edp as utils_edp


SERVICES_COUNT_CMD = 'maprcli dashboard info -json'


class MapRGatingTest(map_reduce.MapReduceTest, swift.SwiftTest,
                     scaling.ScalingTest, cinder.CinderVolumeTest,
                     edp.EDPTest):
    def _get_mapr_cluster_info(self):
        return json.loads(self.execute_command(SERVICES_COUNT_CMD)[1])

    def _get_active_count(self, service):
        info = self._get_mapr_cluster_info()
        services = info['data'][0]['services']
        return services[service]['active'] if service in services else -1

    def _get_tasktracker_count(self):
        return self._get_active_count(self._tt_name)

    def _get_datanode_count(self):
        return self._get_active_count('fileserver')

    def await_active_workers_for_namenode(self, node_info, plugin_config):
        tt_count = node_info['tasktracker_count']
        dn_count = node_info['datanode_count']
        self.open_ssh_connection(node_info['namenode_ip'])
        timeout = self.common_config.HDFS_INITIALIZATION_TIMEOUT * 60
        try:
            with fixtures.Timeout(timeout, gentle=True):
                while True:
                    active_tt_count = self._get_tasktracker_count()
                    active_dn_count = self._get_datanode_count()

                    all_tt_started = active_tt_count == tt_count
                    all_dn_started = active_dn_count == dn_count

                    if all_tt_started and all_dn_started:
                        break

                    time.sleep(10)

        except fixtures.TimeoutException:
            self.fail(
                'Tasktracker or datanode cannot be started within '
                '%s minute(s) for namenode.'
                % self.common_config.HDFS_INITIALIZATION_TIMEOUT
            )
        finally:
            self.close_ssh_connection()

    def create_mapr_fs_dir(self, ip, path):
        args = {'user': self.plugin_config.HADOOP_USER, 'path': path}
        self.open_ssh_connection(ip)
        self.execute_command(self._mkdir_cmd % args)
        self.close_ssh_connection()

    def put_file_to_mapr_fs(self, ip, path, data):
        local = '/tmp/%s' % six.text_type(uuid.uuid4())
        args = {
            'user': self.plugin_config.HADOOP_USER,
            'mfs': path,
            'local': local,
        }
        command = 'sudo -u %(user)s hadoop fs -put %(local)s %(mfs)s' % args
        self.open_ssh_connection(ip)
        self.write_file_to(local, data)
        self.execute_command(command)
        self.execute_command('rm -fr %s' % local)
        self.close_ssh_connection()

    @b.skip_test('SKIP_EDP_TEST', 'Test for EDP was skipped.')
    def edp_testing(self, job_type, job_data_list, lib_data_list=None,
                    configs=None, pass_input_output_args=False,
                    swift_binaries=False, hdfs_local_output=False):

        job_data_list = job_data_list or []
        lib_data_list = lib_data_list or []
        configs = configs or {}

        test_id = 'edp-mapr-test-%s' % str(uuid.uuid4())[:8]
        swift = self.connect_to_swift()
        container = test_id
        swift.put_container(container)

        input_folder = '/%s' % test_id
        cldb_ip = self.cluster_info['node_info']['namenode_ip']
        self.create_mapr_fs_dir(cldb_ip, input_folder)

        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.delete_swift_container, swift, container)

        input_data = ''.join(
            random.choice(':' + ' ' + '\n' + string.ascii_lowercase)
            for x in six.moves.range(10000)
        )
        input_file = '%s/input' % input_folder
        self.put_file_to_mapr_fs(cldb_ip, input_file, input_data)

        input_id = None
        output_id = None
        job_binary_list = []
        lib_binary_list = []
        job_binary_internal_list = []

        maprfs_input_url = 'maprfs://%s' % input_file
        maprfs_output_url = 'maprfs://%s/output' % (input_folder + '-out')

        if not utils_edp.compare_job_type(job_type,
                                          utils_edp.JOB_TYPE_JAVA,
                                          utils_edp.JOB_TYPE_SPARK):
            input_id = self._create_data_source(
                'input-%s' % str(uuid.uuid4())[:8], 'maprfs',
                maprfs_input_url)
            output_id = self._create_data_source(
                'output-%s' % str(uuid.uuid4())[:8], 'maprfs',
                maprfs_output_url)

        if job_data_list:
            if swift_binaries:
                self._create_job_binaries(job_data_list,
                                          job_binary_internal_list,
                                          job_binary_list,
                                          swift_connection=swift,
                                          container_name=container)
            else:
                self._create_job_binaries(job_data_list,
                                          job_binary_internal_list,
                                          job_binary_list)

        if lib_data_list:
            if swift_binaries:
                self._create_job_binaries(lib_data_list,
                                          job_binary_internal_list,
                                          lib_binary_list,
                                          swift_connection=swift,
                                          container_name=container)
            else:
                self._create_job_binaries(lib_data_list,
                                          job_binary_internal_list,
                                          lib_binary_list)

        job_id = self._create_job(
            'edp-test-job-%s' % str(uuid.uuid4())[:8], job_type,
            job_binary_list, lib_binary_list)
        if not configs:
            configs = {}

        if utils_edp.compare_job_type(
                job_type, utils_edp.JOB_TYPE_JAVA) and pass_input_output_args:
            self._enable_substitution(configs)
            if "args" in configs:
                configs["args"].extend([maprfs_input_url, maprfs_output_url])
            else:
                configs["args"] = [maprfs_input_url, maprfs_output_url]

        job_execution = self.sahara.job_executions.create(
            job_id, self.cluster_id, input_id, output_id,
            configs=configs)
        if not self.common_config.RETAIN_EDP_AFTER_TEST:
            self.addCleanup(self.sahara.job_executions.delete,
                            job_execution.id)

        return job_execution.id

    def setUp(self):
        super(MapRGatingTest, self).setUp()
        self.cluster_id = None
        self.cluster_template_id = None
        self._mkdir_cmd = 'sudo -u %(user)s hadoop fs -mkdir -p %(path)s'
        self._tt_name = None
        self._mr_version = None
        self._node_processes = None
        self._master_node_processes = None
        self._worker_node_processes = None

    ng_params = {
    }

    @b.errormsg("Failure while 'single' node group template creation: ")
    def _create_single_ng_template(self):
        template = {
            'name': 'test-node-group-template-mapr-single',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for MapR plugin',
            'node_processes': self._node_processes,
            'floating_ip_pool': self.floating_ip_pool,
            'node_configs': self.ng_params
        }
        self.ng_tmpl_single_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template,
                        self.ng_tmpl_single_id)

    @b.errormsg("Failure while 'master' node group template creation: ")
    def _create_master_ng_template(self):
        plugin_version = self.plugin_config.HADOOP_VERSION.replace('.', '')
        template = {
            'name': 'mapr-%s-master' % plugin_version,
            'plugin_config': self.plugin_config,
            'description': 'Master node group template for MapR plugin',
            'node_processes': self._master_node_processes,
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': False,
            'node_configs': {}
        }
        self.ng_tmpl_master_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template,
                        self.ng_tmpl_master_id)

    @b.errormsg("Failure while 'worker' node group template creation: ")
    def _create_worker_ng_template(self):
        plugin_version = self.plugin_config.HADOOP_VERSION.replace('.', '')
        template = {
            'name': 'mapr-%s-worker' % plugin_version,
            'plugin_config': self.plugin_config,
            'description': 'Worker node group template for MapR plugin',
            'node_processes': self._worker_node_processes,
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': False,
            'node_configs': {}
        }
        self.ng_tmpl_worker_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template,
                        self.ng_tmpl_worker_id)

    @b.errormsg("Failure while cluster template creation: ")
    def _create_master_worker_cluster_template(self):
        plugin_version = self.plugin_config.HADOOP_VERSION.replace('.', '')
        template = {
            'name': 'mapr-%s-master-worker' % plugin_version,
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for MapR plugin',
            'cluster_configs': {
                'Hive': {
                    'Hive Version': '0.13',
                }
            },
            'node_groups': [
                {
                    'name': 'mapr-%s-master' % plugin_version,
                    'node_group_template_id': self.ng_tmpl_master_id,
                    'count': 1
                },
            ],
            'net_id': self.internal_neutron_net
        }
        self.cluster_template_id = self.create_cluster_template(**template)
        self.addCleanup(self.delete_cluster_template,
                        self.cluster_template_id)

    @b.errormsg("Failure while cluster template creation: ")
    def _create_single_node_cluster_template(self):
        template = {
            'name': 'test-cluster-template-mapr-single',
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for MapR plugin',
            'cluster_configs': {
                'Hive': {
                    'Hive Version': '0.13',
                }
            },
            'node_groups': [
                {
                    'name': 'single',
                    'node_group_template_id': self.ng_tmpl_single_id,
                    'count': 1
                },
            ],
            'net_id': self.internal_neutron_net
        }
        self.cluster_template_id = self.create_cluster_template(**template)
        self.addCleanup(self.delete_cluster_template,
                        self.cluster_template_id)

    @b.errormsg("Failure while cluster creation: ")
    def _create_cluster(self):
        cluster_name = '%s-%s-v2' % (self.common_config.CLUSTER_NAME,
                                     self.plugin_config.PLUGIN_NAME)
        cluster = {
            'name': cluster_name,
            'plugin_config': self.plugin_config,
            'cluster_template_id': self.cluster_template_id,
            'description': 'test cluster',
            'cluster_configs': {}
        }
        cluster_id = self.create_cluster(**cluster)
        self.addCleanup(self.delete_cluster, cluster_id)
        self.poll_cluster_state(cluster_id)
        self.cluster_info = self.get_cluster_info(self.plugin_config)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while Cinder testing: ")
    def _check_cinder(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing: ")
    def _check_mapreduce(self):
        self.map_reduce_testing(
            self.cluster_info, script='mapr/map_reduce_test_script.sh')

    @b.errormsg("Failure during check of Swift availability: ")
    def _check_swift(self):
        self.check_swift_availability(
            self.cluster_info, script='mapr/swift_test_script.sh')

    @b.skip_test('SKIP_EDP_TEST',
                 message='Test for EDP was skipped.')
    @b.errormsg("Failure while EDP testing: ")
    def _check_edp(self):
        for edp_job in self._run_edp_tests():
            self.poll_jobs_status([edp_job])

    def _run_edp_tests(self):
        skipped_edp_job_types = self.plugin_config.SKIP_EDP_JOB_TYPES

        if utils_edp.JOB_TYPE_PIG not in skipped_edp_job_types:
            yield self._edp_pig_test()
        if utils_edp.JOB_TYPE_MAPREDUCE not in skipped_edp_job_types:
            yield self._edp_mapreduce_test()
        if utils_edp.JOB_TYPE_MAPREDUCE_STREAMING not in skipped_edp_job_types:
            yield self._edp_mapreduce_streaming_test()
        if utils_edp.JOB_TYPE_JAVA not in skipped_edp_job_types:
            yield self._edp_java_test()

    def _edp_pig_test(self):
        pig_job = self.edp_info.read_pig_example_script()
        pig_lib = self.edp_info.read_pig_example_jar()

        return self.edp_testing(
            job_type=utils_edp.JOB_TYPE_PIG,
            job_data_list=[{'pig': pig_job}],
            lib_data_list=[{'jar': pig_lib}],
            swift_binaries=True
        )

    def _edp_mapreduce_test(self):
        mapreduce_jar = self.edp_info.read_mapreduce_example_jar()
        mapreduce_configs = self.edp_info.mapreduce_example_configs()
        return self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE,
            job_data_list=[],
            lib_data_list=[{'jar': mapreduce_jar}],
            configs=mapreduce_configs,
            swift_binaries=True
        )

    def _edp_mapreduce_streaming_test(self):
        return self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE_STREAMING,
            job_data_list=[],
            lib_data_list=[],
            configs=self.edp_info.mapreduce_streaming_configs()
        )

    def _edp_java_test(self):
        java_jar = self.edp_info.read_java_example_lib(self._mr_version)
        java_configs = self.edp_info.java_example_configs(self._mr_version)
        return self.edp_testing(
            utils_edp.JOB_TYPE_JAVA,
            job_data_list=[],
            lib_data_list=[{'jar': java_jar}],
            configs=java_configs,
            pass_input_output_args=False
        )

    @b.errormsg("Failure while cluster scaling: ")
    def _check_scaling(self):
        plugin_version = self.plugin_config.HADOOP_VERSION.replace('.', '')
        change_list = [
            {
                'operation': 'add',
                'info': ['mapr-%s-worker' % plugin_version,
                         1, '%s' % self.ng_tmpl_worker_id]
            }
        ]

        self.cluster_info = self.cluster_scaling(self.cluster_info,
                                                 change_list)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while Cinder testing after cluster scaling: ")
    def _check_cinder_after_scaling(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing after cluster scaling: ")
    def _check_mapreduce_after_scaling(self):
        self.map_reduce_testing(self.cluster_info)

    @b.errormsg(
        "Failure during check of Swift availability after cluster scaling: ")
    def _check_swift_after_scaling(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing after cluster scaling: ")
    def _check_edp_after_scaling(self):
        self._check_edp()

    @b.errormsg("Failure while cluster decomission: ")
    def _check_decomission(self):
        plugin_version = self.plugin_config.HADOOP_VERSION.replace('.', '')
        change_list = [
            {
                'operation': 'resize',
                'info': ['mapr-%s-worker' % plugin_version, 1]
            }
        ]

        self.cluster_info = self.cluster_scaling(self.cluster_info,
                                                 change_list)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while Cinder testing after cluster decomission: ")
    def _check_cinder_after_decomission(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing after cluster decomission: ")
    def _check_mapreduce_after_decomission(self):
        self.map_reduce_testing(self.cluster_info)

    @b.errormsg("Failure during check of Swift availability after"
                " cluster decomission: ")
    def _check_swift_after_decomission(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing after cluster decomission: ")
    def _check_edp_after_decomission(self):
        self._check_edp()

    def test_mapr_plugin_gating(self):
        self._create_master_ng_template()
        self._create_worker_ng_template()
        self._create_master_worker_cluster_template()
        self._create_cluster()

        self._check_cinder()
        self._check_mapreduce()
        self._check_swift()
        self._check_edp()

        if not self.plugin_config.SKIP_SCALING_TEST:
            self._check_scaling()
            self._check_cinder_after_scaling()
            self._check_mapreduce_after_scaling()
            self._check_swift_after_scaling()
            self._check_edp_after_scaling()

        if not self.plugin_config.SKIP_DECOMISSION_TEST:
            self._check_decomission()
            self._check_cinder_after_decomission()
            self._check_mapreduce_after_decomission()
            self._check_swift_after_decomission()
            self._check_edp_after_decomission()

    def tearDown(self):
        super(MapRGatingTest, self).tearDown()

@@ -1,170 +0,0 @@
# Copyright 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from testtools import testcase

from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import base as b
from sahara.tests.integration.tests import edp
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
from sahara.utils import edp as utils_edp


class SparkGatingTest(swift.SwiftTest, scaling.ScalingTest,
                      edp.EDPTest):

    config = cfg.ITConfig().spark_config
    SKIP_EDP_TEST = config.SKIP_EDP_TEST

    def setUp(self):
        super(SparkGatingTest, self).setUp()
        self.cluster_id = None
        self.cluster_template_id = None
        self.ng_template_ids = []

    def get_plugin_config(self):
        return cfg.ITConfig().spark_config

    @b.errormsg("Failure while 'm-nn' node group template creation: ")
    def _create_m_nn_ng_template(self):
        template = {
            'name': 'test-node-group-template-spark-m-nn',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for Spark plugin',
            'node_processes': self.plugin_config.MASTER_NODE_PROCESSES,
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': {}
        }
        self.ng_tmpl_m_nn_id = self.create_node_group_template(**template)
        self.ng_template_ids.append(self.ng_tmpl_m_nn_id)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_m_nn_id)

    @b.errormsg("Failure while 's-dn' node group template creation: ")
    def _create_s_dn_ng_template(self):
        template = {
            'name': 'test-node-group-template-spark-s-dn',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for Spark plugin',
            'node_processes': self.plugin_config.WORKER_NODE_PROCESSES,
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': {}
        }
        self.ng_tmpl_s_dn_id = self.create_node_group_template(**template)
        self.ng_template_ids.append(self.ng_tmpl_s_dn_id)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_s_dn_id)

    @b.errormsg("Failure while cluster template creation: ")
    def _create_cluster_template(self):
        template = {
            'name': 'test-cluster-template-spark',
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for Spark plugin',
            'cluster_configs': {'HDFS': {'dfs.replication': 1}},
            'node_groups': [
                {
                    'name': 'master-node',
                    'node_group_template_id': self.ng_tmpl_m_nn_id,
                    'count': 1
                },
                {
                    'name': 'worker-node',
                    'node_group_template_id': self.ng_tmpl_s_dn_id,
                    'count': 1
                }
            ],
            'net_id': self.internal_neutron_net
        }
        self.cluster_template_id = self.create_cluster_template(**template)
        self.addCleanup(self.delete_cluster_template, self.cluster_template_id)

    @b.errormsg("Failure while cluster creation: ")
    def _create_cluster(self):
        cluster_name = '%s-%s' % (self.common_config.CLUSTER_NAME,
                                  self.plugin_config.PLUGIN_NAME)
        cluster = {
            'name': cluster_name,
            'plugin_config': self.plugin_config,
            'cluster_template_id': self.cluster_template_id,
            'description': 'test cluster',
            'cluster_configs': {}
        }
        cluster_id = self.create_cluster(**cluster)
        self.addCleanup(self.delete_cluster, cluster_id)
        self.poll_cluster_state(cluster_id)
        self.cluster_info = self.get_cluster_info(self.plugin_config)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while EDP testing: ")
    def _check_edp(self):
        self._edp_test()

    def _edp_test(self):
        # check spark
        spark_jar = self.edp_info.read_spark_example_jar()
        spark_configs = self.edp_info.spark_example_configs()
        job_id = self.edp_testing(
            utils_edp.JOB_TYPE_SPARK,
            job_data_list=[{'jar': spark_jar}],
            lib_data_list=[],
            configs=spark_configs)
        self.poll_jobs_status([job_id])

    @b.errormsg("Failure while cluster scaling: ")
    def _check_scaling(self):
        change_list = [
            {
                'operation': 'resize',
                'info': ['worker-node', 2]
            },
            {
                'operation': 'add',
                'info': [
                    'new-worker-node', 1, '%s' % self.ng_tmpl_s_dn_id
                ]
            }
        ]

        self.cluster_info = self.cluster_scaling(self.cluster_info,
                                                 change_list)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while EDP testing after cluster scaling: ")
    def _check_edp_after_scaling(self):
        self._check_edp()

    @testcase.attr('spark')
    @testcase.skipIf(config.SKIP_ALL_TESTS_FOR_PLUGIN,
                     'All tests for Spark plugin were skipped')
    def test_spark_plugin_gating(self):

        self._create_m_nn_ng_template()
        self._create_s_dn_ng_template()
        self._create_cluster_template()
        self._create_cluster()
        self._test_event_log(self.cluster_id)
        self._check_edp()

        if not self.plugin_config.SKIP_SCALING_TEST:
            self._check_scaling()
            self._test_event_log(self.cluster_id)
            self._check_edp_after_scaling()

    def tearDown(self):
        super(SparkGatingTest, self).tearDown()

@@ -1,137 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import time

from oslo_utils import timeutils
import saharaclient.api.base as sab
from testtools import testcase

from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import base as b
from sahara.tests.integration.tests import edp
from sahara.utils import edp as utils_edp


class TransientGatingTest(edp.EDPTest):
    def get_plugin_config(self):
        return cfg.ITConfig().vanilla_two_config

    def _prepare_test(self):
        self.SKIP_EDP_TEST = self.plugin_config.SKIP_EDP_TEST

    @b.errormsg("Failure while cluster template creation: ")
    def _create_cluster_template(self):
        template = {
            'name': 'test-transient-cluster-template',
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for transient cluster',
            'net_id': self.internal_neutron_net,
            'node_groups': [
                {
                    'name': 'master-node',
                    'flavor_id': self.flavor_id,
                    'node_processes': ['namenode', 'resourcemanager',
                                       'oozie', 'historyserver'],
                    'floating_ip_pool': self.floating_ip_pool,
                    'count': 1
                },
                {
                    'name': 'worker-node',
                    'flavor_id': self.flavor_id,
                    'node_processes': ['datanode', 'nodemanager'],
                    'floating_ip_pool': self.floating_ip_pool,
                    'count': 1
                }
            ],
            'cluster_configs': {
                'HDFS': {
                    'dfs.replication': 1
                },
                'MapReduce': {
                    'mapreduce.tasktracker.map.tasks.maximum': 16,
                    'mapreduce.tasktracker.reduce.tasks.maximum': 16
                },
                'YARN': {
                    'yarn.resourcemanager.scheduler.class':
                    'org.apache.hadoop.yarn.server.resourcemanager.scheduler'
                    '.fair.FairScheduler'
                }
            }
        }
        self.cluster_template_id = self.create_cluster_template(**template)
        self.addCleanup(self.delete_cluster_template, self.cluster_template_id)

    @b.errormsg("Failure while cluster creation: ")
    def _create_cluster(self):
        self.cluster_ids = []
        for number_of_cluster in range(3):
            cluster_name = '%s-%d-transient' % (
                self.common_config.CLUSTER_NAME,
                number_of_cluster + 1)
            cluster = {
                'name': cluster_name,
                'plugin_config': self.plugin_config,
                'cluster_template_id': self.cluster_template_id,
                'description': 'transient cluster',
                'cluster_configs': {},
                'is_transient': True
            }

            self.cluster_ids.append(self.create_cluster(**cluster))
            self.addCleanup(self.delete_cluster,
                            self.cluster_ids[number_of_cluster])

        for number_of_cluster in range(3):
            self.poll_cluster_state(self.cluster_ids[number_of_cluster])

    @b.errormsg("Failure while transient cluster testing: ")
    def _check_transient(self):
        pig_job_data = self.edp_info.read_pig_example_script()
        pig_lib_data = self.edp_info.read_pig_example_jar()
        job_ids = []
        for cluster_id in self.cluster_ids:
            self.cluster_id = cluster_id
            job_ids.append(self.edp_testing(
                job_type=utils_edp.JOB_TYPE_PIG,
                job_data_list=[{'pig': pig_job_data}],
                lib_data_list=[{'jar': pig_lib_data}]))
        self.poll_jobs_status(job_ids)

        # set timeout in seconds
        timeout = self.common_config.TRANSIENT_CLUSTER_TIMEOUT * 60
        s_time = timeutils.utcnow()
        raise_failure = True
        # wait for cluster deleting
        while timeutils.delta_seconds(s_time, timeutils.utcnow()) < timeout:
            try:
                self.sahara.clusters.get(self.cluster_id)
            except sab.APIException as api_ex:
                if 'not found' in api_ex.message:
                    raise_failure = False
                    break
            time.sleep(2)

        if raise_failure:
            self.fail('Transient cluster has not been deleted within %s '
                      'minutes.'
                      % self.common_config.TRANSIENT_CLUSTER_TIMEOUT)

    @testcase.attr('transient')
    def test_transient_gating(self):
        self._prepare_test()
        self._create_cluster_template()
        self._create_cluster()
        self._check_transient()
@ -1,314 +0,0 @@
|
|||
# Copyright (c) 2013 Mirantis Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from testtools import testcase
|
||||
|
||||
from sahara.tests.integration.configs import config as cfg
|
||||
from sahara.tests.integration.tests import base as b
|
||||
from sahara.tests.integration.tests import cinder
|
||||
from sahara.tests.integration.tests import cluster_configs
|
||||
from sahara.tests.integration.tests import edp
|
||||
from sahara.tests.integration.tests import map_reduce
|
||||
from sahara.tests.integration.tests import scaling
|
||||
from sahara.tests.integration.tests import swift
|
||||
from sahara.utils import edp as utils_edp
|
||||
|
||||
|
||||
class VanillaGatingTest(cinder.CinderVolumeTest,
|
||||
cluster_configs.ClusterConfigTest,
|
||||
map_reduce.MapReduceTest, swift.SwiftTest,
|
||||
scaling.ScalingTest, edp.EDPTest):
|
||||
config = cfg.ITConfig().vanilla_config
|
||||
SKIP_CINDER_TEST = config.SKIP_CINDER_TEST
|
||||
SKIP_CLUSTER_CONFIG_TEST = config.SKIP_CLUSTER_CONFIG_TEST
|
||||
SKIP_EDP_TEST = config.SKIP_EDP_TEST
|
||||
SKIP_MAP_REDUCE_TEST = config.SKIP_MAP_REDUCE_TEST
|
||||
SKIP_SWIFT_TEST = config.SKIP_SWIFT_TEST
|
||||
SKIP_SCALING_TEST = config.SKIP_SCALING_TEST
|
||||
|
||||
def get_plugin_config(self):
|
||||
return cfg.ITConfig().vanilla_config
|
||||
|
||||
@b.errormsg("Failure while 'tt-dn' node group template creation: ")
|
||||
def _create_tt_dn_ng_template(self):
|
||||
template = {
|
||||
'name': 'test-node-group-template-vanilla-tt-dn',
|
||||
'plugin_config': self.plugin_config,
|
||||
'description': 'test node group template for Vanilla 1 plugin',
|
||||
'node_processes': ['tasktracker', 'datanode'],
|
||||
'floating_ip_pool': self.floating_ip_pool,
|
||||
'auto_security_group': True,
|
||||
'node_configs': {
|
||||
'HDFS': cluster_configs.DN_CONFIG,
|
||||
'MapReduce': cluster_configs.TT_CONFIG
|
||||
}
|
||||
}
|
||||
self.ng_tmpl_tt_dn_id = self.create_node_group_template(**template)
|
||||
self.addCleanup(self.delete_node_group_template, self.ng_tmpl_tt_dn_id)
|
||||
|
||||
@b.errormsg("Failure while 'tt' node group template creation: ")
|
||||
def _create_tt_ng_template(self):
|
||||
template = {
|
||||
'name': 'test-node-group-template-vanilla-tt',
|
||||
'plugin_config': self.plugin_config,
|
||||
'description': 'test node group template for Vanilla 1 plugin',
|
||||
'volumes_per_node': self.volumes_per_node,
|
||||
'volumes_size': self.volumes_size,
|
||||
'node_processes': ['tasktracker'],
|
||||
'floating_ip_pool': self.floating_ip_pool,
|
||||
'auto_security_group': True,
|
||||
'node_configs': {
|
||||
'MapReduce': cluster_configs.TT_CONFIG
|
||||
}
|
||||
}
|
||||
self.ng_tmpl_tt_id = self.create_node_group_template(**template)
|
||||
self.addCleanup(self.delete_node_group_template, self.ng_tmpl_tt_id)
|
||||
|
||||
@b.errormsg("Failure while 'dn' node group template creation: ")
|
||||
def _create_dn_ng_template(self):
|
||||
template = {
|
||||
'name': 'test-node-group-template-vanilla-dn',
|
||||
'plugin_config': self.plugin_config,
|
||||
'description': 'test node group template for Vanilla 1 plugin',
|
||||
'volumes_per_node': self.volumes_per_node,
|
||||
'volumes_size': self.volumes_size,
|
||||
'node_processes': ['datanode'],
|
||||
'floating_ip_pool': self.floating_ip_pool,
|
||||
'auto_security_group': True,
|
||||
'node_configs': {
|
||||
'HDFS': cluster_configs.DN_CONFIG
|
||||
}
|
||||
}
|
||||
self.ng_tmpl_dn_id = self.create_node_group_template(**template)
|
||||
self.addCleanup(self.delete_node_group_template, self.ng_tmpl_dn_id)
|
||||
|
||||
@b.errormsg("Failure while cluster template creation: ")
|
||||
def _create_cluster_template(self):
|
||||
template = {
|
||||
'name': 'test-cluster-template-vanilla',
|
||||
'plugin_config': self.plugin_config,
|
||||
'description': 'test cluster template for Vanilla 1 plugin',
|
||||
'net_id': self.internal_neutron_net,
|
||||
'cluster_configs': {
|
||||
'HDFS': cluster_configs.CLUSTER_HDFS_CONFIG,
|
||||
'MapReduce': cluster_configs.CLUSTER_MR_CONFIG,
|
||||
'general': {
|
||||
'Enable Swift': True
|
||||
}
|
||||
},
|
||||
'node_groups': [
|
||||
{
|
||||
'name': 'master-node-jt-nn',
|
||||
'flavor_id': self.flavor_id,
|
||||
'node_processes': ['namenode', 'jobtracker'],
|
||||
'floating_ip_pool': self.floating_ip_pool,
|
||||
'auto_security_group': True,
|
||||
'node_configs': {
|
||||
'HDFS': cluster_configs.NN_CONFIG,
|
||||
'MapReduce': cluster_configs.JT_CONFIG
|
||||
},
|
||||
'count': 1
|
||||
},
|
||||
{
|
||||
'name': 'master-node-sec-nn-oz',
|
||||
'flavor_id': self.flavor_id,
|
||||
'node_processes': ['secondarynamenode', 'oozie'],
|
||||
'floating_ip_pool': self.floating_ip_pool,
|
||||
'auto_security_group': True,
|
||||
'node_configs': {
|
||||
'HDFS': cluster_configs.SNN_CONFIG,
|
||||
'JobFlow': cluster_configs.OOZIE_CONFIG
|
||||
},
|
||||
'count': 1
|
||||
},
|
||||
{
|
||||
'name': 'worker-node-tt-dn',
|
||||
'node_group_template_id': self.ng_tmpl_tt_dn_id,
|
||||
'count': 2
|
||||
},
|
||||
{
|
||||
'name': 'worker-node-tt',
|
||||
'node_group_template_id': self.ng_tmpl_tt_id,
|
||||
'count': 1
|
||||
},
|
||||
{
|
||||
'name': 'worker-node-dn',
|
||||
'node_group_template_id': self.ng_tmpl_dn_id,
|
||||
'count': 1
|
||||
}
|
||||
]
|
||||
}
|
||||
self.cluster_template_id = self.create_cluster_template(**template)
|
||||
self.addCleanup(self.delete_cluster_template, self.cluster_template_id)
|
||||
|
||||
@b.errormsg("Failure while cluster creation: ")
|
||||
def _create_cluster(self):
|
||||
cluster_name = '%s-%s' % (self.common_config.CLUSTER_NAME,
|
||||
self.plugin_config.PLUGIN_NAME)
|
||||
kw = {
|
||||
'name': cluster_name,
|
||||
'plugin_config': self.plugin_config,
|
||||
'cluster_template_id': self.cluster_template_id,
|
||||
'description': 'test cluster',
|
||||
'cluster_configs': {}
|
||||
}
|
||||
cluster_id = self.create_cluster(**kw)
|
||||
self.addCleanup(self.delete_cluster, cluster_id)
|
||||
self.poll_cluster_state(cluster_id)
|
||||
self.cluster_info = self.get_cluster_info(self.plugin_config)
|
||||
self.await_active_workers_for_namenode(self.cluster_info['node_info'],
|
||||
self.plugin_config)
|
||||
|
||||
@b.errormsg("Failure while Cinder testing: ")
|
||||
def _check_cinder(self):
|
||||
self.cinder_volume_testing(self.cluster_info)
|
||||
|
||||
@b.errormsg("Failure while cluster config testing: ")
|
||||
def _check_cluster_config(self):
|
||||
self.cluster_config_testing(self.cluster_info)
|
||||
|
||||
def _run_edp_test(self):
|
||||
pig_job_data = self.edp_info.read_pig_example_script()
|
||||
pig_lib_data = self.edp_info.read_pig_example_jar()
|
||||
mapreduce_jar_data = self.edp_info.read_mapreduce_example_jar()
|
||||
# This is a modified version of WordCount that takes swift configs
|
||||
java_lib_data = self.edp_info.read_java_example_lib()
|
||||
        shell_script_data = self.edp_info.read_shell_example_script()
        shell_file_data = self.edp_info.read_shell_example_text_file()

        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_PIG,
            job_data_list=[{'pig': pig_job_data}],
            lib_data_list=[{'jar': pig_lib_data}],
            configs=self.edp_info.pig_example_configs(),
            swift_binaries=True,
            hdfs_local_output=True)

        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE,
            job_data_list=[],
            lib_data_list=[{'jar': mapreduce_jar_data}],
            configs=self.edp_info.mapreduce_example_configs(),
            swift_binaries=True,
            hdfs_local_output=True)

        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE_STREAMING,
            job_data_list=[],
            lib_data_list=[],
            configs=self.edp_info.mapreduce_streaming_configs())

        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_JAVA,
            job_data_list=[],
            lib_data_list=[{'jar': java_lib_data}],
            configs=self.edp_info.java_example_configs(),
            pass_input_output_args=True)

        yield self.edp_testing(
            job_type=utils_edp.JOB_TYPE_SHELL,
            job_data_list=[{'script': shell_script_data}],
            lib_data_list=[{'text': shell_file_data}],
            configs=self.edp_info.shell_example_configs())

    @b.errormsg("Failure while EDP testing: ")
    def _check_edp(self):
        self.poll_jobs_status(list(self._run_edp_test()))

    @b.errormsg("Failure while MapReduce testing: ")
    def _check_mapreduce(self):
        self.map_reduce_testing(self.cluster_info)

    @b.errormsg("Failure during check of Swift availability: ")
    def _check_swift(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while cluster scaling: ")
    def _check_scaling(self):
        change_list = [
            {
                'operation': 'resize',
                'info': ['worker-node-tt-dn', 1]
            },
            {
                'operation': 'resize',
                'info': ['worker-node-dn', 0]
            },
            {
                'operation': 'resize',
                'info': ['worker-node-tt', 0]
            },
            {
                'operation': 'add',
                'info': [
                    'new-worker-node-tt', 1, self.ng_tmpl_tt_id
                ]
            },
            {
                'operation': 'add',
                'info': [
                    'new-worker-node-dn', 1, self.ng_tmpl_dn_id
                ]
            }
        ]
        self.cluster_info = self.cluster_scaling(self.cluster_info,
                                                 change_list)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while Cinder testing after cluster scaling: ")
    def _check_cinder_after_scaling(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while config testing after cluster scaling: ")
    def _check_cluster_config_after_scaling(self):
        self.cluster_config_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing after cluster scaling: ")
    def _check_mapreduce_after_scaling(self):
        self.map_reduce_testing(self.cluster_info)

    @b.errormsg("Failure during check of Swift availability after scaling: ")
    def _check_swift_after_scaling(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing after cluster scaling: ")
    def _check_edp_after_scaling(self):
        self.poll_jobs_status(list(self._run_edp_test()))

    @testcase.skipIf(config.SKIP_ALL_TESTS_FOR_PLUGIN,
                     'All tests for Vanilla plugin were skipped')
    @testcase.attr('vanilla1')
    def test_vanilla_plugin_gating(self):
        self._create_tt_dn_ng_template()
        self._create_tt_ng_template()
        self._create_dn_ng_template()
        self._create_cluster_template()
        self._create_cluster()
        self._test_event_log(self.cluster_id)
        self._check_cinder()
        self._check_cluster_config()
        self._check_edp()
        self._check_mapreduce()
        self._check_swift()

        if not self.plugin_config.SKIP_SCALING_TEST:
            self._check_scaling()
            self._test_event_log(self.cluster_id)
            self._check_cinder_after_scaling()
            self._check_cluster_config_after_scaling()
            self._check_mapreduce_after_scaling()
            self._check_swift_after_scaling()
            self._check_edp_after_scaling()
@ -1,353 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from testtools import testcase

from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import base as b
from sahara.tests.integration.tests import cinder
from sahara.tests.integration.tests import cluster_configs
from sahara.tests.integration.tests import edp
from sahara.tests.integration.tests import map_reduce
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
from sahara.utils import edp as utils_edp


class VanillaTwoGatingTest(cluster_configs.ClusterConfigTest,
                           map_reduce.MapReduceTest, swift.SwiftTest,
                           scaling.ScalingTest, cinder.CinderVolumeTest,
                           edp.EDPTest):

    vanilla_two_config = cfg.ITConfig().vanilla_two_config
    SKIP_MAP_REDUCE_TEST = vanilla_two_config.SKIP_MAP_REDUCE_TEST
    SKIP_SWIFT_TEST = vanilla_two_config.SKIP_SWIFT_TEST
    SKIP_SCALING_TEST = vanilla_two_config.SKIP_SCALING_TEST
    SKIP_CINDER_TEST = vanilla_two_config.SKIP_CINDER_TEST
    SKIP_EDP_TEST = vanilla_two_config.SKIP_EDP_TEST

    def setUp(self):
        super(VanillaTwoGatingTest, self).setUp()
        self.cluster_id = None
        self.cluster_template_id = None

    def get_plugin_config(self):
        return cfg.ITConfig().vanilla_two_config

    ng_params = {
        'MapReduce': {
            'yarn.app.mapreduce.am.resource.mb': 256,
            'yarn.app.mapreduce.am.command-opts': '-Xmx256m'
        },
        'YARN': {
            'yarn.scheduler.minimum-allocation-mb': 256,
            'yarn.scheduler.maximum-allocation-mb': 1024,
            'yarn.nodemanager.vmem-check-enabled': False
        }
    }

    @b.errormsg("Failure while 'nm-dn' node group template creation: ")
    def _create_nm_dn_ng_template(self):
        template = {
            'name': 'test-node-group-template-vanilla-nm-dn',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for Vanilla plugin',
            'node_processes': ['nodemanager', 'datanode'],
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': self.ng_params
        }
        self.ng_tmpl_nm_dn_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_nm_dn_id)

    @b.errormsg("Failure while 'nm' node group template creation: ")
    def _create_nm_ng_template(self):
        template = {
            'name': 'test-node-group-template-vanilla-nm',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for Vanilla plugin',
            'volumes_per_node': self.volumes_per_node,
            'volumes_size': self.volumes_size,
            'node_processes': ['nodemanager'],
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': self.ng_params
        }
        self.ng_tmpl_nm_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_nm_id)

    @b.errormsg("Failure while 'dn' node group template creation: ")
    def _create_dn_ng_template(self):
        template = {
            'name': 'test-node-group-template-vanilla-dn',
            'plugin_config': self.plugin_config,
            'description': 'test node group template for Vanilla plugin',
            'volumes_per_node': self.volumes_per_node,
            'volumes_size': self.volumes_size,
            'node_processes': ['datanode'],
            'floating_ip_pool': self.floating_ip_pool,
            'auto_security_group': True,
            'node_configs': self.ng_params
        }
        self.ng_tmpl_dn_id = self.create_node_group_template(**template)
        self.addCleanup(self.delete_node_group_template, self.ng_tmpl_dn_id)

    @b.errormsg("Failure while cluster template creation: ")
    def _create_cluster_template(self):
        template = {
            'name': 'test-cluster-template-vanilla',
            'plugin_config': self.plugin_config,
            'description': 'test cluster template for Vanilla plugin',
            'cluster_configs': {
                'HDFS': {
                    'dfs.replication': 1
                }
            },
            'node_groups': [
                {
                    'name': 'master-node-rm-nn',
                    'flavor_id': self.flavor_id,
                    'node_processes': ['namenode', 'resourcemanager',
                                       'hiveserver'],
                    'floating_ip_pool': self.floating_ip_pool,
                    'auto_security_group': True,
                    'count': 1,
                    'node_configs': self.ng_params
                },
                {
                    'name': 'master-node-oo-hs',
                    'flavor_id': self.flavor_id,
                    'node_processes': ['oozie', 'historyserver',
                                       'secondarynamenode'],
                    'floating_ip_pool': self.floating_ip_pool,
                    'auto_security_group': True,
                    'count': 1,
                    'node_configs': self.ng_params
                },
                {
                    'name': 'worker-node-nm-dn',
                    'node_group_template_id': self.ng_tmpl_nm_dn_id,
                    'count': 2
                },
                {
                    'name': 'worker-node-dn',
                    'node_group_template_id': self.ng_tmpl_dn_id,
                    'count': 1
                },
                {
                    'name': 'worker-node-nm',
                    'node_group_template_id': self.ng_tmpl_nm_id,
                    'count': 1
                }
            ],
            'net_id': self.internal_neutron_net
        }
        self.cluster_template_id = self.create_cluster_template(**template)
        self.addCleanup(self.delete_cluster_template, self.cluster_template_id)

    @b.errormsg("Failure while cluster creation: ")
    def _create_cluster(self):
        cluster_name = '%s-%s-v2' % (self.common_config.CLUSTER_NAME,
                                     self.plugin_config.PLUGIN_NAME)
        cluster = {
            'name': cluster_name,
            'plugin_config': self.plugin_config,
            'cluster_template_id': self.cluster_template_id,
            'description': 'test cluster',
            'cluster_configs': {}
        }
        cluster_id = self.create_cluster(**cluster)
        self.addCleanup(self.delete_cluster, cluster_id)
        self.poll_cluster_state(cluster_id)
        self.cluster_info = self.get_cluster_info(self.plugin_config)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while Cinder testing: ")
    def _check_cinder(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing: ")
    def _check_mapreduce(self):
        self.map_reduce_testing(self.cluster_info)

    @b.errormsg("Failure during check of Swift availability: ")
    def _check_swift(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing: ")
    def _check_edp(self):
        self.poll_jobs_status(list(self._run_edp_tests()))

    def _run_edp_tests(self):
        skipped_edp_job_types = self.plugin_config.SKIP_EDP_JOB_TYPES

        if utils_edp.JOB_TYPE_PIG not in skipped_edp_job_types:
            yield self._edp_pig_test()
        if utils_edp.JOB_TYPE_MAPREDUCE not in skipped_edp_job_types:
            yield self._edp_mapreduce_test()
        if utils_edp.JOB_TYPE_MAPREDUCE_STREAMING not in skipped_edp_job_types:
            yield self._edp_mapreduce_streaming_test()
        if utils_edp.JOB_TYPE_JAVA not in skipped_edp_job_types:
            yield self._edp_java_test()
        if utils_edp.JOB_TYPE_HIVE not in skipped_edp_job_types:
            yield self._check_edp_hive()
        if utils_edp.JOB_TYPE_SHELL not in skipped_edp_job_types:
            yield self._edp_shell_test()

    def _run_edp_tests_after_scaling(self):
        skipped_edp_job_types = self.plugin_config.SKIP_EDP_JOB_TYPES

        if utils_edp.JOB_TYPE_PIG not in skipped_edp_job_types:
            yield self._edp_pig_test()
        if utils_edp.JOB_TYPE_MAPREDUCE not in skipped_edp_job_types:
            yield self._edp_mapreduce_test()
        if utils_edp.JOB_TYPE_MAPREDUCE_STREAMING not in skipped_edp_job_types:
            yield self._edp_mapreduce_streaming_test()
        if utils_edp.JOB_TYPE_JAVA not in skipped_edp_job_types:
            yield self._edp_java_test()
        if utils_edp.JOB_TYPE_SHELL not in skipped_edp_job_types:
            yield self._edp_shell_test()
        if utils_edp.JOB_TYPE_HIVE not in skipped_edp_job_types:
            yield self._check_edp_hive()

    def _edp_pig_test(self):
        pig_job = self.edp_info.read_pig_example_script()
        pig_lib = self.edp_info.read_pig_example_jar()

        return self.edp_testing(
            job_type=utils_edp.JOB_TYPE_PIG,
            job_data_list=[{'pig': pig_job}],
            lib_data_list=[{'jar': pig_lib}],
            swift_binaries=True,
            hdfs_local_output=True)

    def _edp_mapreduce_test(self):
        mapreduce_jar = self.edp_info.read_mapreduce_example_jar()
        mapreduce_configs = self.edp_info.mapreduce_example_configs()
        return self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE,
            job_data_list=[],
            lib_data_list=[{'jar': mapreduce_jar}],
            configs=mapreduce_configs,
            swift_binaries=True,
            hdfs_local_output=True)

    def _edp_mapreduce_streaming_test(self):
        return self.edp_testing(
            job_type=utils_edp.JOB_TYPE_MAPREDUCE_STREAMING,
            job_data_list=[],
            lib_data_list=[],
            configs=self.edp_info.mapreduce_streaming_configs())

    def _edp_java_test(self):
        java_jar = self.edp_info.read_java_example_lib(2)
        java_configs = self.edp_info.java_example_configs(2)
        return self.edp_testing(
            utils_edp.JOB_TYPE_JAVA,
            job_data_list=[],
            lib_data_list=[{'jar': java_jar}],
            configs=java_configs)

    def _edp_shell_test(self):
        shell_script_data = self.edp_info.read_shell_example_script()
        shell_file_data = self.edp_info.read_shell_example_text_file()
        return self.edp_testing(
            job_type=utils_edp.JOB_TYPE_SHELL,
            job_data_list=[{'script': shell_script_data}],
            lib_data_list=[{'text': shell_file_data}],
            configs=self.edp_info.shell_example_configs())

    def _check_edp_hive(self):
        return self.check_edp_hive()

    @b.errormsg("Failure while cluster scaling: ")
    def _check_scaling(self):
        change_list = [
            {
                'operation': 'resize',
                'info': ['worker-node-nm-dn', 1]
            },
            {
                'operation': 'resize',
                'info': ['worker-node-dn', 0]
            },
            {
                'operation': 'resize',
                'info': ['worker-node-nm', 0]
            },
            {
                'operation': 'add',
                'info': [
                    'new-worker-node-nm', 1, '%s' % self.ng_tmpl_nm_id
                ]
            },
            {
                'operation': 'add',
                'info': [
                    'new-worker-node-dn', 1, '%s' % self.ng_tmpl_dn_id
                ]
            }
        ]

        self.cluster_info = self.cluster_scaling(self.cluster_info,
                                                 change_list)
        self.await_active_workers_for_namenode(self.cluster_info['node_info'],
                                               self.plugin_config)

    @b.errormsg("Failure while Cinder testing after cluster scaling: ")
    def _check_cinder_after_scaling(self):
        self.cinder_volume_testing(self.cluster_info)

    @b.errormsg("Failure while Map Reduce testing after cluster scaling: ")
    def _check_mapreduce_after_scaling(self):
        self.map_reduce_testing(self.cluster_info)

    @b.errormsg(
        "Failure during check of Swift availability after cluster scaling: ")
    def _check_swift_after_scaling(self):
        self.check_swift_availability(self.cluster_info)

    @b.errormsg("Failure while EDP testing after cluster scaling: ")
    def _check_edp_after_scaling(self):
        self.poll_jobs_status(list(self._run_edp_tests_after_scaling()))

    @testcase.skipIf(
        cfg.ITConfig().vanilla_two_config.SKIP_ALL_TESTS_FOR_PLUGIN,
        "All tests for Vanilla plugin were skipped")
    @testcase.attr('vanilla2')
    def test_vanilla_two_plugin_gating(self):
        self._create_nm_dn_ng_template()
        self._create_nm_ng_template()
        self._create_dn_ng_template()
        self._create_cluster_template()
        self._create_cluster()
        self._test_event_log(self.cluster_id)

        self._check_cinder()
        self._check_mapreduce()
        self._check_swift()
        self._check_edp()

        if not self.plugin_config.SKIP_SCALING_TEST:
            self._check_scaling()
            self._test_event_log(self.cluster_id)
            self._check_cinder_after_scaling()
            self._check_mapreduce_after_scaling()
            self._check_swift_after_scaling()
            self._check_edp_after_scaling()

    def tearDown(self):
        super(VanillaTwoGatingTest, self).tearDown()
@ -1,139 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_utils import excutils

from sahara.tests.integration.tests import base


class MapReduceTest(base.ITestCase):
    DEFAULT_TEST_SCRIPT = 'map_reduce_test_script.sh'

    def _run_pi_job(self):
        self.execute_command('./script.sh run_pi_job')

    def _get_name_of_completed_pi_job(self):
        try:
            job_name = self.execute_command('./script.sh get_pi_job_name')

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(
                    '\nFailure while obtaining name of completed \'PI\' '
                    'job: ' + str(e)
                )
                self.capture_error_log_from_cluster_node(
                    '/tmp/MapReduceTestOutput/log.txt'
                )
        return job_name[1][:-1]

    def _run_wordcount_job(self):
        try:
            self.execute_command('./script.sh run_wordcount_job')

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print('\nFailure while launching \'Wordcount\' job: ' + str(e))
                self.capture_error_log_from_cluster_node(
                    '/tmp/MapReduceTestOutput/log.txt'
                )

    def _transfer_helper_script_to_nodes(self, cluster_info, script=None):
        script = script or MapReduceTest.DEFAULT_TEST_SCRIPT
        data = self.sahara.clusters.get(cluster_info['cluster_id'])
        node_groups = data.node_groups
        for node_group in node_groups:
            if node_group['volumes_per_node'] != 0:
                self._add_params_to_script_and_transfer_to_node(
                    cluster_info, node_group, node_with_volumes=True,
                    script=script)
            else:
                self._add_params_to_script_and_transfer_to_node(
                    cluster_info, node_group, script=script)

    def _add_params_to_script_and_transfer_to_node(self, cluster_info,
                                                   node_group,
                                                   node_with_volumes=False,
                                                   script=None):
        script = script or MapReduceTest.DEFAULT_TEST_SCRIPT
        plugin_config = cluster_info['plugin_config']
        hadoop_log_directory = plugin_config.HADOOP_LOG_DIRECTORY
        if node_with_volumes:
            hadoop_log_directory = (
                plugin_config.HADOOP_LOG_DIRECTORY_ON_VOLUME)
        extra_script_parameters = {
            'HADOOP_EXAMPLES_JAR_PATH': plugin_config.HADOOP_EXAMPLES_JAR_PATH,
            'HADOOP_LOG_DIRECTORY': hadoop_log_directory,
            'HADOOP_USER': plugin_config.HADOOP_USER,
            'NODE_COUNT': cluster_info['node_info']['node_count']
        }
        for instance in node_group['instances']:
            try:
                self.open_ssh_connection(instance['management_ip'])
                self.transfer_helper_script_to_node(
                    script, extra_script_parameters
                )
                self.close_ssh_connection()

            except Exception as e:
                with excutils.save_and_reraise_exception():
                    print(str(e))

    @base.skip_test('SKIP_MAP_REDUCE_TEST',
                    message='Test for Map Reduce was skipped.')
    def map_reduce_testing(self, cluster_info, check_log=True, script=None):
        script = script or MapReduceTest.DEFAULT_TEST_SCRIPT
        self._transfer_helper_script_to_nodes(cluster_info, script)
        plugin_config = cluster_info['plugin_config']
        namenode_ip = cluster_info['node_info']['namenode_ip']
        self.open_ssh_connection(namenode_ip)
        self._run_pi_job()
        job_name = self._get_name_of_completed_pi_job()
        self.close_ssh_connection()

        # Check that the cluster used each "tasktracker" node while running
        # the PI job. The number of map and reduce tasks in the helper script
        # guarantees that every such node takes part in the job.
        if check_log:
            node_ip_and_process_list = cluster_info['node_ip_list']

            have_logs = False
            for node_ip, process_list in node_ip_and_process_list.items():
                if plugin_config.PROCESS_NAMES['tt'] in process_list:
                    self.open_ssh_connection(node_ip)
                    try:
                        self.execute_command(
                            './script.sh check_directory -job_name %s' %
                            job_name)
                        have_logs = True
                    except Exception:
                        pass
                    finally:
                        self.close_ssh_connection()

            if not have_logs:
                self.open_ssh_connection(namenode_ip)
                try:
                    self.capture_error_log_from_cluster_node(
                        '/tmp/MapReduceTestOutput/log.txt')
                finally:
                    self.close_ssh_connection()

                self.fail("Log file of completed 'PI' job on 'tasktracker' or "
                          "'nodemanager' cluster node not found.")

        self.open_ssh_connection(namenode_ip)
        self._run_wordcount_job()
        self.close_ssh_connection()
@ -1,163 +0,0 @@
#!/bin/bash -x

log=/tmp/config-test-log.txt

case $1 in
    NameNodeHeapSize)
        FUNC="check_nn_heap_size"
    ;;

    SecondaryNameNodeHeapSize)
        FUNC="check_snn_heap_size"
    ;;

    JobTrackerHeapSize)
        FUNC="check_jt_heap_size"
    ;;

    DataNodeHeapSize)
        FUNC="check_dn_heap_size"
    ;;

    TaskTrackerHeapSize)
        FUNC="check_tt_heap_size"
    ;;

    OozieHeapSize)
        FUNC="check_oozie_heap_size"
    ;;

    oozie.notification.url.connection.timeout)
        FUNC="check_oozie_notification_url_connection_timeout"
    ;;

    dfs.replication)
        FUNC="check_dfs_replication"
    ;;

    mapred.map.tasks.speculative.execution)
        FUNC="check_mapred_map_tasks_speculative_execution"
    ;;

    mapred.child.java.opts)
        FUNC="check_mapred_child_java_opts"
    ;;
esac
shift

if [ "$1" = "-value" ]; then
    VALUE="$2"
fi
shift

check_submitted_parameter() {

    case "$1" in
        config_value)
            if [ -z "$VALUE" ]; then
                echo "Config value is not specified" >> $log
                exit 1
            fi
        ;;
    esac
}

compare_config_values() {

    check_submitted_parameter config_value

    if [ "$VALUE" = "$1" ]; then
        echo -e "CHECK IS SUCCESSFUL \n\n" >> $log && exit 0
    else
        echo -e "Config value while cluster creation request: $VALUE \n" >> $log
        echo -e "Actual config value on node: $1 \n" >> $log
        echo "$VALUE != $1" >> $log && exit 1
    fi
}

check_heap_size() {

    heap_size=`ps aux | grep java | grep $1 | grep -o 'Xmx[0-9]\{1,10\}m' | tail -n 1 | grep -o '[0-9]\{1,100\}'`

    compare_config_values $heap_size
}

check_nn_heap_size() {

    echo -e "*********************** NAME NODE HEAP SIZE **********************\n" >> $log

    check_heap_size "namenode"
}

check_snn_heap_size() {

    echo -e "*********************** SECONDARY NAME NODE HEAP SIZE **********************\n" >> $log

    check_heap_size "secondarynamenode"
}

check_jt_heap_size() {

    echo -e "********************** JOB TRACKER HEAP SIZE *********************\n" >> $log

    check_heap_size "jobtracker"
}

check_dn_heap_size() {

    echo -e "*********************** DATA NODE HEAP SIZE **********************\n" >> $log

    check_heap_size "datanode"
}

check_tt_heap_size() {

    echo -e "********************* TASK TRACKER HEAP SIZE *********************\n" >> $log

    check_heap_size "tasktracker"
}

check_oozie_heap_size() {

    echo -e "************************* OOZIE HEAP SIZE ************************\n" >> $log

    check_heap_size "oozie"
}

check_oozie_notification_url_connection_timeout() {

    echo -e "************ OOZIE.NOTIFICATION.URL.CONNECTION.TIMEOUT ***********\n" >> $log

    value=`cat /opt/oozie/conf/oozie-site.xml | grep -A 1 '.*oozie.notification.url.connection.timeout.*' | tail -n 1 | grep -o "[0-9]\{1,10\}"`

    compare_config_values $value
}

check_dfs_replication() {

    echo -e "************************* DFS.REPLICATION ************************\n" >> $log

    value=`cat /etc/hadoop/hdfs-site.xml | grep -A 1 '.*dfs.replication.*' | tail -n 1 | grep -o "[0-9]\{1,10\}"`

    compare_config_values $value
}

check_mapred_map_tasks_speculative_execution() {

    echo -e "************* MAPRED.MAP.TASKS.SPECULATIVE.EXECUTION *************\n" >> $log

    value=`cat /etc/hadoop/mapred-site.xml | grep -A 1 '.*mapred.map.tasks.speculative.execution.*' | tail -n 1 | grep -o "[a-z,A-Z]\{4,5\}" | grep -v "value"`

    compare_config_values $value
}

check_mapred_child_java_opts() {

    echo -e "********************* MAPRED.CHILD.JAVA.OPTS *********************\n" >> $log

    value=`cat /etc/hadoop/mapred-site.xml | grep -A 1 '.*mapred.child.java.opts.*' | tail -n 1 | grep -o "\-Xmx[0-9]\{1,10\}m"`

    compare_config_values $value
}

$FUNC
@ -1,16 +0,0 @@
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /var/log/flume-ng
a1.sinks.k1.sink.rollInterval = 0
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
@ -1,6 +0,0 @@
hello world 1
hello world 2
hello world 3
hello world 4
hello world 5
hello world 6
@ -1,21 +0,0 @@
#!/bin/bash -x

sudo flume-ng agent -n a1 -f flume.conf > flume.log 2>&1 &
sleep 5
sudo flume-ng avro-client -H localhost -p 44444 -F flume.data
sleep 5
cd /var/log/flume-ng
file=`ls -l|grep 1[0-9].*-1|grep 5|awk -F" " '{print $NF}'`
num=`cat $file | grep "hello world" | wc -l`

check_flume_availability(){
    echo $num
    if [ $num -lt 1 ]; then
        echo "Flume Agent is not available"
        exit 1
    else
        echo "Flume Agent is available"
    fi
}

check_flume_availability
@ -1,64 +0,0 @@
#!/bin/bash -x

case $1 in
    create_data)
        FUNC="create_data"
    ;;
    check_get_data)
        FUNC="check_get_data"
    ;;
    check_delete_data)
        FUNC="check_delete_data"
    ;;
esac

create_data(){
    exec hbase shell << EOF
disable 'scores'
drop 'scores'
create 'scores','grade','course'
put 'scores','Jack','grade','5'
put 'scores','Jack','course:math','90'
put 'scores','Jack','course:art','deleteme'
exit
EOF
}

get_data(){
    exec hbase shell << EOF
get 'scores','Jack','course:art'
exit
EOF
}

delete_data(){
    exec hbase shell << EOF
delete 'scores','Jack','course:art'
exit
EOF
}

check_get_data(){
    res=`get_data`
    if ! [[ `echo $res | grep "deleteme" | wc -l` -ge 1 ]]; then
        echo "Insert data failed"
        exit 1
    else
        echo "Insert data successful"
        exit 0
    fi
}

check_delete_data(){
    res1=`delete_data`
    res2=`get_data`
    if ! [[ `echo $res2 | grep "deleteme" | wc -l` -eq 0 ]]; then
        echo "Delete data failed"
        exit 1
    else
        echo "Delete data successful"
        exit 0
    fi
}

$FUNC
@ -1,31 +0,0 @@
#!/bin/bash -x

set -e

log=/tmp/impala-test-log.txt

case $1 in
    query)
        FUNC="check_query"
    ;;
esac
shift

if [ "$1" = "-ip" ]; then
    IP="$2"
else
    echo -e "-ip is missing \n" >> $log
    exit 1
fi

check_query() {
    if (impala-shell -i $IP:21000 -q "SELECT 2 > 1" --quiet|grep 'true'); then
        echo -e "Impala Query Successful \n" >> $log
        exit 0
    else
        echo -e "Impala Query Fail \n" >> $log
        exit 1
    fi
}

$FUNC
@ -1,6 +0,0 @@
<?xml version="1.0"?>
<indexer table="test-keyvalue">
    <field name="firstname_s" value="info:firstname"/>
    <field name="lastname_s" value="info:lastname"/>
    <field name="age_i" value="info:age" type="int"/>
</indexer>
@ -1,85 +0,0 @@
#!/bin/bash -x

set -e

log=/tmp/key-value-store-test-log.txt

case $1 in
    create_table)
        FUNC="create_table"
    ;;
    create_solr_collection)
        FUNC="create_solr_collection"
    ;;
    add_indexer)
        FUNC="add_hbase_indexer"
    ;;
    create_data)
        FUNC="create_data"
    ;;
    check_solr)
        FUNC="check_solr_query"
    ;;
    remove_data)
        FUNC="remove_data"
    ;;
esac
shift

if [ "$1" = "-ip" ]; then
    IP="$2"
else
    IP="127.0.0.1"
fi

create_table(){
    exec hbase shell << EOF
disable 'test-keyvalue'
drop 'test-keyvalue'
create 'test-keyvalue', { NAME => 'info', REPLICATION_SCOPE => 1 }
exit
EOF
}

create_solr_collection(){
    solrctl instancedir --generate $HOME/solr_keyvalue_configs
    sleep 3
    solrctl instancedir --create keyvalue_collection $HOME/solr_keyvalue_configs
    sleep 30
    solrctl collection --create keyvalue_collection -s 1 -c keyvalue_collection
    sleep 3
}

add_hbase_indexer(){
    hbase-indexer add-indexer -n myindexer -c key_value_store_indexer.xml -cp solr.zk=localhost:2181/solr -cp solr.collection=keyvalue_collection
    sleep 3
}

create_data(){
    exec hbase shell << EOF
put 'test-keyvalue', 'row1', 'info:firstname', 'John'
put 'test-keyvalue', 'row1', 'info:lastname', 'Smith'
exit
EOF
}

remove_data(){
    exec hbase shell << EOF
delete 'test-keyvalue', 'row1', 'info:firstname', 'John'
delete 'test-keyvalue', 'row1', 'info:lastname', 'Smith'
exit
EOF
}

check_solr_query(){
    sleep 3
    if [ `curl "http://$IP:8983/solr/keyvalue_collection_shard1_replica1/select?q=*:*&wt=json&indent=true" | grep "John" | wc -l` -ge 1 ]; then
        echo -e "Solr query is Successful. \n" >> $log
        exit 0
    else
        echo -e "Solr query is Failed. \n" >> $log
        exit 1
    fi
}

$FUNC
@ -1,138 +0,0 @@
#!/bin/bash -x

dir=/tmp/MapReduceTestOutput
log=$dir/log.txt

HADOOP_EXAMPLES_JAR_PATH=""
HADOOP_LOG_DIRECTORY=""
HADOOP_USER=""

NODE_COUNT=""

case $1 in
    run_pi_job)
        FUNC="run_pi_job"
    ;;

    get_pi_job_name)
        FUNC="get_pi_job_name"
    ;;

    check_directory)
        FUNC="check_job_directory_existence"
    ;;

    run_wordcount_job)
        FUNC="run_wordcount_job"
    ;;
esac
shift

if [ "$1" = "-job_name" ]; then
    JOB_NAME="$2"
fi
shift

check_submitted_parameter() {

    case "$1" in
        job_name)
            if [ -z "$JOB_NAME" ]; then
                echo "Job name not specified"
                exit 1
            fi
        ;;
    esac
}

check_job_directory_existence() {

    check_submitted_parameter job_name

    app_name=${JOB_NAME/"job"/"application"}
    if ! [ -d $HADOOP_LOG_DIRECTORY/$JOB_NAME -o -d $HADOOP_LOG_DIRECTORY/$app_name ]; then
        echo "Log file of \"PI\" job not found"
        exit 1
    fi
}

create_log_directory() {

    if ! [ -d $dir ]; then
        mkdir $dir
        chmod -R 777 $dir
        touch $log
    fi
}

run_pi_job() {

    create_log_directory

    echo -e "****************************** NETSTAT ***************************\n" >> $log

    echo -e "`sudo netstat -plten | grep java` \n\n\n" >> $log

    echo -e "************************ START OF \"PI\" JOB *********************\n" >> $log

    sudo -u $HADOOP_USER bash -lc "hadoop jar $HADOOP_EXAMPLES_JAR_PATH pi $(($NODE_COUNT*10)) $(($NODE_COUNT*1000))" >> $log

    echo -e "************************ END OF \"PI\" JOB ***********************" >> $log
}

get_pi_job_name() {

    # This sleep is needed to obtain the correct job name: the job name may
    # not immediately appear in the job list.
    sleep 60

    job_name=`sudo -u $HADOOP_USER bash -lc "hadoop job -list all | grep '^[[:space:]]*job_' | sort | tail -n1" | awk '{print $1}'`

    if [ $job_name = "JobId" ]; then
        echo "\"PI\" job name has not been obtained since \"PI\" job was not launched" >> $log
        exit 1
    fi

    echo "$job_name"
}

check_return_code_after_command_execution() {

    if [ "$1" = "-exit" ]; then
        if [ "$2" -ne 0 ]; then
            exit 1
        fi
    fi

    if [ "$1" = "-clean_hdfs" ]; then
        if [ "$2" -ne 0 ]; then
            sudo -u $HADOOP_USER bash -lc "hadoop dfs -rmr /map-reduce-test" && exit 1
        fi
    fi
}

run_wordcount_job() {

    create_log_directory

    dmesg > $dir/input

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -ls /"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -mkdir /map-reduce-test"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -copyFromLocal $dir/input /map-reduce-test/mydata"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop jar $HADOOP_EXAMPLES_JAR_PATH wordcount /map-reduce-test/mydata /map-reduce-test/output"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -copyToLocal /map-reduce-test/output $dir/output"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -rmr /map-reduce-test"
    check_return_code_after_command_execution -exit `echo "$?"`
}

$FUNC
@ -1,141 +0,0 @@
#!/bin/bash -x

dir=/tmp/MapReduceTestOutput
log=$dir/log.txt

HADOOP_EXAMPLES_JAR_PATH=""
HADOOP_LOG_DIRECTORY=""
HADOOP_USER=""

NODE_COUNT=""

case $1 in
    run_pi_job)
        FUNC="run_pi_job"
    ;;

    get_pi_job_name)
        FUNC="get_pi_job_name"
    ;;

    check_directory)
        FUNC="check_job_directory_existence"
    ;;

    run_wordcount_job)
        FUNC="run_wordcount_job"
    ;;
esac
shift

if [ "$1" = "-job_name" ]; then
    JOB_NAME="$2"
fi
shift

check_submitted_parameter() {

    case "$1" in
        job_name)
            if [ -z "$JOB_NAME" ]; then
                echo "Job name not specified"
                exit 1
            fi
        ;;
    esac
}

check_job_directory_existence() {

    check_submitted_parameter job_name

    app_name=${JOB_NAME/"job"/"application"}
    if ! [ -d $HADOOP_LOG_DIRECTORY/$JOB_NAME -o -d $HADOOP_LOG_DIRECTORY/$app_name ]; then
        echo "Log file of \"PI\" job not found"
        exit 1
    fi
}

create_log_directory() {

    if ! [ -d $dir ]; then
        mkdir $dir
        chmod -R 777 $dir
        touch $log
    fi
}

run_pi_job() {

    create_log_directory

    echo -e "****************************** NETSTAT ***************************\n" >> $log

    echo -e "`sudo netstat -plten | grep java` \n\n\n" >> $log

    echo -e "************************ START OF \"PI\" JOB *********************\n" >> $log

    sudo -u $HADOOP_USER bash -lc "hadoop jar $HADOOP_EXAMPLES_JAR_PATH pi $(($NODE_COUNT*10)) $(($NODE_COUNT*1000))" >> $log

    echo -e "************************ END OF \"PI\" JOB ***********************" >> $log
}

get_pi_job_name() {

    # This sleep is needed to obtain the correct job name: the job name may
    # not immediately appear in the job list.
    sleep 60

    job_name=`sudo -u $HADOOP_USER bash -lc "hadoop job -list all | grep '^[[:space:]]*job_' | sort | tail -n1" | awk '{print $1}'`

    if [ $job_name = "JobId" ]; then
        echo "\"PI\" job name has not been obtained since \"PI\" job was not launched" >> $log
        exit 1
    fi

    echo "$job_name"
}

check_return_code_after_command_execution() {

    if [ "$1" = "-exit" ]; then
        if [ "$2" -ne 0 ]; then
            exit 1
        fi
    fi

    if [ "$1" = "-clean_hdfs" ]; then
        if [ "$2" -ne 0 ]; then
            sudo -u $HADOOP_USER bash -lc "hadoop fs -rmr /map-reduce-test" && exit 1
        fi
    fi
}

run_wordcount_job() {

    create_log_directory

    dmesg > $dir/input

    sudo -u $HADOOP_USER bash -lc "hadoop fs -ls /"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -mkdir /map-reduce-test"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -copyFromLocal $dir/input /map-reduce-test/mydata"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop jar $HADOOP_EXAMPLES_JAR_PATH wordcount /map-reduce-test/mydata /map-reduce-test/output"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -copyToLocal /map-reduce-test/output $dir/output"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -rmr /map-reduce-test"
    check_return_code_after_command_execution -exit `echo "$?"`
}

run_hive_job() {
    # Implement
    :  # no-op placeholder; bash does not allow an empty function body
}

$FUNC
@ -1,138 +0,0 @@
#!/bin/bash -x

dir=/tmp/MapReduceTestOutput
log=$dir/log.txt

HADOOP_EXAMPLES_JAR_PATH=""
HADOOP_LOG_DIRECTORY=""
HADOOP_USER=""

NODE_COUNT=""

case $1 in
    run_pi_job)
        FUNC="run_pi_job"
    ;;

    get_pi_job_name)
        FUNC="get_pi_job_name"
    ;;

    check_directory)
        FUNC="check_job_directory_existence"
    ;;

    run_wordcount_job)
        FUNC="run_wordcount_job"
    ;;
esac
shift

if [ "$1" = "-job_name" ]; then
    JOB_NAME="$2"
fi
shift

check_submitted_parameter() {

    case "$1" in
        job_name)
            if [ -z "$JOB_NAME" ]; then
                echo "Job name not specified"
                exit 1
            fi
        ;;
    esac
}

check_job_directory_existence() {

    check_submitted_parameter job_name

    app_name=${JOB_NAME/"job"/"application"}
    if ! [ -d $HADOOP_LOG_DIRECTORY/$JOB_NAME -o -d $HADOOP_LOG_DIRECTORY/$app_name ]; then
        echo "Log file of \"PI\" job not found"
        exit 1
    fi
}

create_log_directory() {

    if ! [ -d $dir ]; then
        mkdir $dir
        chmod -R 777 $dir
        touch $log
    fi
}

run_pi_job() {

    create_log_directory

    echo -e "****************************** NETSTAT ***************************\n" >> $log

    echo -e "`sudo netstat -plten | grep java` \n\n\n" >> $log

    echo -e "************************ START OF \"PI\" JOB *********************\n" >> $log

    sudo -u $HADOOP_USER bash -lc "hadoop jar $HADOOP_EXAMPLES_JAR_PATH pi $(($NODE_COUNT*10)) $(($NODE_COUNT*1000))" >> $log

    echo -e "************************ END OF \"PI\" JOB ***********************" >> $log
}

get_pi_job_name() {

    # This sleep is needed to obtain the correct job name: the job name may
    # not immediately appear in the job list.
    sleep 60

    job_name=`sudo -u $HADOOP_USER bash -lc "hadoop job -list all | grep '^[[:space:]]*job_' | sort | tail -n1" | awk '{print $1}'`

    if [ $job_name = "JobId" ]; then
        echo "\"PI\" job name has not been obtained since \"PI\" job was not launched" >> $log
        exit 1
    fi

    echo "$job_name"
}

check_return_code_after_command_execution() {

    if [ "$1" = "-exit" ]; then
        if [ "$2" -ne 0 ]; then
            exit 1
        fi
    fi

    if [ "$1" = "-clean_hdfs" ]; then
        if [ "$2" -ne 0 ]; then
            sudo -u $HADOOP_USER bash -lc "hadoop fs -rmr /map-reduce-test" && exit 1
        fi
    fi
}

run_wordcount_job() {

    create_log_directory

    dmesg > $dir/input

    sudo -u $HADOOP_USER bash -lc "hadoop fs -ls /"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -mkdir /map-reduce-test"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -copyFromLocal $dir/input /map-reduce-test/mydata"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop jar $HADOOP_EXAMPLES_JAR_PATH wordcount /map-reduce-test/mydata /map-reduce-test/output"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -copyToLocal /map-reduce-test/output $dir/output"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -rmr /map-reduce-test"
    check_return_code_after_command_execution -exit `echo "$?"`
}

$FUNC
@ -1,69 +0,0 @@
#!/bin/bash -x

OS_TENANT_NAME=""
OS_USERNAME=""
OS_PASSWORD=""

HADOOP_USER=""

SWIFT_CONTAINER_NAME=""

SWIFT_PARAMS="-D fs.swift.service.sahara.username=$OS_USERNAME"
SWIFT_PARAMS+=" -D fs.swift.service.sahara.tenant=$OS_TENANT_NAME"
SWIFT_PARAMS+=" -D fs.swift.service.sahara.password=$OS_PASSWORD"


compare_files() {
    a=`md5sum $1 | awk {'print \$1'}`
    b=`md5sum $2 | awk {'print \$1'}`

    if [ "$a" = "$b" ]; then
        echo "md5-sums of files $1 and $2 are equal"
    else
        echo -e "\nUpload file to Swift: $1 \n"
        echo -e "Download file from Swift: $2 \n"
        echo -e "md5-sums of files $1 and $2 are not equal \n"
        echo "$1 != $2"; cleanup; exit 1
    fi
}

clean_local() {
    sudo rm -rf /tmp/test-file /tmp/swift-test-file
}

clean_mapr_fs() {
    sudo -u $HADOOP_USER bash -lc "hadoop fs -rmr /swift-test"
}

cleanup() {
    clean_local; clean_mapr_fs
}

check_return_code_after_command_execution() {
    if [ "$1" -ne 0 ]; then
        cleanup; exit 1;
    fi
}

check_swift_availability() {
    dd if=/dev/urandom of=/tmp/test-file bs=1048576 count=1

    sudo -u $HADOOP_USER bash -lc "hadoop fs -mkdir /swift-test"
    check_return_code_after_command_execution `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -copyFromLocal /tmp/test-file /swift-test/test-file"
    check_return_code_after_command_execution `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs $SWIFT_PARAMS -cp /swift-test/test-file swift://$SWIFT_CONTAINER_NAME.sahara/test-file"
    check_return_code_after_command_execution `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs $SWIFT_PARAMS -cp swift://$SWIFT_CONTAINER_NAME.sahara/test-file /swift-test/swift-test-file"
    check_return_code_after_command_execution `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop fs -copyToLocal /swift-test/swift-test-file /tmp/swift-test-file"
    check_return_code_after_command_execution `echo "$?"`

    compare_files /tmp/test-file /tmp/swift-test-file; cleanup
}

check_swift_availability
@ -1,24 +0,0 @@
#!/bin/bash -x

set -e
log=/tmp/config-sentry-test-log.txt

check_sentry(){
    conffile_dir=$(sudo find / -name "*-sentry-SENTRY_SERVER" | head -1)
    if [ -z $conffile_dir ]; then
        echo "Sentry configuration file directory not found" >> $log
        exit 1
    else
        conffile=$conffile_dir"/sentry-site.xml"
    fi

    conffile_tmp=/tmp/sentry-site.xml
    sudo cp $conffile $conffile_tmp
    sudo chmod 664 $conffile_tmp

    psql_jar=$(ls /usr/share/cmf/lib/postgresql* | head -1)
    export HADOOP_CLASSPATH=:$psql_jar
    sentry --command schema-tool -conffile $conffile_tmp -dbType postgres -info &>> $log
}

check_sentry
@ -1,23 +0,0 @@
#!/bin/bash -x

set -e

check_solr_availability(){
    solrctl instancedir --generate $HOME/solr_configs
    sleep 3
    solrctl instancedir --create collection2 $HOME/solr_configs
    sleep 30
    solrctl collection --create collection2 -s 1 -c collection2
    sleep 3
    cd /usr/share/doc/solr-doc/example/exampledocs
    /usr/lib/jvm/java-7-oracle-cloudera/bin/java -Durl=http://localhost:8983/solr/collection2/update -jar post.jar monitor.xml
    if [ `curl "http://localhost:8983/solr/collection2_shard1_replica1/select?q=UltraSharp&wt=json&indent=true" | grep "Dell Widescreen UltraSharp 3007WFP" | wc -l` -ge 1 ]; then
        echo "solr is available"
        exit 0
    else
        echo "solr is not available"
        exit 1
    fi
}

check_solr_availability
@ -1,28 +0,0 @@
#!/bin/bash -x

connect_server_list_jobs(){
    exec sqoop2 << EOF
set server --host localhost --port 12000
show server --all
show job --all
exit
EOF
}

check_sqoop2(){
    res=`connect_server_list_jobs`
    if [ `echo $res | grep "localhost" | wc -l` -lt 1 ]; then
        echo "sqoop2 is not available"
        exit 1
    else
        if [ `echo $res | grep "job(s) to show" | wc -l` -lt 1 ]; then
            echo "sqoop2 is not available"
            exit 1
        else
            echo "sqoop2 is available"
            exit 0
        fi
    fi
}

check_sqoop2
@ -1,67 +0,0 @@
#!/bin/bash -x

OS_TENANT_NAME=""
OS_USERNAME=""
OS_PASSWORD=""

HADOOP_USER=""

SWIFT_CONTAINER_NAME=""

compare_files() {

    a=`md5sum $1 | awk {'print \$1'}`
    b=`md5sum $2 | awk {'print \$1'}`

    if [ "$a" = "$b" ]; then
        echo "md5-sums of files $1 and $2 are equal"
    else
        echo -e "\nUpload file to Swift: $1 \n"
        echo -e "Download file from Swift: $2 \n"
        echo -e "md5-sums of files $1 and $2 are not equal \n"
        echo "$1 != $2" && exit 1
    fi
}

check_return_code_after_command_execution() {

    if [ "$1" = "-exit" ]; then
        if [ "$2" -ne 0 ]; then
            exit 1
        fi
    fi

    if [ "$1" = "-clean_hdfs" ]; then
        if [ "$2" -ne 0 ]; then
            sudo -u $HADOOP_USER bash -lc "hadoop dfs -rmr /swift-test" && exit 1
        fi
    fi
}

check_swift_availability() {

    dd if=/dev/urandom of=/tmp/test-file bs=1048576 count=1

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -mkdir /swift-test"
    check_return_code_after_command_execution -exit `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -copyFromLocal /tmp/test-file /swift-test/"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop distcp -D fs.swift.service.sahara.username=$OS_USERNAME -D fs.swift.service.sahara.tenant=$OS_TENANT_NAME -D fs.swift.service.sahara.password=$OS_PASSWORD /swift-test/test-file swift://$SWIFT_CONTAINER_NAME.sahara/"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop distcp -D fs.swift.service.sahara.username=$OS_USERNAME -D fs.swift.service.sahara.tenant=$OS_TENANT_NAME -D fs.swift.service.sahara.password=$OS_PASSWORD swift://$SWIFT_CONTAINER_NAME.sahara/test-file /swift-test/swift-test-file"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -copyToLocal /swift-test/swift-test-file /tmp/swift-test-file"
    check_return_code_after_command_execution -clean_hdfs `echo "$?"`

    sudo -u $HADOOP_USER bash -lc "hadoop dfs -rmr /swift-test"

    compare_files /tmp/test-file /tmp/swift-test-file

    sudo rm /tmp/test-file /tmp/swift-test-file
}

check_swift_availability
@ -1,120 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_utils import excutils

from sahara.tests.integration.tests import base


class ScalingTest(base.ITestCase):
    def _change_node_info_while_ng_adding(self, ngt_id, count, cluster_info):
        cluster_info['node_info']['node_count'] += count
        node_processes = self.sahara.node_group_templates.get(
            ngt_id).node_processes
        if cluster_info['plugin_config'].PROCESS_NAMES['tt'] in node_processes:
            cluster_info['node_info']['tasktracker_count'] += count
        if cluster_info['plugin_config'].PROCESS_NAMES['dn'] in node_processes:
            cluster_info['node_info']['datanode_count'] += count

    def _change_node_info_while_ng_resizing(self, name, count, cluster_info):
        node_groups = self.sahara.clusters.get(
            cluster_info['cluster_id']).node_groups
        for node_group in node_groups:
            if node_group['name'] == name:
                processes = node_group['node_processes']
                old_count = node_group['count']
        cluster_info['node_info']['node_count'] += -old_count + count
        if cluster_info['plugin_config'].PROCESS_NAMES['tt'] in processes:
            cluster_info['node_info']['tasktracker_count'] += (
                -old_count + count
            )
        if cluster_info['plugin_config'].PROCESS_NAMES['dn'] in processes:
            cluster_info['node_info']['datanode_count'] += -old_count + count

    @staticmethod
    def _add_new_field_to_scale_body_while_ng_resizing(
            scale_body, name, count):
        scale_body['resize_node_groups'].append(
            {
                'name': name,
                'count': count
            }
        )

    @staticmethod
    def _add_new_field_to_scale_body_while_ng_adding(
            scale_body, ngt_id, count, name):
        scale_body['add_node_groups'].append(
            {
                'node_group_template_id': ngt_id,
                'count': count,
                'name': name
            }
        )

    @base.skip_test('SKIP_SCALING_TEST',
                    'Test for cluster scaling was skipped.')
    def cluster_scaling(self, cluster_info, change_list):
        scale_body = {'add_node_groups': [], 'resize_node_groups': []}
        for change in change_list:
            if change['operation'] == 'resize':
                node_group_name = change['info'][0]
                node_group_size = change['info'][1]
                self._add_new_field_to_scale_body_while_ng_resizing(
                    scale_body, node_group_name, node_group_size
                )
                self._change_node_info_while_ng_resizing(
                    node_group_name, node_group_size, cluster_info
                )
            if change['operation'] == 'add':
                node_group_name = change['info'][0]
                node_group_size = change['info'][1]
                node_group_id = change['info'][2]
                self._add_new_field_to_scale_body_while_ng_adding(
                    scale_body, node_group_id, node_group_size, node_group_name
                )
                self._change_node_info_while_ng_adding(
                    node_group_id, node_group_size, cluster_info
                )
        scale_body = {key: value for key, value in scale_body.items() if value}
        self.sahara.clusters.scale(cluster_info['cluster_id'], scale_body)
        self.poll_cluster_state(cluster_info['cluster_id'])
        new_node_ip_list = self.get_cluster_node_ip_list_with_node_processes(
            cluster_info['cluster_id']
        )
        try:
            new_node_info = self.get_node_info(new_node_ip_list,
                                               cluster_info['plugin_config'])

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(
                    '\nFailure during check of node process deployment '
                    'on cluster node: ' + str(e)
                )
        expected_node_info = cluster_info['node_info']
        self.assertEqual(
            expected_node_info, new_node_info,
            'Failure while node info comparison.\n'
            'Expected node info after cluster scaling: %s.\n'
            'Actual node info after cluster scaling: %s.'
            % (expected_node_info, new_node_info)
        )
        return {
            'cluster_id': cluster_info['cluster_id'],
            'node_ip_list': new_node_ip_list,
            'node_info': new_node_info,
            'plugin_config': cluster_info['plugin_config']
        }
@ -1,62 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import uuid

from oslo_utils import excutils

from sahara.tests.integration.tests import base


class SwiftTest(base.ITestCase):
    DEFAULT_TEST_SCRIPT = 'swift_test_script.sh'

    @base.skip_test(
        'SKIP_SWIFT_TEST',
        message='Test for check of Swift availability was skipped.')
    def check_swift_availability(self, cluster_info, script=None):
        script = script or SwiftTest.DEFAULT_TEST_SCRIPT
        plugin_config = cluster_info['plugin_config']
        # Make a unique name for the Swift container used during Swift testing
        swift_container_name = 'Swift-test-' + str(uuid.uuid4())[:8]
        extra_script_parameters = {
            'OS_TENANT_NAME': self.common_config.OS_TENANT_NAME,
            'OS_USERNAME': self.common_config.OS_USERNAME,
            'OS_PASSWORD': self.common_config.OS_PASSWORD,
            'HADOOP_USER': plugin_config.HADOOP_USER,
            'SWIFT_CONTAINER_NAME': swift_container_name
        }
        namenode_ip = cluster_info['node_info']['namenode_ip']
        self.open_ssh_connection(namenode_ip)
        try:
            self.transfer_helper_script_to_node(
                script, parameter_list=extra_script_parameters
            )

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(str(e))
        swift = self.connect_to_swift()
        swift.put_container(swift_container_name)
        try:
            self.execute_command('./script.sh')

        except Exception as e:
            with excutils.save_and_reraise_exception():
                print(str(e))

        finally:
            self.delete_swift_container(swift, swift_container_name)
            self.close_ssh_connection()
@ -1,3 +0,0 @@
#!/bin/bash

tox -e integration
tox.ini

@ -16,12 +16,6 @@ commands = bash tools/pretty_tox.sh '{posargs}'
whitelist_externals = bash
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY

[testenv:integration]
setenv =
    VIRTUAL_ENV={envdir}
    DISCOVER_DIRECTORY=sahara/tests/integration
commands = bash tools/pretty_tox.sh '{posargs}'

[testenv:scenario]
setenv = VIRTUALENV={envdir}
commands = python {toxinidir}/sahara/tests/scenario/runner.py {posargs}