Fix development environment and functional tests

Change the devstack environment vagrant box and rename the devstack VM
from 'pg-tips' to 'devstack'. Also fix all the tests that were broken
when they were moved from tests/unit to tests/functional in
https://review.openstack.org/#/c/400237/. Update the devstack README
with a "Development workflow for monasca-transform" section describing
the steps developers can take to develop and run tests.

Change-Id: I11678148ba2bcb96eb3e2a522176683dc8bca30a
parent 6f41ca085d
commit f99a3faf68
@@ -29,10 +29,10 @@ to be used in the devstack instance. It is important therefore that changes
 should not be pushed from the vm as the unevaluated commit would be pushed.

 N.B. If you are running with virtualbox you may find that `./stack.sh` fails with the filesystem becoming read-only. There is a workaround:

-1. vagrant up --no-provision && vagrant halt
-2. open virtualbox gui
-3. open target vm settings and change storage controller from SCSI to SATA
+1. vagrant up --no-provision && vagrant halt
+2. open virtualbox gui
+3. open target vm settings and change storage controller from SCSI to SATA
+4. vagrant up

 ### Using the upstream committed state of monasca-transform
@@ -100,6 +100,63 @@ database are updated with fresh copies also though the start scripts, driver and
service python code are left as they are (because I'm not envisaging much change
in those).

### Development workflow

Here are the normal steps a developer can take to make code changes. It is
essential that the developer runs all of the functional tests in a devstack
environment before submitting any changes for review/merge.

Please follow the steps in the
"To run monasca-transform using the provided vagrant environment" section above
to create a devstack VM environment before following the steps below:

1. Make code changes on the host machine (e.g. ~/monasca-transform)
2. vagrant ssh (to connect to the devstack VM)
3. cd /opt/stack/monasca-transform
4. tools/vagrant/refresh_monasca_transform.sh (see the "Updating the code for dev" section above)
5. cd /opt/stack/monasca-transform (since the monasca-transform folder gets recreated in Step 4 above)
6. tox -e pep8
7. tox -e py27
8. tox -e functional

Note: It is mandatory to run the functional tests before submitting any changes
for review/merge. These can currently be run only in a devstack VM, since the
tests need access to the Apache Spark libraries. This is accomplished by setting
the SPARK_HOME environment variable, which is done in tox.ini:

    export SPARK_HOME=/opt/spark/current
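The numbered steps above can be collected into a single helper. This is only a sketch, assuming the vagrant environment described in this README; the paths and script names are the ones listed in the steps, and nothing is invoked at definition time.

```shell
# Sketch only: wraps steps 3-8 above into one function to run inside
# the devstack VM (after `vagrant ssh`). Assumes the paths and scripts
# named in the steps above.
set -e

run_monasca_transform_checks() {
    cd /opt/stack/monasca-transform
    # Refresh the installed copy of the code (recreates this folder).
    tools/vagrant/refresh_monasca_transform.sh
    cd /opt/stack/monasca-transform
    # Style checks, unit tests, then the mandatory functional tests.
    tox -e pep8
    tox -e py27
    tox -e functional
}

echo "run_monasca_transform_checks defined"
```

Defining it in the VM's shell profile saves retyping the sequence on every iteration.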
#### How to find and fix test failures

To find which tests failed after running the functional tests (after you have
run the functional tests as per the steps in the Development workflow):

    export OS_TEST_PATH=tests/functional
    export SPARK_HOME=/opt/spark/current
    source .tox/functional/bin/activate
    testr run
    testr failing    (to get the list of tests that failed)

You can add

    import pdb
    pdb.set_trace()

in a test or in the code where you want to start the python debugger.

Run a single test using

    python -m testtools.run <test>

For example:

    python -m testtools.run \
        tests.functional.usage.test_host_cpu_usage_component_second_agg.SparkTest

Reference: https://wiki.openstack.org/wiki/Testr

## To run monasca-transform using a different deployment technology

Monasca-transform requires supporting services, such as Kafka and
@@ -46,7 +46,9 @@ class MySQLOffsetSpecs(OffsetSpecs):
         db = create_engine(DbUtil.get_python_db_connection_string(),
                            isolation_level="READ UNCOMMITTED")

-        db.echo = True
+        if cfg.CONF.service.enable_debug_log_entries:
+            db.echo = True

         # reflect the tables
         Base.prepare(db, reflect=True)
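The change above gates SQLAlchemy's `echo` flag (which logs every SQL statement) behind a debug option instead of enabling it unconditionally. A minimal sketch of that pattern, with `ServiceConfig` and `make_engine_kwargs` as illustrative names that are not part of monasca-transform:

```python
# Hypothetical helper illustrating the pattern above: engine debug
# echo is enabled only when the service's debug option is set.

class ServiceConfig(object):
    """Stand-in for cfg.CONF.service."""

    def __init__(self, enable_debug_log_entries=False):
        self.enable_debug_log_entries = enable_debug_log_entries


def make_engine_kwargs(conf):
    """Build create_engine() keyword arguments from service config."""
    kwargs = {"isolation_level": "READ UNCOMMITTED"}
    # Echo SQL statements only when debug logging is requested.
    if conf.enable_debug_log_entries:
        kwargs["echo"] = True
    return kwargs
```

Keeping the debug decision in one place means the noisy SQL logging cannot leak into production configurations by accident.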
@@ -11,11 +11,10 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
-from monasca_transform.log_utils import LogUtils
-
 from stevedore import extension

+from monasca_transform.log_utils import LogUtils


 class GenericTransformBuilder (object):
     """Build transformation pipeline based on
@@ -7,6 +7,7 @@ flake8<2.6.0,>=2.5.4 # MIT
nose # LGPL
mock>=2.0 # BSD
fixtures>=3.0.0 # Apache-2.0/BSD
os-testr>=0.8.0 # Apache-2.0
# required to build documentation
sphinx>=1.5.1 # BSD
#oslosphinx>=4.7.0 # Apache-2.0
@@ -12,6 +12,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 import mock

 from pyspark.sql import SQLContext

 from monasca_transform.config.config_initializer import ConfigInitializer

@@ -20,11 +21,12 @@ from monasca_transform.transform.builder.generic_transform_builder \
 from monasca_transform.transform.transform_utils import RecordStoreUtils
 from monasca_transform.transform.transform_utils import TransformSpecsUtils
 from monasca_transform.transform import TransformContextUtils

+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.mem_total_all.data_provider \
+    import DataProvider
-from tests.unit \
+from tests.functional.test_resources.mock_component_manager \
     import MockComponentManager
-from tests.unit import SparkContextTest


 class TransformBuilderTest(SparkContextTest):

@@ -34,7 +36,7 @@ class TransformBuilderTest(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/test_config.conf'])
+                'tests/functional/test_resources/config/test_config.conf'])

     @mock.patch('monasca_transform.transform.builder.generic_transform_builder'
                 '.GenericTransformBuilder._get_insert_component_manager')
@@ -33,9 +33,11 @@ class DummyInsert(InsertComponent):
                 ".dimension_list"
             ).collect()[0].asDict()

-        cfg.CONF.set_override('adapter',
-                              'tests.unit.messaging.adapter:DummyAdapter',
-                              group='messaging')
+        cfg.CONF.set_override(
+            'adapter',
+            'tests.functional.messaging.adapter:DummyAdapter',
+            group='messaging')

         # Approach 1
         # using foreachPartition to iterate through elements in an
         # RDD is the recommended approach so as to not overwhelm kafka with the
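The adapter value being overridden above is a `'package.module:ClassName'` string. A hedged sketch of what such a spec resolves to; this loader is illustrative only, not monasca-transform's actual mechanism (which goes through oslo.config and stevedore):

```python
# Illustrative resolver for a 'module:Class' spec such as
# 'tests.functional.messaging.adapter:DummyAdapter'.
import importlib


def load_adapter(spec):
    """Resolve a 'package.module:ClassName' spec to the class object."""
    module_name, _, class_name = spec.partition(':')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

With a spec like `'collections:OrderedDict'`, `load_adapter` returns the `OrderedDict` class, which is why swapping `tests.unit...` for `tests.functional...` in the override string is all the tests needed after the move.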
@@ -33,9 +33,10 @@ class DummyInsertPreHourly(InsertComponent):
             ).collect()[0].asDict()
         metric_id = agg_params['metric_id']

-        cfg.CONF.set_override('adapter',
-                              'tests.unit.messaging.adapter:DummyAdapter',
-                              group='messaging')
+        cfg.CONF.set_override(
+            'adapter',
+            'tests.functional.messaging.adapter:DummyAdapter',
+            group='messaging')
         # Approach 1
         # using foreachPartition to iterate through elements in an
         # RDD is the recommended approach so as to not overwhelm kafka with the
@@ -24,7 +24,7 @@ class TestConfigInitializer(unittest.TestCase):

         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/test_config.conf'
+                'tests/functional/test_resources/config/test_config.conf'
             ])
         self.assertEqual('test_offsets_repo_class',
                         cfg.CONF.repositories.offsets)
@@ -115,24 +115,60 @@ class TestDataDrivenSpecsRepo(SparkContextTest):
             agg_params_json["aggregated_metric_name"])

     def check_pre_transform_specs_data_frame(
-            self, pre_transform_specs_data_frame):
+            self, pre_transform_specs_data_frame, is_json_specs=False):

-        # gather the references and uses here
-        self.assertEqual(
-            Counter([u'mem.usable_mb',
-                     u'mem.total_mb',
-                     u'disk.total_used_space_mb', u'disk.total_space_mb',
-                     u'cpu.total_logical_cores',
-                     u'cpu.idle_perc', u'nova.vm.cpu.total_allocated',
-                     u'nova.vm.mem.total_allocated_mb', u'vcpus',
-                     u'vm.mem.total_mb', u'vm.mem.used_mb',
-                     u'nova.vm.disk.total_allocated_gb',
-                     u'vm.disk.allocation', u'vm.cpu.utilization_perc',
-                     u'swiftlm.diskusage.host.val.size',
-                     u'swiftlm.diskusage.host.val.avail',
-                     u'storage.objects.size']),
-            Counter([row.event_type for row in
-                     pre_transform_specs_data_frame.collect()]))
+        if is_json_specs:
+            # gather the references and uses here
+            self.assertEqual(Counter([u'container.cpu.total_time',
+                                      u'cpu.idle_perc',
+                                      u'cpu.total_logical_cores',
+                                      u'cpu.total_time_sec',
+                                      u'disk.total_space_mb',
+                                      u'disk.total_used_space_mb',
+                                      u'kubernetes.node.allocatable.cpu',
+                                      u'kubernetes.node.capacity.cpu',
+                                      u'mem.total_mb',
+                                      u'mem.usable_mb',
+                                      u'nova.vm.cpu.total_allocated',
+                                      u'nova.vm.disk.total_allocated_gb',
+                                      u'nova.vm.mem.total_allocated_mb',
+                                      u'pod.cpu.total_time',
+                                      u'pod.mem.used_bytes',
+                                      u'pod.net.in_bytes_sec',
+                                      u'pod.net.out_bytes_sec',
+                                      u'storage.objects.size',
+                                      u'swiftlm.diskusage.host.val.avail',
+                                      u'swiftlm.diskusage.host.val.size',
+                                      u'vcpus',
+                                      u'vm.cpu.utilization_perc',
+                                      u'vm.disk.allocation',
+                                      u'vm.mem.total_mb',
+                                      u'vm.mem.used_mb']),
+                             Counter(
+                                 [row.event_type for row in
+                                  pre_transform_specs_data_frame.collect()]))
+        else:
+            # gather the references and uses here
+            self.assertEqual(Counter([u'cpu.idle_perc',
+                                      u'cpu.total_logical_cores',
+                                      u'disk.total_space_mb',
+                                      u'disk.total_used_space_mb',
+                                      u'mem.total_mb',
+                                      u'mem.usable_mb',
+                                      u'nova.vm.cpu.total_allocated',
+                                      u'nova.vm.disk.total_allocated_gb',
+                                      u'nova.vm.mem.total_allocated_mb',
+                                      u'storage.objects.size',
+                                      u'swiftlm.diskusage.host.val.avail',
+                                      u'swiftlm.diskusage.host.val.size',
+                                      u'vcpus',
+                                      u'vm.cpu.utilization_perc',
+                                      u'vm.disk.allocation',
+                                      u'vm.mem.total_mb',
+                                      u'vm.mem.used_mb']),
+                             Counter(
+                                 [row.event_type for row in
+                                  pre_transform_specs_data_frame.collect()]))

         # mem.usable_mb
         event_type = 'mem.usable_mb'

@@ -601,4 +637,5 @@ class TestJSONDataDrivenSpecsRepo(TestDataDrivenSpecsRepo):
                 pre_transform_specs_type))

         self.check_pre_transform_specs_data_frame(
-            json_pre_transform_specs_data_frame)
+            json_pre_transform_specs_data_frame,
+            is_json_specs=True)
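The assertions above wrap both the expected metric names and the collected `event_type` column in `collections.Counter`, which makes the comparison independent of row ordering while still catching missing or duplicated entries. A minimal sketch of that comparison style:

```python
# Minimal sketch of the Counter-based assertion style used above.
from collections import Counter

expected = [u'mem.total_mb', u'mem.usable_mb', u'cpu.idle_perc']
actual = [u'cpu.idle_perc', u'mem.usable_mb', u'mem.total_mb']

# Order differs, but the multiset of event types is the same.
assert Counter(expected) == Counter(actual)

# Unlike a set comparison, Counter still catches a duplicated entry.
assert Counter(expected) != Counter(actual + [u'cpu.idle_perc'])
```

This matters for Spark results in particular, since `collect()` gives no ordering guarantee across partitions.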
@@ -24,7 +24,7 @@ class TestDBUtil(unittest.TestCase):
     def setUp(self):
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/test_config.conf'
+                'tests/functional/test_resources/config/test_config.conf'
             ])
         self.config = Config()
         self.config.config(
@@ -22,9 +22,10 @@ from monasca_transform.component.usage.fetch_quantity \
 from monasca_transform.transform.transform_utils import RecordStoreUtils
 from monasca_transform.transform.transform_utils import TransformSpecsUtils
 from monasca_transform.transform import TransformContextUtils

+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.mem_total_all.data_provider \
+    import DataProvider
-from tests.unit import SparkContextTest


 class SetAggregatedMetricNameTest(SparkContextTest):
@@ -12,7 +12,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 from pyspark.sql import SQLContext
-from tests.unit.spark_context_test import SparkContextTest
+from tests.functional.spark_context_test import SparkContextTest

 from monasca_transform.component.setter.rollup_quantity \
     import RollupQuantity
@@ -27,7 +27,6 @@ class SparkContextTest(unittest.TestCase):
            setAppName("monasca-transform unit tests").\
            set("spark.sql.shuffle.partitions", "10")
        self.spark_context = SparkContext.getOrCreate(conf=spark_conf)

        # quiet logging
        logger = self.spark_context._jvm.org.apache.log4j
        logger.LogManager.getLogger("org").setLevel(logger.Level.WARN)
@@ -19,13 +19,14 @@ import sys
 import unittest
 import uuid

-from monasca_transform.offset_specs import JSONOffsetSpecs
 from monasca_transform.offset_specs import OffsetSpec

+from tests.functional.json_offset_specs import JSONOffsetSpecs


 class TestJSONOffsetSpecs(unittest.TestCase):

-    test_resources_path = 'tests/unit/test_resources'
+    test_resources_path = 'tests/functional/test_resources'

     def setUp(self):
         pass

@@ -269,7 +270,7 @@ class TestJSONOffsetSpecs(unittest.TestCase):
         json_file = json.load(f)
         return json_file

-    @unittest.skip
+    @unittest.skip("skipping not implemented")
     def test_get_offsets_is_obj_based(self):
         self.fail('We need to assert that we get objects back '
                   'from the get offsets method')
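The last hunk above replaces a bare `@unittest.skip` with `@unittest.skip("skipping not implemented")`; with a reason string, the runner records why the test was skipped instead of silently passing over it. A self-contained sketch (class and method names are illustrative):

```python
# Sketch of unittest.skip with a reason: the reason is recorded on the
# TestResult, so 'testr'/'unittest' output can show why it was skipped.
import io
import unittest


class ExampleTest(unittest.TestCase):

    @unittest.skip("skipping not implemented")
    def test_not_implemented_yet(self):
        self.fail("never runs while skipped")


loader = unittest.defaultTestLoader
result = unittest.TextTestRunner(stream=io.StringIO()).run(
    loader.loadTestsFromTestCase(ExampleTest))
# result.skipped holds (test, reason) pairs; the run is still successful.
```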
@@ -28,7 +28,7 @@ enable_record_store_df_cache = False
record_store_df_cache_storage_level = MEMORY_ONLY_SER_2
enable_debug_log_entries = true
# the location for the transform-service log
-service_log_path=/home/david/
+service_log_path=/tmp/
# the filename for the transform-service log
service_log_filename=monasca-transform.log
@@ -5,7 +5,7 @@ offsets = tests.functional.json_offset_specs:JSONOffsetSpecs
data_driven_specs = tests.functional.data_driven_specs.json_data_driven_specs_repo:JSONDataDrivenSpecsRepo

[messaging]
-adapter = tests.unit.messaging.adapter:DummyAdapter
+adapter = tests.functional.messaging.adapter:DummyAdapter

[stage_processors]
enable_pre_hourly_processor = False
@@ -17,7 +17,7 @@ import os

 class DataProvider(object):

-    _resource_path = 'tests/unit/test_resources/cpu_kafka_data/'
+    _resource_path = 'tests/functional/test_resources/cpu_kafka_data/'

     kafka_data_path = os.path.join(_resource_path,
                                    "cpu_kafka_data.txt")
@@ -17,7 +17,8 @@ import os

 class DataProvider(object):

-    _resource_path = 'tests/unit/test_resources/cpu_kafka_data_second_stage/'
+    _resource_path = ('tests/functional/test_resources/'
+                      'cpu_kafka_data_second_stage/')

     kafka_data_path = os.path.join(_resource_path,
                                    "cpu_kafka_data.txt")
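In the second-stage provider above, the longer `tests/functional/...` prefix pushed the literal over the line-length limit, so the new value is split across two adjacent string literals inside parentheses (implicit string concatenation). Both spellings build the same string:

```python
# Implicit string concatenation: adjacent literals inside parentheses
# are joined at compile time, so the wrapped form below equals the
# single-line form.
single_line = 'tests/functional/test_resources/cpu_kafka_data_second_stage/'
wrapped = ('tests/functional/test_resources/'
           'cpu_kafka_data_second_stage/')
```

This is the standard PEP 8-friendly way to wrap long path literals without introducing `+` at runtime.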
@@ -17,7 +17,7 @@ import os

 class DataProvider(object):

-    _resource_path = 'tests/unit/test_resources/fetch_quantity_data/'
+    _resource_path = 'tests/functional/test_resources/fetch_quantity_data/'

     fetch_quantity_data_path = os.path.join(_resource_path,
                                             "fetch_quantity_data.txt")
@@ -17,7 +17,7 @@ import os

 class DataProvider(object):

-    _resource_path = 'tests/unit/test_resources/'\
+    _resource_path = 'tests/functional/test_resources/'\
         'fetch_quantity_data_second_stage/'

     fetch_quantity_data_path = os.path.join(
@@ -17,7 +17,7 @@ import os

 class DataProvider(object):

-    _resource_path = 'tests/unit/test_resources/' \
+    _resource_path = 'tests/functional/test_resources/' \
         'fetch_quantity_util_second_stage/'

     kafka_data_path = os.path.join(_resource_path, "kafka_data.txt")
@@ -17,7 +17,7 @@ import os

 class DataProvider(object):

-    _resource_path = 'tests/unit/test_resources/mem_total_all/'
+    _resource_path = 'tests/functional/test_resources/mem_total_all/'

     record_store_path = os.path.join(_resource_path,
                                      "record_store_df.txt")
@@ -17,12 +17,12 @@ import unittest
 import mock
 from oslo_config import cfg
 from pyspark.streaming.kafka import OffsetRange
-from tests.unit.spark_context_test import SparkContextTest
-from tests.unit.test_resources.fetch_quantity_data.data_provider \
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.fetch_quantity_data.data_provider \
     import DataProvider
-from tests.unit.test_resources.mock_component_manager \
+from tests.functional.test_resources.mock_component_manager \
     import MockComponentManager
-from tests.unit.test_resources.mock_data_driven_specs_repo \
+from tests.functional.test_resources.mock_data_driven_specs_repo \
     import MockDataDrivenSpecsRepo

 from monasca_transform.component.usage.fetch_quantity \

@@ -42,7 +42,7 @@ class TestFetchQuantityAgg(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:
@@ -18,17 +18,17 @@ import mock
 from oslo_config import cfg
 from pyspark.sql import SQLContext
 from pyspark.streaming.kafka import OffsetRange
-from tests.unit.component.insert.dummy_insert import DummyInsert
-from tests.unit.spark_context_test import SparkContextTest
-from tests.unit.test_resources.fetch_quantity_data.data_provider \
+from tests.functional.component.insert.dummy_insert import DummyInsert
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.fetch_quantity_data.data_provider \
     import DataProvider
-from tests.unit.test_resources.fetch_quantity_data_second_stage.data_provider \
+from tests.functional.test_resources.fetch_quantity_data_second_stage.data_provider \
     import DataProvider as SecondStageDataProvider
-from tests.unit.test_resources.mock_component_manager \
+from tests.functional.test_resources.mock_component_manager \
     import MockComponentManager
-from tests.unit.test_resources.mock_data_driven_specs_repo \
+from tests.functional.test_resources.mock_data_driven_specs_repo \
     import MockDataDrivenSpecsRepo
-from tests.unit.usage import dump_as_ascii_string
+from tests.functional.usage import dump_as_ascii_string

 from monasca_transform.config.config_initializer import ConfigInitializer
 from monasca_transform.driver.mon_metrics_kafka \

@@ -46,7 +46,7 @@ class TestFetchQuantityInstanceUsageAgg(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:
@@ -12,16 +12,20 @@
 # License for the specific language governing permissions and limitations
 # under the License.
+import json
+import unittest
+
 import mock
 from oslo_config import cfg
-import unittest

 from pyspark.streaming.kafka import OffsetRange
-from tests.unit.spark_context_test import SparkContextTest
-from tests.unit.test_resources.cpu_kafka_data.data_provider import DataProvider
-from tests.unit.test_resources.mock_component_manager \
+
+from tests.functional.messaging.adapter import DummyAdapter
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.cpu_kafka_data.data_provider \
+    import DataProvider
+from tests.functional.test_resources.mock_component_manager \
     import MockComponentManager
-from tests.unit.test_resources.mock_data_driven_specs_repo \
+from tests.functional.test_resources.mock_data_driven_specs_repo \
     import MockDataDrivenSpecsRepo

 from monasca_transform.component.usage.fetch_quantity_util import \

@@ -31,7 +35,6 @@ from monasca_transform.driver.mon_metrics_kafka \
     import MonMetricsKafkaProcessor
 from monasca_transform.transform import RddTransformContext
 from monasca_transform.transform import TransformContextUtils
-from tests.functional.messaging.adapter import DummyAdapter


 class TestFetchQuantityUtilAgg(SparkContextTest):

@@ -41,7 +44,7 @@ class TestFetchQuantityUtilAgg(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:
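Several hunks below change expected values from round numbers such as 8.0 to long doubles such as 7.134214285714285. Exact float equality works there because the same deterministic computation reproduces the identical double; when the aggregation order can vary, `assertAlmostEqual` is the usual guard. A small illustrative sketch (the arithmetic is a stand-in, not the actual aggregation):

```python
# Illustrative only: bounding a float comparison with assertAlmostEqual
# instead of exact equality. 49.9395 / 7 stands in for an aggregated
# utilization value close to the expected 7.134214285714285.
import unittest


class FloatCompareExample(unittest.TestCase):

    def runTest(self):
        computed = 49.9395 / 7  # stand-in for an aggregated value
        # Passes if the values agree to 7 decimal places.
        self.assertAlmostEqual(7.134214285714285, computed, places=7)


FloatCompareExample().runTest()
```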
@@ -25,17 +25,19 @@ from monasca_transform.driver.mon_metrics_kafka \
 from monasca_transform.processor.pre_hourly_processor import PreHourlyProcessor
 from monasca_transform.transform import RddTransformContext
 from monasca_transform.transform import TransformContextUtils

+from tests.functional.component.insert.dummy_insert import DummyInsert
+from tests.functional.messaging.adapter import DummyAdapter
-from tests.unit import DataProvider
-from tests.unit \
-    import DataProvider as SecondStageDataProvider
-from tests.unit import DummyInsert
-from tests.unit import dump_as_ascii_string
-from tests.unit \
-    import MockComponentManager
-from tests.unit \
+from tests.functional.test_resources.cpu_kafka_data.data_provider \
+    import DataProvider
+from tests.functional.test_resources.fetch_quantity_util_second_stage.\
+    data_provider import DataProvider as SecondStageDataProvider
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.mock_component_manager import \
+    MockComponentManager
+from tests.functional.test_resources.mock_data_driven_specs_repo \
+    import MockDataDrivenSpecsRepo
-from tests.unit import SparkContextTest
+from tests.functional.usage import dump_as_ascii_string


 class TestFetchQuantityUtilAgg(SparkContextTest):

@@ -45,12 +47,12 @@ class TestFetchQuantityUtilAgg(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:
             DummyAdapter.init()
-        DummyAdapter.adapter_impl.metric_list = []
+        DummyAdapter.adapter_impl.metric_list = []

     def get_pre_transform_specs_json(self):
         """get pre_transform_specs driver table info."""

@@ -194,7 +196,7 @@ class TestFetchQuantityUtilAgg(SparkContextTest):
             if value.get('metric').get(
                 'name') == 'cpu.utilized_logical_cores_agg'][0]

-        self.assertEqual(8.0,
+        self.assertEqual(7.134214285714285,
                          utilized_cpu_logical_agg_metric.get(
                              'metric').get('value'))
         self.assertEqual('useast',

@@ -283,7 +285,7 @@ class TestFetchQuantityUtilAgg(SparkContextTest):
             if value.get('metric').get(
                 'name') == 'cpu.utilized_logical_cores_agg'][0]

-        self.assertEqual(8.0,
+        self.assertEqual(7.134214285714285,
                          utilized_cpu_logical_agg_metric.get(
                              'metric').get('value'))
         self.assertEqual('useast',

@@ -301,7 +303,7 @@ class TestFetchQuantityUtilAgg(SparkContextTest):
                          utilized_cpu_logical_agg_metric.get(
                              'metric').get('dimensions')
                          .get('project_id'))
-        self.assertEqual('prehourly',
+        self.assertEqual('hourly',
                          utilized_cpu_logical_agg_metric.get(
                              'metric').get('dimensions')
                          .get('aggregation_period'))
@@ -23,10 +23,11 @@ from monasca_transform.driver.mon_metrics_kafka \
 from monasca_transform.transform import RddTransformContext
 from monasca_transform.transform import TransformContextUtils
 from tests.functional.messaging.adapter import DummyAdapter
-from tests.unit import DataProvider
-from tests.unit \
-    import MockComponentManager
-from tests.unit import SparkContextTest
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.cpu_kafka_data.data_provider \
+    import DataProvider
+from tests.functional.test_resources.mock_component_manager import \
+    MockComponentManager


 class SparkTest(SparkContextTest):

@@ -36,7 +37,7 @@ class SparkTest(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:
@@ -17,14 +17,17 @@ import mock
 from oslo_config import cfg
 from pyspark.sql import SQLContext
 from pyspark.streaming.kafka import OffsetRange
-from tests.unit.component.insert.dummy_insert import DummyInsert
-from tests.unit.messaging.adapter import DummyAdapter
-from tests.unit.spark_context_test import SparkContextTest
-from tests.unit.test_resources.cpu_kafka_data.data_provider import DataProvider
-from tests.unit.test_resources.cpu_kafka_data_second_stage.data_provider \
+
+from tests.functional.component.insert.dummy_insert import DummyInsert
+from tests.functional.messaging.adapter import DummyAdapter
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.cpu_kafka_data.data_provider \
+    import DataProvider
+from tests.functional.test_resources.cpu_kafka_data_second_stage.data_provider \
     import DataProvider as SecondStageDataProvider
-from tests.unit.test_resources.mock_component_manager \
+from tests.functional.test_resources.mock_component_manager \
     import MockComponentManager
+from tests.functional.usage import dump_as_ascii_string

 from monasca_transform.config.config_initializer import ConfigInitializer
 from monasca_transform.driver.mon_metrics_kafka \

@@ -32,7 +35,6 @@ from monasca_transform.driver.mon_metrics_kafka \
 from monasca_transform.processor.pre_hourly_processor import PreHourlyProcessor
 from monasca_transform.transform import RddTransformContext
 from monasca_transform.transform import TransformContextUtils
-from tests.functional.usage import dump_as_ascii_string


 class SparkTest(SparkContextTest):

@@ -42,7 +44,7 @@ class SparkTest(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:

@@ -150,7 +152,7 @@ class SparkTest(SparkContextTest):
             if value.get('metric').get('name') ==
                 'cpu.total_logical_cores_agg' and
                 value.get('metric').get('dimensions').get('host') ==
-                'mini-mon'][0]
+                'test-cp1-comp0333-mgmt'][0]

         self.assertEqual(9.0,
                          total_cpu_logical_agg_metric.get(

@@ -189,7 +191,7 @@ class SparkTest(SparkContextTest):
             if value.get('metric').get('name') ==
                 'cpu.total_logical_cores_agg' and
                 value.get('metric').get('dimensions').get('host') ==
-                'devstack'][0]
+                'test-cp1-comp0027-mgmt'][0]

         self.assertEqual(6.0,
                          total_cpu_logical_agg_metric.get(

@@ -230,7 +232,7 @@ class SparkTest(SparkContextTest):
                 value.get('metric').get('dimensions').get('host') ==
                 'all'][0]

-        self.assertEqual(8.0,
+        self.assertEqual(7.134214285714285,
                          utilized_cpu_logical_agg_metric.get(
                              'metric').get('value'))
         self.assertEqual('useast',

@@ -268,9 +270,9 @@ class SparkTest(SparkContextTest):
             if value.get('metric').get('name') ==
                 'cpu.utilized_logical_cores_agg' and
                 value.get('metric').get('dimensions').get('host') ==
-                'mini-mon'][0]
+                'test-cp1-comp0333-mgmt'][0]

-        self.assertEqual(5.0,
+        self.assertEqual(4.9665,
                          utilized_cpu_logical_agg_metric.get(
                              'metric').get('value'))
         self.assertEqual('useast',

@@ -308,9 +310,9 @@ class SparkTest(SparkContextTest):
             if value.get('metric').get('name') ==
                 'cpu.utilized_logical_cores_agg' and
                 value.get('metric').get('dimensions').get('host') ==
-                'devstack'][0]
+                'test-cp1-comp0027-mgmt'][0]

-        self.assertEqual(3.0,
+        self.assertEqual(2.1677142857142853,
                          utilized_cpu_logical_agg_metric.get(
                              'metric').get('value'))
         self.assertEqual('useast',
@@ -18,9 +18,10 @@ from monasca_transform.component.usage.fetch_quantity \
 from monasca_transform.transform.transform_utils import RecordStoreUtils
 from monasca_transform.transform.transform_utils import TransformSpecsUtils
 from monasca_transform.transform import TransformContextUtils

+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.mem_total_all.data_provider \
+    import DataProvider
-from tests.unit import SparkContextTest


 class UsageComponentTest(SparkContextTest):
@@ -24,12 +24,13 @@ from monasca_transform.driver.mon_metrics_kafka \
 from monasca_transform.transform import RddTransformContext
 from monasca_transform.transform import TransformContextUtils
 from tests.functional.messaging.adapter import DummyAdapter
-from tests.unit import DataProvider
-from tests.unit \
-    import MockComponentManager
-from tests.unit \
-    import MockDataDrivenSpecsRepo
-from tests.unit import SparkContextTest
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.kafka_data.data_provider \
+    import DataProvider
+from tests.functional.test_resources.mock_component_manager \
+    import MockComponentManager
+from tests.functional.test_resources.mock_data_driven_specs_repo \
+    import MockDataDrivenSpecsRepo


 class TestVmCpuAllocatedAgg(SparkContextTest):

@@ -39,7 +40,7 @@ class TestVmCpuAllocatedAgg(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:
@@ -25,17 +25,19 @@ from monasca_transform.driver.mon_metrics_kafka \
 from monasca_transform.processor.pre_hourly_processor import PreHourlyProcessor
 from monasca_transform.transform import RddTransformContext
 from monasca_transform.transform import TransformContextUtils

-from tests.unit import DataProvider
-from tests.unit import DummyAdapter
-from tests.unit import DummyInsert
-from tests.unit import dump_as_ascii_string
-from tests.unit import SparkContextTest
+from tests.functional.component.insert.dummy_insert import DummyInsert
+from tests.functional.messaging.adapter import DummyAdapter
+from tests.functional.spark_context_test import SparkContextTest
+from tests.functional.test_resources.kafka_data.data_provider \
+    import DataProvider
+from tests.functional.test_resources.kafka_data_second_stage.data_provider \
+    import DataProvider as SecondStageDataProvider
-from tests.unit \
+from tests.functional.test_resources.mock_component_manager \
     import MockComponentManager
-from tests.unit \
+from tests.functional.test_resources.mock_data_driven_specs_repo \
     import MockDataDrivenSpecsRepo
+from tests.functional.usage import dump_as_ascii_string


 class TestVmCpuAllocatedAgg(SparkContextTest):

@@ -45,7 +47,7 @@ class TestVmCpuAllocatedAgg(SparkContextTest):
         # configure the system with a dummy messaging adapter
         ConfigInitializer.basic_config(
             default_config_files=[
-                'tests/unit/test_resources/config/'
+                'tests/functional/test_resources/config/'
                 'test_config_with_dummy_messaging_adapter.conf'])
         # reset metric_id list dummy adapter
         if not DummyAdapter.adapter_impl:

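The hunks above switch the tests over to the functional copies of the dummy insert/messaging adapters. As a generic sketch of the test-double pattern those tests rely on (the class below is a hypothetical stand-in, not the project's real `tests.functional.messaging.adapter.DummyAdapter`): instead of publishing aggregated metrics to Kafka, the adapter records them in memory so assertions can inspect what was "sent".

```python
# Hypothetical stand-in for the project's DummyAdapter test double.
class DummyMessageAdapter(object):
    adapter_impl = None

    def __init__(self):
        self.metric_list = []

    @classmethod
    def init(cls):
        # mirror the guard seen in the diff: create the shared instance once
        if not cls.adapter_impl:
            cls.adapter_impl = cls()

    def do_send_metric(self, metric):
        # record instead of publishing to Kafka
        self.metric_list.append(metric)


DummyMessageAdapter.init()
DummyMessageAdapter.adapter_impl.do_send_metric(
    {"metric": {"name": "cpu.utilized_logical_cores_agg"}})
print(len(DummyMessageAdapter.adapter_impl.metric_list))  # prints 1
```

This is why the tests reset the adapter's metric list between runs: the captured state is class-level and would otherwise leak across test cases.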
@@ -0,0 +1,39 @@
+[DEFAULTS]
+
+[repositories]
+offsets = test_offsets_repo_class
+data_driven_specs = test_data_driven_specs_repo_class
+
+[database]
+server_type = test_server_type
+host = test_host_name
+database_name = test_database_name
+username = test_database_user_name
+password = test_database_password
+use_ssl = True
+ca_file = test_ca_file_path
+
+[stage_processors]
+enable_pre_hourly_processor = False
+
+[pre_hourly_processor]
+late_metric_slack_time = 600
+enable_instance_usage_df_cache = False
+instance_usage_df_cache_storage_level = MEMORY_ONLY_SER_2
+enable_batch_time_filtering = True
+data_provider=tests.unit.processor.test_is_time_to_run:TestProcessUtilDataProvider
+
+[service]
+enable_record_store_df_cache = False
+record_store_df_cache_storage_level = MEMORY_ONLY_SER_2
+enable_debug_log_entries = true
+# the location for the transform-service log
+service_log_path=/tmp/
+# the filename for the transform-service log
+service_log_filename=monasca-transform.log
+
+
+
+# Whether debug-level log entries should be included in the application
+# log. If this setting is false, info-level will be used for logging.
+enable_debug_log_entries = true

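The new test configuration above is a flat INI file. As a minimal sketch of how its sections map to key/value lookups (using the stdlib `configparser` purely for illustration; the service itself loads this file through its own oslo.config-based `ConfigInitializer`, as the `default_config_files` calls elsewhere in this diff show):

```python
import configparser

# An abridged copy of the dummy test configuration added above.
conf_text = """
[repositories]
offsets = test_offsets_repo_class
data_driven_specs = test_data_driven_specs_repo_class

[database]
server_type = test_server_type
host = test_host_name

[stage_processors]
enable_pre_hourly_processor = False
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

# values come back as strings unless a typed getter is used
print(parser.get("database", "host"))                    # test_host_name
print(parser.getboolean("stage_processors",
                        "enable_pre_hourly_processor"))  # False
```

Note that the `data_provider=tests.unit.processor...` value in the committed file still points at a `tests.unit` path; it is reproduced verbatim above, not corrected.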
@@ -12,8 +12,8 @@ Vagrant.configure(2) do |config|

   # Every Vagrant development environment requires a box. You can search for
   # boxes at https://atlas.hashicorp.com/search.
-  config.vm.box = "ubuntu/xenial64"
-  config.vm.hostname = "pg-tips"
+  config.vm.box = "bento/ubuntu-16.04"
+  config.vm.hostname = "devstack"

   # Disable automatic box update checking. If you disable this, then
   # boxes will only be checked for updates when the user runs

@@ -57,28 +57,32 @@ Vagrant.configure(2) do |config|
   config.vm.provider "virtualbox" do |vb|
     # # Display the VirtualBox GUI when booting the machine
     # vb.gui = true
-    vb.name = "pg-tips"
+    vb.name = "devstack"

     # # Customize the amount of memory on the VM:
     vb.memory = "16384"
     vb.cpus = "4"
+    # bento box bug
+    # https://github.com/chef/bento/issues/688
+    vb.customize ["modifyvm", :id, "--cableconnected1", "on"]
     # increase the root partition to 100G
     vb.customize [
-      "clonehd", "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-xenial-16.04-cloudimg.vmdk",
-      "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-xenial-16.04-cloudimg.vdi",
+      "clonehd", "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-16.04-amd64-disk001.vmdk",
+      "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-16.04-amd64-disk001.vdi",
       "--format", "VDI"
     ]
     vb.customize [
-      "modifyhd", "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-xenial-16.04-cloudimg.vdi",
+      "modifyhd", "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-16.04-amd64-disk001.vdi",
       "--resize", 100 * 1024
     ]
     vb.customize [
       "storageattach", :id,
-      "--storagectl", "SCSI Controller",
+      "--storagectl", "SATA Controller",
       "--port", "0",
       "--device", "0",
       "--type", "hdd",
       "--nonrotational", "on",
-      "--medium", "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-xenial-16.04-cloudimg.vdi"
+      "--medium", "#{ENV["HOME"]}/VirtualBox VMs/#{vb.name}/ubuntu-16.04-amd64-disk001.vdi"
     ]

   end

@@ -95,11 +99,11 @@ Vagrant.configure(2) do |config|
   # install dependencies for our process
   config.vm.provision "shell", path: "install.sh"
   # provision the environments
-  config.vm.provision "shell", path: "provision-pg-tips.sh", privileged: false
+  config.vm.provision "shell", path: "provision-devstack.sh", privileged: false

-  # pg-tips will now have devstack cloned so push our local.conf file into place
+  # devstack VM will now have devstack cloned so push our local.conf file into place
   config.vm.provision "file", source: "local.conf", \
-    destination: "/home/ubuntu/devstack/local.conf"
+    destination: "/home/vagrant/devstack/local.conf"

   if !ENV['LOCAL_REPO'] || ENV['LOCAL_REPO'].empty?
     puts "Using default repo"

@@ -49,7 +49,7 @@ disable_service monasca-thresh
 disable_service horizon
 #disable_service tempest
 #disable_service cinder
-enable_plugin monasca-transform /home/ubuntu/monasca-transform
+enable_plugin monasca-transform /home/vagrant/monasca-transform
 # the following must be disabled as the test does not work at this point
 # see Bug #1636508
 disable_service monasca-smoke-test

@@ -0,0 +1,18 @@
+#!/usr/bin/env bash
+echo Id - `id`
+
+echo Configuring git via https
+git config --global url."https://".insteadOf git://
+
+if [ -d devstack ]
+then
+    echo devstack directory already cloned
+else
+    git clone https://git.openstack.org/openstack-dev/devstack
+fi
+
+if [ -d monasca-transform ]
+then
+    echo removing monasca-transform
+    sudo rm -rf monasca-transform
+fi

@@ -1,14 +1,14 @@
 #!/usr/bin/env bash

-if grep -q pg-tips <<<`hostname`; then
+if grep -q devstack <<<`hostname`; then
     echo Refreshing monasca-transform
 else
-    echo Yikes, no - this is not pg-tips!
+    echo Yikes, no - this is not devstack!
     exit 1
 fi
-if [ -d "/home/ubuntu/devstack" ] ; then
+if [ -d "/home/vagrant/devstack" ] ; then

-    . /home/ubuntu/devstack/.stackenv
+    . /home/vagrant/devstack/.stackenv

 fi

@@ -23,21 +23,21 @@ else
     echo "monasca-transform service not running"
 fi

-sudo rm -rf /home/ubuntu/monasca-transform-source /home/ubuntu/monasca-transform
+sudo rm -rf /home/vagrant/monasca-transform-source /home/vagrant/monasca-transform

 sudo ./setup_local_repos.sh

 # update the database with configuration
-sudo cp /home/ubuntu/monasca-transform/scripts/ddl/pre_transform_specs.sql /opt/monasca/transform/lib/pre_transform_specs.sql
-sudo cp /home/ubuntu/monasca-transform/scripts/ddl/transform_specs.sql /opt/monasca/transform/lib/transform_specs.sql
+sudo cp /home/vagrant/monasca-transform/scripts/ddl/pre_transform_specs.sql /opt/monasca/transform/lib/pre_transform_specs.sql
+sudo cp /home/vagrant/monasca-transform/scripts/ddl/transform_specs.sql /opt/monasca/transform/lib/transform_specs.sql
 sudo mysql -h "127.0.0.1" -um-transform -ppassword < /opt/monasca/transform/lib/pre_transform_specs.sql
 sudo mysql -h "127.0.0.1" -um-transform -ppassword < /opt/monasca/transform/lib/transform_specs.sql

 # update the zip file used for spark submit
-sudo cp /home/ubuntu/monasca-transform/scripts/monasca-transform.zip /opt/monasca/transform/lib/.
+sudo cp /home/vagrant/monasca-transform/scripts/monasca-transform.zip /opt/monasca/transform/lib/.

 # update the configuration file
-sudo cp /home/ubuntu/monasca-transform/devstack/files/monasca-transform/monasca-transform.conf /etc/.
+sudo cp /home/vagrant/monasca-transform/devstack/files/monasca-transform/monasca-transform.conf /etc/.
 if [ -n "$SERVICE_HOST" ]; then
     sudo sudo sed -i "s/brokers=192\.168\.15\.6:9092/brokers=${SERVICE_HOST}:9092/g" /etc/monasca-transform.conf
 fi

@@ -48,8 +48,8 @@ sudo rm -rf /opt/monasca/transform/venv
 # refresh the monasca-transform code to /opt/stack
 sudo rm -rf /opt/stack/monasca-transform
 pushd /opt/stack
-sudo git clone /home/ubuntu/monasca-transform
-sudo chown -R ubuntu:ubuntu /opt/stack/monasca-transform
+sudo git clone /home/vagrant/monasca-transform
+sudo chown -R vagrant:vagrant /opt/stack/monasca-transform
 virtualenv /opt/monasca/transform/venv
 . /opt/monasca/transform/venv/bin/activate
 pip install -e /opt/stack/monasca-transform/

@@ -68,4 +68,4 @@ sudo sed -i "s/publish_kafka_project_id=d2cb21079930415a9f2a33588b9f2bb6/publish
 # the control for it in DevStack is via screen -x stack
 sudo service monasca-transform start

 popd
 popd

@@ -1,8 +1,8 @@
 #!/usr/bin/env bash

-rsync -a --exclude='tools/vagrant/.vagrant' /monasca-transform-source /home/ubuntu/
-mv /home/ubuntu/monasca-transform-source /home/ubuntu/monasca-transform
-pushd /home/ubuntu/monasca-transform
+rsync -a --exclude='tools/vagrant/.vagrant' /monasca-transform-source /home/vagrant/
+mv /home/vagrant/monasca-transform-source /home/vagrant/monasca-transform
+pushd /home/vagrant/monasca-transform
 # prepare the codebase
 #
 # generate the sql scripts to populate the database

@@ -20,11 +20,11 @@ if [ ${CURRENT_BRANCH} != 'master' ]
 then
     echo Maintaining current branch ${CURRENT_BRANCH}
     # set the branch to what we're using in local.conf
-    if [[ -z `grep ${CURRENT_BRANCH} /home/ubuntu/devstack/local.conf` ]]; then
-        sed -i "s/enable_plugin monasca-transform \/home\/ubuntu\/monasca-transform//g" /home/ubuntu/devstack/local.conf
-        sed -i "s/# END DEVSTACK LOCAL.CONF CONTENTS//g" /home/ubuntu/devstack/local.conf
-        printf "enable_plugin monasca-transform /home/ubuntu/monasca-transform ${CURRENT_BRANCH}\n" >> /home/ubuntu/devstack/local.conf
-        printf "# END DEVSTACK LOCAL.CONF CONTENTS" >> /home/ubuntu/devstack/local.conf
+    if [[ -z `grep ${CURRENT_BRANCH} /home/vagrant/devstack/local.conf` ]]; then
+        sed -i "s/enable_plugin monasca-transform \/home\/vagrant\/monasca-transform//g" /home/vagrant/devstack/local.conf
+        sed -i "s/# END DEVSTACK LOCAL.CONF CONTENTS//g" /home/vagrant/devstack/local.conf
+        printf "enable_plugin monasca-transform /home/vagrant/monasca-transform ${CURRENT_BRANCH}\n" >> /home/vagrant/devstack/local.conf
+        printf "# END DEVSTACK LOCAL.CONF CONTENTS" >> /home/vagrant/devstack/local.conf
     fi
 fi

|
@ -17,7 +17,7 @@ ACTUAL_PROXY=`echo ${http_proxy} | awk 'BEGIN { FS = "//"} ;{ print $2 }'`
|
|||
PROXY_HOST=`echo $ACTUAL_PROXY | awk 'BEGIN { FS = ":"} ;{ print $1 }'`
|
||||
PROXY_PORT=`echo $ACTUAL_PROXY | awk 'BEGIN { FS = ":"} ;{ gsub("[^0-9]", "", $2) ; print $2 }'`
|
||||
echo Assuming http proxy host = ${PROXY_HOST}, port = ${PROXY_PORT}
|
||||
sed -i "s/proxy_host/${PROXY_HOST}/g" /home/ubuntu/settings.xml
|
||||
sed -i "s/proxy_port/${PROXY_PORT}/g" /home/ubuntu/settings.xml
|
||||
cp /home/ubuntu/settings.xml /root/.m2/.
|
||||
sed -i "s/proxy_host/${PROXY_HOST}/g" /home/vagrant/settings.xml
|
||||
sed -i "s/proxy_port/${PROXY_PORT}/g" /home/vagrant/settings.xml
|
||||
cp /home/vagrant/settings.xml /root/.m2/.
|
||||
chown root:root /root/.m2/settings.xml
|
||||
|
|
53  tox.ini
@@ -9,9 +9,7 @@ install_command = pip install -U {opts} {packages}
 setenv =
     PYTHONUNBUFFERED=1
     VIRTUAL_ENV={envdir}
+    DISCOVER_DIRECTORY=tests
-    PYSPARK_HOME=/home/ubuntu/pyspark_venv/bin/python
-    SPARK_HOME=/opt/spark/current
-    OS_TEST_PATH=tests/unit
 deps = -r{toxinidir}/requirements.txt
        -r{toxinidir}/test-requirements.txt
        psutil==3.0.1

@@ -20,6 +18,55 @@ whitelist_externals = bash
 commands =
     find . -type f -name "*.pyc" -delete


+[testenv:py27]
+basepython = python2.7
+setenv = {[testenv]setenv}
+         OS_TEST_PATH=tests/unit
+whitelist_externals =
+    {[testenv]whitelist_externals}
+deps =
+    {[testenv]deps}
+commands =
+    ostestr {posargs}
+
+[testenv:py35]
+basepython = python3.5
+setenv = {[testenv]setenv}
+         OS_TEST_PATH=tests/unit
+whitelist_externals =
+    {[testenv]whitelist_externals}
+deps =
+    {[testenv]deps}
+commands =
+    ostestr {posargs}
+
+[testenv:functional]
+basepython = python2.7
+install_command = {[testenv]install_command}
+setenv = {[testenv]setenv}
+         SPARK_HOME=/opt/spark/current
+         OS_TEST_PATH=tests/functional
+whitelist_externals =
+    {[testenv]whitelist_externals}
+deps =
+    {[testenv]deps}
+commands =
+    ostestr --serial {posargs}
+
+[testenv:functional-py35]
+basepython = python3.5
+install_command = {[testenv]install_command}
+setenv = {[testenv]setenv}
+         SPARK_HOME=/opt/spark/current
+         OS_TEST_PATH=tests/functional
+whitelist_externals =
+    {[testenv]whitelist_externals}
+deps =
+    {[testenv]deps}
+commands =
+    ostestr --serial {posargs}
+
 [testenv:pep8]
 commands = flake8

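The new tox environments differ mainly in where `OS_TEST_PATH` points: `py27`/`py35` discover `tests/unit`, while `functional`/`functional-py35` discover `tests/functional` (and run serially, since the functional tests share a Spark context). As a rough, self-contained sketch of what that start-path setting controls, using plain `unittest` discovery rather than `ostestr` (purely for illustration):

```python
import os
import tempfile
import textwrap
import unittest

# Build a throwaway tree with one test in tests/unit and one in
# tests/functional, mirroring the repository layout after this change.
root = tempfile.mkdtemp()
for tree in ("tests/unit", "tests/functional"):
    d = os.path.join(root, tree)
    os.makedirs(d)
    open(os.path.join(root, "tests", "__init__.py"), "a").close()
    open(os.path.join(d, "__init__.py"), "a").close()
    name = "test_" + tree.split("/")[-1] + "_example.py"
    with open(os.path.join(d, name), "w") as f:
        f.write(textwrap.dedent("""\
            import unittest

            class Example(unittest.TestCase):
                def test_pass(self):
                    self.assertTrue(True)
            """))

# OS_TEST_PATH=tests/unit: only the unit tree is discovered
suite = unittest.defaultTestLoader.discover(
    os.path.join(root, "tests", "unit"), top_level_dir=root)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 1: the functional example is not picked up
```

Pointing the start directory at `tests/functional` instead would pick up the other tree, which is exactly how `tox -e py27` and `tox -e functional` end up running disjoint test sets from one codebase.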