remove hdp from elements

Change-Id: I7dbd6b003a7f333673004bb29141b06ace604191
blueprint: remove-hdp206
This commit is contained in:
Vitaly Gridnev 2016-06-06 21:26:53 +03:00
parent 3e1078e99e
commit 497e4bb6de
15 changed files with 46 additions and 512 deletions

View File

@ -1,13 +1,22 @@
Diskimage-builder script for creation cloud images
==================================================
This script builds Ubuntu, Fedora, CentOS cloud images for use in Sahara. By default all plugins are targeted and all images will be built. The '-p' option can be used to select a plugin (vanilla, spark, hdp, cloudera or plain). The '-i' option can be used to select the image type (ubuntu, fedora or centos). The '-v' option can be used to select the hadoop version (1, 2, etc.).
This script builds Ubuntu, Fedora, CentOS cloud images for use in Sahara.
By default all plugins are targeted and all images will be built. The '-p' option can be used to
select a plugin (vanilla, spark, cloudera, plain, mapr, ambari). The '-i' option can be used to
select the image type (ubuntu, fedora or centos). The '-v' option can be used to select the hadoop version.
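For example, to build a single image you might combine these options like this (an illustrative
invocation; every option value here appears in the lists above):

.. sourcecode:: bash

tox -e venv -- sahara-image-create -p vanilla -i ubuntu -v 2.7.1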
NOTE: You should use an Ubuntu or Fedora host OS for building images; CentOS as a host OS has not been well tested.
NOTE: You should use an Ubuntu or Fedora host OS for building images; CentOS as a host OS has
not been well tested.
For users:
1. Use your environment (export / setenv) to alter the script's behavior. Environment variables the script accepts are 'DIB_HADOOP_VERSION_1' and 'DIB_HADOOP_VERSION_2', 'JAVA_DOWNLOAD_URL', 'JAVA_TARGET_LOCATION', 'OOZIE_DOWNLOAD_URL', 'HIVE_VERSION', 'ubuntu_[vanilla|spark|cloudera|plain]_[hadoop_1|hadoop_2]_image_name', 'fedora_[vanilla|plain]_hadoop_[1|2]_image_name', 'centos_[vanilla|hdp|cloudera|plain]_[hadoop_1|hadoop_2]_image_name'.
1. Use your environment (export / setenv) to alter the script's behavior.
Environment variables the script accepts are 'DIB_HADOOP_VERSION_2', 'JAVA_DOWNLOAD_URL',
'JAVA_TARGET_LOCATION', 'OOZIE_DOWNLOAD_URL', 'HIVE_VERSION',
'ubuntu_[vanilla|spark|cloudera|plain]_[hadoop_1|hadoop_2]_image_name',
'fedora_[vanilla|plain]_hadoop_[1|2]_image_name',
'centos_[vanilla|cloudera|plain]_[hadoop_2]_image_name'.
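For instance, to override the default image name for an Ubuntu vanilla image (a minimal sketch;
the variable name is expanded from the pattern above):

.. sourcecode:: bash

export ubuntu_vanilla_hadoop_2_image_name="my-ubuntu-vanilla"
tox -e venv -- sahara-image-create -p vanilla -i ubuntu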
2. For creating all images just clone this repository and run the script.
@ -15,25 +24,28 @@ For users:
tox -e venv -- sahara-image-create
3. If you want to use your local mirrors, you should specify HTTP URLs for the Fedora, CentOS and Ubuntu mirrors using the parameters 'FEDORA_MIRROR', 'CENTOS_MIRROR' and 'UBUNTU_MIRROR' like this:
3. If you want to use your local mirrors, you should specify HTTP URLs for the Fedora, CentOS and
Ubuntu mirrors using the parameters 'FEDORA_MIRROR', 'CENTOS_MIRROR' and 'UBUNTU_MIRROR' like this:
.. sourcecode:: bash
USE_MIRRORS=true FEDORA_MIRROR="url_for_fedora_mirror" CENTOS_MIRROR="url_for_centos_mirror" UBUNTU_MIRROR="url_for_ubuntu_mirror" tox -e venv -- sahara-image-create
USE_MIRRORS=true FEDORA_MIRROR="url_for_fedora_mirror" CENTOS_MIRROR="url_for_centos_mirror" \
UBUNTU_MIRROR="url_for_ubuntu_mirror" tox -e venv -- sahara-image-create
NOTE: Do not create all images for all plugins with the same mirrors. Different plugins use different OS versions.
NOTE: Do not create all images for all plugins with the same mirrors.
Different plugins use different OS versions.
4. To select which plugin to target, use the '-p' commandline option like this:
.. sourcecode:: bash
tox -e venv -- sahara-image-create -p [vanilla|spark|hdp|cloudera|storm|mapr|ambari|plain]
tox -e venv -- sahara-image-create -p [vanilla|spark|cloudera|storm|mapr|ambari|plain]
5. To select which hadoop version to target, use the '-v' commandline option like this:
.. sourcecode:: bash
tox -e venv -- sahara-image-create -v [1|2|plain]
tox -e venv -- sahara-image-create -v [2.7.1|plain]
6. To select which operating system to target, use the '-i' commandline option like this:
@ -47,18 +59,28 @@ NOTE: Do not create all images for all plugins with the same mirrors. Different
tox -e venv -- sahara-image-create -p spark -s [1.3.1|1.6.0]
8. If the host system is missing packages required for diskimage-create.sh, the '-u' commandline option will instruct the script to install them without prompting.
8. If the host system is missing packages required for diskimage-create.sh, the '-u'
commandline option will instruct the script to install them without prompting.
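For example (following the same invocation pattern as the items above):

.. sourcecode:: bash

tox -e venv -- sahara-image-create -u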
NOTE for 4, 5, 6:
For Vanilla you can create ubuntu, fedora and centos cloud images with hadoop 1.x.x and 2.x.x versions. Use the environment variables 'DIB_HADOOP_VERSION_1' and 'DIB_HADOOP_VERSION_2' to change the defaults.
For Spark you can create only ubuntu images, so you shouldn't specify an image type. The default Spark and HDFS versions included in the build are tested and known to work together with the Sahara Spark plugin; other combinations should be used only for evaluation or testing purposes. You can select a different Spark version with the commandline option '-s' and the Hadoop HDFS version with '-v', but only Cloudera CDH versions are available for now.
For Cloudera you can create ubuntu and centos images with preinstalled cloudera hadoop. You shouldn't specify a hadoop version.
You can create centos, ubuntu, fedora images without hadoop (a 'plain' image).
For Vanilla you can create ubuntu, fedora and centos cloud images with 2.x.x versions.
Use the environment variable 'DIB_HADOOP_VERSION_2' to change the default.
For Spark you can create only ubuntu images, so you shouldn't specify an image type.
The default Spark and HDFS versions included in the build are tested and known to work together
with the Sahara Spark plugin; other combinations should be used only for evaluation or testing
purposes. You can select a different Spark version with the commandline option '-s' and the
Hadoop HDFS version with '-v', but only Cloudera CDH versions are available for now.
For Cloudera you can create ubuntu and centos images with preinstalled cloudera hadoop.
You shouldn't specify a hadoop version. You can create centos, ubuntu, fedora images without
hadoop (a 'plain' image).
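For example, a plain fedora image without hadoop could be built like this (illustrative; both
options are described above):

.. sourcecode:: bash

tox -e venv -- sahara-image-create -p plain -i fedora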
NOTE for CentOS images (for vanilla, hdp and cloudera plugins):
NOTE for CentOS images (for vanilla, ambari and cloudera plugins):
Resizing disk space during firstboot on those images fails with errors (https://bugs.launchpad.net/sahara/+bug/1304100), so you will get an instance with little available disk space. To solve this problem we build images with 10G of available disk space by default. If you need more available disk space you should export the parameter DIB_IMAGE_SIZE:
Resizing disk space during firstboot on those images fails with errors
(https://bugs.launchpad.net/sahara/+bug/1304100), so you will get an instance with little
available disk space. To solve this problem we build images with 10G of available disk space
by default. If you need more available disk space you should export the parameter DIB_IMAGE_SIZE:
.. sourcecode:: bash
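# illustrative value only: request 40G of available disk space instead of the 10G default
export DIB_IMAGE_SIZE=40
tox -e venv -- sahara-image-create -p vanilla -i centos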
@ -66,11 +88,15 @@ Resizing disk space during firstboot on that images fails with errors (https://b
For all other images the parameter DIB_IMAGE_SIZE will be unset.
`DIB_CLOUD_INIT_DATASOURCES` contains a growing collection of data source modules and most are enabled by default. This causes cloud-init to query each data source
`DIB_CLOUD_INIT_DATASOURCES` contains a growing collection of data source modules and most
are enabled by default. This causes cloud-init to query each data source
on first boot. This can cause delays or even boot problems depending on your environment.
You must define `DIB_CLOUD_INIT_DATASOURCES` as a comma-separated list of valid data sources to limit the data sources that will be queried for metadata on first boot.
You must define `DIB_CLOUD_INIT_DATASOURCES` as a comma-separated list of valid data sources to
limit the data sources that will be queried for metadata on first boot.
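For example (an illustrative value; Ec2, ConfigDrive and OpenStack are standard cloud-init data
source names, not values mandated by this script):

.. sourcecode:: bash

DIB_CLOUD_INIT_DATASOURCES="Ec2, ConfigDrive, OpenStack" tox -e venv -- sahara-image-create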
For developers:
If you want to add your element to this repository, you should edit this script in your commit (you should export variables for your element and add the name of the element to the variable 'element_sequence').
If you want to add your element to this repository, you should edit this script in your commit
(you should export variables for your element and add the name of the element
to the variable 'element_sequence').
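A hypothetical sketch of that pattern (the variable and element names below are placeholders,
not part of the script):

.. sourcecode:: bash

# export settings that your element consumes at build time
export MY_ELEMENT_SETTING=${MY_ELEMENT_SETTING:-"default-value"}
# append the element's name to the relevant elements sequence
ubuntu_elements_sequence="$ubuntu_elements_sequence my-element"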

View File

@ -27,9 +27,9 @@ TRACING=
usage() {
echo
echo "Usage: $(basename $0)"
echo " [-p vanilla|spark|hdp|cloudera|storm|mapr|ambari|plain]"
echo " [-p vanilla|spark|cloudera|storm|mapr|ambari|plain]"
echo " [-i ubuntu|fedora|centos|centos7]"
echo " [-v 2|2.6|2.7.1|4|5.0|5.3|5.4|5.5]"
echo " [-v 2.6|2.7.1|4|5.0|5.3|5.4|5.5]"
echo " [-r 5.0.0|5.1.0]"
echo " [-s 1.3.1|1.6.0]"
echo " [-d]"
@ -50,7 +50,6 @@ usage() {
echo " '-h' display this message"
echo
echo "You shouldn't specify image type for spark plugin"
echo "You shouldn't specify image type for hdp plugin"
echo "You shouldn't specify hadoop version for plain images"
echo "Debug mode should only be enabled for local debugging purposes, not for production systems"
echo "By default all images for all plugins will be created"
@ -222,23 +221,6 @@ case "$PLUGIN" in
exit 1
fi
;;
"hdp")
case "$BASE_IMAGE_OS" in
"" | "centos");;
*)
echo -e "'$BASE_IMAGE_OS' image type is not supported by 'hdp'.\nAborting"
exit 1
;;
esac
case "$HADOOP_VERSION" in
"" | "1" | "2");;
*)
echo -e "Unknown hadoop version selected.\nAborting"
exit 1
;;
esac
;;
"ambari")
case "$BASE_IMAGE_OS" in
"" | "centos" | "centos7" | "ubuntu" )
@ -593,40 +575,6 @@ if [ -z "$PLUGIN" -o "$PLUGIN" = "storm" ]; then
image_create ubuntu $ubuntu_image_name $ubuntu_elements_sequence
unset DIB_CLOUD_INIT_DATASOURCES
fi
#########################
# Images for HDP plugin #
#########################
if [ -z "$PLUGIN" -o "$PLUGIN" = "hdp" ]; then
echo "For hdp plugin option -i is ignored"
# Generate HDP images
# Parameter 'DIB_IMAGE_SIZE' should be specified for CentOS only
export DIB_IMAGE_SIZE=${IMAGE_SIZE:-"10"}
# Ignoring image type option
if [ -z "$HADOOP_VERSION" -o "$HADOOP_VERSION" = "1" ]; then
export centos_image_name_hdp_1_3=${centos_hdp_hadoop_1_image_name:-"centos-6_6-64-hdp-1-3"}
# Elements to include in an HDP-based image
centos_elements_sequence="hadoop-hdp yum $JAVA_ELEMENT"
# generate image with HDP 1.3
export DIB_HDP_VERSION="1.3"
image_create centos $centos_image_name_hdp_1_3 $centos_elements_sequence
fi
if [ -z "$HADOOP_VERSION" -o "$HADOOP_VERSION" = "2" ]; then
export centos_image_name_hdp_2_0=${centos_hdp_hadoop_2_image_name:-"centos-6_6-64-hdp-2-0"}
# Elements to include in an HDP-based image
centos_elements_sequence="hadoop-hdp yum $JAVA_ELEMENT"
# generate image with HDP 2.0
export DIB_HDP_VERSION="2.0"
image_create centos $centos_image_name_hdp_2_0 $centos_elements_sequence
fi
unset DIB_IMAGE_SIZE
fi
############################
# Images for Ambari plugin #

View File

@ -1,37 +0,0 @@
==========
hadoop-hdp
==========
Installs the JDK, the Hortonworks Data Platform, and Apache Ambari.
Currently, the following versions of the Hortonworks Data Platform are
supported for image building:
- 1.3
- 2.0
The following script:
.. code:: bash
diskimage-create/diskimage-create.sh
is the default script to use for creating CentOS images with HDP
installed/configured. This script can be used without modification, or can
be used as an example to describe how a more customized script may be created
with the ``hadoop-hdp`` element.
In order to create the HDP images with ``diskimage-create.sh``, use the
following syntax to select the ``hdp`` plugin:
.. code:: bash
diskimage-create.sh -p hdp
Environment Variables
---------------------
DIB_HDP_VERSION
:Required: Yes
:Description: Version of the Hortonworks Data Platform to install.
:Example: ``DIB_HDP_VERSION=2.0``

View File

@ -1,6 +0,0 @@
disable-firewall
disable-selinux
java
package-installs
sahara-version
source-repositories

View File

@ -1,163 +0,0 @@
#!/bin/bash
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
# Element install script for HDP.
#
# Please set the DIB_HDP_VERSION environment variable
# to configure the install to use a given version.
# Currently, only 1.3 and 2.0 versions are supported for
# HDP.
##########################################################
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -ue
set -o pipefail
function install_ganglia {
# Install ganglia
local gv="3.5.0-99"
install-packages libganglia-$gv ganglia-gmond-$gv ganglia-gmond-modules-python-$gv ganglia-devel-$gv ganglia-gmetad-$gv ganglia-web-3.5.7-99
}
function install_mysql {
# install mysql
install-packages mysql mysql-server mysql-connector-java
}
function install_nagios {
install-packages nagios hdp_mon_nagios_addons
}
function install_fping {
install-packages fping
}
function install_perl_libraries {
install-packages perl-Digest-SHA1 perl-Digest-HMAC perl-Crypt-DES perl-Net-SNMP
}
function install_ambari {
install-packages ambari-server ambari-agent ambari-log4j
}
function install_nss {
install-packages nss-softokn-freebl
}
function installHDP_1_3 {
# ====== INSTALL Ambari =======
cd /tmp
wget -nv http://s3.amazonaws.com/public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.3.38/ambari.repo -O /etc/yum.repos.d/ambari.repo
install_ambari
# ====== INSTALL HDP =======
wget -nv http://s3.amazonaws.com/public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.2.0/hdp.repo -O /etc/yum.repos.d/hdp.repo
install_mysql
install_perl_libraries
install_nss
install-packages net-snmp net-snmp-utils
install-packages hadoop hadoop-libhdfs hadoop-native hadoop-pipes hadoop-sbin hadoop-lzo lzo lzo-devel hadoop-lzo-native
install-packages snappy snappy-devel
install-packages oozie zookeeper hbase webhcat-tar-hive sqoop oozie-client extjs-2.2-1 hive hcatalog pig webhcat-tar-pig
install-packages python-rrdtool rrdtool-devel rrdtool
install_ganglia
install_nagios
install_fping
}
function installHDP_2_0 {
install-packages net-snmp net-snmp-utils
# ====== INSTALL Ambari =======
cd /tmp
wget -nv http://s3.amazonaws.com/public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.3.38/ambari.repo -O /etc/yum.repos.d/ambari.repo
install_ambari
# ====== INSTALL HDP =======
wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.0/hdp.repo -O /etc/yum.repos.d/hdp.repo
install_perl_libraries
install-packages python-rrdtool
install-packages extjs
install_nss
install_mysql
# install Hadoop packages
install-packages hadoop hadoop-libhdfs hadoop-hdfs hadoop-lzo hadoop-lzo-native hadoop-mapreduce hadoop-mapreduce-historyserver hadoop-client oozie oozie-client zookeeper hbase pig webhcat-tar-hive webhcat-tar-pig hive hcatalog
# Install Yarn
install-packages hadoop-yarn hadoop-yarn-nodemanager hadoop-yarn-proxyserver hadoop-yarn-resourcemanager
# Install sqoop
install-packages sqoop
# Install openssl
install-packages openssl
install_ganglia
install_nagios
install_fping
# install compression libraries
install-packages snappy snappy-devel lzo lzo-devel
}
# Start of Main HDP Install Element
# Call version-specific script to install the desired version of HDP
if [[ $DIB_HDP_VERSION == "1.3" ]]; then
echo "Installing HDP Version $DIB_HDP_VERSION..."
installHDP_1_3
else
if [[ $DIB_HDP_VERSION == "2.0" ]]; then
echo "Installing HDP Version $DIB_HDP_VERSION..."
installHDP_2_0
else
echo "Invalid HDP Version specified, exiting install."
exit 1
fi
fi

View File

@ -1,5 +0,0 @@
wget:
ntp:
bind-utils:
# install cloud-init which is necessary for all images
cloud-init:

View File

@ -1,35 +0,0 @@
#!/bin/bash
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
# Element install script for turning off yum repositories
# after the HDP install is completed.
##########################################################
#
# This script just changes the enabled flag to 0 to prevent yum install
# from going out over the network. It allows Sahara to provision VMs
# in disconnected mode.
#
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
find /etc/yum.repos.d -name "*.repo" -type f | xargs sed "s/enabled=1/enabled=0/" -i

View File

@ -1,32 +0,0 @@
#!/bin/bash
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
# Element install script for turning off gmetad (Ganglia daemon)
# once the HDP install has completed.
#
##########################################################
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Turn off gmetad
chkconfig gmetad off

View File

@ -1,31 +0,0 @@
#!/bin/bash
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
# Element install script for turning on ntpd
# once the HDP install has completed.
#
##########################################################
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Turn on ntp service
chkconfig ntpd on

View File

@ -1,34 +0,0 @@
#!/bin/bash
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
# Element install script for turning off the ambari-server
# and ambari-agent services, once the HDP install has completed.
#
##########################################################
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Turn off ambari-server service for the first boot of this image
chkconfig ambari-server off
# Turn off ambari-agent service for the first boot of this image
chkconfig ambari-agent off

View File

@ -1,41 +0,0 @@
#!/bin/bash
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
# Element install script for turning off the Hadoop 2
# and yarn services, once the HDP install has completed.
#
##########################################################
# Turn off these hadoop services at first boot, since
# Ambari will configure the environment before the Hadoop
# cluster is started.
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# This is only necessary for an HDP 2.x install
if [[ $DIB_HDP_VERSION == "2.0" ]]; then
chkconfig hadoop-mapreduce-historyserver off
chkconfig hadoop-yarn-nodemanager off
chkconfig hadoop-yarn-proxyserver off
chkconfig hadoop-yarn-resourcemanager off
fi

View File

@ -1,52 +0,0 @@
#!/bin/bash
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
# Element install script for turning on a local yum
# repository for certain HDP elements that should be available on an
# image that may not have web access.
#
##########################################################
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# create a new local yum repository definition that
# includes:
# 1. HDP-Utils
# 2. Ambari Updates (for Ambari updates to Nagios)
cat >> /etc/yum.repos.d/hdp-local-repos.repo <<EOF
[HDP-UTILS-1.1.0.16-LOCAL]
name=Hortonworks Data Platform Version - HDP-UTILS-1.1.0.16 - LOCAL
baseurl=file:///opt/hdp-local-repos/hdputils/repos/centos6
gpgcheck=0
enabled=1
priority=1
[Updates-ambari-1.2.5.17-LOCAL]
name=ambari-1.2.5.17 - Updates LOCAL
baseurl=file:///opt/hdp-local-repos/ambari/centos6/1.x/updates/1.2.5.17
gpgcheck=0
enabled=1
priority=1
EOF

View File

@ -1 +0,0 @@
ambari-updates tar /opt/hdp-local-repos/ambari http://s3.amazonaws.com/public-repo-1.hortonworks.com/ambari/centos6/ambari-1.2.5.17-centos6.tar.gz

View File

@ -1,2 +0,0 @@
hadoopswift file /opt/hdp-local-repos/hadoop-swift/hadoop-swift-1.0-1.x86_64.rpm https://s3.amazonaws.com/public-repo-1.hortonworks.com/sahara/swift/hadoop-swift-1.0-1.x86_64.rpm

View File

@ -1 +0,0 @@
hdputils tar /opt/hdp-local-repos/hdputils http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/centos6/HDP-UTILS-1.1.0.16-centos6.tar.gz