Move elements to new repo savanna-image-elements
Change-Id: I25edc62207eafe69281f70432a0e7d40beb4e121
parent 2f637e1755
commit f29a070505

README.rst (13 lines changed)
@@ -5,14 +5,5 @@ Savanna-extra is place for Savanna components not included into the main `Savann
 
 Here is the list of components:
 
-* `Diskimage-builder <https://github.com/stackforge/diskimage-builder>`_ elements: https://github.com/stackforge/savanna-extra/blob/master/elements/README.rst
-
-* Script for creating Fedora and Ubuntu cloud images with our elements and default parameters. You should only need to run this command:
-
-.. sourcecode:: bash
-
-    sudo bash diskimage-create.sh
-
-Note: More information about script `diskimage-create <https://github.com/stackforge/savanna-extra/blob/master/diskimage-create/README.rst>`_
-
+* `Diskimage-builder <https://github.com/stackforge/diskimage-builder>`_ elements moved to the new repo: https://github.com/stackforge/savanna-image-elements
+
 * Sources for Swift filesystem implementation for Hadoop: https://github.com/stackforge/savanna-extra/blob/master/hadoop-swiftfs/README.rst
@@ -1 +0,0 @@

*~
@@ -1,68 +0,0 @@

Diskimage-builder tools for creating cloud images
=================================================

Steps to create a cloud image with Apache Hadoop installed using the diskimage-builder project:

1. Clone the repository "https://github.com/stackforge/diskimage-builder" locally.

.. sourcecode:: bash

    git clone https://github.com/stackforge/diskimage-builder

2. Add the ~/diskimage-builder/bin/ directory to your path (for example, PATH=$PATH:/home/$USER/diskimage-builder/bin/ ).

3. Export the variable ELEMENTS_PATH=/home/$USER/diskimage-builder/elements/ in your .bashrc, then source it.

4. Copy the file "img-build-sudoers" from ~/diskimage-builder/sudoers.d/ to /etc/sudoers.d/ and set its ownership and permissions.

.. sourcecode:: bash

    chmod 440 /etc/sudoers.d/img-build-sudoers
    chown root:root /etc/sudoers.d/img-build-sudoers

5. Export the savanna-elements commit id variable (from the savanna-extra directory):

.. sourcecode:: bash

    export SAVANNA_ELEMENTS_COMMIT_ID=`git show --format=%H | head -1`

6. Move the elements/ directory to diskimage-builder/elements/.

.. sourcecode:: bash

    mv elements/* /path_to_disk_image_builder/diskimage-builder/elements/

7. Export the DIB commit id variable (from the DIB directory):

.. sourcecode:: bash

    export DIB_COMMIT_ID=`git show --format=%H | head -1`

8. Run one of the following commands to create a cloud image that is able to run on OpenStack:

8.1. Ubuntu cloud image

.. sourcecode:: bash

    JAVA_FILE=jdk-7u21-linux-x64.tar.gz DIB_HADOOP_VERSION=1.1.2 OOZIE_FILE=oozie-3.3.2.tar.gz disk-image-create base vm hadoop oozie ubuntu root-passwd -o ubuntu_hadoop_1_1_2

8.2. Fedora cloud image

.. sourcecode:: bash

    JAVA_FILE=jdk-7u21-linux-x64.tar.gz DIB_HADOOP_VERSION=1.1.2 OOZIE_FILE=oozie-3.3.2.tar.gz DIB_IMAGE_SIZE=10 disk-image-create base vm fedora hadoop root-passwd oozie -o fedora_hadoop_1_1_2

Note: If you are building this image on an Ubuntu or Fedora 18 host OS, you should add the element 'selinux-permissive'.

.. sourcecode:: bash

    JAVA_FILE=jdk-7u21-linux-x64.tar.gz DIB_HADOOP_VERSION=1.1.2 OOZIE_FILE=oozie-3.3.2.tar.gz DIB_IMAGE_SIZE=10 disk-image-create base vm fedora hadoop root-passwd oozie selinux-permissive -o fedora_hadoop_1_1_2

In these commands, the 'DIB_HADOOP_VERSION' parameter is the version of Hadoop to be installed.
You can use the 'JAVA_DOWNLOAD_URL' parameter to specify a download link for the JDK (tarball or bin).
'DIB_IMAGE_SIZE' specifies the size of the instance's hard disk. You need to set it because Fedora doesn't use all of the available disk space by default.
If you have already downloaded the JDK package, move it to "elements/hadoop/install.d/" and pass its filename as the 'JAVA_FILE' parameter.
For the EDP components to work with Savanna DIB images, you need pre-installed Oozie libs.
Use 'OOZIE_DOWNLOAD_URL' to specify a link to an Oozie archive (tar.gz). For example, we have built Oozie libs here:
http://a8e0dce84b3f00ed7910-a5806ff0396addabb148d230fde09b7b.r31.cf1.rackcdn.com/oozie-3.3.2.tar.gz
If you have already downloaded the archive, move it to "elements/oozie/install.d/" and pass its filename as the 'OOZIE_FILE' parameter.
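As a sanity check on the parameters used in step 8, note that 'JAVA_FILE' and 'OOZIE_FILE' are just the base names of the corresponding archives placed under the elements' install.d directories. A minimal sketch (the example.com URLs are placeholders, not real download locations):

```shell
# Illustration only: JAVA_FILE/OOZIE_FILE are plain archive file names,
# i.e. what basename(1) yields for a download URL.
JAVA_DOWNLOAD_URL="http://example.com/jdk-7u21-linux-x64.tar.gz"
OOZIE_DOWNLOAD_URL="http://example.com/oozie-3.3.2.tar.gz"
JAVA_FILE=$(basename "$JAVA_DOWNLOAD_URL")
OOZIE_FILE=$(basename "$OOZIE_DOWNLOAD_URL")
echo "JAVA_FILE=$JAVA_FILE OOZIE_FILE=$OOZIE_FILE"
```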
@@ -1 +0,0 @@

Installs Java and Hadoop, configures SSH
@@ -1 +0,0 @@

savanna-version
@@ -1,34 +0,0 @@

#!/bin/bash

distro=$(lsb_release -is || :)
case "$distro" in
    Ubuntu )
        mkdir /run/hadoop
        chown hadoop:hadoop /run/hadoop/
        mkdir -p /home/ubuntu/.ssh
        touch /home/ubuntu/.ssh/authorized_keys
        chown -R ubuntu:ubuntu /home/ubuntu
    ;;
    Fedora )
        sleep 20
        rm /etc/resolv.conf
        service network restart
        if [ $(lsb_release -rs) = '19' ]; then
            chown -R fedora:fedora /etc/hadoop
            chown -R fedora:fedora /home/fedora
        else
            chown -R ec2-user:ec2-user /home/ec2-user
            chown -R ec2-user:ec2-user /etc/hadoop
        fi
        # TODO: configure iptables (https://bugs.launchpad.net/savanna/+bug/1195744)
        iptables -F
    ;;
    * )
        echo "Unknown distro: $distro. Exiting."
        exit 1
    ;;
esac

# Common
chown root:root /mnt
mkdir -p /var/run/hadoop ; chown hadoop:hadoop /var/run/hadoop
mkdir -p /mnt/log/hadoop ; chown hadoop:hadoop /mnt/log/hadoop
@@ -1,34 +0,0 @@

#!/bin/bash
echo "Java setup begins"
set -e

# NOTE: $(dirname $0) is read-only, use space under $TARGET_ROOT
JAVA_HOME=$TARGET_ROOT/usr/java
mkdir -p $JAVA_HOME

if [ -n "$JAVA_DOWNLOAD_URL" ]; then
    install-packages wget
    # Test wget directly: with "set -e" active, a separate "$?" check
    # would never run on failure.
    if wget --no-cookies --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com" -P $JAVA_HOME $JAVA_DOWNLOAD_URL; then
        echo "Java downloaded"
    else
        echo "Error downloading java. Exiting."
        exit 1
    fi
    JAVA_FILE=$(basename $JAVA_DOWNLOAD_URL)
elif [ -n "$JAVA_FILE" ]; then
    install -D -g root -o root -m 0755 $(dirname $0)/$JAVA_FILE $JAVA_HOME
fi

cd $JAVA_HOME
if echo $JAVA_FILE | grep -q -s -F .tar.gz ; then
    echo -e "\n" | tar -zxvf $JAVA_FILE
elif echo $JAVA_FILE | grep -q -s -F .bin ; then
    echo -e "\n" | sh $JAVA_FILE
else
    echo "Unknown file type: $JAVA_FILE. Exiting."
    exit 1
fi
rm $JAVA_FILE

echo "Java was installed"
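The dispatch above keys off the archive's file extension. A standalone sketch of that logic, runnable outside the image build:

```shell
# Sketch of the extension dispatch used above: .tar.gz archives are
# extracted, .bin installers are executed, anything else is rejected.
classify() {
    if echo "$1" | grep -q -s -F .tar.gz; then
        echo "tarball"
    elif echo "$1" | grep -q -s -F .bin; then
        echo "binary"
    else
        echo "unknown"
    fi
}
classify "jdk-7u21-linux-x64.tar.gz"
classify "jdk-6u45-linux-x64.bin"
classify "jdk.rpm"
```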
@@ -1,35 +0,0 @@

#!/bin/bash

echo "Adjusting ssh configuration"

# /etc/ssh/sshd_config is provided by openssh-server
# /etc/ssh/ssh_config is provided by openssh-client
# Note: You need diskimage-builder w/ SHA 82eacdec (11 July 2013) for
# this install to work on Fedora - https://review.openstack.org/#/c/36739/
install-packages augeas-tools openssh-server openssh-client

augtool -s set /files/etc/ssh/sshd_config/GSSAPIAuthentication no
augtool -s set /files/etc/ssh/sshd_config/UseDNS no
augtool -s set /files/etc/ssh/sshd_config/PermitTunnel yes

# ssh-client configuration
# Common
augtool -s set /files/etc/ssh/ssh_config/Host/StrictHostKeyChecking no
augtool -s set /files/etc/ssh/ssh_config/Host/GSSAPIAuthentication no

distro=$(lsb_release -is || :)
echo $distro
case "$distro" in
    Ubuntu )
        augtool -s set /files/etc/ssh/sshd_config/GSSAPICleanupCredentials yes
        augtool -s set /files/etc/ssh/sshd_config/AuthorizedKeysFile .ssh/authorized_keys
    ;;
    Fedora )
        sed -i 's/ssh_pwauth: 0/ssh_pwauth: 1/' /etc/cloud/cloud.cfg
        augtool -s clear /files/etc/sudoers/Defaults[type=':nrpe']/requiretty/negate
        augtool -s set /files/etc/ssh/sshd_config/SyslogFacility AUTH
        augtool -s set /files/etc/ssh/sshd_config/StrictModes yes
        augtool -s set /files/etc/ssh/sshd_config/RSAAuthentication yes
        augtool -s set /files/etc/ssh/sshd_config/PubkeyAuthentication yes
    ;;
esac
@@ -1,76 +0,0 @@

#!/bin/bash

# XXX: This is in post-install.d, instead of install.d, because the
# hadoop RPM claims ownership of files owned by the filesystem RPM,
# such as /usr and /bin, and installing hadoop then updating
# filesystem results in a failure. This can be moved to install.d when
# HADOOP-9777 is resolved.
# https://issues.apache.org/jira/browse/HADOOP-9777

distro=$(lsb_release -is || :)
if [ ! "$distro" == "Fedora" -a ! "$distro" == "Ubuntu" ]; then
    echo "Unknown distro: $distro. Exiting."
    exit 1
fi

echo "Hadoop setup begins for $distro"
tmp_dir=/tmp/hadoop

echo "Creating hadoop user & group"
case "$distro" in
    Ubuntu )
        addgroup hadoop
        adduser --ingroup hadoop --disabled-password --gecos GECOS hadoop
        adduser hadoop sudo
    ;;
    Fedora )
        adduser -G adm,wheel hadoop
    ;;
esac

echo "Hadoop version $DIB_HADOOP_VERSION will be injected into image. Starting the download"
case "$distro" in
    Ubuntu )
        package="hadoop_$DIB_HADOOP_VERSION-1_x86_64.deb"
    ;;
    Fedora )
        package="hadoop-$DIB_HADOOP_VERSION-1.x86_64.rpm"
    ;;
esac

install-packages wget
wget -P $tmp_dir "http://archive.apache.org/dist/hadoop/core/hadoop-$DIB_HADOOP_VERSION/$package"
if [ $? -ne 0 ]; then
    echo -e "Could not find Hadoop version $DIB_HADOOP_VERSION.\nAborting"
    exit 1
fi

case "$distro" in
    Ubuntu )
        dpkg -i $tmp_dir/$package
    ;;
    Fedora )
        if [ $(lsb_release -rs) = '19' ]; then
            rpm -i $tmp_dir/$package --relocate /usr=/usr --replacefiles
        else
            rpm -ivh --replacefiles $tmp_dir/$package
        fi
        chmod 755 /usr/sbin/start-*
        chmod 755 /usr/sbin/stop-*
        chmod 755 /usr/sbin/slaves.sh
        chmod 755 /usr/sbin/update-hadoop-env.sh
    ;;
esac
rm -r $tmp_dir

echo "Pre-configuring Hadoop"
filename=$(find $TARGET_ROOT/usr/java/ -maxdepth 1 -name "jdk*")
cat >> /home/hadoop/.bashrc <<EOF
PATH=\$PATH:/usr/sbin:$filename/bin
JAVA_HOME=$filename
HADOOP_HOME=/usr/share/hadoop/
EOF
sed -i -e "s,export JAVA_HOME=.*,export JAVA_HOME=$filename," \
    -e "s,export HADOOP_LOG_DIR=.*,export HADOOP_LOG_DIR=/mnt/log/hadoop/\$USER," \
    -e "s,export HADOOP_SECURE_DN_LOG_DIR=.*,export HADOOP_SECURE_DN_LOG_DIR=/mnt/log/hadoop/hdfs," \
    /etc/hadoop/hadoop-env.sh
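The package name and download URL above are assembled per distro from DIB_HADOOP_VERSION. The construction in isolation (no download is attempted here):

```shell
# Standalone sketch of the package-name construction above, for one
# example version and distro.
DIB_HADOOP_VERSION=1.1.2
distro=Ubuntu
case "$distro" in
    Ubuntu ) package="hadoop_$DIB_HADOOP_VERSION-1_x86_64.deb" ;;
    Fedora ) package="hadoop-$DIB_HADOOP_VERSION-1.x86_64.rpm" ;;
esac
url="http://archive.apache.org/dist/hadoop/core/hadoop-$DIB_HADOOP_VERSION/$package"
echo "$url"
```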
@@ -1,20 +0,0 @@

#!/bin/bash
set -e

if [ -z "$JAVA_DOWNLOAD_URL" ]; then
    if [ -z "$JAVA_FILE" ]; then
        echo "JAVA_FILE and JAVA_DOWNLOAD_URL are not set. Impossible to install java. Exit"
        exit 1
    fi
fi
if [ -z "$DIB_HADOOP_VERSION" ]; then
    echo "DIB_HADOOP_VERSION is not set. Impossible to install hadoop. Exit"
    exit 1
fi
version_check=$(echo $DIB_HADOOP_VERSION | sed -e '/[0-9]\.[0-9]\.[0-9]/d')
if [[ -z $version_check ]]; then
    echo "All variables are set, continue."
else
    echo "Version error. Exit"
    exit 1
fi
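The sed expression above deletes any line containing a digit.digit.digit pattern, so an empty result means the version string looks valid. A runnable sketch of that check:

```shell
# Sketch of the version check above: sed deletes lines that contain a
# digit.digit.digit pattern, so a valid version yields empty output.
check() { echo "$1" | sed -e '/[0-9]\.[0-9]\.[0-9]/d'; }
check "1.1.2"           # valid: prints nothing
check "not-a-version"   # invalid: echoed back unchanged
```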
@@ -1,3 +0,0 @@

Installs Hive on Ubuntu and Fedora.
Hive stores its metadata in a MySQL database, so this element requires the 'mysql' element.
You can specify a download link for Hive using the 'HIVE_DOWNLOAD_URL' parameter or choose a Hive version using the 'HIVE_VERSION' parameter.
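When only 'HIVE_VERSION' is set, the hive install element falls back to the Apache mirror URL. A sketch of that construction, using a hypothetical version number:

```shell
# Sketch of the default URL the hive element builds when only
# HIVE_VERSION is supplied; 0.11.0 is just an example version.
HIVE_VERSION=0.11.0
HIVE_DOWNLOAD_URL="http://www.apache.org/dist/hive/hive-$HIVE_VERSION/hive-$HIVE_VERSION-bin.tar.gz"
echo "$HIVE_DOWNLOAD_URL"
```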
@@ -1,2 +0,0 @@

hadoop
mysql
@@ -1,29 +0,0 @@

#!/bin/bash
install-packages wget tar

tmp_dir=/tmp/hive
mkdir -p $tmp_dir
cd $tmp_dir

if [ -z "$HIVE_DOWNLOAD_URL" ]; then
    HIVE_DOWNLOAD_URL=http://www.apache.org/dist/hive/hive-$HIVE_VERSION/hive-$HIVE_VERSION-bin.tar.gz
fi
wget $HIVE_DOWNLOAD_URL
if [ $? -ne 0 ]; then
    echo -e "Could not download hive.\nAborting"
    exit 1
fi
HIVE_FILE=$(basename $HIVE_DOWNLOAD_URL)
tar xzf $HIVE_FILE
HIVE_DIR="${HIVE_FILE%.*}"
HIVE_DIR="${HIVE_DIR%.*}"
mv $HIVE_DIR /opt/hive
rm -r $tmp_dir
chmod -R a+rw /opt/hive

ln -s /usr/share/java/mysql-connector-java.jar /opt/hive/lib/libmysql-java.jar
chown -R hadoop:hadoop /opt/hive
cat >> /home/hadoop/.bashrc <<EOF
HIVE_HOME=/opt/hive
PATH=\$PATH:\$HIVE_HOME/bin
EOF
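The two `%.*` parameter expansions above strip the ".tar.gz" suffix in two steps to recover the name of the extracted directory:

```shell
# Sketch of the suffix stripping above: two ${var%.*} expansions drop
# ".gz" and then ".tar", leaving the extracted directory name.
HIVE_FILE="hive-0.11.0-bin.tar.gz"
HIVE_DIR="${HIVE_FILE%.*}"   # hive-0.11.0-bin.tar
HIVE_DIR="${HIVE_DIR%.*}"    # hive-0.11.0-bin
echo "$HIVE_DIR"
```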
@@ -1,8 +0,0 @@

#!/bin/bash
if [ -z "$HIVE_DOWNLOAD_URL" ]; then
    version_check=$(echo $HIVE_VERSION | sed -e '/[0-9]\.[0-9][0-9]\.[0-9]/d')
    if [ ! -z "$version_check" ]; then
        echo -e "Unable to install Hive: You should specify HIVE_DOWNLOAD_URL or HIVE_VERSION.\nAborting"
        exit 1
    fi
fi
@@ -1,2 +0,0 @@

This element sets up the basic components of MySQL on Ubuntu and Fedora.
It is a lightweight version of the original MySQL element (https://github.com/openstack/tripleo-image-elements/tree/master/elements/mysql).
@@ -1,13 +0,0 @@

#!/bin/sh
set -e
set -o xtrace
if [ $(lsb_release -is) = 'Fedora' ]; then
    install-packages community-mysql community-mysql-libs community-mysql-server mysql-connector-java
    mkdir -p /etc/mysql/conf.d
elif [ $(lsb_release -is) = 'Ubuntu' ]; then
    install-packages mysql-server-5.5 mysql-client-5.5 libmysql-java
else
    echo "Unknown distribution"
    exit 1
fi
rm -rf /var/lib/mysql/ib_logfile*
@@ -1,16 +0,0 @@

#!/bin/bash

# Disable MySQL startup on boot in Ubuntu
# Service mysqld doesn't start on boot in Fedora

if [ $(lsb_release -is) = 'Ubuntu' ]; then
    if [ -e "/etc/init/mysql.conf" ]; then
        sed -i "s/start on runlevel \[.*\]/start on never runlevel [2345]/g" /etc/init/mysql.conf
    else
        update-rc.d -f mysql remove
    fi
fi

# Script for starting mysql
install -D -g root -o root -m 0755 $(dirname $0)/start-mysql.sh /opt/start-mysql.sh
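The upstart rewrite above can be exercised against a scratch file instead of the real /etc/init/mysql.conf to see what it does to the start stanza:

```shell
# Sketch: apply the same sed rewrite used above to a scratch copy of the
# upstart stanza to show how automatic startup is neutralized.
conf=$(mktemp)
echo "start on runlevel [2345]" > "$conf"
sed -i "s/start on runlevel \[.*\]/start on never runlevel [2345]/g" "$conf"
result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```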
@@ -1,6 +0,0 @@

#!/bin/bash
if [ $(lsb_release -is) = 'Ubuntu' ]; then
    sudo service mysql start
elif [ $(lsb_release -is) = 'Fedora' ]; then
    sudo service mysqld start
fi
@@ -1 +0,0 @@

Oozie deployment
@@ -1 +0,0 @@

savanna-version
@@ -1,21 +0,0 @@

#!/bin/bash
echo "Oozie setup"

install-packages zip unzip tar wget

if [ -n "$OOZIE_DOWNLOAD_URL" ]; then
    wget -P /opt/ $OOZIE_DOWNLOAD_URL
    OOZIE_FILE=$(basename $OOZIE_DOWNLOAD_URL)
    OOZIE_DIR="${OOZIE_FILE%.*}"
    OOZIE_DIR="${OOZIE_DIR%.*}"
elif [ -n "$OOZIE_FILE" ]; then
    install -D -g root -o root -m 0755 $(dirname $0)/$OOZIE_FILE /opt
    OOZIE_DIR="${OOZIE_FILE%.*}"
    OOZIE_DIR="${OOZIE_DIR%.*}"
fi
wget -P /tmp/ http://extjs.com/deploy/ext-2.2.zip

cd /opt/
tar xzf $OOZIE_FILE
mv $OOZIE_DIR oozie
rm $OOZIE_FILE
@@ -1,9 +0,0 @@

#!/bin/bash

mkdir /opt/oozie/libext/
cp -r /usr/share/hadoop/*.jar /opt/oozie/libext
cp -r /usr/share/hadoop/lib/*.jar /opt/oozie/libext
ln -s /usr/share/java/mysql-connector-java.jar /opt/oozie/libtools/mysql.jar
ln -s /usr/share/java/mysql-connector-java.jar /opt/oozie/oozie-server/lib/mysql.jar
chown -R hadoop:hadoop /opt/oozie
su hadoop -c "/opt/oozie/bin/oozie-setup.sh prepare-war -extjs /tmp/ext-2.2.zip -jars /usr/share/hadoop/lib/*.jar:/usr/share/hadoop/*.jar"
@@ -1,9 +0,0 @@

#!/bin/bash
set -e

if [ -z "$OOZIE_DOWNLOAD_URL" ]; then
    if [ -z "$OOZIE_FILE" ]; then
        echo "OOZIE_FILE and OOZIE_DOWNLOAD_URL are not set. Impossible to install Oozie. Exit"
        exit 1
    fi
fi
@@ -1,4 +0,0 @@

Assign a password to root.

This is useful when booting outside of a cloud environment (e.g. manually via
kvm).
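The element aborts at image-build time unless DIB_PASSWORD is supplied. A minimal runnable sketch of that guard (the function name and example value are illustrative, not part of the element):

```shell
# Sketch of the element's guard: the image build must be given
# DIB_PASSWORD, otherwise the element exits with an error.
password_state() {
    if [ -z "$DIB_PASSWORD" ]; then
        echo "missing"
    else
        echo "set"
    fi
}
DIB_PASSWORD=""
password_state
DIB_PASSWORD="s3cret"   # example value only
password_state
```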
@@ -1,11 +0,0 @@

#!/bin/bash
if [ -z "$DIB_PASSWORD" ]; then
    echo "DIB_PASSWORD is not set; cannot set up a password for root"
    exit 1
fi
sed -i "s/disable_root: true/disable_root: false/" /etc/cloud/cloud.cfg
install-packages augeas-tools openssh-server openssh-client
augtool -s set /files/etc/ssh/sshd_config/PasswordAuthentication yes
augtool -s set /files/etc/ssh/sshd_config/PermitRootLogin yes
augtool -s set /files/etc/ssh/ssh_config/PasswordAuthentication yes
echo -e "$DIB_PASSWORD\n$DIB_PASSWORD\n" | passwd
@@ -1,9 +0,0 @@

#!/bin/bash

if [ -z "$SAVANNA_ELEMENTS_COMMIT_ID" -o -z "$DIB_COMMIT_ID" ]; then
    exit 3
else
    echo -e "Savanna-elements-extra commit id: $SAVANNA_ELEMENTS_COMMIT_ID,
Diskimage-builder commit id: $DIB_COMMIT_ID" > /etc/savanna-extra.version
fi
@@ -1 +0,0 @@

You can add your own Swift filesystem jar into the image. Use 'swift_url' to specify a download link for the Swift jar file.
@@ -1,12 +0,0 @@

#!/bin/bash
install-packages wget
if [ -z "$swift_url" ]; then
    wget -O /usr/share/hadoop/lib/hadoop-swift.jar "http://savanna-files.mirantis.com/hadoop-swift/hadoop-swift-latest.jar"
else
    wget -O /usr/share/hadoop/lib/hadoop-swift.jar $swift_url
fi
if [ $? -ne 0 ]; then
    echo -e "Could not download Swift Hadoop FS implementation.\nAborting"
    exit 1
fi
chmod 0644 /usr/share/hadoop/lib/hadoop-swift.jar