The change is required because the cloud-init user (centos, ubuntu,
cloud-user, ...) is used in the firstboot code.
All distributions where vanilla can be deployed are based
on systemd.
Story: 2004479
Task: 28194
Change-Id: I9d8a626b84d5d3c2a91348895cded5fd32ded52a
Also tweak Hive a bit and refer to artifacts in a new (but not totally
ideal) location.
Co-Authored-By: Jeremy Freudberg <jeremyfreudberg@gmail.com>
Change-Id: I3a25ee8c282849911089adf6c3593b1bb50fd067
* Handle Hadoop classpath better
* Include proper support for Spark classpath
* Formally limit element's use to Vanilla and Spark
Change-Id: I65abd7e375dba11599a4ab943d24f878235cd71d
Closes-Bug: #1727757
Closes-Bug: #1728061
As a prerequisite for S3 datasource support, the hadoop-aws jar needs to
be on the Hadoop classpath. The jar is copied into the proper folder
when possible on the appropriate plugins; otherwise the user can provide
it from a download URL.
Additionally, set the correct value of DIB_HDFS_LIB_DIR on the Vanilla
plugin to avoid any unnecessary symlinking.
Partially-Implements: bp sahara-support-s3
Change-Id: I94c5b0055b87f6a4e1382118d0718e588fccfe87
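The decision described above (prefer a jar already bundled in the image,
fall back to a user-supplied URL) can be sketched as follows. This is a
hedged illustration, not the element's actual code: the function name
`select_hadoop_aws_source` and its parameters are hypothetical, and only
DIB_HDFS_LIB_DIR comes from the commit itself.

```shell
# Hypothetical sketch of the hadoop-aws jar placement logic.
# $1: path where a bundled hadoop-aws jar would be, if the Hadoop
#     distribution ships one (e.g. under $DIB_HDFS_LIB_DIR)
# $2: optional user-provided download URL
select_hadoop_aws_source() {
    local bundled_jar="$1" download_url="$2"
    if [ -f "$bundled_jar" ]; then
        # Jar already present in the image: just copy it into place.
        echo "copy:$bundled_jar"
    elif [ -n "$download_url" ]; then
        # No bundled jar: fetch the one the user pointed us at.
        echo "download:$download_url"
    else
        echo "none"
        return 1
    fi
}
```

In the real element the chosen jar ends up in a directory that is
already on the Hadoop classpath, which is exactly what setting
DIB_HDFS_LIB_DIR correctly avoids having to symlink around.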
Up to now we only check for fedora, centos, centos7 and rhel. rhel7 is
being added to allow the use of RHEL 7 images.
Change-Id: Id0dfa9aab51ec7bb2fe4838c2aa0650f3f026128
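The extended check can be sketched as a simple case statement over the
distribution name. This is an illustration only: the helper name
`check_distro` is hypothetical, and the real element inlines this logic
against diskimage-builder's DISTRO_NAME rather than taking an argument.

```shell
# Hypothetical helper mirroring the distro whitelist after rhel7 is added.
check_distro() {
    case "$1" in
        fedora | centos | centos7 | rhel | rhel7)
            return 0  # supported distribution
            ;;
        *)
            return 1  # anything else is rejected
            ;;
    esac
}
```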
Vanilla 2.6.0 is not in the supported list in any current branch of
sahara, so we can just drop it. If needed, the stable/mitaka branch
should be used for building that image.
Change-Id: I81ed8209f2154f112fe7f6718029b84548793380
As the previous comment at the top of 40-setup-hadoop also said, this
was done because the Hadoop v1 RPM was buggy, so Hadoop needed to be
installed after the rest of the system was set up.
Since support for Hadoop v1 has been dropped, Hadoop can now be
installed in install.d.
Change-Id: If3b0f8f595d6bf36017e63d331b5c1f7faa532e2
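The move amounts to relocating the script between diskimage-builder
element phases. The layout below is a sketch: the commit does not name
the phase the script previously lived in, so `post-install.d` here is an
assumption based on "installed after the rest of the system is set up".

```
elements/hadoop/
├── install.d/
│   └── 40-setup-hadoop    # Hadoop installed here now
└── post-install.d/        # assumed previous home of 40-setup-hadoop
```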