Retire Sahara: remove repo content

The Sahara project is retiring:
- https://review.opendev.org/c/openstack/governance/+/919374

This commit removes the content of this project's repository.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/919376
Change-Id: Ic79f8551f977e8b971733cb15658fc5614773f87
Ghanshyam Mann 2024-05-10 17:28:31 -07:00
parent 02a1f2a715
commit dcf9a8fc59
192 changed files with 8 additions and 17355 deletions

.gitignore

@@ -1,30 +0,0 @@
*.egg-info
*.egg[s]
*.log
*.py[co]
.coverage
.testrepository
.tox
.stestr
.venv
.idea
AUTHORS
ChangeLog
build
cover
develop-eggs
dist
doc/build
doc/html
eggs
etc/sahara.conf
etc/sahara/*.conf
etc/sahara/*.topology
sdist
target
tools/lintstack.head.py
tools/pylint_exceptions
doc/source/sample.config
# Files created by releasenotes build
releasenotes/build

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=./sahara_plugin_mapr/tests/unit
top_dir=./

@@ -1,10 +0,0 @@
- project:
    templates:
      - check-requirements
      - openstack-python3-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - sahara-buildimages-mapr:
            voting: false

@@ -1,19 +0,0 @@
The source repository for this project can be found at:

   https://opendev.org/openstack/sahara-plugin-mapr

Pull requests submitted through GitHub are not monitored.

To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:

   https://docs.openstack.org/contributors/code-and-documentation/quick-start.html

Bugs should be filed on Storyboard:

   https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-mapr

For more specific information about contributing to this repository, see the
sahara-plugin-mapr contributor guide:

   https://docs.openstack.org/sahara-plugin-mapr/latest/contributor/contributing.html

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,38 +1,10 @@
-========================
-Team and repository tags
-========================
+This project is no longer maintained.
-.. image:: https://governance.openstack.org/tc/badges/sahara.svg
-    :target: https://governance.openstack.org/tc/reference/tags/index.html
-.. Change things from this point on
-OpenStack Data Processing ("Sahara") MapR plugin
-================================================
-OpenStack Sahara MapR Plugin provides the users the option to
-start MapR clusters on OpenStack Sahara.
-Check out OpenStack Sahara documentation to see how to deploy the
-MapR Plugin.
-Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara
-Storyboard project: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-mapr
-Sahara docs site: https://docs.openstack.org/sahara/latest/
-Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html
-How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html
-Source: https://opendev.org/openstack/sahara-plugin-mapr
-Bugs and feature requests: https://storyboard.openstack.org/#!/openstack/sahara-plugin-mapr
-Release notes: https://docs.openstack.org/releasenotes/sahara-plugin-mapr/
-License
--------
-Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0
+
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
+
+For any further questions, please email
+openstack-discuss@lists.openstack.org or join #openstack-dev on
+OFTC.

@@ -1 +0,0 @@
[python: **.py]

@@ -1,9 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
openstackdocstheme>=2.2.1 # Apache-2.0
os-api-ref>=1.4.0 # Apache-2.0
reno>=3.1.0 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD
sphinxcontrib-httpdomain>=1.3.0 # BSD
whereto>=0.3.0 # Apache-2.0

@@ -1,215 +0,0 @@
# -*- coding: utf-8 -*-
#
# sahara-plugin-mapr documentation build configuration file.
#
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'reno.sphinxext',
    'openstackdocstheme',
]
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/sahara-plugin-mapr'
openstackdocs_pdf_link = True
openstackdocs_use_storyboard = True
openstackdocs_projects = [
    'sahara'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = '2015, Sahara team'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'saharamaprplugin-testsdoc'
# -- Options for LaTeX output --------------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
    ('index', 'doc-sahara-plugin-mapr.tex', 'Sahara MapR Plugin Documentation',
     'Sahara team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
smartquotes_excludes = {'builders': ['latex']}
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'sahara-plugin-mapr', 'sahara-plugin-mapr Documentation',
     ['Sahara team'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    ('index', 'sahara-plugin-mapr', 'sahara-plugin-mapr Documentation',
     'Sahara team', 'sahara-plugin-mapr', 'One line description of project.',
     'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

@@ -1,14 +0,0 @@
============================
So You Want to Contribute...
============================
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the
accounts you need, the basics of interacting with our Gerrit review system, how
we communicate as a community, etc.

sahara-plugin-mapr is maintained by the OpenStack Sahara project.
To understand our development process and how you can contribute to it, please
look at the Sahara project's general contributor's page:
http://docs.openstack.org/sahara/latest/contributor/contributing.html

@@ -1,8 +0,0 @@
=================
Contributor Guide
=================
.. toctree::
   :maxdepth: 2

   contributing

@@ -1,9 +0,0 @@
MapR plugin for Sahara
======================
.. toctree::
   :maxdepth: 2

   user/index
   contributor/index

@@ -1,9 +0,0 @@
==========
User Guide
==========
.. toctree::
   :maxdepth: 2

   mapr-plugin

@@ -1,129 +0,0 @@
MapR Distribution Plugin
========================
The MapR Sahara plugin allows you to provision MapR clusters on OpenStack
quickly and conveniently.
Operation
---------
The MapR plugin performs the following four primary functions during cluster
creation (a sketch of this flow follows the list):

1. MapR components deployment - the plugin manages the deployment of the
   required software to the target VMs
2. Services Installation - MapR services are installed according to the
   provided roles list
3. Services Configuration - the plugin combines default settings with
   user-provided settings
4. Services Start - the plugin starts the appropriate services according to
   the specified roles
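
As an illustration only (the class and method names below are hypothetical,
not the plugin's actual code), the four phases can be thought of as a simple
pipeline::

    # Illustrative sketch of the four-phase provisioning flow described
    # above; each phase object is assumed to expose a run() method.
    class ProvisioningFlow(object):
        def __init__(self, deployer, installer, configurer, starter):
            # 1. deploy components   2. install services
            # 3. configure services  4. start services
            self._phases = [deployer, installer, configurer, starter]

        def run(self, cluster_context, instances):
            for phase in self._phases:
                phase.run(cluster_context, instances)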
Images
------
The Sahara MapR plugin can make use of either minimal (operating system only)
images or pre-populated MapR images. The base requirement for both is that the
image is cloud-init enabled and contains a supported operating system (see
http://maprdocs.mapr.com/home/InteropMatrix/r_os_matrix.html).

The advantage of a pre-populated image is reduced provisioning time: package
downloads account for the majority of the provisioning cycle, so skipping
them saves most of that time. In addition, provisioning a large cluster from
minimal images puts a burden on the network, as packages for all nodes must
be downloaded from the package repository.
.. list-table:: Support matrix for the `mapr` plugin
   :widths: 15 15 20 15 35
   :header-rows: 1

   * - Version
       (image tag)
     - Distribution
     - Build method
     - Version
       (build parameter)
     - Notes
   * - 5.2.0.mrv2
     - Ubuntu 14.04, CentOS 7
     - sahara-image-pack
     - 5.2.0.mrv2
     -
   * - 5.2.0.mrv2
     - Ubuntu 14.04, CentOS 7
     - sahara-image-create
     - 5.2.0
     -
For more information about building images, refer to
:sahara-doc:`Sahara documentation <user/building-guest-images.html>`.

The MapR plugin requires an image to be tagged in the Sahara Image Registry
with two tags: 'mapr' and '<MapR version>' (e.g. '5.2.0.mrv2'), as
illustrated below.
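
For illustration, tagging a registered image from Python might look like the
sketch below; the authentication values are placeholders, and the
saharaclient call should be verified against your client version::

    # Hedged sketch: tag an image for the MapR plugin using
    # python-saharaclient. AUTH_URL, USER, PASSWORD, PROJECT and
    # IMAGE_ID are placeholders you must supply.
    from keystoneauth1 import identity
    from keystoneauth1 import session
    from saharaclient import client as sahara_client

    auth = identity.Password(auth_url=AUTH_URL, username=USER,
                             password=PASSWORD, project_name=PROJECT,
                             user_domain_id='default',
                             project_domain_id='default')
    sahara = sahara_client.Client('1.1', session=session.Session(auth=auth))
    # Add both required tags: 'mapr' and the MapR version tag.
    sahara.images.update_tags(IMAGE_ID, ['mapr', '5.2.0.mrv2'])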
The default username specified for these images is different for each
distribution. For more information, refer to the
:sahara-doc:`registering image <user/registering-image.html>` section
of the Sahara documentation.
Hadoop Version Support
----------------------
The MapR plugin currently supports Hadoop 2.7.0 (5.2.0.mrv2).
Cluster Validation
------------------
When the user creates or scales a Hadoop cluster using the MapR plugin, the
cluster topology requested by the user is verified for consistency (an
illustrative check is sketched at the end of this section).

Every MapR cluster must contain:

* at least 1 *CLDB* process
* exactly 1 *Webserver* process
* an odd number of *ZooKeeper* processes, but not less than 1
* a *FileServer* process on every node
* at least 1 ephemeral drive (in which case the ephemeral drive must be
  specified in the flavor, not at node group template creation) or 1 Cinder
  volume per instance
Every Hadoop cluster must contain exactly 1 *Oozie* process.

Every MapReduce v1 cluster must contain:

* at least 1 *JobTracker* process
* at least 1 *TaskTracker* process

Every MapReduce v2 cluster must contain:

* exactly 1 *ResourceManager* process
* exactly 1 *HistoryServer* process
* at least 1 *NodeManager* process

Every Spark cluster must contain:

* exactly 1 *Spark Master* process
* exactly 1 *Spark HistoryServer* process
* at least 1 *Spark Slave* (worker) process

The HBase service is considered valid if:

* the cluster has at least 1 *HBase-Master* process
* the cluster has at least 1 *HBase-RegionServer* process

The Hive service is considered valid if:

* the cluster has exactly 1 *HiveMetastore* process
* the cluster has exactly 1 *HiveServer2* process

The Hue service is considered valid if:

* the cluster has exactly 1 *Hue* process
* the *Hue* process resides on the same node as the *HttpFS* process

The HttpFS service is considered valid if the cluster has exactly 1 *HttpFS*
process.

The Sqoop service is considered valid if the cluster has exactly 1
*Sqoop2-Server* process.
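
An illustrative (hypothetical) consistency check for a few of the rules
above::

    # Sketch only: `counts` maps a node-process name to how many
    # instances of it the requested topology contains.
    def validate_topology(counts):
        errors = []
        if counts.get('CLDB', 0) < 1:
            errors.append('at least 1 CLDB process is required')
        if counts.get('Webserver', 0) != 1:
            errors.append('exactly 1 Webserver process is required')
        zookeepers = counts.get('ZooKeeper', 0)
        if zookeepers < 1 or zookeepers % 2 == 0:
            errors.append('ZooKeeper count must be odd and at least 1')
        return errors

For example, ``validate_topology({'CLDB': 1, 'Webserver': 1, 'ZooKeeper': 3})``
returns an empty error list.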
The MapR Plugin
---------------
For more information, please contact MapR.

@@ -1,6 +0,0 @@
---
upgrade:
  - |
    Python 2.7 support has been dropped. Last release of sahara and its plugins
    to support python 2.7 is OpenStack Train. The minimum version of Python now
    supported by sahara and its plugins is Python 3.6.

@@ -1,6 +0,0 @@
===========================
2023.2 Series Release Notes
===========================
.. release-notes::
   :branch: stable/2023.2

@@ -1,210 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Sahara Release Notes documentation build configuration file
extensions = [
    'reno.sphinxext',
    'openstackdocstheme'
]
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/sahara-plugin-mapr'
openstackdocs_use_storyboard = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = '2015, Sahara Developers'
# Release do not need a version number in the title, they
# cover multiple versions.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'SaharaMapRReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'SaharaMapRReleaseNotes.tex',
     'Sahara MapR Plugin Release Notes Documentation',
     'Sahara Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'saharamaprreleasenotes',
     'Sahara MapR Plugin Release Notes Documentation',
     ['Sahara Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    ('index', 'SaharaMapRReleaseNotes',
     'Sahara MapR Plugin Release Notes Documentation',
     'Sahara Developers', 'SaharaMapRReleaseNotes',
     'One line description of project.',
     'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']

@@ -1,17 +0,0 @@
==================================
Sahara MapR Plugin Release Notes
==================================
.. toctree::
   :maxdepth: 1

   unreleased
   2023.2
   zed
   yoga
   xena
   wallaby
   victoria
   ussuri
   train
   stein

@@ -1,44 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2019. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-mapr\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-24 23:44+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-04-25 10:41+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "Current Series Release Notes"
msgstr "Aktuelle Serie Releasenotes"
msgid ""
"Python 2.7 support has been dropped. Last release of sahara and its plugins "
"to support python 2.7 is OpenStack Train. The minimum version of Python now "
"supported by sahara and its plugins is Python 3.6."
msgstr ""
"Python 2.7 Unterstützung wurde beendet. Der letzte Release von Sahara und "
"seinen Plugins der Python 2.7 unterstützt ist OpenStack Train. Die minimal "
"Python Version welche von Sahara und seinen Plugins unterstützt wird, ist "
"Python 3.6."
msgid "Sahara MapR Plugin Release Notes"
msgstr "Sahara MapR Plugin Releasenotes"
msgid "Stein Series Release Notes"
msgstr "Stein Serie Releasenotes"
msgid "Train Series Release Notes"
msgstr "Train Serie Releasenotes"
msgid "Upgrade Notes"
msgstr "Aktualisierungsnotizen"
msgid "Ussuri Series Release Notes"
msgstr "Ussuri Serie Releasenotes"

@@ -1,48 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-mapr\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-10-07 22:02+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-11-04 12:51+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "3.0.0"
msgstr "3.0.0"
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid ""
"Python 2.7 support has been dropped. Last release of sahara and its plugins "
"to support python 2.7 is OpenStack Train. The minimum version of Python now "
"supported by sahara and its plugins is Python 3.6."
msgstr ""
"Python 2.7 support has been dropped. Last release of sahara and its plugins "
"to support Python 2.7 is OpenStack Train. The minimum version of Python now "
"supported by sahara and its plugins is Python 3.6."
msgid "Sahara MapR Plugin Release Notes"
msgstr "Sahara MapR Plugin Release Notes"
msgid "Stein Series Release Notes"
msgstr "Stein Series Release Notes"
msgid "Train Series Release Notes"
msgstr "Train Series Release Notes"
msgid "Upgrade Notes"
msgstr "Upgrade Notes"
msgid "Ussuri Series Release Notes"
msgstr "Ussuri Series Release Notes"
msgid "Victoria Series Release Notes"
msgstr "Victoria Series Release Notes"

@@ -1,6 +0,0 @@
===================================
Stein Series Release Notes
===================================
.. release-notes::
   :branch: stable/stein

@@ -1,6 +0,0 @@
==========================
Train Series Release Notes
==========================
.. release-notes::
   :branch: stable/train

@@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================
.. release-notes::

@@ -1,6 +0,0 @@
===========================
Ussuri Series Release Notes
===========================
.. release-notes::
   :branch: stable/ussuri

@@ -1,6 +0,0 @@
=============================
Victoria Series Release Notes
=============================
.. release-notes::
   :branch: stable/victoria

@@ -1,6 +0,0 @@
============================
Wallaby Series Release Notes
============================
.. release-notes::
   :branch: stable/wallaby

@@ -1,6 +0,0 @@
=========================
Xena Series Release Notes
=========================
.. release-notes::
   :branch: stable/xena

@@ -1,6 +0,0 @@
=========================
Yoga Series Release Notes
=========================
.. release-notes::
   :branch: stable/yoga

@@ -1,6 +0,0 @@
========================
Zed Series Release Notes
========================
.. release-notes::
   :branch: stable/zed

@@ -1,18 +0,0 @@
# Requirements lower bounds listed here are our best effort to keep them up to
# date but we do not test them so no guarantee of having them all correct. If
# you find any incorrect lower bounds, let us know or propose a fix.
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
Babel!=2.4.0,>=2.3.4 # BSD
eventlet>=0.26.0 # MIT
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
requests>=2.14.2 # Apache-2.0
sahara>=10.0.0.0b1

@@ -1,26 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# It's based on oslo.i18n usage in OpenStack Keystone project and
# recommendations from https://docs.openstack.org/oslo.i18n/latest/
# user/usage.html
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='sahara_plugin_mapr')
# The primary translation function using the well-known name "_"
_ = _translators.primary
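
# Usage sketch (illustrative, not part of this module): plugin code imports
# `_` and wraps user-facing strings so the catalogs under the locale
# directories can translate them. The function below is hypothetical; its
# msgid appears in the plugin's translation catalogs.
from sahara_plugin_mapr.i18n import _

def ensure_cldb_started(started):
    if not started:
        # The English msgid is looked up in the active locale's catalog.
        raise RuntimeError(_("CLDB failed to start"))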

@@ -1,272 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-mapr VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2019-09-20 17:28+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-09-25 06:29+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(message)s, required by %(required_by)s"
msgstr "%(message)s, benötigt von %(required_by)s"
#, python-format
msgid "%(service)s service cannot be installed alongside %(package)s package"
msgstr ""
"Der Dienst %(service)s kann nicht zusammen mit dem Paket %(package)s "
"installiert werden"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "%s is in running state"
msgstr "%s befindet sich im aktiven Zustand"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "%s is not in running state"
msgstr "%s befindet sich nicht im aktiven Zustand"
#, python-format
msgid "%s must have at least 1 volume or ephemeral drive"
msgstr "%s muss mindestens 1 Datenträger oder ephemeres Laufwerk haben"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "CLDB failed to start"
msgstr "CLDB konnte nicht gestartet werden"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Can not map service"
msgstr "Der Dienst kann nicht zugeordnet werden"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Config '%s' missing 'value'"
msgstr "Config '%s' fehlender 'value'"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Config missing 'name'"
msgstr "Config fehlender 'name'"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Configure SSH connection"
msgstr "Konfiguriere SSH-Verbindung"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Configure cluster topology"
msgstr "Konfiguriere Cluster-Topologie"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Configure database"
msgstr "Datenbank konfigurieren"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Create Sentry role for Hive"
msgstr "Erstelle Sentry-Rolle für Hive"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Execute configure.sh"
msgstr "Führe configure.sh aus"
#, python-format
msgid ""
"Hadoop cluster should contain at least %(expected_count)d %(component)s "
"component(s). Actual %(component)s count is %(actual_count)d"
msgstr ""
"Hadoop-Cluster sollte mindestens %(expected_count)d %(component)s-"
"Komponente(n) enthalten. Tatsächliche %(component)s Anzahl ist "
"%(actual_count)d"
#, python-format
msgid ""
"Hadoop cluster should contain at most %(expected_count)d %(component)s "
"component(s). Actual %(component)s count is %(actual_count)d"
msgstr ""
"Hadoop-Cluster sollte höchstens %(expected_count)d %(component)s "
"Komponente(n) enthalten. Tatsächliche %(component)s Anzahl ist "
"%(actual_count)d"
#, python-format
msgid ""
"Hadoop cluster should contain odd number of %(component)s but "
"%(actual_count)s found."
msgstr ""
"Der Hadoop-Cluster sollte eine ungerade Anzahl von %(component)s, aber "
"%(actual_count)s sind enthalten."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Init Sentry DB schema"
msgstr "Init-Sentry-DB-Schema"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Initializing MapR-FS"
msgstr "Initialisieren von MapR-FS"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Install %s service"
msgstr "Installiere den %s Dienst"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Install Java"
msgstr "Installiere Java"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Install MapR repositories"
msgstr "Installiere MapR-Repositories"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Install MySQL client"
msgstr "Installiere MySQL-Client"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Install Scala"
msgstr "Installiere Scala"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Install security repos"
msgstr "Installiere Sicherheitsrepositionen"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Invalid argument type %s"
msgstr "Ungültiger Argumenttyp %s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Migrating Hue database"
msgstr "Migrieren der Hue-Datenbank"
#, python-format
msgid "Node \"%(ng_name)s\" is missing component %(component)s"
msgstr "Knoten '%(ng_name)s' fehlt Komponente %(component)s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Node failed to connect to CLDB: %s"
msgstr "Knoten konnte keine Verbindung zu CLDB herstellen: %s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Rebuilt Oozie war"
msgstr "Wiederaufbau von Oozie war"
#, python-format
msgid ""
"Service %(service)s requires %(os)s OS. Use %(os)s image and add \"%(os)s\" "
"tag to it."
msgstr ""
"Service %(service)s benötigt %(os)s OS. Verwenden Sie das %(os)s Abbild und "
"fügen Sie das '%(os)s'-Tag hinzu."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Service not found in services list"
msgstr "Dienst nicht in der Serviceliste gefunden"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Set cluster mode"
msgstr "Stelle Cluster-Modus"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Some %s processes are not in running state"
msgstr "Einige %s-Prozesse sind nicht im aktiven Zustand"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Some ZooKeeper processes are not in running state"
msgstr "Einige ZooKeeper-Prozesse sind nicht im aktiven Zustand"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies CentOS MapR core repository."
msgstr "Gibt das CentOS MapR-Kernrepository an."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies CentOS MapR ecosystem repository."
msgstr "Gibt das CentOS MapR-Ökosystem-Repository an."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies Sentry storage mode."
msgstr "Gibt den Sentry-Speichermodus an."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies Ubuntu MapR core repository."
msgstr "Gibt das Ubuntu MapR-Core-Repository an."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies Ubuntu MapR ecosystem repository."
msgstr "Gibt das Ubuntu MapR-Ökosystem-Repository an."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies heap size for MapR-FS in percents of maximum value."
msgstr ""
"Legt die Größe des Heapspeichers für MapR-FS in Prozent des Maximalwerts "
"fest."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies that MapR-DB is in use."
msgstr "Gibt an, dass MapR-DB verwendet wird."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specifies thrift version."
msgstr "Gibt die thrift Version an."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Specify the version of the service"
msgstr "Geben Sie die Version des Service an"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Start CLDB nodes"
msgstr "Starte CLDB-Knoten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Start ZooKeepers nodes"
msgstr "Starten Sie ZooKeepers-Knoten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Start non-CLDB nodes"
msgstr "Starte Sie Nicht-CLDB Knoten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Stop Warden nodes"
msgstr "Stoppe Warden-Knoten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Stop ZooKeepers nodes"
msgstr "Stoppe ZooKeepers-Knoten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Template object must be defined"
msgstr "Vorlagenobjekt muss definiert sein"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid ""
"The MapR Distribution provides a full Hadoop stack that includes the MapR "
"File System (MapR-FS), MapReduce, a complete Hadoop ecosystem, and the MapR "
"Control System user interface"
msgstr ""
"MapR Distribution bietet einen vollständigen Hadoop-Stapel, der das MapR-"
"Dateisystem (MapR-FS), MapReduce, ein vollständiges Hadoop-Ökosystem und die "
"Benutzeroberfläche des MapR-Steuerungssystems enthält"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Wait for {node_process} on {instance} to change status to \"{status}\""
msgstr ""
"Warten Sie auf {node_process} auf {instance}, um den Status in \"{status}\" "
"zu ändern"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Write config files to instances"
msgstr "Schreibe Konfigurationsdateien in Instanzen"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "ZooKeeper is in running state"
msgstr "ZooKeeper ist in Bearbeitung"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "ZooKeeper is not in running state"
msgstr "ZooKeeper befindet sich nicht im aktiven Status"

@@ -1,217 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-mapr VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2020-05-06 10:53+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-05-05 11:21+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(message)s, required by %(required_by)s"
msgstr "%(message)s, required by %(required_by)s"
#, python-format
msgid "%(service)s service cannot be installed alongside %(package)s package"
msgstr "%(service)s service cannot be installed alongside %(package)s package"
#, python-format
msgid "%s is in running state"
msgstr "%s is in running state"
#, python-format
msgid "%s is not in running state"
msgstr "%s is not in running state"
#, python-format
msgid "%s must have at least 1 volume or ephemeral drive"
msgstr "%s must have at least 1 volume or ephemeral drive"
msgid "CLDB failed to start"
msgstr "CLDB failed to start"
msgid "Can not map service"
msgstr "Can not map service"
#, python-format
msgid "Config '%s' missing 'value'"
msgstr "Config '%s' missing 'value'"
msgid "Config missing 'name'"
msgstr "Config missing 'name'"
msgid "Configure SSH connection"
msgstr "Configure SSH connection"
msgid "Configure cluster topology"
msgstr "Configure cluster topology"
msgid "Configure database"
msgstr "Configure database"
msgid "Create Sentry role for Hive"
msgstr "Create Sentry role for Hive"
msgid "Execute configure.sh"
msgstr "Execute configure.sh"
#, python-format
msgid ""
"Hadoop cluster should contain at least %(expected_count)d %(component)s "
"component(s). Actual %(component)s count is %(actual_count)d"
msgstr ""
"Hadoop cluster should contain at least %(expected_count)d %(component)s "
"component(s). Actual %(component)s count is %(actual_count)d"
#, python-format
msgid ""
"Hadoop cluster should contain at most %(expected_count)d %(component)s "
"component(s). Actual %(component)s count is %(actual_count)d"
msgstr ""
"Hadoop cluster should contain at most %(expected_count)d %(component)s "
"component(s). Actual %(component)s count is %(actual_count)d"
#, python-format
msgid ""
"Hadoop cluster should contain odd number of %(component)s but "
"%(actual_count)s found."
msgstr ""
"Hadoop cluster should contain odd number of %(component)s but "
"%(actual_count)s found."
msgid "Init Sentry DB schema"
msgstr "Init Sentry DB schema"
msgid "Initializing MapR-FS"
msgstr "Initializing MapR-FS"
#, python-format
msgid "Install %s service"
msgstr "Install %s service"
msgid "Install Java"
msgstr "Install Java"
msgid "Install MapR repositories"
msgstr "Install MapR repositories"
msgid "Install MySQL client"
msgstr "Install MySQL client"
msgid "Install Scala"
msgstr "Install Scala"
msgid "Install security repos"
msgstr "Install security repos"
#, python-format
msgid "Invalid argument type %s"
msgstr "Invalid argument type %s"
msgid "Migrating Hue database"
msgstr "Migrating Hue database"
#, python-format
msgid "Node \"%(ng_name)s\" is missing component %(component)s"
msgstr "Node \"%(ng_name)s\" is missing component %(component)s"
#, python-format
msgid "Node failed to connect to CLDB: %s"
msgstr "Node failed to connect to CLDB: %s"
msgid "Rebuilt Oozie war"
msgstr "Rebuilt Oozie war"
#, python-format
msgid ""
"Service %(service)s requires %(os)s OS. Use %(os)s image and add \"%(os)s\" "
"tag to it."
msgstr ""
"Service %(service)s requires %(os)s OS. Use %(os)s image and add \"%(os)s\" "
"tag to it."
msgid "Service not found in services list"
msgstr "Service not found in services list"
msgid "Set cluster mode"
msgstr "Set cluster mode"
#, python-format
msgid "Some %s processes are not in running state"
msgstr "Some %s processes are not in running state"
msgid "Some ZooKeeper processes are not in running state"
msgstr "Some ZooKeeper processes are not in running state"
msgid "Specifies CentOS MapR core repository."
msgstr "Specifies CentOS MapR core repository."
msgid "Specifies CentOS MapR ecosystem repository."
msgstr "Specifies CentOS MapR ecosystem repository."
msgid "Specifies Sentry storage mode."
msgstr "Specifies Sentry storage mode."
msgid "Specifies Ubuntu MapR core repository."
msgstr "Specifies Ubuntu MapR core repository."
msgid "Specifies Ubuntu MapR ecosystem repository."
msgstr "Specifies Ubuntu MapR ecosystem repository."
msgid "Specifies heap size for MapR-FS in percents of maximum value."
msgstr "Specifies heap size for MapR-FS in percents of maximum value."
msgid "Specifies that MapR-DB is in use."
msgstr "Specifies that MapR-DB is in use."
msgid "Specifies thrift version."
msgstr "Specifies thrift version."
msgid "Specify the version of the service"
msgstr "Specify the version of the service"
msgid "Start CLDB nodes"
msgstr "Start CLDB nodes"
msgid "Start ZooKeepers nodes"
msgstr "Start ZooKeepers nodes"
msgid "Start non-CLDB nodes"
msgstr "Start non-CLDB nodes"
msgid "Stop Warden nodes"
msgstr "Stop Warden nodes"
msgid "Stop ZooKeepers nodes"
msgstr "Stop ZooKeepers nodes"
msgid "Template object must be defined"
msgstr "Template object must be defined"
msgid ""
"The MapR Distribution provides a full Hadoop stack that includes the MapR "
"File System (MapR-FS), MapReduce, a complete Hadoop ecosystem, and the MapR "
"Control System user interface"
msgstr ""
"The MapR Distribution provides a full Hadoop stack that includes the MapR "
"File System (MapR-FS), MapReduce, a complete Hadoop ecosystem, and the MapR "
"Control System user interface"
msgid "Wait for {node_process} on {instance} to change status to \"{status}\""
msgstr "Wait for {node_process} on {instance} to change status to \"{status}\""
msgid "Write config files to instances"
msgstr "Write config files to instances"
msgid "ZooKeeper is in running state"
msgstr "ZooKeeper is in running state"
msgid "ZooKeeper is not in running state"
msgstr "ZooKeeper is not in running state"

@@ -1,154 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
class AbstractClusterContext(object, metaclass=abc.ABCMeta):
    @abc.abstractproperty
    def mapr_home(self):
        return

    @abc.abstractproperty
    def configure_sh_path(self):
        return

    @abc.abstractproperty
    def configure_sh(self):
        return

    @abc.abstractproperty
    def hadoop_version(self):
        return

    @abc.abstractproperty
    def hadoop_home(self):
        return

    @abc.abstractproperty
    def hadoop_lib(self):
        return

    @abc.abstractproperty
    def hadoop_conf(self):
        return

    @abc.abstractproperty
    def cluster(self):
        return

    @abc.abstractproperty
    def name_node_uri(self):
        return

    @abc.abstractproperty
    def resource_manager_uri(self):
        return

    @abc.abstractproperty
    def oozie_server_uri(self):
        return

    @abc.abstractproperty
    def oozie_server(self):
        return

    @abc.abstractproperty
    def oozie_http(self):
        return

    @abc.abstractproperty
    def cluster_mode(self):
        return

    @abc.abstractproperty
    def is_node_aware(self):
        return

    @abc.abstractproperty
    def some_instance(self):
        return

    @abc.abstractproperty
    def distro(self):
        return

    @abc.abstractproperty
    def mapr_db(self):
        return

    @abc.abstractmethod
    def filter_instances(self, instances, node_process=None, service=None):
        return

    @abc.abstractmethod
    def removed_instances(self, node_process=None, service=None):
        return

    @abc.abstractmethod
    def added_instances(self, node_process=None, service=None):
        return

    @abc.abstractmethod
    def changed_instances(self, node_process=None, service=None):
        return

    @abc.abstractmethod
    def existing_instances(self, node_process=None, service=None):
        return

    @abc.abstractproperty
    def should_be_restarted(self):
        return

    @abc.abstractproperty
    def mapr_repos(self):
        return

    @abc.abstractproperty
    def is_prebuilt(self):
        return

    @abc.abstractproperty
    def local_repo(self):
        return

    @abc.abstractproperty
    def required_services(self):
        return

    @abc.abstractproperty
    def all_services(self):
        return

    @abc.abstractproperty
    def mapr_version(self):
        return

    @abc.abstractproperty
    def ubuntu_base_repo(self):
        return

    @abc.abstractproperty
    def ubuntu_ecosystem_repo(self):
        return

    @abc.abstractproperty
    def centos_base_repo(self):
        return

    @abc.abstractproperty
    def centos_ecosystem_repo(self):
        return

View File

@ -1,26 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
class AbstractValidator(object, metaclass=abc.ABCMeta):
@abc.abstractmethod
def validate(self, cluster_context):
pass
@abc.abstractmethod
def validate_scaling(self, cluster_context, existing, additional):
pass

View File

@ -1,26 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
class AbstractConfigurer(object, metaclass=abc.ABCMeta):
@abc.abstractmethod
def configure(self, cluster_context, instances=None):
pass
@abc.abstractmethod
def update(self, cluster_context, instances=None):
pass

View File

@ -1,21 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
class AbstractHealthChecker(object, metaclass=abc.ABCMeta):
@abc.abstractmethod
def get_checks(self, cluster_context, instances=None):
pass

View File

@ -1,34 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
class AbstractNodeManager(object, metaclass=abc.ABCMeta):
@abc.abstractmethod
def start(self, cluster_context, instances=None):
pass
@abc.abstractmethod
def stop(self, cluster_context, instances=None):
pass
@abc.abstractmethod
def move_nodes(self, cluster_context, instances):
pass
@abc.abstractmethod
def remove_nodes(self, cluster_context, instances):
pass

View File

@ -1,78 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
class AbstractVersionHandler(object, metaclass=abc.ABCMeta):
@abc.abstractmethod
def get_node_processes(self):
return
@abc.abstractmethod
def get_configs(self):
return
@abc.abstractmethod
def configure_cluster(self, cluster):
pass
@abc.abstractmethod
def start_cluster(self, cluster):
pass
@abc.abstractmethod
def validate(self, cluster):
pass
@abc.abstractmethod
def validate_scaling(self, cluster, existing, additional):
pass
@abc.abstractmethod
def scale_cluster(self, cluster, instances):
pass
@abc.abstractmethod
def decommission_nodes(self, cluster, instances):
pass
@abc.abstractmethod
def get_edp_engine(self, cluster, job_type):
return
@abc.abstractmethod
def get_edp_job_types(self):
return []
@abc.abstractmethod
def get_edp_config_hints(self, job_type):
return {}
@abc.abstractmethod
def get_context(self, cluster, added=None, removed=None):
return
@abc.abstractmethod
def get_services(self):
return
@abc.abstractmethod
def get_required_services(self):
return
@abc.abstractmethod
def get_open_ports(self, node_group):
return

View File

@ -1,402 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from oslo_log import log as logging
from sahara.plugins import conductor
from sahara.plugins import context
import sahara.plugins.utils as utils
from sahara_plugin_mapr.i18n import _
import sahara_plugin_mapr.plugins.mapr.abstract.configurer as ac
from sahara_plugin_mapr.plugins.mapr.domain import distro as d
from sahara_plugin_mapr.plugins.mapr.domain import service as srvc
import sahara_plugin_mapr.plugins.mapr.services.management.management as mng
import sahara_plugin_mapr.plugins.mapr.services.mapreduce.mapreduce as mr
from sahara_plugin_mapr.plugins.mapr.services.maprfs import maprfs
from sahara_plugin_mapr.plugins.mapr.services.mysql import mysql
import sahara_plugin_mapr.plugins.mapr.services.yarn.yarn as yarn
from sahara_plugin_mapr.plugins.mapr.util import event_log as el
import sahara_plugin_mapr.plugins.mapr.util.general as util
import sahara_plugin_mapr.plugins.mapr.util.password_utils as pu
LOG = logging.getLogger(__name__)
_JAVA_HOME = '/usr/java/jdk1.7.0_51'
_CONFIGURE_SH_TIMEOUT = 600
_SET_MODE_CMD = 'maprcli cluster mapreduce set -mode '
_TOPO_SCRIPT = 'plugins/mapr/resources/topology.sh'
INSTALL_JAVA_SCRIPT = 'plugins/mapr/resources/install_java.sh'
INSTALL_SCALA_SCRIPT = 'plugins/mapr/resources/install_scala.sh'
INSTALL_MYSQL_CLIENT = 'plugins/mapr/resources/install_mysql_client.sh'
ADD_MAPR_REPO_SCRIPT = 'plugins/mapr/resources/add_mapr_repo.sh'
ADD_SECURITY_REPO_SCRIPT = 'plugins/mapr/resources/add_security_repos.sh'
SERVICE_INSTALL_PRIORITY = [
mng.Management(),
yarn.YARNv251(),
yarn.YARNv241(),
yarn.YARNv270(),
mr.MapReduce(),
maprfs.MapRFS(),
]
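# BaseConfigurer implements the full provisioning flow declared by
# AbstractConfigurer: SSH keep-alive tuning, MapR repository setup,
# bare-image preparation, service installation in priority order,
# configure.sh execution, config-file writing and cluster UI info updates.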
class BaseConfigurer(ac.AbstractConfigurer, metaclass=abc.ABCMeta):
def configure(self, cluster_context, instances=None):
instances = instances or cluster_context.get_instances()
self._configure_ssh_connection(cluster_context, instances)
self._install_mapr_repo(cluster_context, instances)
if not cluster_context.is_prebuilt:
self._prepare_bare_image(cluster_context, instances)
self._install_services(cluster_context, instances)
if cluster_context.is_node_aware:
self._configure_topology(cluster_context, instances)
self._configure_database(cluster_context, instances)
self._configure_services(cluster_context, instances)
self._configure_sh_cluster(cluster_context, instances)
self._set_cluster_mode(cluster_context, instances)
self._post_configure_services(cluster_context, instances)
self._write_config_files(cluster_context, instances)
self._configure_environment(cluster_context, instances)
self._update_cluster_info(cluster_context)
def update(self, cluster_context, instances=None):
LOG.debug('Configuring existing instances')
instances = instances or cluster_context.get_instances()
existing = cluster_context.existing_instances()
if cluster_context.is_node_aware:
self._configure_topology(cluster_context, existing)
if cluster_context.has_control_nodes(instances):
self._configure_sh_cluster(cluster_context, existing)
self._post_configure_sh(cluster_context, existing)
self._write_config_files(cluster_context, existing)
self._update_services(cluster_context, existing)
self._restart_services(cluster_context)
self._update_cluster_info(cluster_context)
LOG.info('Existing instances successfully configured')
def _configure_services(self, cluster_context, instances):
for service in cluster_context.cluster_services:
service.configure(cluster_context, instances)
def _install_services(self, cluster_context, instances):
for service in self._service_install_sequence(cluster_context):
service.install(cluster_context, instances)
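# Install order: sorted(reverse=True) puts the highest SERVICE_INSTALL_PRIORITY
# index first, so MapR-FS is installed before MapReduce, YARN and Management;
# services absent from the list fall back to their own _priority attribute.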
def _service_install_sequence(self, cluster_context):
def key(service):
if service in SERVICE_INSTALL_PRIORITY:
return SERVICE_INSTALL_PRIORITY.index(service)
return -service._priority
return sorted(cluster_context.cluster_services, key=key, reverse=True)
def _prepare_bare_image(self, cluster_context, instances):
LOG.debug('Preparing bare image')
if d.UBUNTU == cluster_context.distro:
self._install_security_repos(cluster_context, instances)
self._install_java(cluster_context, instances)
self._install_scala(cluster_context, instances)
self._install_mysql_client(cluster_context, instances)
LOG.debug('Bare images successfully prepared')
@el.provision_step(_("Install security repos"))
def _install_security_repos(self, cluster_context, instances):
LOG.debug("Installing security repos")
@el.provision_event()
def install_security_repos(instance):
return util.run_script(instance, ADD_SECURITY_REPO_SCRIPT, "root")
util.execute_on_instances(instances, install_security_repos)
@el.provision_step(_("Install MySQL client"))
def _install_mysql_client(self, cluster_context, instances):
LOG.debug("Installing MySQL client")
distro_name = cluster_context.distro.name
@el.provision_event()
def install_mysql_client(instance):
return util.run_script(instance, INSTALL_MYSQL_CLIENT,
"root", distro_name)
util.execute_on_instances(instances, install_mysql_client)
@el.provision_step(_("Install Scala"))
def _install_scala(self, cluster_context, instances):
LOG.debug("Installing Scala")
distro_name = cluster_context.distro.name
@el.provision_event()
def install_scala(instance):
return util.run_script(instance, INSTALL_SCALA_SCRIPT,
"root", distro_name)
util.execute_on_instances(instances, install_scala)
@el.provision_step(_("Install Java"))
def _install_java(self, cluster_context, instances):
LOG.debug("Installing Java")
distro_name = cluster_context.distro.name
@el.provision_event()
def install_java(instance):
return util.run_script(instance, INSTALL_JAVA_SCRIPT,
"root", distro_name)
util.execute_on_instances(instances, install_java)
@el.provision_step(_("Configure cluster topology"))
def _configure_topology(self, cluster_context, instances):
LOG.debug("Configuring cluster topology")
topology_map = cluster_context.topology_map
topology_map = ("%s %s" % item for item in topology_map.items())
topology_map = "\n".join(topology_map) + "\n"
data_path = "%s/topology.data" % cluster_context.mapr_home
script = utils.get_file_text(_TOPO_SCRIPT, 'sahara_plugin_mapr')
script_path = '%s/topology.sh' % cluster_context.mapr_home
@el.provision_event()
def write_topology_data(instance):
util.write_file(instance, data_path, topology_map, owner="root")
util.write_file(instance, script_path, script,
mode="+x", owner="root")
util.execute_on_instances(instances, write_topology_data)
LOG.info('Cluster topology successfully configured')
@el.provision_step(_("Write config files to instances"))
def _write_config_files(self, cluster_context, instances):
LOG.debug('Writing config files')
@el.provision_event()
def write_config_files(instance, config_files):
for file in config_files:
util.write_file(instance, file.path, file.data, mode=file.mode,
owner="mapr")
node_groups = util.unique_list(instances, lambda i: i.node_group)
for node_group in node_groups:
config_files = cluster_context.get_config_files(node_group)
ng_instances = [i for i in node_group.instances if i in instances]
util.execute_on_instances(ng_instances, write_config_files,
config_files=config_files)
LOG.debug("Config files are successfully written")
def _configure_environment(self, cluster_context, instances):
self.configure_general_environment(cluster_context, instances)
self._post_install_services(cluster_context, instances)
def _configure_database(self, cluster_context, instances):
mysql_instance = mysql.MySQL.get_db_instance(cluster_context)
@el.provision_event(instance=mysql_instance,
name=_("Configure database"))
def decorated():
distro_name = cluster_context.distro.name
distro_version = cluster_context.distro_version
mysql.MySQL.install_mysql(mysql_instance, distro_name,
distro_version)
mysql.MySQL.start_mysql_server(cluster_context)
mysql.MySQL.create_databases(cluster_context, instances)
decorated()
def _post_install_services(self, cluster_context, instances):
LOG.debug('Executing service post install hooks')
for s in cluster_context.cluster_services:
service_instances = cluster_context.filter_instances(instances,
service=s)
if service_instances:
s.post_install(cluster_context, instances)
LOG.info('Post install hooks successfully executed')
def _update_cluster_info(self, cluster_context):
LOG.debug('Updating UI information.')
info = {'Admin user credentials': {
    'Username': 'mapr',
    'Password': pu.get_mapr_password(cluster_context.cluster)}}
for service in cluster_context.cluster_services:
for title, node_process, ui_info in (
service.get_ui_info(cluster_context)):
removed = cluster_context.removed_instances(node_process)
instances = cluster_context.get_instances(node_process)
instances = [i for i in instances if i not in removed]
if len(instances) == 1:
display_name_template = "%(title)s"
else:
display_name_template = "%(title)s %(index)s"
for index, instance in enumerate(instances, start=1):
args = {"title": title, "index": index}
display_name = display_name_template % args
data = ui_info.copy()
data[srvc.SERVICE_UI] = (data[srvc.SERVICE_UI] %
instance.get_ip_or_dns_name())
info.update({display_name: data})
ctx = context.ctx()
conductor.cluster_update(ctx, cluster_context.cluster, {'info': info})
def configure_general_environment(self, cluster_context, instances=None):
LOG.debug('Executing post configure hooks')
mapr_user_pass = pu.get_mapr_password(cluster_context.cluster)
if not instances:
instances = cluster_context.get_instances()
def set_user_password(instance):
LOG.debug('Setting password for user "mapr"')
if self.mapr_user_exists(instance):
with instance.remote() as r:
r.execute_command(
'echo "%s:%s"|chpasswd' %
('mapr', mapr_user_pass),
run_as_root=True)
else:
LOG.warning('User "mapr" does not exist')
def create_home_mapr(instance):
target_path = '/home/mapr'
LOG.debug("Creating home directory for user 'mapr'")
args = {'path': target_path,
'user': 'mapr',
'group': 'mapr'}
cmd = ('mkdir -p %(path)s && chown %(user)s:%(group)s %(path)s'
% args)
if self.mapr_user_exists(instance):
with instance.remote() as r:
r.execute_command(cmd, run_as_root=True)
else:
LOG.warning('User "mapr" does not exist')
util.execute_on_instances(instances, set_user_password)
util.execute_on_instances(instances, create_home_mapr)
@el.provision_step(_("Execute configure.sh"))
def _configure_sh_cluster(self, cluster_context, instances):
LOG.debug('Executing configure.sh')
if not instances:
instances = cluster_context.get_instances()
script = cluster_context.configure_sh
db_specs = dict(mysql.MySQL.METRICS_SPECS._asdict())
db_specs.update({
'host': mysql.MySQL.get_db_instance(cluster_context).internal_ip,
'port': mysql.MySQL.MYSQL_SERVER_PORT,
})
with context.PluginsThreadGroup() as tg:
for instance in instances:
tg.spawn('configure-sh-%s' % instance.id,
self._configure_sh_instance, cluster_context,
instance, script, db_specs)
LOG.debug('Execution of configure.sh successfully completed')
@el.provision_event(instance_reference=2)
def _configure_sh_instance(self, cluster_context, instance, command,
specs):
if not self.mapr_user_exists(instance):
command += ' --create-user'
if cluster_context.check_for_process(instance, mng.METRICS):
command += (' -d %(host)s:%(port)s -du %(user)s -dp %(password)s '
'-ds %(db_name)s') % specs
with instance.remote() as r:
r.execute_command('sudo -i ' + command,
timeout=_CONFIGURE_SH_TIMEOUT)
@el.provision_step(_("Configure SSH connection"))
def _configure_ssh_connection(self, cluster_context, instances):
@el.provision_event()
def configure_ssh(instance):
echo_param = 'echo "KeepAlive yes" >> ~/.ssh/config'
echo_timeout = 'echo "ServerAliveInterval 60" >> ~/.ssh/config'
with instance.remote() as r:
r.execute_command(echo_param)
r.execute_command(echo_timeout)
util.execute_on_instances(instances, configure_ssh)
def mapr_user_exists(self, instance):
with instance.remote() as r:
ec, __ = r.execute_command(
"id -u %s" %
'mapr', run_as_root=True, raise_when_error=False)
return ec == 0
def post_start(self, cluster_context, instances=None):
instances = instances or cluster_context.get_instances()
LOG.debug('Executing service post start hooks')
for service in cluster_context.cluster_services:
updated = cluster_context.filter_instances(instances,
service=service)
service.post_start(cluster_context, updated)
LOG.info('Post start hooks successfully executed')
@el.provision_step(_("Set cluster mode"))
def _set_cluster_mode(self, cluster_context, instances):
cluster_mode = cluster_context.cluster_mode
if not cluster_mode:
return
command = "maprcli cluster mapreduce set -mode %s" % cluster_mode
@el.provision_event()
def set_cluster_mode(instance):
return util.execute_command([instance], command,
run_as='mapr')
util.execute_on_instances(instances, set_cluster_mode)
@el.provision_step(_("Install MapR repositories"))
def _install_mapr_repo(self, cluster_context, instances):
distro_name = cluster_context.distro.name
@el.provision_event()
def install_mapr_repos(instance):
return util.run_script(instance, ADD_MAPR_REPO_SCRIPT, "root",
distro_name, **cluster_context.mapr_repos)
util.execute_on_instances(instances, install_mapr_repos)
def _update_services(self, cluster_context, instances):
for service in cluster_context.cluster_services:
updated = cluster_context.filter_instances(instances,
service=service)
service.update(cluster_context, updated)
def _restart_services(self, cluster_context):
restart = cluster_context.should_be_restarted
for service, instances in restart.items():
service.restart(util.unique_list(instances))
def _post_configure_sh(self, cluster_context, instances):
LOG.debug('Executing post configure.sh hooks')
for service in cluster_context.cluster_services:
service.post_configure_sh(cluster_context, instances)
LOG.info('Post configure.sh hooks successfully executed')
def _post_configure_services(self, cluster_context, instances):
for service in cluster_context.cluster_services:
service.post_configure(cluster_context, instances)

View File

@ -1,443 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
from oslo_config import cfg
import sahara.plugins.exceptions as e
from sahara.plugins import topology_helper as th
import sahara.plugins.utils as u
from sahara_plugin_mapr.i18n import _
import sahara_plugin_mapr.plugins.mapr.abstract.cluster_context as cc
import sahara_plugin_mapr.plugins.mapr.domain.configuration_file as bcf
import sahara_plugin_mapr.plugins.mapr.domain.distro as distro
import sahara_plugin_mapr.plugins.mapr.services.management.management as mng
import sahara_plugin_mapr.plugins.mapr.services.maprfs.maprfs as mfs
import sahara_plugin_mapr.plugins.mapr.services.oozie.oozie as oozie
from sahara_plugin_mapr.plugins.mapr.services.swift import swift
import sahara_plugin_mapr.plugins.mapr.services.yarn.yarn as yarn
import sahara_plugin_mapr.plugins.mapr.util.general as g
import sahara_plugin_mapr.plugins.mapr.util.service_utils as su
CONF = cfg.CONF
CONF.import_opt("enable_data_locality", "sahara.topology.topology_helper")
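# BaseClusterContext memoizes derived cluster facts (distro, repos, hadoop
# paths, service URIs) in private attributes that start as None and are
# filled lazily by the properties below on first access.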
class BaseClusterContext(cc.AbstractClusterContext):
ubuntu_base = 'http://package.mapr.com/releases/v%s/ubuntu/ mapr optional'
centos_base = 'http://package.mapr.com/releases/v%s/redhat/'
def __init__(self, cluster, version_handler, added=None, removed=None):
self._cluster = cluster
self._distro = None
self._distro_version = None
self._all_services = version_handler.get_services()
self._required_services = version_handler.get_required_services()
self._cluster_services = None
self._mapr_home = '/opt/mapr'
self._name_node_uri = 'maprfs:///'
self._cluster_mode = None
self._node_aware = None
self._oozie_server_uri = None
self._oozie_server = None
self._oozie_http = None
self._some_instance = None
self._configure_sh_path = None
self._configure_sh = None
self._mapr_db = None
self._hadoop_home = None
self._hadoop_version = None
self._added_instances = added or []
self._removed_instances = removed or []
self._changed_instances = (
self._added_instances + self._removed_instances)
self._existing_instances = [i for i in self.get_instances()
if i not in self._changed_instances]
self._restart = collections.defaultdict(list)
self._ubuntu_base_repo = None
self._ubuntu_ecosystem_repo = None
self._centos_base_repo = None
self._centos_ecosystem_repo = None
self._repos = {}
self._is_prebuilt = None
self._local_repo = '/opt/mapr-repository'
self._mapr_version = None
@property
def cluster(self):
return self._cluster
@property
def cluster_services(self):
if not self._cluster_services:
self._cluster_services = self.get_cluster_services()
return self._cluster_services
@property
def required_services(self):
return self._required_services
@property
def all_services(self):
return self._all_services
@property
def mapr_home(self):
return self._mapr_home
@property
def hadoop_version(self):
return self._hadoop_version
@property
def hadoop_home(self):
if not self._hadoop_home:
f = '%(mapr_home)s/hadoop/hadoop-%(hadoop_version)s'
args = {
'mapr_home': self.mapr_home,
'hadoop_version': self.hadoop_version,
}
self._hadoop_home = f % args
return self._hadoop_home
@property
def name_node_uri(self):
return self._name_node_uri
@property
def oozie_server_uri(self):
if not self._oozie_server_uri:
oozie_http = self.oozie_http
url = 'http://%s/oozie' % oozie_http if oozie_http else None
self._oozie_server_uri = url
return self._oozie_server_uri
@property
def oozie_server(self):
if not self._oozie_server:
self._oozie_server = self.get_instance(oozie.OOZIE)
return self._oozie_server
@property
def oozie_http(self):
if not self._oozie_http:
oozie_server = self.oozie_server
ip = oozie_server.management_ip if oozie_server else None
self._oozie_http = '%s:11000' % ip if ip else None
return self._oozie_http
@property
def cluster_mode(self):
return self._cluster_mode
@property
def is_node_aware(self):
return self._node_aware and CONF.enable_data_locality
@property
def some_instance(self):
if not self._some_instance:
self._some_instance = self.cluster.node_groups[0].instances[0]
return self._some_instance
@property
def distro_version(self):
if not self._distro_version:
self._distro_version = distro.get_version(self.some_instance)
return self._distro_version
@property
def distro(self):
if not self._distro:
self._distro = distro.get(self.some_instance)
return self._distro
@property
def mapr_db(self):
if self._mapr_db is None:
mapr_db = mfs.MapRFS.ENABLE_MAPR_DB_CONFIG
mapr_db = self._get_cluster_config_value(mapr_db)
self._mapr_db = '-noDB' if not mapr_db else ''
return self._mapr_db
@property
def configure_sh_path(self):
if not self._configure_sh_path:
self._configure_sh_path = '%s/server/configure.sh' % self.mapr_home
return self._configure_sh_path
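# The rendered configure.sh invocation looks roughly like this
# (illustrative addresses only):
#   /opt/mapr/server/configure.sh -N my-cluster -C 10.0.0.3 \
#       -Z 10.0.0.5,10.0.0.6,10.0.0.7 -no-autostart -f -noDB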
@property
def configure_sh(self):
if not self._configure_sh:
f = ('%(script_path)s'
' -N %(cluster_name)s'
' -C %(cldbs)s'
' -Z %(zookeepers)s'
' -no-autostart -f %(m7)s')
args = {
'script_path': self.configure_sh_path,
'cluster_name': self.cluster.name,
'cldbs': self.get_cldb_nodes_ip(),
'zookeepers': self.get_zookeeper_nodes_ip(),
'm7': self.mapr_db
}
self._configure_sh = f % args
return self._configure_sh
def _get_cluster_config_value(self, config):
cluster_configs = self.cluster.cluster_configs
service = config.applicable_target
name = config.name
if service in cluster_configs and name in cluster_configs[service]:
return cluster_configs[service][name]
else:
return config.default_value
def get_node_processes(self):
node_processes = []
for ng in self.cluster.node_groups:
for np in ng.node_processes:
if np not in node_processes:
node_processes.append(self.get_node_process_by_name(np))
return node_processes
def get_node_process_by_name(self, name):
for service in self.cluster_services:
for node_process in service.node_processes:
if node_process.ui_name == name:
return node_process
def get_instances(self, node_process=None):
if node_process is not None:
node_process = su.get_node_process_name(node_process)
return u.get_instances(self.cluster, node_process)
def get_instance(self, node_process):
node_process_name = su.get_node_process_name(node_process)
instances = u.get_instances(self.cluster, node_process_name)
return instances[0] if instances else None
def get_instances_ip(self, node_process):
return [i.internal_ip for i in self.get_instances(node_process)]
def get_instance_ip(self, node_process):
i = self.get_instance(node_process)
return i.internal_ip if i else None
def get_zookeeper_nodes_ip_with_port(self, separator=','):
return separator.join(['%s:%s' % (ip, mng.ZK_CLIENT_PORT)
for ip in self.get_instances_ip(mng.ZOOKEEPER)])
def check_for_process(self, instance, process):
return su.has_node_process(instance, process)
def get_services_configs_dict(self, services=None):
if not services:
services = self.cluster_services
result = dict()
for service in services:
result.update(service.get_configs_dict())
return result
def get_chosen_service_version(self, service_name):
service_configs = self.cluster.cluster_configs.get(service_name, None)
if not service_configs:
return None
return service_configs.get('%s Version' % service_name, None)
def get_cluster_services(self, node_group=None):
node_processes = None
if node_group:
node_processes = node_group.node_processes
else:
node_processes = [np for ng in self.cluster.node_groups
for np in ng.node_processes]
node_processes = g.unique_list(node_processes)
services = g.unique_list(node_processes, self.get_service)
return services + [swift.Swift()]
def get_service(self, node_process):
ui_name = self.get_service_name_by_node_process(node_process)
if ui_name is None:
raise e.PluginInvalidDataException(
_('Service not found in services list'))
version = self.get_chosen_service_version(ui_name)
service = self._find_service_instance(ui_name, version)
if service is None:
raise e.PluginInvalidDataException(_('Can not map service'))
return service
def _find_service_instance(self, ui_name, version):
# if version is None, the latest service version is returned
for service in self.all_services[::-1]:
if service.ui_name == ui_name:
if version is not None and service.version != version:
continue
return service
def get_service_name_by_node_process(self, node_process):
node_process_name = su.get_node_process_name(node_process)
for service in self.all_services:
service_node_processes = [np.ui_name
for np in service.node_processes]
if node_process_name in service_node_processes:
return service.ui_name
def get_instances_count(self, node_process=None):
if node_process is not None:
node_process = su.get_node_process_name(node_process)
return u.get_instances_count(self.cluster, node_process)
def get_node_groups(self, node_process=None):
if node_process is not None:
node_process = su.get_node_process_name(node_process)
return u.get_node_groups(self.cluster, node_process)
def get_cldb_nodes_ip(self, separator=','):
return separator.join(self.get_instances_ip(mfs.CLDB))
def get_zookeeper_nodes_ip(self, separator=','):
return separator.join(
self.get_instances_ip(mng.ZOOKEEPER))
def get_resourcemanager_ip(self):
return self.get_instance_ip(yarn.RESOURCE_MANAGER)
def get_historyserver_ip(self):
return self.get_instance_ip(yarn.HISTORY_SERVER)
def has_control_nodes(self, instances):
for inst in instances:
zookeepers = self.check_for_process(inst, mng.ZOOKEEPER)
cldbs = self.check_for_process(inst, mfs.CLDB)
if zookeepers or cldbs:
return True
return False
def is_present(self, service):
is_service_subclass = lambda s: isinstance(s, service.__class__)
return any(is_service_subclass(s) for s in self.cluster_services)
def filter_instances(self, instances, node_process=None, service=None):
if node_process:
return su.filter_by_node_process(instances, node_process)
if service:
return su.filter_by_service(instances, service)
return list(instances)
def removed_instances(self, node_process=None, service=None):
instances = self._removed_instances
return self.filter_instances(instances, node_process, service)
def added_instances(self, node_process=None, service=None):
instances = self._added_instances
return self.filter_instances(instances, node_process, service)
def changed_instances(self, node_process=None, service=None):
instances = self._changed_instances
return self.filter_instances(instances, node_process, service)
def existing_instances(self, node_process=None, service=None):
instances = self._existing_instances
return self.filter_instances(instances, node_process, service)
@property
def should_be_restarted(self):
return self._restart
@property
def mapr_repos(self):
if not self._repos:
self._repos = {
"ubuntu_mapr_base_repo": self.ubuntu_base_repo,
"ubuntu_mapr_ecosystem_repo": self.ubuntu_ecosystem_repo,
"centos_mapr_base_repo": self.centos_base_repo,
"centos_mapr_ecosystem_repo": self.centos_ecosystem_repo,
}
return self._repos
@property
def local_repo(self):
return self._local_repo
@property
def is_prebuilt(self):
if self._is_prebuilt is None:
self._is_prebuilt = g.is_directory(
self.some_instance, self.local_repo)
return self._is_prebuilt
@property
def mapr_version(self):
return self._mapr_version
@property
def ubuntu_base_repo(self):
default_value = (self._ubuntu_base_repo or
                 self.ubuntu_base % self.mapr_version)
return self.cluster.cluster_configs.get(
'general', {}).get('Ubuntu base repo', default_value)
@property
def ubuntu_ecosystem_repo(self):
default_value = self._ubuntu_ecosystem_repo
return self.cluster.cluster_configs.get(
'general', {}).get('Ubuntu ecosystem repo', default_value)
@property
def centos_base_repo(self):
default_value = (self._centos_base_repo or
                 self.centos_base % self.mapr_version)
return self.cluster.cluster_configs.get(
'general', {}).get('CentOS base repo', default_value)
@property
def centos_ecosystem_repo(self):
default_value = self._centos_ecosystem_repo
return self.cluster.cluster_configs.get(
'general', {}).get('CentOS ecosystem repo', default_value)
def get_configuration(self, node_group):
services = self.get_cluster_services(node_group)
user_configs = node_group.configuration()
default_configs = self.get_services_configs_dict(services)
return u.merge_configs(default_configs, user_configs)
def get_config_files(self, node_group):
services = self.get_cluster_services(node_group)
configuration = self.get_configuration(node_group)
instance = node_group.instances[0]
config_files = []
for service in services:
service_conf_files = service.get_config_files(
cluster_context=self,
configs=configuration[service.ui_name],
instance=instance,
)
for conf_file in service_conf_files:
file_atr = bcf.FileAttr(conf_file.remote_path,
conf_file.render(), conf_file.mode,
conf_file.owner)
config_files.append(file_atr)
return config_files
@property
def topology_map(self):
return th.generate_topology_map(self.cluster, self.is_node_aware)

View File

@ -1,35 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sahara_plugin_mapr.plugins.mapr.abstract.cluster_validator as v
import sahara_plugin_mapr.plugins.mapr.util.validation_utils as vu
import sahara_plugin_mapr.plugins.mapr.versions.version_handler_factory as vhf
class BaseValidator(v.AbstractValidator):
def validate(self, cluster_context):
for service in cluster_context.required_services:
vu.assert_present(service, cluster_context)
for service in cluster_context.cluster_services:
for rule in service.validation_rules:
rule(cluster_context)
def validate_scaling(self, cluster_context, existing, additional):
cluster = cluster_context.cluster
version = cluster.hadoop_version
handler = vhf.VersionHandlerFactory.get().get_handler(version)
cluster = vu.create_fake_cluster(cluster, existing, additional)
cluster_context = handler.get_context(cluster)
self.validate(cluster_context)

View File

@ -1,93 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from sahara.plugins import context
from sahara.plugins import edp
from sahara.service.edp.job_binaries import manager as jb_manager
import sahara_plugin_mapr.plugins.mapr.util.maprfs_helper as mfs
import sahara_plugin_mapr.plugins.mapr.versions.version_handler_factory as vhf
class MapROozieJobEngine(edp.PluginsOozieJobEngine):
def __init__(self, cluster):
super(MapROozieJobEngine, self).__init__(cluster)
self.cluster_context = self._get_cluster_context(self.cluster)
hdfs_user = 'mapr'
def get_hdfs_user(self):
return MapROozieJobEngine.hdfs_user
def create_hdfs_dir(self, remote, dir_name):
mfs.create_maprfs4_dir(remote, dir_name, self.get_hdfs_user())
def _upload_workflow_file(self, where, job_dir, wf_xml, hdfs_user):
f_name = 'workflow.xml'
with where.remote() as r:
mfs.put_file_to_maprfs(r, wf_xml, f_name, job_dir, hdfs_user)
return os.path.join(job_dir, f_name)
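# Uploads job mains and libs to MapR-FS as the 'mapr' user: each binary is
# first copied onto the Oozie node via the job-binaries manager, then moved
# into the job directory (libs under <job_dir>/lib).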
def _upload_job_files_to_hdfs(self, where, job_dir, job, configs,
proxy_configs=None):
mains = job.mains or []
libs = job.libs or []
builtin_libs = edp.get_builtin_binaries(job, configs)
uploaded_paths = []
hdfs_user = self.get_hdfs_user()
lib_dir = job_dir + '/lib'
with where.remote() as r:
for m in mains:
job_binary = jb_manager.JOB_BINARIES.get_job_binary_by_url(m.url)
path = job_binary.copy_binary_to_cluster(
    m, proxy_configs=proxy_configs, remote=r, context=context.ctx())
target = os.path.join(job_dir, m.name)
mfs.copy_from_local(r, path, target, hdfs_user)
uploaded_paths.append(target)
if len(libs) > 0:
self.create_hdfs_dir(r, lib_dir)
for l in libs:
job_binary = jb_manager.JOB_BINARIES.get_job_binary_by_url(l.url)
path = job_binary.copy_binary_to_cluster(
    l, proxy_configs=proxy_configs, remote=r, context=context.ctx())
target = os.path.join(lib_dir, l.name)
mfs.copy_from_local(r, path, target, hdfs_user)
uploaded_paths.append(target)
for lib in builtin_libs:
mfs.put_file_to_maprfs(r, lib['raw'], lib['name'], lib_dir,
hdfs_user)
uploaded_paths.append(lib_dir + '/' + lib['name'])
return uploaded_paths
def get_name_node_uri(self, cluster):
return self.cluster_context.name_node_uri
def get_oozie_server_uri(self, cluster):
return self.cluster_context.oozie_server_uri
def get_oozie_server(self, cluster):
return self.cluster_context.oozie_server
def get_resource_manager_uri(self, cluster):
return self.cluster_context.resource_manager_uri
def _get_cluster_context(self, cluster):
h_version = cluster.hadoop_version
v_handler = vhf.VersionHandlerFactory.get().get_handler(h_version)
return v_handler.get_context(cluster)

View File

@ -1,121 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
from sahara.plugins import health_check_base
from sahara_plugin_mapr.i18n import _
import sahara_plugin_mapr.plugins.mapr.abstract.health_checker as hc
from sahara_plugin_mapr.plugins.mapr.domain import node_process as np
from sahara_plugin_mapr.plugins.mapr.services.management import management
from sahara_plugin_mapr.plugins.mapr.services.spark import spark
class BaseHealthChecker(hc.AbstractHealthChecker):
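# Only node processes that expose open ports get an individual check;
# Spark-on-YARN processes are skipped here, and ZooKeeper is covered by the
# dedicated ZookeeperCheck built in get_checks() below.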
def _is_available(self, process):
return (process.open_ports and
        process not in spark.SparkOnYarn().node_processes)
def get_checks(self, cluster_context, instances=None):
checks = [
functools.partial(ZookeeperCheck, cluster_context=cluster_context)]
for node_process in cluster_context.get_node_processes():
if self._is_available(
node_process) and node_process.ui_name != 'ZooKeeper':
checks.append(functools.partial(
    MapRNodeProcessCheck,
    cluster_context=cluster_context,
    process=node_process))
return checks
class ZookeeperCheck(health_check_base.BasicHealthCheck):
def __init__(self, cluster, cluster_context):
super(ZookeeperCheck, self).__init__(cluster)
self.cluster_context = cluster_context
def get_health_check_name(self):
return 'MapR ZooKeeper check'
def is_available(self):
return self.cluster_context.cluster.plugin_name == 'mapr'
def _is_zookeeper_running(self, instance):
cmd = 'service mapr-zookeeper status'
with instance.remote() as r:
__, out = r.execute_command(cmd, run_as_root=True)
return ('zookeeper running as process' in out or
        'active (running)' in out)
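# Health semantics: red when no ZooKeeper instance is running, yellow when
# only a subset is running, otherwise a green status message is returned.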
def check_health(self):
instances = self.cluster_context.get_instances(
node_process=management.ZOOKEEPER)
active_count = 0
for instance in instances:
if self._is_zookeeper_running(instance):
active_count += 1
if active_count == 0:
raise health_check_base.RedHealthError(_(
"ZooKeeper is not in running state"))
if active_count < len(instances):
raise health_check_base.YellowHealthError(_(
"Some ZooKeeper processes are not in running state"))
return _("ZooKeeper is in running state")
class MapRNodeProcessCheck(health_check_base.BasicHealthCheck):
IMPORTANT_PROCESSES = [
'CLDB',
'FileServer',
'NodeManager',
'ResourceManager'
]
def __init__(self, cluster, cluster_context, process):
super(MapRNodeProcessCheck, self).__init__(cluster)
self.process = process
self.cluster_context = cluster_context
def get_health_check_name(self):
return 'MapR %s check' % self.process.ui_name
def is_available(self):
return self.cluster_context.cluster.plugin_name == 'mapr'
def check_health(self):
instances = self.cluster_context.get_instances(
node_process=self.process)
active_count = 0
for instance in instances:
status = self.process.status(instance)
if status == np.Status.RUNNING:
active_count += 1
if active_count == 0:
if self.process.ui_name in self.IMPORTANT_PROCESSES:
raise health_check_base.RedHealthError(_(
"%s is not in running state") % self.process.ui_name)
else:
raise health_check_base.YellowHealthError(_(
"%s is not in running state") % self.process.ui_name)
if active_count < len(instances):
if self.process.ui_name in self.IMPORTANT_PROCESSES:
raise health_check_base.YellowHealthError(_(
"Some %s processes are not in running state")
% self.process.ui_name)
return _("%s is in running state") % self.process.ui_name

View File

@ -1,201 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
from oslo_log import log as logging
from oslo_serialization import jsonutils as json
from oslo_utils import timeutils
from sahara.plugins import context
import sahara.plugins.exceptions as ex
from sahara.plugins import utils
from sahara_plugin_mapr.i18n import _
import sahara_plugin_mapr.plugins.mapr.abstract.node_manager as s
import sahara_plugin_mapr.plugins.mapr.services.management.management as mng
import sahara_plugin_mapr.plugins.mapr.services.maprfs.maprfs as mfs
import sahara_plugin_mapr.plugins.mapr.util.event_log as el
LOG = logging.getLogger(__name__)
GET_SERVER_ID_CMD = ('maprcli node list -json -filter [ip==%s] -columns id'
' | grep id | grep -o \'[0-9]*\'')
NODE_LIST_CMD = 'maprcli node list -json'
MOVE_NODE_CMD = 'maprcli node move -serverids %s -topology /decommissioned'
REMOVE_NODE_CMD = ('maprcli node remove -filter [ip==%(ip)s] -nodes %(nodes)s'
' -zkconnect %(zookeepers)s')
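# maprcli command templates used below; the placeholders are filled per
# instance with its internal IP, FQDN and the ZooKeeper quorum address list.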
WAIT_NODE_ALARM_NO_HEARTBEAT = 360
WARDEN_SERVICE = 'warden'
START = 'start'
STOP = 'stop'
DELAY = 5
DEFAULT_RETRY_COUNT = 10
class BaseNodeManager(s.AbstractNodeManager):
def move_nodes(self, cluster_context, instances):
LOG.debug("Moving the nodes to /decommissioned topology")
cldb_instances = self._get_cldb_instances(cluster_context, instances)
with random.choice(cldb_instances).remote() as cldb_remote:
for instance in instances:
with instance.remote() as r:
command = GET_SERVER_ID_CMD % instance.internal_ip
ec, out = r.execute_command(command, run_as_root=True)
command = MOVE_NODE_CMD % out.strip()
cldb_remote.execute_command(command, run_as_root=True)
LOG.info("Nodes successfully moved")
def remove_nodes(self, cluster_context, instances):
LOG.debug("Removing nodes from cluster")
cldb_instances = self._get_cldb_instances(cluster_context, instances)
with random.choice(cldb_instances).remote() as cldb_remote:
for instance in instances:
args = {
'ip': instance.internal_ip,
'nodes': instance.fqdn(),
'zookeepers':
cluster_context.get_zookeeper_nodes_ip_with_port(),
}
command = REMOVE_NODE_CMD % args
cldb_remote.execute_command(command, run_as_root=True)
LOG.info("Nodes successfully removed")
def start(self, cluster_context, instances=None):
instances = instances or cluster_context.get_instances()
zookeepers = cluster_context.filter_instances(instances, mng.ZOOKEEPER)
cldbs = cluster_context.filter_instances(instances, mfs.CLDB)
others = [i for i in instances
if not cluster_context.check_for_process(i, mfs.CLDB)]
utils.add_provisioning_step(cluster_context.cluster.id,
_("Start ZooKeepers nodes"),
len(zookeepers))
self._start_zk_nodes(zookeepers)
utils.add_provisioning_step(cluster_context.cluster.id,
_("Start CLDB nodes"), len(cldbs))
self._start_cldb_nodes(cldbs)
if others:
utils.add_provisioning_step(cluster_context.cluster.id,
_("Start non-CLDB nodes"),
len(list(others)))
self._start_non_cldb_nodes(others)
self._await_cldb(cluster_context, instances)
def stop(self, cluster_context, instances=None):
instances = instances or cluster_context.get_instances()
zookeepers = cluster_context.filter_instances(instances, mng.ZOOKEEPER)
utils.add_provisioning_step(cluster_context.cluster.id,
_("Stop ZooKeepers nodes"),
len(zookeepers))
self._stop_zk_nodes(zookeepers)
utils.add_provisioning_step(cluster_context.cluster.id,
_("Stop Warden nodes"), len(instances))
self._stop_warden_on_nodes(instances)
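# Polls 'maprcli node list' on a CLDB node until the command reports an OK
# status; an instance still missing from the listing after DEFAULT_RETRY_COUNT
# successful polls, or expiry of the overall timeout, raises
# HadoopProvisionError.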
def _await_cldb(self, cluster_context, instances=None, timeout=600):
instances = instances or cluster_context.get_instances()
cldb_node = cluster_context.get_instance(mfs.CLDB)
start_time = timeutils.utcnow()
retry_count = 0
with cldb_node.remote() as r:
LOG.debug("Waiting {count} seconds for CLDB initialization".format(
count=timeout))
while timeutils.delta_seconds(start_time,
timeutils.utcnow()) < timeout:
ec, out = r.execute_command(NODE_LIST_CMD,
raise_when_error=False)
resp = json.loads(out)
status = resp['status']
if str(status).lower() == 'ok':
ips = [n['ip'] for n in resp['data']]
retry_count += 1
for i in instances:
if (i.internal_ip not in ips and
(retry_count > DEFAULT_RETRY_COUNT)):
msg = _("Node failed to connect to CLDB: %s"
) % i.internal_ip
raise ex.HadoopProvisionError(msg)
break
else:
context.sleep(DELAY)
else:
raise ex.HadoopProvisionError(_("CLDB failed to start"))
def _start_nodes(self, instances, sys_service):
with context.PluginsThreadGroup() as tg:
for instance in instances:
tg.spawn('start-%s-%s' % (sys_service, instance.id),
self._start_service, instance, sys_service)
def _stop_nodes(self, instances, sys_service):
with context.PluginsThreadGroup() as tg:
for instance in instances:
tg.spawn('stop-%s-%s' % (sys_service, instance.id),
self._stop_service, instance, sys_service)
def _start_zk_nodes(self, instances):
LOG.debug('Starting ZooKeeper nodes')
self._start_nodes(instances, mng.ZOOKEEPER.ui_name)
LOG.info('ZooKeeper nodes successfully started')
def _start_cldb_nodes(self, instances):
LOG.debug('Starting CLDB nodes')
self._start_nodes(instances, WARDEN_SERVICE)
LOG.info('CLDB nodes successfully started')
def _start_non_cldb_nodes(self, instances):
LOG.debug('Starting non-control nodes')
self._start_nodes(instances, WARDEN_SERVICE)
LOG.info('Non-control nodes successfully started')
def _stop_zk_nodes(self, instances):
self._stop_nodes(instances, mng.ZOOKEEPER.ui_name)
def _stop_warden_on_nodes(self, instances):
self._stop_nodes(instances, WARDEN_SERVICE)
@staticmethod
def _do_service_action(instance, service, action):
with instance.remote() as r:
cmd = "service mapr-%(service)s %(action)s"
args = {'service': service.lower(), 'action': action}
cmd = cmd % args
LOG.debug(
'Executing "{command}" on node={ip}'.format(
command=cmd, ip=instance.internal_ip))
r.execute_command(cmd, run_as_root=True)
@el.provision_event(instance_reference=1)
def _start_service(self, instance, service):
return self._do_service_action(instance, service, START)
@el.provision_event(instance_reference=1)
def _stop_service(self, instance, service):
return self._do_service_action(instance, service, STOP)
def _get_cldb_instances(self, cluster_context, instances):
current = self._get_current_cluster_instances(cluster_context,
instances)
return cluster_context.filter_instances(current, mfs.CLDB)
@staticmethod
def await_no_heartbeat():
delay = WAIT_NODE_ALARM_NO_HEARTBEAT
LOG.debug('Waiting for "NO_HEARTBEAT" alarm')
context.sleep(delay)
def _get_current_cluster_instances(self, cluster_context, instances):
all_instances = cluster_context.get_instances()
return [x for x in all_instances if x not in instances]

View File

@ -1,198 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections as c
import sahara.plugins.provisioning as p
import sahara.plugins.utils as u
from sahara_plugin_mapr.i18n import _
import sahara_plugin_mapr.plugins.mapr.abstract.version_handler as vh
import sahara_plugin_mapr.plugins.mapr.base.base_cluster_configurer as b_conf
import sahara_plugin_mapr.plugins.mapr.base.base_cluster_validator as bv
import sahara_plugin_mapr.plugins.mapr.base.base_edp_engine as edp
import sahara_plugin_mapr.plugins.mapr.base.base_health_checker as health
import sahara_plugin_mapr.plugins.mapr.base.base_node_manager as bs
from sahara_plugin_mapr.plugins.mapr import images
import sahara_plugin_mapr.plugins.mapr.util.general as util
class BaseVersionHandler(vh.AbstractVersionHandler):
def __init__(self):
self._validator = bv.BaseValidator()
self._configurer = b_conf.BaseConfigurer()
self._health_checker = health.BaseHealthChecker()
self._node_manager = bs.BaseNodeManager()
self._version = None
self._required_services = []
self._services = []
self._node_processes = {}
self._configs = []
self.images = images
def get_edp_engine(self, cluster, job_type):
if job_type in edp.MapROozieJobEngine.get_supported_job_types():
return edp.MapROozieJobEngine(cluster)
return None
def get_edp_job_types(self):
return edp.MapROozieJobEngine.get_supported_job_types()
def get_edp_config_hints(self, job_type):
return edp.MapROozieJobEngine.get_possible_job_config(job_type)
def get_services(self):
return self._services
def get_required_services(self):
return self._required_services
def get_node_processes(self):
if not self._node_processes:
self._node_processes = {
s.ui_name: [np.ui_name for np in s.node_processes]
for s in self.get_services() if s.node_processes}
return self._node_processes
def get_configs(self):
if not self._configs:
configs = [c for s in self.get_services() for c in s.get_configs()]
configs += self._get_version_configs()
configs += self._get_repo_configs()
self._configs = util.unique_list(configs)
return self._configs
def _get_repo_configs(self):
ubuntu_base = p.Config(
name="Ubuntu base repo",
applicable_target="general",
scope='cluster',
priority=1,
default_value="",
description=_(
'Specifies Ubuntu MapR core repository.')
)
centos_base = p.Config(
name="CentOS base repo",
applicable_target="general",
scope='cluster',
priority=1,
default_value="",
description=_(
'Specifies CentOS MapR core repository.')
)
ubuntu_eco = p.Config(
name="Ubuntu ecosystem repo",
applicable_target="general",
scope='cluster',
priority=1,
default_value="",
description=_(
'Specifies Ubuntu MapR ecosystem repository.')
)
centos_eco = p.Config(
name="CentOS ecosystem repo",
applicable_target="general",
scope='cluster',
priority=1,
default_value="",
description=_(
'Specifies CentOS MapR ecosystem repository.')
)
return [ubuntu_base, centos_base, ubuntu_eco, centos_eco]
def _get_version_configs(self):
services = self.get_services()
service_version_dict = c.defaultdict(list)
for service in services:
service_version_dict[service.ui_name].append(service.version)
result = []
for service in services:
versions = service_version_dict[service.ui_name]
if len(versions) > 1:
result.append(service.get_version_config(versions))
return result
def get_configs_dict(self):
configs = dict()
for service in self.get_services():
configs.update(service.get_configs_dict())
return configs
def configure_cluster(self, cluster):
instances = u.get_instances(cluster)
cluster_context = self.get_context(cluster, added=instances)
self._configurer.configure(cluster_context)
def start_cluster(self, cluster):
instances = u.get_instances(cluster)
cluster_context = self.get_context(cluster, added=instances)
self._node_manager.start(cluster_context)
self._configurer.post_start(cluster_context)
def validate(self, cluster):
cluster_context = self.get_context(cluster)
self._validator.validate(cluster_context)
def validate_scaling(self, cluster, existing, additional):
cluster_context = self.get_context(cluster)
self._validator.validate_scaling(cluster_context, existing, additional)
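# Scaling and decommissioning reset the memoized _cluster_services so the
# service list is recomputed against the changed node groups before
# reconfiguration.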
def scale_cluster(self, cluster, instances):
cluster_context = self.get_context(cluster, added=instances)
cluster_context._cluster_services = None
self._configurer.configure(cluster_context, instances)
self._configurer.update(cluster_context, instances)
self._node_manager.start(cluster_context, instances)
def decommission_nodes(self, cluster, instances):
cluster_context = self.get_context(cluster, removed=instances)
cluster_context._cluster_services = None
self._node_manager.move_nodes(cluster_context, instances)
self._node_manager.stop(cluster_context, instances)
self._node_manager.await_no_heartbeat()
self._node_manager.remove_nodes(cluster_context, instances)
self._configurer.update(cluster_context, instances)
def get_open_ports(self, node_group):
result = []
for service in self.get_services():
for node_process in service.node_processes:
if node_process.ui_name in node_group.node_processes:
result += node_process.open_ports
return util.unique_list(result)
def get_cluster_checks(self, cluster):
cluster_context = self.get_context(cluster)
return self._health_checker.get_checks(cluster_context)
def get_image_arguments(self):
if hasattr(self, 'images'):
return self.images.get_image_arguments()
else:
return NotImplemented
def pack_image(self, hadoop_version, remote, test_only=False,
image_arguments=None):
if hasattr(self, 'images'):
self.images.pack_image(
remote, test_only=test_only, image_arguments=image_arguments)
def validate_images(self, cluster, test_only=False, image_arguments=None):
if hasattr(self, 'images'):
self.images.validate_images(
cluster, test_only=test_only, image_arguments=image_arguments)

View File

@ -1,192 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import os
import re
import jinja2 as j2
import sahara.plugins.exceptions as e
import sahara.plugins.utils as utils
from sahara_plugin_mapr.i18n import _
class FileAttr(object, metaclass=abc.ABCMeta):
def __init__(self, path, data, mode, owner):
self.path = path
self.data = data
self.mode = mode
self.owner = owner
class BaseConfigurationFile(object, metaclass=abc.ABCMeta):
def __init__(self, file_name):
self.f_name = file_name
self._config_dict = dict()
self._local_path = None
self._remote_path = None
self.mode = None
self.owner = None
@property
def remote_path(self):
return self._remote_path
@remote_path.setter
def remote_path(self, path):
self._remote_path = os.path.join(path, self.f_name)
@abc.abstractmethod
def render(self):
pass
@abc.abstractmethod
def parse(self, content):
pass
def fetch(self, instance):
with instance.remote() as r:
content = r.read_file_from(self.remote_path, run_as_root=True)
self.parse(content)
def load_properties(self, config_dict):
for k, v in config_dict.items():
self.add_property(k, v)
def add_property(self, name, value):
self._config_dict[name] = value
def add_properties(self, properties):
for prop in properties.items():
self.add_property(*prop)
def _get_config_value(self, name):
return self._config_dict.get(name, None)
def __repr__(self):
return '<Configuration file %s>' % self.f_name
class HadoopXML(BaseConfigurationFile):
def __init__(self, file_name):
super(HadoopXML, self).__init__(file_name)
def parse(self, content):
configs = utils.parse_hadoop_xml_with_name_and_value(content)
for cfg in configs:
self.add_property(cfg["name"], cfg["value"])
def render(self):
return utils.create_hadoop_xml(self._config_dict)
class RawFile(BaseConfigurationFile):
def __init__(self, file_name):
super(RawFile, self).__init__(file_name)
def render(self):
return self._config_dict.get('content', '')
def parse(self, content):
self._config_dict.update({'content': content})
class PropertiesFile(BaseConfigurationFile):
def __init__(self, file_name, separator='='):
super(PropertiesFile, self).__init__(file_name)
self.separator = separator
def parse(self, content):
for line in content.splitlines():
prop = line.strip()
if len(prop) == 0:
continue
if prop[0] in ['#', '!']:
continue
name_value = prop.split(self.separator, 1)
name = name_value[0]
# check whether the value is empty
value = name_value[1] if (len(name_value) == 2) else ''
self.add_property(name.strip(), value.strip())
def render(self):
lines = ['%s%s%s' % (k, self.separator, v) for k, v in
self._config_dict.items()]
return "\n".join(lines) + '\n'
class TemplateFile(BaseConfigurationFile):
def __init__(self, file_name):
super(TemplateFile, self).__init__(file_name)
self._template = None
@staticmethod
def _j2_render(template, arg_dict):
if template:
return template.render(arg_dict)
else:
raise e.PluginsInvalidDataException(
_('Template object must be defined'))
def render(self):
return self._j2_render(self._template, self._config_dict)
def parse(self, content):
self._template = j2.Template(content)
class EnvironmentConfig(BaseConfigurationFile):
def __init__(self, file_name):
super(EnvironmentConfig, self).__init__(file_name)
self._lines = []
self._regex = re.compile(r'export\s+(\w+)=(.+)')
self._tmpl = 'export %s="%s"'
def parse(self, content):
for line in content.splitlines():
line = self._escape(line)
match = self._regex.match(line)
if match:
name, value = match.groups()
value = value.replace("\"", '')
self._lines.append((name, value))
self.add_property(name, value)
else:
self._lines.append(line)
@staticmethod
def _escape(string):
try:
string = string.decode("utf-8")
except AttributeError:
pass
string = str(string).strip()
string = string.replace("\"", "")
return string
def render(self):
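        # re-emit lines in their original order, substituting values from
        # _config_dict for known exports; leftover properties are appended
        # after the loop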
result = []
for line in self._lines:
if isinstance(line, tuple):
name, value = line
args = (name, self._config_dict.get(name) or value)
result.append(self._tmpl % args)
if name in self._config_dict:
del self._config_dict[name]
else:
result.append(line)
extra_ops = [self._tmpl % i for i in self._config_dict.items()]
return '\n'.join(result + extra_ops) + '\n'


@ -1,96 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class Distro(object):
def __init__(self, name, internal_name, install_cmd, version_separator):
self._name = name
self._internal_name = internal_name
self._install_command = install_cmd
self._version_separator = version_separator
@property
def name(self):
return self._name
@property
def internal_name(self):
return self._internal_name
@property
def install_command(self):
return self._install_command
@property
def version_separator(self):
return self._version_separator
def create_install_cmd(self, packages):
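        # packages is an iterable of (name,) or (name, version) tuples; a
        # version pins the package using the distro's separator syntax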
s = self.version_separator
def join_package_version(pv_item):
p, v = pv_item if len(pv_item) > 1 else (pv_item[0], None)
return p + s + v + '*' if v else p
packages = ' '.join(map(join_package_version, packages))
command = '%(install_cmd)s %(packages)s'
args = {'install_cmd': self.install_command, 'packages': packages}
return command % args
UBUNTU = Distro(
name='Ubuntu',
internal_name='Ubuntu',
install_cmd='apt-get install --force-yes -y',
version_separator='=',
)
CENTOS = Distro(
name='CentOS',
internal_name='CentOS',
install_cmd='yum install -y',
version_separator='-',
)
RHEL = Distro(
name='RedHatEnterpriseServer',
internal_name='RedHat',
install_cmd='yum install -y',
version_separator='-',
)
SUSE = Distro(
name='Suse',
internal_name='Suse',
    install_cmd='zypper install -y',
version_separator=':',
)
def get_all():
return [UBUNTU, CENTOS, RHEL, SUSE]
def get(instance):
with instance.remote() as r:
name = r.get_os_distrib()
for d in get_all():
if d.internal_name.lower() in name:
return d
def get_version(instance):
with instance.remote() as r:
version = r.get_os_version()
return version


@ -1,177 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_serialization.jsonutils as json
from sahara.plugins import utils as plugin_utils
from sahara_plugin_mapr.i18n import _
import sahara_plugin_mapr.plugins.mapr.util.general as util
WARDEN_MANAGED_CMD = ('sudo -u mapr maprcli node services'
' -name %(service)s'
' -action %(action)s'
' -nodes %(nodes)s')
class NodeProcess(object):
def __init__(self, name, ui_name, package, open_ports=None):
self._name = name
self._ui_name = ui_name
self._package = package
self._open_ports = open_ports or []
@property
def name(self):
return self._name
@property
def ui_name(self):
return self._ui_name
@property
def package(self):
return self._package
@property
def open_ports(self):
return self._open_ports
def start(self, instances):
self.execute_action(instances, Action.START)
def restart(self, instances):
self.execute_action(instances, Action.RESTART)
def stop(self, instances):
self.execute_action(instances, Action.STOP)
def execute_action(self, instances, action):
if len(instances) == 0:
return
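        # maprcli acts on every listed node at once, so the command only
        # needs to run from a single instance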
nodes = ','.join(map(lambda i: i.internal_ip, instances))
args = {'service': self.name, 'action': action.name, 'nodes': nodes}
command = WARDEN_MANAGED_CMD % args
with instances[0].remote() as r:
r.execute_command(command)
self._wait_for_status(instances, action.status)
def _wait_for_status(self, instances, status, sleep=3, timeout=60):
def poll_status(instance):
operation_name = _('Wait for {node_process} on {instance}'
' to change status to "{status}"')
args = {
'node_process': self.ui_name,
'instance': instance.instance_name,
'status': status.name,
}
return plugin_utils.poll(
get_status=lambda: self.status(instance) == status,
operation_name=operation_name.format(**args),
timeout=timeout,
sleep=sleep,
)
util.execute_on_instances(instances, poll_status)
def status(self, instance):
command = 'maprcli service list -node %s -json' % instance.internal_ip
with instance.remote() as remote:
ec, out = remote.execute_command(util._run_as('mapr', command))
node_processes = json.loads(out)['data']
for node_process in node_processes:
if node_process['name'] == self.name:
return Status.by_value(node_process['state'])
return Status.NOT_CONFIGURED
def is_started(self, instance):
        # "started" here means a start was at least attempted: RUNNING,
        # FAILED and STAND_BY all imply the warden tried to bring it up
return self.status(instance) in [Status.RUNNING,
Status.FAILED,
Status.STAND_BY]
class Status(object):
class Item(object):
def __init__(self, name, value):
self._name = name
self._value = value
@property
def name(self):
return self._name
@property
def value(self):
return self._value
# The package for the service is not installed and/or
# the service is not configured (configure.sh has not run)
NOT_CONFIGURED = Item('Not Configured', 0)
# The package for the service is installed and configured
CONFIGURED = Item('Configured', 1)
# The service is installed, started by the warden, and is currently running
RUNNING = Item('Running', 2)
# The service is installed and configure.sh has run,
# but the service is not running
STOPPED = Item('Stopped', 3)
    # The service is installed and configured, but the start attempt failed
FAILED = Item('Failed', 4)
# The service is installed and is in standby mode, waiting to take over
# in case of failure of another instance.
# Mainly used for JobTracker warm standby
STAND_BY = Item('Standby', 5)
@staticmethod
def items():
return [
Status.NOT_CONFIGURED,
Status.CONFIGURED,
Status.RUNNING,
Status.STOPPED,
Status.FAILED,
Status.STAND_BY,
]
@staticmethod
def by_value(value):
for v in Status.items():
if v.value == value:
return v
class Action(object):
class Item(object):
def __init__(self, name, status):
self._name = name
self._status = status
@property
def name(self):
return self._name
@property
def status(self):
return self._status
START = Item('start', Status.RUNNING)
STOP = Item('stop', Status.STOPPED)
RESTART = Item('restart', Status.RUNNING)


@ -1,243 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils as json
import sahara.plugins.exceptions as ex
import sahara.plugins.provisioning as p
from sahara.plugins import utils
from sahara_plugin_mapr.i18n import _
from sahara_plugin_mapr.plugins.mapr.util import commands as cmd
from sahara_plugin_mapr.plugins.mapr.util import event_log as el
from sahara_plugin_mapr.plugins.mapr.util import general as g
from sahara_plugin_mapr.plugins.mapr.util import service_utils as su
LOG = logging.getLogger(__name__)
SERVICE_UI = 'Web UI'
_INSTALL_PACKAGES_TIMEOUT = 3600
class Service(object, metaclass=g.Singleton):
def __init__(self):
self._name = None
self._ui_name = None
self._node_processes = []
self._version = None
self._dependencies = []
self._ui_info = []
self._cluster_defaults = []
self._node_defaults = []
self._validation_rules = []
self._priority = 1
@property
def name(self):
return self._name
@property
def ui_name(self):
return self._ui_name
@property
def version(self):
return self._version
@property
def node_processes(self):
return self._node_processes
@property
def dependencies(self):
return self._dependencies
@property
def cluster_defaults(self):
return self._cluster_defaults
@property
def node_defaults(self):
return self._node_defaults
@property
def validation_rules(self):
return self._validation_rules
def get_ui_info(self, cluster_context):
return self._ui_info
def install(self, cluster_context, instances):
service_instances = cluster_context.filter_instances(instances,
service=self)
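        # defined inline so the provisioning event log wraps exactly this
        # service's install step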
@el.provision_step(_("Install %s service") % self.ui_name,
cluster_context_reference=0, instances_reference=1)
def _install(_context, _instances):
g.execute_on_instances(_instances,
self._install_packages_on_instance,
_context)
if service_instances:
_install(cluster_context, service_instances)
@el.provision_event(instance_reference=1)
def _install_packages_on_instance(self, instance, cluster_context):
processes = [p for p in self.node_processes if
p.ui_name in instance.node_group.node_processes]
        if processes:
packages = self._get_packages(cluster_context, processes)
cmd = cluster_context.distro.create_install_cmd(packages)
with instance.remote() as r:
r.execute_command(cmd, run_as_root=True,
timeout=_INSTALL_PACKAGES_TIMEOUT)
def _get_packages(self, cluster_context, node_processes):
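        # dependencies are (package, version) tuples; each node process
        # also contributes its own package pinned to the service version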
result = []
result += self.dependencies
result += [(np.package, self.version) for np in node_processes]
return result
def _set_service_dir_owner(self, cluster_context, instances):
service_instances = cluster_context.filter_instances(instances,
service=self)
LOG.debug("Changing %s service dir owner", self.ui_name)
for instance in service_instances:
cmd.chown(instance, 'mapr:mapr', self.service_dir(cluster_context))
def post_install(self, cluster_context, instances):
pass
def post_start(self, cluster_context, instances):
pass
def configure(self, cluster_context, instances=None):
pass
def update(self, cluster_context, instances=None):
pass
def get_file_path(self, file_name):
template = 'plugins/mapr/services/%(service)s/resources/%(file_name)s'
args = {'service': self.name, 'file_name': file_name}
return template % args
def get_configs(self):
result = []
for d_file in self.cluster_defaults:
data = self._load_config_file(self.get_file_path(d_file))
result += [self._create_config_obj(c, self.ui_name) for c in data]
for d_file in self.node_defaults:
data = self._load_config_file(self.get_file_path(d_file))
result += [self._create_config_obj(c, self.ui_name, scope='node')
for c in data]
return result
def get_configs_dict(self):
result = dict()
for conf_obj in self.get_configs():
result.update({conf_obj.name: conf_obj.default_value})
return {self.ui_name: result}
def _load_config_file(self, file_path=None):
return json.loads(utils.get_file_text(file_path, 'sahara_plugin_mapr'))
def get_config_files(self, cluster_context, configs, instance=None):
return []
def _create_config_obj(self, item, target='general', scope='cluster',
high_priority=False):
def _prepare_value(value):
if isinstance(value, str):
return value.strip().lower()
return value
conf_name = _prepare_value(item.get('name', None))
conf_value = _prepare_value(item.get('value', None))
if not conf_name:
raise ex.HadoopProvisionError(_("Config missing 'name'"))
if conf_value is None:
raise ex.PluginInvalidDataException(
_("Config '%s' missing 'value'") % conf_name)
if high_priority or item.get('priority', 2) == 1:
priority = 1
else:
priority = 2
return p.Config(
name=conf_name,
applicable_target=target,
scope=scope,
config_type=item.get('config_type', "string"),
config_values=item.get('config_values', None),
default_value=conf_value,
is_optional=item.get('is_optional', True),
description=item.get('description', None),
priority=priority)
def get_version_config(self, versions):
return p.Config(
name='%s Version' % self._ui_name,
applicable_target=self.ui_name,
scope='cluster',
config_type='dropdown',
config_values=[(v, v) for v in sorted(versions, reverse=True)],
is_optional=False,
description=_('Specify the version of the service'),
priority=1)
def __eq__(self, other):
if isinstance(other, self.__class__):
version_eq = self.version == other.version
ui_name_eq = self.ui_name == other.ui_name
return version_eq and ui_name_eq
return NotImplemented
def restart(self, instances):
for node_process in self.node_processes:
filtered_instances = su.filter_by_node_process(instances,
node_process)
if filtered_instances:
node_process.restart(filtered_instances)
def service_dir(self, cluster_context):
args = {'mapr_home': cluster_context.mapr_home, 'name': self.name}
return '%(mapr_home)s/%(name)s' % args
def home_dir(self, cluster_context):
args = {
'service_dir': self.service_dir(cluster_context),
'name': self.name,
'version': self.version,
}
return '%(service_dir)s/%(name)s-%(version)s' % args
def conf_dir(self, cluster_context):
return '%s/conf' % self.home_dir(cluster_context)
def post_configure_sh(self, cluster_context, instances):
pass
def post_configure(self, cluster_context, instances):
pass


@ -1,44 +0,0 @@
# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import images
from sahara.plugins import utils as plugin_utils
_validator = images.SaharaImageValidator.from_yaml(
'plugins/mapr/resources/images/image.yaml',
resource_roots=['plugins/mapr/resources/images'],
package='sahara_plugin_mapr')
def get_image_arguments():
return _validator.get_argument_list()
def pack_image(remote, test_only=False, image_arguments=None):
_validator.validate(remote, test_only=test_only,
image_arguments=image_arguments)
def validate_images(cluster, test_only=False, image_arguments=None):
image_arguments = get_image_arguments()
if not test_only:
instances = plugin_utils.get_instances(cluster)
    else:
        # one instance is enough in test-only mode; slicing keeps the
        # result a list so the loop below works in both branches
        instances = plugin_utils.get_instances(cluster)[:1]
for instance in instances:
with instance.remote() as r:
_validator.validate(r, test_only=test_only,
image_arguments=image_arguments)


@ -1,110 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sahara.plugins.provisioning as p
from sahara_plugin_mapr.i18n import _
import sahara_plugin_mapr.plugins.mapr.versions.version_handler_factory as vhf
class MapRPlugin(p.ProvisioningPluginBase):
title = 'MapR Hadoop Distribution'
description = _('The MapR Distribution provides a full Hadoop stack that'
' includes the MapR File System (MapR-FS), MapReduce,'
' a complete Hadoop ecosystem, and the MapR Control System'
' user interface')
def _get_handler(self, hadoop_version):
return vhf.VersionHandlerFactory.get().get_handler(hadoop_version)
def get_title(self):
return MapRPlugin.title
def get_description(self):
return MapRPlugin.description
def get_labels(self):
return {
'plugin_labels': {'enabled': {'status': True}},
'version_labels': {
'5.2.0.mrv2': {'enabled': {'status': True}},
}
}
def get_versions(self):
return vhf.VersionHandlerFactory.get().get_versions()
def get_node_processes(self, hadoop_version):
return self._get_handler(hadoop_version).get_node_processes()
def get_configs(self, hadoop_version):
return self._get_handler(hadoop_version).get_configs()
def configure_cluster(self, cluster):
self._get_handler(cluster.hadoop_version).configure_cluster(cluster)
def start_cluster(self, cluster):
self._get_handler(cluster.hadoop_version).start_cluster(cluster)
def validate(self, cluster):
self._get_handler(cluster.hadoop_version).validate(cluster)
def validate_scaling(self, cluster, existing, additional):
v_handler = self._get_handler(cluster.hadoop_version)
v_handler.validate_scaling(cluster, existing, additional)
def scale_cluster(self, cluster, instances):
v_handler = self._get_handler(cluster.hadoop_version)
v_handler.scale_cluster(cluster, instances)
def decommission_nodes(self, cluster, instances):
v_handler = self._get_handler(cluster.hadoop_version)
v_handler.decommission_nodes(cluster, instances)
def get_edp_engine(self, cluster, job_type):
v_handler = self._get_handler(cluster.hadoop_version)
return v_handler.get_edp_engine(cluster, job_type)
def get_edp_job_types(self, versions=None):
res = {}
for vers in self.get_versions():
if not versions or vers in versions:
vh = self._get_handler(vers)
res[vers] = vh.get_edp_job_types()
return res
def get_edp_config_hints(self, job_type, version):
v_handler = self._get_handler(version)
return v_handler.get_edp_config_hints(job_type)
def get_open_ports(self, node_group):
v_handler = self._get_handler(node_group.cluster.hadoop_version)
return v_handler.get_open_ports(node_group)
def get_health_checks(self, cluster):
v_handler = self._get_handler(cluster.hadoop_version)
return v_handler.get_cluster_checks(cluster)
def get_image_arguments(self, hadoop_version):
return self._get_handler(hadoop_version).get_image_arguments()
def pack_image(self, hadoop_version, remote,
test_only=False, image_arguments=None):
version = self._get_handler(hadoop_version)
version.pack_image(hadoop_version, remote, test_only=test_only,
image_arguments=image_arguments)
def validate_images(self, cluster, test_only=False, image_arguments=None):
self._get_handler(cluster.hadoop_version).validate_images(
cluster, test_only=test_only, image_arguments=image_arguments)


@ -1,37 +0,0 @@
#!/bin/sh
if [ "$1" = "Ubuntu" ]; then
sudo apt-get update
    cat << EOF | sudo tee -a /etc/apt/sources.list.d/maprtech.list
deb %(ubuntu_mapr_base_repo)s
deb %(ubuntu_mapr_ecosystem_repo)s
EOF
sudo apt-get install -y --force-yes wget
wget -O - http://package.mapr.com/releases/pub/maprgpg.key | \
sudo apt-key add -
sudo apt-get update
elif [ "$1" = 'CentOS' -o "$1" = 'RedHatEnterpriseServer' ]; then
cat >> /etc/yum.repos.d/maprtech.repo <<- EOF
[maprtech]
name=MapR Technologies
baseurl=%(centos_mapr_base_repo)s
enabled=1
gpgcheck=0
protect=1
[maprecosystem]
name=MapR Technologies
baseurl=%(centos_mapr_ecosystem_repo)s
enabled=1
gpgcheck=0
protect=1
EOF
rpm --import http://package.mapr.com/releases/pub/maprgpg.key
yum install -y wget
if [ ! -e /etc/yum.repos.d/epel.repo ]; then
rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
fi
else
echo "Unknown distribution"
exit 1
fi


@ -1,6 +0,0 @@
#!/bin/sh
cat >> /etc/apt/sources.list.d/security_repo.list <<- EOF
deb http://security.ubuntu.com/ubuntu precise-security main
EOF
apt-get update


@ -1,13 +0,0 @@
#!/bin/bash
disk_list_file=/tmp/disk.list
if [ -f ${disk_list_file} ]; then
rm -f ${disk_list_file}
fi
for path in $*; do
device=`findmnt ${path} -cno SOURCE`
umount -f ${device}
echo ${device} >> ${disk_list_file}
done


@ -1,10 +0,0 @@
#!/bin/bash
echo "Installing mysql compat"
MARIADB_VERSION=$(rpm -qa mariadb | cut -d- -f2)
INSTALLED=$(rpm -qa | grep -i mariadb-compat-${MARIADB_VERSION}-)
if [[ -z "$INSTALLED" ]]; then
rpm -ivh --nodeps http://yum.mariadb.org/$MARIADB_VERSION/rhel7-amd64/rpms/MariaDB-compat-$MARIADB_VERSION-1.el7.centos.x86_64.rpm
fi


@ -1,20 +0,0 @@
#!/bin/bash
check=$(systemctl --no-pager list-unit-files iptables.service | grep 'enabled' | wc -l)
if [ $check -eq 1 ]; then
if [ $test_only -eq 0 ]; then
if type -p systemctl && [[ "$(systemctl --no-pager list-unit-files firewalld)" =~ 'enabled' ]]; then
systemctl disable firewalld
fi
if type -p service; then
service ip6tables save
service iptables save
chkconfig ip6tables off
chkconfig iptables off
fi
else
exit 0
fi
fi


@ -1,6 +0,0 @@
#!/bin/bash
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y soci soci-mysql
yum remove -y epel-release


@ -1,27 +0,0 @@
#!/bin/bash
DISTRO_NAME=$distro
source "/tmp/package_utils.sh"
echo "START: installing MapR core repository"
MAPR_REPO_URL="http://package.mapr.com/releases/v${plugin_version}/redhat/mapr-v${plugin_version}GA.rpm.tgz"
MAPR_REPO_DIR="/opt/mapr-repository/core"
if [ ! -d "$MAPR_REPO_DIR" ] || [ -z "$(ls -A $MAPR_REPO_DIR)" ]; then
if [ $test_only -eq 0 ]; then
MAPR_REPO_NAME="mapr_core"
echo "Downloading MapR repository archive"
mkdir -p "$MAPR_REPO_DIR" && curl "$MAPR_REPO_URL" | tar -xz -C "$MAPR_REPO_DIR"
echo "Creating local repository"
create_repo "$MAPR_REPO_DIR"
echo "Adding MapR repository"
add_local_repo "$MAPR_REPO_NAME" "$MAPR_REPO_DIR"
fi
fi
echo "END: installing MapR core repository"


@ -1,27 +0,0 @@
#!/bin/bash
VERSIONS_PY="/tmp/versions.py"
DISTRO_NAME=$distro
source "/tmp/package_utils.sh"
echo "START: installing MapR ecosystem repository"
MAPR_REPO_URL="http://package.mapr.com/releases/MEP/MEP-2.0.0/redhat/"
MAPR_REPO_DIR="/opt/mapr-repository/ecosystem"
if [ ! -d "$MAPR_REPO_DIR" ] || [ -z "$(ls -A $MAPR_REPO_DIR)" ]; then
if [ $test_only -eq 0 ]; then
MAPR_REPO_NAME="mapr_ecosystem"
MAPR_PKG_GROUPS="/tmp/packages.json"
MAPR_SPEC="/tmp/spec_$plugin_version.json"
echo "Creating local MapR ecosystem repository"
localize_repo "$MAPR_REPO_NAME" "$MAPR_REPO_URL" "$MAPR_PKG_GROUPS" "$MAPR_SPEC" "$MAPR_REPO_DIR"
echo $MAPR_SPEC
fi
fi
echo "END: installing MapR ecosystem repository"


@ -1,14 +0,0 @@
#!/bin/bash
echo "Installing OpenJDK"
if [ $test_only -eq 0 ]; then
yum install -y java-1.8.0-openjdk-devel
JRE_HOME="/usr/lib/jvm/java-openjdk/jre"
JDK_HOME="/usr/lib/jvm/java-openjdk"
echo "OpenJDK has been installed"
else
exit 0
fi


@ -1,34 +0,0 @@
#!/bin/bash
echo "START: installing Scala"
sudo yum -y update
DEF_VERSION="2.11.6"
if [ $test_only -eq 0 ]; then
RETURN_CODE="$(curl -s -L -o /dev/null -w "%{http_code}" https://www.scala-lang.org/)"
if [ "$RETURN_CODE" != "200" ]; then
echo "https://www.scala-lang.org is unreachable" && exit 1
fi
if [ "${scala_version}" != "1" ]; then
VERSION=$scala_version
else
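        # scrape the current release number from scala-lang.org; if the
        # page layout has changed, fall back to DEF_VERSION below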
VERSION="$(curl -s -L --fail https://www.scala-lang.org| tr -d '\n' | sed 's/^.*<div[^<]\+scala-version">[^0-9]\+\([0-9\.\?]\+\)<.\+$/\1/')"
if [ $? != 0 -o -z "${VERSION}" ]; then
echo "Installing default version $DEF_VERSION"
VERSION=${DEF_VERSION}
fi
fi
PKG=scala-${VERSION}
URL="https://downloads.lightbend.com/scala/${VERSION}"
rpm -Uhv ${URL}/${PKG}.rpm
fi
echo "END: installing Scala"


@ -1,6 +0,0 @@
#!/bin/bash
if [ $test_only -eq 0 ]; then
sed '/^Defaults requiretty*/ s/^/#/' -i /etc/sudoers
fi


@ -1,12 +0,0 @@
#!/bin/bash
check=$(cat /etc/selinux/config | grep "SELINUX=permissive" | wc -l)
if [ $check -eq 0 ]; then
if [ $test_only -eq 0 ]; then
echo "SELINUX=permissive" > /etc/selinux/config
echo "SELINUXTYPE=targeted" >> /etc/selinux/config
else
exit 0
fi
fi


@ -1,5 +0,0 @@
#!/bin/bash
if [ $test_only -eq 0 ]; then
yum clean all && yum repolist
fi


@ -1,31 +0,0 @@
#!/bin/bash
EXTJS_DESTINATION_DIR="/opt/mapr-repository"
EXTJS_DOWNLOAD_URL="https://tarballs.openstack.org/sahara-extra/dist/common-artifacts/ext-2.2.zip"
EXTJS_NO_UNPACK=1
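# EXTJS_NO_UNPACK keeps the archive zipped so it can be unpacked later by
# whatever consumes it (presumably the Oozie web console setup)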
extjs_basepath=$(basename ${EXTJS_DOWNLOAD_URL})
extjs_archive=/tmp/${extjs_basepath}
extjs_folder="${extjs_basepath%.*}"
setup_extjs() {
curl -sS -o $extjs_archive $EXTJS_DOWNLOAD_URL
mkdir -p $EXTJS_DESTINATION_DIR
}
if [ -z "${EXTJS_NO_UNPACK:-}" ]; then
if [ ! -d "${EXTJS_DESTINATION_DIR}/${extjs_folder}" ]; then
setup_extjs
unzip -o -d "$EXTJS_DESTINATION_DIR" $extjs_archive
rm -f $extjs_archive
else
exit 0
fi
else
if [ ! -f "${EXTJS_DESTINATION_DIR}/${extjs_basepath}" ]; then
setup_extjs
mv $extjs_archive $EXTJS_DESTINATION_DIR
else
exit 0
fi
fi


@ -1,42 +0,0 @@
#!/bin/sh
# NOTE: $(dirname $0) is read-only, use space under $TARGET_ROOT
JAVA_LOCATION=${JAVA_TARGET_LOCATION:-"/usr/java"}
JAVA_NAME="oracle-jdk"
JAVA_HOME=$JAVA_LOCATION/$JAVA_NAME
JAVA_DOWNLOAD_URL=${JAVA_DOWNLOAD_URL:-"http://download.oracle.com/otn-pub/java/jdk/7u51-b13/jdk-7u51-linux-x64.tar.gz"}
# FIXME: probably not idempotent, find a better condition
if [ ! -d $JAVA_LOCATION ]; then
if [ $test_only -eq 0 ]; then
echo "Begin: installation of Java"
mkdir -p $JAVA_LOCATION
if [ -n "$JAVA_DOWNLOAD_URL" ]; then
JAVA_FILE=$(basename $JAVA_DOWNLOAD_URL)
wget --no-check-certificate --no-cookies -c \
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
-O $JAVA_LOCATION/$JAVA_FILE $JAVA_DOWNLOAD_URL
elif [ -n "$JAVA_FILE" ]; then
install -D -g root -o root -m 0755 $(dirname $0)/$JAVA_FILE $JAVA_LOCATION
fi
cd $JAVA_LOCATION
echo "Decompressing Java archive"
printf "\n\n" | tar -zxf $JAVA_FILE
echo "Setting up $JAVA_NAME"
chown -R root:root $JAVA_LOCATION
JAVA_DIR=`ls -1 $JAVA_LOCATION | grep -v tar.gz`
ln -s $JAVA_LOCATION/$JAVA_DIR $JAVA_HOME
setup-java-home $JAVA_HOME $JAVA_HOME
rm $JAVA_FILE
echo "End: installation of Java"
else
exit 0
fi
fi


@ -1,226 +0,0 @@
# execute_in_directory <directory> <command>
execute_in_directory() {
local directory="$(readlink -f "$1")"; shift
local cmd="$*"
pushd "$directory" && eval "$cmd" && popd
}
# get_distro
get_distro() {
echo "$DISTRO_NAME"
}
# download_apt_package <package> [version] [directory]
download_apt_package() {
local package="$1"
local version="${2:-}"
local directory="${3:-$(pwd)}"
local package_spec="$package${version:+=$version*}"
execute_in_directory "$directory" apt-get --allow-unauthenticated download "$package_spec"
}
# download_yum_package <package> [version] [directory]
download_yum_package() {
local package="$1"
local version="${2:-}"
local directory="${3:-$(pwd)}"
local package_spec="$package${version:+-$version*}"
yumdownloader --destdir "$directory" "$package_spec"
}
# download_package <package> [version] [directory] [distro]
download_package() {
local package="$1"
local version="${2:-}"
local directory="${3:-$(pwd)}"
local distro="${4:-$(get_distro)}"
if [[ "$distro" == "ubuntu" ]]; then
download_apt_package "$package" "$version" "$directory"
elif [[ "$distro" == "centos" || "$distro" == "centos7" || "$distro" == "rhel" || "$distro" == "rhel7" ]]; then
download_yum_package "$package" "$version" "$directory"
fi
}
# get_packages <package_groups_file> <spec_file> [version_separator]
get_packages() {
local package_groups_file="$1"
local spec_file="$2"
local version_separator="${3:-:}"
python "$VERSIONS_PY" --separator "$version_separator" "$package_groups_file" "$spec_file"
}
# download_packages <package_groups_file> <spec_file> [directory] [distro]
download_packages() {
local package_groups_file="$1"
local spec_file="$2"
local directory="${3:-$(pwd)}"
local distro="${4:-$(get_distro)}"
local version_separator=":"
local packages="$(get_packages "$package_groups_file" "$spec_file" "$version_separator")"
for package in $packages; do
IFS="$version_separator" read -ra package_version <<< "$package"
download_package "${package_version[@]}" "$directory" "$distro"
done
}
# create_apt_repo <directory>
create_apt_repo() {
local directory="$(readlink -f "$1")"
local binary_dir="$directory/binary"
local packages_gz="$binary_dir/Packages.gz"
mkdir -p "$binary_dir"
execute_in_directory "$directory" "dpkg-scanpackages -m . /dev/null | gzip -9c > $packages_gz"
}
# create_yum_repo <directory>
create_yum_repo() {
local directory="$(readlink -f "$1")"
createrepo "$directory"
}
# create_repo <directory> [distro]
create_repo() {
local directory="$(readlink -f "$1")"
local distro="${2:-$(get_distro)}"
if [[ "$distro" == "ubuntu" ]]; then
create_apt_repo "$directory"
elif [[ "$distro" == "centos" || "$distro" == "centos7" || "$distro" == "rhel" || "$distro" == "rhel7" ]]; then
create_yum_repo "$directory"
fi
}
# add_apt_repo <repo_name> <repo_url>
add_apt_repo() {
local repo_name="$1"
local repo_url="$2"
local repo="deb $repo_url"
local repo_path="/etc/apt/sources.list.d/$repo_name.list"
echo "$repo" > "$repo_path" && apt-get update
}
# add_yum_repo <repo_name> <repo_url>
add_yum_repo() {
local repo_name="$1"
local repo_url="$2"
local repo_path="/etc/yum.repos.d/$repo_name.repo"
cat > "$repo_path" << EOF
[$repo_name]
name=$repo_name
baseurl=$repo_url
enabled=1
gpgcheck=0
protect=1
EOF
yum clean all && rm -rf /var/cache/yum/* && yum check-update
}
# add_repo <repo_name> <repo_url> [distro]
add_repo() {
local repo_name="$1"
local repo_url="$2"
local distro="${3:-$(get_distro)}"
if [[ "$distro" == "ubuntu" ]]; then
add_apt_repo "$repo_name" "$repo_url"
elif [[ "$distro" == "centos" || "$distro" == "centos7" || "$distro" == "rhel" || "$distro" == "rhel7" ]]; then
add_yum_repo "$repo_name" "$repo_url"
fi
}
# add_local_apt_repo <repo_name> <directory>
add_local_apt_repo() {
local repo_name="$1"
local directory="$(readlink -f "$2")"
local repo_url="file:$directory binary/"
add_apt_repo "$repo_name" "$repo_url"
}
# add_local_yum_repo <repo_name> <directory>
add_local_yum_repo() {
local repo_name="$1"
local directory="$(readlink -f "$2")"
local repo_url="file://$directory"
add_yum_repo "$repo_name" "$repo_url"
}
# add_local_repo <repo_name> <directory> [distro]
add_local_repo() {
local repo_name="$1"
local directory="$(readlink -f "$2")"
local distro="${3:-$(get_distro)}"
if [[ "$distro" == "ubuntu" ]]; then
add_local_apt_repo "$repo_name" "$directory"
elif [[ "$distro" == "centos" || "$distro" == "centos7" || "$distro" == "rhel" || "$distro" == "rhel7" ]]; then
add_local_yum_repo "$repo_name" "$directory"
fi
}
# remove_apt_repo <repo_name>
remove_apt_repo() {
local repo_name="$1"
local repo_path="/etc/apt/sources.list.d/$repo_name.list"
rm "$repo_path" && apt-get update
}
# remove_yum_repo <repo_name>
remove_yum_repo() {
local repo_name="$1"
local repo_path="/etc/yum.repos.d/$repo_name.repo"
rm "$repo_path"
}
# remove_repo <repo_name> [distro]
remove_repo() {
local repo_name="$1"
local distro="${2:-$(get_distro)}"
if [[ "$distro" == "ubuntu" ]]; then
remove_apt_repo "$repo_name"
elif [[ "$distro" == "centos" || "$distro" == "centos7" || "$distro" == "rhel" || "$distro" == "rhel7" ]]; then
remove_yum_repo "$repo_name"
fi
}
# create_local_repo <repo_name> <repo_url> <package_groups_file> <spec_file> <directory>
create_local_repo() {
local repo_name="$1"
local repo_url="$2"
local package_groups_file="$3"
local spec_file="$4"
local directory="$5"
add_repo "$repo_name" "$repo_url"
mkdir -p "$directory" && directory="$(readlink -f "$directory")"
download_packages "$package_groups_file" "$spec_file" "$directory"
remove_repo "$repo_name"
create_repo "$directory"
}
# localize_repo <repo_name> <repo_url> <package_groups_file> <spec_file> <directory>
localize_repo() {
local repo_name="$1"
local repo_url="$2"
local package_groups_file="$3"
local spec_file="$4"
local directory="$5"
mkdir -p "$directory" && directory="$(readlink -f "$directory")"
create_local_repo "$repo_name" "$repo_url" "$package_groups_file" "$spec_file" "$directory"
add_local_repo "$repo_name" "$directory"
}


@ -1,140 +0,0 @@
{
"asynchbase": {
"all": [
"mapr-asynchbase"
]
},
"drill": {
"all": [
"mapr-drill"
]
},
"flume": {
"all": [
"mapr-flume"
]
},
"hbase": {
"all": [
"mapr-hbase",
"mapr-hbase-internal",
"mapr-hbase-master",
"mapr-hbase-regionserver",
"mapr-hbasethrift",
"mapr-hbase-rest"
],
"0.98.12": [
"mapr-hbase",
"mapr-hbase-internal",
"mapr-hbase-master",
"mapr-hbase-regionserver",
"mapr-hbasethrift",
"mapr-libhbase",
"mapr-hbase-rest"
],
"1.1.1": [
"mapr-hbase",
"mapr-hbase-internal",
"mapr-hbase-master",
"mapr-hbase-regionserver",
"mapr-hbasethrift",
"mapr-libhbase",
"mapr-hbase-rest"
]
},
"hive": {
"all": [
"mapr-hive",
"mapr-hivemetastore",
"mapr-hiveserver2"
]
},
"httpfs": {
"all": [
"mapr-httpfs"
]
},
"hue": {
"all": [
"mapr-hue",
"mapr-hue-base",
"mapr-hue-livy"
],
"3.10.0": [
"mapr-hue",
"mapr-hue-livy"
]
},
"impala": {
"all": [
"mapr-impala",
"mapr-impala-catalog",
"mapr-impala-server",
"mapr-impala-statestore",
"mapr-impala-udf"
]
},
"mahout": {
"all": [
"mapr-mahout"
]
},
"oozie": {
"all": [
"mapr-oozie",
"mapr-oozie-internal"
]
},
"pig": {
"all": [
"mapr-pig"
]
},
"sentry": {
"all": [
"mapr-sentry"
]
},
"spark": {
"all": [
"mapr-spark",
"mapr-spark-historyserver",
"mapr-spark-master"
]
},
"sqoop": {
"all": [
"mapr-sqoop2-client",
"mapr-sqoop2-server"
]
},
"storm": {
"all": [
"mapr-storm",
"mapr-storm-ui",
"mapr-storm-nimbus",
"mapr-storm-supervisor"
]
},
"tez": {
"all": [
"mapr-tez"
]
},
"kafka": {
"all": [
"mapr-kafka"
]
},
"kafka-connect": {
"all": [
"mapr-kafka-connect-hdfs",
"mapr-kafka-connect-jdbc"
]
},
"kafka-rest": {
"all": [
"mapr-kafka-rest"
]
}
}


@ -1,46 +0,0 @@
{
"drill": [
"1.1.0",
"1.2.0",
"1.4.0"
],
"flume": [
"1.5.0",
"1.6.0"
],
"hbase": [
"0.98.9",
"0.98.12"
],
"hive": [
"0.13",
"1.0",
"1.2"
],
"httpfs": [
"1.0"
],
"hue": [
"3.8.1",
"3.9.0"
],
"impala": [
"1.4.1"
],
"mahout": [
"0.10.0"
],
"oozie": [
"4.2.0"
],
"pig": [
"0.14",
"0.15"
],
"sqoop": [
"2.0.0"
],
"spark": [
"1.5.2"
]
}


@ -1,50 +0,0 @@
{
"drill": [
"1.9.0"
],
"flume": [
"1.6.0"
],
"hbase": [
"1.1.1"
],
"hive": [
"1.2"
],
"httpfs": [
"1.0"
],
"hue": [
"3.10.0"
],
"impala": [
"2.5.0"
],
"mahout": [
"0.12.0"
],
"oozie": [
"4.2.0"
],
"pig": [
"0.16"
],
"sqoop": [
"2.0.0"
],
"spark": [
"2.0.1"
],
"sentry": [
"1.6.0"
],
"kafka": [
"0.9.0"
],
"kafka-connect": [
"2.0.1"
],
"kafka-rest": [
"2.0.1"
]
}


@ -1,47 +0,0 @@
{
"drill": [
"1.9.0"
],
"flume": [
"1.6.0"
],
"hbase": [
"1.1.1"
],
"hive": [
"1.2"
],
"httpfs": [
"1.0"
],
"hue": [
"3.10.0"
],
"mahout": [
"0.12.0"
],
"oozie": [
"4.2.0"
],
"pig": [
"0.16"
],
"sqoop": [
"2.0.0"
],
"spark": [
"2.0.1"
],
"sentry": [
"1.6.0"
],
"kafka": [
"0.9.0"
],
"kafka-connect": [
"2.0.1"
],
"kafka-rest": [
"2.0.1"
]
}


@ -1,83 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import sys
from oslo_serialization import jsonutils as json
_GROUP_VERSION_SEPARATOR = ","
_ALL_GROUP_VERSION = "all"
def _build_parser():
parser = argparse.ArgumentParser()
parser.add_argument("packages", help="path to the packages.json")
parser.add_argument("spec", help="path to the spec.json")
parser.add_argument("--separator", default=":",
help="separator between package name"
" and version in output")
return parser
def _load_json(path):
with open(path) as json_file:
return json.load(json_file)
def _version_matches(version, group_version):
for gv in group_version.split(_GROUP_VERSION_SEPARATOR):
if version.startswith(gv):
return True
return False
def _get_packages(version, group_spec):
for group_version in group_spec:
if _version_matches(version, group_version):
return group_spec[group_version]
return group_spec[_ALL_GROUP_VERSION]
def _get_package_versions(spec, package_groups):
return [(package, version)
for pg_name, versions in spec.items()
for version in versions
for package in _get_packages(version, package_groups[pg_name])]
parser = _build_parser()
def main(args=None):
args = parser.parse_args(args or sys.argv[1:])
spec = _load_json(args.spec)
package_groups = _load_json(args.packages)
separator = args.separator
package_versions = _get_package_versions(spec, package_groups)
package_format = "%s" + separator + "%s\n"
package_versions = map(lambda pv: package_format % pv, package_versions)
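    # emit one "<package><separator><version>" entry per line for the
    # shell helpers in package_utils.sh to consume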
sys.stdout.writelines(package_versions)
if __name__ == "__main__":
main()


@ -1,204 +0,0 @@
arguments:
java_distro:
default: openjdk
    description: The distribution of Java to install. Defaults to openjdk.
choices:
- openjdk
- oracle-java
plugin_version:
default: 5.2.0
    description: The version of MapR to install. Defaults to 5.2.0.
hidden: True
required: False
scala_version:
default: 1
    description: The version of scala to install. Defaults to 1, meaning
                 the version is autodetected, falling back to a well-known
                 version.
hidden: True
required: False
hdfs_lib_dir:
default: /usr/lib/hadoop-mapreduce
    description: The path to HDFS_LIB_DIR. Defaults to /usr/lib/hadoop-mapreduce.
required: False
validators:
- os_case:
- ubuntu:
- script: ubuntu/install_mapr_dependencies
- redhat:
- script: centos/epel_dependencies
- package:
- mtools
- rpcbind
- sdparm
- syslinux
- unzip
- wget
- zip
- os_case:
- redhat:
- package:
- cups
- cdparanoia-libs
- cups-libs
- createrepo
- cvs
- cyrus-sasl-gssapi
- cyrus-sasl-plain
- foomatic
- foomatic-db
- foomatic-db-filesystem
- foomatic-db-ppds
- gdbm-devel
- gettext
- ghostscript
- ghostscript-fonts
- glibc
- glibc-common
- glibc-devel
- glibc-headers
- gstreamer
- gstreamer-plugins-base
- gstreamer-tools
- hdparm
- irqbalance
- iso-codes
- kernel-headers
- libXt
- libXv
- libXxf86vm
- libgomp
- libgudev1
- libicu
- libmng
- liboil
- libtheora
- libtirpc
- libvisual
- libxslt
- mariadb
- mariadb-server
- mariadb-libs
- mesa-dri-drivers
- mesa-libGL
- mesa-libGLU
- mesa-private-llvm
- mysql-connector-java
- nmap-ncat
- ntp
- ntpdate
- numactl
- openjpeg-libs
- patch
- pax
- perl-CGI
- perl-ExtUtils-MakeMaker
- perl-ExtUtils-ParseXS
- perl-Test-Harness
- perl-Test-Simple
- perl-devel
- phonon-backend-gstreamer
- poppler
- poppler-data
- poppler-utils
- portreserve
- qt
- qt-x11
- qt3
- redhat-lsb
- redhat-lsb-core
- redhat-lsb-printing
- yum-utils
- xml-common
- ubuntu:
- package:
- binutils
- daemon
- dpkg-dev
- dpkg-repack
- gcc
- gcc-4.8
- gcc-doc
- gcc-multilib
- iputils-arping
- libasan0
- libatomic1
- libc-dev-bin
- libc6
- libc6-dev
- libcrypt-passwdmd5-perl
- libgcc-4.8-dev
- libgomp1
- libgssglue1
- libicu48
- libitm1
- libmysqlclient-dev
- libmysqlclient16
- libmysqlclient18
- libnfsidmap2
- libquadmath0
- libsasl2-dev
- libsasl2-modules-gssapi-mit
- libssl0.9.8
- libtirpc1
- libtsan0
- libxslt1.1
- linux-libc-dev
- manpages-dev
- mysql-common
- nfs-common
- open-iscsi
- syslinux-common
- zlib1g-dev
- script: common/configure_extjs
- os_case:
- redhat:
- script: centos/configure_hue
- copy_script: common/resources/package_utils.sh
- copy_script: common/resources/packages.json
- copy_script: common/resources/spec_5.1.0.json
- copy_script: common/resources/spec_5.2.0.json
- copy_script: common/resources/versions.py
- script:
centos/install_scala:
env_vars: [scala_version]
- script:
centos/install_mapr_core_repository:
env_vars: [plugin_version]
- script:
centos/install_mapr_eco_repository:
env_vars: [plugin_version]
- script: centos/selinux_permissive
- argument_case:
argument_name: java_distro
cases:
openjdk:
- script: centos/install_openjdk
oracle-java:
- script: common/oracle_java
- ubuntu:
- copy_script: common/resources/package_utils.sh
- copy_script: common/resources/packages.json
- copy_script: common/resources/spec_5.1.0.json
- copy_script: common/resources/spec_5.2.0.json
- copy_script: common/resources/spec_5.2.0_ubuntu.json
- copy_script: common/resources/versions.py
- script:
ubuntu/install_scala:
env_vars: [scala_version]
- script:
ubuntu/install_mapr_core_repository:
env_vars: [plugin_version]
- script:
ubuntu/install_mapr_eco_repository:
env_vars: [plugin_version]
- os_case:
- ubuntu:
- argument_case:
argument_name: java_distro
cases:
openjdk:
- script: ubuntu/install_openjdk
oracle-java:
- script: common/oracle_java


@ -1,27 +0,0 @@
#!/bin/bash
DISTRO_NAME=$distro
source "/tmp/package_utils.sh"
echo "START: installing MapR core repository"
MAPR_REPO_URL="http://package.mapr.com/releases/v${plugin_version}/ubuntu/mapr-v${plugin_version}GA.deb.tgz"
MAPR_REPO_DIR="/opt/mapr-repository/core"
if [ ! -d "$MAPR_REPO_DIR" ] || [ -z "$(ls -A $MAPR_REPO_DIR)" ]; then
if [ $test_only -eq 0 ]; then
MAPR_REPO_NAME="mapr_core"
echo "Downloading MapR repository archive"
mkdir -p "$MAPR_REPO_DIR" && curl "$MAPR_REPO_URL" | tar -xz -C "$MAPR_REPO_DIR"
echo "Creating local repository"
create_repo "$MAPR_REPO_DIR"
echo "Adding MapR repository"
add_local_repo "$MAPR_REPO_NAME" "$MAPR_REPO_DIR"
fi
fi
echo "END: installing MapR core repository"


@ -1,22 +0,0 @@
#!/bin/bash
echo "START: installing MapR core dependencies"
if [ ! -f /etc/apt/sources.list.d/security_repo.list ]; then
if [ $test_only -eq 0 ]; then
# Required for libicu48
cat >> /etc/apt/sources.list.d/security_repo.list << EOF
deb http://security.ubuntu.com/ubuntu precise-security main
EOF
# Required for libmysqlclient16
cat >> /etc/apt/sources.list.d/security_repo.list << EOF
deb http://old-releases.ubuntu.com/ubuntu lucid-security main
EOF
else
exit 0
fi
fi
apt-get update
echo "END: installing MapR core dependencies"


@ -1,32 +0,0 @@
#!/bin/bash
VERSIONS_PY="/tmp/versions.py"
DISTRO_NAME=$distro
source "/tmp/package_utils.sh"
echo "START: installing MapR ecosystem repository"
MAPR_REPO_URL="http://package.mapr.com/releases/MEP/MEP-2.0.0/ubuntu/ binary trusty"
MAPR_REPO_DIR="/opt/mapr-repository/ecosystem"
if [ ! -d "$MAPR_REPO_DIR" ] || [ -z "$(ls -A $MAPR_REPO_DIR)" ]; then
if [ $test_only -eq 0 ]; then
MAPR_REPO_NAME="mapr_ecosystem"
MAPR_PKG_GROUPS="/tmp/packages.json"
        if [ -f "/tmp/spec_${plugin_version}_ubuntu.json" ]; then
            MAPR_SPEC="/tmp/spec_${plugin_version}_ubuntu.json"
else
MAPR_SPEC="/tmp/spec_$plugin_version.json"
fi
echo "Creating local MapR ecosystem repository"
localize_repo "$MAPR_REPO_NAME" "$MAPR_REPO_URL" "$MAPR_PKG_GROUPS" "$MAPR_SPEC" "$MAPR_REPO_DIR"
echo $MAPR_SPEC
fi
fi
echo "END: installing MapR ecosystem repository"


@ -1,10 +0,0 @@
#!/bin/bash
echo "Installing OpenJDK"
if [ $test_only -eq 0 ]; then
apt-get install -y openjdk-7-jdk
echo "OpenJDK has been installed"
else
exit 0
fi


@ -1,37 +0,0 @@
#!/bin/bash
echo "START: installing Scala"
DEF_VERSION="2.11.6"
if [ $test_only -eq 0 ]; then
RETURN_CODE="$(curl -s -L -o /dev/null -w "%{http_code}" https://www.scala-lang.org/)"
if [ "$RETURN_CODE" != "200" ]; then
echo "https://www.scala-lang.org is unreachable" && exit 1
fi
if [ $(lsb_release -c -s) == "trusty" ]; then
VERSION=${DEF_VERSION}
else
if [ "${scala_version}" != "1" ]; then
VERSION=$scala_version
else
VERSION="$(curl -s -L --fail https://www.scala-lang.org| tr -d '\n' | sed 's/^.*<div[^<]\+scala-version">[^0-9]\+\([0-9\.\?]\+\)<.\+$/\1/')"
if [ $? != 0 -o -z "${VERSION}" ]; then
echo "Installing default version $DEF_VERSION"
VERSION=${DEF_VERSION}
fi
fi
fi
PKG=scala-${VERSION}
URL="https://downloads.lightbend.com/scala/${VERSION}"
wget -N ${URL}/${PKG}.deb
dpkg -i ${PKG}.deb
rm ${PKG}.deb
fi
echo "END: installing Scala"


@ -1,5 +0,0 @@
#!/bin/bash
if [ $test_only -eq 0 ]; then
apt-get update
fi


@ -1,69 +0,0 @@
#!/bin/bash
set -e
JAVA_TARGET_LOCATION="/usr/java"
export JAVA_DOWNLOAD_URL=${JAVA_DOWNLOAD_URL:-"http://download.oracle.com/otn-pub/java/jdk/7u51-b13/jdk-7u51-linux-x64.tar.gz"}
JAVA_HOME=$TARGET_ROOT$JAVA_TARGET_LOCATION
mkdir -p $JAVA_HOME
JAVA_FILE=$(basename $JAVA_DOWNLOAD_URL)
wget --no-check-certificate --no-cookies -c \
--header "Cookie: gpw_e24=http://www.oracle.com/; \
oraclelicense=accept-securebackup-cookie" \
-O $JAVA_HOME/$JAVA_FILE $JAVA_DOWNLOAD_URL
if [ $? -eq 0 ]; then
echo "Java download successful"
else
echo "Error downloading $JAVA_DOWNLOAD_URL, exiting"
exit 1
fi
cd $JAVA_HOME
if [[ $JAVA_FILE == *.tar.gz ]]; then
echo -e "\n" | tar -zxf $JAVA_FILE
JAVA_NAME=`ls -1 $JAVA_TARGET_LOCATION | grep -v tar.gz`
chown -R root:root $JAVA_HOME
cat >> /etc/profile.d/java.sh <<- EOF
# Custom Java install
export JAVA_HOME=$JAVA_TARGET_LOCATION/$JAVA_NAME
export PATH=\$PATH:$JAVA_TARGET_LOCATION/$JAVA_NAME/bin
EOF
case "$1" in
Ubuntu )
update-alternatives --install "/usr/bin/java" "java" \
"$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/java" 1
update-alternatives --install "/usr/bin/javac" "javac" \
"$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/javac" 1
update-alternatives --install "/usr/bin/javaws" "javaws" \
"$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/javaws" 1
update-alternatives --set java \
$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/java
update-alternatives --set javac \
$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/javac
update-alternatives --set javaws \
$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/javaws
;;
Fedora | RedHatEnterpriseServer | CentOS )
alternatives --install /usr/bin/java java \
$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/java 200000
alternatives --install /usr/bin/javaws javaws \
$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/javaws 200000
alternatives --install /usr/bin/javac javac \
$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/javac 200000
alternatives --install /usr/bin/jar jar \
$JAVA_TARGET_LOCATION/$JAVA_NAME/bin/jar 200000
;;
esac
elif [[ $JAVA_FILE == *.bin ]]; then
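    # self-extracting .bin installers prompt for input; pipe a newline to
    # accept and continue unattended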
echo -e "\n" | sh $JAVA_FILE
else
echo "Unknown file type: $JAVA_FILE, exiting"
exit 1
fi
rm $JAVA_FILE


@ -1,31 +0,0 @@
#!/bin/bash
if ! ls /etc/init.d/mysql* > /dev/null 2>&1; then
if [[ $1 == *"Ubuntu"* ]]; then
sudo debconf-set-selections <<< \
'mysql-server mysql-server/root_password password root'
sudo debconf-set-selections <<< \
'mysql-server mysql-server/root_password_again password root'
sudo apt-get install --force-yes -y mysql-server
sudo apt-get install --force-yes -y libmysqlclient16
mysql -uroot -proot mysql -e "UPDATE user SET Password=PASSWORD('') \
WHERE User='root'; FLUSH PRIVILEGES;"
sudo sed -i "s/^\(bind-address\s*=\s*\).*\$/\10.0.0.0/" \
/etc/mysql/my.cnf
sudo service mysql restart
elif [[ $1 == *"CentOS"* ]] || \
[[ $1 == "RedHatEnterpriseServer" ]]; then
if [[ $2 == "7" ]]; then
sudo yum install -y mariadb-server
else
sudo yum install -y mysql-server
fi
elif [[ $1 == *"SUSE"* ]]; then
        sudo zypper install -y mysql-server
else
echo "Unknown distribution"
exit 1
fi
else
echo "Mysql server already installed"
fi


@ -1,12 +0,0 @@
#!/bin/bash
if [[ $1 == *"Ubuntu"* ]]; then
sudo apt-get install --force-yes -y mysql-client libmysql-java
elif [[ $1 == *"CentOS"* ]] || [[ $1 == "RedHatEnterpriseServer" ]]; then
sudo yum install -y mysql
elif [[ $1 == *"SUSE"* ]]; then
sudo zypper install mysql-community-server-client mysql-connector-java
else
echo "Unknown distribution"
exit 1
fi


@ -1,28 +0,0 @@
#!/bin/bash
# Currently available version
DEF_VERSION="2.11.5"
VERSION=$(wget -qO- http://www.scala-lang.org |\
grep 'scala-version' | grep -Eo '([0-9]\.?)+')
if [ $? != 0 -o -z "${VERSION}" ]; then
VERSION=${DEF_VERSION}
fi
PKG=scala-${VERSION}
URL="http://downloads.typesafe.com/scala/${VERSION}"
if [ "$1" = "Ubuntu" ]; then
wget -N ${URL}/${PKG}.deb
dpkg -i ${PKG}.deb
rm ${PKG}.deb
# install java if missing
apt-get install -f -y --force-yes
elif [ "$1" = 'CentOS' -o "$1" = 'RedHatEnterpriseServer' ]; then
rpm -Uhv ${URL}/${PKG}.rpm
else
echo "Unknown distribution"
exit 1
fi


@ -1,20 +0,0 @@
#!/bin/bash
MAPR_HOME=/opt/mapr
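# map each host argument to a rack using ${MAPR_HOME}/topology.data;
# hosts without an entry fall back to /default/rack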
while [ $# -gt 0 ] ; do
nodeArg=$1
exec< ${MAPR_HOME}/topology.data
result=""
while read line ; do
ar=( $line )
if [ "${ar[0]}" = "$nodeArg" ]; then
result="${ar[1]}"
fi
done
shift
if [ -z "$result" ]; then
echo -n "/default/rack "
else
echo -n "$result "
fi
done


@ -1,112 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sahara_plugin_mapr.plugins.mapr.domain.node_process as np
import sahara_plugin_mapr.plugins.mapr.domain.service as s
import sahara_plugin_mapr.plugins.mapr.util.commands as cmd
import sahara_plugin_mapr.plugins.mapr.util.validation_utils as vu
DRILL = np.NodeProcess(
name='drill-bits',
ui_name='Drill',
package='mapr-drill',
open_ports=[8047]
)
DRILL_YARN = np.NodeProcess(
name='drill-bits-yarn',
ui_name='Drill on YARN',
package='mapr-drill-yarn',
open_ports=[8047]
)
class Drill(s.Service):
def __init__(self):
super(Drill, self).__init__()
self._name = 'drill'
self._ui_name = 'Drill'
self._node_processes = [DRILL]
self._ui_info = [('Drill', DRILL, {s.SERVICE_UI: 'http://%s:8047'})]
self._validation_rules = [
vu.node_client_package_conflict_vr([DRILL], DRILL_YARN)
]
def install(self, cluster_context, instances):
# Drill requires running cluster
pass
def post_start(self, cluster_context, instances):
instances = instances or cluster_context.get_instances(DRILL)
super(Drill, self).install(cluster_context, instances)
self._set_service_dir_owner(cluster_context, instances)
for instance in instances:
cmd.re_configure_sh(instance, cluster_context)
class DrillV07(Drill):
def __init__(self):
super(DrillV07, self).__init__()
self._version = '0.7'
class DrillV08(Drill):
def __init__(self):
super(DrillV08, self).__init__()
self._version = '0.8'
class DrillV09(Drill):
def __init__(self):
super(DrillV09, self).__init__()
self._version = '0.9'
class DrillV11(Drill):
def __init__(self):
super(DrillV11, self).__init__()
self._version = "1.1"
class DrillV12(Drill):
def __init__(self):
super(DrillV12, self).__init__()
self._version = "1.2"
class DrillV14(Drill):
def __init__(self):
super(DrillV14, self).__init__()
self._version = "1.4"
class DrillV16(Drill):
def __init__(self):
super(DrillV16, self).__init__()
self._version = "1.6"
class DrillV18(Drill):
def __init__(self):
super(DrillV18, self).__init__()
self._version = "1.8"
self._node_processes = [DRILL, DRILL_YARN]
class DrillV19(Drill):
def __init__(self):
super(DrillV19, self).__init__()
self._version = "1.8"
self._node_processes = [DRILL, DRILL_YARN]


@ -1,47 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sahara_plugin_mapr.plugins.mapr.domain.node_process as np
import sahara_plugin_mapr.plugins.mapr.domain.service as s
import sahara_plugin_mapr.plugins.mapr.util.validation_utils as vu
FLUME = np.NodeProcess(
name='flume',
ui_name='Flume',
package='mapr-flume',
open_ports=[]
)
class Flume(s.Service):
def __init__(self):
super(Flume, self).__init__()
self._name = 'flume'
self._ui_name = 'Flume'
self._node_processes = [FLUME]
self._validation_rules = [vu.at_least(1, FLUME)]
class FlumeV15(Flume):
def __init__(self):
super(FlumeV15, self).__init__()
self._version = '1.5.0'
class FlumeV16(Flume):
def __init__(self):
super(FlumeV16, self).__init__()
self._version = '1.6.0'


@ -1,120 +0,0 @@
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sahara_plugin_mapr.plugins.mapr.domain.configuration_file as bcf
import sahara_plugin_mapr.plugins.mapr.domain.node_process as np
import sahara_plugin_mapr.plugins.mapr.domain.service as s
import sahara_plugin_mapr.plugins.mapr.util.validation_utils as vu
HBASE_MASTER = np.NodeProcess(
name='hbmaster',
ui_name='HBase-Master',
package='mapr-hbase-master',
open_ports=[60000, 60010]
)
HBASE_REGION_SERVER = np.NodeProcess(
name='hbregionserver',
ui_name='HBase-RegionServer',
package='mapr-hbase-regionserver',
open_ports=[60020]
)
HBASE_THRIFT = np.NodeProcess(
name='hbasethrift',
ui_name='HBase-Thrift',
package='mapr-hbasethrift',
open_ports=[9090]
)
HBASE_REST = np.NodeProcess(
name="hbaserestgateway",
ui_name="HBase REST",
package="mapr-hbase-rest",
open_ports=[8080, 8085],
)
class HBase(s.Service):
def __init__(self):
super(HBase, self).__init__()
self._name = 'hbase'
self._ui_name = 'HBase'
self._node_processes = [
HBASE_MASTER,
HBASE_REGION_SERVER,
HBASE_THRIFT,
]
self._cluster_defaults = ['hbase-default.json']
self._validation_rules = [
vu.at_least(1, HBASE_MASTER),
vu.at_least(1, HBASE_REGION_SERVER),
]
self._ui_info = [
("HBase Master", HBASE_MASTER, {s.SERVICE_UI: "http://%s:60010"}),
]
def get_config_files(self, cluster_context, configs, instance=None):
hbase_site = bcf.HadoopXML("hbase-site.xml")
hbase_site.remote_path = self.conf_dir(cluster_context)
if instance:
hbase_site.fetch(instance)
hbase_site.load_properties(configs)
return [hbase_site]
class HBaseV094(HBase):
def __init__(self):
super(HBaseV094, self).__init__()
self._version = '0.94.24'
self._dependencies = [('mapr-hbase', self.version)]
class HBaseV0987(HBase):
def __init__(self):
super(HBaseV0987, self).__init__()
self._version = '0.98.7'
self._dependencies = [('mapr-hbase', self.version)]
class HBaseV0989(HBase):
def __init__(self):
super(HBaseV0989, self).__init__()
self._version = '0.98.9'
self._dependencies = [('mapr-hbase', self.version)]
self._node_processes.append(HBASE_REST)
self._ui_info.append(
("HBase REST", HBASE_REST, {s.SERVICE_UI: "http://%s:8085"}),
)
class HBaseV09812(HBase):
def __init__(self):
super(HBaseV09812, self).__init__()
self._version = "0.98.12"
self._dependencies = [("mapr-hbase", self.version)]
self._node_processes.append(HBASE_REST)
self._ui_info.append(
("HBase REST", HBASE_REST, {s.SERVICE_UI: "http://%s:8085"}),
)
class HBaseV111(HBase):
def __init__(self):
super(HBaseV111, self).__init__()
self._version = "1.1.1"
self._dependencies = [("mapr-hbase", self.version)]
self._node_processes.append(HBASE_REST)
self._ui_info.append(
("HBase REST", HBASE_REST, {s.SERVICE_UI: "http://%s:8085"}),
)

Some files were not shown because too many files have changed in this diff