Retire repository

All Fuel repositories in the openstack namespace are already retired; retire
the remaining fuel repositories in the x namespace as well, since they are
unused now.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011675.html

A related change is: https://review.opendev.org/699752.

Change-Id: I8e85606041fed1a0608cda1ff62db4e90a8892ae
This commit is contained in:
Andreas Jaeger 2019-12-18 19:30:17 +01:00
parent d979f0f787
commit 2b961178e3
133 changed files with 10 additions and 96914 deletions

30
.gitignore vendored

@@ -1,30 +0,0 @@
# python
*.egg-info
*.eggs
*.egg
.venv
*.pyc
.tox
# IDEs
.idea
.settings
.project
# Doc
doc/_build
# Editors
*.swp
*~
# Bundle
Gemfile.lock
.bundled_gems
.bundle
# Misc
*.log
.librarian
AUTHORS
ChangeLog

2
.rspec

@@ -1,2 +0,0 @@
-f doc
--color

23
Gemfile

@@ -1,23 +0,0 @@
source 'https://rubygems.org'
group :development, :test do
gem 'puppetlabs_spec_helper', '1.1.1'
gem 'puppet-lint', '~> 0.3.2'
gem 'rspec-puppet', '~> 2.4.0'
gem 'rspec-puppet-utils', '~> 2.0.0'
gem 'deep_merge'
gem 'pry-byebug'
gem 'puppet-spec'
gem 'colorize'
gem 'parallel'
gem 'openstack'
gem 'webmock'
end
if ENV['PUPPET_GEM_VERSION']
gem 'puppet', ENV['PUPPET_GEM_VERSION']
else
gem 'puppet', '~> 3.8.0'
end
# vim:ft=ruby

176
LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,83 +0,0 @@
# fuel-noop-fixtures
--------------
## Table of Contents
1. [Overview - What is the fuel-noop-fixtures?](#overview)
2. [Structure - What is in the fuel-noop-fixtures?](#structure)
3. [Development](#development)
4. [Core Reviewers](#core-reviewers)
5. [Contributors](#contributors)
## Overview
-----------
The fuel-noop-fixtures is a helper repo to store fixtures for Fuel Noop tests.
## Structure
------------
### Basic Repository Layout
```
fuel-noop-fixtures
├── LICENSE
├── README.md
├── catalogs
├── doc
├── hiera
├── facts
├── noop_tests.rb
```
### root
The root level contains important repository documentation and license
information.
### catalogs
The catalogs directory contains a committed state of Fuel Library deployment
data fixtures used for
[data regression checks](https://blueprints.launchpad.net/fuel/+spec/deployment-data-dryrun).
### doc
This directory contains the documentation for the noop tests framework, which
shows Fuel developers how to write integration tests for the supported
composition layers.
### hiera
This directory contains hiera data templates for integration
[Fuel Library Noop tests](https://github.com/openstack/fuel-library/tree/master/tests/noop).
### facts
This directory contains known facts for the Fuel Library Noop integration tests.
### lib
This directory contains the noop tests framework itself.
### spec
This directory contains unit tests for the noop tests framework.
### noop_tests.rb
The main executable file.
## Development
--------------
* [Fuel How to Contribute](https://wiki.openstack.org/wiki/Fuel/How_to_contribute)
## Core Reviewers
-----------------
* [Fuel Noop Fixtures Cores](https://review.openstack.org/#/admin/groups/1205,members)
## Contributors
---------------
* [Stackalytics](http://stackalytics.com/?release=all&project_type=all&module=fuel-noop-fixtures&metric=commits)

10
README.rst Normal file

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

1
catalogs/.gitignore vendored

@@ -1 +0,0 @@
*.pp

10
demo.rb

@@ -1,10 +0,0 @@
#!/usr/bin/env ruby
require_relative './noop_tests'
ENV['SPEC_SPEC_DIR'] = './spec/demo-hosts'
if $0 == __FILE__
manager = Noop::Manager.new
manager.main
end

@@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/FuelNoopTests.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/FuelNoopTests.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/FuelNoopTests"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/FuelNoopTests"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

@@ -1,258 +0,0 @@
# -*- coding: utf-8 -*-
#
# Fuel Noop Tests documentation build configuration file, created by
# sphinx-quickstart on Thu Feb 11 20:43:50 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel Noop Tests'
copyright = u'2016, Mirantis inc'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'FuelNoopTestsdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'FuelNoopTests.tex', u'Fuel Noop Tests Documentation',
u'Mirantis inc', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuelnooptests', u'Fuel Noop Tests Documentation',
[u'Mirantis inc'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'FuelNoopTests', u'Fuel Noop Tests Documentation',
u'Mirantis inc', 'FuelNoopTests', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False


@@ -1,80 +0,0 @@
.. _fuel_noop_fixtures:
Fuel Noop fixtures
==================
There is a separate `fuel-noop-fixtures`_ repository to store all of the
fixtures and libraries required for the noop tests execution.
This repository is automatically fetched into the
*tests/noop/fuel-noop-fixtures* directory before the noop tests are run.
Developers of the noop tests can add new Hiera and facts yaml files into this
repository instead of the main `fuel-library`_ repository starting from the
Fuel Mitaka (9.0) release.
.. note:: The fixtures for Fuel <= 8.0 belong to the fuel-library
repository and must be changed there.
.. _fuel-noop-fixtures: https://github.com/openstack/fuel-noop-fixtures
.. _fuel-library: https://github.com/openstack/fuel-library
Automatic generation of fixtures
--------------------------------
The fixtures must contain data as it comes from the Fuel deployment data
backend (Nailgun). Fixtures contain only data specific to the corresponding
Fuel version. Manual changes to the fixtures' data should be avoided.
The current approach to generating the fixtures is semi-automated and
requires a deployed Fuel master node of the given release. To generate the
fixtures, first create each deployment case (environment) under test, for
example:
.. code-block:: console
$ fuel env --create --name test_neutron_vlan --rel 2 --net vlan
Then query, update and upload the environment attributes as required. For example,
to test a Ceph-for-all-but-ephemeral-plus-Ceilometer deployment:
.. code-block:: console
$ fuel env --attributes --env 1 --download
$ ruby -ryaml -e '\
> attr = YAML.load(File.read("./cluster_1/attributes.yaml"))
> attr["editable"]["storage"]["images_ceph"]["value"] = true
> attr["editable"]["storage"]["objects_ceph"]["value"] = true
> attr["editable"]["storage"]["volumes_ceph"]["value"] = true
> attr["editable"]["storage"]["volumes_lvm"]["value"] = false
> attr["editable"]["additional_components"]["ceilometer"]["value"] = true
> File.open("./cluster_1/attributes.yaml", "w").write(attr.to_yaml)'
$ fuel env --attributes --env 1 --upload
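The ``ruby -ryaml`` one-liner above can also be written as a plain script. The
following is a minimal sketch, assuming only the attribute paths touched by the
one-liner (a real ``cluster_N/attributes.yaml`` contains many more keys):

```ruby
require 'yaml'

# Illustrative fragment of the attributes layout; only the paths used by
# the one-liner above are reproduced here.
attributes = {
  'editable' => {
    'storage' => {
      'images_ceph'  => { 'value' => false },
      'objects_ceph' => { 'value' => false },
      'volumes_ceph' => { 'value' => false },
      'volumes_lvm'  => { 'value' => true },
    },
    'additional_components' => { 'ceilometer' => { 'value' => false } },
  },
}

downloaded = attributes.to_yaml  # stands in for the file saved by --download

# Enable Ceph for images, objects and volumes, disable LVM volumes,
# and enable Ceilometer, then serialize back for --upload.
attr = YAML.load(downloaded)
%w[images_ceph objects_ceph volumes_ceph].each do |key|
  attr['editable']['storage'][key]['value'] = true
end
attr['editable']['storage']['volumes_lvm']['value'] = false
attr['editable']['additional_components']['ceilometer']['value'] = true

updated  = attr.to_yaml          # ready to write back to attributes.yaml
reloaded = YAML.load(updated)
```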
Finally, add nodes and assign roles for the case under test, then generate and
store the data fixtures as YAML files, for example:
.. code-block:: console
$ fuel --env 1 node set --node 1 --role controller
$ fuel --env 1 node set --node 2 --role compute,ceph-osd
$ fuel deployment --default --env 1
$ ls /root/deployment_1
ceph-osd_2.yaml compute_2.yaml primary-controller_1.yaml
Those files are now ready to be renamed and put under the `hiera`
directory, like this:
.. code-block:: console
$ git clone https://github.com/openstack/fuel-noop-fixtures
$ mv /root/deployment_1/compute_2.yaml \
> ./fuel-noop-fixtures/hiera/neut_vlan.ceph.ceil-compute.yaml
$ mv /root/deployment_1/ceph-osd_2.yaml \
> ./fuel-noop-fixtures/hiera/neut_vlan.ceph.ceil-ceph-osd.yaml
$ mv /root/deployment_1/primary-controller_1.yaml \
> ./fuel-noop-fixtures/hiera/neut_vlan.ceph.ceil-primary-controller.yaml
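The renaming convention used by the ``mv`` commands above (strip the node id
from the role name, prepend the test case prefix) can be sketched as a small
hypothetical Ruby helper; the ``fixture_name`` function is illustrative, not
part of the framework:

```ruby
# Derive the hiera fixture name for a generated deployment file, e.g.
# "compute_2.yaml" with prefix "neut_vlan.ceph.ceil" becomes
# "neut_vlan.ceph.ceil-compute.yaml".
def fixture_name(prefix, deployment_file)
  role = File.basename(deployment_file, '.yaml').sub(/_\d+\z/, '')
  "#{prefix}-#{role}.yaml"
end
```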
Note that there is a `script`_ to automate this to a certain degree.
Hopefully, the auto-generation process will be improved eventually.
.. _script: https://github.com/fuel-noop-fixtures/utils/blob/master/generate_yamls.sh


@@ -1,13 +0,0 @@
Using additional RSpec matchers and task helpers
================================================
The framework provides some additional RSpec matchers one may want to use.
ensure_transitive_dependency(before, after)
-------------------------------------------
This matcher checks whether there is a dependency between the *before* and
*after* resources, even if the dependency is transitive, established through
several other resources or containers such as classes or defines.
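Conceptually, the transitive check is reachability over the catalog's
dependency edges. The following is a minimal standalone sketch of that idea,
with hypothetical resource names; it is not the rspec-puppet-utils
implementation:

```ruby
# Dependency edges: each resource maps to the resources it must come before.
EDGES = {
  'Class[a]'   => ['Package[x]'],
  'Package[x]' => ['Service[y]'],
  'Service[y]' => [],
}

# True if `after` is reachable from `before` through any chain of edges,
# i.e. the ordering holds even when it is only transitive.
def transitive_dependency?(edges, before, after)
  seen  = []
  stack = edges.fetch(before, []).dup
  until stack.empty?
    node = stack.pop
    next if seen.include?(node)
    seen << node
    return true if node == after
    stack.concat(edges.fetch(node, []))
  end
  false
end
```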


@@ -1,29 +0,0 @@
=============================
Fuel Library Noop Tests Guide
=============================
Abstract
~~~~~~~~
The fuel-library is a collection of Puppet modules and related code used by
Fuel to deploy OpenStack environments. There are top-scope Puppet manifests,
known as Fuel library modular tasks. This guide documents the Fuel Noop testing
framework for these modular tasks.
Contents
~~~~~~~~
.. toctree::
   :maxdepth: 2

   fixtures
   structure
   utility
   usage
   helpers
Search in this guide
~~~~~~~~~~~~~~~~~~~~
* :ref:`search`

@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\FuelNoopTests.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\FuelNoopTests.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end

Structure
=========
Data files
----------
To run a noop test on a spec, the following files are required:
- A spec file: *(i.e. spec/hosts/my/my_spec.rb)*
- A task file: *(i.e. modular/my/my.pp)*
- One of the Facts sets: *(i.e. ubuntu.yaml)*
- One of the Hiera files: *(i.e. neut_vlan.ceph.controller-ephemeral-ceph.yaml)*
Any single task is a combination of three attributes: a spec file, a hiera
yaml file and a facts file. The manifest file name and location will be
determined automatically based on the spec file. The RSpec framework will try
to compile the Puppet catalog using the manifest file and the modules from the
module path. It will use the facts from the facts file and the Hiera data from
the hiera file.

If the spec is empty, the test only checks that the catalog compiles without
any errors. This is actually not a bad thing, because even empty specs can
catch most basic errors and problems. But if the spec has a
**shared_examples 'catalog'** block defined and there are several examples
present, they will be run against the compiled catalog, and the matchers will
be used to determine whether the examples pass or not.
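The way the task list is formed can be sketched as a cartesian product of the
three attributes (the file names below are examples, not the real task
library):

```ruby
# Sketch: every allowed combination of a spec, a hiera yaml and a facts yaml
# becomes one runnable task. File names are invented examples.
specs = ['firewall/firewall_spec.rb']
hiera = ['neut_vlan.ceph.ceil-compute.yaml', 'neut_vlan.compute.ssl.yaml']
facts = ['ubuntu.yaml']

# Array#product builds the cartesian product of the three lists.
tasks = specs.product(hiera, facts)
tasks.each do |spec, hiera_file, facts_file|
  puts "#{spec} | #{hiera_file} | #{facts_file}"
end
```

In practice the framework restricts the product to the combinations allowed
for each spec, as described in the annotations section of the usage guide.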
Every Hiera yaml file also has a corresponding *globals* yaml file that
contains additional processed variables. These files are also used by most of
the spec tests. If you make any changes to the hiera yaml files, you should
also recreate the globals files by running the *globals/globals* specs with
the *save globals* option enabled. The new files can later be committed into
the fixtures repository.
And, finally, there is an override system for the hiera and facts yaml files.
For each spec file you can create a hiera or facts yaml file with a special
name. This file will be used on top of the rest of the file hierarchy. It can
be very useful when you need to provide some custom data which is relevant
only to the one task you are working on, without touching any other tasks.
Framework components
--------------------
The Noop test framework consists of three components: the task manager,
the config and the task.

The task manager is responsible for collecting information about the present
files, manipulating the task library, processing the console options and
environment variables and, finally, running the tasks using the task objects
and processing their reports.

The config object contains the basic information about the directory
structure, some default values and the values passed from the external
environment variables. This object is static and persists between the
instances of all other objects.
The task object is the instance of a single test run. It works with the spec,
manifest, Hiera and facts yaml paths and runs the actual RSpec command to
start the test.

A similar instance of the task will be created inside
the RSpec process namespace and will be used to provide information about
the current task, as well as many different helpers and features
for the spec users. This object can be accessed through the proxy method of
the root **Noop** object, which keeps a reference to the current task instance.

Typical use cases
=================
Let's discuss the most common use cases of the Noop test framework and how it
can be used.
Initial setup
-------------
In most cases you should set up the environment before using the Noop test
framework. The setup consists of three parts:
- Fixtures repository clone
- Ruby gems installation
- Puppet modules download
There is a wrapper script, **tests/noop/setup_and_diagnostics.sh**, that will
try to do all these things. First, it will clone the fixtures repository
unless it has already been cloned; then it will run
**tests/noop/noop_tests.sh** with the **-b** and **-B** options to create and
use the bundle gems folder. The **-l** option will enable Puppet modules
update, and the **-t** option will initiate check procedures for the paths and
the task library content.

If you are using *RVM* or are managing Ruby gems manually, you are free to
skip this step and clone the fixtures repository manually.
Running all tests using multiple processes
------------------------------------------
Running **tests/noop/noop_tests.sh** without any options will try to execute
all the spec tasks one by one. The list of spec tasks will be generated by
combining all the possible combinations of specs, hiera files and facts files
that are allowed for each spec. Adding the **-p** option allows you to review
the list of tasks that would be run without actually running them.

Running tasks one by one is a very time consuming process, so you should
use a multi-process run instead by providing the **-j** option with the
number of processes that can be started simultaneously. In this mode you
will not see the output of every RSpec process, but you will get
the combined result report of all test runs at the end of the process.

You can also monitor the progress of tasks by using the debug option **-d**.
It will show you which tasks are starting and finishing and what shell
commands and environment variables are used to run them.::

    tests/noop/noop_tests.sh -j 24 -d

Will run all the spec tasks with debug output, keeping no more than 24
child processes running at any time.::

    tests/noop/noop_tests.sh -p

Will output the list of tasks that are going to be run, together with the
facts and yaml files used.

There is also the **run_all.sh** shortcut script for this action.
Running only a subset of tasks
------------------------------
In many cases you may want to run only a subset of tasks, down to a single
task. You can use filters to do so. Providing the **-s**, **-y** and **-f**
options allows you to set one or more specs, yamls and facts that you want to
use. The list of tasks will be built by filtering out everything else that you
have not specified. Don't forget that you can use the **-p** option to review
the list of tasks before actually running them.

The list options **-Y**, **-F**, **-S** and **-T** can be used to view the
lists of all hiera yaml files, facts files, spec files and task files
respectively. These lists are very helpful for finding the correct values for
the filter options you want to use. Note that using filter and list options
together will show you the filtered lists.::
    tests/noop/noop_tests.sh -Y

Will list all available hiera yaml files.::

    tests/noop/noop_tests.sh -y neut_vlan.compute.ssl.yaml -p

Will show you which tasks are going to run with this yaml file.::

    tests/noop/noop_tests.sh -y neut_vlan.compute.ssl -s firewall/firewall -f ubuntu

Will run the *firewall/firewall* spec test with the provided yaml and facts
files, but only if this combination is allowed for this spec.
Note that you can either provide the **.yaml**, **_spec.rb** and **.pp**
extensions for yaml and spec files, or omit them and they will be resolved
automatically.::

    tests/noop/noop_tests.sh -y neut_vlan.compute.ssl,neut_vlan.compute.nossl -s firewall/firewall,netconfig/netconfig -p

Filters can be used with a list of elements or can be given as a regular
expression or a list of regular expressions::

    ./noop_tests.sh -p -s 'master/.*'

Will filter all tasks in the *master* group.::

    ./noop_tests.sh -p -s '^ceph.*,^heat.*,glance/db_spec'

Will filter all *ceph* and *heat* tasks and the glance/db_spec task
individually.::

    ./noop_tests.sh -p -s '^ceph.*' -y ceph

All *ceph* related tasks, only on Hiera files which have Ceph enabled.
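The filter behaviour above can be sketched like this (the spec names and the
selection helper are illustrative, not the framework's code):

```ruby
# Sketch: applying a comma-separated filter of regular expressions, such as
# the -s '^ceph.*,^heat.*,glance/db' example above, to a list of spec names.
# The spec names here are invented examples.
all_specs = ['ceph/mon', 'ceph/osd', 'heat/heat', 'glance/db', 'master/setup']
filter    = '^ceph.*,^heat.*,glance/db'

# Each comma-separated element becomes a regular expression; a spec is kept
# if it matches any of them.
patterns = filter.split(',').map { |element| Regexp.new(element) }
selected = all_specs.select { |spec| patterns.any? { |p| p.match?(spec) } }
puts selected.inspect
```

Everything that matches none of the patterns (here, *master/setup*) is
filtered out of the task list.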
Recreating globals yaml files
-----------------------------
All globals files should already be pre-created and committed to the fixtures
repository, and in most cases there is no need for you to create them again.
But if you have made changes to the existing yaml files or have
added a new one, you should create the globals yamls again.

You can do it by running *tests/noop/noop_tests.sh* with the **-g** option.
It will set filters to run only the globals tasks, as well as enabling the
option to save the generated yaml files. Using the **-j** option will make the
process much faster.

There is also the **run_globals.sh** shortcut script for this action.
Spec file annotations
---------------------
The framework builds the list of tasks to run by combining all allowed facts
and yaml files for each spec file and creating a task for every combination.
By default, the list of yaml files allowed for each spec is determined
by the intersection of the node roles the spec should be run on (obtained from
the *tasks.yaml* files used by Nailgun to compile the deployment graph) and
the hiera file roles (obtained from the hiera files themselves). The facts
file defaults to **ubuntu**.

In most cases it is better to manually specify the hiera files and,
possibly, the facts files for your spec file, because running a task for every
hiera file with the same roles would be redundant. On the other hand, if you
know which Hiera files can cause different behaviour of your task, and you
want to test this behaviour in the different scenarios, you can explicitly
specify the list of yaml files and facts files you need.
The list of Hiera files can be set by using the **HIERA:** commented
annotation string followed by a list of hiera file names separated by the
space character.::

    # HIERA: neut_vlan.compute.ssl neut_vlan.compute.nossl

The list of facts files can be specified the same way using the **FACTS:**
annotation.::

    # FACTS: centos6 centos7

The list of tasks will contain this spec with all possible combinations of the
specified Hiera and facts files. If you need to enter only the precise list of
possible run combinations, you can use the **RUN:** annotation.::

    # RUN: (hiera1) (facts1)
    # RUN: (hiera2) (facts2)

It can be specified many times, and all entered combinations will be added to
the list.
You can use the **ROLE:** annotation to specify the list of node roles this
spec should be run on. It will find the list of Hiera yaml files
that have roles matching this list.::

    # ROLE: controller
    # ROLE: primary-controller

There is also a way to use the reverse logic. You can specify the Hiera
and facts yaml files that you want to exclude from the list, instead of
providing the list of included files.::

    # SKIP_HIERA: neut_vlan.compute.ssl neut_vlan.compute.nossl
    # SKIP_FACTS: centos6

These yaml files will be excluded from the list of possible yaml files. If
you have used both include and exclude options, the exclude option has
priority over the include option. If there are no included Hiera files,
the list of Hiera files will be generated from the node roles.

Note that all these options can be combined. The following example means:
take all 'compute' yamls, add neut_vlan_l3ha.ceph.ceil-primary-controller,
remove neut_vlan.compute.ssl, and then add the master/master_centos7 run.::

    # ROLE: compute
    # HIERA: neut_vlan_l3ha.ceph.ceil-primary-controller.yaml
    # RUN: master master_centos7
    # SKIP_HIERA: neut_vlan.compute.ssl.yaml

The final annotation, **DISABLE_SPEC**, allows you to temporarily hide the
spec from the framework. It can be useful if you want to turn off
a spec with run problems and fix them later without breaking the tests.::

    # DISABLE_SPEC

A spec file with this annotation will be completely ignored.
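A simplified sketch of how such commented annotations could be read from a
spec file (this parser is an illustration, not the framework's
implementation):

```ruby
# Sketch: collecting the annotation values from a spec file's text.
# The spec text below is an invented example.
spec_text = <<~SPEC
  # ROLE: compute
  # HIERA: neut_vlan_l3ha.ceph.ceil-primary-controller.yaml
  # SKIP_HIERA: neut_vlan.compute.ssl.yaml
SPEC

# Each annotation keyword maps to the list of space-separated values
# accumulated over all of its occurrences.
annotations = Hash.new { |hash, key| hash[key] = [] }
spec_text.each_line do |line|
  next unless line =~ /^#\s*(ROLE|HIERA|FACTS|RUN|SKIP_HIERA|SKIP_FACTS):\s*(.*)$/
  annotations[$1].concat($2.split)
end

puts annotations['HIERA'].inspect
```

The framework would then combine the collected lists into the final set of
task run combinations for the spec.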
Using hiera and facts overrides
-------------------------------
In some cases you need a special set of facts values or Hiera data for
your task. If these values are very specific and are not useful for other
tasks, you can use the override system instead of creating a new Hiera or
facts yaml.

There are *override* folders inside the Hiera and facts folders. If you place
a yaml file with the specific name into this folder, it will be used during
the spec catalog compilation as the top level of Hiera's hierarchy. The values
specified there will take precedence over the values in the other yaml files.
Hash values will be merged with the base values, and matching keys will be
overwritten. Facts yamls work the same way, overriding the base values with
the values specified in the override file.

Both yaml files should be named after the task name, with the path separator
changed to the dash character. For example, the **firewall/firewall** task
will use the override file named *firewall-firewall.yaml*, and the
**openstack-controller/keystone** task will use the file named
*openstack-controller-keystone.yaml*, if these files are found in the
override folders.
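The naming rule can be sketched in a few lines (the helper name is invented):

```ruby
# Sketch of the task-name to override-file mapping described above: the path
# separator becomes a dash and the '.yaml' extension is appended.
def override_file_name(task_name)
  task_name.tr('/', '-') + '.yaml'
end

puts override_file_name('firewall/firewall')              # firewall-firewall.yaml
puts override_file_name('openstack-controller/keystone')  # openstack-controller-keystone.yaml
```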
Using hiera plugin overrides
----------------------------
If you have several additional YAML files that should be applied on top of
a base Hiera file, for example files provided or generated by plugins
or by other tasks, you can use the plugin override system. Any files
placed into the *hiera/plugins/${yaml_base_name}/* folder will be
applied on top of that YAML file when it is used in the Noop tests.
Multiple files inside this directory will be ordered alphabetically, with
earlier names having the higher priority.
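The ordering rule can be sketched like this (file names and data are
invented):

```ruby
# Sketch: plugin override files are sorted alphabetically, and a file with an
# earlier name wins on key conflicts, so the later-sorting files are merged
# first and the earlier-sorting files are merged on top of them.
base      = { 'debug' => false, 'workers' => 2 }
overrides = {
  '20-extra.yaml'  => { 'workers' => 8 },
  '10-plugin.yaml' => { 'workers' => 4, 'debug' => true },
}

merged = overrides.keys.sort.reverse
                  .map { |name| overrides[name] }
                  .reduce(base) { |result, data| result.merge(data) }
# 10-plugin.yaml wins the 'workers' conflict: workers => 4, debug => true
```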
Working with report files
-------------------------
When the task manager runs the tasks, they leave report files, and the manager
collects them to generate the combined test report seen at the end of the test
process. These files can be found in the reports folder and are in JSON
format.

You can use the **-r** and **-R** options to load the saved reports from the
previous run and display the report again, or to load the reports and re-run
the tasks that have previously failed after you have tried to fix them.

You can use the **-o** option to filter out only failed tasks and examples
from the report, and the **-O** option to show only tasks without the
individual examples. These options can be used together to show only failed
tasks.

The task manager can also generate a test report in *jUnit XML* format using
the **-x** option. It will be saved to the **report.xml** file in the
*reports* folder of the fixtures repository. This file can be used by many
tools to visualize the test results, notably by the Jenkins CI.
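The failed-only filtering of the **-r**/**-o** combination can be sketched
like this (the JSON report schema below is invented for illustration; it is
not the framework's actual format):

```ruby
require 'json'

# Sketch: load saved JSON reports and keep only the failed examples,
# dropping tasks whose examples all passed. The schema is invented.
reports_json = <<~JSON
  [{"task": "firewall/firewall",
    "examples": [{"name": "compiles",   "status": "passed"},
                 {"name": "opens port", "status": "failed"}]},
   {"task": "netconfig/netconfig",
    "examples": [{"name": "compiles", "status": "passed"}]}]
JSON

failed = JSON.parse(reports_json).map do |report|
  report.merge('examples' => report['examples'].select { |e| e['status'] == 'failed' })
end.reject { |report| report['examples'].empty? }

puts failed.map { |r| r['task'] }.inspect
```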
Catalog debugging
-----------------
There are several features that can be helpful when writing the initial spec
for a task or when debugging a spec failure. Running tasks with the **-a**
option will report which files are being used in this task
run and what files are found in the Hiera and facts hierarchies.

Using the **-A** option will output the entire compiled catalog in the Puppet
DSL format. You can review its content and resource parameters to either find
out what resources and classes are there or to see what values the parameters
and properties actually have after all the catalog logic is processed. It is
very helpful when you are debugging strange task behaviour or writing a spec
file.

The framework can also gather and report information about the *File*
resources being installed by Puppet. The *--save_file_resources* option
will save the list of files that would be installed by the catalog, together
with a description of their source or template. The *--puppet_binary_files*
option enables an additional RSpec matcher that will fail if there are
files and, especially, binary files being installed; these should be
delivered by Fuel packages instead.
Data-driven catalog tests
-------------------------
Usually the spec files try to repeat the logic found in the tested manifests,
derive the same set of resources and their parameters, and compare them to
the set of resources found in the compiled catalog. The matchers are then used
to check whether the catalog contains what is expected.

While this method works well in most cases, it requires a lot of work and
extensive expertise in the task's domain to write a correct and comprehensive
set of specs for a task catalog. Specs also cannot detect new resources or
properties that have not been described in the spec file.

Data-driven tests offer an alternative way to ensure that there are
no unwanted changes in the task catalogs. The idea behind them is building
catalogs in a human-readable format before and after the changes are made.
These files can then be compared, and everything that has been changed
becomes visible.
Using the **-V** option will save the current catalog to the *catalogs*
folder. These generated catalogs can be useful when reviewing complex patches
with major changes to the modules or manifests. A diff of the data changes
may help a developer or reviewer examine the catalog contents and check
that every resource or class receives the correct property values.

You can also use the **-v** option to enable automatic catalog checks. This
should be done after you have generated the initial versions and made some
changes. Running the tests with this option enabled will generate the catalogs
again and compare them to the saved version. If there are differences, the
test will fail and you will be able to locate the failed tasks. Here is an
example workflow one may use to examine a complex patch for data layer changes
(we assume the reviewer is a seasoned Ruby developer who uses the RVM manager
and Bundler):
.. code-block:: console

    $ git clone https://github.com/openstack/fuel-library
    $ cd fuel-library
    $ rvm use ruby-2.1.3
    $ PUPPET_GEM_VERSION=3.4.0
    $ PUPPET_VERSION=3.4.0
    $ ./tests/noop/setup_and_diagnostics.sh -B
    $ ./deployment/remove_modules.sh && ./deployment/update_modules.sh
    $ ./tests/noop/noop_tests.sh -V -j10 -b -s swift
    $ git review -d $swift_die_hard_patch_id
    $ ./tests/noop/noop_tests.sh -v -j10 -b -s swift
At this point, the reviewer will get the data diffs proposed to the swift
modules and can finish a thorough review. Note that the
`./tests/noop/setup_and_diagnostics.sh -B` command starts from a clean state
and sets things up, while removing and updating the modules is required to
make things go smoothly.
Using external environment variables and custom paths
-----------------------------------------------------
There are a number of environment variables used by either the task manager or
by the specs themselves which can alter their behaviour and override the
default or calculated values.
Paths related:
- **SPEC_ROOT_DIR** Set the path to the root folder of the framework. Many
other folders are found relative to this path.
- **SPEC_SPEC_DIR** The path to the folder with the spec files. You can change
  it, but it should be at *spec/hosts* under the root folder or
  rspec-puppet will break.
- **SPEC_MODULE_PATH** or **SPEC_MODULEPATH** Set the path to the modules
library. It can be either a path to a single directory with Puppet
modules or a string with colon-separated paths to several module
directories.
- **SPEC_TASK_DIR** Set the path to the task manifests folder.
- **SPEC_DEPLOYMENT_DIR** Set the path to the *deployment* directory. It is
  actually used only to find the scripts to update and reset modules.
- **SPEC_TASK_ROOT_DIR** Set the root path of the RSpec execution.
RSpec command will be run from this directory.
Usually it's the same dir as the **SPEC_ROOT_DIR**.
- **WORKSPACE** This variable is passed by the Jenkins jobs or defaults to
  the *workspace* folder. Currently used only to store the Ruby gems installed
  by the *bundler* if *RVM* is not used.
- **SPEC_FACTS_DIR** The path to the folder with facts yaml files.
- **SPEC_HIERA_DIR** or **SPEC_YAML_DIR** The path to the folder with Hiera
yaml files.
Spec related:
- **SPEC_FACTS_NAME** Set the name of the facts file that will be used by the
spec process.
It's set when the task is being run.
- **SPEC_HIERA_NAME** or **SPEC_ASTUTE_FILE_NAME** Set the name of the Hiera
yaml file that will be used by the spec process.
It's set when the task is being run.
- **SPEC_FILE_NAME** Set the spec/manifest file name for the spec process to
test.
It's set when the task is being run and even can override the internal value.
- **SPEC_BUNDLE_EXEC** Use *bundle exec* to run the *rspec* command by the
task object.
- **SPEC_UPDATE_GLOBALS** Save the generated globals files instead of just
  checking that the globals task's catalog compiles without an error.
- **SPEC_CATALOG_SHOW** Ask the spec to output the catalog contents.
- **SPEC_SHOW_STATUS** Ask the spec to output the status text.
Debug related:
- **SPEC_TASK_CONSOLE** Run the pry console in the manager process.
- **SPEC_RSPEC_CONSOLE** Run the pry console in the RSpec process.
- **SPEC_PUPPET_DEBUG** Enable debug output of the Puppet's catalog compilation.
This variable is also used by many other rspec suites of the Mirantis
Puppet modules outside of the Noop tests framework to output the
additional debug information.
- **SPEC_TASK_DEBUG** Enable the debug output of the task and manager objects.
- **SPEC_DEBUG_LOG** This variable sets the debug log destination file.
Fixtures source related:
- **NOOP_FIXTURES_REPO_URL** Fixtures repository. Defaults to
`https://github.com/openstack/fuel-noop-fixtures.git`
- **NOOP_FIXTURES_BRANCH** Fixtures branch. Defaults to `origin/master`
- **NOOP_FIXTURES_GERRIT_URL** Gerrit repository. Defaults to
`https://review.openstack.org/openstack/fuel-noop-fixtures`
- **NOOP_FIXTURES_GERRIT_COMMIT** Gerrit commit ref that should be
cherry-picked. Could contain multiple refs, space separated. Defaults to
`none`
Many of these variables can be set by the Noop manager CLI options, or you can
always export them externally.
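The fallback behaviour of the paired variables above (for example
**SPEC_HIERA_DIR** or **SPEC_YAML_DIR**) can be sketched like this (the helper
name is invented):

```ruby
# Sketch: resolve a setting from a list of alternative environment variable
# names, falling back to a default when none of them is set.
def env_or_default(names, default)
  names.map { |name| ENV[name] }.compact.first || default
end

# The first set variable in the list wins; here both are presumably unset,
# so the default is returned.
hiera_dir = env_or_default(%w[SPEC_HIERA_DIR SPEC_YAML_DIR], 'hiera')
puts hiera_dir
```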

Using the noop_tests utility
============================
The noop_tests options
----------------------
Noop tests framework is actually located in the fixtures repository together
with its yaml data files. There is a wrapper script *tests/noop/noop_tests.sh*
that can be used from the Fuel library repository to automatically setup the
external fixtures repository, configure paths and run the framework.
First, you can use the **-h** option to get the help output.::

    tests/noop/noop_tests.sh -h

Output::
    Usage: noop_tests [options]
    Main options:
        -j, --jobs JOBS              Parallel run RSpec jobs
        -g, --globals                Run all globals tasks and update saved globals YAML files
        -B, --bundle_setup           Setup Ruby environment using Bundle
        -b, --bundle_exec            Use "bundle exec" to run rspec
        -l, --update-librarian       Run librarian-puppet update in the deployment directory prior to testing
        -L, --reset-librarian        Reset puppet modules to librarian versions in the deployment directory prior to testing
        -o, --report_only_failed     Show only failed tasks and examples in the report
        -O, --report_only_tasks      Show only tasks, skip individual examples
        -r, --load_saved_reports     Read saved report JSON files from the previous run and show tasks report
        -R, --run_failed_tasks       Run the tasks that have previously failed again
        -x, --xunit_report           Save report in xUnit format to a file
    List options:
        -Y, --list_hiera             List all hiera yaml files
        -S, --list_specs             List all task spec files
        -F, --list_facts             List all facts yaml files
        -T, --list_tasks             List all task manifest files
    Filter options:
        -s, --specs SPEC1,SPEC2      Run only these spec files. Example: "hosts/hosts_spec.rb,apache/apache_spec.rb"
        -y, --yamls YAML1,YAML2      Run only these hiera yamls. Example: "controller.yaml,compute.yaml"
        -f, --facts FACTS1,FACTS2    Run only these facts yamls. Example: "ubuntu.yaml,centos.yaml"
    Debug options:
        -c, --task_console           Run PRY console
        -C, --rspec_console          Run PRY console in the RSpec process
        -d, --task_debug             Show framework debug messages
        -D, --puppet_debug           Show Puppet debug messages
            --debug_log FILE         Write all debug messages to this file
        -t, --self-check             Perform self-check and diagnostic procedures
        -p, --pretend                Show which tasks will be run without actually running them
    Path options:
            --dir_root DIR           Path to the test root folder
            --dir_deployment DIR     Path to the test deployment folder
            --dir_hiera_yamls DIR    Path to the folder with hiera files
            --dir_facts_yamls DIR    Path to the folder with facts yaml files
            --dir_spec_files DIR     Path to the folder with task spec files (changing this may break puppet-rspec)
            --dir_task_files DIR     Path to the folder with task manifest files
            --dir_puppet_modules DIR Path to the puppet modules
    Spec options:
        -A, --catalog_show           Show catalog content debug output
        -V, --catalog_save           Save catalog to the files instead of comparing them with the current catalogs
        -v, --catalog_check          Check the saved catalog against the current one
        -a, --spec_status            Show spec status blocks
            --puppet_binary_files    Check if Puppet installs binary files
            --save_file_resources    Save file resources list to a report file
Shortcut scripts
----------------
There are also several shortcut scripts near the *noop_tests.sh* file that
can be used to perform some common actions.
- **tests/noop/noop_tests.sh** The main wrapper shell script. It downloads the
fixtures repository, sets the correct paths and setups the Ruby gems. It's
used by many other shortcut scripts.
- **utils/jenkins/fuel_noop_tests.sh** The wrapper script used as an entry point
for the automated Jenkins CI jobs. Runs all tests in parallel mode.
- **tests/noop/run_all.sh** This wrapper will run all tests in parallel mode.
- **tests/noop/run_globals.sh** This wrapper will run all globals tasks and save
the generated globals yaml files.
- **tests/noop/setup_and_diagnostics.sh** This wrapper will first setup the
Ruby environment, download Fuel Library modules and run the noop tests in the
diagnostics mode to check the presence of all folders in the structure and
the numbers of tasks in the library.
- **run_failed_tasks.sh** This wrapper will load the saved reports files from
the previous run and will try to run all the failed tasks again.
- **purge_reports.sh** Removes all task report files.
- **purge_globals.sh** Removes all saved globals files.
- **purge_catalogs.sh** Removes all saved catalog files.

---
:fqdn: 'node01.example.com'
:hostname: 'node01'
:physicalprocessorcount: '4'
:processorcount: '4'
:memorysize_mb: '32138.66'
:memorysize: '31.39 GB'
:kernel: 'Linux'
:osfamily: 'RedHat'
:operatingsystem: 'CentOS'
:operatingsystemrelease: '6.5'
:operatingsystemmajrelease: '6'
:lsbdistid: 'CentOS'
:l3_fqdn_hostname: 'node01'
:l3_default_route: '172.16.1.1'
:concat_basedir: '/tmp/'
:l23_os: 'centos6'
:os_package_type: 'rpm'
:os_service_default: '<SERVICE DEFAULT>'
:os_workers: 2

View File

@ -1,20 +0,0 @@
---
:fqdn: 'node01.example.com'
:hostname: 'node01'
:physicalprocessorcount: '4'
:processorcount: '4'
:memorysize_mb: '32138.66'
:memorysize: '31.39 GB'
:kernel: 'Linux'
:osfamily: 'RedHat'
:operatingsystem: 'CentOS'
:operatingsystemrelease: '7.0'
:operatingsystemmajrelease: '7'
:lsbdistid: 'CentOS'
:l3_fqdn_hostname: 'node01'
:l3_default_route: '172.16.1.1'
:concat_basedir: '/tmp/'
:l23_os: 'centos6'
:os_package_type: 'rpm'
:os_service_default: '<SERVICE DEFAULT>'
:os_workers: 2

View File

@ -1,28 +0,0 @@
# these facts are needed for old-style manifests with $::fuel_settings
:astute_settings_yaml: "\"HOSTNAME\": \"nailgun\"\n\"DNS_DOMAIN\": \"test.domain.local\"\n\"DNS_SEARCH\": \"test.domain.local\"\n\"DNS_UPSTREAM\": \"10.109.0.1\"\n\"NTP1\": \"0.fuel.pool.ntp.org\"\n\"NTP2\": \"1.fuel.pool.ntp.org\"\n\"NTP3\": \"2.fuel.pool.ntp.org\"\n\"ADMIN_NETWORK\":\n \"interface\": \"eth0\"\n \"ipaddress\": \"10.109.0.2\"\n \"netmask\": \"255.255.255.0\"\n \"cidr\": \"10.20.0.0/24\"\n \"size\": \"256\"\n \"dhcp_gateway\": \"10.109.0.1\"\n \"dhcp_pool_start\": \"10.109.0.3\"\n \"dhcp_pool_end\": \"10.109.0.254\"\n \"mac\": \"64:60:46:2e:5d:37\"\n\"FUEL_ACCESS\":\n \"user\": \"admin\"\n \"password\": \"admin\"\n\"BOOTSTRAP\":\n \"MIRROR_DISTRO\": \"http://archive.ubuntu.com/ubuntu\"\n \"MIRROR_MOS\": \"http://mirror.fuel-infra.org/mos-repos/ubuntu/9.0\"\n \"HTTP_PROXY\": \"\"\n \"EXTRA_APT_REPOS\": \"\"\n \"flavor\": \"centos\"\n\"PRODUCTION\": \"docker\"\n\"postgres\":\n \"keystone_dbname\": \"keystone\"\n \"nailgun_user\": \"nailgun\"\n \"keystone_user\": \"keystone\"\n \"nailgun_password\": \"CYoU6RS6\"\n \"ostf_user\": \"ostf\"\n \"nailgun_dbname\": \"nailgun\"\n \"keystone_password\": \"cpppakUb\"\n \"ostf_dbname\": \"ostf\"\n \"ostf_password\": \"TwfzylM7\"\n\"cobbler\":\n \"password\": \"0mMXE4t8\"\n \"user\": \"cobbler\"\n\"astute\":\n \"password\": \"SwLCUx2H\"\n \"user\": \"naily\"\n\"keystone\":\n \"nailgun_user\": \"nailgun\"\n \"monitord_user\": \"monitord\"\n \"nailgun_password\": \"MtC5S2TN\"\n \"monitord_password\": \"9IR0gsgd\"\n \"ostf_user\": \"ostf\"\n \"admin_token\": \"ZoyxrMO6\"\n \"ostf_password\": \"7evzsSBv\"\n\"mcollective\":\n \"password\": \"PPMi1XT2\"\n \"user\": \"mcollective\"\n"
:fuel_release: "9.0"
:fqdn: 'master.example.com'
:hostname: 'master'
:physicalprocessorcount: '4'
:processorcount: '4'
:memorysize_mb: '32138.66'
:memorysize: '31.39 GB'
:kernel: 'Linux'
:osfamily: 'RedHat'
:operatingsystem: 'CentOS'
:operatingsystemrelease: '6.5'
:operatingsystemmajrelease: '6'
:lsbdistid: 'CentOS'
:l3_fqdn_hostname: 'master'
:l3_default_route: '172.16.1.1'
:concat_basedir: '/tmp/'
:l23_os: 'centos6'
:os_package_type: 'rpm'
:os_service_default: '<SERVICE DEFAULT>'
:interfaces: docker0,eth0,lo
:ipaddress: 172.17.42.1
:ipaddress_docker0: 172.17.42.1
:ipaddress_eth0: 10.20.0.2
:ipaddress_lo: 127.0.0.1
:rsyslog_version: 5.8.10
:os_workers: 2

View File

@ -1,28 +0,0 @@
# these facts are needed for old-style manifests with $::fuel_settings
:astute_settings_yaml: "\"HOSTNAME\": \"nailgun\"\n\"DNS_DOMAIN\": \"test.domain.local\"\n\"DNS_SEARCH\": \"test.domain.local\"\n\"DNS_UPSTREAM\": \"10.109.0.1\"\n\"NTP1\": \"0.fuel.pool.ntp.org\"\n\"NTP2\": \"1.fuel.pool.ntp.org\"\n\"NTP3\": \"2.fuel.pool.ntp.org\"\n\"ADMIN_NETWORK\":\n \"interface\": \"eth0\"\n \"ipaddress\": \"10.109.0.2\"\n \"netmask\": \"255.255.255.0\"\n \"ssh_network\": \"10.109.0.0/24\"\n \"cidr\": \"10.20.0.0/24\"\n \"size\": \"256\"\n \"dhcp_gateway\": \"10.109.0.1\"\n \"dhcp_pool_start\": \"10.109.0.3\"\n \"dhcp_pool_end\": \"10.109.0.254\"\n \"mac\": \"64:60:46:2e:5d:37\"\n\"FUEL_ACCESS\":\n \"user\": \"admin\"\n \"password\": \"admin\"\n\"BOOTSTRAP\":\n \"MIRROR_DISTRO\": \"http://archive.ubuntu.com/ubuntu\"\n \"MIRROR_MOS\": \"http://mirror.fuel-infra.org/mos-repos/ubuntu/9.0\"\n \"HTTP_PROXY\": \"\"\n \"EXTRA_APT_REPOS\": \"\"\n \"flavor\": \"centos\"\n\"PRODUCTION\": \"docker\"\n\"postgres\":\n \"keystone_dbname\": \"keystone\"\n \"nailgun_user\": \"nailgun\"\n \"keystone_user\": \"keystone\"\n \"nailgun_password\": \"CYoU6RS6\"\n \"ostf_user\": \"ostf\"\n \"nailgun_dbname\": \"nailgun\"\n \"keystone_password\": \"cpppakUb\"\n \"ostf_dbname\": \"ostf\"\n \"ostf_password\": \"TwfzylM7\"\n\"cobbler\":\n \"password\": \"0mMXE4t8\"\n \"user\": \"cobbler\"\n\"astute\":\n \"password\": \"SwLCUx2H\"\n \"user\": \"naily\"\n\"keystone\":\n \"nailgun_user\": \"nailgun\"\n \"monitord_user\": \"monitord\"\n \"nailgun_password\": \"MtC5S2TN\"\n \"monitord_password\": \"9IR0gsgd\"\n \"ostf_user\": \"ostf\"\n \"admin_token\": \"ZoyxrMO6\"\n \"ostf_password\": \"7evzsSBv\"\n\"mcollective\":\n \"password\": \"PPMi1XT2\"\n \"user\": \"mcollective\"\n"
:fuel_release: "9.0"
:fqdn: 'master.example.com'
:hostname: 'master'
:physicalprocessorcount: '4'
:processorcount: '4'
:memorysize_mb: '32138.66'
:memorysize: '31.39 GB'
:kernel: 'Linux'
:osfamily: 'RedHat'
:operatingsystem: 'CentOS'
:operatingsystemrelease: '7.0'
:operatingsystemmajrelease: '7'
:lsbdistid: 'CentOS'
:l3_fqdn_hostname: 'master'
:l3_default_route: '172.16.1.1'
:concat_basedir: '/tmp/'
:l23_os: 'centos6'
:os_package_type: 'rpm'
:os_service_default: '<SERVICE DEFAULT>'
:interfaces: docker0,eth0,lo
:ipaddress: 172.17.42.1
:ipaddress_docker0: 172.17.42.1
:ipaddress_eth0: 10.20.0.2
:ipaddress_lo: 127.0.0.1
:rsyslog_version: 7.4.7
:os_workers: 2

View File

View File

@ -1,2 +0,0 @@
---
:processorcount: 4

View File

@ -1,2 +0,0 @@
---
:processorcount: 4

View File

@ -1,2 +0,0 @@
---
:processorcount: 4

View File

@ -1,2 +0,0 @@
---
:processorcount: 4

View File

@ -1,302 +0,0 @@
---
partitions:
sda1:
size: "49152"
label: primary
sda2:
size: "409600"
label: primary
sda3:
uuid: ae0ac634-e27b-42f3-9153-d92a0068220d
size: "409600"
mount: /boot
label: primary
filesystem: ext2
sda4:
size: "35782656"
label: primary
filesystem: LVM2_member
sda5:
size: "21102592"
label: primary
filesystem: LVM2_member
sda6:
size: "42074112"
label: primary
filesystem: LVM2_member
sda7:
size: "23199744"
label: primary
filesystem: LVM2_member
sda8:
size: "11032576"
label: primary
filesystem: LVM2_member
sda9:
uuid: d1d1f9e8-0e6e-4687-9bb7-96c32f0df974
size: "40960"
label: config-2
filesystem: ext2
sdb1:
size: "49152"
label: primary
sdb2:
size: "409600"
label: primary
sdb3:
size: "132667392"
label: primary
filesystem: LVM2_member
sdc1:
size: "49152"
label: primary
sdc2:
size: "409600"
label: primary
sdc3:
size: "132667392"
label: primary
filesystem: LVM2_member
hardwaremodel: x86_64
netmask: "255.255.255.0"
ps: "ps -ef"
lsbdistdescription: "Ubuntu 16.04.1 LTS"
processors:
models:
- "Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz"
count: 1
physicalcount: 1
hardwareisa: x86_64
kernel: Linux
os_package_type: debian
service_provider: systemd
operatingsystem: Ubuntu
sshdsakey: "AAAAB3NzaC1kc3MAAACBAO4GCoitokg7853i4il87AGQp+4IPKsqXRG0os5lXCg5DobISOPSmRp2PpSpiMkVv3jl5keceLb3nUz/FLeSrqEhsveYZT0qxqjtIerXrfc29S794B9T62zhIWQ2nWzpNQMYcOhovU5ov59ZSNWIA/llcDKbc3Gk9QEBsDgM5ezdAAAAFQC1AkJ5ok29hQ5QYKE0yikni/T2bQAAAIEApJZUPDJ+BXN459d7qE7Pxow+sIdKZ5Fe0lHPK9wybYunnNbo0GtjAU5SwwCdU3Eul9aGc5zgY2eT4JjZ5uZwymaJlHpL0LULbH+eQwwbWosQLw8Mw6piCjl8mn1ubC+xeqvvsM1denv3Xfs3CAQma4ZxTEO5qhqKGL3RSAllOwwAAACBAJNUP3Jcxbl28WZZfqnzC5YJQEZPp1A07uv4REN2UvPVK0nXdK2DbcHS7WCEx9ywF/qqf4fTTY+0lv3MPWS/c9kF+XETBB1dfepsK/omyZ53UxhPGTnaRh9xtRq80g+jJKItslg//eAogGip+7Ta7OhwVR6KlAbV36OhGow3p8U2"
sshfp_dsa: |-
SSHFP 2 1 7f2f5019ac51633b74237eae02ac43c242a6f778
SSHFP 2 2 b8bfabf1413f1d6ec99099522f5517b2edd4afb4245f1f30b0a62088425e469b
sshrsakey: "AAAAB3NzaC1yc2EAAAADAQABAAABAQCxwXqBS4gaX/1Wc0VGA5TcUCeWeSeuE7jtVzakRHmlI15ghrKmcsw3g1PXtWdDBbYIqeOwZO4+ya8ffyRzln+bMZF34OmpSq00HoyBazaZloC0ZMnk1Mw45lMntUwdSDftYz8OexY/y0wuEtLZd64Ul+UxRGphg7M+wJky7gdmR8Ow7so7HOxOXSwUz7dQPbvRiTR2JFdKGbIQLF6RX/YAc0TRU3ifRD1UM92CYXLPEBxP9Yy7F0dC4sN5aGPkdogArbq02TT4y2Wb272ScmXutCh5aBeVx4K5BNW/gbgbEecCzvLel4/7ZIplqeKP/CL75uSokn+Cs4YZ/j6hpyv/"
sshfp_rsa: |-
SSHFP 1 1 15cd6b8b3164701e01a3e4f1bc7d29e0d1b74078
SSHFP 1 2 96118cfb8744672ada34fec91b36be29a3f2a4395cc6e3b57319a257ccdf42ee
sshecdsakey: "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUympV1vesrscbddEOEBNvAiYBezisSwLSo3RqdGBHOB1RN+L1geo8RHqinWh1V69CDMEd1ZP17VSCTj48+T3o="
sshfp_ecdsa: |-
SSHFP 3 1 b2ec93f3ad6e627f59dcaf595c14ba40636471bb
SSHFP 3 2 0024d215f0e46148b8c5cd90e7aa58cb2add1d73c6858c11d92a142ad14b431b
sshed25519key: AAAAC3NzaC1lZDI1NTE5AAAAIKNTc/YaVliwqJ0UH5DHwy4jeitxyiVS71wi/2R4hviN
sshfp_ed25519: |-
SSHFP 4 1 51aa4d868ea4b3bfeaa9b0141d1ad9eb945ec297
SSHFP 4 2 c07d50bdf2be408db24d11e2200a796ae8169f2a558d412289eb28f93e884be9
augeasversion: "1.4.0"
selinux: false
puppetversion: "3.8.5"
virtual: kvm
is_virtual: true
vlans: "101,102"
boardmanufacturer: "Oracle Corporation"
boardproductname: VirtualBox
boardserialnumber: "0"
bios_vendor: "innotek GmbH"
bios_version: VirtualBox
bios_release_date: "12/01/2006"
manufacturer: "innotek GmbH"
productname: VirtualBox
serialnumber: "0"
uuid: F98DAF54-7849-4D4C-9FA9-474080EBB4D9
type: Other
uptime: "3:04 hours"
uptime_hours: 3
blockdevice_sda_size: 68718428160
blockdevice_sda_vendor: ATA
blockdevice_sda_model: "VBOX HARDDISK"
blockdevice_sdb_size: 68718428160
blockdevice_sdb_vendor: ATA
blockdevice_sdb_model: "VBOX HARDDISK"
blockdevice_sdc_size: 68718428160
blockdevice_sdc_vendor: ATA
blockdevice_sdc_model: "VBOX HARDDISK"
blockdevices: "sda,sdb,sdc"
facterversion: "2.4.6"
memorysize: "1.95 GB"
memoryfree: "183.81 MB"
swapsize: "2.00 GB"
swapfree: "0.01 MB"
swapsize_mb: "2048.00"
swapfree_mb: "0.01"
memorysize_mb: "2000.45"
memoryfree_mb: "183.81"
uptime_days: 0
architecture: amd64
kernelmajversion: "4.4"
interfaces: "bond0,br_ex,br_floating,br_fw_admin,br_int,br_mgmt,br_prv,br_storage,enp0s3,enp0s8,enp0s9,enp0s3_101,enp0s3_102,hapr_host,lo,ovs_system,p_eeee51a2_0,p_ff798dba_0,v_management,v_public,vr_host_base"
macaddress_bond0: "92:d3:0e:75:f7:64"
mtu_bond0: 1500
ipaddress_br_ex: "172.16.0.5"
macaddress_br_ex: "08:00:27:bc:e2:66"
netmask_br_ex: "255.255.255.0"
mtu_br_ex: 1500
macaddress_br_floating: "22:fd:9f:51:aa:48"
mtu_br_floating: 1500
ipaddress_br_fw_admin: "10.20.0.4"
macaddress_br_fw_admin: "08:00:27:9a:28:24"
netmask_br_fw_admin: "255.255.255.0"
mtu_br_fw_admin: 1500
macaddress_br_int: "9e:78:0c:ef:c5:4f"
mtu_br_int: 1500
ipaddress_br_mgmt: "192.168.0.6"
macaddress_br_mgmt: "08:00:27:9a:28:24"
netmask_br_mgmt: "255.255.255.0"
mtu_br_mgmt: 1500
macaddress_br_prv: "12:fd:e9:d6:6d:4c"
mtu_br_prv: 1500
ipaddress_br_storage: "192.168.1.4"
macaddress_br_storage: "08:00:27:9a:28:24"
netmask_br_storage: "255.255.255.0"
mtu_br_storage: 1500
macaddress_enp0s3: "08:00:27:9a:28:24"
mtu_enp0s3: 1500
macaddress_enp0s8: "08:00:27:bc:e2:66"
mtu_enp0s8: 1500
macaddress_enp0s9: "08:00:27:2f:eb:50"
mtu_enp0s9: 1500
macaddress_enp0s3_101: "08:00:27:9a:28:24"
mtu_enp0s3_101: 1500
macaddress_enp0s3_102: "08:00:27:9a:28:24"
mtu_enp0s3_102: 1500
ipaddress_hapr_host: "240.0.0.1"
macaddress_hapr_host: "72:43:d1:27:6e:fa"
netmask_hapr_host: "255.255.255.252"
mtu_hapr_host: 1500
ipaddress_lo: "127.0.0.1"
netmask_lo: "255.0.0.0"
mtu_lo: 65536
macaddress_ovs_system: "4a:33:57:2f:f1:34"
mtu_ovs_system: 1500
macaddress_p_eeee51a2_0: "9e:66:67:d6:60:88"
mtu_p_eeee51a2_0: 65000
macaddress_p_ff798dba_0: "b6:56:6e:f8:66:1f"
mtu_p_ff798dba_0: 65000
macaddress_v_management: "6e:b3:0a:cf:24:43"
mtu_v_management: 1500
macaddress_v_public: "c6:ff:b4:07:8c:79"
mtu_v_public: 1500
ipaddress_vr_host_base: "240.0.0.5"
macaddress_vr_host_base: "46:91:25:77:2d:6e"
netmask_vr_host_base: "255.255.255.252"
mtu_vr_host_base: 1500
timezone: UTC
gid: root
path: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
macaddress: "92:d3:0e:75:f7:64"
system_uptime:
seconds: 11093
hours: 3
days: 0
uptime: "3:04 hours"
hostname: ctrl1
filesystems: "ext2,ext3,ext4,squashfs,vfat,xfs"
lsbmajdistrelease: "16.04"
os:
name: Ubuntu
family: Debian
release:
major: "16.04"
full: "16.04"
lsb:
distcodename: xenial
distid: Ubuntu
distdescription: "Ubuntu 16.04.1 LTS"
distrelease: "16.04"
majdistrelease: "16.04"
fqdn: ctrl1.domain.tld
lsbdistid: Ubuntu
physicalprocessorcount: 1
rubyplatform: x86_64-linux-gnu
rubyversion: "2.3.1"
network_br_ex: "172.16.0.0"
network_br_fw_admin: "10.20.0.0"
network_br_mgmt: "192.168.0.0"
network_br_storage: "192.168.1.0"
network_hapr_host: "240.0.0.0"
network_lo: "127.0.0.0"
network_vr_host_base: "240.0.0.4"
operatingsystemrelease: "16.04"
kernelrelease: "4.4.0-34-generic"
rubysitedir: /usr/local/lib/site_ruby/2.3.0
osfamily: Debian
id: root
uptime_seconds: 11093
lsbdistcodename: xenial
processor0: "Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz"
processorcount: 1
uniqueid: a8c00600
domain: domain.tld
operatingsystemmajrelease: "16.04"
kernelversion: "4.4.0"
lsbdistrelease: "16.04"
ipaddress: "172.16.0.5"
apache_version: "2.4.18"
mysql_server_id: 167011540
package_provider: apt
openssl_version: "1.0.2g-fips"
libvirt_package_version: "1.3.1-1ubuntu10.1"
apt_reboot_required: false
l23_os: ubuntu
ip6tables_version: "1.6.0"
acpid_version: "2"
iptables_persistent_version: "1.0.4"
l3_fqdn_hostname: ctrl1.domain.tld
root_home: /root
rabbitmq_version: "3.6.1"
netrings:
enp0s3:
maximums:
RX: "4096"
TX: "4096"
current:
RX: "4096"
TX: "4096"
enp0s8:
maximums:
RX: "4096"
TX: "4096"
current:
RX: "4096"
TX: "4096"
enp0s9:
maximums:
RX: "4096"
TX: "4096"
current:
RX: "256"
TX: "256"
rsyslog_version: "8.16.0-1ubuntu3"
iptables_version: "1.6.0"
pcmk_node_name: ctrl1.domain.tld
kern_module_ovs_loaded: false
kern_module_bridge_loaded: true
allocated_hugepages: "{\x221G\x22:false,\x222M\x22:false}"
osd_devices_list: ""
acpi_event: false
staging_http_get: curl
ssh_server_version_full: "7.2p2"
ssh_server_version_major: "7.2"
ssh_server_version_release: "7.2"
apt_update_last_success: 1471611612
haproxy_version: "1.6.3"
mysql_version: "5.6.30"
mounts:
- /
- /boot
- /var/lib/mysql
- /var/log
- /var/lib/horizon
- /var/lib/glance
ssh_client_version_full: "7.2p2"
ssh_client_version_major: "7.2"
ssh_client_version_release: "7.2"
l2_ovs_vlan_splinters_need_for: ""
is_pe: false
os_service_default: "<SERVICE DEFAULT>"
os_workers: 2

View File

@ -1 +0,0 @@
*.yaml

View File

@ -1,98 +0,0 @@
"ADMIN_NETWORK":
"dhcp_gateway": "10.145.0.1"
"dhcp_pool_end": "10.145.0.254"
"dhcp_pool_start": "10.145.0.4"
"interface": "eth0"
"ipaddress": "10.145.0.2"
"mac": "64:ef:bc:bf:6c:44"
"netmask": "255.255.255.0"
"ssh_network": "10.145.0.0/24"
"BOOTSTRAP":
"flavor": "ubuntu"
"http_proxy": ""
"https_proxy": ""
"repos":
- "name": "ubuntu"
"priority": !!null "null"
"section": "main universe multiverse"
"suite": "trusty"
"type": "deb"
"uri": "http://archive.ubuntu.com/ubuntu"
- "name": "ubuntu-updates"
"priority": !!null "null"
"section": "main universe multiverse"
"suite": "trusty-updates"
"type": "deb"
"uri": "http://archive.ubuntu.com/ubuntu"
- "name": "ubuntu-security"
"priority": !!null "null"
"section": "main universe multiverse"
"suite": "trusty-security"
"type": "deb"
"uri": "http://archive.ubuntu.com/ubuntu"
- "name": "mos"
"priority": !!int "1050"
"section": "main restricted"
"suite": "mos10.0"
"type": "deb"
"uri": "http://127.0.0.1:8080/ubuntu/x86_64"
- "name": "mos-updates"
"priority": !!int "1050"
"section": "main restricted"
"suite": "mos10.0-updates"
"type": "deb"
"uri": "http://mirror.fuel-infra.org/mos-repos/ubuntu/10.0"
- "name": "mos-security"
"priority": !!int "1050"
"section": "main restricted"
"suite": "mos10.0-security"
"type": "deb"
"uri": "http://mirror.fuel-infra.org/mos-repos/ubuntu/10.0"
- "name": "mos-holdback"
"priority": !!int "1100"
"section": "main restricted"
"suite": "mos10.0-holdback"
"type": "deb"
"uri": "http://mirror.fuel-infra.org/mos-repos/ubuntu/10.0"
"skip_default_img_build": !!bool "false"
"DNS_DOMAIN": "domain.tld"
"DNS_SEARCH": "domain.tld"
"DNS_UPSTREAM": "10.145.0.1"
"FEATURE_GROUPS": []
"FUEL_ACCESS":
"password": "admin"
"user": "admin"
"HOSTNAME": "fuel"
"NTP1": "0.fuel.pool.ntp.org"
"NTP2": "1.fuel.pool.ntp.org"
"NTP3": "2.fuel.pool.ntp.org"
"PRODUCTION": "docker"
"TEST_DNS": "www.google.com"
"astute":
"password": "uSdbi42Po5gcHnUUdVT48diu"
"user": "naily"
"cobbler":
"password": "P2guQCCl9MHS88RCRF5I1QGw"
"user": "cobbler"
"keystone":
"admin_token": "M1khaGkHj72wMMp4JZ0TYnYt"
"monitord_password": "euniPf5A3LdUjSokt4tJ3BCw"
"monitord_user": "monitord"
"nailgun_password": "Llw1eTOaIYq3VzxUDhrNcd91"
"nailgun_user": "nailgun"
"ostf_password": "uSjjpBrV3t4URfCGoSse4fZ6"
"ostf_user": "ostf"
"service_token_off": "true"
"mcollective":
"password": "wbYJ8IXSrJyon5GNvx2Ad9xf"
"user": "mcollective"
"postgres":
"keystone_dbname": "keystone"
"keystone_password": "IuocdY94d7PWfNq5i89OEyE2"
"keystone_user": "keystone"
"nailgun_dbname": "nailgun"
"nailgun_password": "VNTUIHHMLvaiSffK5Dlwmh9U"
"nailgun_user": "nailgun"
"ostf_dbname": "ostf"
"ostf_password": "WzjtGGb4oW07wPqqUiLvAB3E"
"ostf_user": "ostf"

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,2 +0,0 @@
---
deleted_nodes: [node-798.domain.tld, node-799.domain.tld]

View File

@ -1,7 +0,0 @@
---
ceilometer:
workers: 4
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,4 +0,0 @@
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,6 +0,0 @@
---
workers_max: 4
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,7 +0,0 @@
---
nova:
workers: 4
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,4 +0,0 @@
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,13 +0,0 @@
plugins:
- name: contrail
repositories:
- name: contrail-5.0.0
priority: 1100
section: ''
suite: /
type: deb
uri: http://10.109.15.2:8080/plugins/contrail-5.0/repositories/ubuntu
scripts:
- local_path: /etc/fuel/plugins/contrail-5.0/
remote_url: rsync://10.109.15.2:/plugins/contrail-5.0/deployment_scripts/

View File

@ -1,6 +0,0 @@
---
host_uuid: '00000000-0000-0000-0000-000000000000'
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,4 +0,0 @@
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,4 +0,0 @@
network_metadata:
vips:
management:
namespace: 'haproxy'

View File

@ -1,2 +0,0 @@
---
workers_max: 4

View File

@ -1,4 +0,0 @@
---
mu_upgrade:
repos: "mos,mos-updates,mos-security,mos-holdback"
enabled: true

View File

@ -1,48 +0,0 @@
require_relative 'noop/utils'
require_relative 'noop/config'
require_relative 'noop/manager'
require_relative 'noop/task'
require_relative 'noop/matchers'
module Noop
def self.new_task(*args)
self.task = Noop::Task.new *args
end
def self.task_spec=(value)
self.task.file_name_spec = value
end
def self.task_hiera=(value)
self.task.file_name_hiera = value
end
def self.task_facts=(value)
self.task.file_name_facts = value
end
def self.task_spec
self.task.file_name_spec
end
def self.task_hiera
self.task.file_name_hiera
end
def self.task_facts
self.task.file_name_facts
end
def self.task=(value)
@task = value
end
def self.task
return @task if @task
@task = Noop::Task.new
end
def self.method_missing(method, *args)
self.task.send method, *args
end
end

View File

@ -1,6 +0,0 @@
require_relative 'utils'
require_relative 'config/base'
require_relative 'config/hiera'
require_relative 'config/facts'
require_relative 'config/globals'
require_relative 'config/log'

View File

@ -1,143 +0,0 @@
require 'pathname'
module Noop
module Config
# The root directory of the config sub-module.
# It is used as the root for the relative paths
# to the other directories.
# @return [Pathname]
def self.dir_path_config
return @dirname if @dirname
@dirname = Pathname.new(__FILE__).dirname.realpath
end
# The root directory of the fixtures module.
# @return [Pathname]
def self.dir_path_root
return @dir_path_root if @dir_path_root
@dir_path_root = Noop::Utils.path_from_env 'SPEC_ROOT_DIR'
@dir_path_root = dir_path_config.parent.parent.parent unless @dir_path_root
begin
@dir_path_root = @dir_path_root.realpath
rescue
@dir_path_root
end
end
# The directory where the task will chdir before being run.
# Equals the root dir unless specified.
# @return [Pathname]
def self.dir_path_task_root
return @dir_path_task_root if @dir_path_task_root
@dir_path_task_root = Noop::Utils.path_from_env 'SPEC_TASK_ROOT_DIR'
@dir_path_task_root = dir_path_root unless @dir_path_task_root
begin
@dir_path_task_root = @dir_path_task_root.realpath
rescue
@dir_path_task_root
end
end
# @return [Pathname]
def self.dir_path_task_spec
return @dir_path_task_spec if @dir_path_task_spec
@dir_path_task_spec = Noop::Utils.path_from_env 'SPEC_SPEC_DIR'
@dir_path_task_spec = dir_path_root + 'spec' + 'hosts' unless @dir_path_task_spec
begin
@dir_path_task_spec = @dir_path_task_spec.realpath
rescue
@dir_path_task_spec
end
end
# @return [Array<Pathname>]
def self.list_path_modules
return @list_path_modules if @list_path_modules
@list_path_modules = Noop::Utils.path_list_from_env 'SPEC_MODULEPATH', 'SPEC_MODULE_PATH'
return @list_path_modules if @list_path_modules.any?
@list_path_modules = [dir_path_root + 'modules']
end
# @return [Pathname]
def self.dir_path_tasks_local
return @dir_path_tasks_local if @dir_path_tasks_local
@dir_path_tasks_local = Noop::Utils.path_from_env 'SPEC_TASK_DIR'
@dir_path_tasks_local = dir_path_root + 'tasks' unless @dir_path_tasks_local
begin
@dir_path_tasks_local = @dir_path_tasks_local.realpath
rescue
@dir_path_tasks_local
end
end
# @return [Pathname]
def self.dir_path_modules_node
return @dir_path_modules_node if @dir_path_modules_node
@dir_path_modules_node = Pathname.new '/etc/puppet/modules'
end
# @return [Pathname]
def self.dir_path_tasks_node
return @dir_path_tasks_node if @dir_path_tasks_node
@dir_path_tasks_node = dir_path_modules_node + 'osnailyfacter' + 'modular'
end
# @return [Pathname]
def self.dir_path_deployment
return @dir_path_deployment if @dir_path_deployment
@dir_path_deployment = Noop::Utils.path_from_env 'SPEC_DEPLOYMENT_DIR'
@dir_path_deployment = dir_path_root + 'deployment' unless @dir_path_deployment
begin
@dir_path_deployment = @dir_path_deployment.realpath
rescue
@dir_path_deployment
end
end
# Workspace directory where the gem bundle will be created.
# It is passed from Jenkins, or the default value is used.
# @return [Pathname]
def self.dir_path_workspace
return @dir_path_workspace if @dir_path_workspace
@dir_path_workspace = Noop::Utils.path_from_env 'WORKSPACE'
@dir_path_workspace = Noop::Config.dir_path_root + Pathname.new('workspace') unless @dir_path_workspace
begin
@dir_path_workspace = @dir_path_workspace.realpath
rescue
nil
end
@dir_path_workspace.mkpath
raise "Workspace '#{@dir_path_workspace}' is not a directory!" unless @dir_path_workspace.directory?
@dir_path_workspace
end
# The name of the gem home directory
# @return [Pathname]
def self.dir_name_gem_home
Pathname.new 'bundled_gems'
end
# Get a GEM_HOME either from the environment (using RVM)
# or from the default value (using bundle)
# @return [Pathname]
def self.dir_path_gem_home
return @dir_path_gem_home if @dir_path_gem_home
@dir_path_gem_home = Noop::Utils.path_from_env 'GEM_HOME'
return @dir_path_gem_home if @dir_path_gem_home
@dir_path_gem_home = dir_path_workspace + dir_name_gem_home
@dir_path_gem_home
end
# @return [Pathname]
def self.dir_path_reports
return @dir_path_reports if @dir_path_reports
@dir_path_reports = Noop::Utils.path_from_env 'SPEC_REPORTS_DIR'
@dir_path_reports = dir_path_root + 'reports' unless @dir_path_reports
begin
@dir_path_reports = @dir_path_reports.realpath
rescue
@dir_path_reports
end
end
end
end

View File

@ -1,36 +0,0 @@
require 'pathname'
module Noop
module Config
# @return [Pathname]
def self.dir_name_facts
Pathname.new 'facts'
end
# @return [Pathname]
def self.dir_path_facts
return @dir_path_facts if @dir_path_facts
@dir_path_facts = Noop::Utils.path_from_env 'SPEC_FACTS_DIR'
@dir_path_facts = dir_path_root + dir_name_facts unless @dir_path_facts
begin
@dir_path_facts = @dir_path_facts.realpath
rescue
@dir_path_facts
end
end
# @return [Pathname]
def self.dir_name_facts_override
Pathname.new 'override'
end
# @return [Pathname]
def self.dir_path_facts_override
dir_path_facts + dir_name_facts_override
end
def self.default_facts_file_name
Pathname.new 'ubuntu16.yaml'
end
end
end

View File

@ -1,33 +0,0 @@
require 'pathname'
module Noop
module Config
# @return [Pathname]
def self.spec_name_globals
Pathname.new 'globals/globals_spec.rb'
end
# @return [Pathname]
def self.spec_path_globals
dir_path_task_spec + spec_name_globals
end
def self.manifest_name_globals
Noop::Utils.convert_to_manifest spec_name_globals
end
def self.manifest_path_globals
dir_path_tasks_local + manifest_name_globals
end
# @return [Pathname]
def self.dir_name_globals
Pathname.new 'globals'
end
# @return [Pathname]
def self.dir_path_globals
dir_path_hiera + dir_name_globals
end
end
end

View File

@ -1,46 +0,0 @@
require 'pathname'
module Noop
module Config
# @return [Pathname]
def self.dir_name_hiera
Pathname.new 'hiera'
end
# @return [Pathname]
def self.dir_path_hiera
return @dir_path_hiera if @dir_path_hiera
@dir_path_hiera = Noop::Utils.path_from_env 'SPEC_HIERA_DIR', 'SPEC_YAML_DIR'
@dir_path_hiera = dir_path_root + dir_name_hiera unless @dir_path_hiera
begin
@dir_path_hiera = @dir_path_hiera.realpath
rescue
@dir_path_hiera
end
end
# @return [Pathname]
def self.dir_name_hiera_override
Pathname.new 'override'
end
# @return [Pathname]
def self.dir_path_hiera_override
dir_path_hiera + dir_name_hiera_override
end
def self.default_hiera_file_name
Pathname.new 'novanet-primary-controller.yaml'
end
# @return [Pathname]
def self.file_name_hiera_plugins
Pathname.new 'plugins'
end
# @return [Pathname]
def self.file_path_hiera_plugins
Noop::Config.dir_path_hiera + file_name_hiera_plugins
end
end
end

View File

@ -1,26 +0,0 @@
require 'logger'
module Noop
module Config
def self.log_destination
return ENV['SPEC_DEBUG_LOG'] if ENV['SPEC_DEBUG_LOG']
STDOUT
end
def self.log_level
if ENV['SPEC_TASK_DEBUG']
Logger::DEBUG
else
Logger::WARN
end
end
def self.log
return @log if @log
@log = Logger.new log_destination
@log.level = log_level
@log.progname = 'noop_manager'
@log
end
end
end

View File

@ -1,7 +0,0 @@
require_relative 'config'
require_relative 'manager/library'
require_relative 'manager/options'
require_relative 'manager/base'
require_relative 'manager/report'
require_relative 'manager/setup'
require_relative 'manager/xunit'

View File

@ -1,227 +0,0 @@
module Noop
class Manager
def initialize
options
colorize_load_or_stub
parallel_load_or_stub
end
# Load the "colorize" gem, or stub the colorization
# method so that no colors are shown if the gem
# cannot be loaded
def colorize_load_or_stub
begin
require 'colorize'
rescue LoadError
debug 'Could not load "colorize" gem. Disabling colors.'
String.instance_eval do
define_method(:colorize) do |*args|
self
end
end
end
end
# Load the 'parallel' gem or just
# stub the parallel run function to run tasks one by one
def parallel_load_or_stub
begin
require 'parallel'
rescue LoadError
debug 'Could not load "parallel" gem. Disabling multi-process run.'
Object.const_set('Parallel', Module.new)
class << Parallel
def map(data, *args, &block)
data.map &block
end
def processor_count
0
end
end
end
end
# Write a debug message to the logger
# @return [void]
def debug(message)
Noop::Config.log.debug message
end
# Output a message to the console
# @return [void]
def output(message)
Noop::Utils.output message
end
# Output an error message to the log file
# and raise the exception
# @return [void]
def error(message)
Noop::Utils.error message
end
# Get the parallel run count value from the options
# or from the processor count if "auto" is set
# @return [Integer]
def parallel_run
return @parallel_run if @parallel_run
if options[:parallel_run] == 'auto'
@parallel_run = Parallel.processor_count
debug "Using parallel run count: #{@parallel_run}"
return @parallel_run
end
@parallel_run = options[:parallel_run].to_i
end
# Check if the parallel run option is enabled
# @return [true,false]
def parallel_run?
parallel_run > 0
end
# Check if there are any filters defined
# @return [true,false]
def has_filters?
options[:filter_specs] or options[:filter_facts] or options[:filter_hiera] or options[:filter_examples]
end
# Output a list of all discovered Hiera file names taking filters into account
# @return [void]
def list_hiera_files
hiera_file_names.sort.each do |file_name_hiera|
next unless hiera_included? file_name_hiera
output file_name_hiera
end
exit(0)
end
# Output a list of all discovered facts file names taking filters into account
# @return [void]
def list_facts_files
facts_file_names.sort.each do |file_name_facts|
next unless facts_included? file_name_facts
output file_name_facts
end
exit(0)
end
# Output a list of all discovered spec file names taking filters into account
# @return [void]
def list_spec_files
spec_file_names.sort.each do |file_name_spec|
next unless spec_included? file_name_spec
output file_name_spec
end
exit(0)
end
# Output a list of all discovered task file names
# @return [void]
def list_task_files
task_file_names.sort.each do |file_name_task|
output file_name_task
end
exit(0)
end
# Try to run all discovered tasks in the task list, using
# parallel run if enabled
# Does not run tasks if :pretend option is given
# @return [Array<Noop::Task>]
def run_all_tasks
Parallel.map(task_list, :in_threads => parallel_run) do |task|
task.run unless options[:pretend]
task
end
end
# Try to run only those tasks that have a failed status by resetting them
# to the :pending status first.
# Does not run tasks if :pretend option is given
# @return [Array<Noop::Task>]
def run_failed_tasks
Parallel.map(task_list, :in_threads => parallel_run) do |task|
next if task.success?
task.status = :pending
task.run unless options[:pretend]
task
end
end
# Ask every task in the task list to load its report file and status
# from the previous run attempt
# @return [Array<Noop::Task>]
def load_task_reports
Parallel.map(task_list, :in_threads => parallel_run) do |task|
task.file_load_report_json
task.determine_task_status
task
end
end
# Check if there are any failed tasks in the list.
# @return [true, false]
def have_failed_tasks?
task_list.any? do |task|
task.failed?
end
end
# Exit with an error code if there are failed tasks,
# or with a zero code if there are none.
def exit_with_error_code
exit 1 if have_failed_tasks?
exit 0
end
#########################################
def main(override_options = {})
options.merge! override_options
if ENV['SPEC_TASK_CONSOLE']
require 'pry'
binding.pry
exit(0)
end
if options[:bundle_setup]
setup_bundle
end
if options[:update_librarian_puppet]
setup_library
end
if options[:self_check]
self_check
exit(0)
end
list_hiera_files if options[:list_hiera]
list_facts_files if options[:list_facts]
list_spec_files if options[:list_specs]
list_task_files if options[:list_tasks]
if options[:run_failed_tasks]
load_task_reports
run_failed_tasks
tasks_report
exit_with_error_code
end
if options[:load_saved_reports]
load_task_reports
tasks_report
save_xunit_report if options[:xunit_report] and not options[:pretend]
exit_with_error_code
end
run_all_tasks
tasks_report
save_xunit_report if options[:xunit_report] and not options[:pretend]
exit_with_error_code
end
end
end

require 'yaml'
require 'set'
module Noop
class Manager
# Recursively find files in the folder
# @param root [String,Pathname]
# @param path_from [Pathname,nil] return found paths relative to this path
# @param exclude [Array<Pathname>] basenames to skip
# @return [Array<Pathname>]
def find_files(root, path_from=nil, exclude=[], &block)
exclude = [exclude] unless exclude.is_a? Array
root = Noop::Utils.convert_to_path root
files = []
begin
root.children.each do |path|
next if exclude.include? path.basename
if path.file?
if block_given?
next unless block.call path
end
path = path.relative_path_from path_from if path_from
files << path
else
files << find_files(path, path_from, exclude, &block)
end
end
rescue
[]
end
files.flatten
end
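The traversal above can be illustrated in isolation; this is a minimal standalone sketch of the same logic (the directory and file names are invented for the example):

```ruby
require 'pathname'
require 'tmpdir'

# Recursively collect files under a root, skipping excluded basenames,
# mirroring the find_files logic above in a self-contained form.
def collect_files(root, exclude = [])
  root = Pathname.new(root)
  files = []
  root.children.each do |path|
    next if exclude.include? path.basename
    if path.file?
      files << path
    else
      files += collect_files(path, exclude)
    end
  end
  files
end

found_names = nil
Dir.mktmpdir do |dir|
  root = Pathname.new(dir)
  (root + 'keep').mkdir
  (root + 'skip').mkdir
  File.write(root + 'keep' + 'a.pp', '')
  File.write(root + 'skip' + 'b.pp', '')
  found_names = collect_files(root, [Pathname.new('skip')]).map { |f| f.basename.to_s }
end
puts found_names.inspect # ["a.pp"]
```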
# Scan the spec directory and gather the list of spec files
# @return [Array<Pathname>]
def spec_file_names
return @spec_file_names if @spec_file_names
error "No #{Noop::Config.dir_path_task_spec} directory!" unless Noop::Config.dir_path_task_spec.directory?
@spec_file_names = find_files(Noop::Config.dir_path_task_spec, Noop::Config.dir_path_task_spec) do |file|
file.to_s.end_with? '_spec.rb'
end
end
# Scan the Hiera directory and gather the list of Hiera files
# @return [Array<Pathname>]
def hiera_file_names
return @hiera_file_names if @hiera_file_names
error "No #{Noop::Config.dir_path_hiera} directory!" unless Noop::Config.dir_path_hiera.directory?
exclude = [ Noop::Config.dir_name_hiera_override, Noop::Config.dir_name_globals, Noop::Config.file_name_hiera_plugins ]
@hiera_file_names = find_files(Noop::Config.dir_path_hiera, Noop::Config.dir_path_hiera, exclude) do |file|
file.to_s.end_with? '.yaml'
end
end
# Scan the facts directory and gather the list of facts files
# @return [Array<Pathname>]
def facts_file_names
return @facts_file_names if @facts_file_names
error "No #{Noop::Config.dir_path_facts} directory!" unless Noop::Config.dir_path_facts.directory?
exclude = [ Noop::Config.dir_name_facts_override ]
@facts_file_names = find_files(Noop::Config.dir_path_facts, Noop::Config.dir_path_facts, exclude) do |file|
file.to_s.end_with? '.yaml'
end
end
# Scan the tasks directory and gather the list of task files
# @return [Array<Pathname>]
def task_file_names
return @task_file_names if @task_file_names
error "No #{Noop::Config.dir_path_tasks_local} directory!" unless Noop::Config.dir_path_tasks_local.directory?
@task_file_names = find_files(Noop::Config.dir_path_tasks_local, Noop::Config.dir_path_tasks_local) do |file|
file.to_s.end_with? '.pp'
end
end
# Read the task deployment graph metadata files in the library:
# Find all 'tasks.yaml' files in the puppet directory.
# Read them all to a Hash by their ids.
# Find all 'groups' records and resolve their 'tasks' reference
# by pointing referenced tasks to this group instead.
# Setting the SPEC_NO_GRAPH_METADATA environment variable
# disables the task graph processing.
# @return [Hash<String => Hash>]
def task_graph_metadata
return {} if ENV['SPEC_NO_GRAPH_METADATA']
return @task_graph_metadata if @task_graph_metadata
@task_graph_metadata = {}
Noop::Config.list_path_modules.each do |path|
next unless path.directory?
find_files(path) do |task_file|
next unless task_file.file?
next unless task_file.to_s.end_with? 'tasks.yaml'
begin
tasks = YAML.load_file task_file
rescue
next
end
tasks.each do |task|
id = task['id']
@task_graph_metadata[id] = task
end
false
end
end
@task_graph_metadata.each do |id, group_task|
next unless group_task['type'] == 'group' and group_task['tasks'].is_a? Array
group_task['tasks'].each do |task|
next unless @task_graph_metadata[task]
@task_graph_metadata[task]['groups'] = [] unless @task_graph_metadata[task]['groups'].is_a? Array
@task_graph_metadata[task]['groups'] << id
end
end
@task_graph_metadata
end
# Try to determine the roles each spec should be run in using
# the deployment graph metadata. Take a list of groups or roles
# and form a set of them.
# @return [Hash<Pathname => Set>]
def assign_spec_to_roles
return @assign_spec_to_roles if @assign_spec_to_roles
@assign_spec_to_roles = {}
task_graph_metadata.values.each do |task_data|
roles = (task_data['groups'] or task_data['roles'] or task_data['role'])
next unless roles
roles = [roles] unless roles.is_a? Array
file_path_manifest = task_data.fetch('parameters', {}).fetch('puppet_manifest', nil)
next unless file_path_manifest
file_path_manifest = Pathname.new file_path_manifest
file_name_manifest = file_path_manifest.relative_path_from Noop::Config.dir_path_tasks_node
file_name_spec = Noop::Utils.convert_to_spec file_name_manifest
roles = Set.new roles
@assign_spec_to_roles[file_name_spec] = Set.new unless @assign_spec_to_roles[file_name_spec].is_a? Set
@assign_spec_to_roles[file_name_spec] += roles
end
@assign_spec_to_roles
end
# Try to determine the roles of each Hiera file.
# Take the 'nodes' structure and find the 'node_roles' of the current node there.
# Form a set of found values and add root 'role' value if found.
# @return [Hash<Pathname => Set>]
def assign_hiera_to_roles
return @assign_hiera_to_roles if @assign_hiera_to_roles
@assign_hiera_to_roles = {}
hiera_file_names.each do |hiera_file|
begin
data = YAML.load_file(Noop::Config.dir_path_hiera + hiera_file)
next unless data.is_a? Hash
fqdn = data['fqdn']
next unless fqdn
nodes = data.fetch('network_metadata', {}).fetch('nodes', nil)
next unless nodes
this_node = nodes.find do |node|
node.last['fqdn'] == fqdn
end
node_roles = this_node.last['node_roles']
roles = Set.new
roles.merge node_roles if node_roles.is_a? Array
role = data['role']
roles.add role if role
@assign_hiera_to_roles[hiera_file] = roles
rescue
next
end
end
@assign_hiera_to_roles
end
# Determine Hiera files for each spec file by calculating
# the intersection between their roles sets.
# If the spec file contains '*' role it should be counted
# as all possible roles.
# @return [Hash<Pathname => Array<Pathname>>]
def assign_spec_to_hiera
return @assign_spec_to_hiera if @assign_spec_to_hiera
@assign_spec_to_hiera = {}
assign_spec_to_roles.each do |file_name_spec, spec_roles_set|
hiera_files = get_hiera_for_roles spec_roles_set
@assign_spec_to_hiera[file_name_spec] = hiera_files if hiera_files.any?
end
@assign_spec_to_hiera
end
# Read all spec annotations metadata.
# @return [Hash<Pathname => Array>]
def spec_run_metadata
return @spec_run_metadata if @spec_run_metadata
@spec_run_metadata = {}
find_files(Noop::Config.dir_path_task_spec) do |spec_file|
next unless spec_file.file?
next unless spec_file.to_s.end_with? '_spec.rb'
spec_name = spec_file.relative_path_from(Noop::Config.dir_path_task_spec)
spec_data = parse_spec_file spec_file
@spec_run_metadata[spec_name] = spec_data if spec_data.any?
false
end
@spec_run_metadata
end
# Parse a spec file to find annotation entries.
# @param [Pathname] task_spec
# @return [Hash]
def parse_spec_file(task_spec)
task_spec_metadata = {}
begin
text = task_spec.read
text.split("\n").each do |line|
line = line.downcase
if line =~ /^\s*#\s*(?:yamls|hiera):\s*(.*)/
task_spec_metadata[:hiera] = [] unless task_spec_metadata[:hiera].is_a? Array
task_spec_metadata[:hiera] += get_list_of_yamls $1
end
if line =~ /^\s*#\s*facts:\s*(.*)/
task_spec_metadata[:facts] = [] unless task_spec_metadata[:facts].is_a? Array
task_spec_metadata[:facts] += get_list_of_yamls $1
end
if line =~ /^\s*#\s*(?:skip_yamls|skip_hiera):\s(.*)/
task_spec_metadata[:skip_hiera] = [] unless task_spec_metadata[:skip_hiera].is_a? Array
task_spec_metadata[:skip_hiera] += get_list_of_yamls $1
end
if line =~ /^\s*#\s*skip_facts:\s(.*)/
task_spec_metadata[:skip_facts] = [] unless task_spec_metadata[:skip_facts].is_a? Array
task_spec_metadata[:skip_facts] += get_list_of_yamls $1
end
if line =~ /^\s*#\s*disable_spec/
task_spec_metadata[:disable] = true
end
if line =~ /^\s*#\s*role:\s*(.*)/
task_spec_metadata[:roles] = [] unless task_spec_metadata[:roles].is_a? Array
roles = line.split /\s*,\s*|\s+/
task_spec_metadata[:roles] += roles
end
if line =~ /^\s*#\s*run:\s*(.*)/
run_record = get_list_of_yamls $1
if run_record.length >= 2
run_record = {
:hiera => run_record[0],
:facts => run_record[1],
}
task_spec_metadata[:runs] = [] unless task_spec_metadata[:runs].is_a? Array
task_spec_metadata[:runs] << run_record
end
end
end
rescue
return task_spec_metadata
end
task_spec_metadata
end
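The magic-comment patterns matched above can be exercised on their own; a minimal sketch of the `hiera:`, `facts:` and `disable_spec` annotations (the spec text here is invented):

```ruby
# Parse invented annotation comments the way parse_spec_file does.
annotations = {}
spec_text = <<-'SPEC'
# hiera: controller, compute.yaml
# facts: ubuntu16
# disable_spec
SPEC
spec_text.split("\n").each do |line|
  line = line.downcase
  annotations[:hiera] = $1.split(/\s*,\s*|\s+/) if line =~ /^\s*#\s*(?:yamls|hiera):\s*(.*)/
  annotations[:facts] = $1.split(/\s*,\s*|\s+/) if line =~ /^\s*#\s*facts:\s*(.*)/
  annotations[:disable] = true if line =~ /^\s*#\s*disable_spec/
end
puts annotations.inspect
```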
# Split a space or comma separated list of yaml files
# and form an Array of the yaml file names.
# @return [Array<Pathname>]
def get_list_of_yamls(line)
line = line.split /\s*,\s*|\s+/
line.map do |yaml|
yaml = Pathname.new yaml
yaml = yaml.sub /$/, '.yaml' unless yaml.extname =~ /\.yaml/i
yaml
end
end
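For illustration, the normalisation above can be reproduced standalone; entries without a `.yaml` extension get one appended (the list contents are invented):

```ruby
require 'pathname'

# Split a comma/space separated list and append '.yaml' where missing,
# mirroring get_list_of_yamls above.
def to_yaml_list(line)
  line.split(/\s*,\s*|\s+/).map do |yaml|
    yaml = Pathname.new yaml
    yaml = yaml.sub(/$/, '.yaml') unless yaml.extname =~ /\.yaml/i
    yaml
  end
end

puts to_yaml_list('controller, compute.yaml ceph-osd').map(&:to_s).inspect
```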
# Determine the list of run records for a spec file:
# Take the list of explicitly defined runs if present.
# Take the product of the allowed Hiera and facts yaml files to
# form more run records.
# Use the default facts file name if none is given
# in the annotation.
# Use the list of Hiera files determined by the intersection of
# the deployment graph metadata and the Hiera yaml contents, using
# roles as the common data.
# @return [Array<Hash>]
def get_spec_runs(file_name_spec)
file_name_spec = Noop::Utils.convert_to_path file_name_spec
metadata = spec_run_metadata.fetch file_name_spec, {}
metadata[:facts] = [Noop::Config.default_facts_file_name] unless metadata[:facts]
if metadata[:roles]
metadata[:hiera] = [] unless metadata[:hiera]
metadata[:hiera] += get_hiera_for_roles metadata[:roles]
end
# the last default way to get hiera files list
metadata[:hiera] = assign_spec_to_hiera.fetch file_name_spec, [] unless metadata[:hiera]
runs = []
metadata[:facts].product metadata[:hiera] do |facts, hiera|
next if metadata[:skip_hiera].is_a? Array and metadata[:skip_hiera].include? hiera
next if metadata[:skip_facts].is_a? Array and metadata[:skip_facts].include? facts
run_record = {
:hiera => hiera,
:facts => facts,
}
runs << run_record
end
runs += metadata[:runs] if metadata[:runs].is_a? Array
runs
end
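The product step above can be shown with invented file names: every facts file is paired with every Hiera file, minus the skipped entries:

```ruby
# Form run records as the Cartesian product of the facts and hiera lists,
# skipping hiera files listed in skip_hiera (all names are invented).
facts_list = ['ubuntu16.yaml', 'centos7.yaml']
hiera_list = ['controller.yaml', 'compute.yaml']
skip_hiera = ['compute.yaml']
runs = []
facts_list.product(hiera_list) do |facts, hiera|
  next if skip_hiera.include? hiera
  runs << { :hiera => hiera, :facts => facts }
end
puts runs.inspect
```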
# Get a list of Hiera YAML files whose roles
# include the given set of roles.
# @param roles [Array,Set,String]
# @return [Array]
def get_hiera_for_roles(*roles)
all_roles = Set.new
roles.flatten.each do |role|
if role.is_a? Set
all_roles += role
else
all_roles.add role
end
end
if all_roles.include? '*'
assign_hiera_to_roles.keys
else
assign_hiera_to_roles.select do |_file_name_hiera, hiera_roles_set|
roles_intersection = hiera_roles_set & all_roles
roles_intersection.any?
end.keys
end
end
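The role intersection above reduces to a set overlap check; a minimal sketch with invented role sets:

```ruby
require 'set'

# Select hiera files whose role set overlaps the requested roles,
# as get_hiera_for_roles does (names are invented).
hiera_roles = {
  'controller.yaml' => Set.new(['primary-controller', 'controller']),
  'compute.yaml'    => Set.new(['compute']),
}
wanted = Set.new(['controller'])
selected = hiera_roles.select { |_name, roles| (roles & wanted).any? }.keys
puts selected.inspect # ["controller.yaml"]
```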
# Check if the given element matches this filter
# @param filter [Array<String>,String,nil]
# @param element [Object]
# @return [true,false]
def filter_is_matched?(filter, element)
return true unless filter
filter = [filter] unless filter.is_a? Array
filter.any? do |expression|
expression = Regexp.new expression.to_s
expression =~ element.to_s
end
end
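The filter semantics above (each entry is treated as a regular expression; a nil filter matches everything) can be sketched standalone:

```ruby
# Return true when no filter is given, or when any filter entry,
# treated as a regexp, matches the element's string form.
def matched?(filter, element)
  return true unless filter
  Array(filter).any? { |expr| Regexp.new(expr.to_s) =~ element.to_s }
end

puts matched?(['hosts', 'apache'], 'apache/apache_spec.rb') # true
puts matched?(nil, 'anything')                              # true
```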
# Use filters to check if this spec file is included
# @return [true,false]
def spec_included?(spec)
filter_is_matched? options[:filter_specs], spec
end
# Use filters to check if this facts file is included
# @return [true,false]
def facts_included?(facts)
filter_is_matched? options[:filter_facts], facts
end
# Use filters to check if this Hiera file is included
# @return [true,false]
def hiera_included?(hiera)
filter_is_matched? options[:filter_hiera], hiera
end
# Check if the globals spec should be skipped.
# It is run only if it is explicitly enabled in the spec filter.
# @return [true,false]
def skip_globals?(file_name_spec)
return false unless file_name_spec == Noop::Config.spec_name_globals
return true unless options[:filter_specs]
not spec_included? file_name_spec
end
# Check if the spec is disabled using the annotation
# @return [true,false]
def spec_is_disabled?(file_name_spec)
file_name_spec = Noop::Utils.convert_to_path file_name_spec
spec_run_metadata.fetch(file_name_spec, {}).fetch(:disable, false)
end
# Form the final list of Task objects that should be running.
# Take all discovered spec files, get run records for them,
# apply filters to exclude filtered records.
# @return [Array<Noop::Task>]
def task_list
return @task_list if @task_list
@task_list = []
spec_file_names.each do |file_name_spec|
next if spec_is_disabled? file_name_spec
next if skip_globals? file_name_spec
next unless spec_included? file_name_spec
get_spec_runs(file_name_spec).each do |run|
next unless run[:hiera] and run[:facts]
next unless facts_included? run[:facts]
next unless hiera_included? run[:hiera]
task = Noop::Task.new file_name_spec, run[:hiera], run[:facts]
task.parallel = true if parallel_run?
@task_list << task
end
end
@task_list
end
# Collect all hiera plugins into a data structure.
# Used only for debugging purposes.
# @return [Hash<String => Pathname>]
def hiera_plugins
return @hiera_plugins if @hiera_plugins
@hiera_plugins = {}
return @hiera_plugins unless Noop::Config.file_path_hiera_plugins.directory?
Noop::Config.file_path_hiera_plugins.children.each do |hiera|
next unless hiera.directory?
hiera_name = hiera.basename.to_s
hiera.children.each do |file|
next unless file.file?
next unless file.to_s.end_with? '.yaml'
file = file.relative_path_from Noop::Config.dir_path_hiera
@hiera_plugins[hiera_name] = [] unless @hiera_plugins[hiera_name]
@hiera_plugins[hiera_name] << file
end
end
@hiera_plugins
end
# Loop through all task files and find those that
# do not have a corresponding spec file present
# @return [Array<Pathname>]
def find_tasks_without_specs
task_file_names.reject do |manifest|
spec = Noop::Utils.convert_to_spec manifest
spec_file_names.include? spec
end
end
# Loop through all spec files and find those that
# do not have a corresponding task file present
# @return [Array<Pathname>]
def find_specs_without_tasks
spec_file_names.reject do |spec|
manifest = Noop::Utils.convert_to_manifest spec
task_file_names.include? manifest
end
end
# Loop through all spec files and find those
# which have not been matched to any task
# @return [Array<Pathname>]
def find_unmatched_specs
spec_file_names.reject do |spec|
next true if spec == Noop::Config.spec_name_globals
task_list.any? do |task|
task.file_name_spec == spec
end
end
end
end
end

require 'optparse'
module Noop
class Manager
# Parse the CLI options
# @return [Hash]
def options
return @options if @options
@options = {}
options_defaults @options
optparse = OptionParser.new do |opts|
opts.separator 'Main options:'
opts.on('-j', '--jobs JOBS', 'Run this many RSpec jobs in parallel') do |jobs|
@options[:parallel_run] = jobs
end
opts.on('-g', '--globals', 'Run all globals tasks and update saved globals YAML files') do
ENV['SPEC_UPDATE_GLOBALS'] = 'YES'
options[:filter_specs] = [Noop::Config.spec_name_globals]
end
opts.on('-B', '--bundle_setup', 'Setup Ruby environment using Bundle') do
@options[:bundle_setup] = true
end
opts.on('-b', '--bundle_exec', 'Use "bundle exec" to run rspec') do
ENV['SPEC_BUNDLE_EXEC'] = 'YES'
end
opts.on('-l', '--update-librarian', 'Run librarian-puppet update in the deployment directory prior to testing') do
@options[:update_librarian_puppet] = true
end
opts.on('-L', '--reset-librarian', 'Reset puppet modules to librarian versions in the deployment directory prior to testing') do
@options[:reset_librarian_puppet] = true
end
opts.on('-o', '--report_only_failed', 'Show only failed tasks and examples in the report') do
@options[:report_only_failed] = true
end
opts.on('-O', '--report_only_tasks', 'Show only tasks, skip individual examples') do
@options[:report_only_tasks] = true
end
opts.on('-r', '--load_saved_reports', 'Read saved report JSON files from the previous run and show tasks report') do
@options[:load_saved_reports] = true
end
opts.on('-R', '--run_failed_tasks', 'Run the tasks that have previously failed again') do
@options[:run_failed_tasks] = true
end
opts.on('-x', '--xunit_report', 'Save report in xUnit format to a file') do
@options[:xunit_report] = true
end
opts.separator 'List options:'
opts.on('-Y', '--list_hiera', 'List all hiera yaml files') do
@options[:list_hiera] = true
end
opts.on('-S', '--list_specs', 'List all task spec files') do
@options[:list_specs] = true
end
opts.on('-F', '--list_facts', 'List all facts yaml files') do
@options[:list_facts] = true
end
opts.on('-T', '--list_tasks', 'List all task manifest files') do
@options[:list_tasks] = true
end
opts.separator 'Filter options:'
opts.on('-s', '--specs SPEC1,SPEC2', Array, 'Run only these spec files. Example: "hosts/hosts_spec.rb,apache/apache_spec.rb"') do |specs|
@options[:filter_specs] = specs
end
opts.on('-y', '--yamls YAML1,YAML2', Array, 'Run only these hiera yamls. Example: "controller.yaml,compute.yaml"') do |yamls|
@options[:filter_hiera] = yamls
end
opts.on('-f', '--facts FACTS1,FACTS2', Array, 'Run only these facts yamls. Example: "ubuntu16.yaml,centos7.yaml"') do |yamls|
@options[:filter_facts] = yamls
end
# opts.on('-e', '--examples STR1,STR2', Array, 'Run only these spec examples. Example: "should compile"') do |examples|
# @options[:filter_examples] = examples
# end
opts.separator 'Debug options:'
opts.on('-c', '--task_console', 'Run PRY console') do
ENV['SPEC_TASK_CONSOLE'] = 'YES'
end
opts.on('-C', '--rspec_console', 'Run PRY console in the RSpec process') do
ENV['SPEC_RSPEC_CONSOLE'] = 'YES'
end
opts.on('-d', '--task_debug', 'Show framework debug messages') do
ENV['SPEC_TASK_DEBUG'] = 'YES'
end
opts.on('-D', '--puppet_debug', 'Show Puppet debug messages') do
ENV['SPEC_PUPPET_DEBUG'] = 'YES'
end
opts.on('--debug_log FILE', 'Write all debug messages to this file') do |file|
ENV['SPEC_DEBUG_LOG'] = file
end
opts.on('-t', '--self-check', 'Perform self-check and diagnostic procedures') do
@options[:self_check] = true
end
opts.on('-p', '--pretend', 'Show which tasks will be run without actually running them') do
@options[:pretend] = true
end
opts.separator 'Path options:'
opts.on('--dir_root DIR', 'Path to the test root folder') do |dir|
ENV['SPEC_ROOT_DIR'] = dir
end
opts.on('--dir_deployment DIR', 'Path to the test deployment folder') do |dir|
ENV['SPEC_DEPLOYMENT_DIR'] = dir
end
opts.on('--dir_hiera_yamls DIR', 'Path to the folder with hiera files') do |dir|
ENV['SPEC_HIERA_DIR'] = dir
end
opts.on('--dir_facts_yamls DIR', 'Path to the folder with facts yaml files') do |dir|
ENV['SPEC_FACTS_DIR'] = dir
end
opts.on('--dir_spec_files DIR', 'Path to the folder with task spec files (changing this may break puppet-rspec)') do |dir|
ENV['SPEC_SPEC_DIR'] = dir
end
opts.on('--dir_task_files DIR', 'Path to the folder with task manifest files') do |dir|
ENV['SPEC_TASK_DIR'] = dir
end
opts.on('--module_path DIR', 'Path to the puppet modules. Can consist of several dirs separated by a colon.') do |dir|
ENV['SPEC_MODULE_PATH'] = dir
end
opts.separator 'Spec options:'
opts.on('-A', '--catalog_show', 'Show catalog content debug output') do
ENV['SPEC_CATALOG_SHOW'] = 'YES'
end
opts.on('-V', '--catalog_save', 'Save catalog to the files instead of comparing them with the current catalogs') do
ENV['SPEC_CATALOG_CHECK'] = 'save'
end
opts.on('-v', '--catalog_check', 'Check the saved catalog against the current one') do
ENV['SPEC_CATALOG_CHECK'] = 'check'
end
# opts.on('--spec_generate', 'Generate specs for catalogs') do
# ENV['SPEC_SPEC_GENERATE'] = 'YES'
# end
opts.on('-a', '--spec_status', 'Show spec status blocks') do
ENV['SPEC_SHOW_STATUS'] = 'YES'
end
opts.on('--spec_coverage', 'Show spec coverage statistics') do
ENV['SPEC_COVERAGE'] = 'YES'
end
opts.on('--puppet_binary_files', 'Check if Puppet installs binary files') do
ENV['SPEC_PUPPET_BINARY_FILES'] = 'YES'
end
opts.on('--save_file_resources', 'Save file resources list to a report file') do
ENV['SPEC_SAVE_FILE_RESOURCES'] = 'YES'
end
end
optparse.parse!
@options
end
# Any default options values can be set here
def options_defaults(options)
options[:parallel_run] = 0
end
end
end

require 'erb'
module Noop
class Manager
COLUMN_WIDTH = 8
# Output a status string for this task.
# Also output the examples report unless disabled.
# @param task [Noop::Task]
def output_task_status(task)
return if options[:report_only_failed] and task.success?
line = task_status_string task
line += "#{task.file_base_spec.to_s.ljust max_length_spec + 1}"
line += "#{task.file_base_facts.to_s.ljust max_length_facts + 1}"
line += "#{task.file_base_hiera.to_s.ljust max_length_hiera + 1}"
output line
output_task_examples task unless options[:report_only_tasks]
end
# Output examples report for this task
# @param task [Noop::Task]
def output_task_examples(task)
return unless task.report.is_a? Hash
examples = task.report['examples']
return unless examples.is_a? Array
examples.each do |example|
description = example['description']
status = example['status']
next unless description and status
next if options[:report_only_failed] and status == 'passed'
line = " #{example_status_string status} #{description}"
exception_message = example.fetch('exception', {}).fetch('message', nil)
line += " (#{exception_message.colorize :cyan})" if exception_message
output line
end
end
# Get a colored string with status of this task
# @param task [Noop::Task]
# @return [String]
def task_status_string(task)
if task.pending?
'PENDING'.ljust(COLUMN_WIDTH).colorize :blue
elsif task.success?
'SUCCESS'.ljust(COLUMN_WIDTH).colorize :green
elsif task.failed?
'FAILED'.ljust(COLUMN_WIDTH).colorize :red
else
task.status
end
end
# Colorize the example status string
# @param status [String]
# @return [String]
def example_status_string(status)
if status == 'passed'
status.ljust(COLUMN_WIDTH).colorize :green
elsif status == 'failed'
status.ljust(COLUMN_WIDTH).colorize :red
else
status.ljust(COLUMN_WIDTH).colorize :blue
end
end
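The column alignment used by the report relies on `String#ljust` with a fixed width; a sketch without the colorize gem dependency (in the code above, colorize is applied after padding, so the ANSI codes do not affect the column width):

```ruby
# Pad a status string to a fixed column width, as the report does.
WIDTH = 8
status = 'SUCCESS'.ljust(WIDTH)
line = status + 'hosts/hosts_spec.rb'
puts "'#{line}'"
```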
# Return a string showing if the directory is present.
# @param directory [Pathname]
# @return [String]
def directory_check_status_string(directory)
if directory.directory?
'SUCCESS'.ljust(COLUMN_WIDTH).colorize :green
else
'FAILED'.ljust(COLUMN_WIDTH).colorize :red
end
end
# Find the length of the longest spec file name
# @return [Integer]
def max_length_spec
return @max_length_spec if @max_length_spec
@max_length_spec = task_list.map do |task|
task.file_base_spec.to_s.length
end.max
end
# Find the length of the longest Hiera file name
# @return [Integer]
def max_length_hiera
return @max_length_hiera if @max_length_hiera
@max_length_hiera = task_list.map do |task|
task.file_base_hiera.to_s.length
end.max
end
# Find the length of the longest facts file name
# @return [Integer]
def max_length_facts
return @max_length_facts if @max_length_facts
@max_length_facts = task_list.map do |task|
task.file_base_facts.to_s.length
end.max
end
# Output a status string with tasks count
def output_task_totals
count = {
:total => 0,
:failed => 0,
:pending => 0,
}
task_list.each do |task|
next unless task.is_a? Noop::Task
count[:pending] += 1 if task.pending?
count[:failed] += 1 if task.failed?
count[:total] += 1
end
output_stats_string 'Tasks', count[:total], count[:failed], count[:pending]
end
# Output a status string with examples count
def output_examples_total
count = {
:total => 0,
:failed => 0,
:pending => 0,
}
task_list.each do |task|
next unless task.is_a? Noop::Task
next unless task.has_report?
task.report['examples'].each do |example|
count[:total] += 1
if example['status'] == 'failed'
count[:failed] += 1
elsif example['status'] == 'pending'
count[:pending] += 1
end
end
end
output_stats_string 'Examples', count[:total], count[:failed], count[:pending]
end
# Format a status string of examples or tasks
def output_stats_string(name, total, failed, pending)
line = "#{name.to_s.ljust(COLUMN_WIDTH).colorize :yellow}"
line += " Total: #{total.to_s.ljust(COLUMN_WIDTH).colorize :green}"
line += " Failed: #{failed.to_s.ljust(COLUMN_WIDTH).colorize :red}"
line += " Pending: #{pending.to_s.ljust(COLUMN_WIDTH).colorize :blue}"
output line
end
# Show the main tasks report
def tasks_report
output Noop::Utils.separator
task_list.each do |task|
output_task_status task
end
output Noop::Utils.separator
tasks_stats
output Noop::Utils.separator
end
# Show the tasks and examples stats
def tasks_stats
output_examples_total unless options[:report_only_tasks]
output_task_totals
end
# Show report with all defined filters content
def show_filters
if options[:filter_specs]
options[:filter_specs] = [options[:filter_specs]] unless options[:filter_specs].is_a? Array
output "Spec filter: #{options[:filter_specs].join(', ').colorize :green}"
end
if options[:filter_facts]
options[:filter_facts] = [options[:filter_facts]] unless options[:filter_facts].is_a? Array
output "Facts filter: #{options[:filter_facts].join(', ').colorize :green}"
end
if options[:filter_hiera]
options[:filter_hiera] = [options[:filter_hiera]] unless options[:filter_hiera].is_a? Array
output "Hiera filter: #{options[:filter_hiera].join(', ').colorize :green}"
end
if options[:filter_examples]
options[:filter_examples] = [options[:filter_examples]] unless options[:filter_examples].is_a? Array
output "Examples filter: #{options[:filter_examples].join(', ').colorize :green}"
end
end
# Show the stats of discovered library objects
def show_library
template = <<-'eof'
Tasks discovered: <%= task_file_names.length.to_s.colorize :green %>
Specs discovered: <%= spec_file_names.length.to_s.colorize :green %>
Hiera discovered: <%= hiera_file_names.length.to_s.colorize :green %>
Facts discovered: <%= facts_file_names.length.to_s.colorize :green %>
Tasks in graph metadata: <%= task_graph_metadata.length.to_s.colorize :yellow %>
Tasks with spec metadata: <%= spec_run_metadata.length.to_s.colorize :yellow %>
Total tasks to run: <%= task_list.count.to_s.colorize :yellow %>
eof
output ERB.new(template, nil, '-').result(binding)
end
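The stats block above is a plain ERB template rendered against the current binding; a minimal sketch of that rendering (the count is invented, and the colorize calls are omitted):

```ruby
require 'erb'

# Render a small ERB template against the local binding.
count = 3
template = "Total tasks to run: <%= count %>\n"
rendered = ERB.new(template).result(binding)
puts rendered
```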
# Check the existence of main directories
def check_paths
paths = [
:dir_path_config,
:dir_path_root,
:dir_path_task_root,
:dir_path_task_spec,
:dir_path_tasks_local,
:dir_path_deployment,
:dir_path_workspace,
:dir_path_hiera,
:dir_path_hiera_override,
:dir_path_facts,
:dir_path_facts_override,
:dir_path_globals,
:dir_path_reports,
:list_path_modules,
]
max_length = paths.map { |p| p.to_s.length }.max
paths.each do |path|
directory = Noop::Config.send path
directory = [directory] unless directory.is_a? Array
directory.each do |element|
output "#{directory_check_status_string element} #{path.to_s.ljust max_length} #{element}"
end
end
end
# Output a list of specs that have not been matched to any Hiera files
# and will never run
def list_unmatched_specs
unmatched_specs = find_unmatched_specs.to_a
if unmatched_specs.any?
Noop::Utils.output 'There are specs which have not been matched to a YAML and will never run:'.colorize :red
unmatched_specs.each do |spec|
Noop::Utils.output "#{'*'.colorize :yellow} #{spec}"
end
end
end
# Output a list of tasks without a spec file
# and a list of specs without a task file.
def list_missing_tasks_and_specs
tasks_without_specs = find_tasks_without_specs.to_a
specs_without_tasks = find_specs_without_tasks.to_a
if tasks_without_specs.any?
Noop::Utils.output 'There are tasks without specs:'.colorize :red
tasks_without_specs.each do |task|
Noop::Utils.output "#{'*'.colorize :yellow} #{task}"
end
end
if specs_without_tasks.any?
Noop::Utils.output 'There are specs without tasks:'.colorize :red
specs_without_tasks.each do |spec|
Noop::Utils.output "#{'*'.colorize :yellow} #{spec}"
end
end
end
# Run all diagnostic procedures
def self_check
output Noop::Utils.separator 'Paths'
check_paths
if has_filters?
output Noop::Utils.separator 'Filters'
show_filters
end
output Noop::Utils.separator 'Missing'
list_missing_tasks_and_specs
output Noop::Utils.separator 'Unmatched'
list_unmatched_specs
output Noop::Utils.separator 'Library'
show_library
output Noop::Utils.separator 'End'
end
end
end

module Noop
class Manager
# Check if bundle command is installed
# @return [true,false]
def bundle_installed?
`bundle --version`
$?.exitstatus == 0
end
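The exit-status check above reads `$?` after a backtick call; a self-contained sketch that spawns the `ruby` binary itself so no external tool is assumed:

```ruby
# $? holds the status of the last spawned child process.
`ruby -e "exit 0"`
ok = $?.exitstatus == 0
`ruby -e "exit 3"`
still_ok = $?.exitstatus == 0
puts [ok, still_ok].inspect # [true, false]
```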
# Check if librarian-puppet command is installed
# If we are using bundle there is no need to check it
# @return [true,false]
def librarian_installed?
return true if ENV['SPEC_BUNDLE_EXEC']
`librarian-puppet version`
$?.exitstatus == 0
end
# Set up Bundler in the fixtures repo and in the deployment directory
def setup_bundle
ENV['GEM_HOME'] = Noop::Config.dir_path_gem_home.to_s
bundle_install_and_update Noop::Config.dir_path_root
bundle_install_and_update Noop::Config.dir_path_deployment
Dir.chdir Noop::Config.dir_path_root
end
# Run the update script to set up external Puppet modules
def setup_library
ENV['GEM_HOME'] = Noop::Config.dir_path_gem_home.to_s
update_puppet_modules Noop::Config.dir_path_deployment
Dir.chdir Noop::Config.dir_path_root
end
# @return [Pathname]
def file_name_gemfile_lock
Pathname.new 'Gemfile.lock'
end
# Remove the Gem lock file at the given path
# @param root [String,Pathname]
def remove_gemfile_lock(root)
root = Noop::Utils.convert_to_path root
lock_file_path = root + file_name_gemfile_lock
if lock_file_path.file?
debug "Removing Gem lock file: '#{lock_file_path}'"
lock_file_path.unlink
end
end
# Run bundle install and update actions in the given folder
# @param root [String,Pathname]
def bundle_install_and_update(root)
error 'Bundle is not installed!' unless bundle_installed?
root = Noop::Utils.convert_to_path root
remove_gemfile_lock root
Dir.chdir root or error "Could not chdir to: #{root}"
debug "Starting 'bundle install' at: '#{root}' with the Gem home: '#{ENV['GEM_HOME']}'"
Noop::Utils.run 'bundle install'
error 'Could not prepare bundle environment!' if $?.exitstatus != 0
debug "Starting 'bundle update' at: '#{root}' with the Gem home: '#{ENV['GEM_HOME']}'"
Noop::Utils.run 'bundle update'
error 'Could not update bundle environment!' if $?.exitstatus != 0
end
# Run librarian-puppet to fetch the necessary
# Puppet modules in the given folder
# @param root [String,Pathname]
def update_puppet_modules(root)
error 'Puppet Librarian is not installed!' unless librarian_installed?
root = Noop::Utils.convert_to_path root
Dir.chdir root or error "Could not chdir to: #{root}"
command = './update_modules.sh -v'
command = command + ' -b' if ENV['SPEC_BUNDLE_EXEC']
command = command + ' -r' if options[:reset_librarian_puppet]
debug 'Starting update_modules script'
Noop::Utils.run command
error 'Unable to update upstream puppet modules using librarian-puppet!' if $?.exitstatus != 0
debug 'Finished update_modules script'
end
end
end

require 'rexml/document'
module Noop
class Manager
# Generate a data structure that will be used to create the xUnit report
# @return [Array]
def tasks_report_structure
tasks_report = []
task_list.each do |task|
task_hash = {}
task_hash[:status] = task.status
task_hash[:name] = task.to_s
task_hash[:description] = task.description
task_hash[:spec] = task.file_name_spec.to_s
task_hash[:hiera] = task.file_name_hiera.to_s
task_hash[:facts] = task.file_name_facts.to_s
task_hash[:task] = task.file_name_manifest.to_s
task_hash[:examples] = []
if task.report.is_a? Hash
examples = task.report['examples']
next unless examples.is_a? Array
examples.each do |example|
example_hash = {}
example_hash[:file_path] = example['file_path']
example_hash[:line_number] = example['line_number']
example_hash[:description] = example['description']
example_hash[:status] = example['status']
example_hash[:run_time] = example['run_time']
example_hash[:pending_message] = example['pending_message']
exception_class = example.fetch('exception', {}).fetch('class', nil)
exception_message = example.fetch('exception', {}).fetch('message', nil)
next unless example_hash[:description] and example_hash[:status]
if exception_class and exception_message
example_hash[:exception_class] = exception_class
example_hash[:exception_message] = exception_message
end
task_hash[:examples] << example_hash
end
summary = task.report['summary']
if summary.is_a? Hash
task_hash[:example_count] = summary['example_count']
task_hash[:failure_count] = summary['failure_count']
task_hash[:pending_count] = summary['pending_count']
task_hash[:duration] = summary['duration']
end
end
tasks_report << task_hash
end
tasks_report
end
# Generate xUnit XML report text
# @return [String]
def xunit_report
document = REXML::Document.new
declaration = REXML::XMLDecl.new
declaration.encoding = 'UTF-8'
declaration.version = '1.0'
document.add declaration
testsuites = document.add_element 'testsuites'
tests = 0
failures = 0
task_id = 0
tasks_report_structure.each do |task|
testsuite = testsuites.add_element 'testsuite'
testsuite.add_attribute 'id', task_id
task_id += 1
testsuite.add_attribute 'name', task[:description]
testsuite.add_attribute 'package', task[:name]
testsuite.add_attribute 'tests', task[:example_count]
testsuite.add_attribute 'failures', task[:failure_count]
testsuite.add_attribute 'skipped', task[:pending_count]
testsuite.add_attribute 'time', task[:duration]
testsuite.add_attribute 'status', task[:status]
properties = testsuite.add_element 'properties'
property_task = properties.add_element 'property'
property_task.add_attribute 'name', 'task'
property_task.add_attribute 'value', task[:task]
property_spec = properties.add_element 'property'
property_spec.add_attribute 'name', 'spec'
property_spec.add_attribute 'value', task[:spec]
property_hiera = properties.add_element 'property'
property_hiera.add_attribute 'name', 'hiera'
property_hiera.add_attribute 'value', task[:hiera]
property_facts = properties.add_element 'property'
property_facts.add_attribute 'name', 'facts'
property_facts.add_attribute 'value', task[:facts]
if task[:examples].is_a? Array
task[:examples].each do |example|
tests += 1
testcase = testsuite.add_element 'testcase'
testcase.add_attribute 'name', example[:description]
testcase.add_attribute 'classname', "#{example[:file_path]}:#{example[:line_number]}"
testcase.add_attribute 'time', example[:run_time]
testcase.add_attribute 'status', example[:status]
if example[:status] == 'pending'
skipped = testcase.add_element 'skipped'
skipped.add_attribute 'message', example[:pending_message] if example[:pending_message]
end
if example[:status] == 'failed'
failures += 1
end
if example[:exception_message] and example[:exception_class]
failure = testcase.add_element 'failure'
failure.add_attribute 'message', example[:exception_message]
failure.add_attribute 'type', example[:exception_class]
end
end
end
end
testsuites.add_attribute 'tests', tests
testsuites.add_attribute 'failures', failures
document.to_s
end
# xUnit report file name
# @return [Pathname]
def file_name_xunit_report
Pathname.new 'report.xml'
end
# Full path to the xUnit report file
# @return [Pathname]
def file_path_xunit_report
Noop::Config.dir_path_reports + file_name_xunit_report
end
# Write the xUnit report to the file
# @return [void]
def save_xunit_report
debug "Saving xUnit XML report file to: #{file_path_xunit_report.to_s}"
File.open(file_path_xunit_report.to_s, 'w') do |file|
file.puts xunit_report
end
end
end
end
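The report structure produced by `xunit_report` above can be sketched without the rest of the framework. This is a minimal, standalone illustration of the same REXML element/attribute layout; the task hash and example names here are invented for the demonstration, not taken from a real Fuel task.

```ruby
require 'rexml/document'

# A hand-written task hash mimicking one entry of tasks_report_structure
task = {
  :name => 'Task[keystone]',
  :description => 'Keystone deployment task',
  :examples => [
    { :description => 'should compile', :status => 'passed', :run_time => 0.42 },
    { :description => 'should configure tokens', :status => 'failed', :run_time => 0.13 },
  ],
}

document = REXML::Document.new
document.add REXML::XMLDecl.new('1.0', 'UTF-8')
testsuites = document.add_element 'testsuites'
testsuite = testsuites.add_element 'testsuite'
testsuite.add_attribute 'name', task[:description]
testsuite.add_attribute 'package', task[:name]

failures = 0
task[:examples].each do |example|
  testcase = testsuite.add_element 'testcase'
  testcase.add_attribute 'name', example[:description]
  testcase.add_attribute 'time', example[:run_time].to_s
  testcase.add_attribute 'status', example[:status]
  failures += 1 if example[:status] == 'failed'
end
testsuites.add_attribute 'tests', task[:examples].length.to_s
testsuites.add_attribute 'failures', failures.to_s

xml = document.to_s
```

The resulting string is what `save_xunit_report` would write to `report.xml` for this one suite.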


@ -1,40 +0,0 @@
module FuelRelationshipGraphMatchers
class EnsureTransitiveDependency
def initialize(before, after)
@before = before
@after = after
end
def matches?(actual_graph)
@actual_graph = actual_graph
@dependents = actual_graph.dependents(
vertex_called(actual_graph, @before))
!@dependents.find_all { |d| d.ref =~ /#{Regexp.escape(@after)}/i }.empty?
end
def failure_message
msg = "expected deployment graph to contain a transitional dependency between\n"
msg << "#{@before} and #{@after} but it did not happen\n"
msg << "#{@before} dependents are: #{@dependents.map {|dep| dep.ref}}\n"
msg
end
def failure_message_when_negated
msg = "expected deployment graph to NOT contain a transitional dependency between\n"
msg << "#{@before} and #{@after} but it did not happen\n"
msg << "#{@before} dependents are: #{@dependents.map {|dep| dep.ref}}\n"
msg
end
private
def vertex_called(graph, name)
graph.vertices.find { |v| v.ref =~ /#{Regexp.escape(name)}/i }
end
end
def ensure_transitive_dependency(before, after)
EnsureTransitiveDependency.new(before, after)
end
end
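The matcher above asks Puppet's relationship graph for the transitive dependents of a vertex and greps their refs. The same check can be sketched over a toy adjacency hash, without a real deployment graph; the task names and edges below are invented.

```ruby
require 'set'

# Invented toy dependency edges: key runs before its listed dependents
edges = {
  'Task[hiera]'     => ['Task[globals]'],
  'Task[globals]'   => ['Task[netconfig]'],
  'Task[netconfig]' => ['Task[keystone]'],
}

# Collect every vertex transitively reachable from 'start' (its dependents)
def dependents(edges, start)
  seen = Set.new
  queue = [start]
  until queue.empty?
    vertex = queue.shift
    (edges[vertex] || []).each do |dependent|
      next if seen.include?(dependent)
      seen << dependent
      queue << dependent
    end
  end
  seen
end

# Mirror of the matcher's check: any dependent ref matching 'after'?
def ensure_transitive_dependency?(edges, before, after)
  dependents(edges, before).any? { |ref| ref =~ /#{Regexp.escape(after)}/i }
end
```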


@ -1,12 +0,0 @@
require_relative 'config'
require_relative 'task/base'
require_relative 'task/facts'
require_relative 'task/globals'
require_relative 'task/hiera'
require_relative 'task/spec'
require_relative 'task/run'
require_relative 'task/overrides'
require_relative 'task/catalog'
require_relative 'task/helpers'
require_relative 'task/files'
require_relative 'task/report'


@ -1,134 +0,0 @@
module Noop
class Task
def initialize(spec=nil, hiera=nil, facts=nil)
self.status = :pending
self.file_name_spec = Noop::Utils.convert_to_spec spec if spec
self.file_name_hiera = hiera if hiera
self.file_name_facts = facts if facts
self.pid = Process.pid
self.thread = Thread.current.object_id
@parallel = false
end
attr_accessor :parallel
attr_accessor :pid
attr_accessor :thread
attr_accessor :status
attr_accessor :valid
# Check if this task's configuration is valid
# @return [true,false]
def valid?
validate unless valid.is_a? TrueClass or valid.is_a? FalseClass
valid
end
# @return [true,false]
def success?
status == :success
end
# @return [true,false]
def failed?
status == :failed
end
# @return [true,false]
def pending?
status == :pending
end
# Write a debug message to the logger
# @return [void]
def debug(message)
Noop::Config.log.debug message
end
# Output a message to the console
# @return [void]
def output(message)
Noop::Utils.output message
end
# Write an error message to the log
# and raise the exception
# @return [void]
def error(message)
Noop::Utils.error message
end
# Write a warning message to the log
# @return [void]
def warning(message)
Noop::Utils.warning message
end
# @return [true,false]
def parallel_run?
parallel
end
# @return [true,false]
def validate
if file_name_spec_set?
unless file_present_spec?
warning "No spec file: #{file_path_spec}!"
self.valid = false
return valid
end
else
warning 'Spec file is not set for this task!'
self.valid = false
return valid
end
unless file_present_manifest?
warning "No task file: #{file_path_manifest}!"
self.valid = false
return valid
end
unless file_present_hiera?
warning "No hiera file: #{file_path_hiera}!"
self.valid = false
return valid
end
unless file_present_facts?
warning "No facts file: #{file_path_facts}!"
self.valid = false
return valid
end
self.valid = true
end
# @return [String]
def to_s
"Task[#{file_base_spec}]"
end
# @return [String]
def description
message = ''
message += "Manifest: #{file_name_manifest}"
message += " Spec: #{file_name_spec}"
message += " Hiera: #{file_name_hiera}"
message += " Facts: #{file_name_facts}"
message += " Status: #{status}"
message
end
# @return [String]
def process_info
message = ''
message += "Object: #{object_id}"
message += " Pid: #{pid}" if pid
message += " Thread: #{thread}" if thread
message
end
# @return [String]
def inspect
"Task[#{description}]"
end
end
end


@ -1,165 +0,0 @@
require 'erb'
module Noop
class Task
# Dumps the entire catalog structure to the text
# representation in the Puppet language
# @param context [Object] the context from the rspec test
# @param resources_filter [Array] the list of resources to dump. Dump all resources if not given
def catalog_dump(context, resources_filter = [])
catalog = context.subject
catalog = catalog.call if catalog.is_a? Proc
text = ''
resources_filter = [resources_filter] unless resources_filter.is_a? Array
catalog.resources.select do |catalog_resource|
if catalog_resource.type == 'Class'
next false if %w(main Settings).include? catalog_resource.title.to_s
end
next true unless resources_filter.any?
resources_filter.find do |filter_resource|
resources_are_same? catalog_resource, filter_resource
end
end.sort_by do |catalog_resource|
catalog_resource.to_s
end.each do |catalog_resource|
text += dump_resource(catalog_resource) + "\n"
text += "\n"
end
text
end
# Takes a parameter value and formats it to the literal value
# that could be placed in the Puppet manifest
# @param value [String, Array, Hash, true, false, nil]
# @return [String]
def parameter_value_format(value)
case value
when TrueClass then 'true'
when FalseClass then 'false'
when NilClass then 'undef'
when Array then begin
array = value.collect do |v|
parameter_value_format v
end.join(', ')
"[ #{array} ]"
end
when Hash then begin
hash = value.keys.sort do |a, b|
a.to_s <=> b.to_s
end.collect do |key|
"#{parameter_value_format key.to_s} => #{parameter_value_format value[key]}"
end.join(', ')
"{ #{hash} }"
end
when Numeric, Symbol then parameter_value_format value.to_s
when String then begin
# escapes single quote characters and wrap into them
"'#{value.gsub "'", '\\\\\''}'"
end
else value.to_s
end
end
# Take a resource object and generate a manifest representation of it
# in the Puppet language. Replaces "to_manifest" Puppet function which
# is not working correctly.
# @param resource [Puppet::Resource]
# @return [String]
def dump_resource(resource)
return '' unless resource.is_a? Puppet::Resource or resource.is_a? Puppet::Parser::Resource
attributes = resource.keys
if attributes.include?(:name) and resource[:name] == resource[:title]
attributes.delete(:name)
end
attribute_max_length = attributes.inject(0) do |max_length, attribute|
attribute.to_s.length > max_length ? attribute.to_s.length : max_length
end
attributes.sort!
if attributes.first != :ensure && attributes.include?(:ensure)
attributes.delete(:ensure)
attributes.unshift(:ensure)
end
attributes_text_block = attributes.map { |attribute|
value = resource[attribute]
" #{attribute.to_s.ljust attribute_max_length} => #{parameter_value_format value},\n"
}.join
"#{resource.type.to_s.downcase} { '#{resource.title.to_s}' :\n#{attributes_text_block}}"
end
# This function preprocesses both saved and generated
# catalogs before they will be compared. It allows us to ignore
# irrelevant changes in the catalogs:
# * ignore trailing whitespaces
# * ignore empty lines
# @param data [String]
# @return [String]
def preprocess_catalog_data(data)
clear_data = []
data.to_s.split("\n").each do |line|
line = line.rstrip
next if line == ''
clear_data << line
end
clear_data.join "\n"
end
# Check if two resources have same type and title
# @param res1 [Puppet::Resource]
# @param res2 [Puppet::Resource]
# @return [true,false]
def resources_are_same?(res1, res2)
res1 = res1.to_s.downcase.gsub %r|['"]|, ''
res2 = res2.to_s.downcase.gsub %r|['"]|, ''
res1 == res2
end
# @return [Pathname]
def dir_name_catalogs
Pathname.new 'catalogs'
end
# @return [Pathname]
def dir_path_catalogs
Noop::Config.dir_path_root + dir_name_catalogs
end
# @return [Pathname]
def file_name_task_catalog
Noop::Utils.convert_to_path "#{file_name_base_task_report}.pp"
end
# @return [Pathname]
def file_path_task_catalog
dir_path_catalogs + file_name_task_catalog
end
# Write the catalog file of this task
# using the data from RSpec context
# @param context [Object] the context from the rspec test
# @return [void]
def file_write_task_catalog(context)
dir_path_catalogs.mkpath
error "Catalog directory '#{dir_path_catalogs}' doesn't exist!" unless dir_path_catalogs.directory?
debug "Writing catalog file: #{file_path_task_catalog}"
File.open(file_path_task_catalog.to_s, 'w') do |file|
file.puts catalog_dump context
end
end
# Check if the catalog file exists for this task
# @return [true,false]
def file_present_task_catalog?
file_path_task_catalog.file?
end
# Read the catalog file of this task
# @return [String]
def file_read_task_catalog
return unless file_present_task_catalog?
debug "Reading catalog file: #{file_path_task_catalog}"
file_path_task_catalog.read
end
end
end
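The value formatter in `parameter_value_format` above is self-contained enough to demonstrate on its own. This condensed standalone copy shows how catalog parameter values become Puppet manifest literals (the sample values are invented):

```ruby
# Standalone sketch of Task#parameter_value_format:
# render a Ruby value as a Puppet manifest literal
def parameter_value_format(value)
  case value
  when TrueClass then 'true'
  when FalseClass then 'false'
  when NilClass then 'undef'
  when Array
    "[ #{value.map { |v| parameter_value_format v }.join(', ')} ]"
  when Hash
    pairs = value.keys.sort_by(&:to_s).map do |key|
      "#{parameter_value_format key.to_s} => #{parameter_value_format value[key]}"
    end
    "{ #{pairs.join(', ')} }"
  when Numeric, Symbol then parameter_value_format value.to_s
  when String
    # escape single quotes and wrap the value into them
    "'#{value.gsub "'", '\\\\\''}'"
  else value.to_s
  end
end
```

Numbers and symbols are quoted as strings, and hash keys are sorted so the dumped catalogs are stable across runs.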


@ -1,103 +0,0 @@
require 'yaml'
module Noop
class Task
# @return [Pathname]
def file_name_facts
return @file_name_facts if @file_name_facts
self.file_name_facts = Noop::Utils.path_from_env 'SPEC_FACTS_NAME'
return @file_name_facts if @file_name_facts
self.file_name_facts = Noop::Config.default_facts_file_name
@file_name_facts
end
alias :facts :file_name_facts
# @return [Pathname]
def file_name_facts=(value)
return if value.nil?
@file_name_facts = Noop::Utils.convert_to_path value
@file_name_facts = @file_name_facts.sub_ext '.yaml' if @file_name_facts.extname == ''
end
alias :facts= :file_name_facts=
# @return [Pathname]
def file_base_facts
file_name_facts.basename.sub_ext ''
end
# @return [Pathname]
def file_path_facts
Noop::Config.dir_path_facts + file_name_facts
end
# @return [true,false]
def file_present_facts?
return false unless file_path_facts
file_path_facts.readable?
end
# @return [Pathname]
def file_name_facts_override
file_name_task_extension
end
# @return [Pathname]
def file_path_facts_override
Noop::Config.dir_path_facts_override + file_name_facts_override
end
# @return [true,false]
def file_present_facts_override?
return false unless file_path_facts_override
file_path_facts_override.readable?
end
# @return [Array<String>]
def facts_hierarchy
file_paths = []
file_paths << file_path_facts.to_s if file_present_facts?
file_paths << file_path_facts_override.to_s if file_present_facts_override?
file_paths
end
# @return [void]
def add_host_names(facts_data)
hostname = hiera_lookup 'node_name'
fqdn = hiera_lookup 'fqdn'
facts_data[:hostname] = hostname if hostname
facts_data[:l3_fqdn_hostname] = hostname if hostname
facts_data[:fqdn] = fqdn if fqdn
facts_data[:puppetversion] = Puppet.version
end
# @return [Hash]
def facts_data
facts_data = {}
facts_hierarchy.each do |file_path|
begin
file_data = YAML.load_file file_path
next unless file_data.is_a? Hash
file_data = Noop::Utils.symbolize_hash_to_keys file_data
facts_data.merge! file_data
rescue
next
end
end
add_host_names facts_data
facts_data
end
alias :ubuntu_facts :facts_data
alias :centos_facts :facts_data
# @return [String,nil]
def hostname
facts_data[:hostname]
end
# @return [String,nil]
def fqdn
facts_data[:fqdn]
end
end
end


@ -1,100 +0,0 @@
module Noop
class Task
# @return [Pathname]
def dir_name_file_reports
Pathname.new 'files'
end
# @return [Pathname]
def dir_path_file_reports
Noop::Config.dir_path_reports + dir_name_file_reports
end
# @return [Pathname]
def file_name_file_report
Noop::Utils.convert_to_path "#{file_name_base_task_report}.yaml"
end
# @return [Pathname]
def file_path_file_report
dir_path_file_reports + file_name_file_report
end
# @return [Array<Puppet::Type>]
def find_file_resources(context)
catalog = context.subject
catalog = catalog.call if catalog.is_a? Proc
catalog.resources.select do |resource|
resource.type == 'File'
end
end
# @return [Hash]
def catalog_file_report_structure(context)
files = {}
find_file_resources(context).each do |resource|
next unless %w(present file directory).include? resource[:ensure] or not resource[:ensure]
if resource[:source]
content = resource[:source]
elsif resource[:content]
content = 'TEMPLATE'
else
content = nil
end
next unless content
files[resource[:path]] = content
end
files
end
# @return [String]
def catalog_file_report_template(binding)
template = <<-'eos'
<% if binary_files.any? -%>
You have <%= binary_files.length -%> files that are either binary or init.d scripts:
<% binary_files.each do |file| -%>
* <%= file %>
<% end -%>
<% end -%>
<% if downloaded_files.any? -%>
You are downloading <%= downloaded_files.length -%> files using File resource's source property:
<% downloaded_files.each do |file| -%>
* <%= file %>
<% end -%>
<% end -%>
eos
ERB.new(template, nil, '-').result(binding)
end
# @return [void]
def catalog_file_resources_check(context)
binary_files_regexp = %r{^/bin|^/usr/bin|^/usr/local/bin|^/usr/sbin|^/sbin|^/usr/lib|^/usr/share|^/etc/init.d|^/usr/local/sbin|^/etc/rc\S\.d}
binary_files = []
downloaded_files = []
find_file_resources(context).each do |resource|
next unless %w(present file directory).include? resource[:ensure] or not resource[:ensure]
file_path = resource[:path] || resource[:title]
file_source = resource[:source]
binary_files << file_path if file_path =~ binary_files_regexp
downloaded_files << file_path if file_source
end
if binary_files.any? or downloaded_files.any?
output Noop::Utils.separator
output catalog_file_report_template(binding)
output Noop::Utils.separator
error 'Puppet is installing files that should be packed to the Fuel package!'
end
end
# @return [void]
def catalog_file_report_write(context)
dir_path_file_reports.mkpath
error "File report directory '#{dir_path_file_reports}' doesn't exist!" unless dir_path_file_reports.directory?
debug "Saving File resources list file to: #{file_path_file_report.to_s}"
File.open(file_path_file_report.to_s, 'w') do |file|
YAML.dump catalog_file_report_structure(context), file
end
end
end
end
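The path classification inside `catalog_file_resources_check` above can be exercised on its own. This is the same regexp with a thin wrapper; the sample paths below are invented:

```ruby
# Paths under system binary/library/init directories should come from
# packages, not from Puppet File resources (same regexp as above)
BINARY_FILES_REGEXP = %r{^/bin|^/usr/bin|^/usr/local/bin|^/usr/sbin|^/sbin|^/usr/lib|^/usr/share|^/etc/init.d|^/usr/local/sbin|^/etc/rc\S\.d}

def binary_path?(path)
  !!(path =~ BINARY_FILES_REGEXP)
end
```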


@ -1,37 +0,0 @@
module Noop
class Task
# @return [Pathname]
def file_path_globals
Noop::Config.dir_path_globals + file_name_hiera
end
# @return [true,false]
def file_present_globals?
return false unless file_path_globals
file_path_globals.readable?
end
def write_file_globals(content)
debug "Saving Globals YAML file to: '#{file_path_globals.to_s}'"
File.open(file_path_globals.to_s, 'w') do |file|
file.write content
end
end
# @return [Pathname]
def file_name_globals
file_name_hiera
end
# @return [Pathname]
def file_base_globals
file_base_hiera
end
# @return [Pathname]
def element_globals
Noop::Config.dir_name_globals + file_base_globals
end
end
end


@ -1,161 +0,0 @@
module Noop
class Task
# Extract the parameter or property of a Puppet resource in the catalog
# @param context [RSpec::ExampleGroup] The 'self' of the RSpec example group
# @param resource_type [String] Name of the resource type
# @param resource_name [String] Title of the resource
# @param parameter [String] Parameter name
# @return [Object]
def resource_parameter_value(context, resource_type, resource_name, parameter)
catalog = context.subject
catalog = catalog.call if catalog.is_a? Proc
resource = catalog.resource resource_type, resource_name
error "No resource type: '#{resource_type}' name: '#{resource_name}' in the catalog!" unless resource
resource[parameter.to_sym]
end
# Save the current puppet scope
# @param value [Puppet::Scope]
def puppet_scope=(value)
@puppet_scope = value
end
# The saved Puppet scope to run functions in
# Or the newly generated scope.
# @return [Puppet::Scope]
def puppet_scope
return @puppet_scope if @puppet_scope
PuppetlabsSpec::PuppetInternals.scope
end
# Load a puppet function if it's not already loaded
# @param name [String] Function name
def puppet_function_load(name)
name = name.to_sym unless name.is_a? Symbol
Puppet::Parser::Functions.autoloader.load name
end
# Call a puppet function and return it's value
# @param name [String] Function name
# @param *args [Object] Function parameters
# @return [Object]
def puppet_function(name, *args)
name = name.to_sym unless name.is_a? Symbol
puppet_function_load name
if puppet4?
puppet_scope.call_function name, args
else
error "Could not load Puppet function '#{name}'!" unless puppet_scope.respond_to? "function_#{name}".to_sym
puppet_scope.send "function_#{name}".to_sym, args
end
end
# Take a variable value from the saved puppet scope
# @param name [String] variable name
def lookupvar(name)
puppet_scope.lookupvar name
end
alias :variable :lookupvar
# Load a class from the Puppet modules into the current scope
# It can be used to extract values from 'params' classes like this:
# Noop.load_class 'nova::params'
# Noop.variable 'nova::params::common_package_name'
# => 'openstack-nova-common'
# These values can be later used in the spec examples.
# Note, that the loaded class will not be found in the spec's catalog
# object, but can be found here: Noop.puppet_scope.catalog
# @param class_name [String]
def puppet_class_include(class_name)
class_name = class_name.to_s
unless puppet_scope.catalog.classes.include? class_name
debug "Dynamically loading class: '#{class_name}'"
puppet_scope.compiler.evaluate_classes [class_name], puppet_scope, false
end
end
# Convert resource catalog to a RAL catalog
# and run both "generate" functions for each resource
# that has it and then add results to the catalog
# @param context [RSpec::ExampleGroup] The 'self' of the RSpec example group
# @return <Lambda>
def create_ral_catalog(context)
catalog = context.catalog
catalog = catalog.call if catalog.is_a? Proc
ral_catalog = catalog.to_ral
ral_catalog.resources.each do |resource|
generate_additional_resources ral_catalog, resource
end
lambda { ral_catalog }
end
# If the resources has one of the generate function
# run it and add the generated resources to the catalog
# if they are not there already. Run generate functions
# recursively for the generated resources too.
# @param catalog [Puppet::Catalog]
# @param resource [Puppet::Type]
def generate_additional_resources(catalog, resource)
generate_functions = [:generate, :eval_generate]
generate_functions.each do |function_name|
next unless resource.respond_to? function_name
debug "Resource: #{resource} run: #{function_name}"
generated = resource.send function_name
next unless generated.is_a? Array
generated.each do |generated_resource|
next unless generated_resource.is_a? Puppet::Type
next if catalog.resource generated_resource.ref
debug "Add resource: #{generated_resource} to the catalog"
catalog.add_resource generated_resource
generate_additional_resources catalog, generated_resource
end
end
catalog
end
# Check if the currently running spec is the given one
# or one of the given ones if an array is provided
# @param spec [String, Array<String>]
# @return [true,false]
def current_spec_is?(spec)
return false unless file_name_spec_set?
spec = [spec] unless spec.is_a? Array
spec = spec.flatten
spec = spec.map do |spec|
Noop::Utils.convert_to_spec spec
end
spec.any? do |spec|
file_name_spec == spec
end
end
# check if we're using Puppet4
# @return [true,false]
def puppet4?
Puppet.version.to_f >= 4.0
end
# convert the values in the nested data structure
# from nil to :undef as they are used in Puppet 4
# modifies the argument object and returns it
# @param data [Array, Hash]
# @return [Array, Hash]
def nil2undef(data)
return :undef if data.nil?
if data.is_a? Array
data.each_with_index do |value, index|
data[index] = nil2undef value
end
data
elsif data.is_a? Hash
data.keys.each do |key|
data[key] = nil2undef data[key]
end
data
end
data
end
end
end
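The `nil2undef` conversion above is simple enough to lift out whole. A standalone copy, showing how nested `nil` values become the `:undef` symbol used to model Puppet 4's undefined value:

```ruby
# Standalone sketch of Task#nil2undef: recursively replace nil with :undef,
# modifying the argument in place and returning it
def nil2undef(data)
  return :undef if data.nil?
  if data.is_a? Array
    data.each_with_index { |value, index| data[index] = nil2undef value }
  elsif data.is_a? Hash
    data.keys.each { |key| data[key] = nil2undef data[key] }
  end
  data
end
```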


@ -1,170 +0,0 @@
module Noop
class Task
# @return [Pathname]
def file_name_hiera
return @file_name_hiera if @file_name_hiera
self.file_name_hiera = Noop::Utils.path_from_env 'SPEC_ASTUTE_FILE_NAME', 'SPEC_HIERA_NAME'
return @file_name_hiera if @file_name_hiera
self.file_name_hiera = Noop::Config.default_hiera_file_name
@file_name_hiera
end
# @return [Pathname]
def file_name_hiera=(value)
return if value.nil?
@file_name_hiera = Noop::Utils.convert_to_path value
@file_name_hiera = @file_name_hiera.sub_ext '.yaml' if @file_name_hiera.extname == ''
end
# @return [Pathname]
def file_base_hiera
file_name_hiera.basename.sub_ext ''
end
# @return [Pathname]
def file_path_hiera
Noop::Config.dir_path_hiera + file_name_hiera
end
# @return [true,false]
def file_present_hiera?
return false unless file_path_hiera
file_path_hiera.readable?
end
# @return [Pathname]
def element_hiera
file_base_hiera
end
# @return [Pathname]
def file_name_hiera_override
file_name_task_extension
end
# @return [Pathname]
def file_path_hiera_override
Noop::Config.dir_path_hiera_override + file_name_hiera_override
end
# @return [true,false]
def file_present_hiera_override?
return false unless file_path_hiera_override
file_path_hiera_override.readable?
end
# @return [Pathname]
def element_hiera_override
override_file = file_name_hiera_override
return unless override_file
Noop::Config.dir_name_hiera_override + override_file.sub_ext('')
end
# @return [Pathname]
def dir_path_task_hiera_plugins
Noop::Config.file_path_hiera_plugins + file_base_hiera
end
# @return [Array<Pathname>]
def list_hiera_plugins
return @list_hiera_plugins if @list_hiera_plugins
@list_hiera_plugins = [] unless @list_hiera_plugins
return @list_hiera_plugins unless dir_path_task_hiera_plugins.directory?
dir_path_task_hiera_plugins.children.each do |file|
next unless file.file?
next unless file.to_s.end_with? '.yaml'
file = file.relative_path_from Noop::Config.dir_path_hiera
file = file.sub_ext('')
@list_hiera_plugins << file
end
@list_hiera_plugins.sort!
@list_hiera_plugins
end
# @return [String]
def hiera_logger
if ENV['SPEC_PUPPET_DEBUG']
'console'
else
'noop'
end
end
# @return [Array<String>]
def hiera_hierarchy
elements = []
elements += list_hiera_plugins.map(&:to_s) if list_hiera_plugins.any?
elements << element_hiera_override.to_s if file_present_hiera_override?
elements << element_globals.to_s if file_present_globals?
elements << element_hiera.to_s if file_present_hiera?
elements
end
# @return [Hash]
def hiera_config
{
:backends => [
'yaml',
],
:yaml => {
:datadir => Noop::Config.dir_path_hiera.to_s,
},
:hierarchy => hiera_hierarchy,
:logger => hiera_logger,
:merge_behavior => :deeper,
}
end
# @return [Hiera]
def hiera_object
return @hiera_object if @hiera_object
@hiera_object = Hiera.new(:config => hiera_config)
Hiera.logger = hiera_config[:logger]
@hiera_object
end
# @return [Object]
def hiera_lookup(key, default = nil, resolution_type = :priority)
key = key.to_s
# def lookup(key, default, scope, order_override=nil, resolution_type=:priority)
hiera_object.lookup key, default, {}, nil, resolution_type
end
alias :hiera :hiera_lookup
# @return [Hash]
def hiera_hash(key, default = nil)
hiera_lookup key, default, :hash
end
# @return [Array]
def hiera_array(key, default = nil)
hiera_lookup key, default, :array
end
# @return [Object]
def hiera_structure(key, default = nil, separator = '/', resolution_type = :hash)
path_lookup = lambda do |data, path, default_value|
break default_value unless data
break data unless path.is_a? Array and path.any?
break default_value unless data.is_a? Hash or data.is_a? Array
key = path.shift
if data.is_a? Array
begin
key = Integer key
rescue ArgumentError
break default_value
end
end
path_lookup.call data[key], path, default_value
end
path = key.split separator
key = path.shift
data = hiera key, nil, resolution_type
path_lookup.call data, path, default
end
alias :hiera_dir :hiera_structure
end
end
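The path-walking lambda inside `hiera_structure` above can be demonstrated without a Hiera backend. This standalone sketch traverses nested Hash/Array data with a `'a/b/0'`-style key; the sample data is invented:

```ruby
# Standalone sketch of the traversal in Task#hiera_structure:
# walk nested Hash/Array data along a separator-joined path,
# returning the default on any miss or non-integer array index
def structure_lookup(data, key, default = nil, separator = '/')
  path_lookup = lambda do |node, path, default_value|
    break default_value unless node
    break node unless path.is_a? Array and path.any?
    break default_value unless node.is_a? Hash or node.is_a? Array
    step = path.shift
    if node.is_a? Array
      begin
        step = Integer step
      rescue ArgumentError
        break default_value
      end
    end
    path_lookup.call node[step], path, default_value
  end
  path_lookup.call data, key.split(separator), default
end
```

In the real method the first path element is resolved through `hiera` and the lambda only walks the remainder; here the whole path is walked over a plain hash.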


@ -1,130 +0,0 @@
module Noop
class Task
# Setup all needed override functions
def setup_overrides
puppet_default_settings
puppet_debug_override if ENV['SPEC_PUPPET_DEBUG']
puppet_resource_scope_override
rspec_coverage_add_override
return unless file_name_spec_set?
hiera_config_override
setup_manifest
end
# Set the current module path and the manifest file
# to run in this RSpec session
def setup_manifest
RSpec.configuration.manifest = file_path_manifest.to_s
RSpec.configuration.module_path = Noop::Config.list_path_modules.join ':'
RSpec.configuration.manifest_dir = Noop::Config.dir_path_tasks_local.to_s
# FIXME: kludge to support calling Puppet function outside of the test context
Puppet.settings[:modulepath] = RSpec.configuration.module_path
Puppet.settings[:manifest] = RSpec.configuration.manifest_dir
end
# Override Hiera configuration in the Puppet objects
def hiera_config_override
class << HieraPuppet
def hiera
@hiera ||= Hiera.new(:config => hiera_config)
Hiera.logger = 'noop'
@hiera
end
end
class << Hiera::Config
def config
@config
end
def config=(value)
@config = value
end
def load(source)
@config ||= {}
end
def yaml_load_file(source)
@config ||= {}
end
def []=(key, value)
@config ||= {}
@config[key] = value
end
end
Hiera::Config.config = hiera_config
end
# Ask Puppet to save the current scope reference to the task instance
def puppet_resource_scope_override
Puppet::Parser::Resource.module_eval do
def initialize(*args)
raise ArgumentError, "Resources require a hash as last argument" unless args.last.is_a? Hash
raise ArgumentError, "Resources require a scope" unless args.last[:scope]
super
Noop.task.puppet_scope = scope
@source ||= scope.source
end
end
end
# Divert Puppet logs to the console
def puppet_debug_override
Puppet::Util::Log.level = :debug
Puppet::Util::Log.newdestination(:console)
end
# These settings are pulled from the Puppet TestHelper
# (See Puppet::Test::TestHelper.initialize_settings_before_each)
# These items used to be setup in puppet 3.4 but were moved to before tests
# which breaks our testing framework because we attempt to call
# PuppetlabsSpec::PuppetInternals.scope and
# Puppet::Parser::Function.autoload.load prior to the testing being run.
# This results in an rspec failure so we need to initialize the basic
# settings up front to prevent issues with test framework. See PUP-5601
def puppet_default_settings
defaults = {
:logdir => '/dev/null',
:confdir => '/dev/null',
:vardir => '/dev/null',
:rundir => '/dev/null',
:hiera_config => '/dev/null',
}
defaults[:codedir] = '/dev/null' if puppet4?
Puppet.settings.initialize_app_defaults(defaults)
end
def rspec_coverage_add_override
RSpec::Puppet::Coverage.class_eval do
def add_from_catalog(catalog, test_module)
catalog.to_a.each do |resource|
next if @filters.include?(resource.to_s)
if resource.file == Puppet[:manifest]
add(resource)
else
@excluded = [] unless @excluded
@excluded << resource.to_s
end
end
end
def report!
report = {}
report[:total] = @collection.size
report[:touched] = @collection.count { |_, resource| resource.touched? }
report[:untouched] = report[:total] - report[:touched]
report[:coverage] = "%5.2f" % ((report[:touched].to_f / report[:total].to_f) * 100)
report[:resources] = Hash[*@collection.map do |name, wrapper|
[name, wrapper.to_hash]
end.flatten]
report[:excluded] = @excluded
report
end
end
end
end
end
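The summary arithmetic in the overridden `report!` above is worth seeing in isolation. A standalone sketch over a plain hash of resource name to touched flag (the resource names are invented):

```ruby
# Standalone sketch of the coverage summary computed in report! above:
# collection maps resource name => whether any example touched it
def coverage_report(collection)
  report = {}
  report[:total] = collection.size
  report[:touched] = collection.count { |_, touched| touched }
  report[:untouched] = report[:total] - report[:touched]
  report[:coverage] = '%5.2f' % ((report[:touched].to_f / report[:total]) * 100)
  report
end

report = coverage_report(
  'File[/etc/nova/nova.conf]' => true,
  'Service[nova-api]'         => true,
  'Package[nova-common]'      => true,
  'Exec[sync-db]'             => false,
)
```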


@ -1,116 +0,0 @@
module Noop
class Task
attr_accessor :report
# Generate the report of the currently using files in this spec
# @return [String]
def status_report(context)
task = context.task
template = <<-'eof'
Facts: <%= task.file_path_facts %>
Hiera: <%= task.file_path_hiera %>
Spec: <%= task.file_path_spec %>
Modules: <%= Noop::Config.list_path_modules.join(':') %>
Manifest: <%= task.file_path_manifest %>
Node: <%= task.hiera_lookup 'fqdn', '?' %>
Roles: <%= task.hiera_array('roles', ['?']).join(' ') %>
Hiera hierarchy:
<% task.hiera_hierarchy.each do |element| -%>
* <%= element %>
<% end -%>
Facts hierarchy:
<% task.facts_hierarchy.reverse.each do |element| -%>
* <%= element %>
<% end -%>
eof
ERB.new(template, nil, '-').result(binding)
end
# Get a loaded gem version
# @return [String,nil]
def gem_version(gem)
gem = gem.to_s
return unless Object.const_defined? 'Gem'
return unless Gem.loaded_specs.is_a? Hash
return unless Gem.loaded_specs[gem].respond_to? :version
Gem.loaded_specs[gem].version
end
# Get a report about RSpec gem versions
# @return [String]
def gem_versions_report
versions = "Ruby version: #{RUBY_VERSION}"
%w(puppet rspec rspec-puppet rspec-puppet-utils puppetlabs_spec_helper).each do |gem|
version = gem_version gem
versions += "\n'#{gem}' gem version: #{version}" if version
end
versions
end
# Load a report file of this task if it's present
def file_load_report_json
self.report = file_data_report_json
end
# Check if this task has report loaded
# @return [true,false]
def has_report?
report.is_a? Hash and report['examples'].is_a? Array
end
# @return [Pathname]
def file_name_report_json
Noop::Utils.convert_to_path "#{file_name_base_task_report}.json"
end
# @return [Pathname]
def file_path_report_json
Noop::Config.dir_path_reports + file_name_report_json
end
# @return [Pathname]
def dir_name_coverage
Pathname.new 'coverage'
end
# @return [Pathname]
def dir_path_coverage
Noop::Config.dir_path_reports + dir_name_coverage
end
# @return [Pathname]
def file_path_coverage_report
dir_path_coverage + Noop::Utils.convert_to_path("#{file_name_base_task_report}.yaml")
end
# @return [Hash]
def file_data_report_json
return unless file_present_report_json?
file_data = nil
begin
# debug "Reading report file: #{file_path_report_json}"
file_content = File.read file_path_report_json.to_s
file_data = JSON.load file_content
return unless file_data.is_a? Hash
rescue
debug "Error parsing report file: #{file_path_report_json}"
nil
end
file_data
end
# Remove the report file
def file_remove_report_json
#debug "Removing report file: #{file_path_report_json}"
file_path_report_json.unlink if file_present_report_json?
end
# @return [true,false]
def file_present_report_json?
file_path_report_json.exist?
end
end
end

Some files were not shown because too many files have changed in this diff.