Retire instack-undercloud

instack-undercloud is no longer in use by the TripleO project, so the
code is being removed to avoid confusion. Stable branches will continue
to be maintained for their life; however, no new features should be added.

Change-Id: I63a813c7c1ffd30ca30017133d31a497b77a9a4d
Blueprint: remove-instack-undercloud
Alex Schultz authored on 2018-10-26 09:45:32 -06:00; committed by Emilien Macchi
parent 8c0a8316ce
commit 87abe05ba0
180 changed files with 6 additions and 9472 deletions

.coveragerc
@@ -1,7 +0,0 @@
[run]
branch = True
source = instack_undercloud
omit = instack_undercloud/tests/*
[report]
ignore_errors = True

.gitignore
@@ -1,54 +0,0 @@
*.py[cod]
*.sw[op]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
cover
.tox
.testrepository
nosetests.xml
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
*.bundle
Gemfile.lock
# Mr Mac User
.DS_Store
._.DS_Store
# tarballs
*.tar.gz
# sdist generated stuff
AUTHORS
ChangeLog
instack.answers
# Files created by releasenotes build
releasenotes/build

.gitreview
@@ -3,3 +3,4 @@ host=review.openstack.org
port=29418
project=openstack/instack-undercloud
defaultbranch=master

.testr.conf
@@ -1,4 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./instack_undercloud ./instack_undercloud $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

Gemfile
@@ -1,22 +0,0 @@
source ENV['GEM_SOURCE'] || "https://rubygems.org"

group :development, :test, :system_tests do
  gem 'puppet-openstack_spec_helper',
      :git => 'https://git.openstack.org/openstack/puppet-openstack_spec_helper',
      :branch => 'master',
      :require => false
end

if facterversion = ENV['FACTER_GEM_VERSION']
  gem 'facter', facterversion, :require => false
else
  gem 'facter', :require => false
end

if puppetversion = ENV['PUPPET_GEM_VERSION']
  gem 'puppet', puppetversion, :require => false
else
  gem 'puppet', :require => false
end

# vim:ft=ruby

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README.md
@@ -1,18 +0,0 @@
Team and repository tags
========================

[![Team and repository tags](https://governance.openstack.org/tc/badges/instack-undercloud.svg)](https://governance.openstack.org/tc/reference/tags/index.html)

<!-- Change things from this point on -->

Undercloud Install via instack
==============================

instack-undercloud is tooling for installing a TripleO undercloud.
It is part of the TripleO project:
https://docs.openstack.org/tripleo-docs/latest/

* Free software: Apache license
* Source: https://git.openstack.org/cgit/openstack/instack-undercloud
* Bugs: https://bugs.launchpad.net/tripleo

README.rst
@@ -0,0 +1,5 @@
This project is no longer maintained.

The contents of this repository are still available in the Git source code management system. To see the contents of this repository before it reached its end of life, please check out the previous commit with "git checkout HEAD^1".

For any further questions, please email openstack-dev@lists.openstack.org or join #openstack-dev on Freenode.

Rakefile
@@ -1,6 +0,0 @@
require 'puppetlabs_spec_helper/rake_tasks'
require 'puppet-lint/tasks/puppet-lint'
PuppetLint.configuration.fail_on_warnings = true
PuppetLint.configuration.send('disable_80chars')
PuppetSyntax.fail_on_deprecation_notices = false

bindep.txt
@@ -1,2 +0,0 @@
libssl-dev [platform:dpkg test]
openssl-devel [platform:rpm test]

@@ -1,3 +0,0 @@
[DEFAULT]
output_file = undercloud.conf.sample
namespace = instack-undercloud

@@ -1,4 +0,0 @@
.. toctree::
   :maxdepth: 1

   undercloud

@@ -1,8 +0,0 @@
===================
:mod:`undercloud`
===================
.. automodule:: instack_undercloud.undercloud
   :members:
   :undoc-members:
   :show-inheritance:

@@ -1,278 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'openstackdocstheme',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Instack Undercloud'
copyright = u'2015, OpenStack Foundation'
bug_tracker = u'Launchpad'
bug_tracker_url = u'https://launchpad.net/tripleo'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '3.0.0'
# The full version, including alpha/beta/rc tags.
release = '3.0.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'instack-underclouddoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'instack-undercloud.tex', u'instack-undercloud Documentation',
     u'2015, OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'instack-undercloud', u'instack-undercloud Documentation',
     [u'2015, OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    ('index', 'instack-undercloud', u'instack-undercloud Documentation',
     u'2015, OpenStack Foundation', 'instack-undercloud',
     'Tooling for installing TripleO undercloud.',
     'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# openstackdocstheme options
repository_name = 'openstack/instack-undercloud'
bug_project = 'tripleo'
bug_tag = 'documentation'
rst_prolog = """
.. |project| replace:: %s
.. |bug_tracker| replace:: %s
.. |bug_tracker_url| replace:: %s
""" % (project, bug_tracker, bug_tracker_url)

@@ -1,24 +0,0 @@
Welcome to |project| documentation
====================================

The instack-undercloud project has code and diskimage-builder
elements for deploying a TripleO undercloud to an existing system.
See the `TripleO documentation`_ for the full end-to-end workflow.

.. _`TripleO documentation`: https://docs.openstack.org/tripleo-docs/latest/

API Documentation
=================

.. toctree::
   :maxdepth: 1

   api/index

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,4 +0,0 @@
Enable the CentOS CR Repo
Allow use of packages from the CentOS CR repository, per the instructions at
https://wiki.centos.org/AdditionalResources/Repositories/CR

@@ -1,9 +0,0 @@
#!/bin/bash
set -eux
set -o pipefail
# Per https://seven.centos.org/2015/03/centos-7-cr-repo-has-been-populated/
# we need to update before we can enable the cr repo.
yum -y update
yum-config-manager --enable cr

@@ -1,9 +0,0 @@
Build an instack vm image
This element allows building an instack VM image using diskimage-builder. To
build the image, include this element and the appropriate distro element.
For example:

    disk-image-create -a amd64 -o instack \
      --image-size 30 \
      fedora instack-vm

@@ -1,3 +0,0 @@
local-config
package-installs
vm

@@ -1,18 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
ANSWERSFILE=${ANSWERSFILE:-""}

if [ -z "$ANSWERSFILE" ]; then
    echo "\$ANSWERSFILE should be defined."
    exit 1
fi

file_list="$ANSWERSFILE
$TE_DATAFILE"

for f in $file_list; do
    cp "$f" "$TMP_HOOKS_PATH"
done

@@ -1,10 +0,0 @@
#!/bin/bash
set -eux
set -o pipefail
# When using instack-virt-setup, it makes sense to always enable IP forwarding
# so the Overcloud nodes can have external access.
cat > /etc/sysctl.d/ip-forward.conf <<EOF
net.ipv4.ip_forward=1
EOF

@@ -1,19 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
useradd -m stack
cat > /etc/sudoers.d/stack <<eof
stack ALL=(ALL) NOPASSWD:ALL
eof
chmod 0440 /etc/sudoers.d/stack
visudo -c
mkdir -p /home/stack/.ssh
cp /tmp/in_target.d/undercloud.conf.sample /home/stack/undercloud.conf
cp /tmp/in_target.d/$(basename $TE_DATAFILE) /home/stack/instackenv.json
chown -R stack:stack /home/stack

@@ -1,3 +0,0 @@
net-tools:
yum-utils:
git:

@@ -1,6 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
yum erase -y cloud-init

@@ -1,7 +0,0 @@
#!/bin/bash
set -eux
set -o pipefail
echo "$UNDERCLOUD_VM_NAME.localdomain" > /etc/hostname
echo "127.0.0.1 $UNDERCLOUD_VM_NAME $UNDERCLOUD_VM_NAME.localdomain" >> /etc/hosts

@@ -1,13 +0,0 @@
overcloud-full
==============
Element for the overcloud-full image created by instack-undercloud.
Workarounds
-----------
This element can be used to apply needed workarounds.
* openstack-glance-api and openstack-glance-registry are currently installed
  explicitly, since this is not handled by the overcloud-control element from
  tripleo-puppet-elements.

@@ -1,7 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
# Enable persistent logging for the systemd journal
mkdir -p /var/log/journal

@@ -1,2 +0,0 @@
openstack-glance-api:
openstack-glance-registry:

@@ -1,6 +0,0 @@
#!/bin/bash
set -eux
set -o pipefail
rm -f /etc/libvirt/qemu/networks/autostart/default.xml

@@ -1,4 +0,0 @@
This element will override the behavior from the pip-and-virtualenv element
from tripleo-image-elements so that python-pip and python-virtualenv are never
installed.

@@ -1 +0,0 @@
pip-and-virtualenv

@@ -1,9 +0,0 @@
puppet-stack-config
-------------------
puppet-stack-config provides static puppet configuration for a single-node
baremetal cloud using the Ironic driver. A YAML template is used to render a
hiera data file at /etc/puppet/hieradata/puppet-stack-config.yaml.
The template rendering takes its input from a set of defined environment
variables.
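The rendering step can be sketched in Python. This is a simplified stand-in, not the element's actual hook: a regex-based substitute for pystache, with hypothetical variable names and values.

```python
import re

def render(template, context):
    # Minimal {{key}} substitution; the real element uses pystache.Renderer.
    return re.sub(r'\{\{\s*(\w+)\s*\}\}',
                  lambda m: str(context.get(m.group(1), '')), template)

# Hypothetical environment-derived settings; string booleans are coerced
# so the rendered hiera data carries YAML booleans rather than strings.
env = {'UNDERCLOUD_ENABLE_SSL': 'True', 'UNDERCLOUD_HOSTNAME': 'uc.localdomain'}
context = {k: {'True': 'true', 'False': 'false'}.get(v, v)
           for k, v in env.items()}

template = "enable_ssl: {{UNDERCLOUD_ENABLE_SSL}}\nhostname: {{UNDERCLOUD_HOSTNAME}}\n"
print(render(template, context))
# enable_ssl: true
# hostname: uc.localdomain
```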

@@ -1,2 +0,0 @@
hiera
puppet-modules

@@ -1,5 +0,0 @@
#!/bin/bash
set -eux
yum -y install git

@@ -1,58 +0,0 @@
#!/usr/bin/python
# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import subprocess
import tempfile

import pystache

from instack_undercloud import undercloud

renderer = pystache.Renderer(escape=lambda s: s)
template = os.path.join(os.path.dirname(__file__),
                        '..',
                        'puppet-stack-config.yaml.template')

context = {item: os.environ.get(item)
           for item in undercloud.InstackEnvironment.PUPPET_KEYS}

endpoint_context = {}
for k, v in os.environ.items():
    if k.startswith('UNDERCLOUD_ENDPOINT_'):
        endpoint_context[k] = v
context.update(endpoint_context)

# Make sure boolean strings are treated as Bool()
for k, v in list(context.items()):
    if v == 'False':
        context[k] = False
    elif v == 'True':
        context[k] = True

with open(template) as f:
    puppet_stack_config_yaml = renderer.render(f.read(), context)

puppet_stack_config_yaml_path = '/etc/puppet/hieradata/puppet-stack-config.yaml'
if not os.path.exists(os.path.dirname(puppet_stack_config_yaml_path)):
    os.makedirs(os.path.dirname(puppet_stack_config_yaml_path))

with open(puppet_stack_config_yaml_path, 'w') as f:
    f.write(puppet_stack_config_yaml)

# Secure permissions
os.chmod(os.path.dirname(puppet_stack_config_yaml_path), 0750)
os.chmod(puppet_stack_config_yaml_path, 0600)

@@ -1,7 +0,0 @@
#!/bin/bash
set -eux
set -o pipefail
mkdir -p /etc/puppet/manifests
cp $(dirname $0)/../puppet-stack-config.pp /etc/puppet/manifests/puppet-stack-config.pp

@@ -1 +0,0 @@
tripleo::selinux::mode: permissive

@@ -1,22 +0,0 @@
rabbitmq::package_provider: yum
tripleo::selinux::mode: enforcing
tripleo::profile::base::sshd::options:
  HostKey:
    - '/etc/ssh/ssh_host_rsa_key'
    - '/etc/ssh/ssh_host_ecdsa_key'
    - '/etc/ssh/ssh_host_ed25519_key'
  SyslogFacility: 'AUTHPRIV'
  AuthorizedKeysFile: '.ssh/authorized_keys'
  ChallengeResponseAuthentication: 'no'
  GSSAPIAuthentication: 'yes'
  GSSAPICleanupCredentials: 'no'
  UsePAM: 'yes'
  UseDNS: 'no'
  X11Forwarding: 'yes'
  UsePrivilegeSeparation: 'sandbox'
  AcceptEnv:
    - 'LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES'
    - 'LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT'
    - 'LC_IDENTIFICATION LC_ALL LANGUAGE'
    - 'XMODIFIERS'
  Subsystem: 'sftp /usr/libexec/openssh/sftp-server'

@@ -1,19 +0,0 @@
#!/bin/bash
set -eux
set -o pipefail
function puppet_apply {
    set +e
    $@ 2>&1
    rc=$?
    set -e
    echo "puppet apply exited with exit code $rc"
    if [ $rc != 2 -a $rc != 0 ]; then
        exit $rc
    fi
}
puppet_apply puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp

@@ -1,7 +0,0 @@
#!/bin/bash
set -eux
set -o pipefail
EXTERNAL_BRIDGE=br-ctlplane
iptables -w -t nat -C PREROUTING -d 169.254.169.254/32 -i $EXTERNAL_BRIDGE -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775 || iptables -w -t nat -I PREROUTING -d 169.254.169.254/32 -i $EXTERNAL_BRIDGE -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775

@@ -1,2 +0,0 @@
pystache:
python-oslo-concurrency:

@@ -1,729 +0,0 @@
# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
warning('instack-undercloud is deprecated in Rocky and is replaced by containerized-undercloud.')
# Deploy os-net-config before everything in the catalog
include ::stdlib
class { '::tripleo::network::os_net_config':
  stage => 'setup',
}
# enable ip forwarding for the overcloud nodes to access the outside internet
# in cases where they are on an isolated network
ensure_resource('sysctl::value', 'net.ipv4.ip_forward', { 'value' => 1 })
# NOTE(aschultz): clear up old file as this used to be managed via DIB
file { '/etc/sysctl.d/ip-forward.conf':
  ensure => absent,
}
# NOTE(aschultz): LP#1750194 - docker will switch FORWARD to DROP if ip_forward
# is not enabled first.
Sysctl::Value['net.ipv4.ip_forward'] -> Package<| title == 'docker' |>
# NOTE(aschultz): LP#1754426 - remove cloud-init and disable os-collect-config
package { 'cloud-init':
  ensure => 'absent',
}
service { 'os-collect-config':
  ensure => stopped,
  enable => false,
}
# Run OpenStack db-sync at every puppet run, in any case.
Exec<| title == 'neutron-db-sync' |> { refreshonly => false }
Exec<| title == 'keystone-manage db_sync' |> { refreshonly => false }
Exec<| title == 'glance-manage db_sync' |> { refreshonly => false }
Exec<| title == 'nova-db-sync-api' |> { refreshonly => false }
Exec<| title == 'nova-db-sync' |> { refreshonly => false }
Exec<| title == 'nova-db-online-data-migrations' |> { refreshonly => false }
Exec<| title == 'ironic-db-online-data-migrations' |> { refreshonly => false }
Exec<| title == 'heat-dbsync' |> {
  refreshonly => false,
  # The Heat database on the undercloud can be really big; db-sync usually
  # takes at least 10 minutes.
  timeout     => 900,
}
Exec<| title == 'aodh-db-sync' |> { refreshonly => false }
Exec<| title == 'ironic-dbsync' |> { refreshonly => false }
Exec<| title == 'mistral-db-sync' |> { refreshonly => false }
Exec<| title == 'mistral-db-populate' |> { refreshonly => false }
Exec<| title == 'zaqar-manage db_sync' |> { refreshonly => false }
Exec<| title == 'cinder-manage db_sync' |> { refreshonly => false }
Keystone::Resource::Service_identity {
default_domain => hiera('keystone_default_domain'),
}
include ::tripleo::profile::base::time::ntp
include ::rabbitmq
Class['::rabbitmq'] -> Service['httpd']
include ::tripleo::firewall
include ::tripleo::selinux
include ::tripleo::profile::base::kernel
include ::tripleo::profile::base::certmonger_user
if hiera('tripleo::haproxy::service_certificate', undef) {
class {'::tripleo::profile::base::haproxy':
enable_load_balancer => true,
}
include ::tripleo::keepalived
# NOTE: The following is required because we need to make sure that keepalived
# is up and running before rabbitmq. The reason is that when the undercloud is
# deployed with SSL, the hostname is configured to one of the VIPs so rabbit will try to
# connect to it at startup and if the VIP is not up it will fail (LP#1782814)
Class['::tripleo::keepalived'] -> Class['::rabbitmq']
# NOTE: This is required because the haproxy configuration should be changed
# before any keystone operations are triggered. Without this, it will try to
# access the new endpoints that point to haproxy even if haproxy hasn't
# started yet. The same is the case for ironic and ironic-inspector.
Class['::tripleo::haproxy'] -> Anchor['keystone::install::begin']
}
# MySQL
include ::tripleo::profile::base::database::mysql
# Raise the mysql file limit
exec { 'systemctl-daemon-reload':
command => '/bin/systemctl daemon-reload',
refreshonly => true,
}
file { '/etc/systemd/system/mariadb.service.d':
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
}
file { '/etc/systemd/system/mariadb.service.d/limits.conf':
ensure => 'file',
owner => 'root',
group => 'root',
mode => '0644',
content => "[Service]\nLimitNOFILE=16384\n",
require => File['/etc/systemd/system/mariadb.service.d'],
notify => [Exec['systemctl-daemon-reload'], Service['mysqld']],
}
Exec['systemctl-daemon-reload'] -> Service['mysqld']
file { '/var/log/journal':
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
notify => Service['systemd-journald'],
}
service { 'systemd-journald':
ensure => 'running'
}
# FIXME: this should only occur on the bootstrap host (ditto for db syncs)
# Create all the database schemas
# Example DSN format: mysql+pymysql://user:password@host/dbname
$allowed_hosts = ['%',hiera('controller_host')]
$re_dsn = '//([^:]+):([^@]+)@\[?([^/]+?)\]?/([a-z_-]+)'
$keystone_dsn = match(hiera('keystone::database_connection'), $re_dsn)
class { '::keystone::db::mysql':
user => $keystone_dsn[1],
password => $keystone_dsn[2],
host => $keystone_dsn[3],
dbname => $keystone_dsn[4],
allowed_hosts => $allowed_hosts,
}
$glance_dsn = match(hiera('glance::api::database_connection'), $re_dsn)
class { '::glance::db::mysql':
user => $glance_dsn[1],
password => $glance_dsn[2],
host => $glance_dsn[3],
dbname => $glance_dsn[4],
allowed_hosts => $allowed_hosts,
}
$nova_dsn = match(hiera('nova::database_connection'), $re_dsn)
class { '::nova::db::mysql':
user => $nova_dsn[1],
password => $nova_dsn[2],
host => $nova_dsn[3],
dbname => $nova_dsn[4],
allowed_hosts => $allowed_hosts,
}
$nova_api_dsn = match(hiera('nova::api_database_connection'), $re_dsn)
class { '::nova::db::mysql_api':
user => $nova_api_dsn[1],
password => $nova_api_dsn[2],
host => $nova_api_dsn[3],
dbname => $nova_api_dsn[4],
allowed_hosts => $allowed_hosts,
}
$nova_placement_dsn = match(hiera('nova::placement_database_connection'), $re_dsn)
class { '::nova::db::mysql_placement':
user => $nova_placement_dsn[1],
password => $nova_placement_dsn[2],
host => $nova_placement_dsn[3],
dbname => $nova_placement_dsn[4],
allowed_hosts => $allowed_hosts,
}
$neutron_dsn = match(hiera('neutron::server::database_connection'), $re_dsn)
class { '::neutron::db::mysql':
user => $neutron_dsn[1],
password => $neutron_dsn[2],
host => $neutron_dsn[3],
dbname => $neutron_dsn[4],
allowed_hosts => $allowed_hosts,
}
$heat_dsn = match(hiera('heat_dsn'), $re_dsn)
class { '::heat::db::mysql':
user => $heat_dsn[1],
password => $heat_dsn[2],
host => $heat_dsn[3],
dbname => $heat_dsn[4],
allowed_hosts => $allowed_hosts,
}
if str2bool(hiera('enable_telemetry', false)) {
# Ceilometer
include ::ceilometer::keystone::auth
include ::aodh::keystone::auth
include ::ceilometer
include ::ceilometer::agent::notification
include ::ceilometer::agent::central
include ::ceilometer::agent::auth
include ::ceilometer::dispatcher::gnocchi
  # We need to use exec as the keystone dependency wouldn't allow
  # us to wait until the service is up before running the upgrade. This
  # is because keystone, gnocchi and ceilometer all run under apache.
exec { 'ceilo-gnocchi-upgrade':
command => 'ceilometer-upgrade --skip-metering-database',
path => ['/usr/bin', '/usr/sbin'],
}
# This ensures we can do service validation on gnocchi api before
# running ceilometer-upgrade
$command = join(['curl -s',
hiera('gnocchi_healthcheck_url')], ' ')
openstacklib::service_validation { 'gnocchi-status':
command => $command,
tries => 20,
refreshonly => true,
subscribe => Anchor['gnocchi::service::end']
}
# Ensure all endpoint exists and only then run the upgrade.
Keystone::Resource::Service_identity<||>
-> Openstacklib::Service_validation['gnocchi-status']
-> Exec['ceilo-gnocchi-upgrade']
# Aodh
$aodh_dsn = match(hiera('aodh::db::database_connection'), $re_dsn)
class { '::aodh::db::mysql':
user => $aodh_dsn[1],
password => $aodh_dsn[2],
host => $aodh_dsn[3],
dbname => $aodh_dsn[4],
allowed_hosts => $allowed_hosts,
}
include ::aodh
include ::aodh::api
include ::aodh::wsgi::apache
include ::aodh::evaluator
include ::aodh::notifier
include ::aodh::listener
include ::aodh::client
include ::aodh::db::sync
include ::aodh::auth
include ::aodh::config
# Gnocchi
$gnocchi_dsn = match(hiera('gnocchi::db::database_connection'), $re_dsn)
class { '::gnocchi::db::mysql':
user => $gnocchi_dsn[1],
password => $gnocchi_dsn[2],
host => $gnocchi_dsn[3],
dbname => $gnocchi_dsn[4],
allowed_hosts => $allowed_hosts,
}
include ::gnocchi
include ::gnocchi::keystone::auth
include ::gnocchi::api
include ::gnocchi::wsgi::apache
include ::gnocchi::client
include ::gnocchi::db::sync
include ::gnocchi::storage
include ::gnocchi::metricd
include ::gnocchi::statsd
include ::gnocchi::config
$gnocchi_backend = downcase(hiera('gnocchi_backend', 'swift'))
case $gnocchi_backend {
'swift': { include ::gnocchi::storage::swift }
'file': { include ::gnocchi::storage::file }
'rbd': { include ::gnocchi::storage::ceph }
default: { fail('Unrecognized gnocchi_backend parameter.') }
}
# Panko
$panko_dsn = match(hiera('panko::db::database_connection'), $re_dsn)
class { '::panko::db::mysql':
user => $panko_dsn[1],
password => $panko_dsn[2],
host => $panko_dsn[3],
dbname => $panko_dsn[4],
allowed_hosts => $allowed_hosts,
}
include ::panko
include ::panko::keystone::auth
include ::panko::config
include ::panko::db
include ::panko::db::sync
include ::panko::api
include ::panko::wsgi::apache
include ::panko::client
} else {
# If Telemetry is disabled, ensure we tear down everything:
# packages, services, configuration files.
Package { [
'python-aodh',
'python-ceilometer',
'python-gnocchi',
'python-panko'
]:
ensure => 'purged',
notify => Service['httpd'],
}
File { [
'/etc/httpd/conf.d/10-aodh_wsgi.conf',
'/etc/httpd/conf.d/10-ceilometer_wsgi.conf',
'/etc/httpd/conf.d/10-gnocchi_wsgi.conf',
'/etc/httpd/conf.d/10-panko_wsgi.conf',
]:
ensure => absent,
notify => Service['httpd'],
}
}
$ironic_dsn = match(hiera('ironic::database_connection'), $re_dsn)
class { '::ironic::db::mysql':
user => $ironic_dsn[1],
password => $ironic_dsn[2],
host => $ironic_dsn[3],
dbname => $ironic_dsn[4],
allowed_hosts => $allowed_hosts,
}
$ironic_inspector_dsn = match(hiera('ironic::inspector::db::database_connection'), $re_dsn)
class { '::ironic::inspector::db::mysql':
user => $ironic_inspector_dsn[1],
password => $ironic_inspector_dsn[2],
host => $ironic_inspector_dsn[3],
dbname => $ironic_inspector_dsn[4],
allowed_hosts => $allowed_hosts,
}
# pre-install swift here so we can build rings
include ::swift
if hiera('tripleo::haproxy::service_certificate', undef) {
$keystone_public_endpoint = join(['https://', hiera('controller_public_host'), ':13000'])
$enable_proxy_headers_parsing = true
} else {
$keystone_public_endpoint = undef
$enable_proxy_headers_parsing = false
}
if str2bool(hiera('enable_telemetry', false)) {
$notification_topics = ['notifications']
} else {
$notification_topics = []
}
class { '::keystone':
enable_proxy_headers_parsing => $enable_proxy_headers_parsing,
notification_topics => $notification_topics,
}
include ::keystone::wsgi::apache
include ::keystone::cron::token_flush
include ::keystone::roles::admin
include ::keystone::endpoint
include ::keystone::cors
include ::keystone::config
include ::heat::keystone::auth
include ::heat::keystone::auth_cfn
include ::neutron::keystone::auth
include ::glance::keystone::auth
include ::nova::keystone::auth
include ::nova::keystone::auth_placement
include ::swift::keystone::auth
include ::ironic::keystone::auth
include ::ironic::keystone::auth_inspector
#TODO: need a cleanup-keystone-tokens.sh solution here
keystone_config {
'ec2/driver': value => 'keystone.contrib.ec2.backends.sql.Ec2';
}
# TODO: notifications, scrubber, etc.
class { '::glance::api':
enable_proxy_headers_parsing => $enable_proxy_headers_parsing,
}
include ::glance::backend::swift
include ::glance::notify::rabbitmq
class { '::nova':
debug => hiera('debug'),
notification_format => 'unversioned',
}
class { '::nova::api':
enable_proxy_headers_parsing => $enable_proxy_headers_parsing,
}
include ::nova::wsgi::apache_api
include ::nova::cell_v2::simple_setup
include ::nova::placement
include ::nova::wsgi::apache_placement
include ::nova::cron::archive_deleted_rows
include ::nova::cron::purge_shadow_tables
include ::nova::config
include ::nova::conductor
include ::nova::scheduler
include ::nova::scheduler::filter
include ::nova::compute
class { '::neutron':
debug => hiera('debug'),
}
include ::neutron::server
include ::neutron::server::notifications
include ::neutron::quota
include ::neutron::plugins::ml2
include ::neutron::agents::dhcp
include ::neutron::agents::l3
include ::neutron::plugins::ml2::networking_baremetal
include ::neutron::agents::ml2::networking_baremetal
include ::neutron::config
# Make sure ironic endpoint exists before starting the service
Keystone_endpoint <||> -> Service['ironic-neutron-agent']
class { '::neutron::agents::ml2::ovs':
bridge_mappings => split(hiera('neutron_bridge_mappings'), ','),
}
neutron_config {
'DEFAULT/notification_driver': value => 'messaging';
}
# swift proxy
include ::memcached
include ::swift::proxy
include ::swift::ringbuilder
include ::swift::proxy::proxy_logging
include ::swift::proxy::healthcheck
include ::swift::proxy::bulk
include ::swift::proxy::cache
include ::swift::proxy::keystone
include ::swift::proxy::authtoken
include ::swift::proxy::staticweb
include ::swift::proxy::copy
include ::swift::proxy::slo
include ::swift::proxy::dlo
include ::swift::proxy::versioned_writes
include ::swift::proxy::ratelimit
include ::swift::proxy::catch_errors
include ::swift::proxy::tempurl
include ::swift::proxy::formpost
include ::swift::objectexpirer
include ::swift::config
# swift storage
class { '::swift::storage::all':
mount_check => str2bool(hiera('swift_mount_check')),
allow_versions => true,
}
if(!defined(File['/srv/node'])) {
file { '/srv/node':
ensure => directory,
owner => 'swift',
group => 'swift',
require => Package['swift'],
}
}
# This is no longer automatically created by Swift itself
file { '/srv/node/1':
ensure => directory,
owner => 'swift',
group => 'swift',
require => File['/srv/node'],
}
$swift_components = ['account', 'container', 'object']
swift::storage::filter::recon { $swift_components : }
swift::storage::filter::healthcheck { $swift_components : }
$controller_host = hiera('controller_host_wrapped')
ring_object_device { "${controller_host}:6000/1":
zone => 1,
weight => 1,
}
Ring_object_device<||> ~> Service['swift-proxy-server']
ring_container_device { "${controller_host}:6001/1":
zone => 1,
weight => 1,
}
Ring_container_device<||> ~> Service['swift-proxy-server']
ring_account_device { "${controller_host}:6002/1":
zone => 1,
weight => 1,
}
Ring_account_device<||> ~> Service['swift-proxy-server']
# Ensure rsyslog catches up change in /etc/rsyslog.d and forwards logs
exec { 'restart rsyslog':
command => '/bin/systemctl restart rsyslog',
}
# Apache
include ::apache
# Heat
class { '::heat':
debug => hiera('debug'),
keystone_ec2_uri => join([hiera('keystone_auth_uri'), '/ec2tokens']),
enable_proxy_headers_parsing => $enable_proxy_headers_parsing,
heat_clients_endpoint_type => hiera('heat_clients_endpoint_type', 'internal'),
}
include ::heat::api
include ::heat::wsgi::apache_api
include ::heat::api_cfn
include ::heat::wsgi::apache_api_cfn
include ::heat::engine
include ::heat::keystone::domain
include ::heat::cron::purge_deleted
include ::heat::cors
include ::heat::config
include ::keystone::roles::admin
include ::nova::compute::ironic
include ::nova::network::neutron
include ::nova::cors
# Ironic
include ::ironic
include ::ironic::api
include ::ironic::wsgi::apache
include ::ironic::conductor
include ::ironic::drivers::ansible
include ::ironic::drivers::drac
include ::ironic::drivers::ilo
include ::ironic::drivers::inspector
include ::ironic::drivers::interfaces
include ::ironic::drivers::ipmi
include ::ironic::drivers::pxe
include ::ironic::drivers::redfish
include ::ironic::drivers::staging
include ::ironic::glance
include ::ironic::inspector
include ::ironic::inspector::cors
include ::ironic::inspector::pxe_filter
include ::ironic::inspector::pxe_filter::dnsmasq
include ::ironic::neutron
include ::ironic::pxe
include ::ironic::service_catalog
include ::ironic::swift
include ::ironic::cors
include ::ironic::config
Keystone_endpoint<||> -> Service['ironic-inspector']
# https://bugs.launchpad.net/tripleo/+bug/1663273
Keystone_endpoint <||> -> Service['nova-compute']
Keystone_service <||> -> Service['nova-compute']
# This is a workaround for a race between nova-compute and ironic
# conductor. When https://bugs.launchpad.net/tripleo/+bug/1777608 is
# fixed this can be removed. Currently we wait one minute for the
# ironic conductor service to be ready. As puppet can order things its
# own way and be slow (especially in CI environments) we can have services
# started more than one minute apart, hence the need for this.
Service[$::ironic::params::conductor_service] -> Service[$::nova::params::compute_service_name]
if str2bool(hiera('enable_tempest', true)) {
# tempest
package{'openstack-tempest': }
# needed for /bin/subunit-2to1 (called by run_tempest.sh)
package{'subunit-filters': }
}
# Ensure dm thin-pool is never activated. This avoids an issue
# where the instack host (in this case on a VM) was crashing due to
# activation of the docker thin-pool associated with the atomic host.
augeas { 'lvm.conf':
require => Package['nova-compute'],
context => '/files/etc/lvm/lvm.conf/devices/dict/',
changes => 'set global_filter/list/1/str "r|^/dev/disk/by-path/ip.*iscsi.*\.org\.openstack:.*|"'
}
if str2bool(hiera('enable_docker_registry', true)) {
ensure_resource('group', 'docker', {
'ensure' => 'present',
})
ensure_resource('user', 'docker_user', {
'name' => hiera('tripleo_install_user'),
'groups' => 'docker',
'notify' => Service['docker'],
})
include ::tripleo::profile::base::docker_registry
}
include ::mistral
$mistral_dsn = match(hiera('mistral::database_connection'), $re_dsn)
class { '::mistral::db::mysql':
user => $mistral_dsn[1],
password => $mistral_dsn[2],
host => $mistral_dsn[3],
dbname => $mistral_dsn[4],
allowed_hosts => $allowed_hosts,
}
include ::mistral::keystone::auth
include ::mistral::db::sync
include ::mistral::api
include ::mistral::engine
ensure_resource('user', 'mistral', {
'name' => 'mistral',
'groups' => 'docker',
})
include ::mistral::executor
include ::mistral::cors
include ::mistral::cron_trigger
include ::mistral::config
# ensure TripleO common entrypoints for custom Mistral actions
# are installed before performing the Mistral action population
package {'openstack-tripleo-common': }
Package['openstack-tripleo-common'] ~> Exec['mistral-db-populate']
# If ironic inspector is not running, mistral-db-populate will have invalid
# actions for it.
Class['::ironic::inspector'] ~> Exec['mistral-db-populate']
# db-populate calls inspectorclient, which will use the keystone endpoint to
# check inspector's version. So that's needed before db-populate is executed.
Class['::ironic::keystone::auth_inspector'] ~> Exec['mistral-db-populate']
if str2bool(hiera('enable_ui', true)) {
include ::tripleo::ui
}
if str2bool(hiera('enable_validations', true)) {
include ::tripleo::profile::base::validations
}
include ::zaqar
$zaqar_dsn = match(hiera('zaqar::management::sqlalchemy::uri'), $re_dsn)
class { '::zaqar::db::mysql':
user => $zaqar_dsn[1],
password => $zaqar_dsn[2],
host => $zaqar_dsn[3],
dbname => $zaqar_dsn[4],
allowed_hosts => $allowed_hosts,
}
include ::zaqar::db::sync
include ::zaqar::management::sqlalchemy
include ::zaqar::messaging::swift
include ::zaqar::keystone::auth
include ::zaqar::keystone::auth_websocket
include ::zaqar::transport::websocket
include ::zaqar::transport::wsgi
include ::zaqar::server
include ::zaqar::wsgi::apache
include ::zaqar::config
zaqar::server_instance{ '1':
transport => 'websocket'
}
if str2bool(hiera('enable_cinder', true)) {
$cinder_dsn = match(hiera('cinder::database_connection'), $re_dsn)
class { '::cinder::db::mysql':
user => $cinder_dsn[1],
password => $cinder_dsn[2],
host => $cinder_dsn[3],
dbname => $cinder_dsn[4],
allowed_hosts => $allowed_hosts,
}
include ::cinder::keystone::auth
include ::cinder
include ::cinder::api
include ::cinder::cron::db_purge
include ::cinder::config
include ::cinder::glance
include ::cinder::scheduler
include ::cinder::volume
include ::cinder::wsgi::apache
$cinder_backend_name = hiera('cinder_backend_name')
cinder::backend::iscsi { $cinder_backend_name:
iscsi_ip_address => hiera('cinder_iscsi_address'),
iscsi_helper => 'lioadm',
iscsi_protocol => 'iscsi'
}
include ::cinder::backends
if str2bool(hiera('cinder_enable_test_volume', false)) {
include ::cinder::setup_test_volume
}
}
# firewalld is a dependency of some anaconda packages, so we need to use purge
# to ensure all the things that it might be a dependency for are also
# removed. See LP#1669915
ensure_resource('package', 'firewalld', {
'ensure' => 'purged',
})
ensure_resource('package', 'openstack-selinux')
ensure_resource('package', 'parted')
ensure_resource('package', 'psmisc')
include ::tripleo::profile::base::sshd
# Swift is using only a single replica on the undercloud. Therefore recovering
# from a corrupted or lost object is not possible, and running replicators and
# auditors only wastes resources.
$needless_services = [
'swift-account-auditor',
'swift-account-replicator',
'swift-container-auditor',
'swift-container-replicator',
'swift-object-auditor',
'swift-object-replicator']
Service[$needless_services] {
enable => false,
ensure => stopped,
}
# novajoin install
if str2bool(hiera('enable_novajoin', false)) {
include ::nova::metadata::novajoin::auth
include ::nova::metadata::novajoin::api
}
# Any special handling that need to be done during the upgrade.
if str2bool($::undercloud_upgrade) {
# Noop
}

File diff suppressed because it is too large


@ -1 +0,0 @@
operating-system


@ -1,3 +0,0 @@
{{#os_net_config}}
{{.}}
{{/os_net_config}}


@ -1,46 +0,0 @@
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
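Taken in isolation, the clearing loop above behaves like the following sketch (the OS_* values are hypothetical):

```shell
# Seed two OS_* variables, then run the same clearing loop as the stackrc
OS_USERNAME=admin
OS_PASSWORD=secret
# Note: the awk FS assignment only takes effect from the second input line on;
# in practice shell builtin variables sort before OS_* in `set` output, so the
# OS_* lines are always split on '=' as intended.
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
leftover=$(set | grep '^OS_' || echo none)
echo "$leftover"   # → none
```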
NOVA_VERSION=1.1
export NOVA_VERSION
OS_PASSWORD={{admin_password}}
export OS_PASSWORD
OS_AUTH_TYPE=password
export OS_AUTH_TYPE
{{#service_certificate}}
OS_AUTH_URL=https://{{public_host}}:13000/
PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_AUTH_URL
export PYTHONWARNINGS
{{/service_certificate}}
{{^service_certificate}}
OS_AUTH_URL=http://{{local-ip-wrapped}}:5000/
export OS_AUTH_URL
{{/service_certificate}}
OS_USERNAME=admin
OS_PROJECT_NAME=admin
COMPUTE_API_VERSION=1.1
# 1.34 is the latest API version in Ironic Pike supported by ironicclient
IRONIC_API_VERSION=1.34
OS_BAREMETAL_API_VERSION=$IRONIC_API_VERSION
OS_NO_CACHE=True
OS_CLOUDNAME=undercloud
export OS_USERNAME
export OS_PROJECT_NAME
export COMPUTE_API_VERSION
export IRONIC_API_VERSION
export OS_BAREMETAL_API_VERSION
export OS_NO_CACHE
export OS_CLOUDNAME
OS_IDENTITY_API_VERSION='3'
export OS_IDENTITY_API_VERSION
OS_PROJECT_DOMAIN_NAME='Default'
export OS_PROJECT_DOMAIN_NAME
OS_USER_DOMAIN_NAME='Default'
export OS_USER_DOMAIN_NAME
# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
export PS1=${PS1:-""}
export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
export CLOUDPROMPT_ENABLED=1
fi


@ -1,24 +0,0 @@
UNDERCLOUD_ADMIN_PASSWORD=$(sudo hiera admin_password)
UNDERCLOUD_ADMIN_TOKEN=$(sudo hiera keystone::admin_token)
UNDERCLOUD_CEILOMETER_METERING_SECRET=$(sudo hiera ceilometer::metering_secret)
UNDERCLOUD_CEILOMETER_PASSWORD=$(sudo hiera ceilometer::keystone::authtoken::password)
UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=$(sudo hiera snmpd_readonly_user_password)
UNDERCLOUD_CEILOMETER_SNMPD_USER=$(sudo hiera snmpd_readonly_user_name)
UNDERCLOUD_DB_PASSWORD=$(sudo hiera admin_password)
UNDERCLOUD_GLANCE_PASSWORD=$(sudo hiera glance::api::keystone_password)
UNDERCLOUD_HAPROXY_STATS_PASSWORD=$(sudo hiera tripleo::haproxy::haproxy_stats_password)
UNDERCLOUD_HEAT_ENCRYPTION_KEY=$(sudo hiera heat::engine::auth_encryption_key)
UNDERCLOUD_HEAT_PASSWORD=$(sudo hiera heat::keystone_password)
UNDERCLOUD_HEAT_STACK_DOMAIN_ADMIN_PASSWORD=$(sudo hiera heat_stack_domain_admin_password)
UNDERCLOUD_HORIZON_SECRET_KEY=$(sudo hiera horizon_secret_key)
UNDERCLOUD_IRONIC_PASSWORD=$(sudo hiera ironic::api::authtoken::password)
UNDERCLOUD_NEUTRON_PASSWORD=$(sudo hiera neutron::server::auth_password)
UNDERCLOUD_NOVA_PASSWORD=$(sudo hiera nova::keystone::authtoken::password)
UNDERCLOUD_RABBIT_COOKIE=$(sudo hiera rabbit_cookie)
UNDERCLOUD_RABBIT_PASSWORD=$(sudo hiera rabbit_password)
UNDERCLOUD_RABBIT_USERNAME=$(sudo hiera rabbit_username)
UNDERCLOUD_SWIFT_HASH_SUFFIX=$(sudo hiera swift::swift_hash_suffix)
UNDERCLOUD_SWIFT_PASSWORD=$(sudo hiera swift::proxy::authtoken::admin_password)
UNDERCLOUD_MISTRAL_PASSWORD=$(sudo hiera mistral::admin_password)
UNDERCLOUD_ZAQAR_PASSWORD=$(sudo hiera zaqar::keystone::authtoken::password)
UNDERCLOUD_CINDER_PASSWORD=$(sudo hiera cinder::keystone::authtoken::password)


@ -1,29 +0,0 @@
# In case this script crashed or was interrupted earlier, flush, unlink and
# delete the temp chain.
IPTCOMMAND=iptables
if [[ {{local-ip}} =~ : ]] ; then
IPTCOMMAND=ip6tables
fi
$IPTCOMMAND -w -t nat -F BOOTSTACK_MASQ_NEW || true
$IPTCOMMAND -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ_NEW || true
$IPTCOMMAND -w -t nat -X BOOTSTACK_MASQ_NEW || true
$IPTCOMMAND -w -t nat -N BOOTSTACK_MASQ_NEW
# Build the chain we want.
{{#masquerade_networks}}
NETWORK={{.}}
NETWORKS={{#masquerade_networks}}{{.}},{{/masquerade_networks}}
# Shell substitution to remove the trailing comma
NETWORKS=${NETWORKS%?}
$IPTCOMMAND -w -t nat -A BOOTSTACK_MASQ_NEW -s $NETWORK -d $NETWORKS -j RETURN
$IPTCOMMAND -w -t nat -A BOOTSTACK_MASQ_NEW -s $NETWORK -j MASQUERADE
{{/masquerade_networks}}
# Link it in.
$IPTCOMMAND -w -t nat -I POSTROUTING -j BOOTSTACK_MASQ_NEW
# Delete the old chain if present.
$IPTCOMMAND -w -t nat -F BOOTSTACK_MASQ || true
$IPTCOMMAND -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ || true
$IPTCOMMAND -w -t nat -X BOOTSTACK_MASQ || true
# Rename the new chain into permanence.
$IPTCOMMAND -w -t nat -E BOOTSTACK_MASQ_NEW BOOTSTACK_MASQ
# remove forwarding rule (fixes bug 1183099)
$IPTCOMMAND -w -D FORWARD -j REJECT --reject-with icmp-host-prohibited || true
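The ${NETWORKS%?} substitution used above trims the trailing comma that the mustache loop leaves behind; a minimal sketch with hypothetical networks:

```shell
# Mimics the comma-joined list produced by the {{#masquerade_networks}} loop
NETWORKS="10.0.0.0/24,192.168.24.0/24,"
# %? removes the final character, i.e. the trailing comma
NETWORKS=${NETWORKS%?}
echo "$NETWORKS"   # → 10.0.0.0/24,192.168.24.0/24
```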


@ -1,11 +0,0 @@
#!/bin/bash
set -eux
if systemctl is-enabled keepalived; then
# This needs to be run after os-net-config, since os-net-config potentially
    # can restart network interfaces, which would affect VIPs controlled by
# keepalived. So don't just move this up without knowing the consequences.
# You have been warned.
systemctl reload keepalived
fi


@ -1,41 +0,0 @@
#!/bin/bash
set -eux
RULES_SCRIPT=/var/opt/undercloud-stack/masquerade
. $RULES_SCRIPT
iptables-save > /etc/sysconfig/iptables
# We are specifically running the following commands after the
# iptables rules to ensure the persisted file does not contain any
# ephemeral neutron rules. Neutron assumes the iptables rules are not
# persisted, so it may cause an issue if the rules are loaded on boot
# (or via iptables restart). If an operator needs to reload iptables
# for any reason, they may need to manually reload the appropriate
# neutron agent to restore these iptables rules.
# https://bugzilla.redhat.com/show_bug.cgi?id=1541528
if /bin/test -f /etc/sysconfig/iptables && /bin/grep -q neutron- /etc/sysconfig/iptables
then
/bin/sed -i /neutron-/d /etc/sysconfig/iptables
fi
if /bin/test -f /etc/sysconfig/ip6tables && /bin/grep -q neutron- /etc/sysconfig/ip6tables
then
/bin/sed -i /neutron-/d /etc/sysconfig/ip6tables
fi
# Do not persist ephemeral firewall rules managed by ironic-inspector
# pxe_filter 'iptables' driver.
# https://bugs.launchpad.net/tripleo/+bug/1765700
if /bin/test -f /etc/sysconfig/iptables && /bin/grep -v "\-m comment \--comment" /etc/sysconfig/iptables | /bin/grep -q ironic-inspector
then
/bin/sed -i "/-m comment --comment.*ironic-inspector/p;/ironic-inspector/d" /etc/sysconfig/iptables
fi
if /bin/test -f /etc/sysconfig/ip6tables && /bin/grep -v "\-m comment \--comment" /etc/sysconfig/ip6tables | /bin/grep -q ironic-inspector
then
/bin/sed -i "/-m comment --comment.*ironic-inspector/p;/ironic-inspector/d" /etc/sysconfig/ip6tables
fi
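The sed expression used above relies on a print-then-delete idiom: rules carrying an "-m comment" marker are printed once by the first command and then suppressed by the delete, while bare ironic-inspector rules are simply dropped. A sketch against hypothetical iptables-save lines:

```shell
# Three hypothetical rules: one inspector rule with a comment marker,
# one bare inspector rule, one unrelated rule
rules='-A INPUT -m comment --comment "ironic-inspector" -j ACCEPT
-A ironic-inspector-filter -j DROP
-A INPUT -p tcp --dport 22 -j ACCEPT'
# Commented inspector rule survives (printed once); bare inspector rule is removed
kept=$(printf '%s\n' "$rules" | sed "/-m comment --comment.*ironic-inspector/p;/ironic-inspector/d")
echo "$kept"
```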


@ -1,44 +0,0 @@
#!/bin/bash
set -eux
source /root/tripleo-undercloud-passwords
source /root/stackrc
INSTACK_ROOT=${INSTACK_ROOT:-""}
export INSTACK_ROOT
if [ -n "$INSTACK_ROOT" ]; then
PATH=$PATH:$INSTACK_ROOT/instack-undercloud/scripts
export PATH
fi
if [ ! -f /root/.ssh/authorized_keys ]; then
sudo mkdir -p /root/.ssh
    sudo chmod 700 /root/.ssh/
sudo touch /root/.ssh/authorized_keys
sudo chmod 600 /root/.ssh/authorized_keys
fi
if [ ! -f /root/.ssh/id_rsa ]; then
ssh-keygen -b 1024 -N '' -f /root/.ssh/id_rsa
fi
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
if [ -e /usr/sbin/getenforce ]; then
if [ "$(getenforce)" == "Enforcing" ]; then
set +e
selinux_wrong_permission="$(find /root/.ssh/ -exec ls -lZ {} \; | grep -v 'ssh_home_t')"
set -e
if [ -n "${selinux_wrong_permission}" ]; then
semanage fcontext -a -t ssh_home_t '/root/.ssh(/.*)?'
restorecon -R /root/.ssh/
fi
fi
fi
# Disable nova quotas
openstack quota set --cores -1 --instances -1 --ram -1 $(openstack project show admin | awk '$2=="id" {print $4}')
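The awk filter above picks the value out of the "id" row of the client's table output; a sketch against hypothetical output:

```shell
# Hypothetical `openstack project show admin` table output
table='| Field | Value    |
| id    | 3a4bc9de |
| name  | admin    |'
# For the id row the fields are: $1="|" $2="id" $3="|" $4=<the id>
project_id=$(printf '%s\n' "$table" | awk '$2=="id" {print $4}')
echo "$project_id"   # → 3a4bc9de
```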
# instack-prepare-for-overcloud
rm -rf $HOME/.novaclient


@ -1,66 +0,0 @@
utility-image:
imagefactory --debug base_image \
--file-parameter install_script \
utility_image.ks utility_image.tdl
input-image:
imagefactory --debug base_image \
--file-parameter install_script \
input_image.ks input_image.tdl
overcloud-images: overcloud-control overcloud-compute overcloud-cinder-volume overcloud-swift-storage deploy-ramdisk-ironic discovery-ramdisk
overcloud-control:
imagefactory --debug \
target_image \
--id $(INPUT_IMAGE_ID) \
--parameter utility_image $(UTILITY_IMAGE_ID) \
--file-parameter utility_customizations dib_overcloud_control.tdl \
--parameter results_location "/overcloud-control.tar" indirection
tar -x -f $$(ls -1tr /var/lib/imagefactory/storage/*.body | tail -n 1)
overcloud-compute:
imagefactory --debug \
target_image \
--id $(INPUT_IMAGE_ID) \
--parameter utility_image $(UTILITY_IMAGE_ID) \
--file-parameter utility_customizations dib_overcloud_compute.tdl \
--parameter results_location "/overcloud-compute.tar" indirection
tar -x -f $$(ls -1tr /var/lib/imagefactory/storage/*.body | tail -n 1)
overcloud-cinder-volume:
imagefactory --debug \
target_image \
--id $(INPUT_IMAGE_ID) \
--parameter utility_image $(UTILITY_IMAGE_ID) \
--file-parameter utility_customizations dib_overcloud_cinder_volume.tdl \
--parameter results_location "/overcloud-cinder-volume.tar" indirection
tar -x -f $$(ls -1tr /var/lib/imagefactory/storage/*.body | tail -n 1)
overcloud-swift-storage:
imagefactory --debug \
target_image \
--id $(INPUT_IMAGE_ID) \
--parameter utility_image $(UTILITY_IMAGE_ID) \
--file-parameter utility_customizations dib_overcloud_swift_storage.tdl \
--parameter results_location "/overcloud-swift-storage.tar" indirection
tar -x -f $$(ls -1tr /var/lib/imagefactory/storage/*.body | tail -n 1)
deploy-ramdisk-ironic:
imagefactory --debug \
target_image \
--id $(INPUT_IMAGE_ID) \
--parameter utility_image $(UTILITY_IMAGE_ID) \
--file-parameter utility_customizations dib_deploy_ramdisk_ironic.tdl \
--parameter results_location "/deploy-ramdisk-ironic.tar" indirection
tar -x -f $$(ls -1tr /var/lib/imagefactory/storage/*.body | tail -n 1)
discovery-ramdisk:
imagefactory --debug \
target_image \
--id $(INPUT_IMAGE_ID) \
--parameter utility_image $(UTILITY_IMAGE_ID) \
--file-parameter utility_customizations dib_discovery_ramdisk.tdl \
--parameter results_location "/discovery-ramdisk.tar" indirection
tar -x -f $$(ls -1tr /var/lib/imagefactory/storage/*.body | tail -n 1)


@ -1,12 +0,0 @@
<template>
<commands>
<command name='mount'>mount /dev/vdb1 /mnt</command>
<command name='backup'>cp /etc/sudoers /etc/sudoers_backup</command>
<command name='pty'>sed 's/.*requiretty//g' /etc/sudoers_backup > /etc/sudoers</command>
<command name='convert'>qemu-img convert -O qcow2 /mnt/input_image.raw /mnt/input_image.qcow2</command>
<command name="localimage">export DIB_LOCAL_IMAGE=/mnt/input_image.qcow2
instack-build-images deploy-ramdisk
</command>
<command name="tar">tar cf /mnt/deploy-ramdisk-ironic.tar deploy-ramdisk-ironic.initramfs deploy-ramdisk-ironic.kernel</command>
</commands>
</template>


@ -1,12 +0,0 @@
<template>
<commands>
<command name='mount'>mount /dev/vdb1 /mnt</command>
<command name='backup'>cp /etc/sudoers /etc/sudoers_backup</command>
<command name='pty'>sed 's/.*requiretty//g' /etc/sudoers_backup > /etc/sudoers</command>
<command name='convert'>qemu-img convert -O qcow2 /mnt/input_image.raw /mnt/input_image.qcow2</command>
<command name="localimage">export DIB_LOCAL_IMAGE=/mnt/input_image.qcow2
instack-build-images discovery-ramdisk
</command>
<command name="tar">tar cf /mnt/discovery-ramdisk.tar discovery-ramdisk.initramfs discovery-ramdisk.kernel</command>
</commands>
</template>


@ -1,12 +0,0 @@
<template>
<commands>
<command name='mount'>mount /dev/vdb1 /mnt</command>
<command name='backup'>cp /etc/sudoers /etc/sudoers_backup</command>
<command name='pty'>sed 's/.*requiretty//g' /etc/sudoers_backup > /etc/sudoers</command>
<command name='convert'>qemu-img convert -O qcow2 /mnt/input_image.raw /mnt/input_image.qcow2</command>
<command name="localimage">export DIB_LOCAL_IMAGE=/mnt/input_image.qcow2
instack-build-images overcloud-cinder-volume
</command>
<command name="tar">tar cf /mnt/overcloud-cinder-volume.tar overcloud-cinder-volume.qcow2 overcloud-cinder-volume.vmlinuz overcloud-cinder-volume.initrd</command>
</commands>
</template>


@ -1,12 +0,0 @@
<template>
<commands>
<command name='mount'>mount /dev/vdb1 /mnt</command>
<command name='backup'>cp /etc/sudoers /etc/sudoers_backup</command>
<command name='pty'>sed 's/.*requiretty//g' /etc/sudoers_backup > /etc/sudoers</command>
<command name='convert'>qemu-img convert -O qcow2 /mnt/input_image.raw /mnt/input_image.qcow2</command>
<command name="localimage">export DIB_LOCAL_IMAGE=/mnt/input_image.qcow2
instack-build-images overcloud-compute
</command>
<command name="tar">tar cf /mnt/overcloud-compute.tar overcloud-compute.qcow2 overcloud-compute.vmlinuz overcloud-compute.initrd</command>
</commands>
</template>

View File

@ -1,12 +0,0 @@
<template>
<commands>
<command name='mount'>mount /dev/vdb1 /mnt</command>
<command name='backup'>cp /etc/sudoers /etc/sudoers_backup</command>
<command name='pty'>sed 's/.*requiretty//g' /etc/sudoers_backup > /etc/sudoers</command>
<command name='convert'>qemu-img convert -O qcow2 /mnt/input_image.raw /mnt/input_image.qcow2</command>
<command name="localimage">export DIB_LOCAL_IMAGE=/mnt/input_image.qcow2
instack-build-images overcloud-control
</command>
<command name="tar">tar cf /mnt/overcloud-control.tar overcloud-control.qcow2 overcloud-control.vmlinuz overcloud-control.initrd</command>
</commands>
</template>

View File

@ -1,12 +0,0 @@
<template>
<commands>
<command name='mount'>mount /dev/vdb1 /mnt</command>
<command name='backup'>cp /etc/sudoers /etc/sudoers_backup</command>
<command name='pty'>sed 's/.*requiretty//g' /etc/sudoers_backup > /etc/sudoers</command>
<command name='convert'>qemu-img convert -O qcow2 /mnt/input_image.raw /mnt/input_image.qcow2</command>
<command name="localimage">export DIB_LOCAL_IMAGE=/mnt/input_image.qcow2
instack-build-images overcloud-swift-storage
</command>
<command name="tar">tar cf /mnt/overcloud-swift-storage.tar overcloud-swift-storage.qcow2 overcloud-swift-storage.vmlinuz overcloud-swift-storage.initrd</command>
</commands>
</template>

View File

@ -1,26 +0,0 @@
url --url=http://download.eng.brq.redhat.com/pub/fedora/releases/20/Fedora/x86_64/os/
# Without the Everything repo, we cannot install cloud-init
repo --name="fedora-everything" --baseurl=http://download.eng.brq.redhat.com/pub/fedora/releases/20/Everything/x86_64/os/
install
text
keyboard us
lang en_US.UTF-8
skipx
network --device eth0 --bootproto dhcp
rootpw ROOTPW
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --append="console=tty0 console=ttyS0,115200"
zerombr
clearpart --all --drives=vda
part / --fstype="ext4" --size=3000
reboot
%packages
@core
cloud-init
tar
%end

View File

@ -1,12 +0,0 @@
<template>
<name>f20-jeos</name>
<os>
<name>Fedora</name>
<version>20</version>
<arch>x86_64</arch>
<install type='url'>
<url>http://download.eng.brq.redhat.com/pub/fedora/releases/20/Fedora/x86_64/os/</url>
</install>
</os>
<description>Fedora 20 JEOS Image</description>
</template>

View File

@ -1,41 +0,0 @@
url --url=http://download.eng.brq.redhat.com/pub/fedora/releases/20/Fedora/x86_64/os/
# Without the Everything repo, we cannot install cloud-init
repo --name="fedora-everything" --baseurl=http://download.eng.brq.redhat.com/pub/fedora/releases/20/Everything/x86_64/os/
repo --name="updates" --baseurl=http://download.eng.brq.redhat.com/pub/fedora/linux/updates/20/x86_64/
repo --name=openstack --baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-20/
# Uncomment the following line to use the copr repository
# repo --name=copr-openstack-m --baseurl=http://copr-be.cloud.fedoraproject.org/results/slagle/openstack-m/fedora-$releasever-$basearch/
install
text
keyboard us
lang en_US.UTF-8
skipx
network --device eth0 --bootproto dhcp
rootpw ROOTPW
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --append="console=tty0 console=ttyS0,115200"
zerombr
clearpart --all --drives=vda
part biosboot --fstype=biosboot --size=1
part /boot --fstype ext4 --size=200 --ondisk=vda
part pv.2 --size=1 --grow --ondisk=vda
volgroup VolGroup00 --pesize=32768 pv.2
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=768 --grow --maxsize=1536
logvol / --fstype ext4 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
reboot
%packages
@core
qemu-img
instack-undercloud
git
%end

View File

@ -1,15 +0,0 @@
<template>
<name>f20-jeos</name>
<os>
<name>Fedora</name>
<version>20</version>
<arch>x86_64</arch>
<install type='url'>
<url>http://download.eng.brq.redhat.com/pub/fedora/releases/20/Fedora/x86_64/os/</url>
</install>
</os>
<disk>
<size>40</size>
</disk>
<description>Fedora 20 JEOS Image</description>
</template>

View File

@ -1,18 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo('instack_undercloud')

File diff suppressed because it is too large

View File

@ -1,274 +0,0 @@
# Copyright 2015 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import fixture as config_fixture
from oslo_config import cfg
from oslotest import base
from instack_undercloud import undercloud
from instack_undercloud import validator
class TestValidator(base.BaseTestCase):
def setUp(self):
super(TestValidator, self).setUp()
self.conf = self.useFixture(config_fixture.Config())
# ctlplane-subnet - config group options
self.grp0 = cfg.OptGroup(name='ctlplane-subnet',
title='ctlplane-subnet')
self.opts = [cfg.StrOpt('cidr'),
cfg.StrOpt('dhcp_start'),
cfg.StrOpt('dhcp_end'),
cfg.StrOpt('inspection_iprange'),
cfg.StrOpt('gateway'),
cfg.BoolOpt('masquerade')]
self.conf.register_opts(self.opts, group=self.grp0)
self.grp1 = cfg.OptGroup(name='subnet1', title='subnet1')
self.conf.config(cidr='192.168.24.0/24',
dhcp_start='192.168.24.5', dhcp_end='192.168.24.24',
inspection_iprange='192.168.24.100,192.168.24.120',
gateway='192.168.24.1', masquerade=True,
group='ctlplane-subnet')
@mock.patch('netifaces.interfaces')
def test_validation_passes(self, ifaces_mock):
ifaces_mock.return_value = ['eth1']
undercloud._validate_network()
def test_fail_on_local_ip(self):
self.conf.config(local_ip='193.0.2.1/24')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_network_gateway(self):
self.conf.config(gateway='193.0.2.1', group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_dhcp_start(self):
self.conf.config(dhcp_start='193.0.2.10', group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_dhcp_end(self):
self.conf.config(dhcp_end='193.0.2.10', group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_inspection_start(self):
self.conf.config(inspection_iprange='193.0.2.100,192.168.24.120',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_inspection_end(self):
self.conf.config(inspection_iprange='192.168.24.100,193.0.2.120',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_dhcp_order(self):
self.conf.config(dhcp_start='192.168.24.100', dhcp_end='192.168.24.10',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_dhcp_equal(self):
self.conf.config(dhcp_start='192.168.24.100',
dhcp_end='192.168.24.100', group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_inspection_order(self):
self.conf.config(inspection_iprange='192.168.24.120,192.168.24.100',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_inspection_equal(self):
self.conf.config(inspection_iprange='192.168.24.120,192.168.24.120',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_range_overlap_1(self):
self.conf.config(dhcp_start='192.168.24.10', dhcp_end='192.168.24.100',
inspection_iprange='192.168.24.90,192.168.24.110',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_range_overlap_2(self):
self.conf.config(dhcp_start='192.168.24.100',
dhcp_end='192.168.24.120',
inspection_iprange='192.168.24.90,192.168.24.110',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_range_overlap_3(self):
self.conf.config(dhcp_start='192.168.24.20', dhcp_end='192.168.24.90',
inspection_iprange='192.168.24.10,192.168.24.100',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_range_overlap_4(self):
self.conf.config(dhcp_start='192.168.24.10', dhcp_end='192.168.24.100',
inspection_iprange='192.168.24.20,192.168.24.90',
group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_invalid_local_ip(self):
self.conf.config(local_ip='192.168.24.1')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_unqualified_hostname(self):
self.conf.config(undercloud_hostname='undercloud')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_no_alter_params(self):
self.conf.config(cidr='192.168.24.0/24', group='ctlplane-subnet')
params = {opt.name: self.conf.conf[opt.name]
for opt in undercloud._opts}
params.update(
{opt.name: self.conf.conf.get('ctlplane-subnet')[opt.name]
for opt in undercloud._subnets_opts})
save_params = dict(params)
validator.validate_config(params, lambda x: None)
self.assertEqual(save_params, params)
@mock.patch('netifaces.interfaces')
def test_valid_undercloud_nameserver_passes(self, ifaces_mock):
ifaces_mock.return_value = ['eth1']
self.conf.config(undercloud_nameservers=['192.168.24.4',
'192.168.24.5'])
undercloud._validate_network()
def test_invalid_undercloud_nameserver_fails(self):
self.conf.config(undercloud_nameservers=['Iamthewalrus'])
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_invalid_public_host(self):
self.conf.config(undercloud_public_host='192.0.3.2',
undercloud_service_certificate='foo.pem',
enable_ui=False)
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
def test_fail_on_invalid_admin_host(self):
self.conf.config(undercloud_admin_host='192.0.3.3',
generate_service_certificate=True,
enable_ui=False)
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
@mock.patch('netifaces.interfaces')
def test_ssl_hosts_allowed(self, ifaces_mock):
ifaces_mock.return_value = ['eth1']
self.conf.config(undercloud_public_host='public.domain',
undercloud_admin_host='admin.domain',
undercloud_service_certificate='foo.pem',
enable_ui=False)
undercloud._validate_network()
@mock.patch('netifaces.interfaces')
def test_allow_all_with_ui(self, ifaces_mock):
ifaces_mock.return_value = ['eth1']
self.conf.config(undercloud_admin_host='10.0.0.10',
generate_service_certificate=True,
enable_ui=True)
undercloud._validate_network()
@mock.patch('netifaces.interfaces')
def test_fail_on_invalid_ip(self, ifaces_mock):
ifaces_mock.return_value = ['eth1']
self.conf.config(dhcp_start='foo.bar', group='ctlplane-subnet')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
@mock.patch('netifaces.interfaces')
def test_validate_interface_exists(self, ifaces_mock):
ifaces_mock.return_value = ['eth0', 'eth1']
self.conf.config(local_interface='eth0')
undercloud._validate_network()
@mock.patch('netifaces.interfaces')
def test_fail_validate_interface_missing(self, ifaces_mock):
ifaces_mock.return_value = ['eth0', 'eth1']
self.conf.config(local_interface='em1')
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
@mock.patch('netifaces.interfaces')
def test_validate_interface_with_net_config_override(self, ifaces_mock):
ifaces_mock.return_value = ['eth0', 'eth1']
self.conf.config(local_interface='em2', net_config_override='foo')
undercloud._validate_network()
def test_validate_additional_architectures_ok(self):
self.conf.config(additional_architectures=['ppc64le'],
ipxe_enabled=False)
undercloud._validate_architecure_options()
def test_validate_additional_architectures_bad_arch(self):
self.conf.config(additional_architectures=['ppc64le', 'INVALID'],
ipxe_enabled=False)
self.assertRaises(validator.FailedValidation,
undercloud._validate_architecure_options)
def test_validate_additional_architectures_ipxe_fail(self):
self.conf.config(additional_architectures=['ppc64le'],
ipxe_enabled=True)
self.assertRaises(validator.FailedValidation,
undercloud._validate_architecure_options)
@mock.patch('netifaces.interfaces')
def test_validate_routed_networks_not_enabled_pass(self, ifaces_mock):
ifaces_mock.return_value = ['eth0', 'eth1']
self.conf.config(enable_routed_networks=False)
self.conf.config(subnets=['ctlplane-subnet'])
undercloud._validate_network()
@mock.patch('netifaces.interfaces')
def test_validate_routed_networks_not_enabled_fail(self, ifaces_mock):
ifaces_mock.return_value = ['eth0', 'eth1']
self.conf.config(enable_routed_networks=False)
self.conf.config(subnets=['ctlplane-subnet', 'subnet1'])
self.assertRaises(validator.FailedValidation,
undercloud._validate_network)
@mock.patch('netifaces.interfaces')
def test_validate_routed_networks_enabled_pass(self, ifaces_mock):
ifaces_mock.return_value = ['eth0', 'eth1']
self.conf.config(enable_routed_networks=True)
self.conf.config(subnets=['ctlplane-subnet', 'subnet1'])
self.conf.register_opts(self.opts, group=self.grp1)
self.conf.config(cidr='192.168.24.0/24',
dhcp_start='192.168.24.5', dhcp_end='192.168.24.24',
inspection_iprange='192.168.24.100,192.168.24.120',
gateway='192.168.24.1', masquerade=True,
group='ctlplane-subnet')
self.conf.config(cidr='192.168.10.0/24', dhcp_start='192.168.10.10',
dhcp_end='192.168.10.99',
inspection_iprange='192.168.10.100,192.168.10.189',
gateway='192.168.10.254', masquerade=True,
group='subnet1')
undercloud._validate_network()
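Many of the failures these tests exercise come down to a CIDR-membership check: an address such as a dhcp_start of 193.0.2.10 is rejected because it is outside the 192.168.24.0/24 ctlplane subnet. A minimal sketch of that check, using the stdlib ipaddress module in place of netaddr (the helper name is ours, not from the removed code):

```python
import ipaddress

def addr_in_cidr(addr, cidr):
    # Membership test analogous to the validator's validate_addr_in_cidr.
    return ipaddress.ip_address(addr) in ipaddress.ip_network(cidr)

net = '192.168.24.0/24'
print(addr_in_cidr('192.168.24.5', net))  # True  - dhcp_start in the default config
print(addr_in_cidr('193.0.2.10', net))    # False - out-of-subnet value the tests reject
```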

File diff suppressed because it is too large

View File

@ -1,211 +0,0 @@
# Copyright 2015 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
import netifaces
import six
SUPPORTED_ARCHITECTURES = ['ppc64le']
class FailedValidation(Exception):
pass
def validate_config(params, error_callback):
"""Validate an undercloud configuration described by params
:param params: A dict containing all of the undercloud.conf option
names mapped to their proposed values.
:param error_callback: A callback function that should be used to handle
errors. The function must accept a single parameter, which will be
a string describing the error.
"""
local_params = dict(params)
_validate_value_formats(local_params, error_callback)
_validate_in_cidr(local_params, error_callback)
_validate_dhcp_range(local_params, error_callback)
_validate_inspection_range(local_params, error_callback)
_validate_no_overlap(local_params, error_callback)
_validate_ips(local_params, error_callback)
_validate_interface_exists(local_params, error_callback)
def _validate_ppc64le_exclusive_opts(params, error_callback):
if 'ppc64le' in params['additional_architectures']:
if 'ipxe_enabled' in params and params['ipxe_enabled']:
error_callback('Currently iPXE boot isn\'t supported with '
'ppc64le systems but is enabled')
def _validate_additional_architectures(params, error_callback):
for arch in params['additional_architectures']:
if arch not in SUPPORTED_ARCHITECTURES:
error_callback('%s "%s" must be a supported architecture: %s' %
('additional_architectures', arch,
' '.join(SUPPORTED_ARCHITECTURES)))
def _validate_ips(params, error_callback):
def is_ip(value, param_name):
try:
netaddr.IPAddress(value)
except netaddr.core.AddrFormatError:
error_callback(
'%s "%s" must be a valid IP address' % (param_name, value))
for ip in params['undercloud_nameservers']:
is_ip(ip, 'undercloud_nameservers')
def _validate_value_formats(params, error_callback):
"""Validate format of some values
Certain values have a specific format that must be maintained in order to
work properly. For example, local_ip must be in CIDR form, and the
hostname must be a FQDN.
"""
for param in ('local_ip', 'cidr'):
if param in params:
try:
ip_net = netaddr.IPNetwork(params[param])
if (ip_net.prefixlen == 32) or (ip_net.prefixlen == 0):
message = ('"%s" "%s" not valid: Invalid netmask.' %
(param, params[param]))
error_callback(message)
# If IPv6 the ctlplane network uses the EUI-64 address format,
# which requires the prefix to be /64
if ip_net.version == 6 and ip_net.prefixlen != 64:
message = ('"%s" "%s" not valid: '
'Prefix must be 64 for IPv6.' %
(param, params[param]))
error_callback(message)
except netaddr.core.AddrFormatError as e:
message = ('"%s" "%s" not valid: "%s" '
'Value must be in CIDR format.' %
(param, params[param], str(e)))
error_callback(message)
except TypeError as e:
message = ('"%s" "%s" invalid type: "%s" ' %
(param, params[param], str(e)))
error_callback(message)
if 'undercloud_hostname' in params:
hostname = params['undercloud_hostname']
if hostname is not None and '.' not in hostname:
message = 'Hostname "%s" is not fully qualified.' % hostname
error_callback(message)
def _validate_in_cidr(params, error_callback):
cidr = netaddr.IPNetwork(params['cidr'])
def validate_addr_in_cidr(params, name, pretty_name=None, require_ip=True):
try:
if netaddr.IPAddress(params[name]) not in cidr:
message = ('%s "%s" not in defined CIDR "%s"' %
(pretty_name or name, params[name], cidr))
error_callback(message)
except netaddr.core.AddrFormatError:
if require_ip:
message = 'Invalid IP address: %s' % params[name]
error_callback(message)
# NOTE(hjensas): Only check certs etc if not validating routed subnets
if 'local_ip' in params:
params['just_local_ip'] = params['local_ip'].split('/')[0]
validate_addr_in_cidr(params, 'just_local_ip', 'local_ip')
# NOTE(bnemec): The ui needs to be externally accessible, which means
# in many cases we can't have the public vip on the provisioning
# network. In that case users are on their own to ensure they've picked
# valid values for the VIP hosts.
if ((params['undercloud_service_certificate'] or
params['generate_service_certificate']) and
not params['enable_ui']):
validate_addr_in_cidr(params, 'undercloud_public_host',
require_ip=False)
validate_addr_in_cidr(params, 'undercloud_admin_host',
require_ip=False)
# undercloud.conf uses inspection_iprange, the configuration wizard
# tool passes the values separately.
if 'inspection_iprange' in params:
inspection_iprange = params['inspection_iprange'].split(',')
params['inspection_start'] = inspection_iprange[0]
params['inspection_end'] = inspection_iprange[1]
validate_addr_in_cidr(params, 'gateway')
validate_addr_in_cidr(params, 'dhcp_start')
validate_addr_in_cidr(params, 'dhcp_end')
validate_addr_in_cidr(params, 'inspection_start', 'Inspection range start')
validate_addr_in_cidr(params, 'inspection_end', 'Inspection range end')
def _validate_dhcp_range(params, error_callback):
dhcp_start = netaddr.IPAddress(params['dhcp_start'])
dhcp_end = netaddr.IPAddress(params['dhcp_end'])
if dhcp_start >= dhcp_end:
message = ('Invalid dhcp range specified, dhcp_start "%s" does '
'not come before dhcp_end "%s"' %
(dhcp_start, dhcp_end))
error_callback(message)
def _validate_inspection_range(params, error_callback):
inspection_start = netaddr.IPAddress(params['inspection_start'])
inspection_end = netaddr.IPAddress(params['inspection_end'])
if inspection_start >= inspection_end:
message = ('Invalid inspection range specified, inspection_start '
'"%s" does not come before inspection_end "%s"' %
(inspection_start, inspection_end))
error_callback(message)
def _validate_no_overlap(params, error_callback):
"""Validate the provisioning and inspection ip ranges do not overlap"""
dhcp_set = netaddr.IPSet(netaddr.IPRange(params['dhcp_start'],
params['dhcp_end']))
inspection_set = netaddr.IPSet(netaddr.IPRange(params['inspection_start'],
params['inspection_end']))
# If there is any intersection of the two sets then we have a problem
if dhcp_set & inspection_set:
message = ('Inspection DHCP range "%s-%s" overlaps provisioning '
'DHCP range "%s-%s".' %
(params['inspection_start'], params['inspection_end'],
params['dhcp_start'], params['dhcp_end']))
error_callback(message)
def _validate_interface_exists(params, error_callback):
"""Validate the provided local interface exists"""
local_interface = params['local_interface']
net_override = params['net_config_override']
if not net_override and local_interface not in netifaces.interfaces():
message = ('Invalid local_interface specified. %s is not available.' %
local_interface)
error_callback(message)
def _validate_no_missing_subnet_param(name, params, error_callback):
if None in six.viewvalues(params):
missing = [k for k, v in six.iteritems(params) if v is None]
message = 'subnet %s. Missing option(s): %s' % (name, missing)
error_callback(message)
def validate_subnet(name, params, error_callback):
local_params = dict(params)
_validate_no_missing_subnet_param(name, params, error_callback)
_validate_value_formats(local_params, error_callback)
_validate_in_cidr(local_params, error_callback)
_validate_dhcp_range(local_params, error_callback)
_validate_inspection_range(local_params, error_callback)
_validate_no_overlap(local_params, error_callback)
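The overlap validation above builds two netaddr IPSets and checks their intersection, which reduces to an interval-intersection test on the two inclusive ranges. A stdlib sketch of the same logic (ipaddress standing in for netaddr; the sample values are taken from the unit tests):

```python
import ipaddress

def ranges_overlap(dhcp_start, dhcp_end, insp_start, insp_end):
    # Inclusive ranges intersect iff each starts at or before the other's end.
    a0, a1 = (int(ipaddress.ip_address(ip)) for ip in (dhcp_start, dhcp_end))
    b0, b1 = (int(ipaddress.ip_address(ip)) for ip in (insp_start, insp_end))
    return a0 <= b1 and b0 <= a1

# Values from test_fail_on_range_overlap_1 and the passing default config:
print(ranges_overlap('192.168.24.10', '192.168.24.100',
                     '192.168.24.90', '192.168.24.110'))   # True  - overlapping
print(ranges_overlap('192.168.24.5', '192.168.24.24',
                     '192.168.24.100', '192.168.24.120'))  # False - disjoint
```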

View File

@ -1,35 +0,0 @@
[
{
"name": "Installation",
"element": [
"install-types",
"undercloud-install",
"enable-packages-install",
"element-manifest",
"puppet-stack-config"
],
"hook": [
"extra-data",
"pre-install",
"install",
"post-install"
],
"exclude-element": [
"pip-and-virtualenv",
"epel",
"os-collect-config",
"svc-map",
"pip-manifest",
"package-installs",
"pkg-map",
"puppet",
"cache-url",
"dib-python",
"os-svc-install",
"install-bin"
],
"blacklist": [
"99-refresh-completed"
]
}
]

View File

@ -1,35 +0,0 @@
[
{
"name": "Installation",
"element": [
"install-types",
"undercloud-install",
"enable-packages-install",
"element-manifest",
"puppet-stack-config"
],
"hook": [
"extra-data",
"pre-install",
"install",
"post-install"
],
"exclude-element": [
"pip-and-virtualenv",
"epel",
"os-collect-config",
"svc-map",
"pip-manifest",
"package-installs",
"pkg-map",
"puppet",
"cache-url",
"dib-python",
"os-svc-install",
"install-bin"
],
"blacklist": [
"99-refresh-completed"
]
}
]

View File

@ -1,84 +0,0 @@
alabaster==0.7.10
anyjson==0.3.3
appdirs==1.3.0
Babel==2.3.4
bashate==0.5.1
cliff==2.8.0
cmd2==0.8.0
coverage==4.0
debtcollector==1.2.0
decorator==3.4.0
deprecation==1.0
dib-utils==0.0.8
docutils==0.11
dogpile.cache==0.6.2
dulwich==0.15.0
extras==1.0.0
fixtures==3.0.0
flake8==2.2.4
hacking==0.10.3
imagesize==0.7.1
iso8601==0.1.11
Jinja2==2.10
jmespath==0.9.0
jsonpatch==1.16
jsonpointer==1.13
jsonschema==2.6.0
keystoneauth1==3.4.0
linecache2==1.0.0
MarkupSafe==1.0
mccabe==0.2.1
mock==2.0.0
monotonic==0.6
mox3==0.20.0
msgpack-python==0.4.0
munch==2.1.0
netaddr==0.7.18
netifaces==0.10.4
openstackdocstheme==1.18.1
openstacksdk==0.11.2
os-apply-config==5.0.0
os-client-config==1.28.0
os-refresh-config==6.0.0
os-service-types==1.2.0
osc-lib==1.8.0
oslo.config==5.2.0
oslo.i18n==3.15.3
oslo.serialization==2.18.0
oslo.utils==3.33.0
oslotest==3.2.0
pbr==2.0.0
pep8==1.5.7
positional==1.2.1
prettytable==0.7.2
psutil==3.2.2
pyflakes==0.8.1
Pygments==2.2.0
pyparsing==2.1.0
pyperclip==1.5.27
pystache==0.5.4
python-ironicclient==2.2.0
python-keystoneclient==3.8.0
python-mimeparse==1.6.0
python-mistralclient==3.1.0
python-novaclient==9.1.0
python-subunit==1.0.0
python-swiftclient==3.2.0
pytz==2013.6
PyYAML==3.12
reno==2.5.0
requests==2.14.2
requestsexceptions==1.2.0
rfc3986==0.3.1
simplejson==3.5.1
six==1.10.0
snowballstemmer==1.2.1
Sphinx==1.6.5
sphinxcontrib-websupport==1.0.1
stevedore==1.20.0
testrepository==0.0.18
testscenarios==0.4
testtools==2.2.0
traceback2==1.4.0
unittest2==1.1.0
wrapt==1.7.0

View File

@ -1,25 +0,0 @@
---
prelude: >
6.0.0 is the final release for Ocata.
It's the first release where release notes are added.
features:
- Support for the gnocchi service on the undercloud to provide metrics in
Telemetry. This will only be enabled when enable_telemetry is true.
- Support for the panko service on the undercloud to provide events in
Telemetry. This will only be enabled when enable_telemetry is true.
- Remove Glance Registry from undercloud. It also means Glance API v1 won't
be available anymore.
- Validate vips when generating certificate.
- Improve upgrade process to include an upgrade flag. This flag will be used
by the Puppet manifest to know when an upgrade happens.
- Deploy Nova Placement API service.
- Novajoin service support.
- Run `yum update -y` before Puppet run.
- Optional Cinder support for undercloud.
- When Cinder is enabled, deploy both v2 and v3 APIs.
- Aodh is now configured by default to use its own mysql backend.
deprecations:
- Ceilometer API is officially deprecated. The service is still enabled
when enable_telemetry is true. This can be disabled using the
enable_legacy_ceilometer_api option in undercloud.conf. Users should
start migrating to aodh, gnocchi and panko in the future.

View File

@ -1,4 +0,0 @@
---
features:
- The undercloud installation now adds a keystone user and configures the
authtoken middleware for novajoin.

View File

@ -1,5 +0,0 @@
---
security:
- |
TLS is now used by default for the public endpoints. This is done through
the generate_service_certificate option, which now defaults to 'True'.

View File

@ -1,5 +0,0 @@
---
features:
- |
Add additional endpoints to hieradata, which are used in the tripleo::ui
class to facilitate proxying of API endpoints via Apache's mod_rewrite.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
Fixes `bug 1668775 <https://bugs.launchpad.net/tripleo/+bug/1668775>`__: Certmonger certificate does not include EKUs.

View File

@ -1,4 +0,0 @@
---
fixes:
- Add gnocchi to the events dispatcher so ceilometer can
publish events to panko and gnocchi.

View File

@ -1,4 +0,0 @@
---
fixes:
- Add OS_AUTH_TYPE to the undercloud stackrc file. Not all clients default to
keystone auth, so let's explicitly set the auth type in the environment.

View File

@ -1,6 +0,0 @@
---
features:
- |
Add tripleo::ui::endpoint_proxy_ironic_inspector and
tripleo::ui::endpoint_config_ironic_inspector variables to elements for
use in new proxy config for ironic-inspector API service

View File

@ -1,9 +0,0 @@
---
features:
- |
The ``ansible`` deploy interface is enabled by default. It can be used by
updating a node with the following command::
openstack baremetal node set <NODE> --deploy-interface ansible \
--driver-info ansible_deploy_username=<SSH_USER> \
--driver-info ansible_deploy_key_file=<SSH_KEY_FILE>

View File

@ -1,6 +0,0 @@
---
fixes:
- |
The user-provided certificate (via the undercloud_service_certificate
option) now takes precedence over the autogenerated certificate (which is
created via the generate_service_certificate option).

View File

@ -1,12 +0,0 @@
---
upgrade:
- |
Changed the configuration of endpoints that UI uses in order to connect to
the Undercloud in a non-SSL deployment. The port number that the UI now
uses to communicate with the Undercloud for non-SSL connections is 3000,
which supports endpoint proxy configuration. Previously, this port number
was the default port number for the service endpoint that UI connected to.
fixes:
- |
Fixes `bug 1663199 <https://bugs.launchpad.net/tripleo/+bug/1663199>`__: UI doesn't work without manual update on HTTP undercloud.

View File

@ -1,7 +0,0 @@
---
fixes:
- |
In /etc/heat/heat.conf, [clients]/endpoint_type was configured to use the
internal endpoints and this was hardcoded in puppet-stack-config.pp so
there was no way to change it. It's now configurable via the hiera key
heat_clients_endpoint_type.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
The Heat CFN endpoint is now created in Keystone during the undercloud
install. A new configuration option, undercloud_heat_cfn_password is added
for the heat_cfn service user associated with the endpoint.

View File

@ -1,3 +0,0 @@
---
deprecations:
- the instack-virt-setup script has been deprecated.

View File

@ -1,4 +0,0 @@
---
deprecations:
- auth_uri is deprecated and will be removed in a future release.
Please use www_authenticate_uri instead.

View File

@ -1,5 +0,0 @@
---
deprecations:
- |
instack-undercloud is deprecated in the Rocky cycle and is replaced by
the containerized undercloud efforts in python-tripleoclient.

View File

@ -1,6 +0,0 @@
---
deprecations:
- Ceilometer API is deprecated since the Ocata release.
fixes:
- Ceilometer API is now disabled by default. This has been deprecated
since the Ocata release. Use the gnocchi/aodh and panko APIs instead.

View File

@ -1,6 +0,0 @@
---
deprecations:
- Ceilometer collector service is deprecated in the Pike release.
fixes:
- Disable ceilometer collector by default as it's deprecated. All the
data will now be dispatched through the pipeline directly.

View File

@ -1,10 +0,0 @@
---
upgrade:
- If you had telemetry enabled in Ocata and you upgrade to Pike with
defaults, the telemetry services will be disabled upon upgrade. If
you choose to keep them enabled, set the enable_telemetry option to
true before upgrading and the services will remain enabled after upgrade.
fixes:
- Finally disabling telemetry services on the undercloud by default. The
telemetry use case has been quite limited on the undercloud, so it makes
sense to disable it by default and let users enable it based on need.

View File

@ -1,11 +0,0 @@
---
upgrade:
- |
Network configuration changes are no longer allowed during undercloud
upgrades. Changing the local_ip of a deployed undercloud causes problems
with some of the services, so a pre-deployment check was added to prevent
such changes.
Because the default CIDR was changed in this release, the check also
prevents accidental reconfiguration of the ctlplane network if the old
default is still in use, but not explicitly configured.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Fixes an issue where the PXE filter in ironic-inspector's DHCP server may
become out of sync with the ironic-inspector service. `Bug 1780421
<https://bugs.launchpad.net/tripleo/+bug/1780421>`_.

View File

@ -1,6 +0,0 @@
---
features:
- Add a new docker_registry_mirror option which can be used to
configure a registry mirror in the /etc/docker/daemon.json file.
The motivation for this change is to help support pulling images
from HTTP mirrors within CI.
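As a rough sketch of what this option produces (the mirror URL here is hypothetical), the Docker daemon reads mirrors from the `registry-mirrors` list in /etc/docker/daemon.json:

```python
import json

# Hypothetical mirror URL; the docker_registry_mirror value ends up in the
# "registry-mirrors" list of /etc/docker/daemon.json.
daemon_conf = {'registry-mirrors': ['http://mirror.example.com:8081']}
print(json.dumps(daemon_conf, indent=2))
```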

View File

@ -1,6 +0,0 @@
---
issues:
- |
Keystone v2.0 APIs were removed so we now need to configure
`project_domain_name` and `user_domain_name` to enable v3 API.
We're using the Default domain since it was already in use.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
Drop ceilometer collector from undercloud. This was moved into legacy mode
in Pike and deprecated.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
Remove legacy ceilometer api from undercloud. This was moved to legacy
mode in Pike.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
The description of the ``enable_cinder`` option was fixed to not imply
that booting from Cinder volumes is implemented in the undercloud.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Fixed an incompatibility with mistralclient 3.2.0, where a different
exception type was raised and thus not handled during the undercloud
install post-config. See #1749186.

Some files were not shown because too many files have changed in this diff