Retire os-cloud-config

os-cloud-config was deprecated in the Ocata release and is no longer used
in Pike.

A codesearch query confirms that nothing consumes the module anymore:
http://codesearch.openstack.org/?q=os_cloud_config&i=nope&files=&repos=

Change-Id: I31f382fc5a55ffdb847403e1a25f679647579ad5
Emilien Macchi 2017-03-28 15:37:49 -04:00
parent 294a04e99e
commit 95587d4c64
68 changed files with 5 additions and 5866 deletions

@@ -1,7 +0,0 @@
[run]
branch = True
source = os_cloud_config
omit = os_cloud_config/tests/*,os_cloud_config/openstack/*
[report]
ignore_errors = True

.gitignore

@@ -1,52 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
cover
.tox
nosetests.xml
.testrepository
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/os-cloud-config.git

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/os-cloud-config

@@ -1,4 +0,0 @@
os-cloud-config Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@@ -1,64 +1,6 @@
.. warning::
   This project is no longer maintained.

   os-cloud-config is DEPRECATED in the Ocata release and will be removed in
   Pike.
========================
Team and repository tags
========================
.. image:: http://governance.openstack.org/badges/os-cloud-config.svg
    :target: http://governance.openstack.org/reference/tags/index.html
.. Change things from this point on
===============================
os-cloud-config
===============================
Configuration for OpenStack clouds.
When first installing an OpenStack cloud there are a number of common
up-front configuration tasks that need to be performed. To alleviate
the need for different sets of tooling to reinvent solutions to these
problems, this package provides a set of tools.
These tools are intended to be well-tested, and available as
importable Python modules as well as command-line tools.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/os-cloud-config
Features
--------
* generate-keystone-pki:

  * Generate a certificate authority and a signing key for use with Keystone
    Public Key Infrastructure token signing.

* init-keystone:

  * Initialize Keystone on a host with a provided admin token, admin e-mail
    and admin password. Also allows optionally changing the region and the
    public endpoint that Keystone registers with itself.

* register-nodes:

  * Register nodes with a baremetal service, such as Nova-baremetal or Ironic.

* setup-endpoints:

  * Register services, such as Glance and Cinder with a configured Keystone.

* setup-flavors:

  * Creates flavors in Nova, either describing the distinct set of nodes the
    cloud has registered, or a custom set of flavors that has been specified.

* setup-neutron:

  * Configure Neutron at the cloud (not the host) level, setting up either a
    physical control plane network suitable for deployment clouds, or an
    external network with an internal floating network suitable for workload
    clouds.
The contents of this repository are still available in the Git source code
management system. To see the contents of this repository before it reached
its end of life, please check out the previous commit with
"git checkout HEAD^1".

@@ -1 +0,0 @@
[python: **.py]

@@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    #'sphinx.ext.intersphinx',
    'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'os-cloud-config'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1,4 +0,0 @@
Contributing to os-cloud-config
===================================
.. include:: ../../CONTRIBUTING.rst

@@ -1,29 +0,0 @@
.. os-cloud-config documentation master file, created by
   sphinx-quickstart on Tue Jul 9 22:26:36 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to os-cloud-config's documentation!
========================================================
.. warning::
   os-cloud-config is DEPRECATED in the Ocata release and will be removed in
   Pike.
Contents:
.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install os-cloud-config
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv os-cloud-config
$ pip install os-cloud-config

@@ -1 +0,0 @@
.. include:: ../../README.rst

@@ -1,274 +0,0 @@
========
Usage
========
To use os-cloud-config in a project::
import os_cloud_config
-----------------------------------
Initializing Keystone for a host
-----------------------------------
The ``init-keystone`` command line utility initializes Keystone for use with
normal authentication by creating the admin and service tenants, the admin
role, and the admin user, configuring certificates, and finally registering
the initial identity endpoint.
.. note::
init-keystone will wait for a user-specified amount of time for a Keystone
service to be running on the specified host. The default is a 10 minute
wait time with 10 seconds between poll attempts.
For example::
init-keystone -o 192.0.2.1 -t unset -e admin@example.com -p unset -u root
That acts on the host ``192.0.2.1``, sets the admin token and the admin password
to the string ``unset``, the admin e-mail address to ``admin@example.com``, and
uses the root user to connect to the host via ssh to configure certificates.
--------------------------------------------
Registering nodes with a baremetal service
--------------------------------------------
The ``register-nodes`` command line utility supports registering nodes with
either Ironic or Nova-baremetal. Ironic will be used if the Ironic service
is registered with Keystone.
.. note::
register-nodes will ask Ironic to power off every machine as they are
registered.
.. note::
register-nodes will wait up to 10 minutes for the baremetal service to
register a node.
The nodes argument to register-nodes is a JSON file describing the nodes to
be registered in a list of objects. If the node is determined to be currently
registered, the details from the JSON file will be used to update the node
registration.
.. note::
Nova-baremetal does not support updating registered nodes; any previously
registered nodes will be skipped.
For example::
register-nodes -s seed -n /tmp/one-node
Where ``/tmp/one-node`` contains::
[
{
"memory": "2048",
"disk": "30",
"arch": "i386",
"pm_user": "steven",
"pm_addr": "192.168.122.1",
"pm_password": "password",
"pm_type": "pxe_ssh",
"mac": [
"00:76:31:1f:f2:a0"
],
"cpu": "1"
}
]
.. note::
The memory, disk, arch, and cpu fields are optional and can be omitted.
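Before handing a nodes file to register-nodes, the required keys can be sanity-checked with plain stdlib Python. The helper below is a hypothetical convenience, not part of os-cloud-config; the key names are taken from the example above.

```python
import json

# Key names from the /tmp/one-node example above; REQUIRED/OPTIONAL split
# is an assumption based on the note about omittable fields.
REQUIRED = {"pm_type", "pm_addr", "pm_user", "pm_password", "mac"}
OPTIONAL = {"memory", "disk", "arch", "cpu"}  # may be omitted, per the note


def missing_keys(node):
    """Return the sorted list of required keys absent from one node entry."""
    return sorted(REQUIRED - node.keys())


nodes = json.loads("""
[
  {
    "memory": "2048", "disk": "30", "arch": "i386", "cpu": "1",
    "pm_user": "steven", "pm_addr": "192.168.122.1",
    "pm_password": "password", "pm_type": "pxe_ssh",
    "mac": ["00:76:31:1f:f2:a0"]
  }
]
""")
```

Running ``missing_keys`` over each entry before registration catches typos in a nodes file without having to wait for the baremetal service to reject it.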
----------------------------------------------------------
Generating keys and certificates for use with Keystone PKI
----------------------------------------------------------
The ``generate-keystone-pki`` command line utility generates keys and certificates
which Keystone uses for signing authentication tokens.
- Keys and certificates can be generated into separate files::
generate-keystone-pki /tmp/certificates
That creates four files with signing and CA keys and certificates in the
``/tmp/certificates`` directory.
- Keys and certificates can be generated into a heat environment file::
generate-keystone-pki -j overcloud-env.json
That adds the following values into the ``overcloud-env.json`` file::
{
"parameter_defaults": {
"KeystoneSigningKey": "some_key",
"KeystoneSigningCertificate": "some_cert",
"KeystoneCACertificate": "some_cert"
}
}
The CA key is not added because Keystone PKI does not need it.
- Keys and certificates can be generated into an os-apply-config metadata file::
generate-keystone-pki -s -j local.json
This adds the following values into the ``local.json`` file::
{
"keystone": {
"signing_certificate": "some_cert",
"signing_key": "some_key",
"ca_certificate": "some_cert"
}
}
The CA key is not added because Keystone PKI does not need it.
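Folding such generated values into an existing environment or metadata document can be sketched with stdlib JSON handling. The function below is a hypothetical helper, not os-cloud-config's implementation; it only mirrors the two layouts shown above.

```python
import json


def merge_keystone_certs(env_json, certs, seed=False):
    """Merge generated cert values into an env/metadata document,
    mirroring the parameter_defaults vs. keystone layouts shown above."""
    env = json.loads(env_json) if env_json else {}
    section = "keystone" if seed else "parameter_defaults"
    env.setdefault(section, {}).update(certs)
    return env


env = merge_keystone_certs("{}", {
    "KeystoneSigningKey": "some_key",
    "KeystoneSigningCertificate": "some_cert",
    "KeystoneCACertificate": "some_cert",
})
```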
---------------------
Setting up networking
---------------------
The ``setup-neutron`` command line utility sets up either a physical control
plane network suitable for deployment clouds, or an external network with an
internal floating network suitable for workload clouds.
The network JSON argument allows specifying the network(s) to be created::
setup-neutron -n /tmp/ctlplane
Where ``/tmp/ctlplane`` contains::
{
"physical": {
"gateway": "192.0.2.1",
"metadata_server": "192.0.2.1",
"cidr": "192.0.2.0/24",
"allocation_end": "192.0.2.20",
"allocation_start": "192.0.2.2",
"name": "ctlplane"
}
}
This will create a Neutron flat net with a name of ``ctlplane``, and a subnet
with a CIDR of ``192.0.2.0/24``, a metadata server and gateway of ``192.0.2.1``,
and will allocate DHCP leases in the range of ``192.0.2.2`` to ``192.0.2.20``, as
well as adding a route for ``169.254.169.254/32``.
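The relationships between those values can be checked offline with the stdlib ``ipaddress`` module. This is a sketch, not part of os-cloud-config, using the values from the ``/tmp/ctlplane`` example above:

```python
import ipaddress
import json

# Values from the /tmp/ctlplane example above.
physical = json.loads("""
{"gateway": "192.0.2.1", "metadata_server": "192.0.2.1",
 "cidr": "192.0.2.0/24", "allocation_start": "192.0.2.2",
 "allocation_end": "192.0.2.20", "name": "ctlplane"}
""")

net = ipaddress.ip_network(physical["cidr"])
start = ipaddress.ip_address(physical["allocation_start"])
end = ipaddress.ip_address(physical["allocation_end"])
gateway = ipaddress.ip_address(physical["gateway"])

# Neutron would reject the subnet at create time if these did not hold.
assert start in net and end in net and gateway in net
assert start <= end

# .2 through .20 inclusive: the number of DHCP leases available.
lease_count = int(end) - int(start) + 1
```

Checking a network file this way before running setup-neutron surfaces an allocation range outside the CIDR without touching the cloud.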
setup-neutron also supports datacentre networks that require 802.1Q VLAN tags::
setup-neutron -n /tmp/ctlplane-dc
Where ``/tmp/ctlplane-dc`` contains::
{
"physical": {
"gateway": "192.0.2.1",
"metadata_server": "192.0.2.1",
"cidr": "192.0.2.0/24",
"allocation_end": "192.0.2.20",
"allocation_start": "192.0.2.2",
"name": "public",
"physical_network": "ctlplane",
"segmentation_id": 25
}
}
This creates a Neutron 'net' called ``public`` using VLAN tag ``25``, that uses
an existing 'net' called ``ctlplane`` as a physical transport.
.. note::
The key ``physical_network`` is required when creating a network that
specifies a ``segmentation_id``, and it must reference an existing net.
setup-neutron can also create two networks suitable for workload clouds::
setup-neutron -n /tmp/float
Where ``/tmp/float`` contains::
{
"float": {
"cidr": "10.0.0.0/8",
"name": "default-net"
},
"external": {
"name": "ext-net",
"cidr": "192.0.2.0/24",
"allocation_start": "192.0.2.45",
"allocation_end": "192.0.2.64",
"gateway": "192.0.2.1"
}
}
This creates two Neutron nets, the first with the name ``default-net`` and
set as shared, and the second with the name ``ext-net`` with the ``router:external``
property set to true. The ``default-net`` subnet has a CIDR of ``10.0.0.0/8`` and a
default nameserver of ``8.8.8.8``, and the ``ext-net`` subnet has a CIDR of
``192.0.2.0/24``, a gateway of ``192.0.2.1`` and allocates DHCP from
``192.0.2.45`` until ``192.0.2.64``. setup-neutron will also create a router
for the float network, setting the external network as the gateway.
----------------
Creating flavors
----------------
The ``setup-flavors`` command line utility creates flavors in Nova, either by
deriving them from the distinct hardware configurations of the registered
nodes, or by creating a specified set of flavors.
.. note::
setup-flavors will delete the existing default flavors, such as m1.small
and m1.xlarge. This assumes the cloud being configured uses only baremetal
hardware, and so only needs flavors describing the hardware available.
Utilising the ``/tmp/one-node`` file specified in the ``register-nodes`` example
above, create a flavor::
setup-flavors -n /tmp/one-node
Which results in a flavor called ``baremetal_2048_30_None_1``.
If the ``ROOT_DISK`` environment variable is set, it will be used as the disk
size, with the remainder set as ephemeral storage, giving a flavor name of
``baremetal_2048_10_20_1``.
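The naming scheme implied by the two example names is ``baremetal_<memory>_<disk>_<ephemeral>_<cpu>``. A hedged reconstruction of how such a name could be derived (not the actual setup-flavors code; ``root_disk`` stands in for the ``ROOT_DISK`` environment variable):

```python
def flavor_name(node, root_disk=None):
    """Reproduce the apparent baremetal_<memory>_<disk>_<ephemeral>_<cpu>
    pattern inferred from the examples above."""
    disk = int(node["disk"])
    if root_disk is None:
        ephemeral = None
    else:
        ephemeral = disk - int(root_disk)  # remainder becomes ephemeral
        disk = int(root_disk)
    return "baremetal_%s_%s_%s_%s" % (node["memory"], disk, ephemeral,
                                      node["cpu"])


node = {"memory": "2048", "disk": "30", "cpu": "1"}
```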
Conversely, you can specify a JSON file describing the flavors to create::
setup-flavors -f /tmp/one-flavor
Where ``/tmp/one-flavor`` contains::
[
{
"name": "controller",
"memory": "2048",
"disk": "30",
"arch": "i386",
"cpu": "1"
}
]
The JSON file can also contain an ``extra_specs`` parameter, which is a JSON
object describing the key-value pairs to add into the flavor metadata::
[
{
"name": "controller",
"memory": "2048",
"disk": "30",
"arch": "i386",
"cpu": "1",
"extra_specs": {
"key": "value"
}
}
]

@@ -1,40 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import logging.config
import sys

import pbr.version

__version__ = pbr.version.VersionInfo('os_cloud_config').version_string()


def configure_logging(args=None):
    # fileConfig lives in logging.config, which must be imported explicitly.
    if args and args.log_config:
        logging.config.fileConfig(args.log_config,
                                  disable_existing_loggers=False)
    else:
        format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        date_format = '%Y-%m-%d %H:%M:%S'
        log_level = logging.DEBUG if args and args.debug else logging.INFO
        logging.basicConfig(datefmt=date_format,
                            format=format,
                            level=log_level,
                            stream=sys.stdout)


configure_logging()

LOG = logging.getLogger(__name__)
LOG.warning("os-cloud-config is DEPRECATED in the Ocata release and will "
            "be removed in Pike.")

@@ -1,28 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n as i18n
_translators = i18n.TranslatorFactory(domain='os_cloud_config')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical

@@ -1,65 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import textwrap
from os_cloud_config.cmd.utils import environment
from os_cloud_config import keystone_pki
def parse_args():
    description = textwrap.dedent("""
    Generate 4 files inside <directory> for use with Keystone PKI
    token signing:

        ca_key.pem       - certificate authority key
        ca_cert.pem      - self-signed certificate authority certificate
        signing_key.pem  - key for signing tokens
        signing_cert.pem - certificate for verifying token validity, the
                           certificate itself is verifiable by ca_cert.pem

    Alternatively write generated key/certs into <heatenv> JSON file
    or into an os-apply-config metadata file for use without Heat.

    ca_key.pem doesn't have to (shouldn't) be uploaded to Keystone nodes.
    """)
    parser = argparse.ArgumentParser(
        description=description,
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('-d', '--directory', dest='directory',
                       help='directory where keys/certs will be generated')
    group.add_argument('-j', '--heatenv', dest='heatenv',
                       help='write signing key/cert and CA cert into JSON '
                            'Heat environment file, CA key is omitted')
    parser.add_argument('-s', '--seed', action='store_true',
                        help='JSON file for seed machine has different '
                             'structure (for seed machine we update directly '
                             'heat metadata file injected into image). '
                             'Different key/certs names and different '
                             'parent node are used (default: false)')
    environment._add_logging_arguments(parser)
    return parser.parse_args()


def main():
    args = parse_args()
    environment._configure_logging(args)

    if args.heatenv:
        keystone_pki.generate_certs_into_json(args.heatenv, args.seed)
    else:
        keystone_pki.create_and_write_ca_and_signing_pairs(args.directory)

@@ -1,88 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import textwrap
from os_cloud_config.cmd.utils import environment
from os_cloud_config.keystone import initialize
def parse_args():
description = textwrap.dedent("""
Perform initial setup of keystone for a new cloud.
This will create the admin and service tenants, the admin role, the admin
user, configure certificates and finally register the initial identity
endpoint, after which Keystone may be used with normal authentication.
This command will wait for a user-specified amount of time for a Keystone
service to be running on the specified host. The default is a 10 minute
wait time with 10 seconds between poll attempts.
""")
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=description)
parser.add_argument('-o', '--host', dest='host', required=True,
help="ip/hostname of node where Keystone is running")
parser.add_argument('-t', '--admin-token', dest='admin_token',
help="admin token to use with Keystone's admin "
"endpoint", required=True)
parser.add_argument('-e', '--admin-email', dest='admin_email',
help="admin user's e-mail address to be set",
required=True)
parser.add_argument('-p', '--admin-password', dest='admin_password',
help="admin user's password to be set",
required=True)
parser.add_argument('-r', '--region', dest='region', default='regionOne',
help="region to create the endpoint in")
endpoint_group = parser.add_mutually_exclusive_group()
endpoint_group.add_argument('-s', '--ssl', dest='ssl',
help="ip/hostname to use as the ssl "
"endpoint, if required")
endpoint_group.add_argument('--public', dest='public',
help="ip/hostname to use as the public "
"endpoint, if the default is not suitable")
parser.add_argument('-u', '--user', dest='user',
help="user to connect to the Keystone node via ssh, "
"required with --pki-setup")
parser.add_argument('--timeout', dest='timeout', default=600, type=int,
help="Total seconds to wait for keystone to be ready")
parser.add_argument('--poll-interval', dest='pollinterval',
default=10, type=int,
help="Seconds to wait between keystone checks")
pki_group = parser.add_mutually_exclusive_group()
pki_group.add_argument('--pki-setup', dest='pkisetup',
action='store_true',
help="Perform PKI setup (DEPRECATED)", default=True)
pki_group.add_argument('--no-pki-setup', dest='pkisetup',
action='store_false',
help="Do not perform PKI setup")
environment._add_logging_arguments(parser)
return parser.parse_args()
def main():
args = parse_args()
environment._configure_logging(args)
if args.pkisetup and not args.user:
print("User is required if PKI setup will be performed.")
return 1
initialize(args.host, args.admin_token, args.admin_email,
args.admin_password, args.region, args.ssl, args.public,
args.user, args.timeout, args.pollinterval, args.pkisetup)
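The wait behaviour described in the command help (a 600-second budget polled every 10 seconds) can be sketched as a generic loop; `wait_for_keystone` here is a hypothetical helper for illustration, not part of os-cloud-config:

```python
import time

def wait_for_keystone(check, timeout=600, poll_interval=10,
                      clock=time.monotonic, sleep=time.sleep):
    # Poll check() until it returns True or the timeout budget is spent.
    # clock/sleep are injectable so the loop can be tested without waiting.
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(poll_interval)
    return False
```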

@@ -1,53 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import textwrap
import os_cloud_config.cmd.utils._clients as clients
from os_cloud_config.cmd.utils import environment
from os_cloud_config.keystone import initialize_for_heat
def parse_args():
description = textwrap.dedent("""
Create a domain for Heat to use, as well as a user to administer it.
This will create a heat domain in Keystone, as well as an admin user that
has rights to administer the domain.
""")
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=description)
parser.add_argument('-d', '--domain-admin-password',
dest='domain_admin_password',
help="domain admin user's password to be set",
required=True)
environment._add_logging_arguments(parser)
return parser.parse_args()
def main():
args = parse_args()
environment._configure_logging(args)
try:
environment._ensure()
keystone_client = clients.get_keystone_v3_client()
initialize_for_heat(keystone_client, args.domain_admin_password)
except Exception:
logging.exception("Unexpected error during command execution")
return 1
return 0

@@ -1,84 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import logging
import textwrap
from os_cloud_config.cmd.utils import _clients
from os_cloud_config.cmd.utils import environment
from os_cloud_config import nodes
def parse_args():
description = textwrap.dedent("""
Register nodes with either Ironic or Nova-baremetal.
The JSON nodes file contains a list of node metadata. Each list item is
a JSON object describing one node, which has "memory" in KB, "cpu" in
threads, "arch" (one of i386/amd64/etc), "disk" in GB, "mac" a list of
MAC addresses for the node, and "pm_type", "pm_user", "pm_addr" and
"pm_password" describing power management details.
Ironic will be used if the Ironic service is registered with Keystone.
This program will wait up to 10 minutes for the baremetal service to
register a node.
""")
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=description)
parser.add_argument('-s', '--service-host', dest='service_host',
help='Nova compute service host to register nodes '
'with')
parser.add_argument('-n', '--nodes', dest='nodes', required=True,
help='A JSON file containing a list of nodes that '
'are intended to be registered')
parser.add_argument('-r', '--remove', dest='remove', action='store_true',
help='Remove all unspecified nodes from the baremetal '
'service. Use with extreme caution!')
parser.add_argument('-k', '--kernel-name', dest='kernel_name',
help='Default kernel name (in Glance) for nodes that '
'do not specify one.')
parser.add_argument('-d', '--ramdisk-name', dest='ramdisk_name',
help='Default ramdisk name (in Glance) for nodes that '
'do not specify one.')
environment._add_logging_arguments(parser)
return parser.parse_args()
def main():
args = parse_args()
environment._configure_logging(args)
try:
with open(args.nodes, 'r') as node_file:
nodes_list = json.load(node_file)
environment._ensure()
keystone_client = _clients.get_keystone_client()
glance_client = _clients.get_glance_client()
client = _clients.get_ironic_client()
nodes.register_all_nodes(
args.service_host, nodes_list, client=client, remove=args.remove,
blocking=True, keystone_client=keystone_client,
glance_client=glance_client, kernel_name=args.kernel_name,
ramdisk_name=args.ramdisk_name)
except Exception:
logging.exception("Unexpected error during command execution")
return 1
return 0
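For reference, a one-node file matching the schema described in the help text might look like the following (all values, including the `pm_type`, are illustrative only):

```python
import json

# A single node entry per the register-nodes schema described above:
# hardware facts plus pm_* power-management details.
example_nodes = [
    {
        "memory": 4096,
        "cpu": 2,
        "arch": "amd64",
        "disk": 40,
        "mac": ["52:54:00:aa:bb:cc"],
        "pm_type": "pxe_ipmitool",
        "pm_user": "admin",
        "pm_addr": "192.0.2.10",
        "pm_password": "secret",
    },
]

nodes_json = json.dumps(example_nodes, indent=2)
```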

@@ -1,79 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import textwrap
import simplejson
from os_cloud_config.cmd.utils import _clients
from os_cloud_config.cmd.utils import environment
from os_cloud_config import keystone
def parse_args():
description = textwrap.dedent("""
Register endpoints for specified services.
The JSON services file contains a dict of services metadata. Each item is
a JSON object describing one service. You can define the following keys
for each service:
description - description of service
type - type of service
path - path part of endpoint URI
admin_path - path part of admin endpoint URI
port - endpoint's port
ssl_port - if 'public' parameter is specified, public endpoint URI
is composed of public IP and this SSL port
password - password set for service's user
name - create user and service with specified name, service key
name is used by default
nouser - don't create user for service
""")
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=description)
parser.add_argument('-s', '--services', dest='services', required=True,
help='a JSON file containing a list of services that '
'are intended to be registered')
parser.add_argument('-p', '--public', dest='public_host',
help='ip/hostname used for public endpoint URI, HTTPS '
'will be used')
parser.add_argument('-r', '--region', dest='region',
help='represents the geographic location of the '
'service endpoint')
environment._add_logging_arguments(parser)
return parser.parse_args()
def main(stdout=None):
args = parse_args()
environment._configure_logging(args)
if os.path.isfile(args.services):
with open(args.services, 'r') as service_file:
services = simplejson.load(service_file)
else:
# we assume it's just a JSON string
services = simplejson.loads(args.services)
client = _clients.get_keystone_client()
keystone.setup_endpoints(
services,
public_host=args.public_host,
region=args.region,
os_auth_url=os.environ["OS_AUTH_URL"],
client=client)
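An illustrative services file using the keys listed in the help text; the service names, paths, and port numbers here are examples only, not authoritative values:

```python
import json

example_services = {
    "nova": {
        "description": "Nova Compute Service",
        "type": "compute",
        "path": "/v2.1",
        "port": 8774,
        "ssl_port": 13774,
        "password": "example-password",
    },
    "keystone": {
        "type": "identity",
        "path": "/v2.0",
        "port": 5000,
        # nouser: no service user is created for this entry
        "nouser": True,
    },
}

services_json = json.dumps(example_services, indent=2)
```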

@@ -1,88 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import logging
import os
import textwrap
from os_cloud_config.cmd.utils import _clients as clients
from os_cloud_config.cmd.utils import environment
from os_cloud_config import flavors
def parse_args():
description = textwrap.dedent("""
Create flavors describing the compute resources the cloud has
available.
If the list of flavors is only meant to encompass hardware that the cloud
has available, a JSON file describing the nodes can be specified; see
register-nodes for the format.
If a custom list of flavors is to be created, they can be specified as a
list of JSON objects. Each list item is a JSON object describing one
flavor, which has a "name" for the flavor, "memory" in MB, "cpu" in
threads, "disk" in GB and "arch" as one of i386/amd64/etc.
""")
parser = argparse.ArgumentParser(
description=description,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-n', '--nodes', dest='nodes',
help='A JSON file containing a list of nodes that '
'distinct flavors will be generated and created from')
group.add_argument('-f', '--flavors', dest='flavors',
help='A JSON file containing a list of flavors to '
'create directly')
group.add_argument('-i', '--ironic', action='store_true',
help='Pull the registered list of nodes from Ironic '
'that distinct flavors will be generated and created '
'from')
parser.add_argument('-k', '--kernel', dest='kernel',
help='ID of the kernel in Glance', required=True)
parser.add_argument('-r', '--ramdisk', dest='ramdisk',
help='ID of the ramdisk in Glance', required=True)
environment._add_logging_arguments(parser)
return parser.parse_args()
def main():
args = parse_args()
environment._configure_logging(args)
try:
environment._ensure()
client = clients.get_nova_bm_client()
flavors.cleanup_flavors(client)
root_disk = os.environ.get('ROOT_DISK', None)
if args.nodes:
with open(args.nodes, 'r') as nodes_file:
nodes_list = json.load(nodes_file)
flavors.create_flavors_from_nodes(
client, nodes_list, args.kernel, args.ramdisk, root_disk)
elif args.flavors:
with open(args.flavors, 'r') as flavors_file:
flavors_list = json.load(flavors_file)
flavors.create_flavors_from_list(
client, flavors_list, args.kernel, args.ramdisk)
elif args.ironic:
ironic_client = clients.get_ironic_client()
flavors.create_flavors_from_ironic(
client, ironic_client, args.kernel, args.ramdisk, root_disk)
except Exception:
logging.exception("Unexpected error during command execution")
return 1
return 0
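An illustrative `--flavors` file following the format described above ("name", "memory" in MB, "cpu" in threads, "disk" in GB, "arch"); the flavor names and sizes are examples only:

```python
import json

example_flavors = [
    {"name": "baremetal-small", "memory": 2048, "cpu": 1,
     "disk": 20, "arch": "amd64"},
    {"name": "baremetal-large", "memory": 8192, "cpu": 8,
     "disk": 100, "arch": "amd64"},
]

flavors_json = json.dumps(example_flavors, indent=2)
```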

@@ -1,81 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import logging
import textwrap
from os_cloud_config.cmd.utils import _clients
from os_cloud_config.cmd.utils import environment
from os_cloud_config import neutron
def parse_args():
description = textwrap.dedent("""
Setup neutron for a new cloud.
The JSON describing the network(s) to create is expected to be of the
following form:
{
"physical": {
"gateway": "192.0.2.1",
"metadata_server": "192.0.2.1",
"cidr": "192.0.2.0/24",
"allocation_end": "192.0.2.20",
"allocation_start": "192.0.2.2",
"name": "ctlplane"
},
"float": {
"allocation_start": "10.0.0.2",
"allocation_end": "10.0.0.100",
"name": "default-net"
"cidr": "10.0.0.0/8"
}
}
At least one network of the type 'physical' or 'float' is required. cidr
and name are always required for each network, and physical networks
also require metadata_server.
""")
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=description)
parser.add_argument('-n', '--network-json', dest='json',
help='JSON formatted description of the network(s) to '
'create', required=True)
environment._add_logging_arguments(parser)
return parser.parse_args()
def main():
args = parse_args()
environment._configure_logging(args)
try:
environment._ensure()
with open(args.json, 'r') as jsonfile:
network_desc = json.load(jsonfile)
neutron_client = _clients.get_neutron_client()
keystone_client = _clients.get_keystone_client()
neutron.initialize_neutron(network_desc,
neutron_client=neutron_client,
keystone_client=keystone_client)
except Exception:
logging.exception("Unexpected error during command execution")
return 1
return 0
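The required-key rules spelled out in the help text can be sketched as a small standalone validator (`validate_network_desc` is a hypothetical helper for illustration, not part of os-cloud-config):

```python
def validate_network_desc(network_desc):
    # Rules from the help text: at least one 'physical' or 'float'
    # network; 'cidr' and 'name' required for every network; and
    # 'metadata_server' additionally required for physical networks.
    if not any(kind in network_desc for kind in ("physical", "float")):
        raise ValueError(
            "at least one 'physical' or 'float' network is required")
    for kind, net in network_desc.items():
        for key in ("cidr", "name"):
            if key not in net:
                raise ValueError("%s network is missing '%s'" % (kind, key))
        if kind == "physical" and "metadata_server" not in net:
            raise ValueError("physical network is missing 'metadata_server'")
```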

@@ -1,38 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import mock
from os_cloud_config.cmd import generate_keystone_pki
from os_cloud_config.tests import base
class GenerateKeystonePKITest(base.TestCase):
@mock.patch('os_cloud_config.keystone_pki.generate_certs_into_json')
@mock.patch.object(sys, 'argv', ['generate-keystone-pki', '-j',
'foo.json', '-s'])
def test_with_heatenv(self, generate_mock):
generate_keystone_pki.main()
generate_mock.assert_called_once_with('foo.json', True)
@mock.patch('os_cloud_config.keystone_pki.create_and_write_ca_'
'and_signing_pairs')
@mock.patch.object(sys, 'argv', ['generate-keystone-pki', '-d', 'bar'])
def test_without_heatenv(self, create_mock):
generate_keystone_pki.main()
create_mock.assert_called_once_with('bar')

@@ -1,34 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import mock
from os_cloud_config.cmd import init_keystone
from os_cloud_config.tests import base
class InitKeystoneTest(base.TestCase):
@mock.patch('os_cloud_config.cmd.init_keystone.initialize')
@mock.patch.object(sys, 'argv', ['init-keystone', '-o', 'hostname', '-t',
'token', '-e', 'admin@example.com', '-p',
'password', '-u', 'root'])
def test_script(self, initialize_mock):
init_keystone.main()
initialize_mock.assert_called_once_with(
'hostname', 'token', 'admin@example.com', 'password', 'regionOne',
None, None, 'root', 600, 10, True)

@@ -1,35 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import mock
from os_cloud_config.cmd import init_keystone_heat_domain
from os_cloud_config.tests import base
class InitKeystoneHeatDomainTest(base.TestCase):
@mock.patch('os_cloud_config.cmd.init_keystone_heat_domain.environment')
@mock.patch('os_cloud_config.cmd.init_keystone_heat_domain'
'.initialize_for_heat')
@mock.patch('os_cloud_config.cmd.utils._clients.get_keystone_v3_client',
return_value='keystone_v3_client_mock')
@mock.patch.object(sys, 'argv', ['init-keystone', '-d', 'password'])
def test_script(self, client_mock, initialize_mock, environment_mock):
init_keystone_heat_domain.main()
initialize_mock.assert_called_once_with('keystone_v3_client_mock',
'password')

@@ -1,72 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import tempfile
import mock
from os_cloud_config.cmd import register_nodes
from os_cloud_config.tests import base
class RegisterNodesTest(base.TestCase):
@mock.patch('os_cloud_config.cmd.utils._clients.get_glance_client',
return_value='glance_client_mock')
@mock.patch('os_cloud_config.cmd.utils._clients.get_ironic_client',
return_value='ironic_client_mock')
@mock.patch('os_cloud_config.cmd.utils._clients.get_keystone_client',
return_value='keystone_client_mock')
@mock.patch('os_cloud_config.nodes.register_all_nodes')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['register-nodes', '--service-host',
'seed', '--ramdisk-name', 'bm-ramdisk',
'--kernel-name', 'bm-kernel', '--nodes'])
def test_with_arguments_ironic(self, register_mock,
get_keystone_client_mock,
get_ironic_client_mock,
get_glance_client_mock):
with tempfile.NamedTemporaryFile() as f:
f.write(u'{}\n'.encode('utf-8'))
f.flush()
sys.argv.append(f.name)
return_code = register_nodes.main()
register_mock.assert_called_once_with(
"seed", {}, client='ironic_client_mock', remove=False,
blocking=True, keystone_client='keystone_client_mock',
glance_client='glance_client_mock',
kernel_name='bm-kernel', ramdisk_name='bm-ramdisk')
get_keystone_client_mock.assert_called_once_with()
get_ironic_client_mock.assert_called_once_with()
get_glance_client_mock.assert_called_once_with()
self.assertEqual(0, return_code)
@mock.patch('os_cloud_config.nodes.register_all_nodes')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['register-nodes', '--service-host',
'seed', '--nodes'])
def test_with_exception(self, register_mock):
register_mock.side_effect = ValueError
with tempfile.NamedTemporaryFile() as f:
f.write(u'{}\n'.encode('utf-8'))
f.flush()
sys.argv.append(f.name)
return_code = register_nodes.main()
self.assertEqual(1, return_code)

@@ -1,44 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import mock
from os_cloud_config.cmd import setup_endpoints
from os_cloud_config.tests import base
class SetupEndpointsTest(base.TestCase):
@mock.patch('os_cloud_config.cmd.utils._clients.get_keystone_client',
return_value='keystone_client_mock')
@mock.patch('os_cloud_config.keystone.setup_endpoints')
@mock.patch.object(
sys, 'argv',
['setup-endpoints', '-s', '{"nova": {"password": "123"}}',
'-p', '192.0.2.28', '-r', 'EC'])
@mock.patch.dict('os.environ', {
'OS_USERNAME': 'admin',
'OS_PASSWORD': 'password',
'OS_TENANT_NAME': 'admin',
'OS_AUTH_URL': 'http://localhost:5000'})
def test_script(self, setup_endpoints_mock, get_keystone_client_mock):
setup_endpoints.main()
get_keystone_client_mock.assert_called_once_with()
setup_endpoints_mock.assert_called_once_with(
{'nova': {'password': '123'}},
public_host='192.0.2.28',
region='EC',
os_auth_url="http://localhost:5000",
client="keystone_client_mock")

@@ -1,100 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import tempfile
import mock
from os_cloud_config.cmd import setup_flavors
from os_cloud_config.tests import base
class SetupFlavorsTest(base.TestCase):
@mock.patch('os_cloud_config.cmd.utils._clients.get_nova_bm_client')
@mock.patch('os_cloud_config.flavors.create_flavors_from_nodes')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['setup-flavors', '--nodes', '-k', 'aaa',
'-r', 'zzz'])
def test_with_arguments_nodes(self, create_flavors_mock,
get_nova_bm_client_mock):
with tempfile.NamedTemporaryFile() as f:
f.write(u'{}\n'.encode('utf-8'))
f.flush()
sys.argv.insert(2, f.name)
return_code = setup_flavors.main()
create_flavors_mock.assert_called_once_with(
get_nova_bm_client_mock(), {}, 'aaa', 'zzz', None)
self.assertEqual(0, return_code)
@mock.patch('os_cloud_config.cmd.utils._clients.get_nova_bm_client')
@mock.patch('os_cloud_config.flavors.create_flavors_from_nodes')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a',
'ROOT_DISK': '10'})
@mock.patch.object(sys, 'argv', ['setup-flavors', '--nodes', '-k', 'aaa',
'-r', 'zzz'])
def test_with_arguments_nodes_root_disk(self, create_flavors_mock,
get_nova_bm_client_mock):
with tempfile.NamedTemporaryFile() as f:
f.write(u'{}\n'.encode('utf-8'))
f.flush()
sys.argv.insert(2, f.name)
return_code = setup_flavors.main()
create_flavors_mock.assert_called_once_with(
get_nova_bm_client_mock(), {}, 'aaa', 'zzz', '10')
self.assertEqual(0, return_code)
@mock.patch('os_cloud_config.cmd.utils._clients.get_nova_bm_client')
@mock.patch('os_cloud_config.flavors.create_flavors_from_list')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['setup-flavors', '--flavors', '-k', 'aaa',
'-r', 'zzz'])
def test_with_arguments_flavors(self, create_flavors_mock,
get_nova_bm_client_mock):
with tempfile.NamedTemporaryFile() as f:
f.write(u'{}\n'.encode('utf-8'))
f.flush()
sys.argv.insert(2, f.name)
return_code = setup_flavors.main()
create_flavors_mock.assert_called_once_with(
get_nova_bm_client_mock(), {}, 'aaa', 'zzz')
self.assertEqual(0, return_code)
@mock.patch('os_cloud_config.cmd.utils._clients.get_nova_bm_client',
return_value='nova_bm_client_mock')
@mock.patch('os_cloud_config.flavors.create_flavors_from_nodes')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['setup-flavors', '--nodes', '-k', 'aaa',
'-r', 'zzz'])
def test_with_exception(self, create_flavors_mock,
get_nova_bm_client_mock):
create_flavors_mock.side_effect = ValueError
with tempfile.NamedTemporaryFile() as f:
f.write(u'{}\n'.encode('utf-8'))
f.flush()
sys.argv.insert(2, f.name)
return_code = setup_flavors.main()
self.assertEqual(1, return_code)

@@ -1,64 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import sys
import tempfile
import mock
from os_cloud_config.cmd import setup_neutron
from os_cloud_config.tests import base
class SetupNeutronTest(base.TestCase):
@mock.patch('os_cloud_config.cmd.utils._clients.get_keystone_client',
return_value='keystone_client_mock')
@mock.patch('os_cloud_config.cmd.utils._clients.get_neutron_client',
return_value='neutron_client_mock')
@mock.patch('os_cloud_config.neutron.initialize_neutron')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['setup-neutron', '--network-json'])
def test_with_arguments(self, initialize_mock, get_neutron_client_mock,
get_keystone_client_mock):
network_desc = {'physical': {'metadata_server': 'foo.bar'}}
with tempfile.NamedTemporaryFile() as f:
f.write(json.dumps(network_desc).encode('utf-8'))
f.flush()
sys.argv.append(f.name)
return_code = setup_neutron.main()
get_keystone_client_mock.assert_called_once_with()
get_neutron_client_mock.assert_called_once_with()
initialize_mock.assert_called_once_with(
network_desc,
neutron_client='neutron_client_mock',
keystone_client='keystone_client_mock')
self.assertEqual(0, return_code)
@mock.patch('os_cloud_config.neutron.initialize_neutron')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['setup-neutron', '--network-json'])
def test_with_exception(self, initialize_mock):
initialize_mock.side_effect = ValueError
with tempfile.NamedTemporaryFile() as f:
f.write('{}\n'.encode('utf-8'))
f.flush()
sys.argv.append(f.name)
return_code = setup_neutron.main()
self.assertEqual(1, return_code)

@@ -1,38 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import mock
from os_cloud_config.cmd import upload_kernel_ramdisk
from os_cloud_config.tests import base
class UploadKernelRamdiskTest(base.TestCase):
@mock.patch('os_cloud_config.cmd.utils._clients.get_glance_client',
return_value='glance_client_mock')
@mock.patch('os_cloud_config.glance.create_or_find_kernel_and_ramdisk')
@mock.patch.dict('os.environ', {'OS_USERNAME': 'a', 'OS_PASSWORD': 'a',
'OS_TENANT_NAME': 'a', 'OS_AUTH_URL': 'a'})
@mock.patch.object(sys, 'argv', ['upload_kernel_ramdisk', '-k',
'bm-kernel', '-r', 'bm-ramdisk', '-l',
'kernel-file', '-s', 'ramdisk-file'])
def test_with_arguments(self, create_or_find_mock, glanceclient_mock):
upload_kernel_ramdisk.main()
create_or_find_mock.assert_called_once_with(
'glance_client_mock', 'bm-kernel', 'bm-ramdisk',
kernel_path='kernel-file', ramdisk_path='ramdisk-file')

@@ -1,56 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import textwrap
from os_cloud_config.cmd.utils import _clients as clients
from os_cloud_config.cmd.utils import environment
from os_cloud_config import glance
def parse_args():
description = textwrap.dedent("""
Uploads the provided kernel and ramdisk to a Glance store.
""")
parser = argparse.ArgumentParser(
description=description,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument('-k', '--kernel', dest='kernel',
help='Name of the kernel image', required=True)
parser.add_argument('-l', '--kernel-file', dest='kernel_file',
help='Kernel to upload', required=True)
parser.add_argument('-r', '--ramdisk', dest='ramdisk',
help='Name of the ramdisk image', required=True)
parser.add_argument('-s', '--ramdisk-file', dest='ramdisk_file',
help='Ramdisk to upload', required=True)
environment._add_logging_arguments(parser)
return parser.parse_args()
def main():
args = parse_args()
environment._configure_logging(args)
try:
environment._ensure()
client = clients.get_glance_client()
glance.create_or_find_kernel_and_ramdisk(
client, args.kernel, args.ramdisk, kernel_path=args.kernel_file,
ramdisk_path=args.ramdisk_file)
except Exception:
logging.exception("Unexpected error during command execution")
return 1
return 0

@@ -1,51 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from os_cloud_config.utils import clients
LOG = logging.getLogger(__name__)
def _get_client_args():
return (os.environ["OS_USERNAME"],
os.environ["OS_PASSWORD"],
os.environ["OS_TENANT_NAME"],
os.environ["OS_AUTH_URL"],
os.environ.get("OS_CACERT"))
def get_nova_bm_client():
return clients.get_nova_bm_client(*_get_client_args())
def get_ironic_client():
return clients.get_ironic_client(*_get_client_args())
def get_keystone_client():
return clients.get_keystone_client(*_get_client_args())
def get_keystone_v3_client():
return clients.get_keystone_v3_client(*_get_client_args())
def get_neutron_client():
return clients.get_neutron_client(*_get_client_args())
def get_glance_client():
return clients.get_glance_client(*_get_client_args())
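The credential tuple assembled by `_get_client_args` can be illustrated with an explicit mapping (reimplemented here for demonstration; note that `OS_CACERT` is optional and defaults to None):

```python
def client_args_from(environ):
    # Mirrors _get_client_args above, but reads from any mapping
    # instead of os.environ directly, which makes it easy to test.
    return (environ["OS_USERNAME"],
            environ["OS_PASSWORD"],
            environ["OS_TENANT_NAME"],
            environ["OS_AUTH_URL"],
            environ.get("OS_CACERT"))
```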

@@ -1,43 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import os_cloud_config
from os_cloud_config import exception
def _ensure():
environ = ("OS_USERNAME", "OS_PASSWORD", "OS_AUTH_URL", "OS_TENANT_NAME")
missing = set(environ).difference(os.environ)
plural = "s are"
if missing:
if len(missing) == 1:
plural = " is"
message = ("%s environment variable%s required to be set." % (
", ".join(sorted(missing)), plural))
raise exception.MissingEnvironment(message)
def _add_logging_arguments(parser):
group = parser.add_mutually_exclusive_group()
group.add_argument('--debug', action='store_true',
help='set logging level to DEBUG (default is INFO)')
group.add_argument('--log-config',
help='external logging configuration file')
def _configure_logging(args):
os_cloud_config.configure_logging(args)
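The check in `_ensure` above reduces to a small pure function. This sketch (`missing_env_message` is an illustrative name, not part of the original module) reproduces the message construction so it can be tested without touching `os.environ`:

```python
import os

# The four variables the deleted _ensure() required.
REQUIRED = ("OS_USERNAME", "OS_PASSWORD", "OS_AUTH_URL", "OS_TENANT_NAME")


def missing_env_message(environ=None):
    """Return the _ensure() error message, or None when nothing is missing."""
    environ = os.environ if environ is None else environ
    missing = sorted(set(REQUIRED).difference(environ))
    if not missing:
        return None
    plural = " is" if len(missing) == 1 else "s are"
    return "%s environment variable%s required to be set." % (
        ", ".join(missing), plural)
```

The sorted join is what makes the message deterministic, which is also what the unit tests further down in this diff rely on.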


@@ -1,108 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from os_cloud_config.cmd.utils import _clients as clients
from os_cloud_config.tests import base
class CMDClientsTest(base.TestCase):
@mock.patch.dict('os.environ', {'OS_USERNAME': 'username',
'OS_PASSWORD': 'password',
'OS_TENANT_NAME': 'tenant',
'OS_AUTH_URL': 'auth_url',
'OS_CACERT': 'cacert'})
def test___get_client_args(self):
result = clients._get_client_args()
expected = ("username", "password", "tenant", "auth_url", "cacert")
self.assertEqual(result, expected)
@mock.patch('os.environ')
@mock.patch('ironicclient.client.get_client')
def test_get_ironic_client(self, client_mock, environ):
clients.get_ironic_client()
client_mock.assert_called_once_with(
1, os_username=environ["OS_USERNAME"],
os_password=environ["OS_PASSWORD"],
os_auth_url=environ["OS_AUTH_URL"],
os_tenant_name=environ["OS_TENANT_NAME"],
ca_file=environ.get("OS_CACERT"))
@mock.patch('os.environ')
@mock.patch('novaclient.client.Client')
def test_get_nova_bm_client(self, client_mock, environ):
clients.get_nova_bm_client()
client_mock.assert_called_once_with("2", environ["OS_USERNAME"],
environ["OS_PASSWORD"],
environ["OS_AUTH_URL"],
environ["OS_TENANT_NAME"],
cacert=environ.get("OS_CACERT"),
extensions=[mock.ANY])
@mock.patch('os.environ')
@mock.patch('keystoneclient.v2_0.client.Client')
def test_get_keystone_client(self, client_mock, environ):
clients.get_keystone_client()
client_mock.assert_called_once_with(
username=environ["OS_USERNAME"],
password=environ["OS_PASSWORD"],
auth_url=environ["OS_AUTH_URL"],
tenant_name=environ["OS_TENANT_NAME"],
cacert=environ.get("OS_CACERT"))
@mock.patch('os.environ')
@mock.patch('keystoneclient.v3.client.Client')
def test_get_keystone_v3_client(self, client_mock, environ):
clients.get_keystone_v3_client()
client_mock.assert_called_once_with(
username=environ["OS_USERNAME"],
password=environ["OS_PASSWORD"],
auth_url=environ["OS_AUTH_URL"].replace('v2.0', 'v3'),
tenant_name=environ["OS_TENANT_NAME"],
cacert=environ.get("OS_CACERT"))
@mock.patch('os.environ')
@mock.patch('neutronclient.neutron.client.Client')
def test_get_neutron_client(self, client_mock, environ):
clients.get_neutron_client()
client_mock.assert_called_once_with(
'2.0', username=environ["OS_USERNAME"],
password=environ["OS_PASSWORD"],
auth_url=environ["OS_AUTH_URL"],
tenant_name=environ["OS_TENANT_NAME"],
ca_cert=environ.get("OS_CACERT"))
@mock.patch('os.environ')
@mock.patch('keystoneclient.session.Session')
@mock.patch('keystoneclient.auth.identity.v2.Password')
@mock.patch('glanceclient.Client')
def test_get_glance_client(self, client_mock, password_mock, session_mock,
environ):
clients.get_glance_client()
tenant_name = environ["OS_TENANT_NAME"]
password_mock.assert_called_once_with(auth_url=environ["OS_AUTH_URL"],
username=environ["OS_USERNAME"],
password=environ["OS_PASSWORD"],
tenant_name=tenant_name)
session_mock.assert_called_once_with(auth=password_mock.return_value)
session_mock.return_value.get_endpoint.assert_called_once_with(
service_type='image', interface='public', region_name='regionOne')
session_mock.return_value.get_token.assert_called_once_with()
client_mock.assert_called_once_with(
'1', endpoint=session_mock.return_value.get_endpoint.return_value,
token=session_mock.return_value.get_token.return_value,
cacert=environ.get('OS_CACERT'))


@@ -1,45 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
import testtools
from os_cloud_config.cmd.utils import environment
from os_cloud_config import exception
from os_cloud_config.tests import base
class CMDEnviromentTest(base.TestCase):
@mock.patch.dict('os.environ', {})
def test_ensure_environment_missing_all(self):
message = ("OS_AUTH_URL, OS_PASSWORD, OS_TENANT_NAME, OS_USERNAME "
"environment variables are required to be set.")
with testtools.ExpectedException(exception.MissingEnvironment,
message):
environment._ensure()
@mock.patch.dict('os.environ', {'OS_PASSWORD': 'a', 'OS_AUTH_URL': 'a',
'OS_TENANT_NAME': 'a'})
def test_ensure_environment_missing_username(self):
message = "OS_USERNAME environment variable is required to be set."
with testtools.ExpectedException(exception.MissingEnvironment,
message):
environment._ensure()
@mock.patch.dict('os.environ', {'OS_PASSWORD': 'a', 'OS_AUTH_URL': 'a',
'OS_TENANT_NAME': 'a', 'OS_USERNAME': 'a'})
def test_ensure_environment_missing_none(self):
self.assertIs(None, environment._ensure())


@@ -1,54 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from os_cloud_config._i18n import _
LOG = logging.getLogger(__name__)
class CloudConfigException(Exception):
"""Base os-cloud-config exception
To correctly use this class, inherit from it and define
a 'msg_fmt' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
msg_fmt = _("An unknown exception occurred.")
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if not message:
try:
message = self.msg_fmt % kwargs
except Exception:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception('Exception in string format operation')
for name, value in kwargs.items():
LOG.error("%s: %s" % (name, value))
# at least get the core message out if something happened
message = self.msg_fmt
super(CloudConfigException, self).__init__(message)
class MissingEnvironment(CloudConfigException):
message = "Required environment variables are not set."
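The printf-style base exception removed here is a pattern worth keeping. A minimal Python 3 re-creation (the names `CloudConfigError` and `NodeNotFound` are hypothetical, not from the original module) shows how `msg_fmt` interacts with the constructor kwargs:

```python
class CloudConfigError(Exception):
    """Re-creation of the printf-style base exception (hypothetical names)."""
    msg_fmt = "An unknown exception occurred."

    def __init__(self, message=None, **kwargs):
        self.kwargs = kwargs
        if not message:
            try:
                message = self.msg_fmt % kwargs
            except (KeyError, TypeError):
                # kwargs did not match the format string; fall back to the
                # raw template so at least the core message gets out.
                message = self.msg_fmt
        super().__init__(message)


class NodeNotFound(CloudConfigError):
    msg_fmt = "Node %(uuid)s could not be found."
```

Subclasses only declare a template; the base class does the substitution and degrades gracefully when the kwargs do not match.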


@@ -1,103 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
LOG = logging.getLogger(__name__)
def cleanup_flavors(client, names=('m1.tiny', 'm1.small', 'm1.medium',
'm1.large', 'm1.xlarge')):
LOG.debug('Cleaning up non-baremetal flavors.')
for flavor in client.flavors.list():
if flavor.name in names:
client.flavors.delete(flavor.id)
def check_node_properties(node):
for node_property in 'memory_mb', 'local_gb', 'cpus', 'cpu_arch':
if not node.properties.get(node_property):
LOG.warning('node %s does not have %s set. Not creating flavor '
'from node.', node.uuid, node_property)
return False
return True
def create_flavors_from_ironic(client, ironic_client, kernel, ramdisk,
root_disk):
node_list = []
for node in ironic_client.node.list(detail=True):
if not check_node_properties(node):
continue
node_list.append({
'memory': node.properties['memory_mb'],
'disk': node.properties['local_gb'],
'cpu': node.properties['cpus'],
'arch': node.properties['cpu_arch']})
create_flavors_from_nodes(client, node_list, kernel, ramdisk, root_disk)
def create_flavors_from_nodes(client, node_list, kernel, ramdisk, root_disk):
LOG.debug('Populating flavors to create from node list.')
node_details = set()
for node in node_list:
disk = node['disk']
ephemeral = 0
if root_disk:
disk = str(root_disk)
ephemeral = str(int(node['disk']) - int(root_disk))
node_details.add((node['memory'], disk, node['cpu'], node['arch'],
ephemeral))
flavor_list = []
for node in node_details:
new_flavor = {'memory': node[0], 'disk': node[1], 'cpu': node[2],
'arch': node[3], 'ephemeral': node[4]}
name = 'baremetal_%(memory)s_%(disk)s_%(ephemeral)s_%(cpu)s' % (
new_flavor)
new_flavor['name'] = name
flavor_list.append(new_flavor)
create_flavors_from_list(client, flavor_list, kernel, ramdisk)
def create_flavors_from_list(client, flavor_list, kernel, ramdisk):
LOG.debug('Creating flavors from flavors list.')
for flavor in filter_existing_flavors(client, flavor_list):
flavor.update({'kernel': kernel, 'ramdisk': ramdisk})
LOG.debug('Creating %(name)s flavor with memory %(memory)s, '
'disk %(disk)s, cpu %(cpu)s, %(arch)s arch.' % flavor)
_create_flavor(client, flavor)
def filter_existing_flavors(client, flavor_list):
flavors = client.flavors.list()
names_to_create = (set([f['name'] for f in flavor_list]) -
set([f.name for f in flavors]))
flavors_to_create = [f for f in flavor_list
if f['name'] in names_to_create]
return flavors_to_create
def _create_flavor(client, flavor_desc):
flavor = client.flavors.create(flavor_desc['name'], flavor_desc['memory'],
flavor_desc['cpu'], flavor_desc['disk'],
None, ephemeral=flavor_desc['ephemeral'])
bm_prefix = 'baremetal:deploy'
flavor_metadata = {'cpu_arch': flavor_desc['arch'],
'%s_kernel_id' % bm_prefix: flavor_desc['kernel'],
'%s_ramdisk_id' % bm_prefix: flavor_desc['ramdisk']}
if flavor_desc.get('extra_specs'):
flavor_metadata.update(flavor_desc['extra_specs'])
flavor.set_keys(metadata=flavor_metadata)
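The dedup-and-name step in `create_flavors_from_nodes` can be factored into a pure function. This sketch (`flavor_defs` is an illustrative name) mirrors the set-based de-duplication and the `baremetal_*` naming scheme:

```python
def flavor_defs(node_list, root_disk=None):
    """Deduplicate node specs and derive baremetal flavor definitions."""
    details = set()
    for node in node_list:
        disk, ephemeral = node['disk'], 0
        if root_disk:
            # Reserve root_disk for the image; the remainder is ephemeral.
            disk = str(root_disk)
            ephemeral = str(int(node['disk']) - int(root_disk))
        details.add((node['memory'], disk, node['cpu'], node['arch'],
                     ephemeral))
    flavors = []
    for memory, disk, cpu, arch, ephemeral in sorted(details, key=str):
        flavor = {'memory': memory, 'disk': disk, 'cpu': cpu, 'arch': arch,
                  'ephemeral': ephemeral}
        flavor['name'] = ('baremetal_%(memory)s_%(disk)s_%(ephemeral)s_'
                          '%(cpu)s' % flavor)
        flavors.append(flavor)
    return flavors
```

Because identical hardware collapses into one tuple, a hundred identical nodes still yield a single flavor definition.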


@@ -1,65 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import logging
from glanceclient import exc
LOG = logging.getLogger(__name__)
def create_or_find_kernel_and_ramdisk(glanceclient, kernel_name, ramdisk_name,
kernel_path=None, ramdisk_path=None,
skip_missing=False):
"""Find or create a given kernel and ramdisk in Glance.
If kernel_path or ramdisk_path is None, the corresponding image will not
be created, and an exception will be raised if it does not already exist
in Glance.
:param glanceclient: A client for Glance.
:param kernel_name: Name to search for or create for the kernel.
:param ramdisk_name: Name to search for or create for the ramdisk.
:param kernel_path: Path to the kernel on disk.
:param ramdisk_path: Path to the ramdisk on disk.
:param skip_missing: If `True', do not raise an exception if either the
kernel or ramdisk image is not found.
:returns: A dictionary mapping kernel or ramdisk to the ID in Glance.
"""
kernel_image = _upload_file(glanceclient, kernel_name, kernel_path,
'aki', 'Kernel', skip_missing=skip_missing)
ramdisk_image = _upload_file(glanceclient, ramdisk_name, ramdisk_path,
'ari', 'Ramdisk', skip_missing=skip_missing)
return {'kernel': kernel_image.id, 'ramdisk': ramdisk_image.id}
def _upload_file(glanceclient, name, path, disk_format, type_name,
skip_missing=False):
image_tuple = collections.namedtuple('image', ['id'])
try:
image = glanceclient.images.find(name=name, disk_format=disk_format)
except exc.HTTPNotFound:
if path:
image = glanceclient.images.create(
name=name, disk_format=disk_format, is_public=True,
data=open(path, 'rb'))
else:
if skip_missing:
image = image_tuple(None)
else:
raise ValueError("%s image not found in Glance, and no path "
"specified." % type_name)
return image
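The branching in `_upload_file` is easier to see as a standalone decision function. `resolve_image` below is a hypothetical refactoring sketch, not part of the original module:

```python
def resolve_image(found_id, path, skip_missing=False):
    """Decision logic of _upload_file as a pure function.

    found_id is the Glance image id when the lookup succeeded, else None.
    Returns 'found', 'create', or 'skip'; raises ValueError otherwise.
    """
    if found_id is not None:
        return 'found'          # image already in Glance, nothing to upload
    if path:
        return 'create'         # upload the file at `path`
    if skip_missing:
        return 'skip'           # caller accepts a placeholder with id=None
    raise ValueError("image not found in Glance, and no path specified.")
```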


@@ -1,645 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import logging
import socket
import subprocess
import time
from keystoneclient import exceptions
import keystoneclient.v2_0.client as ksclient_v2
import keystoneclient.v3.client as ksclient_v3
from six.moves.urllib.parse import urlparse
from os_cloud_config.utils import clients
LOG = logging.getLogger(__name__)
SERVICES = {
'heat': {
'description': 'Heat Service',
'type': 'orchestration',
'path': '/v1/%(tenant_id)s',
'port': 8004,
'ssl_port': 13004,
},
'heatcfn': {
'description': 'Heat CloudFormation Service',
'type': 'cloudformation',
'path': '/v1',
'port': 8000,
'ssl_port': 13005,
},
'neutron': {
'description': 'Neutron Service',
'type': 'network',
'port': 9696,
'ssl_port': 13696,
},
'glance': {
'description': 'Glance Image Service',
'type': 'image',
'port': 9292,
'ssl_port': 13292,
},
'ec2': {
'description': 'EC2 Compatibility Layer',
'type': 'ec2',
'path': '/services/Cloud',
'admin_path': '/services/Admin',
'port': 8773,
'ssl_port': 13773,
},
'nova': {
'description': 'Nova Compute Service',
'type': 'compute',
'path': '/v2.1/$(tenant_id)s',
'port': 8774,
'ssl_port': 13774,
},
'ceilometer': {
'description': 'Ceilometer Service',
'type': 'metering',
'port': 8777,
'ssl_port': 13777,
},
'gnocchi': {
'description': 'OpenStack Metric Service',
'type': 'metric',
'port': 8041,
'ssl_port': 13041,
},
'aodh': {
'description': 'OpenStack Alarming Service',
'type': 'alarming',
'port': 8042,
'ssl_port': 13042,
},
'cinder': {
'description': 'Cinder Volume Service',
'type': 'volume',
'path': '/v1/%(tenant_id)s',
'port': 8776,
'ssl_port': 13776,
},
'cinderv2': {
'description': 'Cinder Volume Service v2',
'type': 'volumev2',
'path': '/v2/%(tenant_id)s',
'port': 8776,
'ssl_port': 13776,
},
'swift': {
'description': 'Swift Object Storage Service',
'type': 'object-store',
'path': '/v1/AUTH_%(tenant_id)s',
'admin_path': '/v1',
'port': 8080,
'ssl_port': 13808,
},
'horizon': {
'description': 'OpenStack Dashboard',
'type': 'dashboard',
'nouser': True,
'path': '/',
'admin_path': '/admin'
},
'ironic': {
'description': 'Ironic Service',
'type': 'baremetal',
'port': 6385,
'ssl_port': 13385,
},
'tuskar': {
'description': 'Tuskar Service',
'type': 'management',
'path': '/v2',
'port': 8585
},
'manila': {
'description': 'Manila Service',
'type': 'share',
'path': '/v1/%(tenant_id)s',
'port': 8786,
'ssl_port': 13786,
},
'sahara': {
'description': 'Sahara Service',
'type': 'data-processing',
'path': '/v1.1/%(tenant_id)s',
'port': 8386
},
'trove': {
'description': 'Trove Service',
'type': 'database',
'path': '/v1.0/%(tenant_id)s',
'port': 8779
},
'congress': {
'description': 'Congress Service',
'type': 'policy',
'path': '/',
'port': 1789
}
}
def initialize(host, admin_token, admin_email, admin_password,
region='regionOne', ssl=None, public=None, user='root',
timeout=600, poll_interval=10, pki_setup=True, admin=None,
internal=None, public_port=None, admin_port=None,
internal_port=None):
"""Perform post-heat initialization of Keystone.
:param host: ip/hostname of node where Keystone is running
:param admin_token: admin token to use with Keystone's admin endpoint
:param admin_email: admin user's e-mail address to be set
:param admin_password: admin user's password to be set
:param region: region to create the endpoint in
:param ssl: ip/hostname to use as the ssl endpoint, if required
:param public: ip/hostname to use as the public endpoint, if the default
is not suitable
:param user: user to use to connect to the node where Keystone is running
:param timeout: Total seconds to wait for keystone to be running
:param poll_interval: Seconds to wait between keystone poll attempts
:param pki_setup: Boolean for running pki_setup conditionally
:param admin: ip/hostname to use as the admin endpoint, if the
default is not suitable
:param internal: ip/hostname to use as the internal endpoint, if the
default is not suitable
:param public_port: port to be used for the public endpoint, if default is
not suitable
:param admin_port: port to be used for the admin endpoint, if default is
not suitable
:param internal_port: port to be used for the internal endpoint, if
default is not suitable
"""
keystone_v2 = _create_admin_client_v2(host, admin_token)
keystone_v3 = _create_admin_client_v3(host, admin_token, ssl)
_create_roles(keystone_v2, timeout, poll_interval)
_create_tenants(keystone_v2)
_create_admin_user(keystone_v2, admin_email, admin_password)
_grant_admin_user_roles(keystone_v3)
_create_keystone_endpoint(keystone_v2, host, region, ssl, public, admin,
internal, public_port, admin_port, internal_port)
if pki_setup:
print("PKI initialization in init-keystone is deprecated and will be "
"removed.")
_perform_pki_initialization(host, user)
def initialize_for_swift(host, admin_token, ssl=None, public=None):
"""Create roles in Keystone for use with Swift.
:param host: ip/hostname of node where Keystone is running
:param admin_token: admin token to use with Keystone's admin endpoint
:param ssl: ip/hostname to use as the ssl endpoint, if required
:param public: ip/hostname to use as the public endpoint, if the default
is not suitable
"""
LOG.warning('This function is deprecated.')
keystone = _create_admin_client_v2(host, admin_token, public)
LOG.debug('Creating swiftoperator role.')
keystone.roles.create('swiftoperator')
LOG.debug('Creating ResellerAdmin role.')
keystone.roles.create('ResellerAdmin')
def initialize_for_heat(keystone, domain_admin_password):
"""Create Heat domain and an admin user for it.
:param keystone: A keystone v3 client
:param domain_admin_password: heat domain admin's password to be set
"""
try:
heat_domain = keystone.domains.find(name='heat')
LOG.debug('Domain heat already exists.')
except exceptions.NotFound:
LOG.debug('Creating heat domain.')
heat_domain = keystone.domains.create(
'heat',
description='Owns users and tenants created by heat'
)
try:
heat_admin = keystone.users.find(name='heat_domain_admin')
LOG.debug('Heat domain admin already exists.')
except exceptions.NotFound:
LOG.debug('Creating heat_domain_admin user.')
heat_admin = keystone.users.create(
'heat_domain_admin',
description='Manages users and tenants created by heat',
domain=heat_domain,
password=domain_admin_password,
)
LOG.debug('Granting admin role to heat_domain_admin user on heat domain.')
admin_role = keystone.roles.find(name='admin')
keystone.roles.grant(admin_role, user=heat_admin, domain=heat_domain)
def _create_role(keystone, name):
"""Helper for idempotent creating of role
:param keystone: keystone v2 client
:param name: name of the role
"""
role = keystone.roles.findall(name=name)
if role:
LOG.info("Role %s was already created." % name)
else:
LOG.debug("Creating %s role." % name)
keystone.roles.create(name)
def _create_tenant(keystone, name):
"""Helper for idempotent creating of tenant
:param keystone: keystone v2 client
:param name: name of the tenant
"""
tenants = keystone.tenants.findall(name=name)
if tenants:
LOG.info("Tenant %s was already created." % name)
else:
LOG.debug("Creating %s tenant." % name)
keystone.tenants.create(name, None)
def _setup_roles(keystone):
"""Create roles in Keystone for all services.
:param keystone: keystone v2 client
"""
# Create roles in Keystone for use with Swift.
_create_role(keystone, 'swiftoperator')
_create_role(keystone, 'ResellerAdmin')
# Create Heat role.
_create_role(keystone, 'heat_stack_user')
def _create_service(keystone, name, service_type, description=""):
"""Helper for idempotent creating of service.
:param keystone: keystone v2 client
:param name: service name
:param service_type: unique service type
:param description: service description
:return keystone service object
"""
existing_services = keystone.services.findall(type=service_type)
if existing_services:
LOG.info('Service %s for %s already created.', name, service_type)
kservice = existing_services[0]
else:
LOG.debug('Creating service for %s.', service_type)
kservice = keystone.services.create(
name, service_type, description=description)
return kservice
def _create_endpoint(keystone, region, service_id, public_uri, admin_uri,
internal_uri):
"""Helper for idempotent creating of endpoint.
:param keystone: keystone v2 client
:param region: endpoint region
:param service_id: id of associated service
:param public_uri: endpoint public uri
:param admin_uri: endpoint admin uri
:param internal_uri: endpoint internal uri
"""
if keystone.endpoints.findall(publicurl=public_uri):
LOG.info('Endpoint for service %s and public uri %s '
'already exists.', service_id, public_uri)
else:
LOG.debug('Creating endpoint for service %s.', service_id)
keystone.endpoints.create(
region, service_id, public_uri, admin_uri, internal_uri)
def setup_endpoints(endpoints, public_host=None, region=None, client=None,
os_username=None, os_password=None, os_tenant_name=None,
os_auth_url=None):
"""Create services endpoints in Keystone.
:param endpoints: dict containing endpoints data
:param public_host: ip/hostname used for public endpoint URI
:param region: endpoint location
"""
common_data = {
'internal_host': urlparse(os_auth_url).hostname,
'public_host': public_host
}
if not client:
LOG.warning('Creating client inline is deprecated, please pass '
'the client as parameter.')
client = clients.get_keystone_client(
os_username, os_password, os_tenant_name, os_auth_url)
# Setup roles first
_setup_roles(client)
# Create endpoints
LOG.debug('Creating service endpoints.')
for service, data in endpoints.items():
conf = SERVICES[service].copy()
conf.update(common_data)
conf.update(data)
_register_endpoint(client, service, conf, region)
def is_valid_ipv6_address(address):
try:
socket.inet_pton(socket.AF_INET6, address)
except socket.error: # not a valid address
return False
except TypeError: # Not a string, e.g. None
return False
return True
def _register_endpoint(keystone, service, data, region=None):
"""Create single service endpoint in Keystone.
:param keystone: keystone v2 client
:param service: name of service
:param data: dict containing endpoint configuration
:param region: endpoint location
"""
path = data.get('path', '/')
internal_host = data.get('internal_host')
if is_valid_ipv6_address(internal_host):
internal_host = '[{host}]'.format(host=internal_host)
port = data.get('port')
internal_uri = 'http://{host}:{port}{path}'.format(
host=internal_host, port=port, path=path)
public_host = data.get('public_host')
if is_valid_ipv6_address(public_host):
public_host = '[{host}]'.format(host=public_host)
public_protocol = 'http'
public_port = port
if public_host and 'ssl_port' in data:
public_port = data.get('ssl_port')
public_protocol = 'https'
public_uri = '{protocol}://{host}:{port}{path}'.format(
protocol=public_protocol,
host=public_host or internal_host,
port=public_port,
path=path)
admin_uri = 'http://{host}:{port}{path}'.format(
host=internal_host,
port=data.get('admin_port', port),
path=data.get('admin_path', path))
name = data.get('name', service)
if not data.get('nouser'):
_create_user_for_service(keystone, name, data.get('password', None))
kservice = _create_service(
keystone, name, data.get('type'), description=data.get('description'))
if kservice:
_create_endpoint(
keystone, region or 'regionOne', kservice.id,
public_uri, admin_uri, internal_uri)
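The URL assembly in `_register_endpoint` (IPv6 bracketing plus SSL port selection) can be exercised in isolation. This sketch (`_bracket` and `build_uris` are illustrative names) mirrors the same rules:

```python
import socket


def _bracket(host):
    """Wrap IPv6 literals in brackets for use inside a URL."""
    try:
        socket.inet_pton(socket.AF_INET6, host)
    except (OSError, TypeError, ValueError):
        return host
    return '[{host}]'.format(host=host)


def build_uris(data):
    """Rebuild the internal/public URI pair the way _register_endpoint does."""
    path = data.get('path', '/')
    port = data.get('port')
    internal = 'http://{h}:{p}{path}'.format(
        h=_bracket(data['internal_host']), p=port, path=path)
    public_host = data.get('public_host')
    proto, public_port = 'http', port
    if public_host and 'ssl_port' in data:
        # A distinct public host with an ssl_port means TLS termination.
        proto, public_port = 'https', data['ssl_port']
    public = '{proto}://{h}:{p}{path}'.format(
        proto=proto, h=_bracket(public_host or data['internal_host']),
        p=public_port, path=path)
    return internal, public
```

Note the asymmetry: only the public endpoint ever switches to https, matching the code above.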
def _create_user_for_service(keystone, name, password):
"""Create service specific user in Keystone.
:param keystone: keystone v2 client
:param name: user's name to be set
:param password: user's password to be set
"""
try:
keystone.users.find(name=name)
LOG.info('User %s already exists', name)
except ksclient_v2.exceptions.NotFound:
LOG.debug('Creating user %s.', name)
service_tenant = keystone.tenants.find(name='service')
user = keystone.users.create(name,
password,
tenant_id=service_tenant.id,
email='nobody@example.com')
admin_role = keystone.roles.find(name='admin')
keystone.roles.add_user_role(user, admin_role, service_tenant)
if name in ['ceilometer', 'gnocchi']:
reselleradmin_role = keystone.roles.find(name='ResellerAdmin')
keystone.roles.add_user_role(user, reselleradmin_role,
service_tenant)
def _create_admin_client_v2(host, admin_token, public=None):
"""Create Keystone v2 client for admin endpoint.
:param host: ip/hostname of node where Keystone is running
:param admin_token: admin token to use with Keystone's admin endpoint
:param ssl: ip/hostname to use as the ssl endpoint, if required
:param public: ip/hostname to use as the public endpoint, if default is
not suitable
"""
# It may not be readily obvious that admin v2 is never available
# via https. The SSL parameter is just the DNS name to use.
keystone_host = public or host
if is_valid_ipv6_address(keystone_host):
keystone_host = '[{host}]'.format(host=keystone_host)
admin_url = 'http://%s:35357/v2.0' % (keystone_host)
return ksclient_v2.Client(endpoint=admin_url, token=admin_token)
def _create_admin_client_v3(host, admin_token, ssl=None, public=None):
"""Create Keystone v3 client for admin endpoint.
:param host: ip/hostname of node where Keystone is running
:param admin_token: admin token to use with Keystone's admin endpoint
:param ssl: ip/hostname to use as the ssl endpoint, if required
:param public: ip/hostname to use as the public endpoint, if default is
not suitable
"""
keystone_host = public or host
if is_valid_ipv6_address(keystone_host):
keystone_host = '[{host}]'.format(host=keystone_host)
# TODO(bnemec): This should respect the ssl parameter, but right now we
# don't support running the admin endpoint behind ssl. Once that is
# fixed, this should use ssl when available.
admin_url = '%s://%s:35357/v3' % ('http', keystone_host)
return ksclient_v3.Client(endpoint=admin_url, token=admin_token)
def _create_roles(keystone, timeout=600, poll_interval=10):
"""Create initial roles in Keystone.
:param keystone: keystone v2 client
:param timeout: total seconds to wait for keystone
:param poll_interval: seconds to wait between keystone checks
"""
wait_cycles = int(timeout / poll_interval)
for count in range(wait_cycles):
try:
LOG.debug('Creating admin role, try %d.' % count)
_create_role(keystone, 'admin')
break
except (exceptions.ConnectionRefused, exceptions.ServiceUnavailable):
LOG.debug('Unable to create, sleeping for %d seconds.'
% poll_interval)
time.sleep(poll_interval)
def _create_tenants(keystone):
"""Create initial tenants in Keystone.
:param keystone: keystone v2 client
"""
_create_tenant(keystone, 'admin')
_create_tenant(keystone, 'service')
def _create_keystone_endpoint(keystone, host, region, ssl, public, admin,
internal, public_port=None, admin_port=None,
internal_port=None):
"""Create keystone endpoint in Keystone.
:param keystone: keystone v2 client
:param host: ip/hostname of node where Keystone is running
:param region: region to create the endpoint in
:param ssl: ip/hostname to use as the ssl endpoint, if required
:param public: ip/hostname to use as the public endpoint, if default is
not suitable
:param admin: ip/hostname to use as the admin endpoint, if the
default is not suitable
:param internal: ip/hostname to use as the internal endpoint, if the
default is not suitable
:param public_port: port to be used for the public endpoint, if default is
not suitable
:param admin_port: port to be used for the admin endpoint, if default is
not suitable
:param internal_port: port to be used for the internal endpoint, if
default is not suitable
"""
LOG.debug('Create keystone public endpoint')
service = _create_service(keystone, 'keystone', 'identity',
description='Keystone Identity Service')
if is_valid_ipv6_address(host):
host = '[{host}]'.format(host=host)
if is_valid_ipv6_address(ssl):
ssl = '[{host}]'.format(host=ssl)
if is_valid_ipv6_address(public):
public = '[{host}]'.format(host=public)
if is_valid_ipv6_address(admin):
admin = '[{host}]'.format(host=admin)
if is_valid_ipv6_address(internal):
internal = '[{host}]'.format(host=internal)
url_template = '{proto}://{host}:{port}/v2.0'
public_url = url_template.format(proto='http', host=host,
port=public_port or 5000)
if ssl:
public_url = url_template.format(proto='https', host=ssl,
port=public_port or 13000)
elif public:
public_url = url_template.format(proto='http', host=public,
port=public_port or 5000)
admin_url = url_template.format(proto='http', host=host,
port=admin_port or 35357)
if admin:
admin_url = url_template.format(proto='http', host=admin,
port=admin_port or 35357)
internal_url = url_template.format(proto='http', host=host,
port=internal_port or 5000)
if internal:
internal_url = url_template.format(proto='http', host=internal,
port=internal_port or 5000)
_create_endpoint(keystone, region, service.id, public_url, admin_url,
internal_url)
def _perform_pki_initialization(host, user):
"""Perform PKI initialization on a host for Keystone.
:param host: ip/hostname of node where Keystone is running
"""
subprocess.check_call(["ssh", "-o" "StrictHostKeyChecking=no", "-t",
"-l", user, host, "sudo", "keystone-manage",
"pki_setup", "--keystone-user",
"$(getent passwd | grep '^keystone' | cut -d: -f1)",
"--keystone-group",
"$(getent group | grep '^keystone' | cut -d: -f1)"])
def _create_admin_user(keystone, admin_email, admin_password):
"""Create admin user in Keystone.
:param keystone: keystone v2 client
:param admin_email: admin user's e-mail address to be set
:param admin_password: admin user's password to be set
"""
admin_tenant = keystone.tenants.find(name='admin')
try:
keystone.users.find(name='admin')
LOG.info('Admin user already exists, skipping creation.')
except exceptions.NotFound:
LOG.info('Creating admin user.')
keystone.users.create('admin', email=admin_email,
password=admin_password,
tenant_id=admin_tenant.id)
def _grant_admin_user_roles(keystone_v3):
"""Grant admin user roles with admin project and default domain.
:param keystone_v3: keystone v3 client
"""
admin_role = keystone_v3.roles.list(name='admin')[0]
# TODO(mmagr): Get back to filtering by id as soon as the following bug
# is fixed: https://bugs.launchpad.net/python-keystoneclient/+bug/1452298
default_domain = keystone_v3.domains.list(name='default')[0]
admin_user = keystone_v3.users.list(domain=default_domain, name='admin')[0]
admin_project = keystone_v3.projects.list(domain=default_domain,
name='admin')[0]
if admin_role in keystone_v3.roles.list(user=admin_user,
project=admin_project):
LOG.info('Admin user is already granted admin role with admin project')
else:
LOG.info('Granting admin role to admin user on admin project.')
keystone_v3.roles.grant(admin_role, user=admin_user,
project=admin_project)
if admin_role in keystone_v3.roles.list(user=admin_user,
domain=default_domain):
LOG.info('Admin user is already granted admin role with default '
'domain')
else:
LOG.info('Granting admin role to admin user on default domain.')
keystone_v3.roles.grant(admin_role, user=admin_user,
domain=default_domain)
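The IPv6 bracket-wrapping and URL templating used by `register_endpoint` above can be sketched standalone (a minimal sketch; `format_host` and `build_url` are hypothetical helper names, not part of os-cloud-config):

```python
import ipaddress

def format_host(host):
    # Wrap bare IPv6 literals in brackets so they are valid inside URLs,
    # mirroring the is_valid_ipv6_address checks in register_endpoint.
    try:
        ipaddress.ip_address(host)
    except ValueError:
        return host  # hostname (or non-literal): leave untouched
    return '[{host}]'.format(host=host) if ':' in host else host

def build_url(host, port, proto='http'):
    # Same '{proto}://{host}:{port}/v2.0' template as the keystone code.
    return '{proto}://{host}:{port}/v2.0'.format(
        proto=proto, host=format_host(host), port=port)
```

For example, `build_url('::1', 5000)` yields `http://[::1]:5000/v2.0`, while an IPv4 address or hostname passes through unbracketed.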

View File

@ -1,195 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import logging
import os
from os import path
import stat
from OpenSSL import crypto
LOG = logging.getLogger(__name__)
CA_KEY_SIZE = 2048
CA_CERT_DAYS = 10 * 365
SIGNING_KEY_SIZE = 2048
SIGNING_CERT_DAYS = 10 * 365
X509_VERSION = 2
def create_ca_pair(cert_serial=1):
"""Create CA private key and self-signed certificate.
CA generation is mostly meant for proof-of-concept
deployments. For real deployments it is suggested to use an
external CA (separate from deployment tools).
:param cert_serial: serial number of the generated certificate
:type cert_serial: integer
:return: (ca_key_pem, ca_cert_pem) tuple of base64 encoded CA
private key and CA certificate (PEM format)
:rtype: (string, string)
"""
ca_key = crypto.PKey()
ca_key.generate_key(crypto.TYPE_RSA, CA_KEY_SIZE)
LOG.debug('Generated CA key.')
ca_cert = crypto.X509()
ca_cert.set_version(X509_VERSION)
ca_cert.set_serial_number(cert_serial)
subject = ca_cert.get_subject()
subject.C = 'XX'
subject.ST = 'Unset'
subject.L = 'Unset'
subject.O = 'Unset'
subject.CN = 'Keystone CA'
ca_cert.gmtime_adj_notBefore(0)
ca_cert.gmtime_adj_notAfter(60 * 60 * 24 * CA_CERT_DAYS)
ca_cert.set_issuer(subject)
ca_cert.set_pubkey(ca_key)
ca_cert.add_extensions([
crypto.X509Extension(b"basicConstraints", True, b"CA:TRUE, pathlen:0"),
])
ca_cert.sign(ca_key, 'sha1')
LOG.debug('Generated CA certificate.')
return (crypto.dump_privatekey(crypto.FILETYPE_PEM, ca_key),
crypto.dump_certificate(crypto.FILETYPE_PEM, ca_cert))
def create_signing_pair(ca_key_pem, ca_cert_pem, cert_serial=2):
"""Create signing private key and certificate.
Os-cloud-config key generation and certificate signing is mostly
meant for proof-of-concept deployments. For real deployments it is
suggested to use certificates signed by an external CA.
:param ca_key_pem: CA private key to sign the signing certificate,
base64 encoded (PEM format)
:type ca_key_pem: string
:param ca_cert_pem: CA certificate, base64 encoded (PEM format)
:type ca_cert_pem: string
:param cert_serial: serial number of the generated certificate
:type cert_serial: integer
:return: (signing_key_pem, signing_cert_pem) tuple of base64
encoded signing private key and signing certificate
(PEM format)
:rtype: (string, string)
"""
ca_key = crypto.load_privatekey(crypto.FILETYPE_PEM, ca_key_pem)
ca_cert = crypto.load_certificate(crypto.FILETYPE_PEM, ca_cert_pem)
signing_key = crypto.PKey()
signing_key.generate_key(crypto.TYPE_RSA, CA_KEY_SIZE)
LOG.debug('Generated signing key.')
signing_cert = crypto.X509()
signing_cert.set_version(X509_VERSION)
signing_cert.set_serial_number(cert_serial)
subject = signing_cert.get_subject()
subject.C = 'XX'
subject.ST = 'Unset'
subject.L = 'Unset'
subject.O = 'Unset'
subject.CN = 'Keystone Signing'
signing_cert.gmtime_adj_notBefore(0)
signing_cert.gmtime_adj_notAfter(60 * 60 * 24 * SIGNING_CERT_DAYS)
signing_cert.set_issuer(ca_cert.get_subject())
signing_cert.set_pubkey(signing_key)
signing_cert.sign(ca_key, 'sha1')
LOG.debug('Generated signing certificate.')
return (crypto.dump_privatekey(crypto.FILETYPE_PEM, signing_key),
crypto.dump_certificate(crypto.FILETYPE_PEM, signing_cert))
def create_and_write_ca_and_signing_pairs(directory):
"""Create and write out CA and signing keys and certificates.
Generate ca_key.pem, ca_cert.pem, signing_key.pem,
signing_cert.pem and write them out to a directory.
:param directory: directory where keys and certs will be written
:type directory: string
"""
if not path.isdir(directory):
os.mkdir(directory)
ca_key_pem, ca_cert_pem = create_ca_pair()
signing_key_pem, signing_cert_pem = create_signing_pair(ca_key_pem,
ca_cert_pem)
_write_pki_file(path.join(directory, 'ca_key.pem'), ca_key_pem)
_write_pki_file(path.join(directory, 'ca_cert.pem'), ca_cert_pem)
_write_pki_file(path.join(directory, 'signing_key.pem'), signing_key_pem)
_write_pki_file(path.join(directory, 'signing_cert.pem'), signing_cert_pem)
def generate_certs_into_json(jsonfile, seed):
"""Create and write out CA certificate and signing certificate/key.
Generate CA certificate, signing certificate and signing key and
add them into a JSON file. If the key/certs already exist in the JSON
file, nothing is changed.
:param jsonfile: JSON file where certs and key will be written
:type jsonfile: string
:param seed: JSON file for seed machine has different structure. Different
key/certs names and different parent node are used
:type seed: boolean
"""
if os.path.isfile(jsonfile):
with open(jsonfile) as json_fd:
all_data = json.load(json_fd)
else:
all_data = {}
if seed:
parent = 'keystone'
ca_cert_name = 'ca_certificate'
signing_key_name = 'signing_key'
signing_cert_name = 'signing_certificate'
else:
parent = 'parameter_defaults'
ca_cert_name = 'KeystoneCACertificate'
signing_key_name = 'KeystoneSigningKey'
signing_cert_name = 'KeystoneSigningCertificate'
if parent not in all_data:
all_data[parent] = {}
parent_node = all_data[parent]
if not (ca_cert_name in parent_node and
signing_key_name in parent_node and
signing_cert_name in parent_node):
ca_key_pem, ca_cert_pem = create_ca_pair()
signing_key_pem, signing_cert_pem = create_signing_pair(ca_key_pem,
ca_cert_pem)
parent_node.update({ca_cert_name: ca_cert_pem,
signing_key_name: signing_key_pem,
signing_cert_name: signing_cert_pem})
with open(jsonfile, 'w') as json_fd:
json.dump(all_data, json_fd, sort_keys=True)
LOG.debug("Wrote key/certs into '%s'.", path.abspath(jsonfile))
else:
LOG.info("Key/certs are already present in '%s', skipping.",
path.abspath(jsonfile))
def _write_pki_file(file_path, contents):
with open(file_path, 'w') as f:
f.write(contents)
os.chmod(file_path, stat.S_IRUSR | stat.S_IWUSR)
LOG.debug("Wrote '%s'.", path.abspath(file_path))
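The seed/heat-env key naming and the idempotent merge performed by `generate_certs_into_json` can be illustrated without touching pyOpenSSL (a minimal sketch with hypothetical helper names; certificate generation is stubbed out):

```python
def cert_layout(seed):
    # Seed JSON and heat parameter_defaults use different parent nodes
    # and key names, as in generate_certs_into_json above.
    if seed:
        return ('keystone', 'ca_certificate',
                'signing_key', 'signing_certificate')
    return ('parameter_defaults', 'KeystoneCACertificate',
            'KeystoneSigningKey', 'KeystoneSigningCertificate')

def merge_certs(all_data, seed, ca_cert, signing_key, signing_cert):
    parent, ca_name, key_name, cert_name = cert_layout(seed)
    node = all_data.setdefault(parent, {})
    # Only write the values when none are present yet (idempotent).
    if not (ca_name in node and key_name in node and cert_name in node):
        node.update({ca_name: ca_cert,
                     key_name: signing_key,
                     cert_name: signing_cert})
    return all_data
```

Re-running the merge over a file that already holds the three keys leaves it untouched, which is what makes the original safe to call repeatedly.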

View File

@ -1,152 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from os_cloud_config.cmd.utils import _clients as clients
LOG = logging.getLogger(__name__)
def initialize_neutron(network_desc, neutron_client=None,
keystone_client=None):
if not neutron_client:
LOG.warning(
'Creating neutron client inline is deprecated, please pass '
'the client as parameter.')
neutron_client = clients.get_neutron_client()
if not keystone_client:
LOG.warning(
'Creating keystone client inline is deprecated, please pass '
'the client as parameter.')
keystone_client = clients.get_keystone_client()
admin_tenant = _get_admin_tenant_id(keystone_client)
if 'physical' in network_desc:
network_type = 'physical'
if not admin_tenant:
raise ValueError("No admin tenant registered in Keystone")
if not network_desc['physical']['metadata_server']:
raise ValueError("metadata_server is required for physical "
"networks")
elif 'float' in network_desc:
network_type = 'float'
else:
raise ValueError("No float or physical network defined.")
net = _create_net(neutron_client, network_desc, network_type, admin_tenant)
subnet = _create_subnet(neutron_client, net, network_desc, network_type,
admin_tenant)
if 'external' in network_desc:
router = _create_router(neutron_client, subnet)
ext_net = _create_net(neutron_client, network_desc, 'external', None)
_create_subnet(neutron_client, ext_net, network_desc, 'external', None)
neutron_client.add_gateway_router(
router['router']['id'], {'network_id': ext_net['network']['id']})
LOG.debug("Neutron configured.")
def _get_admin_tenant_id(keystone):
"""Fetch the admin tenant id from Keystone.
:param keystone: A keystone v2 client.
"""
LOG.debug("Discovering admin tenant.")
tenant = keystone.tenants.find(name='admin')
if tenant:
return tenant.id
def _create_net(neutron, network_desc, network_type, admin_tenant):
"""Create a new neutron net.
:param neutron: A neutron v2 client.
:param network_desc: A network description.
:param network_type: The type of network to create.
:param admin_tenant: The admin tenant id in Keystone.
"""
LOG.debug("Creating %s network." % network_type)
type_desc = network_desc[network_type]
network = {'admin_state_up': True,
'name': type_desc['name']}
if network_type == 'physical':
network.update({'tenant_id': admin_tenant,
'provider:network_type': 'flat',
'provider:physical_network': network['name']})
elif network_type == 'float':
network['shared'] = True
elif network_type == 'external':
network['router:external'] = True
if type_desc.get('segmentation_id'):
vlan_tag = type_desc['segmentation_id']
physical_network = type_desc['physical_network']
network.update({'provider:network_type': 'vlan',
'provider:segmentation_id': vlan_tag,
'provider:physical_network': physical_network})
return neutron.create_network({'network': network})
def _create_subnet(neutron, net, network_desc, network_type, admin_tenant):
"""Create a new neutron subnet.
:param neutron: A neutron v2 client.
:param net: The network (as returned by _create_net) to add the subnet to.
:param network_desc: A network description.
:param network_type: The type of network to create.
:param admin_tenant: The admin tenant id in Keystone.
"""
type_desc = network_desc[network_type]
cidr = type_desc['cidr']
LOG.debug("Creating %s subnet, with CIDR %s." % (network_type, cidr))
subnet = {'ip_version': 4, 'network_id': net['network']['id'],
'cidr': cidr}
if network_type == 'physical':
metadata = network_desc['physical']['metadata_server']
subnet.update({'tenant_id': admin_tenant,
'host_routes': [{'destination': '169.254.169.254/32',
'nexthop': metadata}]})
elif network_type == 'external':
subnet['enable_dhcp'] = False
if type_desc.get('gateway'):
subnet['gateway_ip'] = type_desc['gateway']
if type_desc.get('extra_routes'):
routes = type_desc['extra_routes']
if 'host_routes' not in subnet:
subnet['host_routes'] = []
subnet['host_routes'].extend(routes)
if type_desc.get('nameserver'):
subnet['dns_nameservers'] = [type_desc['nameserver']]
elif network_type == 'float':
subnet['dns_nameservers'] = ['8.8.8.8']
if 'enable_dhcp' in type_desc:
subnet['enable_dhcp'] = type_desc['enable_dhcp']
if (type_desc.get('allocation_start') and
type_desc.get('allocation_end')):
allocation_start = type_desc['allocation_start']
allocation_end = type_desc['allocation_end']
subnet['allocation_pools'] = [{'start': allocation_start,
'end': allocation_end}]
return neutron.create_subnet({'subnet': subnet})
def _create_router(neutron, subnet):
"""Create a new neutron router.
:param neutron: A neutron v2 client.
:param subnet: The subnet id to route for.
"""
LOG.debug("Creating router.")
router = neutron.create_router({'router': {'name': 'default-router'}})
neutron.add_interface_router(router['router']['id'],
{'subnet_id': subnet['subnet']['id']})
return router
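The subnet body assembled by `_create_subnet` for a 'float' network reduces to plain dict construction (a minimal sketch; `build_float_subnet` is a hypothetical name, and only the float branch is shown):

```python
def build_float_subnet(network_id, desc):
    # desc is the per-network description dict, as read from the JSON
    # network description file.
    subnet = {'ip_version': 4,
              'network_id': network_id,
              'cidr': desc['cidr'],
              'dns_nameservers': ['8.8.8.8']}
    if 'enable_dhcp' in desc:
        subnet['enable_dhcp'] = desc['enable_dhcp']
    if desc.get('allocation_start') and desc.get('allocation_end'):
        subnet['allocation_pools'] = [{'start': desc['allocation_start'],
                                       'end': desc['allocation_end']}]
    return subnet
```

Note that `allocation_pools` is only set when both endpoints are present, matching the paired check in the original.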

View File

@ -1,391 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import time
from ironicclient import exc as ironicexp
import six
from os_cloud_config.cmd.utils import _clients as clients
from os_cloud_config import glance
LOG = logging.getLogger(__name__)
# This module is no longer used by TripleO! If you feel like changing one of
# the functions below or adding a new one, please apply your change to
# tripleo_common.utils.nodes in the tripleo-common repo.
def _ipmi_driver_info(node):
driver_info = {"ipmi_address": node["pm_addr"],
"ipmi_username": node["pm_user"],
"ipmi_password": node["pm_password"]}
for params in ('ipmi_bridging', 'ipmi_transit_address',
'ipmi_transit_channel', 'ipmi_target_address',
'ipmi_target_channel', 'ipmi_local_address'):
if node.get(params):
driver_info[params] = node[params]
return driver_info
def _pxe_drac_driver_info(node):
driver_info = {"drac_host": node["pm_addr"],
"drac_username": node["pm_user"],
"drac_password": node["pm_password"]}
return driver_info
def _pxe_ssh_driver_info(node):
if "pm_virt_type" not in node:
node["pm_virt_type"] = "virsh"
driver_info = {"ssh_address": node["pm_addr"],
"ssh_username": node["pm_user"],
"ssh_key_contents": node["pm_password"],
"ssh_virt_type": node["pm_virt_type"]}
return driver_info
def _pxe_ilo_driver_info(node):
driver_info = {"ilo_address": node["pm_addr"],
"ilo_username": node["pm_user"],
"ilo_password": node["pm_password"]}
return driver_info
def _pxe_iboot_driver_info(node):
driver_info = {"iboot_address": node["pm_addr"],
"iboot_username": node["pm_user"],
"iboot_password": node["pm_password"]}
# iboot_relay_id and iboot_port are optional
if "pm_relay_id" in node:
driver_info["iboot_relay_id"] = node["pm_relay_id"]
if "pm_port" in node:
driver_info["iboot_port"] = node["pm_port"]
return driver_info
def _fake_pxe_driver_info(node):
driver_info = {}
# The fake_pxe driver doesn't need any credentials since there's
# no power management
return driver_info
def _pxe_ucs_driver_info(node):
driver_info = {"ucs_address": node["pm_addr"],
"ucs_username": node["pm_user"],
"ucs_password": node["pm_password"],
"ucs_service_profile": node["pm_service_profile"]}
return driver_info
def _common_irmc_driver_info(node):
driver_info = {"irmc_address": node["pm_addr"],
"irmc_username": node["pm_user"],
"irmc_password": node["pm_password"]}
# irmc_port, irmc_auth_method, irmc_client_timeout, and
# irmc_sensor_method are optional
if "pm_port" in node:
driver_info["irmc_port"] = node["pm_port"]
if "pm_auth_method" in node:
driver_info["irmc_auth_method"] = node["pm_auth_method"]
if "pm_client_timeout" in node:
driver_info["irmc_client_timeout"] = node["pm_client_timeout"]
if "pm_sensor_method" in node:
driver_info["irmc_sensor_method"] = node["pm_sensor_method"]
return driver_info
def _pxe_irmc_driver_info(node):
return _common_irmc_driver_info(node)
def _iscsi_irmc_driver_info(node):
driver_info = _common_irmc_driver_info(node)
# irmc_deploy_iso is also required for iscsi_irmc
driver_info["irmc_deploy_iso"] = node["pm_deploy_iso"]
return driver_info
def _pxe_wol_driver_info(node):
driver_info = {"wol_host": node["pm_addr"]}
if "pm_port" in node:
driver_info["wol_port"] = node["pm_port"]
return driver_info
def _associate_deploy_kr_info(driver_info, node):
if "pxe" in node["pm_type"]:
if "kernel_id" in node:
driver_info["deploy_kernel"] = node["kernel_id"]
if "ramdisk_id" in node:
driver_info["deploy_ramdisk"] = node["ramdisk_id"]
return driver_info
def _extract_driver_info(node):
driver_info_map = {"pxe_drac": _pxe_drac_driver_info,
"pxe_ssh": _pxe_ssh_driver_info,
"pxe_ilo": _pxe_ilo_driver_info,
"pxe_iboot_iscsi": _pxe_iboot_driver_info,
"pxe_iboot_agent": _pxe_iboot_driver_info,
"fake_pxe": _fake_pxe_driver_info,
"pxe_ucs": _pxe_ucs_driver_info,
"pxe_irmc": _pxe_irmc_driver_info,
"iscsi_irmc": _iscsi_irmc_driver_info,
# agent_irmc and iscsi_irmc share the same driver info
"agent_irmc": _iscsi_irmc_driver_info,
"pxe_wol": _pxe_wol_driver_info}
def _get_driver_info(node):
pm_type = node["pm_type"]
if "ipmi" in pm_type:
return _ipmi_driver_info(node)
else:
if pm_type in driver_info_map:
return driver_info_map[pm_type](node)
else:
raise ValueError("Unknown pm_type: %s" % node["pm_type"])
driver_info = _get_driver_info(node)
driver_info = _associate_deploy_kr_info(driver_info, node)
return driver_info
def register_ironic_node(service_host, node, client=None, blocking=True):
mapping = {'cpus': 'cpu',
'memory_mb': 'memory',
'local_gb': 'disk',
'cpu_arch': 'arch'}
properties = {k: six.text_type(node.get(v))
for k, v in mapping.items()
if node.get(v) is not None}
driver_info = _extract_driver_info(node)
if 'capabilities' in node:
properties.update({"capabilities":
six.text_type(node.get('capabilities'))})
create_map = {"driver": node["pm_type"],
"properties": properties,
"driver_info": driver_info}
if 'name' in node:
create_map.update({"name": six.text_type(node.get('name'))})
for count in range(60):
LOG.debug('Registering %s node with ironic, try #%d.' %
(node.get("pm_addr", ''), count))
try:
ironic_node = client.node.create(**create_map)
break
except (ironicexp.ConnectionRefused, ironicexp.ServiceUnavailable):
if blocking:
LOG.debug('Service not available, sleeping for 10 seconds.')
time.sleep(10)
else:
LOG.debug('Service not available.')
else:
if blocking:
LOG.debug('Service unavailable after 10 minutes, giving up.')
else:
LOG.debug('Service unavailable after 60 tries, giving up.')
raise ironicexp.ServiceUnavailable()
for mac in node["mac"]:
client.port.create(address=mac, node_uuid=ironic_node.uuid)
# Ironic should do this directly, see bug 1315225.
try:
client.node.set_power_state(ironic_node.uuid, 'off')
except ironicexp.Conflict:
# Conflict means the Ironic conductor got there first, so we can
# ignore the exception.
pass
return ironic_node
def _populate_node_mapping(client):
LOG.debug('Populating list of registered nodes.')
node_map = {'mac': {}, 'pm_addr': {}}
nodes = [n.to_dict() for n in client.node.list()]
for node in nodes:
node_details = client.node.get(node['uuid'])
if node_details.driver in ('pxe_ssh', 'fake_pxe'):
for port in client.node.list_ports(node['uuid']):
node_map['mac'][port.address] = node['uuid']
elif 'ipmi' in node_details.driver:
pm_addr = node_details.driver_info['ipmi_address']
node_map['pm_addr'][pm_addr] = node['uuid']
elif node_details.driver == 'pxe_ilo':
pm_addr = node_details.driver_info['ilo_address']
node_map['pm_addr'][pm_addr] = node['uuid']
elif node_details.driver == 'pxe_drac':
pm_addr = node_details.driver_info['drac_host']
node_map['pm_addr'][pm_addr] = node['uuid']
elif node_details.driver in ('pxe_iboot_iscsi', 'pxe_iboot_agent'):
iboot_addr = node_details.driver_info['iboot_address']
if "iboot_port" in node_details.driver_info:
iboot_addr += (':%s' %
node_details.driver_info['iboot_port'])
if "iboot_relay_id" in node_details.driver_info:
iboot_addr += ('#%s' %
node_details.driver_info['iboot_relay_id'])
node_map['pm_addr'][iboot_addr] = node['uuid']
elif node_details.driver == 'pxe_irmc':
pm_addr = node_details.driver_info['irmc_address']
node_map['pm_addr'][pm_addr] = node['uuid']
return node_map
def _get_node_id(node, node_map):
if node['pm_type'] in ('pxe_ssh', 'fake_pxe'):
for mac in node['mac']:
if mac.lower() in node_map['mac']:
return node_map['mac'][mac.lower()]
elif node['pm_type'] in ('pxe_iboot_iscsi', 'pxe_iboot_agent'):
iboot_addr = node["pm_addr"]
if "pm_port" in node:
iboot_addr += ':%s' % node["pm_port"]
if "pm_relay_id" in node:
iboot_addr += '#%s' % node["pm_relay_id"]
if iboot_addr in node_map['pm_addr']:
return node_map['pm_addr'][iboot_addr]
else:
if node['pm_addr'] in node_map['pm_addr']:
return node_map['pm_addr'][node['pm_addr']]
def _update_or_register_ironic_node(service_host, node, node_map, client=None,
blocking=True):
node_uuid = _get_node_id(node, node_map)
massage_map = {'cpu': '/properties/cpus',
'memory': '/properties/memory_mb',
'disk': '/properties/local_gb',
'arch': '/properties/cpu_arch'}
if "ipmi" in node['pm_type']:
massage_map.update({'pm_addr': '/driver_info/ipmi_address',
'pm_user': '/driver_info/ipmi_username',
'pm_password': '/driver_info/ipmi_password'})
elif node['pm_type'] == 'pxe_ssh':
massage_map.update({'pm_addr': '/driver_info/ssh_address',
'pm_user': '/driver_info/ssh_username',
'pm_password': '/driver_info/ssh_key_contents'})
elif node['pm_type'] == 'pxe_ilo':
massage_map.update({'pm_addr': '/driver_info/ilo_address',
'pm_user': '/driver_info/ilo_username',
'pm_password': '/driver_info/ilo_password'})
elif node['pm_type'] == 'pxe_drac':
massage_map.update({'pm_addr': '/driver_info/drac_host',
'pm_user': '/driver_info/drac_username',
'pm_password': '/driver_info/drac_password'})
elif node['pm_type'] == 'pxe_irmc':
massage_map.update({'pm_addr': '/driver_info/irmc_address',
'pm_user': '/driver_info/irmc_username',
'pm_password': '/driver_info/irmc_password'})
if "name" in node:
massage_map.update({'name': '/name'})
if "capabilities" in node:
massage_map.update({'capabilities': '/properties/capabilities'})
if node_uuid:
ironic_node = client.node.get(node_uuid)
else:
ironic_node = None
if ironic_node is None:
ironic_node = register_ironic_node(service_host, node, client,
blocking=blocking)
else:
LOG.debug('Node %s already registered, updating details.' % (
ironic_node.uuid))
node_patch = []
for key, value in massage_map.items():
node_patch.append({'path': value,
'value': six.text_type(node[key]),
'op': 'replace'})
for count in range(2):
try:
client.node.update(ironic_node.uuid, node_patch)
break
except ironicexp.Conflict:
LOG.debug('Node %s locked for updating.' %
ironic_node.uuid)
time.sleep(5)
else:
raise ironicexp.Conflict()
return ironic_node.uuid
def _clean_up_extra_nodes(seen, client, remove=False):
all_nodes = set([n.uuid for n in client.node.list()])
remove_func = client.node.delete
extra_nodes = all_nodes - seen
for node in extra_nodes:
if remove:
LOG.debug('Removing extra registered node %s.' % node)
remove_func(node)
else:
LOG.debug('Extra registered node %s found.' % node)
def _register_list_of_nodes(register_func, node_map, client, nodes_list,
blocking, service_host, kernel_id, ramdisk_id):
seen = set()
for node in nodes_list:
if kernel_id:
if 'kernel_id' not in node:
node['kernel_id'] = kernel_id
if ramdisk_id:
if 'ramdisk_id' not in node:
node['ramdisk_id'] = ramdisk_id
try:
new_node = register_func(service_host, node, node_map,
client=client, blocking=blocking)
seen.add(new_node)
except ironicexp.Conflict:
LOG.debug("Could not update node, moving to next host")
seen.add(node)
return seen
def register_all_nodes(service_host, nodes_list, client=None, remove=False,
blocking=True, keystone_client=None, glance_client=None,
kernel_name=None, ramdisk_name=None):
LOG.warning('Using register_all_nodes from os-cloud-config is deprecated, '
'please use the same function from tripleo_common.utils.nodes')
LOG.debug('Registering all nodes.')
if client is None:
LOG.warning('Creating ironic client inline is deprecated, please '
'pass the client as parameter.')
client = clients.get_ironic_client()
register_func = _update_or_register_ironic_node
node_map = _populate_node_mapping(client)
glance_ids = {'kernel': None, 'ramdisk': None}
if kernel_name and ramdisk_name:
if glance_client is None:
LOG.warning('Creating glance client inline is deprecated, please '
'pass the client as a parameter.')
glance_client = clients.get_glance_client()
glance_ids = glance.create_or_find_kernel_and_ramdisk(
glance_client, kernel_name, ramdisk_name)
seen = _register_list_of_nodes(register_func, node_map, client,
nodes_list, blocking, service_host,
glance_ids['kernel'], glance_ids['ramdisk'])
_clean_up_extra_nodes(seen, client, remove=remove)
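The composite map key that `_get_node_id` and `_populate_node_mapping` build for iBoot-managed nodes (address, optional port, optional relay) can be sketched as follows (`iboot_lookup_key` is a hypothetical helper name):

```python
def iboot_lookup_key(node):
    # Compose 'address[:port][#relay_id]' exactly as _get_node_id does
    # when matching a node description against registered Ironic nodes.
    key = node['pm_addr']
    if 'pm_port' in node:
        key += ':%s' % node['pm_port']
    if 'pm_relay_id' in node:
        key += '#%s' % node['pm_relay_id']
    return key
```

Both sides of the lookup must compose the key the same way, which is why the port and relay suffixes appear in the same order in `_populate_node_mapping`.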

View File

@ -1,58 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import fixtures
import testtools
_TRUE_VALUES = ('True', 'true', '1', 'yes')
class TestCase(testtools.TestCase):
"""Test case base class for all unit tests."""
def setUp(self):
"""Run before each test method to initialize test environment."""
super(TestCase, self).setUp()
test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0)
try:
test_timeout = int(test_timeout)
except ValueError:
# If timeout value is invalid do not set a timeout.
test_timeout = 0
if test_timeout > 0:
self.useFixture(fixtures.Timeout(test_timeout, gentle=True))
self.useFixture(fixtures.NestedTempfile())
self.useFixture(fixtures.TempHomeDir())
if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES:
stdout = self.useFixture(fixtures.StringStream('stdout')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES:
stderr = self.useFixture(fixtures.StringStream('stderr')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
self.log_fixture = self.useFixture(fixtures.FakeLogger())
for key in ('OS_AUTH_URL', 'OS_PASSWORD', 'OS_TENANT_NAME',
'OS_USERNAME', 'OS_CACERT', 'ROOT_DISK'):
fixture = fixtures.EnvironmentVariable(key)
self.useFixture(fixture)

View File

@ -1,141 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import mock
from os_cloud_config import flavors
from os_cloud_config.tests import base
class FlavorsTest(base.TestCase):
def test_cleanup_flavors(self):
client = mock.MagicMock()
to_del = ('m1.tiny', 'm1.small', 'm1.medium', 'm1.large', 'm1.xlarge')
delete_calls = [mock.call(flavor) for flavor in to_del]
flavors.cleanup_flavors(client=client)
client.flavors.delete.assert_has_calls(delete_calls)
def test_filter_existing_flavors_none(self):
client = mock.MagicMock()
client.flavors.list.return_value = []
flavor_list = [{'name': 'baremetal'}]
self.assertEqual(flavor_list,
flavors.filter_existing_flavors(client, flavor_list))
def test_filter_existing_flavors_one_existing(self):
client = mock.MagicMock()
flavor = collections.namedtuple('flavor', ['name'])
client.flavors.list.return_value = [flavor('baremetal_1')]
flavor_list = [{'name': 'baremetal_0'}, {'name': 'baremetal_1'}]
self.assertEqual([flavor_list[0]],
flavors.filter_existing_flavors(client, flavor_list))
def test_filter_existing_flavors_all_existing(self):
client = mock.MagicMock()
flavor = collections.namedtuple('flavor', ['name'])
client.flavors.list.return_value = [flavor('baremetal_0'),
flavor('baremetal_1')]
flavor_list = [{'name': 'baremetal_0'}, {'name': 'baremetal_1'}]
self.assertEqual([],
flavors.filter_existing_flavors(client, flavor_list))
@mock.patch('os_cloud_config.flavors._create_flavor')
def test_create_flavors_from_nodes(self, create_flavor):
node = {'cpu': '1', 'memory': '2048', 'disk': '30', 'arch': 'i386'}
node_list = [node, node]
client = mock.MagicMock()
flavors.create_flavors_from_nodes(client, node_list, 'aaa', 'bbb',
'10')
expected_flavor = node
expected_flavor.update({'disk': '10', 'ephemeral': '20',
'kernel': 'aaa', 'ramdisk': 'bbb',
'name': 'baremetal_2048_10_20_1'})
create_flavor.assert_called_once_with(client, expected_flavor)
@mock.patch('os_cloud_config.flavors._create_flavor')
def test_create_flavors_from_list(self, create_flavor):
flavor_list = [{'name': 'controller', 'cpu': '1', 'memory': '2048',
'disk': '30', 'arch': 'amd64'}]
client = mock.MagicMock()
flavors.create_flavors_from_list(client, flavor_list, 'aaa', 'bbb')
create_flavor.assert_called_once_with(
client, {'disk': '30', 'cpu': '1', 'arch': 'amd64',
'kernel': 'aaa', 'ramdisk': 'bbb', 'memory': '2048',
'name': 'controller'})
def test_create_flavor(self):
flavor = {'cpu': '1', 'memory': '2048', 'disk': '30', 'arch': 'i386',
'kernel': 'aaa', 'ramdisk': 'bbb', 'name': 'baremetal',
'ephemeral': None}
client = mock.MagicMock()
flavors._create_flavor(client, flavor)
client.flavors.create.assert_called_once_with(
'baremetal', '2048', '1', '30', None, ephemeral=None)
metadata = {'cpu_arch': 'i386', 'baremetal:deploy_kernel_id': 'aaa',
'baremetal:deploy_ramdisk_id': 'bbb'}
client.flavors.create.return_value.set_keys.assert_called_once_with(
metadata=metadata)
def test_create_flavor_with_extra_spec(self):
flavor = {'cpu': '1', 'memory': '2048', 'disk': '30', 'arch': 'i386',
'kernel': 'aaa', 'ramdisk': 'bbb', 'name': 'baremetal',
'ephemeral': None, 'extra_specs': {'key': 'value'}}
client = mock.MagicMock()
flavors._create_flavor(client, flavor)
client.flavors.create.assert_called_once_with(
'baremetal', '2048', '1', '30', None, ephemeral=None)
metadata = {'cpu_arch': 'i386', 'baremetal:deploy_kernel_id': 'aaa',
'baremetal:deploy_ramdisk_id': 'bbb', 'key': 'value'}
client.flavors.create.return_value.set_keys.assert_called_once_with(
metadata=metadata)
@mock.patch('os_cloud_config.flavors._create_flavor')
def test_create_flavor_from_ironic(self, create_flavor):
node = mock.MagicMock()
node.uuid = 'uuid'
node.properties = {'cpus': '1', 'memory_mb': '2048', 'local_gb': '30',
'cpu_arch': 'i386'}
client = mock.MagicMock()
ironic_client = mock.MagicMock()
ironic_client.node.list.return_value = [node]
flavors.create_flavors_from_ironic(client, ironic_client, 'aaa', 'bbb',
'10')
self.assertTrue(ironic_client.node.list.called)
expected_flavor = {'disk': '10', 'ephemeral': '20',
'kernel': 'aaa', 'ramdisk': 'bbb',
'name': 'baremetal_2048_10_20_1',
'memory': '2048', 'arch': 'i386',
'cpu': '1'}
create_flavor.assert_called_once_with(client, expected_flavor)
def test_check_node_properties(self):
node = mock.MagicMock()
properties = {'memory_mb': '1024',
'local_gb': '10',
'cpus': '1',
'cpu_arch': 'i386'}
node.properties = properties
self.assertTrue(flavors.check_node_properties(node))
properties['memory_mb'] = None
self.assertFalse(flavors.check_node_properties(node))
del properties['memory_mb']
self.assertFalse(flavors.check_node_properties(node))
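The three assertions above pin down the contract of `check_node_properties`: all four Ironic properties must be present and non-None. A minimal sketch of a helper satisfying those assertions (reconstructed from the tests, not copied from the deleted module):

```python
def check_node_properties(node):
    # A node is only usable for flavor creation when all four Ironic
    # properties are present and non-None.
    for prop in ('cpus', 'memory_mb', 'local_gb', 'cpu_arch'):
        if node.properties.get(prop) is None:
            return False
    return True
```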


@ -1,100 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import tempfile
from glanceclient import exc
import mock
import testtools
from os_cloud_config import glance
from os_cloud_config.tests import base
class GlanceTest(base.TestCase):
def setUp(self):
super(GlanceTest, self).setUp()
self.image = collections.namedtuple('image', ['id'])
def test_return_existing_kernel_and_ramdisk(self):
client = mock.MagicMock()
expected = {'kernel': 'aaa', 'ramdisk': 'zzz'}
client.images.find.side_effect = (self.image('aaa'), self.image('zzz'))
ids = glance.create_or_find_kernel_and_ramdisk(client, 'bm-kernel',
'bm-ramdisk')
client.images.create.assert_not_called()
self.assertEqual(expected, ids)
def test_raise_exception_kernel(self):
client = mock.MagicMock()
client.images.find.side_effect = exc.HTTPNotFound
message = "Kernel image not found in Glance, and no path specified."
with testtools.ExpectedException(ValueError, message):
glance.create_or_find_kernel_and_ramdisk(client, 'bm-kernel',
None)
def test_raise_exception_ramdisk(self):
client = mock.MagicMock()
client.images.find.side_effect = (self.image('aaa'),
exc.HTTPNotFound)
message = "Ramdisk image not found in Glance, and no path specified."
with testtools.ExpectedException(ValueError, message):
glance.create_or_find_kernel_and_ramdisk(client, 'bm-kernel',
'bm-ramdisk')
def test_skip_missing_no_kernel(self):
client = mock.MagicMock()
client.images.find.side_effect = (exc.HTTPNotFound,
self.image('bbb'))
expected = {'kernel': None, 'ramdisk': 'bbb'}
ids = glance.create_or_find_kernel_and_ramdisk(
client, 'bm-kernel', 'bm-ramdisk', skip_missing=True)
self.assertEqual(ids, expected)
def test_skip_missing_no_ramdisk(self):
client = mock.MagicMock()
client.images.find.side_effect = (self.image('aaa'),
exc.HTTPNotFound)
expected = {'kernel': 'aaa', 'ramdisk': None}
ids = glance.create_or_find_kernel_and_ramdisk(
client, 'bm-kernel', 'bm-ramdisk', skip_missing=True)
self.assertEqual(ids, expected)
def test_skip_missing_kernel_and_ramdisk(self):
client = mock.MagicMock()
client.images.find.side_effect = exc.HTTPNotFound
expected = {'kernel': None, 'ramdisk': None}
ids = glance.create_or_find_kernel_and_ramdisk(
client, 'bm-kernel', 'bm-ramdisk', skip_missing=True)
self.assertEqual(ids, expected)
def test_create_kernel_and_ramdisk(self):
client = mock.MagicMock()
client.images.find.side_effect = exc.HTTPNotFound
client.images.create.side_effect = (self.image('aaa'),
self.image('zzz'))
expected = {'kernel': 'aaa', 'ramdisk': 'zzz'}
with tempfile.NamedTemporaryFile() as imagefile:
ids = glance.create_or_find_kernel_and_ramdisk(
client, 'bm-kernel', 'bm-ramdisk', kernel_path=imagefile.name,
ramdisk_path=imagefile.name)
kernel_create = mock.call(name='bm-kernel', disk_format='aki',
is_public=True, data=mock.ANY)
ramdisk_create = mock.call(name='bm-ramdisk', disk_format='ari',
is_public=True, data=mock.ANY)
client.images.create.assert_has_calls([kernel_create, ramdisk_create])
self.assertEqual(expected, ids)
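The find/raise/skip_missing branches asserted above reduce to one lookup step per image. A sketch satisfying those branches; the not-found exception type is injected so the sketch carries no glanceclient dependency (an assumption for illustration, not the deleted module's signature, and the error message is simplified):

```python
def find_image_id(client, name, not_found_exc, skip_missing=False):
    """Return the Glance image id for name, or handle its absence."""
    try:
        return client.images.find(name=name).id
    except not_found_exc:
        if skip_missing:
            # Mirror the skip_missing tests: absence maps to None.
            return None
        raise ValueError("%s image not found in Glance, and no path "
                         "specified." % name)
```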


@ -1,545 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneclient import exceptions
import keystoneclient.v2_0.client as ksclient_v2
import mock
from os_cloud_config import keystone
from os_cloud_config.tests import base
class KeystoneTest(base.TestCase):
def assert_endpoint(self, host, region='regionOne', public_endpoint=None,
admin_endpoint=None, internal_endpoint=None):
self.client.services.create.assert_called_once_with(
'keystone', 'identity', description='Keystone Identity Service')
if public_endpoint is None:
public_endpoint = 'http://%s:5000/v2.0' % host
if admin_endpoint is None:
admin_endpoint = 'http://%s:35357/v2.0' % host
if internal_endpoint is None:
internal_endpoint = 'http://%s:5000/v2.0' % host
self.client.endpoints.create.assert_called_once_with(
region, self.client.services.create.return_value.id,
public_endpoint, admin_endpoint, internal_endpoint)
def assert_calls_in_grant_admin_user_roles(self):
self.client_v3.roles.list.assert_has_calls([mock.call(name='admin')])
self.client_v3.domains.list.assert_called_once_with(name='default')
self.client_v3.users.list.assert_called_once_with(
domain=self.client_v3.domains.list.return_value[0], name='admin')
self.client_v3.projects.list.assert_called_once_with(
domain=self.client_v3.domains.list.return_value[0], name='admin')
@mock.patch('subprocess.check_call')
def test_initialize(self, check_call_mock):
self._patch_client()
self._patch_client_v3()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
self.client.roles.findall.return_value = []
self.client.tenants.findall.return_value = []
keystone.initialize(
'192.0.0.3', 'mytoken', 'admin@example.org', 'adminpasswd')
self.client.tenants.create.assert_has_calls(
[mock.call('admin', None), mock.call('service', None)])
self.assert_calls_in_grant_admin_user_roles()
self.assert_endpoint('192.0.0.3')
check_call_mock.assert_called_once_with(
# "-o" "StrictHostKeyChecking=no" uses implicit string literal
# concatenation: ssh receives the single argument
# "-oStrictHostKeyChecking=no", matching what the code under test builds
["ssh", "-o" "StrictHostKeyChecking=no", "-t", "-l", "root",
"192.0.0.3", "sudo", "keystone-manage", "pki_setup",
"--keystone-user",
"$(getent passwd | grep '^keystone' | cut -d: -f1)",
"--keystone-group",
"$(getent group | grep '^keystone' | cut -d: -f1)"])
def test_initialize_for_swift(self):
self._patch_client()
keystone.initialize_for_swift('192.0.0.3', 'mytoken')
self.client.roles.create.assert_has_calls(
[mock.call('swiftoperator'), mock.call('ResellerAdmin')])
def test_initialize_for_heat(self):
client = mock.MagicMock()
client.domains.find.side_effect = exceptions.NotFound
client.users.find.side_effect = exceptions.NotFound
keystone.initialize_for_heat(client, 'heatadminpasswd')
client.domains.create.assert_called_once_with(
'heat', description='Owns users and tenants created by heat')
client.users.create.assert_called_once_with(
'heat_domain_admin',
description='Manages users and tenants created by heat',
domain=client.domains.create.return_value,
password='heatadminpasswd')
client.roles.find.assert_called_once_with(name='admin')
client.roles.grant.assert_called_once_with(
client.roles.find.return_value,
user=client.users.create.return_value,
domain=client.domains.create.return_value)
@mock.patch('subprocess.check_call')
def test_idempotent_initialize(self, check_call_mock):
self._patch_client()
self._patch_client_v3()
self.client.services.findall.return_value = mock.MagicMock()
self.client.endpoints.findall.return_value = mock.MagicMock()
self.client.roles.findall.return_value = mock.MagicMock()
self.client.tenants.findall.return_value = mock.MagicMock()
keystone.initialize(
'192.0.0.3',
'mytoken',
'admin@example.org',
'adminpasswd')
self.assertFalse(self.client.roles.create('admin').called)
self.assertFalse(self.client.roles.create('service').called)
self.assertFalse(self.client.tenants.create('admin', None).called)
self.assertFalse(self.client.tenants.create('service', None).called)
self.assert_calls_in_grant_admin_user_roles()
check_call_mock.assert_called_once_with(
["ssh", "-o" "StrictHostKeyChecking=no", "-t", "-l", "root",
"192.0.0.3", "sudo", "keystone-manage", "pki_setup",
"--keystone-user",
"$(getent passwd | grep '^keystone' | cut -d: -f1)",
"--keystone-group",
"$(getent group | grep '^keystone' | cut -d: -f1)"])
def test_setup_roles(self):
self._patch_client()
self.client.roles.findall.return_value = []
keystone._setup_roles(self.client)
self.client.roles.findall.assert_has_calls(
[mock.call(name='swiftoperator'), mock.call(name='ResellerAdmin'),
mock.call(name='heat_stack_user')])
self.client.roles.create.assert_has_calls(
[mock.call('swiftoperator'), mock.call('ResellerAdmin'),
mock.call('heat_stack_user')])
def test_idempotent_setup_roles(self):
self._patch_client()
self.client.roles.findall.return_value = mock.MagicMock()
keystone._setup_roles(self.client)
self.client.roles.findall.assert_has_calls(
[mock.call(name='swiftoperator'), mock.call(name='ResellerAdmin'),
mock.call(name='heat_stack_user')], any_order=True)
self.assertFalse(self.client.roles.create('swiftoperator').called)
self.assertFalse(self.client.roles.create('ResellerAdmin').called)
self.assertFalse(self.client.roles.create('heat_stack_user').called)
def test_create_tenants(self):
self._patch_client()
self.client.tenants.findall.return_value = []
keystone._create_tenants(self.client)
self.client.tenants.findall.assert_has_calls(
[mock.call(name='admin'), mock.call(name='service')],
any_order=True)
self.client.tenants.create.assert_has_calls(
[mock.call('admin', None), mock.call('service', None)])
def test_idempotent_create_tenants(self):
self._patch_client()
self.client.tenants.findall.return_value = mock.MagicMock()
keystone._create_tenants(self.client)
self.client.tenants.findall.assert_has_calls(
[mock.call(name='admin'), mock.call(name='service')],
any_order=True)
# Test that tenants are not created again if they exist
self.assertFalse(self.client.tenants.create('admin', None).called)
self.assertFalse(self.client.tenants.create('service', None).called)
def test_create_keystone_endpoint_ssl(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionOne', 'keystone.example.com',
None, None, None)
public_endpoint = 'https://keystone.example.com:13000/v2.0'
self.assert_endpoint('192.0.0.3', public_endpoint=public_endpoint)
def test_create_keystone_endpoint_public(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionOne', None, 'keystone.public',
None, None)
public_endpoint = 'http://keystone.public:5000/v2.0'
self.assert_endpoint('192.0.0.3', public_endpoint=public_endpoint)
def test_create_keystone_endpoint_ssl_and_public(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionOne', 'keystone.example.com',
'keystone.public', None, None)
public_endpoint = 'https://keystone.example.com:13000/v2.0'
self.assert_endpoint('192.0.0.3', public_endpoint=public_endpoint)
def test_create_keystone_endpoint_public_and_admin(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionOne', None, 'keystone.public',
'keystone.admin', None)
public_endpoint = 'http://keystone.public:5000/v2.0'
admin_endpoint = 'http://keystone.admin:35357/v2.0'
self.assert_endpoint('192.0.0.3', public_endpoint=public_endpoint,
admin_endpoint=admin_endpoint)
def test_create_keystone_endpoint_ssl_public_and_admin(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionOne', 'keystone.example.com',
'keystone.public', 'keystone.admin', None)
public_endpoint = 'https://keystone.example.com:13000/v2.0'
admin_endpoint = 'http://keystone.admin:35357/v2.0'
self.assert_endpoint('192.0.0.3', public_endpoint=public_endpoint,
admin_endpoint=admin_endpoint)
def test_create_keystone_endpoint_public_admin_and_internal(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionOne', None, 'keystone.public',
'keystone.admin', 'keystone.internal')
public_endpoint = 'http://keystone.public:5000/v2.0'
admin_endpoint = 'http://keystone.admin:35357/v2.0'
internal_endpoint = 'http://keystone.internal:5000/v2.0'
self.assert_endpoint('192.0.0.3', public_endpoint=public_endpoint,
admin_endpoint=admin_endpoint,
internal_endpoint=internal_endpoint)
def test_create_keystone_endpoint_ssl_public_admin_and_internal(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionOne', 'keystone.example.com',
'keystone.public', 'keystone.admin', 'keystone.internal')
public_endpoint = 'https://keystone.example.com:13000/v2.0'
admin_endpoint = 'http://keystone.admin:35357/v2.0'
internal_endpoint = 'http://keystone.internal:5000/v2.0'
self.assert_endpoint('192.0.0.3', public_endpoint=public_endpoint,
admin_endpoint=admin_endpoint,
internal_endpoint=internal_endpoint)
def test_create_keystone_endpoint_region(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '192.0.0.3', 'regionTwo', None, None, None, None)
self.assert_endpoint('192.0.0.3', region='regionTwo')
def test_create_keystone_endpoint_ipv6(self):
self._patch_client()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone._create_keystone_endpoint(
self.client, '2001:db8:fd00:1000:f816:3eff:fec2:8e7c',
'regionOne',
None,
'2001:db8:fd00:1000:f816:3eff:fec2:8e7d',
'2001:db8:fd00:1000:f816:3eff:fec2:8e7e',
'2001:db8:fd00:1000:f816:3eff:fec2:8e7f')
pe = 'http://[2001:db8:fd00:1000:f816:3eff:fec2:8e7d]:5000/v2.0'
ae = 'http://[2001:db8:fd00:1000:f816:3eff:fec2:8e7e]:35357/v2.0'
ie = 'http://[2001:db8:fd00:1000:f816:3eff:fec2:8e7f]:5000/v2.0'
self.assert_endpoint(
'[2001:db8:fd00:1000:f816:3eff:fec2:8e7c]',
region='regionOne', public_endpoint=pe, admin_endpoint=ae,
internal_endpoint=ie)
@mock.patch('time.sleep')
def test_create_roles_retry(self, sleep):
self._patch_client()
side_effect = (exceptions.ConnectionRefused,
exceptions.ServiceUnavailable, mock.DEFAULT,
mock.DEFAULT)
self.client.roles.create.side_effect = side_effect
self.client.roles.findall.return_value = []
keystone._create_roles(self.client)
sleep.assert_has_calls([mock.call(10), mock.call(10)])
def test_setup_endpoints(self):
self.client = mock.MagicMock()
self.client.users.find.side_effect = ksclient_v2.exceptions.NotFound()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone.setup_endpoints(
{'nova': {'password': 'pass', 'type': 'compute',
'ssl_port': 1234}},
public_host='192.0.0.4', region='region', client=self.client,
os_auth_url='https://192.0.0.3')
self.client.users.find.assert_called_once_with(name='nova')
self.client.tenants.find.assert_called_once_with(name='service')
self.client.roles.find.assert_called_once_with(name='admin')
self.client.services.findall.assert_called_once_with(type='compute')
self.client.endpoints.findall.assert_called_once_with(
publicurl='https://192.0.0.4:1234/v2.1/$(tenant_id)s')
self.client.users.create.assert_called_once_with(
'nova', 'pass',
tenant_id=self.client.tenants.find.return_value.id,
# the doubled 'email=' prefix mirrors the literal value
# passed by the code under test
email='email=nobody@example.com')
self.client.roles.add_user_role.assert_called_once_with(
self.client.users.create.return_value,
self.client.roles.find.return_value,
self.client.tenants.find.return_value)
self.client.services.create.assert_called_once_with(
'nova', 'compute', description='Nova Compute Service')
self.client.endpoints.create.assert_called_once_with(
'region',
self.client.services.create.return_value.id,
'https://192.0.0.4:1234/v2.1/$(tenant_id)s',
'http://192.0.0.3:8774/v2.1/$(tenant_id)s',
'http://192.0.0.3:8774/v2.1/$(tenant_id)s')
def test_setup_endpoints_ipv6(self):
self.client = mock.MagicMock()
self.client.users.find.side_effect = ksclient_v2.exceptions.NotFound()
self.client.services.findall.return_value = []
self.client.endpoints.findall.return_value = []
keystone.setup_endpoints(
{'nova': {'password': 'pass', 'type': 'compute',
'ssl_port': 1234}},
public_host='2001:db8:fd00:1000:f816:3eff:fec2:8e7c',
region='region', client=self.client,
os_auth_url='https://[2001:db8:fd00:1000:f816:3eff:fec2:8e7c]')
self.client.users.find.assert_called_once_with(name='nova')
self.client.tenants.find.assert_called_once_with(name='service')
self.client.roles.find.assert_called_once_with(name='admin')
self.client.services.findall.assert_called_once_with(type='compute')
self.client.endpoints.findall.assert_called_once_with(
publicurl='https://[2001:db8:fd00:1000:f816:3eff:fec2:8e7c]'
':1234/v2.1/$(tenant_id)s')
self.client.users.create.assert_called_once_with(
'nova', 'pass',
tenant_id=self.client.tenants.find.return_value.id,
email='email=nobody@example.com')
self.client.roles.add_user_role.assert_called_once_with(
self.client.users.create.return_value,
self.client.roles.find.return_value,
self.client.tenants.find.return_value)
self.client.services.create.assert_called_once_with(
'nova', 'compute', description='Nova Compute Service')
ipv6_addr = '2001:db8:fd00:1000:f816:3eff:fec2:8e7c'
self.client.endpoints.create.assert_called_once_with(
'region',
self.client.services.create.return_value.id,
'https://[%s]:1234/v2.1/$(tenant_id)s' % ipv6_addr,
'http://[%s]:8774/v2.1/$(tenant_id)s' % ipv6_addr,
'http://[%s]:8774/v2.1/$(tenant_id)s' % ipv6_addr)
@mock.patch('os_cloud_config.keystone._create_service')
def test_create_ssl_endpoint_no_ssl_port(self, mock_create_service):
client = mock.Mock()
client.endpoints.findall.return_value = []
data = {'nouser': True,
'internal_host': 'internal',
'public_host': 'public',
'port': 1234,
'password': 'password',
'type': 'compute',
}
mock_service = mock.Mock()
mock_service.id = 1
mock_create_service.return_value = mock_service
keystone._register_endpoint(client, 'fake', data)
client.endpoints.create.assert_called_once_with(
'regionOne', 1,
'http://public:1234/',
'http://internal:1234/',
'http://internal:1234/')
def test_idempotent_register_endpoint(self):
self.client = mock.MagicMock()
# Explicitly defining that the endpoint has already been created
self.client.users.find.return_value = mock.MagicMock()
self.client.services.findall.return_value = mock.MagicMock()
self.client.endpoints.findall.return_value = mock.MagicMock()
keystone._register_endpoint(
self.client,
'nova',
{'password': 'pass', 'type': 'compute',
'ssl_port': 1234, 'public_host': '192.0.0.4',
'internal_host': '192.0.0.3'},
region=None)
# Only a subset of the find APIs is called
self.client.users.find.assert_called_once_with(name='nova')
self.assertFalse(self.client.tenants.find.called)
self.assertFalse(self.client.roles.find.called)
self.client.services.findall.assert_called_once_with(type='compute')
self.client.endpoints.findall.assert_called_once_with(
publicurl='https://192.0.0.4:1234/')
# None of the create API calls was made
self.assertFalse(self.client.users.create.called)
self.assertFalse(self.client.roles.add_user_role.called)
self.assertFalse(self.client.services.create.called)
self.assertFalse(self.client.endpoints.create.called)
@mock.patch('os_cloud_config.keystone.ksclient_v2.Client')
def test_create_admin_client_v2(self, client):
self.assertEqual(
client.return_value,
keystone._create_admin_client_v2('192.0.0.3', 'mytoken'))
client.assert_called_once_with(endpoint='http://192.0.0.3:35357/v2.0',
token='mytoken')
def _patch_client(self):
self.client = mock.MagicMock()
self.create_admin_client_patcher = mock.patch(
'os_cloud_config.keystone._create_admin_client_v2')
create_admin_client = self.create_admin_client_patcher.start()
self.addCleanup(self._patch_client_cleanup)
create_admin_client.return_value = self.client
def _patch_client_cleanup(self):
self.create_admin_client_patcher.stop()
self.client = None
@mock.patch('os_cloud_config.keystone.ksclient_v3.Client')
def test_create_admin_client_v3(self, client_v3):
self.assertEqual(
client_v3.return_value,
keystone._create_admin_client_v3('192.0.0.3', 'mytoken'))
client_v3.assert_called_once_with(endpoint='http://192.0.0.3:35357/v3',
token='mytoken')
def _patch_client_v3(self):
self.client_v3 = mock.MagicMock()
self.create_admin_client_patcher_v3 = mock.patch(
'os_cloud_config.keystone._create_admin_client_v3')
create_admin_client_v3 = self.create_admin_client_patcher_v3.start()
self.addCleanup(self._patch_client_cleanup_v3)
create_admin_client_v3.return_value = self.client_v3
def _patch_client_cleanup_v3(self):
self.create_admin_client_patcher_v3.stop()
self.client_v3 = None
def test_create_admin_user_user_exists(self):
self._patch_client()
keystone._create_admin_user(self.client, 'admin@example.org',
'adminpasswd')
self.client.tenants.find.assert_called_once_with(name='admin')
self.client.users.create.assert_not_called()
def test_create_admin_user_user_does_not_exist(self):
self._patch_client()
self.client.users.find.side_effect = exceptions.NotFound()
keystone._create_admin_user(self.client, 'admin@example.org',
'adminpasswd')
self.client.tenants.find.assert_called_once_with(name='admin')
self.client.users.create.assert_called_once_with(
'admin', email='admin@example.org', password='adminpasswd',
tenant_id=self.client.tenants.find.return_value.id)
def test_grant_admin_user_roles_idempotent(self):
self._patch_client_v3()
self.client_v3.roles.list.return_value = (
[self.client_v3.roles.list.return_value['admin']])
keystone._grant_admin_user_roles(self.client_v3)
self.assert_calls_in_grant_admin_user_roles()
self.client_v3.roles.grant.assert_not_called()
def list_roles_side_effect(self, *args, **kwargs):
if kwargs.get('name') == 'admin':
return [self.client_v3.roles.list.return_value[0]]
else:
return []
def test_grant_admin_user_roles(self):
self._patch_client_v3()
self.client_v3.roles.list.side_effect = self.list_roles_side_effect
keystone._grant_admin_user_roles(self.client_v3)
self.assert_calls_in_grant_admin_user_roles()
self.client_v3.roles.grant.assert_has_calls([
mock.call(self.client_v3.roles.list.return_value[0],
user=self.client_v3.users.list.return_value[0],
project=self.client_v3.projects.list.return_value[0]),
mock.call(self.client_v3.roles.list.return_value[0],
user=self.client_v3.users.list.return_value[0],
domain=self.client_v3.domains.list.return_value[0])])
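The IPv6 endpoint tests above consistently wrap literal v6 addresses in brackets before embedding them in URLs. A sketch of that bracket-wrapping rule, using the stdlib `ipaddress` module for illustration (the deleted module may have detected v6 literals differently):

```python
import ipaddress


def format_host(host):
    # Bare IPv6 literals must be bracketed before being placed in a
    # URL authority, e.g. http://[2001:db8::1]:5000/v2.0; hostnames
    # and IPv4 addresses pass through unchanged.
    try:
        if ipaddress.ip_address(host).version == 6:
            return '[%s]' % host
    except ValueError:
        pass  # not an IP literal: a hostname, leave as-is
    return host
```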


@ -1,142 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import stat
import mock
from OpenSSL import crypto
from os_cloud_config import keystone_pki
from os_cloud_config.tests import base
class KeystonePKITest(base.TestCase):
def test_create_ca_and_signing_pairs(self):
# use one common test to avoid generating the CA pair twice
# do not mock out pyOpenSSL; test the generated keys/certs
# create CA pair
ca_key_pem, ca_cert_pem = keystone_pki.create_ca_pair()
ca_key = crypto.load_privatekey(crypto.FILETYPE_PEM, ca_key_pem)
ca_cert = crypto.load_certificate(crypto.FILETYPE_PEM, ca_cert_pem)
# check CA key properties
self.assertTrue(ca_key.check())
self.assertEqual(2048, ca_key.bits())
# check CA cert properties
self.assertFalse(ca_cert.has_expired())
self.assertEqual('Keystone CA', ca_cert.get_issuer().CN)
self.assertEqual('Keystone CA', ca_cert.get_subject().CN)
# create signing pair
signing_key_pem, signing_cert_pem = keystone_pki.create_signing_pair(
ca_key_pem, ca_cert_pem)
signing_key = crypto.load_privatekey(crypto.FILETYPE_PEM,
signing_key_pem)
signing_cert = crypto.load_certificate(crypto.FILETYPE_PEM,
signing_cert_pem)
# check signing key properties
self.assertTrue(signing_key.check())
self.assertEqual(2048, signing_key.bits())
# check signing cert properties
self.assertFalse(signing_cert.has_expired())
self.assertEqual('Keystone CA', signing_cert.get_issuer().CN)
self.assertEqual('Keystone Signing', signing_cert.get_subject().CN)
# pyOpenSSL currently cannot verify a cert against a CA cert
@mock.patch('os_cloud_config.keystone_pki.os.chmod', create=True)
@mock.patch('os_cloud_config.keystone_pki.os.mkdir', create=True)
@mock.patch('os_cloud_config.keystone_pki.path.isdir', create=True)
@mock.patch('os_cloud_config.keystone_pki.create_ca_pair')
@mock.patch('os_cloud_config.keystone_pki.create_signing_pair')
@mock.patch('os_cloud_config.keystone_pki.open', create=True)
def test_create_and_write_ca_and_signing_pairs(
self, open_, create_signing, create_ca, isdir, mkdir, chmod):
create_ca.return_value = ('mock_ca_key', 'mock_ca_cert')
create_signing.return_value = ('mock_signing_key', 'mock_signing_cert')
isdir.return_value = False
keystone_pki.create_and_write_ca_and_signing_pairs('/fake_dir')
mkdir.assert_called_with('/fake_dir')
chmod.assert_has_calls([
mock.call('/fake_dir/ca_key.pem',
stat.S_IRUSR | stat.S_IWUSR),
mock.call('/fake_dir/ca_cert.pem',
stat.S_IRUSR | stat.S_IWUSR),
mock.call('/fake_dir/signing_key.pem',
stat.S_IRUSR | stat.S_IWUSR),
mock.call('/fake_dir/signing_cert.pem',
stat.S_IRUSR | stat.S_IWUSR),
])
# any_order is needed: calls such as open().__enter__()
# are interleaved between the ones asserted here
open_.assert_has_calls([
mock.call('/fake_dir/ca_key.pem', 'w'),
mock.call('/fake_dir/ca_cert.pem', 'w'),
mock.call('/fake_dir/signing_key.pem', 'w'),
mock.call('/fake_dir/signing_cert.pem', 'w'),
], any_order=True)
cert_files = open_.return_value.__enter__.return_value
cert_files.write.assert_has_calls([
mock.call('mock_ca_key'),
mock.call('mock_ca_cert'),
mock.call('mock_signing_key'),
mock.call('mock_signing_cert'),
])
@mock.patch('os_cloud_config.keystone_pki.path.isfile', create=True)
@mock.patch('os_cloud_config.keystone_pki.create_ca_pair')
@mock.patch('os_cloud_config.keystone_pki.create_signing_pair')
@mock.patch('os_cloud_config.keystone_pki.open', create=True)
@mock.patch('os_cloud_config.keystone_pki.json.dump')
def test_generate_certs_into_json(
self, mock_json, open_, create_signing, create_ca, isfile):
create_ca.return_value = ('mock_ca_key', 'mock_ca_cert')
create_signing.return_value = ('mock_signing_key', 'mock_signing_cert')
isfile.return_value = False
keystone_pki.generate_certs_into_json('/jsonfile', False)
params = mock_json.call_args[0][0]['parameter_defaults']
self.assertEqual(params['KeystoneCACertificate'], 'mock_ca_cert')
self.assertEqual(params['KeystoneSigningKey'], 'mock_signing_key')
self.assertEqual(params['KeystoneSigningCertificate'],
'mock_signing_cert')
@mock.patch('os_cloud_config.keystone_pki.path.isfile', create=True)
@mock.patch('os_cloud_config.keystone_pki.create_ca_pair')
@mock.patch('os_cloud_config.keystone_pki.create_signing_pair')
@mock.patch('os_cloud_config.keystone_pki.open', create=True)
@mock.patch('os_cloud_config.keystone_pki.json.load')
@mock.patch('os_cloud_config.keystone_pki.json.dump')
def test_generate_certs_into_json_with_existing_certs(
self, mock_json_dump, mock_json_load, open_, create_signing,
create_ca, isfile):
create_ca.return_value = ('mock_ca_key', 'mock_ca_cert')
create_signing.return_value = ('mock_signing_key', 'mock_signing_cert')
isfile.return_value = True
mock_json_load.return_value = {
'parameter_defaults': {
'KeystoneCACertificate': 'mock_ca_cert',
'KeystoneSigningKey': 'mock_signing_key',
'KeystoneSigningCertificate': 'mock_signing_cert'
}
}
keystone_pki.generate_certs_into_json('/jsonfile', False)
mock_json_dump.assert_not_called()
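The two `generate_certs_into_json` tests above fix an idempotency contract: write all three Keystone PKI parameters on the first run, and do nothing when the JSON file already carries them. A sketch of that contract; `make_pairs` is a hypothetical stand-in for the create_ca_pair/create_signing_pair calls the real module made:

```python
import json
import os.path

_KEYS = ('KeystoneCACertificate', 'KeystoneSigningKey',
         'KeystoneSigningCertificate')


def generate_certs_into_json(jsonfile, overwrite, make_pairs):
    # Skip regeneration when the file already carries all three
    # parameters, unless an overwrite is requested.
    if not overwrite and os.path.isfile(jsonfile):
        with open(jsonfile) as f:
            params = json.load(f).get('parameter_defaults', {})
        if all(key in params for key in _KEYS):
            return
    ca_cert, signing_key, signing_cert = make_pairs()
    with open(jsonfile, 'w') as f:
        json.dump({'parameter_defaults':
                   dict(zip(_KEYS, (ca_cert, signing_key,
                                    signing_cert)))}, f)
```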


@ -1,269 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import mock
from os_cloud_config import neutron
from os_cloud_config.tests import base
class NeutronTest(base.TestCase):
def test_get_admin_tenant_id(self):
client = mock.MagicMock()
neutron._get_admin_tenant_id(client)
client.tenants.find.assert_called_once_with(name='admin')
def test_create_net_physical(self):
client = mock.MagicMock()
network = {'physical': {'name': 'ctlplane'}}
neutron._create_net(client, network, 'physical', 'admin_tenant')
physical_call = {'network': {'tenant_id': 'admin_tenant',
'provider:network_type': 'flat',
'name': 'ctlplane',
'provider:physical_network': 'ctlplane',
'admin_state_up': True}}
client.create_network.assert_called_once_with(physical_call)
def test_create_net_physical_vlan_tag(self):
client = mock.MagicMock()
network = {'physical': {'name': 'public',
'segmentation_id': '123',
'physical_network': 'ctlplane'}}
neutron._create_net(client, network, 'physical', 'admin_tenant')
physical_call = {'network': {'tenant_id': 'admin_tenant',
'provider:network_type': 'vlan',
'name': 'public',
'provider:physical_network': 'ctlplane',
'provider:segmentation_id': '123',
'admin_state_up': True}}
client.create_network.assert_called_once_with(physical_call)
def test_create_net_float(self):
client = mock.MagicMock()
network = {'float': {'name': 'default-net'}}
neutron._create_net(client, network, 'float', None)
float_call = {'network': {'shared': True,
'name': 'default-net',
'admin_state_up': True}}
client.create_network.assert_called_once_with(float_call)
def test_create_net_external(self):
client = mock.MagicMock()
network = {'external': {'name': 'ext-net'}}
neutron._create_net(client, network, 'external', None)
external_call = {'network': {'router:external': True,
'name': 'ext-net',
'admin_state_up': True}}
client.create_network.assert_called_once_with(external_call)
def test_create_subnet_physical(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
network = {'physical': {'name': 'ctlplane',
'cidr': '10.0.0.0/24',
'metadata_server': '10.0.0.1'}}
neutron._create_subnet(client, net, network, 'physical',
'admin_tenant')
host_routes = [{'nexthop': '10.0.0.1',
'destination': '169.254.169.254/32'}]
physical_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '10.0.0.0/24',
'host_routes': host_routes,
'tenant_id': 'admin_tenant'}}
client.create_subnet.assert_called_once_with(physical_call)
def test_create_subnet_float(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
network = {'float': {'name': 'default-net',
'cidr': '172.16.5.0/24'}}
neutron._create_subnet(client, net, network, 'float', None)
float_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '172.16.5.0/24',
'dns_nameservers': ['8.8.8.8']}}
client.create_subnet.assert_called_once_with(float_call)
def test_create_subnet_external(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
network = {'external': {'name': 'ext-net',
'cidr': '1.2.3.0/24'}}
neutron._create_subnet(client, net, network, 'external', None)
external_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '1.2.3.0/24',
'enable_dhcp': False}}
client.create_subnet.assert_called_once_with(external_call)
def test_create_subnet_with_gateway(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
network = {'external': {'name': 'ext-net',
'cidr': '1.2.3.0/24',
'gateway': '1.2.3.4'}}
neutron._create_subnet(client, net, network, 'external', None)
external_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '1.2.3.0/24',
'gateway_ip': '1.2.3.4',
'enable_dhcp': False}}
client.create_subnet.assert_called_once_with(external_call)
def test_create_subnet_with_allocation_pool(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
network = {'float': {'name': 'default-net',
'cidr': '172.16.5.0/24',
'allocation_start': '172.16.5.25',
'allocation_end': '172.16.5.40'}}
neutron._create_subnet(client, net, network, 'float', None)
float_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '172.16.5.0/24',
'dns_nameservers': ['8.8.8.8'],
'allocation_pools': [{'start': '172.16.5.25',
'end': '172.16.5.40'}]}}
client.create_subnet.assert_called_once_with(float_call)
def test_create_physical_subnet_with_extra_routes(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
routes = [{'destination': '2.3.4.0/24', 'nexthop': '172.16.6.253'}]
network = {'physical': {'name': 'ctlplane',
'cidr': '10.0.0.0/24',
'metadata_server': '10.0.0.1',
'extra_routes': routes}}
neutron._create_subnet(client, net, network, 'physical',
'admin_tenant')
host_routes = [{'nexthop': '10.0.0.1',
'destination': '169.254.169.254/32'}] + routes
physical_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '10.0.0.0/24',
'host_routes': host_routes,
'tenant_id': 'admin_tenant'}}
client.create_subnet.assert_called_once_with(physical_call)
def test_create_float_subnet_with_extra_routes(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
routes = [{'destination': '2.3.4.0/24', 'nexthop': '172.16.6.253'}]
network = {'float': {'name': 'default-net',
'cidr': '172.16.6.0/24',
'extra_routes': routes}}
neutron._create_subnet(client, net, network, 'float', None)
float_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '172.16.6.0/24',
'dns_nameservers': ['8.8.8.8'],
'host_routes': routes}}
client.create_subnet.assert_called_once_with(float_call)
def test_create_subnet_with_nameserver(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
network = {'float': {'name': 'default-net',
'cidr': '172.16.5.0/24',
'nameserver': '172.16.5.254'}}
neutron._create_subnet(client, net, network, 'float', None)
float_call = {'subnet': {'ip_version': 4,
'network_id': 'abcd',
'cidr': '172.16.5.0/24',
'dns_nameservers': ['172.16.5.254']}}
client.create_subnet.assert_called_once_with(float_call)
def test_create_subnet_with_no_dhcp(self):
client = mock.MagicMock()
net = {'network': {'id': 'abcd'}}
network = {'physical': {'name': 'ctlplane',
'cidr': '10.0.0.0/24',
'metadata_server': '10.0.0.1',
'enable_dhcp': False}}
neutron._create_subnet(client, net, network, 'physical', 'tenant')
host_routes = [{'nexthop': '10.0.0.1',
'destination': '169.254.169.254/32'}]
physical_call = {'subnet': {'ip_version': 4,
'enable_dhcp': False,
'network_id': 'abcd',
'cidr': '10.0.0.0/24',
'host_routes': host_routes,
'tenant_id': 'tenant'}}
client.create_subnet.assert_called_once_with(physical_call)
@mock.patch('os_cloud_config.cmd.utils._clients.get_neutron_client')
@mock.patch('os_cloud_config.cmd.utils._clients.get_keystone_client')
def test_initialize_neutron_physical(self, keystoneclient, neutronclient):
network_desc = {'physical': {'name': 'ctlplane',
'cidr': '10.0.0.0/24',
'metadata_server': '10.0.0.1'}}
tenant = collections.namedtuple('tenant', ['id'])
keystoneclient().tenants.find.return_value = tenant('dead-beef')
neutron.initialize_neutron(network_desc)
network_call = {'network': {'tenant_id': 'dead-beef',
'provider:network_type': 'flat',
'name': u'ctlplane',
'provider:physical_network': u'ctlplane',
'admin_state_up': True}}
host_routes = [{'nexthop': '10.0.0.1',
'destination': '169.254.169.254/32'}]
net_id = neutronclient().create_network.return_value['network']['id']
subnet_call = {'subnet': {'ip_version': 4,
'network_id': net_id,
'cidr': u'10.0.0.0/24',
'host_routes': host_routes,
'tenant_id': 'dead-beef'}}
neutronclient().create_network.assert_called_once_with(network_call)
neutronclient().create_subnet.assert_called_once_with(subnet_call)
@mock.patch('os_cloud_config.cmd.utils._clients.get_neutron_client')
@mock.patch('os_cloud_config.cmd.utils._clients.get_keystone_client')
def test_initialize_neutron_float_and_external(self, keystoneclient,
neutronclient):
network_desc = {'float': {'name': 'default-net',
'cidr': '172.16.5.0/24'},
'external': {'name': 'ext-net',
'cidr': '1.2.3.0/24'}}
tenant = collections.namedtuple('tenant', ['id'])
keystoneclient().tenants.find.return_value = tenant('dead-beef')
neutron.initialize_neutron(network_desc)
float_network = {'network': {'shared': True,
'name': 'default-net',
'admin_state_up': True}}
external_network = {'network': {'router:external': True,
'name': 'ext-net',
'admin_state_up': True}}
float_subnet = {'subnet': {'ip_version': 4,
'network_id': mock.ANY,
'cidr': '172.16.5.0/24',
'dns_nameservers': ['8.8.8.8']}}
external_subnet = {'subnet': {'ip_version': 4,
'network_id': mock.ANY,
'cidr': '1.2.3.0/24',
'enable_dhcp': False}}
router_call = {'router': {'name': 'default-router'}}
neutronclient().create_network.assert_has_calls(
[mock.call(float_network), mock.call(external_network)],
any_order=True)
neutronclient().create_subnet.assert_has_calls(
[mock.call(float_subnet), mock.call(external_subnet)],
any_order=True)
neutronclient().create_router.assert_called_once_with(router_call)
network = neutronclient().create_network.return_value
neutronclient().add_gateway_router.assert_called_once_with(
neutronclient().create_router.return_value['router']['id'],
{'network_id': network['network']['id']})
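The neutron tests above all assert against hand-built request bodies. As a standalone illustration, here is a hypothetical helper (not the real `os_cloud_config.neutron._create_net`) that reproduces the 'physical' network body the first two tests expect:

```python
# Hypothetical sketch reproducing the 'physical' network body asserted in
# test_create_net_physical and test_create_net_physical_vlan_tag above.
def build_physical_network_body(name, tenant_id, segmentation_id=None,
                                physical_network=None):
    body = {'tenant_id': tenant_id,
            'name': name,
            'admin_state_up': True,
            # The provider network defaults to the network's own name.
            'provider:physical_network': physical_network or name}
    if segmentation_id is None:
        body['provider:network_type'] = 'flat'
    else:
        body['provider:network_type'] = 'vlan'
        body['provider:segmentation_id'] = segmentation_id
    return {'network': body}
```

Passing a `segmentation_id` is what flips the network type from flat to vlan, mirroring the two test cases.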


@ -1,534 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
from ironicclient import exc as ironicexp
import mock
from testtools import matchers
from os_cloud_config import nodes
from os_cloud_config.tests import base
class NodesTest(base.TestCase):
def _get_node(self):
return {'cpu': '1', 'memory': '2048', 'disk': '30', 'arch': 'amd64',
'mac': ['aaa'], 'pm_addr': 'foo.bar', 'pm_user': 'test',
'pm_password': 'random', 'pm_type': 'pxe_ssh', 'name': 'node1',
'capabilities': 'num_nics:6'}
def test_register_list_of_nodes(self):
nodes_list = ['aaa', 'bbb']
return_node = nodes_list[0]
register_func = mock.MagicMock()
register_func.side_effect = [return_node, ironicexp.Conflict]
seen = nodes._register_list_of_nodes(register_func, {}, None,
nodes_list, False, 'servicehost',
None, None)
self.assertEqual(seen, set(nodes_list))
def test_extract_driver_info_ipmi(self):
node = self._get_node()
node["pm_type"] = "ipmi"
expected = {"ipmi_address": "foo.bar",
"ipmi_username": "test",
"ipmi_password": "random"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_ipmi_extended(self):
node = self._get_node()
node["pm_type"] = "ipmi"
node["ipmi_bridging"] = "dual"
node["ipmi_transit_address"] = "0x42"
node["ipmi_transit_channel"] = "0"
node["ipmi_target_address"] = "0x41"
node["ipmi_target_channel"] = "1"
node["ipmi_local_address"] = "0"
expected = {"ipmi_address": "foo.bar",
"ipmi_username": "test",
"ipmi_password": "random",
"ipmi_bridging": "dual",
"ipmi_transit_address": "0x42",
"ipmi_transit_channel": "0",
"ipmi_target_address": "0x41",
"ipmi_target_channel": "1",
"ipmi_local_address": "0",
}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_ssh(self):
node = self._get_node()
node["pm_type"] = "pxe_ssh"
expected = {"ssh_address": "foo.bar",
"ssh_username": "test",
"ssh_key_contents": "random",
"ssh_virt_type": "virsh"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_drac(self):
node = self._get_node()
node["pm_type"] = "pxe_drac"
expected = {"drac_host": "foo.bar",
"drac_username": "test",
"drac_password": "random"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_ssh_with_pm_virt_type(self):
node = self._get_node()
node["pm_type"] = "pxe_ssh"
node["pm_virt_type"] = "vbox"
expected = {"ssh_address": "foo.bar",
"ssh_username": "test",
"ssh_key_contents": "random",
"ssh_virt_type": "vbox"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_iboot(self):
node = self._get_node()
node["pm_type"] = "pxe_iboot_iscsi"
expected = {"iboot_address": "foo.bar",
"iboot_username": "test",
"iboot_password": "random"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_iboot_with_pm_relay_id(self):
node = self._get_node()
node["pm_type"] = "pxe_iboot_iscsi"
node["pm_relay_id"] = "pxe_iboot_id"
expected = {"iboot_address": "foo.bar",
"iboot_username": "test",
"iboot_password": "random",
"iboot_relay_id": "pxe_iboot_id"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_iboot_with_pm_port(self):
node = self._get_node()
node["pm_type"] = "pxe_iboot_iscsi"
node["pm_port"] = "8080"
expected = {"iboot_address": "foo.bar",
"iboot_username": "test",
"iboot_password": "random",
"iboot_port": "8080"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_ucs(self):
node = self._get_node()
node["pm_type"] = "pxe_ucs"
node["pm_service_profile"] = "foo_profile"
expected = {"ucs_address": "foo.bar",
"ucs_username": "test",
"ucs_password": "random",
"ucs_service_profile": "foo_profile"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_irmc(self):
node = self._get_node()
node["pm_type"] = "pxe_irmc"
expected = {"irmc_address": "foo.bar",
"irmc_username": "test",
"irmc_password": "random"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_irmc_with_irmc_port(self):
node = self._get_node()
node["pm_type"] = "pxe_irmc"
node["pm_port"] = "443"
expected = {"irmc_address": "foo.bar",
"irmc_username": "test",
"irmc_password": "random",
"irmc_port": "443"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_irmc_with_irmc_auth_method(self):
node = self._get_node()
node["pm_type"] = "pxe_irmc"
node["pm_auth_method"] = "baz_auth_method"
expected = {"irmc_address": "foo.bar",
"irmc_username": "test",
"irmc_password": "random",
"irmc_auth_method": "baz_auth_method"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_irmc_with_irmc_client_timeout(self):
node = self._get_node()
node["pm_type"] = "pxe_irmc"
node["pm_client_timeout"] = "60"
expected = {"irmc_address": "foo.bar",
"irmc_username": "test",
"irmc_password": "random",
"irmc_client_timeout": "60"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_irmc_with_irmc_sensor_method(self):
node = self._get_node()
node["pm_type"] = "pxe_irmc"
node["pm_sensor_method"] = "ipmitool"
expected = {"irmc_address": "foo.bar",
"irmc_username": "test",
"irmc_password": "random",
"irmc_sensor_method": "ipmitool"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_iscsi_irmc(self):
node = self._get_node()
node["pm_type"] = "iscsi_irmc"
node["pm_deploy_iso"] = "deploy.iso"
expected = {"irmc_address": "foo.bar",
"irmc_username": "test",
"irmc_password": "random",
"irmc_deploy_iso": "deploy.iso"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_agent_irmc(self):
node = self._get_node()
node["pm_type"] = "agent_irmc"
node["pm_deploy_iso"] = "deploy.iso"
expected = {"irmc_address": "foo.bar",
"irmc_username": "test",
"irmc_password": "random",
"irmc_deploy_iso": "deploy.iso"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_ipmi_with_kernel_ramdisk(self):
node = self._get_node()
node["pm_type"] = "pxe_ipmi"
node["kernel_id"] = "kernel-abc"
node["ramdisk_id"] = "ramdisk-foo"
expected = {"ipmi_address": "foo.bar",
"ipmi_username": "test",
"ipmi_password": "random",
"deploy_kernel": "kernel-abc",
"deploy_ramdisk": "ramdisk-foo"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_pxe_wol(self):
node = self._get_node()
node["pm_type"] = "pxe_wol"
expected = {"wol_host": "foo.bar"}
self.assertEqual(expected, nodes._extract_driver_info(node))
def test_extract_driver_info_unknown_type(self):
node = self._get_node()
node["pm_type"] = "unknown_type"
self.assertRaises(ValueError, nodes._extract_driver_info, node)
def test_register_all_nodes_ironic_no_hw_stats(self):
node_list = [self._get_node()]
# Remove the hardware stats from the node dictionary
node_list[0].pop("cpu")
node_list[0].pop("memory")
node_list[0].pop("disk")
node_list[0].pop("arch")
# Node properties should be created with empty string values for the
# hardware statistics
node_properties = {"capabilities": "num_nics:6"}
ironic = mock.MagicMock()
nodes.register_all_nodes('servicehost', node_list, client=ironic)
pxe_node_driver_info = {"ssh_address": "foo.bar",
"ssh_username": "test",
"ssh_key_contents": "random",
"ssh_virt_type": "virsh"}
pxe_node = mock.call(driver="pxe_ssh",
name='node1',
driver_info=pxe_node_driver_info,
properties=node_properties)
port_call = mock.call(node_uuid=ironic.node.create.return_value.uuid,
address='aaa')
power_off_call = mock.call(ironic.node.create.return_value.uuid, 'off')
ironic.node.create.assert_has_calls([pxe_node, mock.ANY])
ironic.port.create.assert_has_calls([port_call])
ironic.node.set_power_state.assert_has_calls([power_off_call])
def test_register_all_nodes_ironic(self):
node_list = [self._get_node()]
node_properties = {"cpus": "1",
"memory_mb": "2048",
"local_gb": "30",
"cpu_arch": "amd64",
"capabilities": "num_nics:6"}
ironic = mock.MagicMock()
nodes.register_all_nodes('servicehost', node_list, client=ironic)
pxe_node_driver_info = {"ssh_address": "foo.bar",
"ssh_username": "test",
"ssh_key_contents": "random",
"ssh_virt_type": "virsh"}
pxe_node = mock.call(driver="pxe_ssh",
name='node1',
driver_info=pxe_node_driver_info,
properties=node_properties)
port_call = mock.call(node_uuid=ironic.node.create.return_value.uuid,
address='aaa')
power_off_call = mock.call(ironic.node.create.return_value.uuid, 'off')
ironic.node.create.assert_has_calls([pxe_node, mock.ANY])
ironic.port.create.assert_has_calls([port_call])
ironic.node.set_power_state.assert_has_calls([power_off_call])
def test_register_all_nodes_ironic_kernel_ramdisk(self):
node_list = [self._get_node()]
node_properties = {"cpus": "1",
"memory_mb": "2048",
"local_gb": "30",
"cpu_arch": "amd64",
"capabilities": "num_nics:6"}
ironic = mock.MagicMock()
glance = mock.MagicMock()
image = collections.namedtuple('image', ['id'])
glance.images.find.side_effect = (image('kernel-123'),
image('ramdisk-999'))
nodes.register_all_nodes('servicehost', node_list, client=ironic,
glance_client=glance, kernel_name='bm-kernel',
ramdisk_name='bm-ramdisk')
pxe_node_driver_info = {"ssh_address": "foo.bar",
"ssh_username": "test",
"ssh_key_contents": "random",
"ssh_virt_type": "virsh",
"deploy_kernel": "kernel-123",
"deploy_ramdisk": "ramdisk-999"}
pxe_node = mock.call(driver="pxe_ssh",
name='node1',
driver_info=pxe_node_driver_info,
properties=node_properties)
port_call = mock.call(node_uuid=ironic.node.create.return_value.uuid,
address='aaa')
power_off_call = mock.call(ironic.node.create.return_value.uuid, 'off')
ironic.node.create.assert_has_calls([pxe_node, mock.ANY])
ironic.port.create.assert_has_calls([port_call])
ironic.node.set_power_state.assert_has_calls([power_off_call])
@mock.patch('time.sleep')
def test_register_ironic_node_retry(self, sleep):
ironic = mock.MagicMock()
ironic_node = collections.namedtuple('node', ['uuid'])
side_effect = (ironicexp.ConnectionRefused,
ironicexp.ServiceUnavailable, ironic_node('1'))
ironic.node.create.side_effect = side_effect
nodes.register_ironic_node(None, self._get_node(), client=ironic)
sleep.assert_has_calls([mock.call(10), mock.call(10)])
node_create = mock.call(driver='pxe_ssh',
name='node1',
driver_info=mock.ANY,
properties=mock.ANY)
ironic.node.create.assert_has_calls([node_create, node_create,
node_create])
@mock.patch('time.sleep')
def test_register_ironic_node_failure(self, sleep):
ironic = mock.MagicMock()
ironic.node.create.side_effect = ironicexp.ConnectionRefused
self.assertRaises(ironicexp.ServiceUnavailable,
nodes.register_ironic_node, None, self._get_node(),
client=ironic)
def test_register_ironic_node_update(self):
node = self._get_node()
ironic = mock.MagicMock()
node_map = {'mac': {'aaa': 1}}
def side_effect(*args, **kwargs):
update_patch = [
{'path': '/name', 'value': 'node1'},
{'path': '/driver_info/ssh_key_contents', 'value': 'random'},
{'path': '/driver_info/ssh_address', 'value': 'foo.bar'},
{'path': '/properties/memory_mb', 'value': '2048'},
{'path': '/properties/local_gb', 'value': '30'},
{'path': '/properties/cpu_arch', 'value': 'amd64'},
{'path': '/properties/cpus', 'value': '1'},
{'path': '/properties/capabilities', 'value': 'num_nics:6'},
{'path': '/driver_info/ssh_username', 'value': 'test'}]
for key in update_patch:
key['op'] = 'replace'
self.assertThat(update_patch,
matchers.MatchesSetwise(*(map(matchers.Equals,
args[1]))))
ironic.node.update.side_effect = side_effect
nodes._update_or_register_ironic_node(None, node, node_map,
client=ironic)
ironic.node.update.assert_called_once_with(
ironic.node.get.return_value.uuid, mock.ANY)
def _update_by_type(self, pm_type):
ironic = mock.MagicMock()
node_map = {'mac': {}, 'pm_addr': {}}
node = self._get_node()
node['pm_type'] = pm_type
node_map['pm_addr']['foo.bar'] = ironic.node.get.return_value.uuid
nodes._update_or_register_ironic_node('servicehost', node,
node_map, client=ironic)
ironic.node.update.assert_called_once_with(
ironic.node.get.return_value.uuid, mock.ANY)
def test_update_node_ironic_pxe_ipmitool(self):
self._update_by_type('pxe_ipmitool')
def test_update_node_ironic_pxe_drac(self):
self._update_by_type('pxe_drac')
def test_update_node_ironic_pxe_ilo(self):
self._update_by_type('pxe_ilo')
def test_update_node_ironic_pxe_irmc(self):
self._update_by_type('pxe_irmc')
def test_register_ironic_node_update_uppercase_mac(self):
node = self._get_node()
node['mac'][0] = node['mac'][0].upper()
ironic = mock.MagicMock()
node_map = {'mac': {'aaa': 1}}
def side_effect(*args, **kwargs):
update_patch = [
{'path': '/name', 'value': 'node1'},
{'path': '/driver_info/ssh_key_contents', 'value': 'random'},
{'path': '/driver_info/ssh_address', 'value': 'foo.bar'},
{'path': '/properties/memory_mb', 'value': '2048'},
{'path': '/properties/local_gb', 'value': '30'},
{'path': '/properties/cpu_arch', 'value': 'amd64'},
{'path': '/properties/cpus', 'value': '1'},
{'path': '/properties/capabilities', 'value': 'num_nics:6'},
{'path': '/driver_info/ssh_username', 'value': 'test'}]
for key in update_patch:
key['op'] = 'replace'
self.assertThat(update_patch,
matchers.MatchesSetwise(*(map(matchers.Equals,
args[1]))))
ironic.node.update.side_effect = side_effect
nodes._update_or_register_ironic_node(None, node, node_map,
client=ironic)
ironic.node.update.assert_called_once_with(
ironic.node.get.return_value.uuid, mock.ANY)
@mock.patch('time.sleep')
def test_register_ironic_node_update_locked_node(self, sleep):
node = self._get_node()
ironic = mock.MagicMock()
ironic.node.update.side_effect = ironicexp.Conflict
node_map = {'mac': {'aaa': 1}}
self.assertRaises(ironicexp.Conflict,
nodes._update_or_register_ironic_node, None, node,
node_map, client=ironic)
def test_register_ironic_node_int_values(self):
node_properties = {"cpus": "1",
"memory_mb": "2048",
"local_gb": "30",
"cpu_arch": "amd64",
"capabilities": "num_nics:6"}
node = self._get_node()
node['cpu'] = 1
node['memory'] = 2048
node['disk'] = 30
client = mock.MagicMock()
nodes.register_ironic_node('service_host', node, client=client)
client.node.create.assert_called_once_with(driver=mock.ANY,
name='node1',
properties=node_properties,
driver_info=mock.ANY)
def test_register_ironic_node_fake_pxe(self):
node_properties = {"cpus": "1",
"memory_mb": "2048",
"local_gb": "30",
"cpu_arch": "amd64",
"capabilities": "num_nics:6"}
node = self._get_node()
for v in ('pm_addr', 'pm_user', 'pm_password'):
del node[v]
node['pm_type'] = 'fake_pxe'
client = mock.MagicMock()
nodes.register_ironic_node('service_host', node, client=client)
client.node.create.assert_called_once_with(driver='fake_pxe',
name='node1',
properties=node_properties,
driver_info={})
def test_register_ironic_node_update_int_values(self):
node = self._get_node()
ironic = mock.MagicMock()
node['cpu'] = 1
node['memory'] = 2048
node['disk'] = 30
node_map = {'mac': {'aaa': 1}}
def side_effect(*args, **kwargs):
update_patch = [
{'path': '/name', 'value': 'node1'},
{'path': '/driver_info/ssh_key_contents', 'value': 'random'},
{'path': '/driver_info/ssh_address', 'value': 'foo.bar'},
{'path': '/properties/memory_mb', 'value': '2048'},
{'path': '/properties/local_gb', 'value': '30'},
{'path': '/properties/cpu_arch', 'value': 'amd64'},
{'path': '/properties/cpus', 'value': '1'},
{'path': '/properties/capabilities', 'value': 'num_nics:6'},
{'path': '/driver_info/ssh_username', 'value': 'test'}]
for key in update_patch:
key['op'] = 'replace'
self.assertThat(update_patch,
matchers.MatchesSetwise(*(map(matchers.Equals,
args[1]))))
ironic.node.update.side_effect = side_effect
nodes._update_or_register_ironic_node(None, node, node_map,
client=ironic)
def test_populate_node_mapping_ironic(self):
client = mock.MagicMock()
node1 = mock.MagicMock()
node1.to_dict.return_value = {'uuid': 'abcdef'}
node2 = mock.MagicMock()
node2.to_dict.return_value = {'uuid': 'fedcba'}
ironic_node = collections.namedtuple('node', ['uuid', 'driver',
'driver_info'])
ironic_port = collections.namedtuple('port', ['address'])
node1_detail = ironic_node('abcdef', 'pxe_ssh', None)
node2_detail = ironic_node('fedcba', 'ipmi',
{'ipmi_address': '10.0.1.2'})
client.node.get.side_effect = (node1_detail, node2_detail)
client.node.list_ports.return_value = [ironic_port('aaa')]
client.node.list.return_value = [node1, node2]
expected = {'mac': {'aaa': 'abcdef'},
'pm_addr': {'10.0.1.2': 'fedcba'}}
self.assertEqual(expected, nodes._populate_node_mapping(client))
def test_populate_node_mapping_ironic_fake_pxe(self):
client = mock.MagicMock()
node = mock.MagicMock()
node.to_dict.return_value = {'uuid': 'abcdef'}
ironic_node = collections.namedtuple('node', ['uuid', 'driver',
'driver_info'])
ironic_port = collections.namedtuple('port', ['address'])
node_detail = ironic_node('abcdef', 'fake_pxe', None)
client.node.get.return_value = node_detail
client.node.list_ports.return_value = [ironic_port('aaa')]
client.node.list.return_value = [node]
expected = {'mac': {'aaa': 'abcdef'}, 'pm_addr': {}}
self.assertEqual(expected, nodes._populate_node_mapping(client))
def test_clean_up_extra_nodes_ironic(self):
node = collections.namedtuple('node', ['uuid'])
client = mock.MagicMock()
client.node.list.return_value = [node('foobar')]
nodes._clean_up_extra_nodes(set(('abcd',)), client, remove=True)
client.node.delete.assert_called_once_with('foobar')
def test__get_node_id_fake_pxe(self):
node = self._get_node()
node['pm_type'] = 'fake_pxe'
node_map = {'mac': {'aaa': 'abcdef'}, 'pm_addr': {}}
self.assertEqual('abcdef', nodes._get_node_id(node, node_map))
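The long run of `_extract_driver_info` tests above boils down to a per-driver field-name mapping. A minimal standalone sketch of that mapping (hypothetical, covering only three drivers, unlike the full `os_cloud_config.nodes` implementation):

```python
# Hypothetical sketch of the pm_* -> driver_info translation exercised by
# the _extract_driver_info tests above; ipmi, pxe_ssh and pxe_drac only.
def extract_driver_info(node):
    pm_type = node['pm_type']
    if pm_type == 'ipmi':
        return {'ipmi_address': node['pm_addr'],
                'ipmi_username': node['pm_user'],
                'ipmi_password': node['pm_password']}
    if pm_type == 'pxe_ssh':
        return {'ssh_address': node['pm_addr'],
                'ssh_username': node['pm_user'],
                # For ssh drivers the pm_password carries the key contents.
                'ssh_key_contents': node['pm_password'],
                # pm_virt_type, when present, overrides the virsh default.
                'ssh_virt_type': node.get('pm_virt_type', 'virsh')}
    if pm_type == 'pxe_drac':
        return {'drac_host': node['pm_addr'],
                'drac_username': node['pm_user'],
                'drac_password': node['pm_password']}
    raise ValueError('unknown pm_type: %s' % pm_type)
```

An unrecognized `pm_type` raises `ValueError`, matching `test_extract_driver_info_unknown_type`.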


@ -1,111 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import glanceclient
from ironicclient import client as ironicclient
from keystoneclient.auth.identity import v2
from keystoneclient import session
from keystoneclient.v2_0 import client as ksclient
from keystoneclient.v3 import client as ks3client
from neutronclient.neutron import client as neutronclient
from novaclient import client as nova_client
from novaclient.extension import Extension
from novaclient.v2.contrib import baremetal
LOG = logging.getLogger(__name__)
def get_nova_bm_client(username, password, tenant_name, auth_url, cacert=None):
LOG.debug('Creating nova client.')
baremetal_extension = Extension('baremetal', baremetal)
return nova_client.Client("2", username, password, tenant_name, auth_url,
extensions=[baremetal_extension], cacert=cacert)
def get_ironic_client(username, password, tenant_name, auth_url, cacert=None):
LOG.debug('Creating ironic client.')
kwargs = {'os_username': username,
'os_password': password,
'os_auth_url': auth_url,
'os_tenant_name': tenant_name,
'ca_file': cacert}
return ironicclient.get_client(1, **kwargs)
def get_keystone_client(username,
password,
tenant_name,
auth_url,
cacert=None):
LOG.debug('Creating keystone client.')
kwargs = {'username': username,
'password': password,
'tenant_name': tenant_name,
'auth_url': auth_url,
'cacert': cacert}
return ksclient.Client(**kwargs)
def get_keystone_v3_client(username,
password,
tenant_name,
auth_url,
cacert=None):
LOG.debug('Creating keystone v3 client.')
kwargs = {'username': username,
'password': password,
'tenant_name': tenant_name,
'auth_url': auth_url.replace('v2.0', 'v3'),
'cacert': cacert}
return ks3client.Client(**kwargs)
def get_neutron_client(username,
password,
tenant_name,
auth_url,
cacert=None):
LOG.debug('Creating neutron client.')
kwargs = {'username': username,
'password': password,
'tenant_name': tenant_name,
'auth_url': auth_url,
'ca_cert': cacert}
neutron = neutronclient.Client('2.0', **kwargs)
neutron.format = 'json'
return neutron
def get_glance_client(username, password, tenant_name, auth_url, cacert=None,
region_name='regionOne'):
LOG.debug('Creating Keystone session to fetch Glance endpoint.')
auth = v2.Password(auth_url=auth_url, username=username, password=password,
tenant_name=tenant_name)
ks_session = session.Session(auth=auth)
endpoint = ks_session.get_endpoint(service_type='image',
interface='public',
region_name=region_name)
token = ks_session.get_token()
LOG.debug('Creating glance client.')
return glanceclient.Client('1', endpoint=endpoint, token=token,
cacert=cacert)


@ -1,107 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from os_cloud_config.tests import base
from os_cloud_config.utils import clients
class ClientsTest(base.TestCase):
@mock.patch('ironicclient.client.get_client')
def test_get_ironic_client(self, client_mock):
clients.get_ironic_client('username', 'password', 'tenant_name',
'auth_url')
client_mock.assert_called_once_with(
1, os_username='username',
os_password='password',
os_auth_url='auth_url',
os_tenant_name='tenant_name',
ca_file=None)
@mock.patch('novaclient.client.Client')
def test_get_nova_bm_client(self, client_mock):
clients.get_nova_bm_client('username', 'password', 'tenant_name',
'auth_url')
client_mock.assert_called_once_with('2', 'username',
'password',
'tenant_name',
'auth_url',
cacert=None,
extensions=[mock.ANY])
@mock.patch('keystoneclient.v2_0.client.Client')
def test_get_keystone_client(self, client_mock):
clients.get_keystone_client('username', 'password', 'tenant_name',
'auth_url')
client_mock.assert_called_once_with(
username='username',
password='password',
auth_url='auth_url',
tenant_name='tenant_name',
cacert=None)
@mock.patch('keystoneclient.v3.client.Client')
def test_get_keystone_v3_client_with_v2_url(self, client_mock):
clients.get_keystone_v3_client('username', 'password', 'tenant_name',
'auth_url/v2.0')
client_mock.assert_called_once_with(
username='username',
password='password',
auth_url='auth_url/v3',
tenant_name='tenant_name',
cacert=None)
@mock.patch('keystoneclient.v3.client.Client')
def test_get_keystone_v3_client_with_v3_url(self, client_mock):
clients.get_keystone_v3_client('username', 'password', 'tenant_name',
'auth_url/v3')
client_mock.assert_called_once_with(
username='username',
password='password',
auth_url='auth_url/v3',
tenant_name='tenant_name',
cacert=None)
@mock.patch('neutronclient.neutron.client.Client')
def test_get_neutron_client(self, client_mock):
clients.get_neutron_client('username', 'password', 'tenant_name',
'auth_url')
client_mock.assert_called_once_with(
'2.0', username='username',
password='password',
auth_url='auth_url',
tenant_name='tenant_name',
ca_cert=None)
@mock.patch('keystoneclient.session.Session')
@mock.patch('keystoneclient.auth.identity.v2.Password')
@mock.patch('glanceclient.Client')
def test_get_glance_client(self, client_mock, password_mock, session_mock):
clients.get_glance_client('username', 'password', 'tenant_name',
'auth_url')
password_mock.assert_called_once_with(auth_url='auth_url',
username='username',
password='password',
tenant_name='tenant_name')
session_mock.assert_called_once_with(auth=password_mock.return_value)
session_mock.return_value.get_endpoint.assert_called_once_with(
service_type='image', interface='public', region_name='regionOne')
session_mock.return_value.get_token.assert_called_once_with()
client_mock.assert_called_once_with(
'1', endpoint=session_mock.return_value.get_endpoint.return_value,
token=session_mock.return_value.get_token.return_value,
cacert=None)


@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.8 # Apache-2.0
Babel>=2.3.4 # BSD
python-glanceclient>=2.5.0 # Apache-2.0
python-ironicclient>=1.9.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-neutronclient>=5.1.0 # Apache-2.0
python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
oslo.config!=3.18.0,>=3.14.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
pyOpenSSL>=0.14 # Apache-2.0
six>=1.9.0 # MIT


setup.cfg

@ -1,56 +0,0 @@
[metadata]
name = os-cloud-config
summary = Configuration for OpenStack clouds.
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
[files]
packages =
os_cloud_config
[entry_points]
console_scripts =
generate-keystone-pki = os_cloud_config.cmd.generate_keystone_pki:main
init-keystone = os_cloud_config.cmd.init_keystone:main
init-keystone-heat-domain = os_cloud_config.cmd.init_keystone_heat_domain:main
register-nodes = os_cloud_config.cmd.register_nodes:main
setup-endpoints = os_cloud_config.cmd.setup_endpoints:main
setup-flavors = os_cloud_config.cmd.setup_flavors:main
setup-neutron = os_cloud_config.cmd.setup_neutron:main
upload-kernel-ramdisk = os_cloud_config.cmd.upload_kernel_ramdisk:main
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = os_cloud_config/locale
domain = os-cloud-config
[update_catalog]
domain = os-cloud-config
output_dir = os_cloud_config/locale
input_file = os_cloud_config/locale/os-cloud-config.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = os_cloud_config/locale/os-cloud-config.pot
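Each `console_scripts` line above maps a command name to a `module:function` spec; at install time pbr/setuptools generates a launcher that imports the module and calls the function. A rough sketch of that resolution step (the helper name is hypothetical):

```python
import importlib


def resolve_entry_point(spec):
    """Resolve a 'pkg.module:func' console_scripts spec to a callable."""
    module_name, func_name = spec.split(':')
    module = importlib.import_module(module_name)
    return getattr(module, func_name)


# e.g. the generated `register-nodes` script effectively did:
#     main = resolve_entry_point('os_cloud_config.cmd.register_nodes:main')
#     sys.exit(main())
main = resolve_entry_point('json:dumps')  # stdlib stand-in for the demo
print(main({'status': 'ok'}))  # → {"status": "ok"}
```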


setup.py

@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)


test-requirements.txt

@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.10,>=0.9.2
coverage>=4.0 # Apache-2.0
docutils>=0.11 # OSI-Approved Open Source, Public Domain
fixtures>=3.0.0 # Apache-2.0/BSD
mock>=2.0 # BSD
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx>=1.5.1 # BSD
oslosphinx>=4.7.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT


tools/tox_install.sh

@ -1,30 +0,0 @@
#!/usr/bin/env bash
# The client constraint file contains a version pin for this client that
# conflicts with installing the client from source. We should remove the
# version pin from the constraints file before applying it for a
# from-source installation.
CONSTRAINTS_FILE="$1"
shift 1
set -e
# NOTE(tonyb): Place this in the tox environment's log dir so it will get
# published to logs.openstack.org for easy debugging.
localfile="$VIRTUAL_ENV/log/upper-constraints.txt"
if [[ "$CONSTRAINTS_FILE" != http* ]]; then
CONSTRAINTS_FILE="file://$CONSTRAINTS_FILE"
fi
# NOTE(tonyb): need to add curl to bindep.txt if the project supports bindep
curl "$CONSTRAINTS_FILE" --insecure --progress-bar --output "$localfile"
pip install -c"$localfile" openstack-requirements
# This is the main purpose of the script: Allow local installation of
# the current repo. It is listed in constraints file and thus any
# install will be constrained and we need to unconstrain it.
edit-constraints "$localfile" -- "$CLIENT_NAME"
pip install -c"$localfile" -U "$@"
exit $?
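The key step in the script is `edit-constraints`, which drops the os-cloud-config pin from the downloaded constraints file so the repository under test can be installed from source. Conceptually it amounts to filtering that package's line out of the file (a simplified sketch, not the real openstack-requirements implementation):

```python
def unconstrain(constraints_text, package):
    """Remove any pin for `package` from pip constraints file content."""
    kept = []
    for line in constraints_text.splitlines():
        # Constraint lines look like 'name===1.2.3'; compare the name part.
        name = line.split('=')[0].strip().lower()
        if name != package.lower():
            kept.append(line)
    return '\n'.join(kept)


text = "os-cloud-config===0.4.0\nsix===1.10.0"
print(unconstrain(text, 'os-cloud-config'))  # → six===1.10.0
```

With the pin removed, the subsequent `pip install -c"$localfile" -U "$@"` can install the checked-out repo while every other dependency stays constrained.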

tox.ini

@ -1,42 +0,0 @@
[tox]
minversion = 2.0
envlist = py34,py27,pypy,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
BRANCH_NAME=master
CLIENT_NAME=os-cloud-config
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:pep8]
commands = flake8
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:cover]
commands = python setup.py test --coverage --coverage-package-name='os_cloud_config' --testr-args='{posargs}'
[flake8]
# H302 skipped on purpose per IRC discussion involving other TripleO projects.
# H803 skipped on purpose per list discussion.
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125,H302,H803,C901
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build
max-complexity=16
[hacking]
import_exceptions =
os_cloud_config._i18n