Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that notes where ongoing
work can be found and how to recover the repo if needed at
some future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Ifbec28e1510fd2fa5a48b471458c16b5a1ef8ad3
Tony Breeds 2017-09-12 16:00:22 -06:00
parent 36109e58f2
commit 78e1ad153b
36 changed files with 14 additions and 5134 deletions

.gitignore

@@ -1,35 +0,0 @@
# Compiled files
*.py[co]
*.a
*.o
*.so
# Sphinx
_build
doc/source/api/
# Packages/installer info
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
# Other
*.DS_Store
.testrepository
.tox
.venv
.*.swp
.coverage
cover
AUTHORS
ChangeLog
*.sqlite
*~
.idea

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/ironic-lib.git

@@ -1,10 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
OS_DEBUG=${OS_DEBUG:-0} \
${PYTHON:-python} -m subunit.run discover -t ./ $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,10 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/ironic-lib

LICENSE

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

README

@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,23 +0,0 @@
----------
ironic_lib
----------
Overview
--------
A common library to be used **exclusively** by projects under the `Ironic
governance <http://governance.openstack.org/reference/projects/ironic.html>`_.
Running Tests
-------------
To run tests in virtualenvs (preferred)::
sudo pip install tox
tox
To run tests in the current environment::
sudo pip install -r requirements.txt
nosetests

@@ -1,88 +0,0 @@
===========================
Testing Your OpenStack Code
===========================
------------
A Quickstart
------------
This is designed to be enough information for you to run your first tests.
Detailed information on testing can be found here: https://wiki.openstack.org/wiki/Testing
*Install pip*::
[apt-get | yum] install python-pip
More information on pip here: http://www.pip-installer.org/en/latest/
*Use pip to install tox*::
pip install tox
Run The Tests
-------------
*Navigate to the project's root directory and execute*::
tox
Note: completing this command may take a long time (depending on system
resources); also, you might not see any output until tox is complete.
Information about tox can be found here: http://testrun.org/tox/latest/
Run The Tests in One Environment
--------------------------------
Tox will run your entire test suite in the environments specified in the project tox.ini::
[tox]
envlist = <list of available environments>
To run the test suite in just one of the environments in envlist execute::
tox -e <env>
so for example, *run the test suite in py26*::
tox -e py26
Run One Test
------------
To run individual tests with tox:
if testr is in tox.ini, for example::
[testenv]
includes "python setup.py testr --slowest --testr-args='{posargs}'"
run individual tests with the following syntax::
tox -e <env> -- path.to.module:Class.test
so for example, *run the cpu_limited test in Nova*::
tox -e py27 -- nova.tests.test_claims:ClaimTestCase.test_cpu_unlimited
if nose is in tox.ini, for example::
[testenv]
includes "nosetests {posargs}"
run individual tests with the following syntax::
tox -e <env> -- --tests path.to.module:Class.test
so for example, *run the list test in Glance*::
tox -e py27 -- --tests glance.tests.unit.test_auth.py:TestImageRepoProxy.test_list
Need More Info?
---------------
More information about testr: https://wiki.openstack.org/wiki/Testr
More information about nose: https://nose.readthedocs.org/en/latest/
More information about testing OpenStack code can be found here:
https://wiki.openstack.org/wiki/Testing

@@ -1,79 +0,0 @@
# -*- coding: utf-8 -*-
#
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'oslosphinx',
]
wsme_protocols = ['restjson']
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Ironic Lib'
copyright = u'OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from ironic_lib import version as il_version
# The full version, including alpha/beta/rc tags.
release = il_version.version_info.release_string()
# The short X.Y version.
version = il_version.version_info.version_string()
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['ironic_lib']
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
#html_theme_path = ["."]
#html_theme = '_theme'
#html_static_path = ['_static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
(
'index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation',
'manual'
),
]

@@ -1,88 +0,0 @@
======================
Welcome to Ironic-lib!
======================
Overview
========
Ironic-lib is a library for use by projects under Bare Metal governance only.
This documentation is intended for developer use only. If you are looking for
documentation for deployers, please see the
`ironic documentation <http://docs.openstack.org/developer/ironic/#administrator-s-guide>`_.
Metrics
=======
Ironic-lib provides a pluggable metrics library as of the 2.0.0 release.
Current provided backends are the default, 'noop', which discards all data,
and 'statsd', which emits metrics to a statsd daemon over the network. The
metrics backend to be used is configured via ``CONF.metrics.backend``. How
this configuration is set in practice may vary by project.
The typical usage of metrics is to initialize and cache a metrics logger,
using the `get_metrics_logger()` method in `ironic_lib.metrics_utils`, then
use that object to decorate functions or create context managers to gather
metrics. The general convention is to provide the name of the module as the
first argument to set it as the prefix, then set the actual metric name to the
method name. For example::
from ironic_lib import metrics_utils
METRICS = metrics_utils.get_metrics_logger(__name__)
@METRICS.timer('my_simple_method')
def my_simple_method(arg, matey):
pass
def my_complex_method(arg, matey):
with METRICS.timer('complex_method_pt_1'):
do_some_work()
with METRICS.timer('complex_method_pt_2'):
do_more_work()
There are three different kinds of metrics:
- **Timers** measure how long the code in the decorated method or context
manager takes to execute, and emits the value as a timer metric. These
are useful for measuring performance of a given block of code.
- **Counters** increment a counter each time a decorated method or context
manager is executed. These are useful for counting the number of times a
method is called, or the number of times an event occurs.
- **Gauges** return the value of a decorated method as a metric. This is
useful when you want to monitor the value returned by a method over time.
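The decorator pattern these metric types rely on can be illustrated with a
self-contained sketch. This is a toy in-memory logger, not the real ironic_lib
API; the class name ``SketchMetricsLogger`` and its storage dict are invented
here for illustration only:

```python
# Toy in-memory metrics logger illustrating timer and counter decorators.
# NOT the real ironic_lib implementation; names are hypothetical.
import time
from functools import wraps


class SketchMetricsLogger:
    """Records timer and counter metrics in a dict instead of emitting them."""

    def __init__(self, prefix):
        self.prefix = prefix
        self.data = {}

    def timer(self, name):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                start = time.monotonic()
                try:
                    return func(*args, **kwargs)
                finally:
                    # Record the elapsed time under "<prefix>.<name>".
                    self.data['%s.%s' % (self.prefix, name)] = (
                        time.monotonic() - start)
            return wrapper
        return decorator

    def counter(self, name):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                # Increment the call count under "<prefix>.<name>".
                key = '%s.%s' % (self.prefix, name)
                self.data[key] = self.data.get(key, 0) + 1
                return func(*args, **kwargs)
            return wrapper
        return decorator


METRICS = SketchMetricsLogger('ironic_lib.example')


@METRICS.timer('my_simple_method')
@METRICS.counter('my_simple_method.calls')
def my_simple_method(arg, matey):
    return arg


my_simple_method(1, 2)
my_simple_method(3, 4)
```

Gauges would follow the same shape, storing the decorated method's return
value instead of a duration or a count.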
Additionally, metrics can be sent directly, rather than using a context
manager or decorator, when appropriate. When used in this way, ironic-lib will
simply emit the value provided as the requested metric type. For example::
from ironic_lib import metrics_utils
METRICS = metrics_utils.get_metrics_logger(__name__)
def my_node_failure_method(node):
if node.failed:
METRICS.send_counter(node.uuid, 1)
The provided statsd backend natively supports all three metric types. For more
information about how statsd changes behavior based on the metric type, see
`statsd metric types <https://github.com/etsy/statsd/blob/master/docs/metric_types.md>`_.
Generated Developer Documentation
=================================
.. toctree::
:maxdepth: 1
api/autoindex
References
==========
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,26 +0,0 @@
# An ironic-lib.filters to be used with rootwrap command.
# The following commands should be used in filters for disk manipulation.
# This file should be owned by (and only-writeable by) the root user.
# NOTE:
# if you update this file, you will also need to adjust the
# ironic-lib.filters from the ironic module.
[Filters]
# ironic_lib/disk_utils.py
blkid: CommandFilter, blkid, root
blockdev: CommandFilter, blockdev, root
hexdump: CommandFilter, hexdump, root
qemu-img: CommandFilter, qemu-img, root
wipefs: CommandFilter, wipefs, root
sgdisk: CommandFilter, sgdisk, root
partprobe: CommandFilter, partprobe, root
# ironic_lib/utils.py
mkswap: CommandFilter, mkswap, root
mkfs: CommandFilter, mkfs, root
dd: CommandFilter, dd, root
# ironic_lib/disk_partitioner.py
fuser: CommandFilter, fuser, root
parted: CommandFilter, parted, root

@@ -1,22 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This ensures the ironic_lib namespace is defined
try:
import pkg_resources
pkg_resources.declare_namespace(__name__)
except ImportError:
import pkgutil
__path__ = pkgutil.extend_path(__path__, __name__)

@@ -1,31 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n as i18n
_translators = i18n.TranslatorFactory(domain='ironic-lib')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical

@@ -1,183 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import loopingcall
from ironic_lib.common.i18n import _
from ironic_lib.common.i18n import _LW
from ironic_lib import exception
from ironic_lib import utils
opts = [
cfg.IntOpt('check_device_interval',
default=1,
help='After Ironic has completed creating the partition table, '
'it continues to check for activity on the attached iSCSI '
'device status at this interval prior to copying the image'
' to the node, in seconds'),
cfg.IntOpt('check_device_max_retries',
default=20,
help='The maximum number of times to check that the device is '
'not accessed by another process. If the device is still '
'busy after that, the disk partitioning will be treated as'
' having failed.')
]
CONF = cfg.CONF
opt_group = cfg.OptGroup(name='disk_partitioner',
title='Options for the disk partitioner')
CONF.register_group(opt_group)
CONF.register_opts(opts, opt_group)
LOG = logging.getLogger(__name__)
class DiskPartitioner(object):
def __init__(self, device, disk_label='msdos', alignment='optimal'):
"""A convenient wrapper around the parted tool.
:param device: The device path.
:param disk_label: The type of the partition table. Valid types are:
"bsd", "dvh", "gpt", "loop", "mac", "msdos",
"pc98", or "sun".
:param alignment: Set alignment for newly created partitions.
Valid types are: none, cylinder, minimal and
optimal.
"""
self._device = device
self._disk_label = disk_label
self._alignment = alignment
self._partitions = []
self._fuser_pids_re = re.compile(r'((\d)+\s*)+')
def _exec(self, *args):
# NOTE(lucasagomes): utils.execute() is already a wrapper on top
# of processutils.execute() which raises specific
# exceptions. It also logs any failure so we don't
# need to log it again here.
utils.execute('parted', '-a', self._alignment, '-s', self._device,
'--', 'unit', 'MiB', *args, check_exit_code=[0],
use_standard_locale=True, run_as_root=True)
def add_partition(self, size, part_type='primary', fs_type='',
boot_flag=None):
"""Add a partition.
:param size: The size of the partition in MiB.
:param part_type: The type of the partition. Valid values are:
primary, logical, or extended.
:param fs_type: The filesystem type. Valid types are: ext2, fat32,
fat16, HFS, linux-swap, NTFS, reiserfs, ufs.
If blank (''), it will create a Linux native
partition (83).
:param boot_flag: Boot flag that needs to be configured on the
partition. Ignored if None. It can take values
'bios_grub', 'boot'.
:returns: The partition number.
"""
self._partitions.append({'size': size,
'type': part_type,
'fs_type': fs_type,
'boot_flag': boot_flag})
return len(self._partitions)
def get_partitions(self):
"""Get the partitioning layout.
:returns: An iterator with the partition number and the
partition layout.
"""
return enumerate(self._partitions, 1)
def _wait_for_disk_to_become_available(self, retries, max_retries, pids,
stderr):
retries[0] += 1
if retries[0] > max_retries:
raise loopingcall.LoopingCallDone()
try:
# NOTE(ifarkas): fuser returns a non-zero return code if none of
# the specified files is accessed
out, err = utils.execute('fuser', self._device,
check_exit_code=[0, 1], run_as_root=True)
if not out and not err:
raise loopingcall.LoopingCallDone()
else:
if err:
stderr[0] = err
if out:
pids_match = re.search(self._fuser_pids_re, out)
pids[0] = pids_match.group()
except processutils.ProcessExecutionError as exc:
LOG.warning(_LW('Failed to check the device %(device)s with fuser:'
' %(err)s'), {'device': self._device, 'err': exc})
def commit(self):
"""Write to the disk."""
LOG.debug("Committing partitions to disk.")
cmd_args = ['mklabel', self._disk_label]
# NOTE(lucasagomes): Lead in with 1MiB to allow room for the
# partition table itself.
start = 1
for num, part in self.get_partitions():
end = start + part['size']
cmd_args.extend(['mkpart', part['type'], part['fs_type'],
str(start), str(end)])
if part['boot_flag']:
cmd_args.extend(['set', str(num), part['boot_flag'], 'on'])
start = end
self._exec(*cmd_args)
retries = [0]
pids = ['']
fuser_err = ['']
interval = CONF.disk_partitioner.check_device_interval
max_retries = CONF.disk_partitioner.check_device_max_retries
timer = loopingcall.FixedIntervalLoopingCall(
self._wait_for_disk_to_become_available,
retries, max_retries, pids, fuser_err)
timer.start(interval=interval).wait()
if retries[0] > max_retries:
if pids[0]:
raise exception.InstanceDeployFailure(
_('Disk partitioning failed on device %(device)s. '
'Processes with the following PIDs are holding it: '
'%(pids)s. Time out waiting for completion.')
% {'device': self._device, 'pids': pids[0]})
else:
raise exception.InstanceDeployFailure(
_('Disk partitioning failed on device %(device)s. Fuser '
'exited with "%(fuser_err)s". Time out waiting for '
'completion.')
% {'device': self._device, 'fuser_err': fuser_err[0]})
def list_opts():
"""Entry point for oslo-config-generator."""
return [('disk_partitioner', opts)]
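The offset arithmetic that ``commit()`` performs can be sketched standalone.
``build_parted_args`` is a hypothetical helper, introduced here only to show
the layout logic; it mirrors the loop above without invoking parted:

```python
# Hypothetical sketch of the offset arithmetic in DiskPartitioner.commit():
# partitions are laid end-to-end starting at 1 MiB (leaving room for the
# partition table), and each boot flag becomes a 'set <num> <flag> on' arg.
def build_parted_args(disk_label, partitions):
    cmd_args = ['mklabel', disk_label]
    start = 1
    for num, part in enumerate(partitions, 1):
        end = start + part['size']
        cmd_args.extend(['mkpart', part['type'], part['fs_type'],
                         str(start), str(end)])
        if part.get('boot_flag'):
            cmd_args.extend(['set', str(num), part['boot_flag'], 'on'])
        start = end
    return cmd_args


args = build_parted_args('msdos', [
    {'size': 512, 'type': 'primary', 'fs_type': 'linux-swap',
     'boot_flag': None},
    {'size': 4096, 'type': 'primary', 'fs_type': 'ext2',
     'boot_flag': 'boot'},
])
# First partition spans 1-513 MiB, the second 513-4609 MiB.
```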

@@ -1,765 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import gzip
import logging
import math
import os
import re
import requests
import shutil
import six
import stat
import tempfile
import time
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import excutils
from oslo_utils import imageutils
from oslo_utils import units
from ironic_lib.common.i18n import _
from ironic_lib.common.i18n import _LE
from ironic_lib.common.i18n import _LI
from ironic_lib.common.i18n import _LW
from ironic_lib import disk_partitioner
from ironic_lib import exception
from ironic_lib import utils
opts = [
cfg.IntOpt('efi_system_partition_size',
default=200,
help='Size of EFI system partition in MiB when configuring '
'UEFI systems for local boot.'),
cfg.IntOpt('bios_boot_partition_size',
default=1,
help='Size of BIOS Boot partition in MiB when configuring '
'GPT partitioned systems for local boot in BIOS.'),
cfg.StrOpt('dd_block_size',
default='1M',
help='Block size to use when writing to the nodes disk.'),
cfg.IntOpt('iscsi_verify_attempts',
default=3,
help='Maximum attempts to verify an iSCSI connection is '
'active, sleeping 1 second between attempts.'),
]
CONF = cfg.CONF
CONF.register_opts(opts, group='disk_utils')
LOG = logging.getLogger(__name__)
_PARTED_PRINT_RE = re.compile(r"^(\d+):([\d\.]+)MiB:"
r"([\d\.]+)MiB:([\d\.]+)MiB:(\w*)::(\w*)")
CONFIGDRIVE_LABEL = "config-2"
MAX_CONFIG_DRIVE_SIZE_MB = 64
# Maximum disk size supported by MBR is 2TB (2 * 1024 * 1024 MB)
MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR = 2097152
def list_partitions(device):
"""Get partitions information from given device.
:param device: The device path.
:returns: list of dictionaries (one per partition) with keys:
number, start, end, size (in MiB), filesystem, flags
"""
output = utils.execute(
'parted', '-s', '-m', device, 'unit', 'MiB', 'print',
use_standard_locale=True, run_as_root=True)[0]
if isinstance(output, bytes):
output = output.decode("utf-8")
lines = [line for line in output.split('\n') if line.strip()][2:]
# Example of line: 1:1.00MiB:501MiB:500MiB:ext4::boot
fields = ('number', 'start', 'end', 'size', 'filesystem', 'flags')
result = []
for line in lines:
match = _PARTED_PRINT_RE.match(line)
if match is None:
LOG.warning(_LW("Partition information from parted for device "
"%(device)s does not match "
"expected format: %(line)s"),
dict(device=device, line=line))
continue
# Cast int fields to ints (some are floats and we round them down)
groups = [int(float(x)) if i < 4 else x
for i, x in enumerate(match.groups())]
result.append(dict(zip(fields, groups)))
return result
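The machine-readable `parted` line format handled above can be exercised in isolation. This sketch mirrors the module's `_PARTED_PRINT_RE` and the int-casting step; the sample line is a hypothetical example, not real device output:

```python
import re

# Mirrors _PARTED_PRINT_RE: number:start:end:size:filesystem::flags (MiB units)
PARTED_RE = re.compile(r"^(\d+):([\d\.]+)MiB:"
                       r"([\d\.]+)MiB:([\d\.]+)MiB:(\w*)::(\w*)")


def parse_parted_line(line):
    """Parse one 'parted -s -m <dev> unit MiB print' line into a dict."""
    fields = ('number', 'start', 'end', 'size', 'filesystem', 'flags')
    match = PARTED_RE.match(line)
    if match is None:
        return None
    # The first four fields are numeric; parted may print floats, so
    # round them down to whole MiB, as list_partitions does.
    groups = [int(float(x)) if i < 4 else x
              for i, x in enumerate(match.groups())]
    return dict(zip(fields, groups))


print(parse_parted_line("1:1.00MiB:501MiB:500MiB:ext4::boot"))
```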
def get_disk_identifier(dev):
"""Get the disk identifier from the disk being exposed by the ramdisk.
This disk identifier is appended to the pxe config which will then be
used by chain.c32 to detect the correct disk to chainload. This is helpful
in deployments to nodes with multiple disks.
http://www.syslinux.org/wiki/index.php/Comboot/chain.c32#mbr:
:param dev: Path for the already populated disk device.
:returns: The Disk Identifier.
"""
disk_identifier = utils.execute('hexdump', '-s', '440', '-n', '4',
'-e', '''\"0x%08x\"''',
dev,
run_as_root=True,
check_exit_code=[0],
attempts=5,
delay_on_retry=True)
return disk_identifier[0]
def is_iscsi_device(dev, node_uuid):
"""check whether the device path belongs to an iscsi device. """
iscsi_id = "iqn.2008-10.org.openstack:%s" % node_uuid
return iscsi_id in dev
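The iSCSI check above is a pure substring test on the device path; a self-contained sketch (the node UUID and by-path device name below are made-up examples):

```python
def is_iscsi_device(dev, node_uuid):
    """iSCSI-exposed devices embed the node's IQN in their path."""
    iscsi_id = "iqn.2008-10.org.openstack:%s" % node_uuid
    return iscsi_id in dev


node = "1be26c0b-03f2-4d2e-ae87-c02d7f33c123"
dev = ("/dev/disk/by-path/ip-1.2.3.4:3260-iscsi-"
       "iqn.2008-10.org.openstack:%s-lun-1" % node)
print(is_iscsi_device(dev, node))
print(is_iscsi_device("/dev/sda", node))
```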
def make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
configdrive_mb, node_uuid, commit=True,
boot_option="netboot", boot_mode="bios",
disk_label=None):
"""Partition the disk device.
Create partitions for root, swap, ephemeral and configdrive on a
disk device.
:param dev: Path for the device to work on.
:param root_mb: Size of the root partition in mebibytes (MiB).
:param swap_mb: Size of the swap partition in mebibytes (MiB). If 0,
no partition will be created.
:param ephemeral_mb: Size of the ephemeral partition in mebibytes (MiB).
If 0, no partition will be created.
:param configdrive_mb: Size of the configdrive partition in
mebibytes (MiB). If 0, no partition will be created.
:param commit: True/False. Default for this setting is True. If False
partitions will not be written to disk.
:param boot_option: Can be "local" or "netboot". "netboot" by default.
:param boot_mode: Can be "bios" or "uefi". "bios" by default.
:param node_uuid: Node's uuid. Used for logging.
:param disk_label: The disk label to be used when creating the
partition table. Valid values are: "msdos", "gpt" or None; If None
Ironic will figure it out according to the boot_mode parameter.
:returns: A dictionary containing the partition type as Key and partition
path as Value for the partitions created by this method.
"""
LOG.debug("Starting to partition the disk device: %(dev)s "
"for node %(node)s",
{'dev': dev, 'node': node_uuid})
# the actual device names in the baremetal are like /dev/sda, /dev/sdb etc.
# While for the iSCSI device, the naming convention has a format which has
# iqn also embedded in it.
# When this function is called by ironic-conductor, the iSCSI device name
# should be appended by "part%d". While on the baremetal, it should name
# the device partitions as /dev/sda1 and not /dev/sda-part1.
if is_iscsi_device(dev, node_uuid):
part_template = dev + '-part%d'
else:
part_template = dev + '%d'
part_dict = {}
if disk_label is None:
disk_label = 'gpt' if boot_mode == 'uefi' else 'msdos'
dp = disk_partitioner.DiskPartitioner(dev, disk_label=disk_label)
# For uefi localboot, switch partition table to gpt and create the efi
# system partition as the first partition.
if boot_mode == "uefi" and boot_option == "local":
part_num = dp.add_partition(CONF.disk_utils.efi_system_partition_size,
fs_type='fat32',
boot_flag='boot')
part_dict['efi system partition'] = part_template % part_num
if boot_mode == "bios" and boot_option == "local" and disk_label == "gpt":
part_num = dp.add_partition(CONF.disk_utils.bios_boot_partition_size,
boot_flag='bios_grub')
part_dict['BIOS Boot partition'] = part_template % part_num
if ephemeral_mb:
LOG.debug("Add ephemeral partition (%(size)d MB) to device: %(dev)s "
"for node %(node)s",
{'dev': dev, 'size': ephemeral_mb, 'node': node_uuid})
part_num = dp.add_partition(ephemeral_mb)
part_dict['ephemeral'] = part_template % part_num
if swap_mb:
LOG.debug("Add Swap partition (%(size)d MB) to device: %(dev)s "
"for node %(node)s",
{'dev': dev, 'size': swap_mb, 'node': node_uuid})
part_num = dp.add_partition(swap_mb, fs_type='linux-swap')
part_dict['swap'] = part_template % part_num
if configdrive_mb:
LOG.debug("Add config drive partition (%(size)d MB) to device: "
"%(dev)s for node %(node)s",
{'dev': dev, 'size': configdrive_mb, 'node': node_uuid})
part_num = dp.add_partition(configdrive_mb)
part_dict['configdrive'] = part_template % part_num
# NOTE(lucasagomes): Make the root partition the last partition. This
# enables tools like cloud-init's growroot utility to expand the root
# partition until the end of the disk.
LOG.debug("Add root partition (%(size)d MB) to device: %(dev)s "
"for node %(node)s",
{'dev': dev, 'size': root_mb, 'node': node_uuid})
boot_val = None
if (boot_mode == "bios" and boot_option == "local" and
disk_label == "msdos"):
boot_val = 'boot'
part_num = dp.add_partition(root_mb, boot_flag=boot_val)
part_dict['root'] = part_template % part_num
if commit:
# write to the disk
dp.commit()
return part_dict
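The disk-label defaulting and root boot-flag decisions in `make_partitions` are pure logic and can be sketched without any `parted` calls. The helper names below are illustrative, not part of the module:

```python
def pick_disk_label(disk_label, boot_mode):
    """Default the partition table type from the boot mode,
    as make_partitions does when disk_label is None."""
    if disk_label is not None:
        return disk_label
    return 'gpt' if boot_mode == 'uefi' else 'msdos'


def root_boot_flag(boot_mode, boot_option, disk_label):
    """Only a BIOS/local-boot/msdos root partition gets the 'boot' flag;
    BIOS/local on GPT uses a separate bios_grub partition instead."""
    if (boot_mode == 'bios' and boot_option == 'local'
            and disk_label == 'msdos'):
        return 'boot'
    return None


print(pick_disk_label(None, 'uefi'))
print(root_boot_flag('bios', 'local', 'msdos'))
```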
def is_block_device(dev):
"""Check whether a device is block or not."""
attempts = CONF.disk_utils.iscsi_verify_attempts
for attempt in range(attempts):
try:
s = os.stat(dev)
except OSError as e:
LOG.debug("Unable to stat device %(dev)s. Attempt %(attempt)d "
"out of %(total)d. Error: %(err)s",
{"dev": dev, "attempt": attempt + 1,
"total": attempts, "err": e})
time.sleep(1)
else:
return stat.S_ISBLK(s.st_mode)
msg = _("Unable to stat device %(dev)s after attempting to verify "
"%(attempts)d times.") % {'dev': dev, 'attempts': attempts}
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
def dd(src, dst):
"""Execute dd from src to dst."""
utils.dd(src, dst, 'bs=%s' % CONF.disk_utils.dd_block_size, 'oflag=direct')
def qemu_img_info(path):
"""Return an object containing the parsed output from qemu-img info."""
if not os.path.exists(path):
return imageutils.QemuImgInfo()
out, err = utils.execute('env', 'LC_ALL=C', 'LANG=C',
'qemu-img', 'info', path)
return imageutils.QemuImgInfo(out)
def convert_image(source, dest, out_format, run_as_root=False):
"""Convert image to other format."""
cmd = ('qemu-img', 'convert', '-O', out_format, source, dest)
utils.execute(*cmd, run_as_root=run_as_root)
def populate_image(src, dst):
data = qemu_img_info(src)
if data.file_format == 'raw':
dd(src, dst)
else:
convert_image(src, dst, 'raw', True)
# TODO(rameshg87): Remove this one-line method and use utils.mkfs
# directly.
def mkfs(fs, dev, label=None):
"""Execute mkfs on a device."""
utils.mkfs(fs, dev, label)
def block_uuid(dev):
"""Get UUID of a block device."""
out, _err = utils.execute('blkid', '-s', 'UUID', '-o', 'value', dev,
run_as_root=True,
check_exit_code=[0])
return out.strip()
def get_image_mb(image_path, virtual_size=True):
"""Get size of an image in Megabyte."""
mb = 1024 * 1024
if not virtual_size:
image_byte = os.path.getsize(image_path)
else:
data = qemu_img_info(image_path)
image_byte = data.virtual_size
# round up size to MB
image_mb = int((image_byte + mb - 1) / mb)
return image_mb
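The round-up in `get_image_mb` is the standard ceiling-division idiom; a standalone sketch of just the arithmetic:

```python
def bytes_to_mb_round_up(image_byte):
    """Round a byte count up to whole mebibytes, as get_image_mb does."""
    mb = 1024 * 1024
    return int((image_byte + mb - 1) / mb)


print(bytes_to_mb_round_up(1))            # any non-zero size occupies a MiB
print(bytes_to_mb_round_up(1024 * 1024))  # exactly 1 MiB stays 1
```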
def get_dev_block_size(dev):
"""Get the device size in 512 byte sectors."""
block_sz, cmderr = utils.execute('blockdev', '--getsz', dev,
run_as_root=True, check_exit_code=[0])
return int(block_sz)
def destroy_disk_metadata(dev, node_uuid):
"""Destroy metadata structures on node's disk.
Ensure that node's disk magic strings are wiped without zeroing the
entire drive. To do this we use the wipefs tool from util-linux.
:param dev: Path for the device to work on.
:param node_uuid: Node's uuid. Used for logging.
"""
# NOTE(NobodyCam): This is needed to work around bug:
# https://bugs.launchpad.net/ironic/+bug/1317647
LOG.debug("Start destroy disk metadata for node %(node)s.",
{'node': node_uuid})
try:
utils.execute('wipefs', '--force', '--all', dev,
run_as_root=True,
use_standard_locale=True)
except processutils.ProcessExecutionError as e:
# NOTE(zhenguo): Check if --force option is supported for wipefs,
# if not, we should try without it.
if '--force' in str(e):
utils.execute('wipefs', '--all', dev,
run_as_root=True,
use_standard_locale=True)
else:
raise e
LOG.info(_LI("Disk metadata on %(dev)s successfully destroyed for node "
"%(node)s"), {'dev': dev, 'node': node_uuid})
def _get_configdrive(configdrive, node_uuid, tempdir=None):
"""Get the information about size and location of the configdrive.
:param configdrive: Base64 encoded Gzipped configdrive content or
configdrive HTTP URL.
:param node_uuid: Node's uuid. Used for logging.
:param tempdir: temporary directory for the temporary configdrive file
:raises: InstanceDeployFailure if it can't download or decode the
config drive.
:returns: A tuple with the size in MiB and path to the uncompressed
configdrive file.
"""
# Check if the configdrive option is an HTTP URL or the content directly
is_url = utils.is_http_url(configdrive)
if is_url:
try:
data = requests.get(configdrive).content
except requests.exceptions.RequestException as e:
raise exception.InstanceDeployFailure(
_("Can't download the configdrive content for node %(node)s "
"from '%(url)s'. Reason: %(reason)s") %
{'node': node_uuid, 'url': configdrive, 'reason': e})
else:
data = configdrive
try:
data = six.BytesIO(base64.b64decode(data))
except TypeError:
error_msg = (_('Config drive for node %s is not base64 encoded '
'or the content is malformed.') % node_uuid)
if is_url:
error_msg += _(' Downloaded from "%s".') % configdrive
raise exception.InstanceDeployFailure(error_msg)
configdrive_file = tempfile.NamedTemporaryFile(delete=False,
prefix='configdrive',
dir=tempdir)
configdrive_mb = 0
with gzip.GzipFile('configdrive', 'rb', fileobj=data) as gunzipped:
try:
shutil.copyfileobj(gunzipped, configdrive_file)
except EnvironmentError as e:
# Delete the created file
utils.unlink_without_raise(configdrive_file.name)
raise exception.InstanceDeployFailure(
_('Encountered error while decompressing and writing '
'config drive for node %(node)s. Error: %(exc)s') %
{'node': node_uuid, 'exc': e})
else:
# Get the file size and convert to MiB
configdrive_file.seek(0, os.SEEK_END)
bytes_ = configdrive_file.tell()
configdrive_mb = int(math.ceil(float(bytes_) / units.Mi))
finally:
configdrive_file.close()
return (configdrive_mb, configdrive_file.name)
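The decode path of `_get_configdrive` (base64, then gzip, then size in whole MiB) can be exercised end-to-end with stdlib pieces only. This sketch skips the HTTP branch, the temp file, and the error handling:

```python
import base64
import gzip
import io
import math


def configdrive_size_mb(encoded):
    """Decode a base64'd gzipped blob and report its size in whole MiB,
    mirroring _get_configdrive's size computation."""
    data = io.BytesIO(base64.b64decode(encoded))
    with gzip.GzipFile(fileobj=data, mode='rb') as gunzipped:
        raw = gunzipped.read()
    # Round up: the partition must hold the whole uncompressed content.
    return int(math.ceil(float(len(raw)) / (1024 * 1024)))


blob = base64.b64encode(gzip.compress(b'x' * 2048))
print(configdrive_size_mb(blob))
```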
def work_on_disk(dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format,
image_path, node_uuid, preserve_ephemeral=False,
configdrive=None, boot_option="netboot", boot_mode="bios",
tempdir=None, disk_label=None):
"""Create partitions and copy an image to the root partition.
:param dev: Path for the device to work on.
:param root_mb: Size of the root partition in mebibytes (MiB).
:param swap_mb: Size of the swap partition in mebibytes (MiB).
:param ephemeral_mb: Size of the ephemeral partition in mebibytes (MiB). If 0,
no ephemeral partition will be created.
:param ephemeral_format: The type of file system to format the ephemeral
partition.
:param image_path: Path for the instance's disk image.
:param node_uuid: node's uuid. Used for logging.
:param preserve_ephemeral: If True, no filesystem is written to the
ephemeral block device, preserving whatever content it had (if the
partition table has not changed).
:param configdrive: Optional. Base64 encoded Gzipped configdrive content
or configdrive HTTP URL.
:param boot_option: Can be "local" or "netboot". "netboot" by default.
:param boot_mode: Can be "bios" or "uefi". "bios" by default.
:param tempdir: A temporary directory
:param disk_label: The disk label to be used when creating the
partition table. Valid values are: "msdos", "gpt" or None; If None
Ironic will figure it out according to the boot_mode parameter.
:returns: a dictionary containing the following keys:
'root uuid': UUID of root partition
'efi system partition uuid': UUID of the uefi system partition
(if boot mode is uefi).
NOTE: If key exists but value is None, it means partition doesn't
exist.
"""
# the only way for preserve_ephemeral to be set to true is if we are
# rebuilding an instance with --preserve_ephemeral.
commit = not preserve_ephemeral
# now if we are committing the changes to disk clean first.
if commit:
destroy_disk_metadata(dev, node_uuid)
try:
# If requested, get the configdrive file and determine the size
# of the configdrive partition
configdrive_mb = 0
configdrive_file = None
if configdrive:
configdrive_mb, configdrive_file = _get_configdrive(
configdrive, node_uuid, tempdir=tempdir)
part_dict = make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
configdrive_mb, node_uuid,
commit=commit,
boot_option=boot_option,
boot_mode=boot_mode,
disk_label=disk_label)
LOG.info(_LI("Successfully completed the disk device"
" %(dev)s partitioning for node %(node)s"),
{'dev': dev, "node": node_uuid})
ephemeral_part = part_dict.get('ephemeral')
swap_part = part_dict.get('swap')
configdrive_part = part_dict.get('configdrive')
root_part = part_dict.get('root')
if not is_block_device(root_part):
raise exception.InstanceDeployFailure(
_("Root device '%s' not found") % root_part)
for part in ('swap', 'ephemeral', 'configdrive',
'efi system partition'):
part_device = part_dict.get(part)
LOG.debug("Checking for %(part)s device (%(dev)s) on node "
"%(node)s.", {'part': part, 'dev': part_device,
'node': node_uuid})
if part_device and not is_block_device(part_device):
raise exception.InstanceDeployFailure(
_("'%(partition)s' device '%(part_device)s' not found") %
{'partition': part, 'part_device': part_device})
# If it's a uefi localboot, then we have created the efi system
# partition. Create a fat filesystem on it.
if boot_mode == "uefi" and boot_option == "local":
efi_system_part = part_dict.get('efi system partition')
mkfs(dev=efi_system_part, fs='vfat', label='efi-part')
if configdrive_part:
# Copy the configdrive content to the configdrive partition
dd(configdrive_file, configdrive_part)
LOG.info(_LI("Configdrive for node %(node)s successfully copied "
"onto partition %(partition)s"),
{'node': node_uuid, 'partition': configdrive_part})
finally:
# If the configdrive was requested make sure we delete the file
# after copying the content to the partition
if configdrive_file:
utils.unlink_without_raise(configdrive_file)
populate_image(image_path, root_part)
LOG.info(_LI("Image for %(node)s successfully populated"),
{'node': node_uuid})
if swap_part:
mkfs(dev=swap_part, fs='swap', label='swap1')
LOG.info(_LI("Swap partition %(swap)s successfully formatted "
"for node %(node)s"),
{'swap': swap_part, 'node': node_uuid})
if ephemeral_part and not preserve_ephemeral:
mkfs(dev=ephemeral_part, fs=ephemeral_format, label="ephemeral0")
LOG.info(_LI("Ephemeral partition %(ephemeral)s successfully "
"formatted for node %(node)s"),
{'ephemeral': ephemeral_part, 'node': node_uuid})
uuids_to_return = {
'root uuid': root_part,
'efi system partition uuid': part_dict.get('efi system partition')
}
try:
for part, part_dev in uuids_to_return.items():
if part_dev:
uuids_to_return[part] = block_uuid(part_dev)
except processutils.ProcessExecutionError:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to detect %s"), part)
return uuids_to_return
def list_opts():
"""Entry point for oslo-config-generator."""
return [('disk_utils', opts)]
def _is_disk_larger_than_max_size(device, node_uuid):
"""Check if total disk size exceeds 2TB msdos limit
:param device: device path.
:param node_uuid: node's uuid. Used for logging.
:raises: InstanceDeployFailure, if any disk partitioning related
commands fail.
:returns: True if total disk size exceeds 2TB. Returns False otherwise.
"""
try:
disksize_bytes = utils.execute('blockdev', '--getsize64', device,
use_standard_locale=True,
run_as_root=True)
except (processutils.UnknownArgumentError,
processutils.ProcessExecutionError, OSError) as e:
msg = (_('Failed to get size of disk %(disk)s for node %(node)s. '
'Error: %(error)s') %
{'disk': device, 'node': node_uuid, 'error': e})
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
disksize_mb = int(disksize_bytes) // 1024 // 1024
return disksize_mb > MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR
def _get_labelled_partition(device, label, node_uuid):
"""Check and return if partition with given label exists
:param device: The device path.
:param label: Partition label
:param node_uuid: UUID of the Node. Used for logging.
:raises: InstanceDeployFailure, if any disk partitioning related
commands fail.
:returns: block device file for partition if it exists; otherwise it
returns None.
"""
try:
utils.execute('partprobe', device, run_as_root=True)
label_arg = 'LABEL=%s' % label
output, err = utils.execute('blkid', '-o', 'device', device,
'-t', label_arg, check_exit_code=[0, 2],
use_standard_locale=True, run_as_root=True)
except (processutils.UnknownArgumentError,
processutils.ProcessExecutionError, OSError) as e:
msg = (_('Failed to retrieve partition labels on disk %(disk)s '
'for node %(node)s. Error: %(error)s') %
{'disk': device, 'node': node_uuid, 'error': e})
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
if output:
if len(output.split()) > 1:
raise exception.InstanceDeployFailure(
_('More than one config drive exists on device %(device)s '
'for node %(node)s.')
% {'device': device, 'node': node_uuid})
return output.rstrip()
def _is_disk_gpt_partitioned(device, node_uuid):
"""Checks if the disk is GPT partitioned
:param device: The device path.
:param node_uuid: UUID of the Node. Used for logging.
:raises: InstanceDeployFailure, if any disk partitioning related
commands fail.
:returns: Boolean. Returns True if disk is GPT partitioned.
"""
try:
output = utils.execute('blkid', '-p', '-o', 'value', '-s', 'PTTYPE',
device, use_standard_locale=True,
run_as_root=True)
except (processutils.UnknownArgumentError,
processutils.ProcessExecutionError, OSError) as e:
msg = (_('Failed to retrieve partition table type for disk %(disk)s '
'for node %(node)s. Error: %(error)s') %
{'disk': device, 'node': node_uuid, 'error': e})
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
return 'gpt' in output
def _fix_gpt_structs(device, node_uuid):
"""Checks backup GPT data structures and moves them to end of the device
:param device: The device path.
:param node_uuid: UUID of the Node. Used for logging.
:raises: InstanceDeployFailure, if any disk partitioning related
commands fail.
"""
try:
output, err = utils.execute('partprobe', device,
use_standard_locale=True,
run_as_root=True)
search_str = "fix the GPT to use all of the space"
if search_str in err:
utils.execute('sgdisk', '-e', device, run_as_root=True)
except (processutils.UnknownArgumentError,
processutils.ProcessExecutionError, OSError) as e:
msg = (_('Failed to fix GPT data structures on disk %(disk)s '
'for node %(node)s. Error: %(error)s') %
{'disk': device, 'node': node_uuid, 'error': e})
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
def create_config_drive_partition(node_uuid, device, configdrive):
"""Create a partition for config drive
Checks if the device is GPT or MBR partitioned and creates config drive
partition accordingly.
:param node_uuid: UUID of the Node.
:param device: The device path.
:param configdrive: Base64 encoded Gzipped configdrive content or
configdrive HTTP URL.
:raises: InstanceDeployFailure if config drive size exceeds maximum limit
or if it fails to create config drive.
"""
confdrive_file = None
try:
config_drive_part = _get_labelled_partition(device,
CONFIGDRIVE_LABEL,
node_uuid)
confdrive_mb, confdrive_file = _get_configdrive(configdrive,
node_uuid)
if confdrive_mb > MAX_CONFIG_DRIVE_SIZE_MB:
raise exception.InstanceDeployFailure(
_('Config drive size exceeds maximum limit of 64MiB. '
'Size of the given config drive is %(size)d MiB for '
'node %(node)s.')
% {'size': confdrive_mb, 'node': node_uuid})
LOG.debug("Adding config drive partition %(size)d MiB to "
"device: %(dev)s for node %(node)s",
{'dev': device, 'size': confdrive_mb, 'node': node_uuid})
if config_drive_part:
LOG.debug("Configdrive for node %(node)s exists at "
"%(part)s",
{'node': node_uuid, 'part': config_drive_part})
else:
cur_parts = set(part['number'] for part in list_partitions(device))
if _is_disk_gpt_partitioned(device, node_uuid):
_fix_gpt_structs(device, node_uuid)
create_option = '0:-%dMB:0' % MAX_CONFIG_DRIVE_SIZE_MB
utils.execute('sgdisk', '-n', create_option, device,
run_as_root=True)
else:
# Check if the disk has 4 partitions. The MBR based disk
# cannot have more than 4 partitions.
# TODO(stendulker): One can use logical partitions to create
# a config drive if there are 4 primary partitions.
# https://bugs.launchpad.net/ironic/+bug/1561283
num_parts = len(list_partitions(device))
if num_parts > 3:
raise exception.InstanceDeployFailure(
_('Config drive cannot be created for node %(node)s. '
'Disk uses MBR partitioning and already has '
'%(parts)d primary partitions.')
% {'node': node_uuid, 'parts': num_parts})
# Check if disk size exceeds 2TB msdos limit
startlimit = '-%dMiB' % MAX_CONFIG_DRIVE_SIZE_MB
endlimit = '-0'
if _is_disk_larger_than_max_size(device, node_uuid):
# Need to create a small partition at 2TB limit
LOG.warning(_LW("Disk size is larger than 2TB for "
"node %(node)s. Creating config drive "
"at the end of the disk %(disk)s."),
{'node': node_uuid, 'disk': device})
startlimit = (MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR -
MAX_CONFIG_DRIVE_SIZE_MB - 1)
endlimit = MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR - 1
utils.execute('parted', '-a', 'optimal', '-s', '--', device,
'mkpart', 'primary', 'ext2', startlimit,
endlimit, run_as_root=True)
upd_parts = set(part['number'] for part in list_partitions(device))
new_part = upd_parts - cur_parts
if len(new_part) != 1:
raise exception.InstanceDeployFailure(
_('Disk partitioning failed on device %(device)s. '
'Unable to retrieve config drive partition information.')
% {'device': device})
if is_iscsi_device(device, node_uuid):
config_drive_part = '%s-part%s' % (device, new_part.pop())
else:
config_drive_part = '%s%s' % (device, new_part.pop())
dd(confdrive_file, config_drive_part)
LOG.info(_LI("Configdrive for node %(node)s successfully "
"copied onto partition %(part)s"),
{'node': node_uuid, 'part': config_drive_part})
except (processutils.UnknownArgumentError,
processutils.ProcessExecutionError, OSError) as e:
msg = (_('Failed to create config drive on disk %(disk)s '
'for node %(node)s. Error: %(error)s') %
{'disk': device, 'node': node_uuid, 'error': e})
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
finally:
# If the configdrive was requested make sure we delete the file
# after copying the content to the partition
if confdrive_file:
utils.unlink_without_raise(confdrive_file)
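For the MBR branch above, the start/end limits handed to `parted` depend on whether the disk crosses the 2 TiB boundary. A sketch of just that computation, with constants copied from the module and an illustrative helper name:

```python
MAX_CONFIG_DRIVE_SIZE_MB = 64
MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR = 2097152  # 2 TiB in MiB


def configdrive_limits(disk_exceeds_2tb):
    """Return the (start, end) arguments for parted's mkpart."""
    if disk_exceeds_2tb:
        # Pin the partition just below the 2 TiB MBR-addressable boundary.
        start = (MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR
                 - MAX_CONFIG_DRIVE_SIZE_MB - 1)
        end = MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR - 1
        return start, end
    # Otherwise anchor the partition 64 MiB from the end of the disk.
    return '-%dMiB' % MAX_CONFIG_DRIVE_SIZE_MB, '-0'


print(configdrive_limits(False))
print(configdrive_limits(True))
```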

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Ironic base exception handling.
Includes decorator for re-raising Ironic-type exceptions.
SHOULD include dedicated exception logging.
"""
import logging
import six
from oslo_config import cfg
from oslo_utils import excutils
from ironic_lib.common.i18n import _
from ironic_lib.common.i18n import _LE
LOG = logging.getLogger(__name__)
exc_log_opts = [
cfg.BoolOpt('fatal_exception_format_errors',
default=False,
help='Make exception message format errors fatal.',
deprecated_group='DEFAULT'),
]
CONF = cfg.CONF
CONF.register_opts(exc_log_opts, group='ironic_lib')
class IronicException(Exception):
"""Base Ironic Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred.")
code = 500
headers = {}
safe = False
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if 'code' not in self.kwargs:
try:
self.kwargs['code'] = self.code
except AttributeError:
pass
if not message:
try:
message = self.message % kwargs
except Exception:
with excutils.save_and_reraise_exception() as ctxt:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
prs = ', '.join('%s=%s' % pair for pair in kwargs.items())
LOG.exception(_LE('Exception in string format operation '
'(arguments %s)'), prs)
if not CONF.ironic_lib.fatal_exception_format_errors:
# at least get the core message out if something
# happened
message = self.message
ctxt.reraise = False
super(IronicException, self).__init__(message)
def format_message(self):
if self.__class__.__name__.endswith('_Remote'):
return self.args[0]
else:
return six.text_type(self)
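The printf-style message pattern used by `IronicException` can be seen with a minimal subclass. The class names below are illustrative and the sketch drops the oslo.config-controlled error handling:

```python
class DemoException(Exception):
    """Minimal echo of the IronicException pattern: a class-level
    printf-style message formatted with constructor kwargs."""
    message = "An unknown exception occurred."

    def __init__(self, message=None, **kwargs):
        if not message:
            message = self.message % kwargs
        super(DemoException, self).__init__(message)


class DeployFailure(DemoException):
    message = "Failed to deploy instance: %(reason)s"


print(str(DeployFailure(reason='disk too small')))
print(str(DemoException()))
```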
class InstanceDeployFailure(IronicException):
message = _("Failed to deploy instance: %(reason)s")
class FileSystemNotSupported(IronicException):
message = _("Failed to create a file system. "
"File system %(fs)s is not supported.")
class InvalidMetricConfig(IronicException):
message = _("Invalid value for metrics config option: %(reason)s")

# Copyright 2016 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import functools
import random
import time
import six
from ironic_lib.common.i18n import _
class Timer(object):
"""A timer decorator and context manager.
This metric type times the decorated method or code running inside the
context manager, and emits the time as the metric value. It is bound to
this MetricLogger. For example::
from ironic_lib import metrics_utils
METRICS = metrics_utils.get_metrics_logger()
@METRICS.timer('foo')
def foo(bar, baz):
print bar, baz
with METRICS.timer('foo'):
do_something()
"""
def __init__(self, metrics, name):
"""Init the decorator / context manager.
:param metrics: The metric logger
:param name: The metric name
"""
if not isinstance(name, six.string_types):
raise TypeError(_("The metric name is expected to be a string. "
"Value is %s") % name)
self.metrics = metrics
self.name = name
self._start = None
def __call__(self, f):
@functools.wraps(f)
def wrapped(*args, **kwargs):
start = _time()
result = f(*args, **kwargs)
duration = _time() - start
# Log the timing data (in ms)
self.metrics.send_timer(self.metrics.get_metric_name(self.name),
duration * 1000)
return result
return wrapped
def __enter__(self):
self._start = _time()
def __exit__(self, exc_type, exc_val, exc_tb):
duration = _time() - self._start
# Log the timing data (in ms)
self.metrics.send_timer(self.metrics.get_metric_name(self.name),
duration * 1000)
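A stripped-down version of the dual decorator/context-manager shape of `Timer`, recording (name, milliseconds) tuples into a plain list instead of a metrics backend:

```python
import functools
import time


class MiniTimer(object):
    """Times a decorated function or a with-block; appends (name, ms)."""

    def __init__(self, sink, name):
        self.sink = sink
        self.name = name
        self._start = None

    def __call__(self, f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            start = time.time()
            result = f(*args, **kwargs)
            self.sink.append((self.name, (time.time() - start) * 1000))
            return result
        return wrapped

    def __enter__(self):
        self._start = time.time()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.sink.append((self.name, (time.time() - self._start) * 1000))


records = []
with MiniTimer(records, 'sleepy'):
    time.sleep(0.01)
print(records[0][0])
```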
class Counter(object):
"""A counter decorator and context manager.
This metric type increments a counter every time the decorated method or
context manager is executed. It is bound to this MetricLogger. For
example::
from ironic_lib import metrics_utils
METRICS = metrics_utils.get_metrics_logger()
@METRICS.counter('foo')
def foo(bar, baz):
print bar, baz
with METRICS.counter('foo'):
do_something()
"""
def __init__(self, metrics, name, sample_rate):
"""Init the decorator / context manager.
:param metrics: The metric logger
:param name: The metric name
:param sample_rate: Probabilistic rate at which the values will be sent
"""
if not isinstance(name, six.string_types):
raise TypeError(_("The metric name is expected to be a string. "
"Value is %s") % name)
if (sample_rate is not None and
(sample_rate < 0.0 or sample_rate > 1.0)):
msg = _("sample_rate is set to %s. Value must be None "
"or in the interval [0.0, 1.0]") % sample_rate
raise ValueError(msg)
self.metrics = metrics
self.name = name
self.sample_rate = sample_rate
def __call__(self, f):
@functools.wraps(f)
def wrapped(*args, **kwargs):
self.metrics.send_counter(
self.metrics.get_metric_name(self.name),
1, sample_rate=self.sample_rate)
result = f(*args, **kwargs)
return result
return wrapped
def __enter__(self):
self.metrics.send_counter(self.metrics.get_metric_name(self.name),
1, sample_rate=self.sample_rate)
def __exit__(self, exc_type, exc_val, exc_tb):
pass
class Gauge(object):
"""A gauge decorator.
This metric type returns the value of the decorated method as a metric
every time the method is executed. It is bound to this MetricLogger. For
example::
from ironic_lib import metrics_utils
METRICS = metrics_utils.get_metrics_logger()
@METRICS.gauge('foo')
def add_foo(bar, baz):
return (bar + baz)
"""
def __init__(self, metrics, name):
"""Init the decorator / context manager.
:param metrics: The metric logger
:param name: The metric name
"""
if not isinstance(name, six.string_types):
raise TypeError(_("The metric name is expected to be a string. "
"Value is %s") % name)
self.metrics = metrics
self.name = name
def __call__(self, f):
@functools.wraps(f)
def wrapped(*args, **kwargs):
result = f(*args, **kwargs)
self.metrics.send_gauge(self.metrics.get_metric_name(self.name),
result)
return result
return wrapped
def _time():
"""Wraps time.time() for simpler testing."""
return time.time()
@six.add_metaclass(abc.ABCMeta)
class MetricLogger(object):
"""Abstract class representing a metrics logger.
A MetricLogger sends data to a backend (noop or statsd).
The data can be a gauge, a counter, or a timer.
The data sent to the backend is composed of:
- a full metric name
- a numeric value
The format of the full metric name is:
_prefix<delim>name
where:
- _prefix: [global_prefix<delim>][uuid<delim>][host_name<delim>]prefix
- name: the name of this metric
- <delim>: the delimiter. Default is '.'
"""
def __init__(self, prefix='', delimiter='.'):
"""Init a MetricLogger.
:param prefix: Prefix for this metric logger. This string will prefix
all metric names.
:param delimiter: Delimiter used to generate the full metric name.
"""
self._prefix = prefix
self._delimiter = delimiter
def get_metric_name(self, name):
"""Get the full metric name.
The format of the full metric name is:
_prefix<delim>name
where:
- _prefix: [global_prefix<delim>][uuid<delim>][host_name<delim>]
prefix
- name: the name of this metric
- <delim>: the delimiter. Default is '.'
:param name: The metric name.
:return: The full metric name, with logger prefix, as a string.
"""
if not self._prefix:
return name
return self._delimiter.join([self._prefix, name])
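The full-name construction above is a guarded delimiter join; a function-level sketch:

```python
def get_metric_name(prefix, name, delimiter='.'):
    """Prefix the metric name unless the prefix is empty,
    mirroring MetricLogger.get_metric_name."""
    if not prefix:
        return name
    return delimiter.join([prefix, name])


print(get_metric_name('ironic.conductor', 'deploy_time'))
print(get_metric_name('', 'deploy_time'))
```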
def send_gauge(self, name, value):
"""Send gauge metric data.
Gauges are simple values.
The backend will set the value of gauge 'name' to 'value'.
:param name: Metric name
:param value: Metric numeric value that will be sent to the backend
"""
self._gauge(name, value)
def send_counter(self, name, value, sample_rate=None):
"""Send counter metric data.
Counters are used to count how many times an event occurred.
The backend will increment the counter 'name' by the value 'value'.
Optionally, specify sample_rate in the interval [0.0, 1.0] to
sample data probabilistically where::
P(send metric data) = sample_rate
If sample_rate is None, then always send metric data, but do not
have the backend send sample rate information (if supported).
:param name: Metric name
:param value: Metric numeric value that will be sent to the backend
:param sample_rate: Probabilistic rate at which the values will be
sent. Value must be None or in the interval [0.0, 1.0].
"""
if (sample_rate is None or random.random() < sample_rate):
return self._counter(name, value,
sample_rate=sample_rate)
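`send_counter`'s probabilistic sampling sends with probability `sample_rate`, and always sends when the rate is None. A sketch with an injectable random draw so the behavior is deterministic to test:

```python
def should_send(sample_rate, rng):
    """P(send) == sample_rate; None means 'always send, unsampled'."""
    return sample_rate is None or rng() < sample_rate


print(should_send(None, lambda: 0.99))  # unsampled counters always go
print(should_send(0.5, lambda: 0.25))   # draw below the rate: send
print(should_send(0.5, lambda: 0.75))   # draw above the rate: drop
```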
def send_timer(self, name, value):
"""Send timer data.
Timers are used to measure how long it took to do something.
:param name: Metric name
:param value: Metric numeric value that will be sent to the backend
"""
self._timer(name, value)
def timer(self, name):
return Timer(self, name)
def counter(self, name, sample_rate=None):
return Counter(self, name, sample_rate)
def gauge(self, name):
return Gauge(self, name)
@abc.abstractmethod
def _gauge(self, name, value):
"""Abstract method for backends to implement gauge behavior."""
@abc.abstractmethod
def _counter(self, name, value, sample_rate=None):
"""Abstract method for backends to implement counter behavior."""
@abc.abstractmethod
def _timer(self, name, value):
"""Abstract method for backends to implement timer behavior."""
class NoopMetricLogger(MetricLogger):
"""Noop metric logger that throws away all metric data."""
def _gauge(self, name, value):
pass
def _counter(self, name, value, sample_rate=None):
pass
def _timer(self, name, value):
pass
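As a quick illustration of the naming behavior described in `get_metric_name` above, here is a minimal standalone sketch. `DemoLogger` is a hypothetical stand-in, not the real ironic_lib class; it only mirrors the prefix/delimiter composition shown above.

```python
# Standalone sketch of MetricLogger.get_metric_name's behavior;
# DemoLogger is a hypothetical stand-in, not part of ironic_lib.
class DemoLogger:
    def __init__(self, prefix='', delimiter='.'):
        self._prefix = prefix
        self._delimiter = delimiter

    def get_metric_name(self, name):
        # An empty prefix yields the bare metric name.
        if not self._prefix:
            return name
        return self._delimiter.join([self._prefix, name])

print(DemoLogger('svc').get_metric_name('requests'))  # -> svc.requests
print(DemoLogger().get_metric_name('requests'))       # -> requests
```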


@ -1,108 +0,0 @@
# Copyright 2016 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import socket
from oslo_config import cfg
from oslo_log import log
from ironic_lib.common.i18n import _LW
from ironic_lib import metrics
statsd_opts = [
cfg.StrOpt('statsd_host',
default='localhost',
help='Host for use with the statsd backend.'),
cfg.PortOpt('statsd_port',
default=8125,
help='Port to use with the statsd backend.')
]
CONF = cfg.CONF
CONF.register_opts(statsd_opts, group='metrics_statsd')
LOG = log.getLogger(__name__)
class StatsdMetricLogger(metrics.MetricLogger):
"""Metric logger that reports data via the statsd protocol."""
GAUGE_TYPE = 'g'
COUNTER_TYPE = 'c'
TIMER_TYPE = 'ms'
def __init__(self, prefix, delimiter='.', host=None, port=None):
"""Initialize a StatsdMetricLogger
The logger uses the given prefix list, delimiter, host, and port.
:param prefix: Prefix for this metric logger.
:param delimiter: Delimiter used to generate the full metric name.
:param host: The statsd host
:param port: The statsd port
"""
super(StatsdMetricLogger, self).__init__(prefix,
delimiter=delimiter)
self._host = host or CONF.metrics_statsd.statsd_host
self._port = port or CONF.metrics_statsd.statsd_port
self._target = (self._host, self._port)
def _send(self, name, value, metric_type, sample_rate=None):
"""Send metrics to the statsd backend
:param name: Metric name
:param value: Metric value
:param metric_type: Metric type (GAUGE_TYPE, COUNTER_TYPE,
or TIMER_TYPE)
:param sample_rate: Probabilistic rate at which the values will be sent
"""
if sample_rate is None:
metric = '%s:%s|%s' % (name, value, metric_type)
else:
metric = '%s:%s|%s@%s' % (name, value, metric_type, sample_rate)
# Ideally, we'd cache a sending socket in self, but that
# results in a socket getting shared by multiple green threads.
with contextlib.closing(self._open_socket()) as sock:
try:
sock.settimeout(0.0)
sock.sendto(metric, self._target)
except socket.error as e:
LOG.warning(_LW("Failed to send the metric value to "
"host %(host)s port %(port)s. "
"Error: %(error)s"),
{'host': self._host, 'port': self._port,
'error': e})
def _open_socket(self):
return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
def _gauge(self, name, value):
return self._send(name, value, self.GAUGE_TYPE)
def _counter(self, name, value, sample_rate=None):
return self._send(name, value, self.COUNTER_TYPE,
sample_rate=sample_rate)
def _timer(self, name, value):
return self._send(name, value, self.TIMER_TYPE)
def list_opts():
"""Entry point for oslo-config-generator."""
return [('metrics_statsd', statsd_opts)]
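The wire format assembled in `_send` above can be sketched in isolation. The helper below is hypothetical; it only mirrors this module's own formatting (which appends the sample rate as `@rate` to the metric type):

```python
def format_statsd(name, value, metric_type, sample_rate=None):
    # Hypothetical helper mirroring StatsdMetricLogger._send's formatting.
    if sample_rate is None:
        return '%s:%s|%s' % (name, value, metric_type)
    return '%s:%s|%s@%s' % (name, value, metric_type, sample_rate)

print(format_statsd('app.requests', 10, 'c'))      # -> app.requests:10|c
print(format_statsd('app.req_time', 42, 'ms'))     # -> app.req_time:42|ms
print(format_statsd('app.requests', 1, 'c', 0.5))  # -> app.requests:1|c@0.5
```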


@ -1,100 +0,0 @@
# Copyright 2016 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
import six
from ironic_lib.common.i18n import _
from ironic_lib import exception
from ironic_lib import metrics
from ironic_lib import metrics_statsd
metrics_opts = [
cfg.StrOpt('backend',
default='noop',
choices=['noop', 'statsd'],
help='Backend to use for the metrics system.'),
cfg.BoolOpt('prepend_host',
default=False,
help='Prepend the hostname to all metric names. '
'The format of metric names is '
'[global_prefix.][host_name.]prefix.metric_name.'),
cfg.BoolOpt('prepend_host_reverse',
default=True,
help='Split the prepended host value by "." and reverse it '
'(to better match the reverse hierarchical form of '
'domain names).'),
cfg.StrOpt('global_prefix',
help='Prefix all metric names with this value. '
'By default, there is no global prefix. '
'The format of metric names is '
'[global_prefix.][host_name.]prefix.metric_name.')
]
CONF = cfg.CONF
CONF.register_opts(metrics_opts, group='metrics')
def get_metrics_logger(prefix='', backend=None, host=None, delimiter='.'):
"""Return a metric logger with the specified prefix.
The format of the prefix is:
[global_prefix<delim>][host_name<delim>]prefix
where <delim> is the delimiter (default is '.')
:param prefix: Prefix for this metric logger.
Value should be a string or None.
:param backend: Backend to use for the metrics system.
Possible values are 'noop' and 'statsd'.
:param host: Name of this node.
:param delimiter: Delimiter to use for the metrics name.
:return: The new MetricLogger.
"""
if not isinstance(prefix, six.string_types):
msg = (_("This metric prefix (%s) is of unsupported type. "
"Value should be a string or None")
% str(prefix))
raise exception.InvalidMetricConfig(msg)
if CONF.metrics.prepend_host and host:
if CONF.metrics.prepend_host_reverse:
host = '.'.join(reversed(host.split('.')))
if prefix:
prefix = delimiter.join([host, prefix])
else:
prefix = host
if CONF.metrics.global_prefix:
if prefix:
prefix = delimiter.join([CONF.metrics.global_prefix, prefix])
else:
prefix = CONF.metrics.global_prefix
backend = backend or CONF.metrics.backend
if backend == 'statsd':
return metrics_statsd.StatsdMetricLogger(prefix, delimiter=delimiter)
elif backend == 'noop':
return metrics.NoopMetricLogger(prefix, delimiter=delimiter)
else:
msg = (_("The backend is set to an unsupported type: "
"%s. Value should be 'noop' or 'statsd'.")
% backend)
raise exception.InvalidMetricConfig(msg)
def list_opts():
"""Entry point for oslo-config-generator."""
return [('metrics', metrics_opts)]
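The prefix assembly performed by `get_metrics_logger` above can be sketched as a standalone function. `compose_prefix` is hypothetical; the config lookups are replaced by plain arguments, but the joining logic follows the code above:

```python
def compose_prefix(prefix, host=None, global_prefix=None,
                   prepend_host=False, prepend_host_reverse=False,
                   delimiter='.'):
    # Hypothetical standalone sketch of the prefix assembly in
    # get_metrics_logger (CONF reads replaced by plain arguments).
    if prepend_host and host:
        if prepend_host_reverse:
            # Reverse the host labels: host.example.com -> com.example.host
            host = '.'.join(reversed(host.split('.')))
        prefix = delimiter.join([host, prefix]) if prefix else host
    if global_prefix:
        prefix = (delimiter.join([global_prefix, prefix])
                  if prefix else global_prefix)
    return prefix

print(compose_prefix('foo', host='host.example.com',
                     prepend_host=True, prepend_host_reverse=True))
# -> com.example.host.foo
```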

View File

@ -1,178 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
import mock
from oslotest import base as test_base
from testtools.matchers import HasLength
from ironic_lib import disk_partitioner
from ironic_lib import exception
from ironic_lib import utils
class DiskPartitionerTestCase(test_base.BaseTestCase):
def test_add_partition(self):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
dp.add_partition(1024)
dp.add_partition(512, fs_type='linux-swap')
dp.add_partition(2048, boot_flag='boot')
dp.add_partition(2048, boot_flag='bios_grub')
expected = [(1, {'boot_flag': None,
'fs_type': '',
'type': 'primary',
'size': 1024}),
(2, {'boot_flag': None,
'fs_type': 'linux-swap',
'type': 'primary',
'size': 512}),
(3, {'boot_flag': 'boot',
'fs_type': '',
'type': 'primary',
'size': 2048}),
(4, {'boot_flag': 'bios_grub',
'fs_type': '',
'type': 'primary',
'size': 2048})]
partitions = [(n, p) for n, p in dp.get_partitions()]
self.assertThat(partitions, HasLength(4))
self.assertEqual(expected, partitions)
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec',
autospec=True)
@mock.patch.object(utils, 'execute', autospec=True)
def test_commit(self, mock_utils_exc, mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'boot_flag': None,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'boot_flag': 'boot',
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(3, {'boot_flag': 'bios_grub',
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.return_value = (None, None)
dp.commit()
mock_disk_partitioner_exec.assert_called_once_with(
mock.ANY, 'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on',
'mkpart', 'fake-type', 'fake-fs-type', '3', '4',
'set', '3', 'bios_grub', 'on')
mock_utils_exc.assert_called_once_with(
'fuser', '/dev/fake', run_as_root=True, check_exit_code=[0, 1])
@mock.patch.object(eventlet.greenthread, 'sleep', lambda seconds: None)
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec',
autospec=True)
@mock.patch.object(utils, 'execute', autospec=True)
def test_commit_with_device_is_busy_once(self, mock_utils_exc,
mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'boot_flag': None,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'boot_flag': 'boot',
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
fuser_outputs = iter([("/dev/fake: 10000 10001", None), (None, None)])
with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.side_effect = fuser_outputs
dp.commit()
mock_disk_partitioner_exec.assert_called_once_with(
mock.ANY, 'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on')
mock_utils_exc.assert_called_with(
'fuser', '/dev/fake', run_as_root=True, check_exit_code=[0, 1])
self.assertEqual(2, mock_utils_exc.call_count)
@mock.patch.object(eventlet.greenthread, 'sleep', lambda seconds: None)
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec',
autospec=True)
@mock.patch.object(utils, 'execute', autospec=True)
def test_commit_with_device_is_always_busy(self, mock_utils_exc,
mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'boot_flag': None,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'boot_flag': 'boot',
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.return_value = ("/dev/fake: 10000 10001", None)
self.assertRaises(exception.InstanceDeployFailure, dp.commit)
mock_disk_partitioner_exec.assert_called_once_with(
mock.ANY, 'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on')
mock_utils_exc.assert_called_with(
'fuser', '/dev/fake', run_as_root=True, check_exit_code=[0, 1])
self.assertEqual(20, mock_utils_exc.call_count)
# Mock the eventlet.greenthread.sleep for the looping_call
@mock.patch.object(eventlet.greenthread, 'sleep', lambda seconds: None)
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec',
autospec=True)
@mock.patch.object(utils, 'execute', autospec=True)
def test_commit_with_device_disconnected(self, mock_utils_exc,
mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'boot_flag': None,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'boot_flag': 'boot',
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.return_value = (None, "Specified filename /dev/fake"
" does not exist.")
self.assertRaises(exception.InstanceDeployFailure, dp.commit)
mock_disk_partitioner_exec.assert_called_once_with(
mock.ANY, 'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on')
mock_utils_exc.assert_called_with(
'fuser', '/dev/fake', run_as_root=True, check_exit_code=[0, 1])
self.assertEqual(20, mock_utils_exc.call_count)
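The busy-device retry behavior exercised by the tests above (poll `fuser` until the device frees up, giving up after a fixed number of attempts) can be sketched as follows. `wait_until_free` and `check_busy` are hypothetical names standing in for the real looping call:

```python
def wait_until_free(check_busy, attempts=20):
    # Hypothetical sketch of the retry loop the tests above exercise:
    # poll until the device is no longer busy, up to `attempts` tries.
    # check_busy stands in for the `fuser` call on the device.
    for _ in range(attempts):
        if not check_busy():
            return True
    return False

busy_states = iter([True, False])                  # busy once, then free
print(wait_until_free(lambda: next(busy_states)))  # -> True
print(wait_until_free(lambda: True))               # -> False (always busy)
```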

File diff suppressed because it is too large


@ -1,158 +0,0 @@
# Copyright 2016 Rackspace Hosting
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import types
import mock
from oslotest import base as test_base
from ironic_lib import metrics as metricslib
class MockedMetricLogger(metricslib.MetricLogger):
_gauge = mock.Mock(spec_set=types.FunctionType)
_counter = mock.Mock(spec_set=types.FunctionType)
_timer = mock.Mock(spec_set=types.FunctionType)
class TestMetricLogger(test_base.BaseTestCase):
def setUp(self):
super(TestMetricLogger, self).setUp()
self.ml = MockedMetricLogger('prefix', '.')
self.ml_no_prefix = MockedMetricLogger('', '.')
self.ml_other_delim = MockedMetricLogger('prefix', '*')
self.ml_default = MockedMetricLogger()
def test_init(self):
self.assertEqual(self.ml._prefix, 'prefix')
self.assertEqual(self.ml._delimiter, '.')
self.assertEqual(self.ml_no_prefix._prefix, '')
self.assertEqual(self.ml_other_delim._delimiter, '*')
self.assertEqual(self.ml_default._prefix, '')
def test_get_metric_name(self):
self.assertEqual(
self.ml.get_metric_name('metric'),
'prefix.metric')
self.assertEqual(
self.ml_no_prefix.get_metric_name('metric'),
'metric')
self.assertEqual(
self.ml_other_delim.get_metric_name('metric'),
'prefix*metric')
def test_send_gauge(self):
self.ml.send_gauge('prefix.metric', 10)
self.ml._gauge.assert_called_once_with('prefix.metric', 10)
def test_send_counter(self):
self.ml.send_counter('prefix.metric', 10)
self.ml._counter.assert_called_once_with(
'prefix.metric', 10,
sample_rate=None)
self.ml._counter.reset_mock()
self.ml.send_counter('prefix.metric', 10, sample_rate=1.0)
self.ml._counter.assert_called_once_with(
'prefix.metric', 10,
sample_rate=1.0)
self.ml._counter.reset_mock()
self.ml.send_counter('prefix.metric', 10, sample_rate=0.0)
self.assertFalse(self.ml._counter.called)
def test_send_timer(self):
self.ml.send_timer('prefix.metric', 10)
self.ml._timer.assert_called_once_with('prefix.metric', 10)
@mock.patch('ironic_lib.metrics._time', autospec=True)
@mock.patch('ironic_lib.metrics.MetricLogger.send_timer', autospec=True)
def test_decorator_timer(self, mock_timer, mock_time):
mock_time.side_effect = [1, 43]
@self.ml.timer('foo.bar.baz')
def func(x):
return x * x
func(10)
mock_timer.assert_called_once_with(self.ml, 'prefix.foo.bar.baz',
42 * 1000)
@mock.patch('ironic_lib.metrics.MetricLogger.send_counter', autospec=True)
def test_decorator_counter(self, mock_counter):
@self.ml.counter('foo.bar.baz')
def func(x):
return x * x
func(10)
mock_counter.assert_called_once_with(self.ml, 'prefix.foo.bar.baz', 1,
sample_rate=None)
@mock.patch('ironic_lib.metrics.MetricLogger.send_counter', autospec=True)
def test_decorator_counter_sample_rate(self, mock_counter):
@self.ml.counter('foo.bar.baz', sample_rate=0.5)
def func(x):
return x * x
func(10)
mock_counter.assert_called_once_with(self.ml, 'prefix.foo.bar.baz', 1,
sample_rate=0.5)
@mock.patch('ironic_lib.metrics.MetricLogger.send_gauge', autospec=True)
def test_decorator_gauge(self, mock_gauge):
@self.ml.gauge('foo.bar.baz')
def func(x):
return x
func(10)
mock_gauge.assert_called_once_with(self.ml, 'prefix.foo.bar.baz', 10)
@mock.patch('ironic_lib.metrics._time', autospec=True)
@mock.patch('ironic_lib.metrics.MetricLogger.send_timer', autospec=True)
def test_context_mgr_timer(self, mock_timer, mock_time):
mock_time.side_effect = [1, 43]
with self.ml.timer('foo.bar.baz'):
pass
mock_timer.assert_called_once_with(self.ml, 'prefix.foo.bar.baz',
42 * 1000)
@mock.patch('ironic_lib.metrics.MetricLogger.send_counter', autospec=True)
def test_context_mgr_counter(self, mock_counter):
with self.ml.counter('foo.bar.baz'):
pass
mock_counter.assert_called_once_with(self.ml, 'prefix.foo.bar.baz', 1,
sample_rate=None)
@mock.patch('ironic_lib.metrics.MetricLogger.send_counter', autospec=True)
def test_context_mgr_counter_sample_rate(self, mock_counter):
with self.ml.counter('foo.bar.baz', sample_rate=0.5):
pass
mock_counter.assert_called_once_with(self.ml, 'prefix.foo.bar.baz', 1,
sample_rate=0.5)


@ -1,96 +0,0 @@
# Copyright 2016 Rackspace Hosting
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import socket
import mock
from oslotest import base as test_base
from ironic_lib import metrics_statsd
class TestStatsdMetricLogger(test_base.BaseTestCase):
def setUp(self):
super(TestStatsdMetricLogger, self).setUp()
self.ml = metrics_statsd.StatsdMetricLogger('prefix', '.', 'test-host',
4321)
def test_init(self):
self.assertEqual(self.ml._host, 'test-host')
self.assertEqual(self.ml._port, 4321)
self.assertEqual(self.ml._target, ('test-host', 4321))
@mock.patch('ironic_lib.metrics_statsd.StatsdMetricLogger._send',
autospec=True)
def test_gauge(self, mock_send):
self.ml._gauge('metric', 10)
mock_send.assert_called_once_with(self.ml, 'metric', 10, 'g')
@mock.patch('ironic_lib.metrics_statsd.StatsdMetricLogger._send',
autospec=True)
def test_counter(self, mock_send):
self.ml._counter('metric', 10)
mock_send.assert_called_once_with(self.ml, 'metric', 10, 'c',
sample_rate=None)
mock_send.reset_mock()
self.ml._counter('metric', 10, sample_rate=1.0)
mock_send.assert_called_once_with(self.ml, 'metric', 10, 'c',
sample_rate=1.0)
@mock.patch('ironic_lib.metrics_statsd.StatsdMetricLogger._send',
autospec=True)
def test_timer(self, mock_send):
self.ml._timer('metric', 10)
mock_send.assert_called_once_with(self.ml, 'metric', 10, 'ms')
@mock.patch('socket.socket')
def test_open_socket(self, mock_socket_constructor):
self.ml._open_socket()
mock_socket_constructor.assert_called_once_with(
socket.AF_INET,
socket.SOCK_DGRAM)
@mock.patch('socket.socket')
def test_send(self, mock_socket_constructor):
mock_socket = mock.Mock()
mock_socket_constructor.return_value = mock_socket
self.ml._send('part1.part2', 2, 'type')
mock_socket.sendto.assert_called_once_with(
'part1.part2:2|type',
('test-host', 4321))
mock_socket.close.assert_called_once_with()
mock_socket.reset_mock()
self.ml._send('part1.part2', 3.14159, 'type')
mock_socket.sendto.assert_called_once_with(
'part1.part2:3.14159|type',
('test-host', 4321))
mock_socket.close.assert_called_once_with()
mock_socket.reset_mock()
self.ml._send('part1.part2', 5, 'type')
mock_socket.sendto.assert_called_once_with(
'part1.part2:5|type',
('test-host', 4321))
mock_socket.close.assert_called_once_with()
mock_socket.reset_mock()
self.ml._send('part1.part2', 5, 'type', sample_rate=0.5)
mock_socket.sendto.assert_called_once_with(
'part1.part2:5|type@0.5',
('test-host', 4321))
mock_socket.close.assert_called_once_with()


@ -1,108 +0,0 @@
# Copyright 2016 Rackspace Hosting
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base as test_base
from oslo_config import cfg
from ironic_lib import exception
from ironic_lib import metrics as metricslib
from ironic_lib import metrics_statsd
from ironic_lib import metrics_utils
CONF = cfg.CONF
class TestGetLogger(test_base.BaseTestCase):
def setUp(self):
super(TestGetLogger, self).setUp()
def test_default_backend(self):
metrics = metrics_utils.get_metrics_logger('foo')
self.assertIsInstance(metrics, metricslib.NoopMetricLogger)
def test_statsd_backend(self):
CONF.set_override('backend', 'statsd', group='metrics')
metrics = metrics_utils.get_metrics_logger('foo')
self.assertIsInstance(metrics, metrics_statsd.StatsdMetricLogger)
CONF.clear_override('backend', group='metrics')
def test_nonexisting_backend(self):
CONF.set_override('backend', 'none', group='metrics')
self.assertRaises(exception.InvalidMetricConfig,
metrics_utils.get_metrics_logger, 'foo')
CONF.clear_override('backend', group='metrics')
def test_numeric_prefix(self):
self.assertRaises(exception.InvalidMetricConfig,
metrics_utils.get_metrics_logger, 1)
def test_numeric_list_prefix(self):
self.assertRaises(exception.InvalidMetricConfig,
metrics_utils.get_metrics_logger, (1, 2))
def test_default_prefix(self):
metrics = metrics_utils.get_metrics_logger()
self.assertIsInstance(metrics, metricslib.NoopMetricLogger)
self.assertEqual(metrics.get_metric_name("bar"), "bar")
def test_prepend_host_backend(self):
CONF.set_override('prepend_host', True, group='metrics')
CONF.set_override('prepend_host_reverse', False, group='metrics')
metrics = metrics_utils.get_metrics_logger(prefix='foo',
host="host.example.com")
self.assertIsInstance(metrics, metricslib.NoopMetricLogger)
self.assertEqual(metrics.get_metric_name("bar"),
"host.example.com.foo.bar")
CONF.clear_override('prepend_host', group='metrics')
CONF.clear_override('prepend_host_reverse', group='metrics')
def test_prepend_global_prefix_host_backend(self):
CONF.set_override('prepend_host', True, group='metrics')
CONF.set_override('prepend_host_reverse', False, group='metrics')
CONF.set_override('global_prefix', 'global_pre', group='metrics')
metrics = metrics_utils.get_metrics_logger(prefix='foo',
host="host.example.com")
self.assertIsInstance(metrics, metricslib.NoopMetricLogger)
self.assertEqual(metrics.get_metric_name("bar"),
"global_pre.host.example.com.foo.bar")
CONF.clear_override('prepend_host', group='metrics')
CONF.clear_override('prepend_host_reverse', group='metrics')
CONF.clear_override('global_prefix', group='metrics')
def test_prepend_other_delim(self):
metrics = metrics_utils.get_metrics_logger('foo', delimiter='*')
self.assertIsInstance(metrics, metricslib.NoopMetricLogger)
self.assertEqual(metrics.get_metric_name("bar"),
"foo*bar")
def test_prepend_host_reverse_backend(self):
CONF.set_override('prepend_host', True, group='metrics')
CONF.set_override('prepend_host_reverse', True, group='metrics')
metrics = metrics_utils.get_metrics_logger('foo',
host="host.example.com")
self.assertIsInstance(metrics, metricslib.NoopMetricLogger)
self.assertEqual(metrics.get_metric_name("bar"),
"com.example.host.foo.bar")
CONF.clear_override('prepend_host', group='metrics')
CONF.clear_override('prepend_host_reverse', group='metrics')


@ -1,543 +0,0 @@
# Copyright 2011 Justin Santa Barbara
# Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import errno
import os
import os.path
import tempfile
import mock
from oslo_concurrency import processutils
from oslo_config import cfg
from oslotest import base as test_base
from ironic_lib import exception
from ironic_lib import utils
CONF = cfg.CONF
class BareMetalUtilsTestCase(test_base.BaseTestCase):
def test_unlink(self):
with mock.patch.object(os, "unlink", autospec=True) as unlink_mock:
unlink_mock.return_value = None
utils.unlink_without_raise("/fake/path")
unlink_mock.assert_called_once_with("/fake/path")
def test_unlink_ENOENT(self):
with mock.patch.object(os, "unlink", autospec=True) as unlink_mock:
unlink_mock.side_effect = OSError(errno.ENOENT)
utils.unlink_without_raise("/fake/path")
unlink_mock.assert_called_once_with("/fake/path")
class ExecuteTestCase(test_base.BaseTestCase):
def test_retry_on_failure(self):
fd, tmpfilename = tempfile.mkstemp()
_, tmpfilename2 = tempfile.mkstemp()
try:
fp = os.fdopen(fd, 'w+')
fp.write('''#!/bin/sh
# If stdin fails to get passed during one of the runs, make a note.
if ! grep -q foo
then
echo 'failure' > "$1"
fi
# If stdin has failed to get passed during this or a previous run, exit early.
if grep failure "$1"
then
exit 1
fi
runs="$(cat $1)"
if [ -z "$runs" ]
then
runs=0
fi
runs=$(($runs + 1))
echo $runs > "$1"
exit 1
''')
fp.close()
os.chmod(tmpfilename, 0o755)
try:
self.assertRaises(processutils.ProcessExecutionError,
utils.execute,
tmpfilename, tmpfilename2, attempts=10,
process_input=b'foo',
delay_on_retry=False)
except OSError as e:
if e.errno == errno.EACCES:
self.skipTest("Permissions error detected. "
"Are you running with a noexec /tmp?")
else:
raise
fp = open(tmpfilename2, 'r')
runs = fp.read()
fp.close()
self.assertNotEqual(runs.strip(), 'failure', 'stdin did not '
'always get passed '
'correctly')
runs = int(runs.strip())
self.assertEqual(10, runs,
'Ran %d times instead of 10.' % (runs,))
finally:
os.unlink(tmpfilename)
os.unlink(tmpfilename2)
def test_unknown_kwargs_raises_error(self):
self.assertRaises(processutils.UnknownArgumentError,
utils.execute,
'/usr/bin/env', 'true',
this_is_not_a_valid_kwarg=True)
def test_check_exit_code_boolean(self):
utils.execute('/usr/bin/env', 'false', check_exit_code=False)
self.assertRaises(processutils.ProcessExecutionError,
utils.execute,
'/usr/bin/env', 'false', check_exit_code=True)
def test_no_retry_on_success(self):
fd, tmpfilename = tempfile.mkstemp()
_, tmpfilename2 = tempfile.mkstemp()
try:
fp = os.fdopen(fd, 'w+')
fp.write('''#!/bin/sh
# If we've already run, bail out.
grep -q foo "$1" && exit 1
# Mark that we've run before.
echo foo > "$1"
# Check that stdin gets passed correctly.
grep foo
''')
fp.close()
os.chmod(tmpfilename, 0o755)
try:
utils.execute(tmpfilename,
tmpfilename2,
process_input=b'foo',
attempts=2)
except OSError as e:
if e.errno == errno.EACCES:
self.skipTest("Permissions error detected. "
"Are you running with a noexec /tmp?")
else:
raise
finally:
os.unlink(tmpfilename)
os.unlink(tmpfilename2)
@mock.patch.object(processutils, 'execute', autospec=True)
@mock.patch.object(os.environ, 'copy', return_value={}, autospec=True)
def test_execute_use_standard_locale_no_env_variables(self, env_mock,
execute_mock):
utils.execute('foo', use_standard_locale=True)
execute_mock.assert_called_once_with('foo',
env_variables={'LC_ALL': 'C'})
@mock.patch.object(processutils, 'execute', autospec=True)
def test_execute_use_standard_locale_with_env_variables(self,
execute_mock):
utils.execute('foo', use_standard_locale=True,
env_variables={'foo': 'bar'})
execute_mock.assert_called_once_with('foo',
env_variables={'LC_ALL': 'C',
'foo': 'bar'})
@mock.patch.object(processutils, 'execute', autospec=True)
def test_execute_not_use_standard_locale(self, execute_mock):
utils.execute('foo', use_standard_locale=False,
env_variables={'foo': 'bar'})
execute_mock.assert_called_once_with('foo',
env_variables={'foo': 'bar'})
def test_execute_without_root_helper(self):
CONF.set_override('root_helper', None, group='ironic_lib')
with mock.patch.object(
processutils, 'execute', autospec=True) as execute_mock:
utils.execute('foo', run_as_root=False)
execute_mock.assert_called_once_with('foo', run_as_root=False)
def test_execute_without_root_helper_run_as_root(self):
CONF.set_override('root_helper', None, group='ironic_lib')
with mock.patch.object(
processutils, 'execute', autospec=True) as execute_mock:
utils.execute('foo', run_as_root=True)
execute_mock.assert_called_once_with('foo', run_as_root=False)
def test_execute_with_root_helper(self):
with mock.patch.object(
processutils, 'execute', autospec=True) as execute_mock:
utils.execute('foo', run_as_root=False)
execute_mock.assert_called_once_with('foo', run_as_root=False)
def test_execute_with_root_helper_run_as_root(self):
with mock.patch.object(
processutils, 'execute', autospec=True) as execute_mock:
utils.execute('foo', run_as_root=True)
execute_mock.assert_called_once_with(
'foo', run_as_root=True,
root_helper=CONF.ironic_lib.root_helper)
@mock.patch.object(utils, 'LOG', autospec=True)
def _test_execute_with_log_stdout(self, log_mock, log_stdout=None):
with mock.patch.object(processutils, 'execute') as execute_mock:
execute_mock.return_value = ('stdout', 'stderr')
if log_stdout is not None:
utils.execute('foo', log_stdout=log_stdout)
else:
utils.execute('foo')
execute_mock.assert_called_once_with('foo')
name, args, kwargs = log_mock.debug.mock_calls[1]
if log_stdout is False:
self.assertEqual(2, log_mock.debug.call_count)
self.assertNotIn('stdout', args[0])
else:
self.assertEqual(3, log_mock.debug.call_count)
self.assertIn('stdout', args[0])
def test_execute_with_log_stdout_default(self):
self._test_execute_with_log_stdout()
def test_execute_with_log_stdout_true(self):
self._test_execute_with_log_stdout(log_stdout=True)
def test_execute_with_log_stdout_false(self):
self._test_execute_with_log_stdout(log_stdout=False)
class MkfsTestCase(test_base.BaseTestCase):
@mock.patch.object(utils, 'execute', autospec=True)
def test_mkfs(self, execute_mock):
utils.mkfs('ext4', '/my/block/dev')
utils.mkfs('msdos', '/my/msdos/block/dev')
utils.mkfs('swap', '/my/swap/block/dev')
expected = [mock.call('mkfs', '-t', 'ext4', '-F', '/my/block/dev',
run_as_root=True,
use_standard_locale=True),
mock.call('mkfs', '-t', 'msdos', '/my/msdos/block/dev',
run_as_root=True,
use_standard_locale=True),
mock.call('mkswap', '/my/swap/block/dev',
run_as_root=True,
use_standard_locale=True)]
self.assertEqual(expected, execute_mock.call_args_list)
@mock.patch.object(utils, 'execute', autospec=True)
def test_mkfs_with_label(self, execute_mock):
utils.mkfs('ext4', '/my/block/dev', 'ext4-vol')
utils.mkfs('msdos', '/my/msdos/block/dev', 'msdos-vol')
utils.mkfs('swap', '/my/swap/block/dev', 'swap-vol')
expected = [mock.call('mkfs', '-t', 'ext4', '-F', '-L', 'ext4-vol',
'/my/block/dev', run_as_root=True,
use_standard_locale=True),
mock.call('mkfs', '-t', 'msdos', '-n', 'msdos-vol',
'/my/msdos/block/dev', run_as_root=True,
use_standard_locale=True),
mock.call('mkswap', '-L', 'swap-vol',
'/my/swap/block/dev', run_as_root=True,
use_standard_locale=True)]
self.assertEqual(expected, execute_mock.call_args_list)
@mock.patch.object(utils, 'execute', autospec=True,
side_effect=processutils.ProcessExecutionError(
stderr=os.strerror(errno.ENOENT)))
def test_mkfs_with_unsupported_fs(self, execute_mock):
self.assertRaises(exception.FileSystemNotSupported,
utils.mkfs, 'foo', '/my/block/dev')
@mock.patch.object(utils, 'execute', autospec=True,
side_effect=processutils.ProcessExecutionError(
stderr='fake'))
def test_mkfs_with_unexpected_error(self, execute_mock):
self.assertRaises(processutils.ProcessExecutionError, utils.mkfs,
'ext4', '/my/block/dev', 'ext4-vol')
class IsHttpUrlTestCase(test_base.BaseTestCase):
def test_is_http_url(self):
self.assertTrue(utils.is_http_url('http://127.0.0.1'))
self.assertTrue(utils.is_http_url('https://127.0.0.1'))
self.assertTrue(utils.is_http_url('HTTP://127.1.2.3'))
self.assertTrue(utils.is_http_url('HTTPS://127.3.2.1'))
self.assertFalse(utils.is_http_url('Zm9vYmFy'))
self.assertFalse(utils.is_http_url('11111111'))
class ParseRootDeviceTestCase(test_base.BaseTestCase):
def test_parse_root_device_hints_without_operators(self):
root_device = {
'wwn': '123456', 'model': 'FOO model', 'size': 12345,
'serial': 'foo-serial', 'vendor': 'foo VENDOR with space',
'name': '/dev/sda', 'wwn_with_extension': '123456111',
'wwn_vendor_extension': '111', 'rotational': True}
result = utils.parse_root_device_hints(root_device)
expected = {
'wwn': 's== 123456', 'model': 's== foo%20model',
'size': '== 12345', 'serial': 's== foo-serial',
'vendor': 's== foo%20vendor%20with%20space',
'name': 's== /dev/sda', 'wwn_with_extension': 's== 123456111',
'wwn_vendor_extension': 's== 111', 'rotational': True}
self.assertEqual(expected, result)
def test_parse_root_device_hints_with_operators(self):
root_device = {
'wwn': 's== 123456', 'model': 's== foo MODEL', 'size': '>= 12345',
'serial': 's!= foo-serial', 'vendor': 's== foo VENDOR with space',
'name': '<or> /dev/sda <or> /dev/sdb',
'wwn_with_extension': 's!= 123456111',
'wwn_vendor_extension': 's== 111', 'rotational': True}
# Validate strings being normalized
expected = copy.deepcopy(root_device)
expected['model'] = 's== foo%20model'
expected['vendor'] = 's== foo%20vendor%20with%20space'
result = utils.parse_root_device_hints(root_device)
# The hints already contain the operators, make sure we keep it
self.assertEqual(expected, result)
def test_parse_root_device_hints_no_hints(self):
result = utils.parse_root_device_hints({})
self.assertIsNone(result)
def test_parse_root_device_hints_convert_size(self):
for size in (12345, '12345'):
result = utils.parse_root_device_hints({'size': size})
self.assertEqual({'size': '== 12345'}, result)
def test_parse_root_device_hints_invalid_size(self):
for value in ('not-int', -123, 0):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'size': value})
def test_parse_root_device_hints_int_or(self):
expr = '<or> 123 <or> 456 <or> 789'
result = utils.parse_root_device_hints({'size': expr})
self.assertEqual({'size': expr}, result)
def test_parse_root_device_hints_int_or_invalid(self):
expr = '<or> 123 <or> non-int <or> 789'
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'size': expr})
def test_parse_root_device_hints_string_or_space(self):
expr = '<or> foo <or> foo bar <or> bar'
expected = '<or> foo <or> foo%20bar <or> bar'
result = utils.parse_root_device_hints({'model': expr})
self.assertEqual({'model': expected}, result)
def _parse_root_device_hints_convert_rotational(self, values,
expected_value):
for value in values:
result = utils.parse_root_device_hints({'rotational': value})
self.assertEqual({'rotational': expected_value}, result)
def test_parse_root_device_hints_convert_rotational(self):
self._parse_root_device_hints_convert_rotational(
(True, 'true', 'on', 'y', 'yes'), True)
self._parse_root_device_hints_convert_rotational(
(False, 'false', 'off', 'n', 'no'), False)
def test_parse_root_device_hints_invalid_rotational(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'rotational': 'not-bool'})
def test_parse_root_device_hints_invalid_wwn(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'wwn': 123})
def test_parse_root_device_hints_invalid_wwn_with_extension(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'wwn_with_extension': 123})
def test_parse_root_device_hints_invalid_wwn_vendor_extension(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'wwn_vendor_extension': 123})
def test_parse_root_device_hints_invalid_model(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'model': 123})
def test_parse_root_device_hints_invalid_serial(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'serial': 123})
def test_parse_root_device_hints_invalid_vendor(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'vendor': 123})
def test_parse_root_device_hints_invalid_name(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'name': 123})
def test_parse_root_device_hints_non_existent_hint(self):
self.assertRaises(ValueError, utils.parse_root_device_hints,
{'non-existent': 'foo'})
def test_extract_hint_operator_and_values_single_value(self):
expected = {'op': '>=', 'values': ['123']}
self.assertEqual(
expected, utils._extract_hint_operator_and_values(
'>= 123', 'size'))
def test_extract_hint_operator_and_values_multiple_values(self):
expected = {'op': '<or>', 'values': ['123', '456', '789']}
expr = '<or> 123 <or> 456 <or> 789'
self.assertEqual(
expected, utils._extract_hint_operator_and_values(expr, 'size'))
def test_extract_hint_operator_and_values_multiple_values_space(self):
expected = {'op': '<or>', 'values': ['foo', 'foo bar', 'bar']}
expr = '<or> foo <or> foo bar <or> bar'
self.assertEqual(
expected, utils._extract_hint_operator_and_values(expr, 'model'))
def test_extract_hint_operator_and_values_no_operator(self):
expected = {'op': '', 'values': ['123']}
self.assertEqual(
expected, utils._extract_hint_operator_and_values('123', 'size'))
def test_extract_hint_operator_and_values_empty_value(self):
self.assertRaises(
ValueError, utils._extract_hint_operator_and_values, '', 'size')
def test_extract_hint_operator_and_values_integer(self):
expected = {'op': '', 'values': ['123']}
self.assertEqual(
expected, utils._extract_hint_operator_and_values(123, 'size'))
def test__append_operator_to_hints(self):
root_device = {'serial': 'foo', 'size': 12345,
'model': 'foo model', 'rotational': True}
expected = {'serial': 's== foo', 'size': '== 12345',
'model': 's== foo model', 'rotational': True}
result = utils._append_operator_to_hints(root_device)
self.assertEqual(expected, result)
def test_normalize_hint_expression_or(self):
expr = '<or> foo <or> foo bar <or> bar'
expected = '<or> foo <or> foo%20bar <or> bar'
result = utils._normalize_hint_expression(expr, 'model')
self.assertEqual(expected, result)
def test_normalize_hint_expression_in(self):
expr = '<in> foo <in> foo bar <in> bar'
expected = '<in> foo <in> foo%20bar <in> bar'
result = utils._normalize_hint_expression(expr, 'model')
self.assertEqual(expected, result)
def test_normalize_hint_expression_op_space(self):
expr = 's== test string with space'
expected = 's== test%20string%20with%20space'
result = utils._normalize_hint_expression(expr, 'model')
self.assertEqual(expected, result)
def test_normalize_hint_expression_op_no_space(self):
expr = 's!= SpongeBob'
expected = 's!= spongebob'
result = utils._normalize_hint_expression(expr, 'model')
self.assertEqual(expected, result)
def test_normalize_hint_expression_no_op_space(self):
expr = 'no operators'
expected = 'no%20operators'
result = utils._normalize_hint_expression(expr, 'model')
self.assertEqual(expected, result)
def test_normalize_hint_expression_no_op_no_space(self):
expr = 'NoSpace'
expected = 'nospace'
result = utils._normalize_hint_expression(expr, 'model')
self.assertEqual(expected, result)
def test_normalize_hint_expression_empty_value(self):
self.assertRaises(
ValueError, utils._normalize_hint_expression, '', 'size')
class MatchRootDeviceTestCase(test_base.BaseTestCase):
def setUp(self):
super(MatchRootDeviceTestCase, self).setUp()
self.devices = [
{'name': '/dev/sda', 'size': 64424509440, 'model': 'ok model',
'serial': 'fakeserial'},
{'name': '/dev/sdb', 'size': 128849018880, 'model': 'big model',
'serial': 'veryfakeserial', 'rotational': 'yes'},
{'name': '/dev/sdc', 'size': 10737418240, 'model': 'small model',
'serial': 'veryveryfakeserial', 'rotational': False},
]
def test_match_root_device_hints_one_hint(self):
root_device_hints = {'size': '>= 70'}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertEqual('/dev/sdb', dev['name'])
def test_match_root_device_hints_rotational(self):
root_device_hints = {'rotational': False}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertEqual('/dev/sdc', dev['name'])
def test_match_root_device_hints_rotational_convert_devices_bool(self):
root_device_hints = {'size': '>=100', 'rotational': True}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertEqual('/dev/sdb', dev['name'])
def test_match_root_device_hints_multiple_hints(self):
root_device_hints = {'size': '>= 50', 'model': 's==big model',
'serial': 's==veryfakeserial'}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertEqual('/dev/sdb', dev['name'])
def test_match_root_device_hints_multiple_hints2(self):
root_device_hints = {
'size': '<= 20',
'model': '<or> model 5 <or> foomodel <or> small model <or>',
'serial': 's== veryveryfakeserial'}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertEqual('/dev/sdc', dev['name'])
def test_match_root_device_hints_multiple_hints3(self):
root_device_hints = {'rotational': False, 'model': '<in> small'}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertEqual('/dev/sdc', dev['name'])
def test_match_root_device_hints_no_operators(self):
root_device_hints = {'size': '120', 'model': 'big model',
'serial': 'veryfakeserial'}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertEqual('/dev/sdb', dev['name'])
def test_match_root_device_hints_no_device_found(self):
root_device_hints = {'size': '>=50', 'model': 's==foo'}
dev = utils.match_root_device_hints(self.devices, root_device_hints)
self.assertIsNone(dev)
@mock.patch.object(utils.LOG, 'warning', autospec=True)
def test_match_root_device_hints_empty_device_attribute(self, mock_warn):
empty_dev = [{'name': '/dev/sda', 'model': ' '}]
dev = utils.match_root_device_hints(empty_dev, {'model': 'foo'})
self.assertIsNone(dev)
self.assertTrue(mock_warn.called)

@@ -1,407 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utilities and helper functions."""
import copy
import errno
import logging
import os
import re
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import excutils
from oslo_utils import specs_matcher
from oslo_utils import strutils
from oslo_utils import units
import six
from six.moves.urllib import parse
from ironic_lib.common.i18n import _
from ironic_lib.common.i18n import _LE
from ironic_lib.common.i18n import _LW
from ironic_lib import exception
utils_opts = [
cfg.StrOpt('root_helper',
default='sudo ironic-rootwrap /etc/ironic/rootwrap.conf',
help='Command that is prefixed to commands that are run as '
'root. If not specified, no commands are run as root.'),
]
CONF = cfg.CONF
CONF.register_opts(utils_opts, group='ironic_lib')
LOG = logging.getLogger(__name__)
# A dictionary in the form {hint name: hint type}
VALID_ROOT_DEVICE_HINTS = {
'size': int, 'model': str, 'wwn': str, 'serial': str, 'vendor': str,
'wwn_with_extension': str, 'wwn_vendor_extension': str, 'name': str,
'rotational': bool,
}
ROOT_DEVICE_HINTS_GRAMMAR = specs_matcher.make_grammar()
def execute(*cmd, **kwargs):
"""Convenience wrapper around oslo's execute() method.
Executes and logs results from a system command. See docs for
oslo_concurrency.processutils.execute for usage.
:param \*cmd: positional arguments to pass to processutils.execute()
:param use_standard_locale: keyword-only argument. True | False.
Defaults to False. If set to True,
execute command with standard locale
added to environment variables.
:param log_stdout: keyword-only argument. True | False. Defaults
to True. If set to True, logs the output.
:param \*\*kwargs: keyword arguments to pass to processutils.execute()
:returns: (stdout, stderr) from process execution
:raises: UnknownArgumentError on receiving unknown arguments
:raises: ProcessExecutionError
:raises: OSError
"""
use_standard_locale = kwargs.pop('use_standard_locale', False)
if use_standard_locale:
env = kwargs.pop('env_variables', os.environ.copy())
env['LC_ALL'] = 'C'
kwargs['env_variables'] = env
log_stdout = kwargs.pop('log_stdout', True)
# If root_helper config is not specified, no commands are run as root.
run_as_root = kwargs.get('run_as_root', False)
if run_as_root:
if not CONF.ironic_lib.root_helper:
kwargs['run_as_root'] = False
else:
kwargs['root_helper'] = CONF.ironic_lib.root_helper
result = processutils.execute(*cmd, **kwargs)
LOG.debug('Execution completed, command line is "%s"',
' '.join(map(str, cmd)))
if log_stdout:
LOG.debug('Command stdout is: "%s"', result[0])
LOG.debug('Command stderr is: "%s"', result[1])
return result
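The run_as_root/root_helper branch above can be isolated as a standalone sketch (`apply_root_helper` is an illustrative name, not part of ironic-lib): without a configured root_helper, the root request is silently dropped; otherwise the helper command is threaded through to processutils.

```python
def apply_root_helper(kwargs, root_helper):
    """Sketch of the run_as_root handling in execute() above."""
    kwargs = dict(kwargs)  # do not mutate the caller's dict
    if kwargs.get('run_as_root'):
        if not root_helper:
            # No helper configured: refuse to escalate privileges.
            kwargs['run_as_root'] = False
        else:
            # Pass the helper (e.g. 'sudo ironic-rootwrap ...') through.
            kwargs['root_helper'] = root_helper
    return kwargs
```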
def mkfs(fs, path, label=None):
"""Format a file or block device
:param fs: Filesystem type (examples include 'swap', 'ext3', 'ext4',
'btrfs', etc.)
:param path: Path to file or block device to format
:param label: Volume label to use
"""
if fs == 'swap':
args = ['mkswap']
else:
args = ['mkfs', '-t', fs]
# Add -F to force formatting without interactive confirmation when
# the target is not a block device.
if fs in ('ext3', 'ext4'):
args.extend(['-F'])
if label:
if fs in ('msdos', 'vfat'):
label_opt = '-n'
else:
label_opt = '-L'
args.extend([label_opt, label])
args.append(path)
try:
execute(*args, run_as_root=True, use_standard_locale=True)
except processutils.ProcessExecutionError as e:
with excutils.save_and_reraise_exception() as ctx:
if os.strerror(errno.ENOENT) in e.stderr:
ctx.reraise = False
LOG.exception(_LE('Failed to make file system. '
'File system %s is not supported.'), fs)
raise exception.FileSystemNotSupported(fs=fs)
else:
LOG.exception(_LE('Failed to create a file system '
'in %(path)s. Error: %(error)s'),
{'path': path, 'error': e})
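The command construction in mkfs() can be factored out as a pure function, which is what the MkfsTestCase expectations earlier in this diff exercise (`build_mkfs_args` is an illustrative name, not part of ironic-lib):

```python
def build_mkfs_args(fs, path, label=None):
    """Sketch of the argument assembly in mkfs() above."""
    if fs == 'swap':
        args = ['mkswap']
    else:
        args = ['mkfs', '-t', fs]
    # -F forces mkfs on ext3/ext4 when the target is not a block device.
    if fs in ('ext3', 'ext4'):
        args.append('-F')
    if label:
        # FAT-family filesystems use -n for labels; the rest use -L.
        args.extend(['-n' if fs in ('msdos', 'vfat') else '-L', label])
    args.append(path)
    return args
```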
def unlink_without_raise(path):
try:
os.unlink(path)
except OSError as e:
if e.errno == errno.ENOENT:
return
else:
LOG.warning(_LW("Failed to unlink %(path)s, error: %(e)s"),
{'path': path, 'e': e})
def dd(src, dst, *args):
"""Execute dd from src to dst.
:param src: the input file for dd command.
:param dst: the output file for dd command.
:param args: a tuple containing the arguments to be
passed to dd command.
:raises: processutils.ProcessExecutionError if it failed
to run the process.
"""
LOG.debug("Starting dd process.")
execute('dd', 'if=%s' % src, 'of=%s' % dst, *args,
use_standard_locale=True, run_as_root=True, check_exit_code=[0])
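The dd() helper above only assembles a command line and delegates to execute(); that assembly can be sketched on its own (`build_dd_cmd` is an illustrative name, not part of ironic-lib):

```python
def build_dd_cmd(src, dst, *args):
    """Sketch of the argument tuple dd() above passes to execute()."""
    return ('dd', 'if=%s' % src, 'of=%s' % dst) + args
```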
def is_http_url(url):
url = url.lower()
return url.startswith('http://') or url.startswith('https://')
def list_opts():
"""Entry point for oslo-config-generator."""
return [('ironic_lib', utils_opts)]
def _extract_hint_operator_and_values(hint_expression, hint_name):
"""Extract the operator and value(s) of a root device hint expression.
A root device hint expression could contain one or more values
depending on the operator. This method extracts the operator and
value(s) and returns a dictionary containing both.
:param hint_expression: The hint expression string containing value(s)
and operator (optionally).
:param hint_name: The name of the hint. Used for logging.
:raises: ValueError if the hint_expression is empty.
:returns: A dictionary containing:
:op: The operator. An empty string in case of None.
:values: A list of values stripped and converted to lowercase.
"""
expression = six.text_type(hint_expression).strip().lower()
if not expression:
raise ValueError(
_('Root device hint "%s" expression is empty') % hint_name)
# parseString() returns a list of tokens in which the operator (if
# present) is always the first element.
ast = ROOT_DEVICE_HINTS_GRAMMAR.parseString(expression)
if len(ast) <= 1:
# hint_expression had no operator
return {'op': '', 'values': [expression]}
op = ast[0]
return {'values': [v.strip() for v in re.split(op, expression) if v],
'op': op}
def _normalize_hint_expression(hint_expression, hint_name):
"""Normalize a string type hint expression.
A string-type hint expression contains one or more operators and
one or more values: [<op>] <value> [<op> <value>]*. This normalizes
the values by url-encoding white spaces and special characters. The
operators are not normalized. For example: the hint value of "<or>
foo bar <or> bar" will become "<or> foo%20bar <or> bar".
:param hint_expression: The hint expression string containing value(s)
and operator (optionally).
:param hint_name: The name of the hint. Used for logging.
:raises: ValueError if the hint_expression is empty.
:returns: A normalized string.
"""
hdict = _extract_hint_operator_and_values(hint_expression, hint_name)
result = hdict['op'].join([' %s ' % parse.quote(t)
for t in hdict['values']])
return (hdict['op'] + result).strip()
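The quote-and-rejoin step above can be reproduced in isolation (Python 3 stdlib used for brevity; `normalize_values` is an illustrative name, not part of ironic-lib):

```python
from urllib.parse import quote

def normalize_values(op, values):
    """Sketch of the join in _normalize_hint_expression() above."""
    # URL-quote each value, rejoin with the operator, strip the edges.
    result = op.join(' %s ' % quote(v) for v in values)
    return (op + result).strip()
```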
def _append_operator_to_hints(root_device):
"""Add an equal (s== or ==) operator to the hints.
For backwards compatibility, for root device hints where no operator
means equal, this method adds the equal operator to the hint. This is
needed when using oslo.utils.specs_matcher methods.
:param root_device: The root device hints dictionary.
"""
for name, expression in root_device.items():
# NOTE(lucasagomes): The specs_matcher from oslo.utils does not
# support boolean, so we don't need to append any operator
# for it.
if VALID_ROOT_DEVICE_HINTS[name] is bool:
continue
expression = six.text_type(expression)
ast = ROOT_DEVICE_HINTS_GRAMMAR.parseString(expression)
if len(ast) > 1:
continue
op = 's== %s' if VALID_ROOT_DEVICE_HINTS[name] is str else '== %s'
root_device[name] = op % expression
return root_device
def parse_root_device_hints(root_device):
"""Parse the root_device property of a node.
Parses and validates the root_device property of a node. These are
hints for how a node's root device is created. The 'size' hint
should be a positive integer. The 'rotational' hint should be a
Boolean value.
:param root_device: the root_device dictionary from the node's property.
:returns: a dictionary with the root device hints parsed or
None if there are no hints.
:raises: ValueError, if some information is invalid.
"""
if not root_device:
return
root_device = copy.deepcopy(root_device)
invalid_hints = set(root_device) - set(VALID_ROOT_DEVICE_HINTS)
if invalid_hints:
raise ValueError(
_('The hints "%(invalid_hints)s" are invalid. '
'Valid hints are: "%(valid_hints)s"') %
{'invalid_hints': ', '.join(invalid_hints),
'valid_hints': ', '.join(VALID_ROOT_DEVICE_HINTS)})
for name, expression in root_device.items():
hint_type = VALID_ROOT_DEVICE_HINTS[name]
if hint_type is str:
if not isinstance(expression, six.string_types):
raise ValueError(
_('Root device hint "%(name)s" is not a string value. '
'Hint expression: %(expression)s') %
{'name': name, 'expression': expression})
root_device[name] = _normalize_hint_expression(expression, name)
elif hint_type is int:
for v in _extract_hint_operator_and_values(expression,
name)['values']:
try:
integer = int(v)
except ValueError:
raise ValueError(
_('Root device hint "%(name)s" is not an integer '
'value. Current value: %(expression)s') %
{'name': name, 'expression': expression})
if integer <= 0:
raise ValueError(
_('Root device hint "%(name)s" should be a positive '
'integer. Current value: %(expression)s') %
{'name': name, 'expression': expression})
elif hint_type is bool:
try:
root_device[name] = strutils.bool_from_string(
expression, strict=True)
except ValueError:
raise ValueError(
_('Root device hint "%(name)s" is not a Boolean value. '
'Current value: %(expression)s') %
{'name': name, 'expression': expression})
return _append_operator_to_hints(root_device)
def match_root_device_hints(devices, root_device_hints):
"""Try to find a device that matches the root device hints.
Try to find a device that matches the root device hints. In order
for a device to be matched it needs to satisfy all the given hints.
:param devices: A list of dictionaries representing the devices
containing one or more of the following keys:
:name: (String) The device name, e.g. /dev/sda
:size: (Integer) Size of the device in *bytes*
:model: (String) Device model
:vendor: (String) Device vendor name
:serial: (String) Device serial number
:wwn: (String) Unique storage identifier
:wwn_with_extension: (String): Unique storage identifier with
the vendor extension appended
:wwn_vendor_extension: (String): Unique vendor storage identifier
:rotational: (Boolean) Whether it's a rotational device or
not. Useful to distinguish HDDs (rotational) and SSDs
(not rotational).
:param root_device_hints: A dictionary with the root device hints.
:raises: ValueError, if some information is invalid.
:returns: The first device to match all the hints or None.
"""
LOG.debug('Trying to find a device from "%(devs)s" that matches the '
'root device hints "%(hints)s"', {'devs': devices,
'hints': root_device_hints})
parsed_hints = parse_root_device_hints(root_device_hints)
for dev in devices:
for hint in parsed_hints:
hint_type = VALID_ROOT_DEVICE_HINTS[hint]
device_value = dev.get(hint)
hint_value = parsed_hints[hint]
if hint_type is str:
try:
device_value = _normalize_hint_expression(device_value,
hint)
except ValueError:
LOG.warning(
_LW('The attribute "%(attr)s" of the device "%(dev)s" '
'has an empty value. Skipping device.'),
{'attr': hint, 'dev': dev})
break
# NOTE(lucasagomes): Boolean hints are not supported by
# specs_matcher.match(), so we need to do the comparison
# ourselves
if hint_type is bool:
try:
device_value = strutils.bool_from_string(device_value,
strict=True)
except ValueError:
LOG.warning(_LW('The attribute "%(attr)s" (with value '
'"%(value)s") of device "%(dev)s" is not '
'a valid Boolean. Skipping device.'),
{'attr': hint, 'value': device_value,
'dev': dev})
break
if device_value == hint_value:
continue
break
if hint == 'size':
# Since we don't support units yet we expect the size
# in GiB for now
device_value = device_value / units.Gi
if not specs_matcher.match(device_value, hint_value):
break
else:
LOG.debug('Device found! The device "%s" matches the root '
'device hints', dev)
return dev
LOG.debug('No device found that matches the root device hints')
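Stripped of operator parsing and type coercion, the overall matching loop reduces to "first device satisfying every hint"; a minimal equality-only sketch (unlike the specs_matcher-backed version above, and not part of ironic-lib):

```python
def match_devices(devices, hints):
    """Greatly simplified matcher: exact-equality hints only."""
    for dev in devices:
        # A device matches only if it satisfies all the given hints.
        if all(dev.get(name) == value for name, value in hints.items()):
            return dev
    return None  # no device satisfied every hint
```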

@@ -1,18 +0,0 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
version_info = pbr.version.VersionInfo('ironic_lib')

@@ -1,13 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.config>=3.14.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
requests>=2.10.0 # Apache-2.0
six>=1.9.0 # MIT
oslo.log>=1.14.0 # Apache-2.0

@@ -1,38 +0,0 @@
[metadata]
name = ironic-lib
summary = Ironic common library
description-file =
README.rst
author = OpenStack Ironic
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
[files]
packages =
ironic_lib
[entry_points]
oslo.config.opts =
ironic_lib.disk_partitioner = ironic_lib.disk_partitioner:list_opts
ironic_lib.disk_utils = ironic_lib.disk_utils:list_opts
ironic_lib.utils = ironic_lib.utils:list_opts
ironic_lib.metrics = ironic_lib.metrics_utils:list_opts
ironic_lib.metrics_statsd = ironic_lib.metrics_statsd:list_opts
[pbr]
autodoc_index_modules = True
warnerrors = True
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

@@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)

@@ -1,17 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
coverage>=3.6 # Apache-2.0
eventlet!=0.18.3,>=0.18.2 # MIT
hacking<0.11,>=0.10.0
mock>=2.0 # BSD
os-testr>=0.7.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
# Doc requirements
doc8 # Apache-2.0
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0

@@ -1,55 +0,0 @@
#!/usr/bin/env bash
# The client constraints file contains a version pin for this client
# that conflicts with installing the client from source. We replace the
# version pin in the constraints file before applying it for a
# from-source installation.
ZUUL_CLONER=/usr/zuul-env/bin/zuul-cloner
BRANCH_NAME=master
CLIENT_NAME=ironic-lib
requirements_installed=$(echo "import openstack_requirements" | python 2>/dev/null ; echo $?)
set -e
CONSTRAINTS_FILE=$1
shift
install_cmd="pip install"
mydir=$(mktemp -dt "$CLIENT_NAME-tox_install-XXXXXXX")
trap "rm -rf $mydir" EXIT
localfile=$mydir/upper-constraints.txt
if [[ $CONSTRAINTS_FILE != http* ]]; then
CONSTRAINTS_FILE=file://$CONSTRAINTS_FILE
fi
curl $CONSTRAINTS_FILE -k -o $localfile
install_cmd="$install_cmd -c$localfile"
if [ $requirements_installed -eq 0 ]; then
echo "ALREADY INSTALLED" > /tmp/tox_install.txt
echo "Requirements already installed; using existing package"
elif [ -x "$ZUUL_CLONER" ]; then
echo "ZUUL CLONER" > /tmp/tox_install.txt
pushd $mydir
$ZUUL_CLONER --cache-dir \
/opt/git \
--branch $BRANCH_NAME \
git://git.openstack.org \
openstack/requirements
cd openstack/requirements
$install_cmd -e .
popd
else
echo "PIP HARDCODE" > /tmp/tox_install.txt
if [ -z "$REQUIREMENTS_PIP_LOCATION" ]; then
REQUIREMENTS_PIP_LOCATION="git+https://git.openstack.org/openstack/requirements@$BRANCH_NAME#egg=requirements"
fi
$install_cmd -U -e ${REQUIREMENTS_PIP_LOCATION}
fi
# This is the main purpose of the script: allow local installation of
# the current repo. It is listed in the constraints file, so any
# install would be constrained; we need to unconstrain it.
edit-constraints $localfile -- $CLIENT_NAME "-e file://$PWD#egg=$CLIENT_NAME"
$install_cmd -U $*
exit $?

tox.ini
@@ -1,41 +0,0 @@
[tox]
minversion = 1.8
skipsdist = True
envlist = py34,py27,pep8
[testenv]
install_command =
{toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
usedevelop = True
setenv = VIRTUAL_ENV={envdir}
PYTHONDONTWRITEBYTECODE = 1
LANGUAGE=en_US
TESTS_DIR=./ironic_lib/tests/
deps = -r{toxinidir}/test-requirements.txt
commands = ostestr {posargs}
[flake8]
show-source = True
ignore = E129
exclude = .venv,.tox,dist,doc,*.egg,.update-venv
[testenv:pep8]
commands =
flake8 {posargs}
doc8 README.rst doc/source --ignore D001
[testenv:cover]
setenv = VIRTUALENV={envdir}
LANGUAGE=en_US
commands =
python setup.py test --coverage --coverage-package-name=ironic_lib --omit=ironic_lib/openstack/common/*.py {posargs}
[testenv:venv]
commands = {posargs}
[testenv:docs]
setenv = PYTHONHASHSEED=0
sitepackages = False
envdir = {toxworkdir}/venv
commands =
python setup.py build_sphinx