Initial commit for ironic-lib

Faizan Barmawer 2015-02-24 06:08:58 -08:00
parent 6f0c39d3f4
commit 1d78cb7167
28 changed files with 1971 additions and 1536 deletions

.testr.conf

@@ -1,4 +1,10 @@
[DEFAULT]
-test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ ${TESTS_DIR:-./ironic/tests/} $LISTOPT $IDOPTION
+test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
+OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
+OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
+OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
+OS_DEBUG=${OS_DEBUG:-0} \
+${PYTHON:-python} -m subunit.run discover -t ./ $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

LICENSE (new file, 202 lines)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md (new file, 2 lines)

@@ -0,0 +1,2 @@
# ironic-lib
A collection of common Ironic utilities

README.rst

@@ -1,31 +1,17 @@
-Ironic
-======
+----------
+ironic_lib
+----------
-Ironic is an integrated OpenStack project which aims to provision bare
-metal machines instead of virtual machines, forked from the Nova Baremetal
-driver. It is best thought of as a bare metal hypervisor **API** and a set
-of plugins which interact with the bare metal hypervisors. By default, it
-will use PXE and IPMI in concert to provision and turn on/off machines,
-but Ironic also supports vendor-specific plugins which may implement
-additional functionality.
+Running Tests
+-------------
------------------
-Project Resources
------------------
+To run tests in virtualenvs (preferred)::
-Project status, bugs, and blueprints are tracked on Launchpad:
+sudo pip install tox
+tox
-http://launchpad.net/ironic
+To run tests in the current environment::
-Developer documentation can be found here:
+sudo pip install -r requirements.txt
+nosetests
-http://docs.openstack.org/developer/ironic
-Additional resources are linked from the project wiki page:
-https://wiki.openstack.org/wiki/Ironic
-Anyone wishing to contribute to an OpenStack project should
-find a good reference here:
-http://docs.openstack.org/infra/manual/developers.html

TESTING.rst (new file, 88 lines)

@@ -0,0 +1,88 @@
===========================
Testing Your OpenStack Code
===========================
------------
A Quickstart
------------
This is designed to be enough information for you to run your first tests.
Detailed information on testing can be found here: https://wiki.openstack.org/wiki/Testing
*Install pip*::
[apt-get | yum] install python-pip
More information on pip here: http://www.pip-installer.org/en/latest/
*Use pip to install tox*::
pip install tox
Run The Tests
-------------
*Navigate to the project's root directory and execute*::
tox
Note: completing this command may take a long time (depending on system
resources); also, you might not see any output until tox is complete.
Information about tox can be found here: http://testrun.org/tox/latest/
Run The Tests in One Environment
--------------------------------
Tox will run your entire test suite in the environments specified in the project tox.ini::
[tox]
envlist = <list of available environments>
To run the test suite in just one of the environments in envlist execute::
tox -e <env>
so for example, *run the test suite in py26*::
tox -e py26
Run One Test
------------
To run individual tests with tox:
if testr is in tox.ini, for example::
[testenv]
includes "python setup.py testr --slowest --testr-args='{posargs}'"
run individual tests with the following syntax::
tox -e <env> -- path.to.module:Class.test
so for example, *run the cpu_limited test in Nova*::
tox -e py27 -- nova.tests.test_claims:ClaimTestCase.test_cpu_unlimited
if nose is in tox.ini, for example::
[testenv]
includes "nosetests {posargs}"
run individual tests with the following syntax::
tox -e <env> -- --tests path.to.module:Class.test
so for example, *run the list test in Glance*::
tox -e py27 -- --tests glance.tests.unit.test_auth.py:TestImageRepoProxy.test_list
Need More Info?
---------------
More information about testr: https://wiki.openstack.org/wiki/Testr
More information about nose: https://nose.readthedocs.org/en/latest/
More information about testing OpenStack code can be found here:
https://wiki.openstack.org/wiki/Testing


@@ -1,22 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'
import eventlet
eventlet.monkey_patch(os=False)


@@ -1,526 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utilities and helper functions."""
import contextlib
import errno
import hashlib
import os
import random
import re
import shutil
import tempfile
import netaddr
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import excutils
import paramiko
import six
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common.i18n import _LW
from ironic.openstack.common import log as logging
utils_opts = [
cfg.StrOpt('rootwrap_config',
default="/etc/ironic/rootwrap.conf",
help='Path to the rootwrap configuration file to use for '
'running commands as root.'),
cfg.StrOpt('tempdir',
help='Explicitly specify the temporary working directory.'),
]
CONF = cfg.CONF
CONF.register_opts(utils_opts)
LOG = logging.getLogger(__name__)
def _get_root_helper():
return 'sudo ironic-rootwrap %s' % CONF.rootwrap_config
def execute(*cmd, **kwargs):
"""Convenience wrapper around oslo's execute() method.
:param cmd: Passed to processutils.execute.
:param use_standard_locale: True | False. Defaults to False. If set to
True, execute command with standard locale
added to environment variables.
:returns: (stdout, stderr) from process execution
:raises: UnknownArgumentError
:raises: ProcessExecutionError
"""
use_standard_locale = kwargs.pop('use_standard_locale', False)
if use_standard_locale:
env = kwargs.pop('env_variables', os.environ.copy())
env['LC_ALL'] = 'C'
kwargs['env_variables'] = env
if kwargs.get('run_as_root') and 'root_helper' not in kwargs:
kwargs['root_helper'] = _get_root_helper()
result = processutils.execute(*cmd, **kwargs)
LOG.debug('Execution completed, command line is "%s"',
' '.join(map(str, cmd)))
LOG.debug('Command stdout is: "%s"' % result[0])
LOG.debug('Command stderr is: "%s"' % result[1])
return result
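# Illustrative usage (a sketch, not part of the original module): run a
# command under the standard "C" locale so its output can be parsed
# reliably; run_as_root routes through the rootwrap helper above. The
# device path is a placeholder.
#
#     out, err = execute('parted', '-s', '/dev/sda', 'print',
#                        run_as_root=True, use_standard_locale=True)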
def trycmd(*args, **kwargs):
"""Convenience wrapper around oslo's trycmd() method."""
if kwargs.get('run_as_root') and 'root_helper' not in kwargs:
kwargs['root_helper'] = _get_root_helper()
return processutils.trycmd(*args, **kwargs)
def ssh_connect(connection):
"""Connect to a remote system using the SSH protocol.
:param connection: a dict of connection parameters.
:returns: paramiko.SSHClient -- an active ssh connection.
:raises: SSHConnectFailed
"""
try:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
key_contents = connection.get('key_contents')
if key_contents:
data = six.moves.StringIO(key_contents)
if "BEGIN RSA PRIVATE" in key_contents:
pkey = paramiko.RSAKey.from_private_key(data)
elif "BEGIN DSA PRIVATE" in key_contents:
pkey = paramiko.DSSKey.from_private_key(data)
else:
# Can't include the key contents - secure material.
raise ValueError(_("Invalid private key"))
else:
pkey = None
ssh.connect(connection.get('host'),
username=connection.get('username'),
password=connection.get('password'),
port=connection.get('port', 22),
pkey=pkey,
key_filename=connection.get('key_filename'),
timeout=connection.get('timeout', 10))
# send TCP keepalive packets every 20 seconds
ssh.get_transport().set_keepalive(20)
except Exception as e:
LOG.debug("SSH connect failed: %s" % e)
raise exception.SSHConnectFailed(host=connection.get('host'))
return ssh
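# Illustrative connection dict (a sketch; the host and credentials are
# placeholders): the keys mirror the connection.get() calls above.
#
#     conn = {'host': '192.0.2.10', 'username': 'admin',
#             'password': 'secret', 'port': 22, 'timeout': 10}
#     ssh = ssh_connect(conn)
#     stdin, stdout, stderr = ssh.exec_command('uname -a')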
def generate_uid(topic, size=8):
characters = '01234567890abcdefghijklmnopqrstuvwxyz'
choices = [random.choice(characters) for _x in range(size)]
return '%s-%s' % (topic, ''.join(choices))
def random_alnum(size=32):
characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
return ''.join(random.choice(characters) for _ in range(size))
def delete_if_exists(pathname):
"""Delete a file, but ignore a file-not-found error."""
try:
os.unlink(pathname)
except OSError as e:
if e.errno == errno.ENOENT:
return
else:
raise
def is_valid_boolstr(val):
"""Check if the provided string is a valid bool string or not."""
boolstrs = ('true', 'false', 'yes', 'no', 'y', 'n', '1', '0')
return str(val).lower() in boolstrs
def is_valid_mac(address):
"""Verify the format of a MAC address.
Check if a MAC address is valid and contains six octets. Accepts
colon-separated format only.
:param address: MAC address to be validated.
:returns: True if valid. False if not.
"""
m = "[0-9a-f]{2}(:[0-9a-f]{2}){5}$"
return (isinstance(address, six.string_types) and
re.match(m, address.lower()))
def is_hostname_safe(hostname):
"""Determine if the supplied hostname is RFC compliant.
Check that the supplied hostname conforms to:
* http://en.wikipedia.org/wiki/Hostname
* http://tools.ietf.org/html/rfc952
* http://tools.ietf.org/html/rfc1123
:param hostname: The hostname to be validated.
:returns: True if valid. False if not.
"""
m = '^[a-z0-9]([a-z0-9\-]{0,61}[a-z0-9])?$'
return (isinstance(hostname, six.string_types) and
(re.match(m, hostname) is not None))
def validate_and_normalize_mac(address):
"""Validate a MAC address and return normalized form.
Checks whether the supplied MAC address is formally correct and
normalizes it to all lower case.
:param address: MAC address to be validated and normalized.
:returns: Normalized and validated MAC address.
:raises: InvalidMAC if the MAC address is not valid.
"""
if not is_valid_mac(address):
raise exception.InvalidMAC(mac=address)
return address.lower()
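# Examples (illustrative): only the colon-separated six-octet form passes,
# and case is normalized before matching.
#
#     >>> bool(is_valid_mac('AA:BB:CC:DD:EE:FF'))
#     True
#     >>> bool(is_valid_mac('aa-bb-cc-dd-ee-ff'))
#     False
#     >>> validate_and_normalize_mac('AA:BB:CC:DD:EE:FF')
#     'aa:bb:cc:dd:ee:ff'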
def is_valid_ipv6_cidr(address):
try:
str(netaddr.IPNetwork(address, version=6).cidr)
return True
except Exception:
return False
def get_shortened_ipv6(address):
addr = netaddr.IPAddress(address, version=6)
return str(addr.ipv6())
def get_shortened_ipv6_cidr(address):
net = netaddr.IPNetwork(address, version=6)
return str(net.cidr)
def is_valid_cidr(address):
"""Check if the provided ipv4 or ipv6 address is a valid CIDR address."""
try:
# Validate the correct CIDR Address
netaddr.IPNetwork(address)
except netaddr.core.AddrFormatError:
return False
except UnboundLocalError:
# NOTE(MotoKen): work around bug in netaddr 0.7.5 (see detail in
# https://github.com/drkjam/netaddr/issues/2)
return False
# The prior validation only partially verifies the /xx part,
# so verify it here.
ip_segment = address.split('/')
if (len(ip_segment) <= 1 or
ip_segment[1] == ''):
return False
return True
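# Examples (illustrative): a bare address without a /prefix is rejected by
# the extra check above, even though netaddr itself would accept it.
#
#     >>> is_valid_cidr('10.0.0.0/24')
#     True
#     >>> is_valid_cidr('10.0.0.1')
#     False
#     >>> is_valid_cidr('2001:db8::/32')
#     True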
def get_ip_version(network):
"""Returns the IP version of a network (IPv4 or IPv6).
:raises: AddrFormatError if invalid network.
"""
if netaddr.IPNetwork(network).version == 6:
return "IPv6"
elif netaddr.IPNetwork(network).version == 4:
return "IPv4"
def convert_to_list_dict(lst, label):
"""Convert a value or list into a list of dicts."""
if not lst:
return None
if not isinstance(lst, list):
lst = [lst]
return [{label: x} for x in lst]
def sanitize_hostname(hostname):
"""Return a hostname which conforms to RFC-952 and RFC-1123 specs."""
if isinstance(hostname, six.text_type):
hostname = hostname.encode('latin-1', 'ignore')
hostname = re.sub('[ _]', '-', hostname)
hostname = re.sub('[^\w.-]+', '', hostname)
hostname = hostname.lower()
hostname = hostname.strip('.-')
return hostname
def read_cached_file(filename, cache_info, reload_func=None):
"""Read from a file if it has been modified.
:param cache_info: dictionary to hold opaque cache.
:param reload_func: optional function to be called with data when
file is reloaded due to a modification.
:returns: data from file
"""
mtime = os.path.getmtime(filename)
if not cache_info or mtime != cache_info.get('mtime'):
LOG.debug("Reloading cached file %s" % filename)
with open(filename) as fap:
cache_info['data'] = fap.read()
cache_info['mtime'] = mtime
if reload_func:
reload_func(cache_info['data'])
return cache_info['data']
def file_open(*args, **kwargs):
"""Open a file.
See the built-in file() documentation for more details.
Note: The reason this is kept in a separate module is to easily
be able to provide a stub module that doesn't alter system
state at all (for unit tests)
"""
return file(*args, **kwargs)
def hash_file(file_like_object):
"""Generate a hash for the contents of a file."""
checksum = hashlib.sha1()
for chunk in iter(lambda: file_like_object.read(32768), b''):
checksum.update(chunk)
return checksum.hexdigest()
@contextlib.contextmanager
def temporary_mutation(obj, **kwargs):
"""Temporarily change object attribute.
Temporarily set the attr on a particular object to a given value then
revert when finished.
One use of this is to temporarily set the read_deleted flag on a context
object:
with temporary_mutation(context, read_deleted="yes"):
do_something_that_needed_deleted_objects()
"""
def is_dict_like(thing):
return hasattr(thing, 'has_key')
def get(thing, attr, default):
if is_dict_like(thing):
return thing.get(attr, default)
else:
return getattr(thing, attr, default)
def set_value(thing, attr, val):
if is_dict_like(thing):
thing[attr] = val
else:
setattr(thing, attr, val)
def delete(thing, attr):
if is_dict_like(thing):
del thing[attr]
else:
delattr(thing, attr)
NOT_PRESENT = object()
old_values = {}
for attr, new_value in kwargs.items():
old_values[attr] = get(obj, attr, NOT_PRESENT)
set_value(obj, attr, new_value)
try:
yield
finally:
for attr, old_value in old_values.items():
if old_value is NOT_PRESENT:
delete(obj, attr)
else:
set_value(obj, attr, old_value)
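# Example (illustrative): works for dict-like and attribute-based objects;
# the original value (or the attribute's absence) is restored on exit.
#
#     ctx = {'read_deleted': 'no'}
#     with temporary_mutation(ctx, read_deleted='yes'):
#         assert ctx['read_deleted'] == 'yes'
#     assert ctx['read_deleted'] == 'no'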
@contextlib.contextmanager
def tempdir(**kwargs):
tempfile.tempdir = CONF.tempdir
tmpdir = tempfile.mkdtemp(**kwargs)
try:
yield tmpdir
finally:
try:
shutil.rmtree(tmpdir)
except OSError as e:
LOG.error(_LE('Could not remove tmpdir: %s'), e)
def mkfs(fs, path, label=None):
"""Format a file or block device
:param fs: Filesystem type (examples include 'swap', 'ext3', 'ext4',
'btrfs', etc.)
:param path: Path to file or block device to format
:param label: Volume label to use
"""
if fs == 'swap':
args = ['mkswap']
else:
args = ['mkfs', '-t', fs]
# add -F to force no interactive execute on non-block device.
if fs in ('ext3', 'ext4'):
args.extend(['-F'])
if label:
if fs in ('msdos', 'vfat'):
label_opt = '-n'
else:
label_opt = '-L'
args.extend([label_opt, label])
args.append(path)
try:
execute(*args, run_as_root=True, use_standard_locale=True)
except processutils.ProcessExecutionError as e:
with excutils.save_and_reraise_exception() as ctx:
if os.strerror(errno.ENOENT) in e.stderr:
ctx.reraise = False
LOG.exception(_LE('Failed to make file system. '
'File system %s is not supported.'), fs)
raise exception.FileSystemNotSupported(fs=fs)
else:
LOG.exception(_LE('Failed to create a file system '
'in %(path)s. Error: %(error)s'),
{'path': path, 'error': e})
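# Usage sketch (illustrative; needs root and a real device -- the paths
# below are placeholders). Tracing the argument construction above:
#
#     mkfs('ext4', '/dev/sdb1', label='rootfs')
#     # runs: mkfs -t ext4 -F -L rootfs /dev/sdb1
#     mkfs('vfat', '/dev/sdb2', label='config-2')
#     # runs: mkfs -t vfat -n config-2 /dev/sdb2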
def unlink_without_raise(path):
try:
os.unlink(path)
except OSError as e:
if e.errno == errno.ENOENT:
return
else:
LOG.warn(_LW("Failed to unlink %(path)s, error: %(e)s"),
{'path': path, 'e': e})
def rmtree_without_raise(path):
try:
if os.path.isdir(path):
shutil.rmtree(path)
except OSError as e:
LOG.warn(_LW("Failed to remove dir %(path)s, error: %(e)s"),
{'path': path, 'e': e})
def write_to_file(path, contents):
with open(path, 'w') as f:
f.write(contents)
def create_link_without_raise(source, link):
try:
os.symlink(source, link)
except OSError as e:
if e.errno == errno.EEXIST:
return
else:
LOG.warn(_LW("Failed to create symlink from %(source)s to %(link)s"
", error: %(e)s"),
{'source': source, 'link': link, 'e': e})
def safe_rstrip(value, chars=None):
"""Removes trailing characters from a string if that does not make it empty
:param value: A string value that will be stripped.
:param chars: Characters to remove.
:return: Stripped value.
"""
if not isinstance(value, six.string_types):
LOG.warn(_LW("Failed to remove trailing character. Returning original "
"object. Supplied object is not a string: %s,"), value)
return value
return value.rstrip(chars) or value
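# Examples (illustrative): note the `or value` fallback -- a string that
# would be stripped down to nothing is returned unchanged.
#
#     >>> safe_rstrip('/tmp/dir/', '/')
#     '/tmp/dir'
#     >>> safe_rstrip('///', '/')
#     '///'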
def mount(src, dest, *args):
"""Mounts a device/image file on specified location.
:param src: the path to the source file for mounting
:param dest: the path where it needs to be mounted.
:param args: a tuple containing the arguments to be
passed to mount command.
:raises: processutils.ProcessExecutionError if it failed
to run the process.
"""
args = ('mount', ) + args + (src, dest)
execute(*args, run_as_root=True, check_exit_code=[0])
def umount(loc, *args):
"""Umounts a mounted location.
:param loc: the path to be unmounted.
:param args: a tuple containing the arguments to be
passed to the umount command.
:raises: processutils.ProcessExecutionError if it failed
to run the process.
"""
args = ('umount', ) + args + (loc, )
execute(*args, run_as_root=True, check_exit_code=[0])
def dd(src, dst, *args):
"""Execute dd from src to dst.
:param src: the input file for dd command.
:param dst: the output file for dd command.
:param args: a tuple containing the arguments to be
passed to dd command.
:raises: processutils.ProcessExecutionError if it failed
to run the process.
"""
LOG.debug("Starting dd process.")
execute('dd', 'if=%s' % src, 'of=%s' % dst, *args,
run_as_root=True, check_exit_code=[0])
def is_http_url(url):
url = url.lower()
return url.startswith('http://') or url.startswith('https://')


@@ -1,730 +0,0 @@
# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import gzip
import math
import os
import re
import shutil
import socket
import stat
import tempfile
import time
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_serialization import jsonutils
from oslo_utils import excutils
from oslo_utils import units
import requests
import six
from ironic.common import disk_partitioner
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common import images
from ironic.common import states
from ironic.common import utils
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import image_cache
from ironic.openstack.common import log as logging
deploy_opts = [
cfg.StrOpt('dd_block_size',
default='1M',
help='Block size to use when writing to the nodes disk.'),
cfg.IntOpt('iscsi_verify_attempts',
default=3,
help='Maximum attempts to verify an iSCSI connection is '
'active, sleeping 1 second between attempts.'),
]
CONF = cfg.CONF
CONF.register_opts(deploy_opts, group='deploy')
LOG = logging.getLogger(__name__)
# All functions are called from deploy() directly or indirectly.
# They are split for stub-out.
def discovery(portal_address, portal_port):
"""Do iSCSI discovery on portal."""
utils.execute('iscsiadm',
'-m', 'discovery',
'-t', 'st',
'-p', '%s:%s' % (portal_address, portal_port),
run_as_root=True,
check_exit_code=[0],
attempts=5,
delay_on_retry=True)
def login_iscsi(portal_address, portal_port, target_iqn):
"""Login to an iSCSI target."""
utils.execute('iscsiadm',
'-m', 'node',
'-p', '%s:%s' % (portal_address, portal_port),
'-T', target_iqn,
'--login',
run_as_root=True,
check_exit_code=[0],
attempts=5,
delay_on_retry=True)
# Ensure the login is complete
verify_iscsi_connection(target_iqn)
# force iSCSI initiator to re-read luns
force_iscsi_lun_update(target_iqn)
# ensure file system sees the block device
check_file_system_for_iscsi_device(portal_address,
portal_port,
target_iqn)
def check_file_system_for_iscsi_device(portal_address,
portal_port,
target_iqn):
"""Ensure the file system sees the iSCSI block device."""
check_dir = "/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-1" % (portal_address,
portal_port,
target_iqn)
total_checks = CONF.deploy.iscsi_verify_attempts
for attempt in range(total_checks):
if os.path.exists(check_dir):
break
time.sleep(1)
LOG.debug("iSCSI connection not seen by file system. Rechecking. "
"Attempt %(attempt)d out of %(total)d",
{"attempt": attempt + 1,
"total": total_checks})
else:
msg = _("iSCSI connection was not seen by the file system after "
"attempting to verify %d times.") % total_checks
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
def verify_iscsi_connection(target_iqn):
"""Verify the iSCSI connection."""
LOG.debug("Checking for iSCSI target to become active.")
for attempt in range(CONF.deploy.iscsi_verify_attempts):
out, _err = utils.execute('iscsiadm',
'-m', 'node',
'-S',
run_as_root=True,
check_exit_code=[0])
if target_iqn in out:
break
time.sleep(1)
LOG.debug("iSCSI connection not active. Rechecking. Attempt "
"%(attempt)d out of %(total)d", {"attempt": attempt + 1,
"total": CONF.deploy.iscsi_verify_attempts})
else:
msg = _("iSCSI connection did not become active after attempting to "
"verify %d times.") % CONF.deploy.iscsi_verify_attempts
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
def force_iscsi_lun_update(target_iqn):
"""Force the iSCSI initiator to re-read LUNs."""
LOG.debug("Re-reading iSCSI luns.")
utils.execute('iscsiadm',
'-m', 'node',
'-T', target_iqn,
'-R',
run_as_root=True,
check_exit_code=[0])
def logout_iscsi(portal_address, portal_port, target_iqn):
"""Logout from an iSCSI target."""
utils.execute('iscsiadm',
'-m', 'node',
'-p', '%s:%s' % (portal_address, portal_port),
'-T', target_iqn,
'--logout',
run_as_root=True,
check_exit_code=[0],
attempts=5,
delay_on_retry=True)
def delete_iscsi(portal_address, portal_port, target_iqn):
"""Delete the iSCSI target."""
# Retry delete until it succeeds (exit code 0) or until there is
# no longer a target to delete (exit code 21).
utils.execute('iscsiadm',
'-m', 'node',
'-p', '%s:%s' % (portal_address, portal_port),
'-T', target_iqn,
'-o', 'delete',
run_as_root=True,
check_exit_code=[0, 21],
attempts=5,
delay_on_retry=True)
def make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
configdrive_mb, commit=True):
"""Partition the disk device.
Create partitions for root, swap, ephemeral and configdrive on a
disk device.
:param dev: The device path.
:param root_mb: Size of the root partition in mebibytes (MiB).
:param swap_mb: Size of the swap partition in mebibytes (MiB). If 0,
no partition will be created.
:param ephemeral_mb: Size of the ephemeral partition in mebibytes (MiB).
If 0, no partition will be created.
:param configdrive_mb: Size of the configdrive partition in
mebibytes (MiB). If 0, no partition will be created.
:param commit: True/False. Default for this setting is True. If False
partitions will not be written to disk.
:returns: A dictionary containing the partition type as Key and partition
path as Value for the partitions created by this method.
"""
LOG.debug("Starting to partition the disk device: %(dev)s",
{'dev': dev})
part_template = dev + '-part%d'
part_dict = {}
dp = disk_partitioner.DiskPartitioner(dev)
if ephemeral_mb:
LOG.debug("Add ephemeral partition (%(size)d MB) to device: %(dev)s",
{'dev': dev, 'size': ephemeral_mb})
part_num = dp.add_partition(ephemeral_mb)
part_dict['ephemeral'] = part_template % part_num
if swap_mb:
LOG.debug("Add Swap partition (%(size)d MB) to device: %(dev)s",
{'dev': dev, 'size': swap_mb})
part_num = dp.add_partition(swap_mb, fs_type='linux-swap')
part_dict['swap'] = part_template % part_num
if configdrive_mb:
LOG.debug("Add config drive partition (%(size)d MB) to device: "
"%(dev)s", {'dev': dev, 'size': configdrive_mb})
part_num = dp.add_partition(configdrive_mb)
part_dict['configdrive'] = part_template % part_num
# NOTE(lucasagomes): Make the root partition the last partition. This
# enables tools like cloud-init's growroot utility to expand the root
# partition until the end of the disk.
LOG.debug("Add root partition (%(size)d MB) to device: %(dev)s",
{'dev': dev, 'size': root_mb})
part_num = dp.add_partition(root_mb)
part_dict['root'] = part_template % part_num
if commit:
# write to the disk
dp.commit()
return part_dict
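# Illustrative result (a sketch; the device path is a placeholder): with
# every partition requested, names follow the dev + '-part%d' template in
# creation order (ephemeral, swap, configdrive, then root last):
#
#     make_partitions('/dev/sda', root_mb=4096, swap_mb=512,
#                     ephemeral_mb=1024, configdrive_mb=64)
#     # -> {'ephemeral': '/dev/sda-part1', 'swap': '/dev/sda-part2',
#     #     'configdrive': '/dev/sda-part3', 'root': '/dev/sda-part4'}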
def is_block_device(dev):
"""Check whether a device is a block device or not."""
attempts = CONF.deploy.iscsi_verify_attempts
for attempt in range(attempts):
try:
s = os.stat(dev)
except OSError as e:
LOG.debug("Unable to stat device %(dev)s. Attempt %(attempt)d "
"out of %(total)d. Error: %(err)s", {"dev": dev,
"attempt": attempt + 1, "total": attempts, "err": e})
time.sleep(1)
else:
return stat.S_ISBLK(s.st_mode)
msg = _("Unable to stat device %(dev)s after attempting to verify "
"%(attempts)d times.") % {'dev': dev, 'attempts': attempts}
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
def dd(src, dst):
"""Execute dd from src to dst."""
utils.dd(src, dst, 'bs=%s' % CONF.deploy.dd_block_size, 'oflag=direct')
def populate_image(src, dst):
data = images.qemu_img_info(src)
if data.file_format == 'raw':
dd(src, dst)
else:
images.convert_image(src, dst, 'raw', True)
def mkswap(dev, label='swap1'):
"""Execute mkswap on a device."""
utils.mkfs('swap', dev, label)
def mkfs_ephemeral(dev, ephemeral_format, label="ephemeral0"):
utils.mkfs(ephemeral_format, dev, label)
def block_uuid(dev):
"""Get UUID of a block device."""
out, _err = utils.execute('blkid', '-s', 'UUID', '-o', 'value', dev,
run_as_root=True,
check_exit_code=[0])
return out.strip()
def switch_pxe_config(path, root_uuid, boot_mode):
"""Switch a pxe config from deployment mode to service mode."""
with open(path) as f:
lines = f.readlines()
root = 'UUID=%s' % root_uuid
rre = re.compile(r'\{\{ ROOT \}\}')
if boot_mode == 'uefi':
dre = re.compile('^default=.*$')
boot_line = 'default=boot'
else:
pxe_cmd = 'goto' if CONF.pxe.ipxe_enabled else 'default'
dre = re.compile('^%s .*$' % pxe_cmd)
boot_line = '%s boot' % pxe_cmd
with open(path, 'w') as f:
for line in lines:
line = rre.sub(root, line)
line = dre.sub(boot_line, line)
f.write(line)
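# Illustrative effect (a sketch), PXELINUX case with ipxe disabled. A
# config containing:
#
#     default deploy
#     append initrd=deploy_ramdisk root={{ ROOT }}
#
# is rewritten in place, using the root_uuid passed in, to:
#
#     default boot
#     append initrd=deploy_ramdisk root=UUID=1234-5678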
def notify(address, port):
"""Notify a node that it is ready to reboot."""
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect((address, port))
s.send('done')
finally:
s.close()
def get_dev(address, port, iqn, lun):
"""Returns a device path for given parameters."""
dev = ("/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-%s"
% (address, port, iqn, lun))
return dev
def get_image_mb(image_path, virtual_size=True):
"""Get the size of an image in megabytes."""
mb = 1024 * 1024
if not virtual_size:
image_byte = os.path.getsize(image_path)
else:
image_byte = images.converted_size(image_path)
# round up size to MB
image_mb = int((image_byte + mb - 1) / mb)
return image_mb
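# Worked example (illustrative): `(image_byte + mb - 1) / mb` is a ceiling
# division, so an image even one byte over a MiB boundary rounds up:
#
#     >>> mb = 1024 * 1024
#     >>> int((5 * mb + 1 + mb - 1) / mb)
#     6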
def get_dev_block_size(dev):
"""Get the device size in 512 byte sectors."""
block_sz, cmderr = utils.execute('blockdev', '--getsz', dev,
run_as_root=True, check_exit_code=[0])
return int(block_sz)
def destroy_disk_metadata(dev, node_uuid):
"""Destroy metadata structures on node's disk.
Ensure that node's disk appears to be blank without zeroing the entire
drive. To do this we will zero:
- the first 18KiB to clear MBR / GPT data
- the last 18KiB to clear GPT and other metadata like: LVM, veritas,
MDADM, DMRAID, ...
"""
# NOTE(NobodyCam): This is needed to work around bug:
# https://bugs.launchpad.net/ironic/+bug/1317647
LOG.debug("Start destroy disk metadata for node %(node)s.",
{'node': node_uuid})
try:
utils.execute('dd', 'if=/dev/zero', 'of=%s' % dev,
'bs=512', 'count=36', run_as_root=True,
check_exit_code=[0])
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to erase beginning of disk for node "
"%(node)s. Command: %(command)s. Error: %(error)s."),
{'node': node_uuid,
'command': err.cmd,
'error': err.stderr})
# now wipe the end of the disk.
# get end of disk seek value
try:
block_sz = get_dev_block_size(dev)
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to get disk block count for node %(node)s. "
"Command: %(command)s. Error: %(error)s."),
{'node': node_uuid,
'command': err.cmd,
'error': err.stderr})
else:
seek_value = block_sz - 36
try:
utils.execute('dd', 'if=/dev/zero', 'of=%s' % dev,
'bs=512', 'count=36', 'seek=%d' % seek_value,
run_as_root=True, check_exit_code=[0])
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to erase the end of the disk on node "
"%(node)s. Command: %(command)s. "
"Error: %(error)s."),
{'node': node_uuid,
'command': err.cmd,
'error': err.stderr})
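# Arithmetic check (illustrative): bs=512 with count=36 zeroes
# 36 * 512 = 18432 bytes = 18 KiB, matching the "first 18KiB" and
# "last 18KiB" described in the docstring; the tail wipe seeks to
# (device size in sectors - 36) before writing.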
def _get_configdrive(configdrive, node_uuid):
"""Get the information about size and location of the configdrive.
:param configdrive: Base64 encoded Gzipped configdrive content or
configdrive HTTP URL.
:param node_uuid: Node's uuid. Used for logging.
:raises: InstanceDeployFailure if it can't download or decode the
config drive.
:returns: A tuple with the size in MiB and path to the uncompressed
configdrive file.
"""
# Check if the configdrive option is an HTTP URL or the content directly
is_url = utils.is_http_url(configdrive)
if is_url:
try:
data = requests.get(configdrive).content
except requests.exceptions.RequestException as e:
raise exception.InstanceDeployFailure(
_("Can't download the configdrive content for node %(node)s "
"from '%(url)s'. Reason: %(reason)s") %
{'node': node_uuid, 'url': configdrive, 'reason': e})
else:
data = configdrive
try:
data = six.StringIO(base64.b64decode(data))
except TypeError:
error_msg = (_('Config drive for node %s is not base64 encoded '
'or the content is malformed.') % node_uuid)
if is_url:
error_msg += _(' Downloaded from "%s".') % configdrive
raise exception.InstanceDeployFailure(error_msg)
configdrive_file = tempfile.NamedTemporaryFile(delete=False,
prefix='configdrive')
configdrive_mb = 0
with gzip.GzipFile('configdrive', 'rb', fileobj=data) as gunzipped:
try:
shutil.copyfileobj(gunzipped, configdrive_file)
except EnvironmentError as e:
# Delete the created file
utils.unlink_without_raise(configdrive_file.name)
raise exception.InstanceDeployFailure(
_('Encountered error while decompressing and writing '
'config drive for node %(node)s. Error: %(exc)s') %
{'node': node_uuid, 'exc': e})
else:
# Get the file size and convert to MiB
configdrive_file.seek(0, os.SEEK_END)
bytes_ = configdrive_file.tell()
configdrive_mb = int(math.ceil(float(bytes_) / units.Mi))
finally:
configdrive_file.close()
return (configdrive_mb, configdrive_file.name)
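# Illustrative input construction (a sketch, Python 2 era like the module
# itself): the non-URL branch expects gzipped content that was then
# base64-encoded.
#
#     import base64, gzip, StringIO
#     buf = StringIO.StringIO()
#     with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
#         gz.write('pretend configdrive image contents')
#     encoded = base64.b64encode(buf.getvalue())
#     size_mb, path = _get_configdrive(encoded, 'fake-node-uuid')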
def work_on_disk(dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format,
image_path, node_uuid, preserve_ephemeral=False,
configdrive=None):
"""Create partitions and copy an image to the root partition.
:param dev: Path for the device to work on.
:param root_mb: Size of the root partition in megabytes.
:param swap_mb: Size of the swap partition in megabytes.
:param ephemeral_mb: Size of the ephemeral partition in megabytes. If 0,
no ephemeral partition will be created.
:param ephemeral_format: The type of file system to format the ephemeral
partition.
:param image_path: Path for the instance's disk image.
:param node_uuid: node's uuid. Used for logging.
:param preserve_ephemeral: If True, no filesystem is written to the
ephemeral block device, preserving whatever content it had (if the
partition table has not changed).
:param configdrive: Optional. Base64 encoded Gzipped configdrive content
or configdrive HTTP URL.
:returns: the UUID of the root partition.
"""
if not is_block_device(dev):
raise exception.InstanceDeployFailure(
_("Parent device '%s' not found") % dev)
# the only way for preserve_ephemeral to be set to true is if we are
# rebuilding an instance with --preserve_ephemeral.
commit = not preserve_ephemeral
# Now, if we are committing the changes to disk, clean first.
if commit:
destroy_disk_metadata(dev, node_uuid)
try:
# If requested, get the configdrive file and determine the size
# of the configdrive partition
configdrive_mb = 0
configdrive_file = None
if configdrive:
configdrive_mb, configdrive_file = _get_configdrive(configdrive,
node_uuid)
part_dict = make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
configdrive_mb, commit=commit)
ephemeral_part = part_dict.get('ephemeral')
swap_part = part_dict.get('swap')
configdrive_part = part_dict.get('configdrive')
root_part = part_dict.get('root')
if not is_block_device(root_part):
raise exception.InstanceDeployFailure(
_("Root device '%s' not found") % root_part)
for part in ('swap', 'ephemeral', 'configdrive'):
part_device = part_dict.get(part)
LOG.debug("Checking for %(part)s device (%(dev)s) on node "
"%(node)s.", {'part': part, 'dev': part_device,
'node': node_uuid})
if part_device and not is_block_device(part_device):
raise exception.InstanceDeployFailure(
_("'%(partition)s' device '%(part_device)s' not found") %
{'partition': part, 'part_device': part_device})
if configdrive_part:
# Copy the configdrive content to the configdrive partition
dd(configdrive_file, configdrive_part)
finally:
# If the configdrive was requested make sure we delete the file
# after copying the content to the partition
if configdrive_file:
utils.unlink_without_raise(configdrive_file)
populate_image(image_path, root_part)
if swap_part:
mkswap(swap_part)
if ephemeral_part and not preserve_ephemeral:
mkfs_ephemeral(ephemeral_part, ephemeral_format)
try:
root_uuid = block_uuid(root_part)
except processutils.ProcessExecutionError:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to detect root device UUID."))
return root_uuid
def deploy(address, port, iqn, lun, image_path,
root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid,
preserve_ephemeral=False, configdrive=None):
"""All-in-one function to deploy a node.
:param address: The iSCSI IP address.
:param port: The iSCSI port number.
:param iqn: The iSCSI qualified name.
:param lun: The iSCSI logical unit number.
:param image_path: Path for the instance's disk image.
:param root_mb: Size of the root partition in megabytes.
:param swap_mb: Size of the swap partition in megabytes.
:param ephemeral_mb: Size of the ephemeral partition in megabytes. If 0,
no ephemeral partition will be created.
:param ephemeral_format: The type of file system to format the ephemeral
partition.
:param node_uuid: node's uuid. Used for logging.
:param preserve_ephemeral: If True, no filesystem is written to the
ephemeral block device, preserving whatever content it had (if the
partition table has not changed).
:param configdrive: Optional. Base64 encoded Gzipped configdrive content
or configdrive HTTP URL.
:returns: the UUID of the root partition.
"""
dev = get_dev(address, port, iqn, lun)
image_mb = get_image_mb(image_path)
if image_mb > root_mb:
root_mb = image_mb
discovery(address, port)
login_iscsi(address, port, iqn)
try:
root_uuid = work_on_disk(dev, root_mb, swap_mb, ephemeral_mb,
ephemeral_format, image_path, node_uuid,
preserve_ephemeral=preserve_ephemeral,
configdrive=configdrive)
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Deploy to address %s failed."), address)
LOG.error(_LE("Command: %s"), err.cmd)
LOG.error(_LE("StdOut: %r"), err.stdout)
LOG.error(_LE("StdErr: %r"), err.stderr)
except exception.InstanceDeployFailure as e:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Deploy to address %s failed."), address)
LOG.error(e)
finally:
logout_iscsi(address, port, iqn)
delete_iscsi(address, port, iqn)
return root_uuid
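# Illustrative call (a sketch; every value below is a placeholder):
#
#     root_uuid = deploy('192.0.2.5', 3260,
#                        'iqn.2008-10.org.openstack:node-1', 1,
#                        '/var/lib/ironic/images/node-1/disk',
#                        root_mb=4096, swap_mb=512, ephemeral_mb=0,
#                        ephemeral_format=None, node_uuid='fake-uuid')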
def notify_deploy_complete(address):
"""Notifies the completion of deployment to the baremetal node.
:param address: The IP address of the node.
"""
# Ensure the node has started netcat on the port after POSTing the request.
time.sleep(3)
notify(address, 10000)
def check_for_missing_params(info_dict, error_msg, param_prefix=''):
"""Check for empty params in the provided dictionary.
:param info_dict: The dictionary to inspect.
:param error_msg: The error message to prefix before printing the
information about missing parameters.
:param param_prefix: Add this prefix to each parameter for error messages
:raises: MissingParameterValue, if one or more parameters are
empty in the provided dictionary.
"""
missing_info = []
for label, value in info_dict.items():
if not value:
missing_info.append(param_prefix + label)
if missing_info:
exc_msg = _("%(error_msg)s. Missing are: %(missing_info)s")
raise exception.MissingParameterValue(exc_msg %
{'error_msg': error_msg, 'missing_info': missing_info})
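# Usage sketch (illustrative; the parameter names are made up):
#
#     check_for_missing_params({'address': '192.0.2.1', 'password': None},
#                              'Missing iLO credentials',
#                              param_prefix='ilo_')
#     # raises MissingParameterValue with:
#     #   "Missing iLO credentials. Missing are: ['ilo_password']"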
def fetch_images(ctx, cache, images_info, force_raw=True):
"""Check for available disk space and fetch images using ImageCache.
:param ctx: context
:param cache: ImageCache instance to use for fetching
:param images_info: list of tuples (image href, destination path)
:param force_raw: boolean value, whether to convert the image to raw
format
:raises: InstanceDeployFailure if unable to find enough disk space
"""
try:
image_cache.clean_up_caches(ctx, cache.master_dir, images_info)
except exception.InsufficientDiskSpace as e:
raise exception.InstanceDeployFailure(reason=e)
# NOTE(dtantsur): This code can suffer from race condition,
# if disk space is used between the check and actual download.
# This is probably unavoidable, as we can't control other
# (probably unrelated) processes
for href, path in images_info:
cache.fetch_image(href, path, ctx=ctx, force_raw=force_raw)
def set_failed_state(task, msg):
"""Sets the deploy status as failed with relevant messages.
This method sets the deployment as failed with the given message.
It sets node's provision_state to DEPLOYFAIL and updates last_error
with the given error message. It also powers off the baremetal node.
:param task: a TaskManager instance containing the node to act on.
:param msg: the message to set in last_error of the node.
:raises: InvalidState if the event is not allowed by the associated
state machine.
"""
task.process_event('fail')
node = task.node
try:
manager_utils.node_power_action(task, states.POWER_OFF)
except Exception:
msg2 = (_LE('Node %s failed to power off while handling deploy '
'failure. This may be a serious condition. Node '
'should be removed from Ironic or put in maintenance '
'mode until the problem is resolved.') % node.uuid)
LOG.exception(msg2)
finally:
# NOTE(deva): node_power_action() erases node.last_error
# so we need to set it again here.
node.last_error = msg
node.save()
def get_single_nic_with_vif_port_id(task):
"""Returns the MAC address of a port which has a VIF port id.
:param task: a TaskManager instance containing the ports to act on.
:returns: MAC address of the port connected to deployment network.
None if it cannot find any port with vif id.
"""
for port in task.ports:
if port.extra.get('vif_port_id'):
return port.address
def parse_instance_info_capabilities(node):
"""Parse the instance_info capabilities.
One way of having these capabilities set is via Nova, where the
capabilities are defined in the Flavor extra_spec and passed to
Ironic by the Nova Ironic driver.
NOTE: Although our API fully supports JSON fields, to maintain
backward compatibility with Juno, the Nova Ironic driver sends
it as a string.
:param node: a single Node.
:raises: InvalidParameterValue if the capabilities string is not a
dictionary or is malformed.
:returns: A dictionary with the capabilities if found, otherwise an
empty dictionary.
"""
def parse_error():
error_msg = (_('Error parsing capabilities from Node %s instance_info '
'field. A dictionary or a "jsonified" dictionary is '
'expected.') % node.uuid)
raise exception.InvalidParameterValue(error_msg)
capabilities = node.instance_info.get('capabilities', {})
if isinstance(capabilities, six.string_types):
try:
capabilities = jsonutils.loads(capabilities)
except (ValueError, TypeError):
parse_error()
if not isinstance(capabilities, dict):
parse_error()
return capabilities
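# Examples (illustrative; 'node' stands for any Node object): both a dict
# and its "jsonified" string form are accepted, anything else raises.
#
#     node.instance_info['capabilities'] = '{"boot_mode": "uefi"}'
#     parse_instance_info_capabilities(node)  # -> {u'boot_mode': u'uefi'}
#     node.instance_info['capabilities'] = 'not json'
#     parse_instance_info_capabilities(node)  # raises InvalidParameterValue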

ironic_lib/__init__.py (new file, 22 lines)

@@ -0,0 +1,22 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This ensures the ironic_lib namespace is defined
try:
import pkg_resources
pkg_resources.declare_namespace(__name__)
except ImportError:
import pkgutil
__path__ = pkgutil.extend_path(__path__, __name__)

ironic_lib/disk_partitioner.py

@@ -13,17 +13,19 @@
# License for the specific language governing permissions and limitations
# under the License.
+import logging
-import re
from oslo_concurrency import processutils
from oslo_config import cfg
-from ironic.common import exception
-from ironic.common.i18n import _
-from ironic.common.i18n import _LW
-from ironic.common import utils
-from ironic.openstack.common import log as logging
-from ironic.openstack.common import loopingcall
+from ironic_lib.openstack.common._i18n import _
+from ironic_lib.openstack.common._i18n import _LW
+from ironic_lib.openstack.common import loopingcall
+from ironic_lib import exception
+from ironic_lib import utils
opts = [
cfg.IntOpt('check_device_interval',
@@ -38,6 +40,9 @@ opts = [
'not accessed by another process. If the device is still '
'busy after that, the disk partitioning will be treated as'
' having failed.'),
+cfg.StrOpt('dd_block_size',
+default='1M',
+help='Block size to use when writing to the nodes disk.'),
]
CONF = cfg.CONF
@@ -157,8 +162,8 @@ class DiskPartitioner(object):
max_retries = CONF.disk_partitioner.check_device_max_retries
timer = loopingcall.FixedIntervalLoopingCall(
-self._wait_for_disk_to_become_available,
-retries, max_retries, pids, fuser_err)
+self._wait_for_disk_to_become_available,
+retries, max_retries, pids, fuser_err)
timer.start(interval=interval).wait()
if retries[0] > max_retries:
@@ -174,36 +179,3 @@
'exited with "%(fuser_err)s". Time out waiting for '
'completion.')
% {'device': self._device, 'fuser_err': fuser_err[0]})
-_PARTED_PRINT_RE = re.compile(r"^\d+:([\d\.]+)MiB:"
-"([\d\.]+)MiB:([\d\.]+)MiB:(\w*)::(\w*)")
-def list_partitions(device):
-"""Get partitions information from given device.
-:param device: The device path.
-:returns: list of dictionaries (one per partition) with keys:
-start, end, size (in MiB), filesystem, flags
-"""
-output = utils.execute(
-'parted', '-s', '-m', device, 'unit', 'MiB', 'print',
-use_standard_locale=True)[0]
-lines = [line for line in output.split('\n') if line.strip()][2:]
-# Example of line: 1:1.00MiB:501MiB:500MiB:ext4::boot
-fields = ('start', 'end', 'size', 'filesystem', 'flags')
-result = []
-for line in lines:
-match = _PARTED_PRINT_RE.match(line)
-if match is None:
-LOG.warn(_LW("Partition information from parted for device "
-"%(device)s does not match "
-"expected format: %(line)s"),
-dict(device=device, line=line))
-continue
-# Cast int fields to ints (some are floats and we round them down)
-groups = [int(float(x)) if i < 3 else x
-for i, x in enumerate(match.groups())]
-result.append(dict(zip(fields, groups)))
-return result

ironic_lib/disk_utils.py (new file, 429 lines)

@@ -0,0 +1,429 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import gzip
import logging
import math
import os
import re
import requests
import shutil
import six
import stat
import tempfile
import time
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import excutils
from oslo_utils import units
from ironic_lib.openstack.common._i18n import _
from ironic_lib.openstack.common._i18n import _LE
from ironic_lib.openstack.common._i18n import _LW
from ironic_lib.openstack.common import imageutils
from ironic_lib import disk_partitioner
from ironic_lib import exception
from ironic_lib import utils
opts = [
cfg.StrOpt('dd_block_size',
default='1M',
help='Block size to use when writing to the nodes disk.'),
cfg.IntOpt('iscsi_verify_attempts',
default=3,
help='Maximum attempts to verify an iSCSI connection is '
'active, sleeping 1 second between attempts.'),
]
CONF = cfg.CONF
CONF.register_opts(opts, group='deploy')
LOG = logging.getLogger(__name__)
_PARTED_PRINT_RE = re.compile(r"^\d+:([\d\.]+)MiB:"
"([\d\.]+)MiB:([\d\.]+)MiB:(\w*)::(\w*)")
def list_partitions(device):
"""Get partitions information from given device.
:param device: The device path.
:returns: list of dictionaries (one per partition) with keys:
start, end, size (in MiB), filesystem, flags
"""
output = utils.execute(
'parted', '-s', '-m', device, 'unit', 'MiB', 'print',
use_standard_locale=True)[0]
lines = [line for line in output.split('\n') if line.strip()][2:]
# Example of line: 1:1.00MiB:501MiB:500MiB:ext4::boot
fields = ('start', 'end', 'size', 'filesystem', 'flags')
result = []
for line in lines:
match = _PARTED_PRINT_RE.match(line)
if match is None:
LOG.warn(_LW("Partition information from parted for device "
"%(device)s does not match "
"expected format: %(line)s"),
dict(device=device, line=line))
continue
# Cast int fields to ints (some are floats and we round them down)
groups = [int(float(x)) if i < 3 else x
for i, x in enumerate(match.groups())]
result.append(dict(zip(fields, groups)))
return result
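# Usage sketch (the device path is hypothetical): for the example parted
# line in the comment above, list_partitions('/dev/fake') would return
# [{'start': 1, 'end': 501, 'size': 500,
#   'filesystem': 'ext4', 'flags': 'boot'}],
# with the first three fields cast down to ints.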
def make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
configdrive_mb, commit=True):
"""Partition the disk device.
Create partitions for root, swap, ephemeral and configdrive on a
disk device.
:param dev: Path to the disk device.
:param root_mb: Size of the root partition in mebibytes (MiB).
:param swap_mb: Size of the swap partition in mebibytes (MiB). If 0,
no partition will be created.
:param ephemeral_mb: Size of the ephemeral partition in mebibytes (MiB).
If 0, no partition will be created.
:param configdrive_mb: Size of the configdrive partition in
mebibytes (MiB). If 0, no partition will be created.
:param commit: True/False. Defaults to True. If False, the partition
table will not actually be written to disk.
:returns: A dictionary mapping partition type (key) to partition
device path (value) for the partitions created by this method.
"""
LOG.debug("Starting to partition the disk device: %(dev)s",
{'dev': dev})
part_template = dev + '-part%d'
part_dict = {}
dp = disk_partitioner.DiskPartitioner(dev)
if ephemeral_mb:
LOG.debug("Add ephemeral partition (%(size)d MB) to device: %(dev)s",
{'dev': dev, 'size': ephemeral_mb})
part_num = dp.add_partition(ephemeral_mb)
part_dict['ephemeral'] = part_template % part_num
if swap_mb:
LOG.debug("Add Swap partition (%(size)d MB) to device: %(dev)s",
{'dev': dev, 'size': swap_mb})
part_num = dp.add_partition(swap_mb, fs_type='linux-swap')
part_dict['swap'] = part_template % part_num
if configdrive_mb:
LOG.debug("Add config drive partition (%(size)d MB) to device: "
"%(dev)s", {'dev': dev, 'size': configdrive_mb})
part_num = dp.add_partition(configdrive_mb)
part_dict['configdrive'] = part_template % part_num
# NOTE(lucasagomes): Make the root partition the last partition. This
# enables tools like cloud-init's growroot utility to expand the root
# partition until the end of the disk.
LOG.debug("Add root partition (%(size)d MB) to device: %(dev)s",
{'dev': dev, 'size': root_mb})
part_num = dp.add_partition(root_mb)
part_dict['root'] = part_template % part_num
if commit:
# write to the disk
dp.commit()
return part_dict
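# Usage sketch ('/dev/fake' is a hypothetical device): with a 512 MiB swap
# and a 4096 MiB root, swap is created first and root last, so:
#     make_partitions('/dev/fake', root_mb=4096, swap_mb=512,
#                     ephemeral_mb=0, configdrive_mb=0, commit=False)
#     # -> {'swap': '/dev/fake-part1', 'root': '/dev/fake-part2'}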
def dd(src, dst):
"""Execute dd from src to dst."""
utils.dd(src, dst, 'bs=%s' % CONF.deploy.dd_block_size, 'oflag=direct')
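# Sketch: with the default dd_block_size of '1M', dd(src, dst) runs
# (paths hypothetical):
#     dd if=/tmp/image.raw of=/dev/fake bs=1M oflag=direct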
def qemu_img_info(path):
"""Return an object containing the parsed output from qemu-img info."""
if not os.path.exists(path):
return imageutils.QemuImgInfo()
out, err = utils.execute('env', 'LC_ALL=C', 'LANG=C',
'qemu-img', 'info', path)
return imageutils.QemuImgInfo(out)
def get_image_mb(image_path, virtual_size=True):
"""Get size of an image in Megabyte."""
mb = 1024 * 1024
if not virtual_size:
image_byte = os.path.getsize(image_path)
else:
data = qemu_img_info(image_path)
image_byte = data.virtual_size
# round up size to MB
image_mb = int((image_byte + mb - 1) / mb)
return image_mb
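# Worked example of the ceiling division: any nonzero size rounds up to
# the next whole MiB.
#     mb = 1024 * 1024
#     int((1 + mb - 1) / mb)       # 1 byte         -> 1
#     int((mb + mb - 1) / mb)      # exactly 1 MiB  -> 1
#     int((mb + 1 + mb - 1) / mb)  # 1 MiB + 1 byte -> 2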
def convert_image(source, dest, out_format, run_as_root=False):
"""Convert image to other format."""
cmd = ('qemu-img', 'convert', '-O', out_format, source, dest)
utils.execute(*cmd, run_as_root=run_as_root)
def populate_image(src, dst):
data = qemu_img_info(src)
if data.file_format == 'raw':
dd(src, dst)
else:
convert_image(src, dst, 'raw', True)
def is_block_device(dev):
"""Check whether a device is block or not."""
attempts = CONF.deploy.iscsi_verify_attempts
for attempt in range(attempts):
try:
s = os.stat(dev)
except OSError as e:
LOG.debug("Unable to stat device %(dev)s. Attempt %(attempt)d "
"out of %(total)d. Error: %(err)s", {"dev": dev,
"attempt": attempt + 1, "total": attempts, "err": e})
time.sleep(1)
else:
return stat.S_ISBLK(s.st_mode)
msg = _("Unable to stat device %(dev)s after attempting to verify "
"%(attempts)d times.") % {'dev': dev, 'attempts': attempts}
LOG.error(msg)
raise exception.InstanceDeployFailure(msg)
def mkswap(dev, label='swap1'):
"""Execute mkswap on a device."""
utils.mkfs('swap', dev, label)
def mkfs_ephemeral(dev, ephemeral_format, label="ephemeral0"):
utils.mkfs(ephemeral_format, dev, label)
def block_uuid(dev):
"""Get UUID of a block device."""
out, _err = utils.execute('blkid', '-s', 'UUID', '-o', 'value', dev,
run_as_root=True,
check_exit_code=[0])
return out.strip()
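# Usage sketch (partition path hypothetical): block_uuid('/dev/fake-part2')
# runs "blkid -s UUID -o value /dev/fake-part2" and returns the stripped
# stdout, i.e. the filesystem UUID string.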
def get_dev_block_size(dev):
"""Get the device size in 512 byte sectors."""
block_sz, cmderr = utils.execute('blockdev', '--getsz', dev,
run_as_root=True, check_exit_code=[0])
return int(block_sz)
def destroy_disk_metadata(dev, node_uuid):
"""Destroy metadata structures on node's disk.
Ensure that node's disk appears to be blank without zeroing the entire
drive. To do this we will zero the first 18KiB to clear MBR / GPT data
and the last 18KiB to clear GPT and other metadata like LVM, veritas,
MDADM, DMRAID, etc.
"""
# NOTE(NobodyCam): This is needed to work around bug:
# https://bugs.launchpad.net/ironic/+bug/1317647
LOG.debug("Start destroy disk metadata for node %(node)s.",
{'node': node_uuid})
try:
utils.execute('dd', 'if=/dev/zero', 'of=%s' % dev,
'bs=512', 'count=36', run_as_root=True,
check_exit_code=[0])
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to erase beginning of disk for node "
"%(node)s. Command: %(command)s. Error: %(error)s."),
{'node': node_uuid,
'command': err.cmd,
'error': err.stderr})
# now wipe the end of the disk.
# get end of disk seek value
try:
block_sz = get_dev_block_size(dev)
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to get disk block count for node %(node)s. "
"Command: %(command)s. Error: %(error)s."),
{'node': node_uuid,
'command': err.cmd,
'error': err.stderr})
else:
seek_value = block_sz - 36
try:
utils.execute('dd', 'if=/dev/zero', 'of=%s' % dev,
'bs=512', 'count=36', 'seek=%d' % seek_value,
run_as_root=True, check_exit_code=[0])
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to erase the end of the disk on node "
"%(node)s. Command: %(command)s. "
"Error: %(error)s."),
{'node': node_uuid,
'command': err.cmd,
'error': err.stderr})
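# Worked example (a hypothetical 1 GiB '/dev/fake'): blockdev --getsz
# reports 2097152 512-byte sectors, so seek_value = 2097152 - 36 and the
# two invocations become:
#     dd if=/dev/zero of=/dev/fake bs=512 count=36
#     dd if=/dev/zero of=/dev/fake bs=512 count=36 seek=2097116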
def _get_configdrive(configdrive, node_uuid):
"""Get the information about size and location of the configdrive.
:param configdrive: Base64 encoded Gzipped configdrive content or
configdrive HTTP URL.
:param node_uuid: Node's uuid. Used for logging.
:raises: InstanceDeployFailure if it can't download or decode the
config drive.
:returns: A tuple with the size in MiB and path to the uncompressed
configdrive file.
"""
# Check if the configdrive option is an HTTP URL or the content directly
is_url = utils.is_http_url(configdrive)
if is_url:
try:
data = requests.get(configdrive).content
except requests.exceptions.RequestException as e:
raise exception.InstanceDeployFailure(
_("Can't download the configdrive content for node %(node)s "
"from '%(url)s'. Reason: %(reason)s") %
{'node': node_uuid, 'url': configdrive, 'reason': e})
else:
data = configdrive
try:
data = six.StringIO(base64.b64decode(data))
except TypeError:
error_msg = (_('Config drive for node %s is not base64 encoded '
'or the content is malformed.') % node_uuid)
if is_url:
error_msg += _(' Downloaded from "%s".') % configdrive
raise exception.InstanceDeployFailure(error_msg)
configdrive_file = tempfile.NamedTemporaryFile(delete=False,
prefix='configdrive')
configdrive_mb = 0
with gzip.GzipFile('configdrive', 'rb', fileobj=data) as gunzipped:
try:
shutil.copyfileobj(gunzipped, configdrive_file)
except EnvironmentError as e:
# Delete the created file
utils.unlink_without_raise(configdrive_file.name)
raise exception.InstanceDeployFailure(
_('Encountered error while decompressing and writing '
'config drive for node %(node)s. Error: %(exc)s') %
{'node': node_uuid, 'exc': e})
else:
# Get the file size and convert to MiB
configdrive_file.seek(0, os.SEEK_END)
bytes_ = configdrive_file.tell()
configdrive_mb = int(math.ceil(float(bytes_) / units.Mi))
finally:
configdrive_file.close()
return (configdrive_mb, configdrive_file.name)
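# Sketch of building a valid inline configdrive value (gzipped, then
# base64-encoded), which is the non-URL form this helper decodes; the
# content bytes and node uuid are hypothetical:
#     buf = six.BytesIO()
#     with gzip.GzipFile(fileobj=buf, mode='wb') as f:
#         f.write('fake ISO 9660 content')
#     encoded = base64.b64encode(buf.getvalue())
#     _get_configdrive(encoded, 'fake-node-uuid')
#     # -> (size_in_mb, path_to_uncompressed_tempfile)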
def work_on_disk(dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format,
image_path, node_uuid, preserve_ephemeral=False,
configdrive=None):
"""Create partitions and copy an image to the root partition.
:param dev: Path for the device to work on.
:param root_mb: Size of the root partition in megabytes.
:param swap_mb: Size of the swap partition in megabytes.
:param ephemeral_mb: Size of the ephemeral partition in megabytes. If 0,
no ephemeral partition will be created.
:param ephemeral_format: The type of file system to format the ephemeral
partition.
:param image_path: Path for the instance's disk image.
:param node_uuid: node's uuid. Used for logging.
:param preserve_ephemeral: If True, no filesystem is written to the
ephemeral block device, preserving whatever content it had (if the
partition table has not changed).
:param configdrive: Optional. Base64 encoded Gzipped configdrive content
or configdrive HTTP URL.
:returns: the UUID of the root partition.
"""
if not is_block_device(dev):
raise exception.InstanceDeployFailure(
_("Parent device '%s' not found") % dev)
# the only way for preserve_ephemeral to be set to true is if we are
# rebuilding an instance with --preserve_ephemeral.
commit = not preserve_ephemeral
# now, if we are committing the changes to disk, clean first.
if commit:
destroy_disk_metadata(dev, node_uuid)
try:
# If requested, get the configdrive file and determine the size
# of the configdrive partition
configdrive_mb = 0
configdrive_file = None
if configdrive:
configdrive_mb, configdrive_file = _get_configdrive(configdrive,
node_uuid)
part_dict = make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
configdrive_mb, commit=commit)
ephemeral_part = part_dict.get('ephemeral')
swap_part = part_dict.get('swap')
configdrive_part = part_dict.get('configdrive')
root_part = part_dict.get('root')
if not is_block_device(root_part):
raise exception.InstanceDeployFailure(
_("Root device '%s' not found") % root_part)
for part in ('swap', 'ephemeral', 'configdrive'):
part_device = part_dict.get(part)
LOG.debug("Checking for %(part)s device (%(dev)s) on node "
"%(node)s.", {'part': part, 'dev': part_device,
'node': node_uuid})
if part_device and not is_block_device(part_device):
raise exception.InstanceDeployFailure(
_("'%(partition)s' device '%(part_device)s' not found") %
{'partition': part, 'part_device': part_device})
if configdrive_part:
# Copy the configdrive content to the configdrive partition
dd(configdrive_file, configdrive_part)
finally:
# If the configdrive was requested, make sure we delete the file
# after copying the content to the partition
if configdrive_file:
utils.unlink_without_raise(configdrive_file)
populate_image(image_path, root_part)
if swap_part:
mkswap(swap_part)
if ephemeral_part and not preserve_ephemeral:
mkfs_ephemeral(ephemeral_part, ephemeral_format)
try:
root_uuid = block_uuid(root_part)
except processutils.ProcessExecutionError:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to detect root device UUID."))
return root_uuid
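# End-to-end usage sketch (device, image path and uuid are hypothetical):
#     root_uuid = work_on_disk('/dev/fake', root_mb=4096, swap_mb=512,
#                              ephemeral_mb=0, ephemeral_format=None,
#                              image_path='/tmp/image.qcow2',
#                              node_uuid='fake-node-uuid')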

ironic_lib/exception.py Executable file

@@ -0,0 +1,100 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Ironic base exception handling.
Includes decorator for re-raising Ironic-type exceptions.
SHOULD include dedicated exception logging.
"""
import logging
import six
from oslo_config import cfg
from ironic_lib.openstack.common._i18n import _
from ironic_lib.openstack.common._i18n import _LE
LOG = logging.getLogger(__name__)
exc_log_opts = [
cfg.BoolOpt('fatal_exception_format_errors',
default=False,
help='Make exception message format errors fatal.'),
]
CONF = cfg.CONF
CONF.register_opts(exc_log_opts)
class IronicException(Exception):
"""Base Ironic Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred.")
code = 500
headers = {}
safe = False
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if 'code' not in self.kwargs:
try:
self.kwargs['code'] = self.code
except AttributeError:
pass
if not message:
try:
message = self.message % kwargs
except Exception as e:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception(_LE('Exception in string format operation'))
for name, value in six.iteritems(kwargs):
LOG.error("%s: %s", name, value)
if CONF.fatal_exception_format_errors:
raise e
else:
# at least get the core message out if something happened
message = self.message
super(IronicException, self).__init__(message)
def format_message(self):
if self.__class__.__name__.endswith('_Remote'):
return self.args[0]
else:
return six.text_type(self)
class InstanceDeployFailure(IronicException):
message = _("Failed to deploy instance: %(reason)s")
class FileSystemNotSupported(IronicException):
message = _("Failed to create a file system. "
"File system %(fs)s is not supported.")

ironic_lib/openstack/common/_i18n.py

@@ -24,7 +24,7 @@ try:
# repository. It is OK to have more than one translation function
# using the same domain, since there will still only be one message
# catalog.
_translators = oslo.i18n.TranslatorFactory(domain='ironic')
_translators = oslo.i18n.TranslatorFactory(domain='ironic_lib')
# The primary translation function using the well-known name "_"
_ = _translators.primary

ironic_lib/openstack/common/imageutils.py

@@ -21,8 +21,9 @@ Helper methods to deal with images.
import re
from ironic.openstack.common.gettextutils import _
from ironic.openstack.common import strutils
from oslo.utils import strutils
from ironic_lib.openstack.common._i18n import _
class QemuImgInfo(object):
@@ -100,10 +101,9 @@ class QemuImgInfo(object):
real_details = real_details.strip().lower()
elif root_cmd == 'snapshot_list':
# Next line should be a header, starting with 'ID'
if not lines_after or not lines_after[0].startswith("ID"):
if not lines_after or not lines_after.pop(0).startswith("ID"):
msg = _("Snapshot list encountered but no header found!")
raise ValueError(msg)
del lines_after[0]
real_details = []
# This is the sprintf pattern we will try to match
# "%-10s%-20s%7s%20s%15s"
@@ -118,6 +118,7 @@ class QemuImgInfo(object):
date_pieces = line_pieces[5].split(":")
if len(date_pieces) != 3:
break
lines_after.pop(0)
real_details.append({
'id': line_pieces[0],
'tag': line_pieces[1],
@@ -125,7 +126,6 @@ class QemuImgInfo(object):
'date': line_pieces[3],
'vm_clock': line_pieces[4] + " " + line_pieces[5],
})
del lines_after[0]
return real_details
def _parse(self, cmd_output):

ironic_lib/openstack/common/loopingcall.py

@@ -15,14 +15,15 @@
# License for the specific language governing permissions and limitations
# under the License.
import logging
import sys
import time
from eventlet import event
from eventlet import greenthread
from ironic.openstack.common.gettextutils import _LE, _LW
from ironic.openstack.common import log as logging
from ironic_lib.openstack.common._i18n import _LE
from ironic_lib.openstack.common._i18n import _LW
LOG = logging.getLogger(__name__)
@@ -84,9 +85,9 @@ class FixedIntervalLoopingCall(LoopingCallBase):
break
delay = end - start - interval
if delay > 0:
LOG.warn(_LW('task %(func_name)s run outlasted '
LOG.warn(_LW('task %(func_name)r run outlasted '
'interval by %(delay).2f sec'),
{'func_name': repr(self.f), 'delay': delay})
{'func_name': self.f, 'delay': delay})
greenthread.sleep(-delay if delay < 0 else 0)
except LoopingCallDone as e:
self.stop()
@@ -127,9 +128,9 @@ class DynamicLoopingCall(LoopingCallBase):
if periodic_interval_max is not None:
idle = min(idle, periodic_interval_max)
LOG.debug('Dynamic looping call %(func_name)s sleeping '
LOG.debug('Dynamic looping call %(func_name)r sleeping '
'for %(idle).02f seconds',
{'func_name': repr(self.f), 'idle': idle})
{'func_name': self.f, 'idle': idle})
greenthread.sleep(idle)
except LoopingCallDone as e:
self.stop()

ironic_lib/utils.py Normal file

@@ -0,0 +1,148 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utilities and helper functions."""
import errno
import logging
import os
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import excutils
from ironic_lib import exception
from ironic_lib.openstack.common._i18n import _LE
from ironic_lib.openstack.common._i18n import _LW
utils_opts = [
cfg.StrOpt('rootwrap_config',
default="",
help='Path to the rootwrap configuration file to use for '
'running commands as root.'),
cfg.StrOpt('rootwrap_helper_cmd',
default="",
help='Rootwrap helper command to prefix to commands that are '
'run as root.'),
cfg.StrOpt('tempdir',
help='Explicitly specify the temporary working directory.'),
]
CONF = cfg.CONF
CONF.register_opts(utils_opts)
LOG = logging.getLogger(__name__)
def _get_root_helper():
return '%s %s' % (CONF.rootwrap_helper_cmd, CONF.rootwrap_config)
def execute(*cmd, **kwargs):
"""Convenience wrapper around oslo's execute() method.
:param cmd: Passed to processutils.execute.
:param use_standard_locale: True | False. Defaults to False. If set to
True, execute command with standard locale
added to environment variables.
:returns: (stdout, stderr) from process execution
:raises: UnknownArgumentError
:raises: ProcessExecutionError
"""
use_standard_locale = kwargs.pop('use_standard_locale', False)
if use_standard_locale:
env = kwargs.pop('env_variables', os.environ.copy())
env['LC_ALL'] = 'C'
kwargs['env_variables'] = env
if kwargs.get('run_as_root') and 'root_helper' not in kwargs:
kwargs['root_helper'] = _get_root_helper()
result = processutils.execute(*cmd, **kwargs)
LOG.debug('Execution completed, command line is "%s"',
' '.join(map(str, cmd)))
LOG.debug('Command stdout is: "%s"', result[0])
LOG.debug('Command stderr is: "%s"', result[1])
return result
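# Usage sketch ('/dev/fake' is hypothetical): run a command through the
# configured rootwrap helper with a standard C locale:
#     out, err = execute('blockdev', '--getsz', '/dev/fake',
#                        run_as_root=True, use_standard_locale=True)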
def mkfs(fs, path, label=None):
"""Format a file or block device
:param fs: Filesystem type (examples include 'swap', 'ext3', 'ext4'
'btrfs', etc.)
:param path: Path to file or block device to format
:param label: Volume label to use
"""
if fs == 'swap':
args = ['mkswap']
else:
args = ['mkfs', '-t', fs]
# add -F to force non-interactive execution on a non-block device.
if fs in ('ext3', 'ext4'):
args.extend(['-F'])
if label:
if fs in ('msdos', 'vfat'):
label_opt = '-n'
else:
label_opt = '-L'
args.extend([label_opt, label])
args.append(path)
try:
execute(*args, run_as_root=True, use_standard_locale=True)
except processutils.ProcessExecutionError as e:
with excutils.save_and_reraise_exception() as ctx:
if os.strerror(errno.ENOENT) in e.stderr:
ctx.reraise = False
LOG.exception(_LE('Failed to make file system. '
'File system %s is not supported.'), fs)
raise exception.FileSystemNotSupported(fs=fs)
else:
LOG.exception(_LE('Failed to create a file system '
'in %(path)s. Error: %(error)s'),
{'path': path, 'error': e})
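# Sketch of the composed commands (device paths hypothetical):
#     mkfs('ext4', '/dev/fake1', label='rootfs')
#     # -> mkfs -t ext4 -F -L rootfs /dev/fake1
#     mkfs('swap', '/dev/fake2', label='swap1')
#     # -> mkswap -L swap1 /dev/fake2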
def unlink_without_raise(path):
try:
os.unlink(path)
except OSError as e:
if e.errno == errno.ENOENT:
return
else:
LOG.warn(_LW("Failed to unlink %(path)s, error: %(e)s"),
{'path': path, 'e': e})
def dd(src, dst, *args):
"""Execute dd from src to dst.
:param src: the input file for dd command.
:param dst: the output file for dd command.
:param args: a tuple containing the arguments to be
passed to dd command.
:raises: processutils.ProcessExecutionError if it failed
to run the process.
"""
LOG.debug("Starting dd process.")
execute('dd', 'if=%s' % src, 'of=%s' % dst, *args,
run_as_root=True, check_exit_code=[0])
def is_http_url(url):
url = url.lower()
return url.startswith('http://') or url.startswith('https://')
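# Usage sketch: the check is case-insensitive and looks only at the scheme:
#     is_http_url('HTTP://1.2.3.4/cd')  # True
#     is_http_url('Zm9vYmFy')           # False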

openstack-common.conf

@@ -1,22 +1,8 @@
[DEFAULT]
# The list of modules to copy from oslo-incubator
module=config.generator
module=context
module=fileutils
module=gettextutils
module=imageutils
module=log
module=loopingcall
module=periodic_task
module=policy
module=service
module=versionutils
# Tools
script=tools/install_venv_common.py
script=tools/config/generate_sample.sh
script=tools/config/check_uptodate.sh
# The base module to hold the copy of openstack.common
base=ironic
base=ironic_lib

requirements.txt

@@ -1,38 +1,21 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=0.6,!=0.7,<1.0
SQLAlchemy>=0.9.7,<=0.9.99
alembic>=0.7.2
argparse
eventlet>=0.16.1
lxml>=2.3
WebOb>=1.2.3
greenlet>=0.3.2
sqlalchemy-migrate>=0.9.1,!=0.9.2
netaddr>=0.7.12
paramiko>=1.13.0
iso8601>=0.1.9
python-neutronclient>=2.3.6,<3
python-glanceclient>=0.15.0
python-keystoneclient>=1.1.0
python-swiftclient>=2.2.0
stevedore>=1.1.0 # Apache-2.0
pysendfile==2.0.0
websockify>=0.6.0,<0.7
Jinja2>=2.6 # BSD License (3 clause)
oslo.concurrency>=1.4.1 # Apache-2.0
oslo.config>=1.6.0 # Apache-2.0
oslo.db>=1.4.1 # Apache-2.0
oslo.rootwrap>=1.5.0
oslo.i18n>=1.3.0 # Apache-2.0
oslo.middleware>=0.3.0 # Apache-2.0
oslo.serialization>=1.2.0 # Apache-2.0
oslo.utils>=1.2.0 # Apache-2.0
pecan>=0.8.0
PrettyTable>=0.7,<0.8
psutil>=1.1.1,<2.0.0
pycrypto>=2.6
requests>=2.2.0,!=2.4.0
six>=1.9.0
jsonpatch>=1.1
WSME>=0.6
Jinja2>=2.6 # BSD License (3 clause)
keystonemiddleware>=1.0.0
oslo.messaging>=1.6.0 # Apache-2.0
retrying>=1.2.3,!=1.3.0 # Apache-2.0
posix_ipc
six>=1.7.0
stevedore>=1.1.0 # Apache-2.0
oslo.i18n>=1.3.0 # Apache-2.0

setup.cfg

@@ -1,10 +1,10 @@
[metadata]
name = ironic
version = 2015.1
summary = OpenStack Bare Metal Provisioning
name = ironic_lib
version = 2015.0
summary = Ironic Common Libraries
description-file =
README.rst
author = OpenStack
author = OpenStack Ironic
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
@@ -16,83 +16,17 @@ classifier =
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 2.6
[files]
packages =
ironic
ironic_lib
namespace_packages =
ironic_lib
[global]
[entry_points]
console_scripts =
ironic-api = ironic.cmd.api:main
ironic-dbsync = ironic.cmd.dbsync:main
ironic-conductor = ironic.cmd.conductor:main
ironic-rootwrap = oslo_rootwrap.cmd:main
ironic-nova-bm-migrate = ironic.migrate_nova.migrate_db:main
ironic.dhcp =
neutron = ironic.dhcp.neutron:NeutronDHCPApi
none = ironic.dhcp.none:NoneDHCPApi
[nosetests]
ironic.drivers =
agent_ilo = ironic.drivers.ilo:IloVirtualMediaAgentDriver
agent_ipmitool = ironic.drivers.agent:AgentAndIPMIToolDriver
agent_pyghmi = ironic.drivers.agent:AgentAndIPMINativeDriver
agent_ssh = ironic.drivers.agent:AgentAndSSHDriver
agent_vbox = ironic.drivers.agent:AgentAndVirtualBoxDriver
fake = ironic.drivers.fake:FakeDriver
fake_agent = ironic.drivers.fake:FakeAgentDriver
fake_ipmitool = ironic.drivers.fake:FakeIPMIToolDriver
fake_ipminative = ironic.drivers.fake:FakeIPMINativeDriver
fake_ssh = ironic.drivers.fake:FakeSSHDriver
fake_pxe = ironic.drivers.fake:FakePXEDriver
fake_seamicro = ironic.drivers.fake:FakeSeaMicroDriver
fake_iboot = ironic.drivers.fake:FakeIBootDriver
fake_ilo = ironic.drivers.fake:FakeIloDriver
fake_drac = ironic.drivers.fake:FakeDracDriver
fake_snmp = ironic.drivers.fake:FakeSNMPDriver
fake_irmc = ironic.drivers.fake:FakeIRMCDriver
fake_vbox = ironic.drivers.fake:FakeVirtualBoxDriver
iscsi_ilo = ironic.drivers.ilo:IloVirtualMediaIscsiDriver
pxe_ipmitool = ironic.drivers.pxe:PXEAndIPMIToolDriver
pxe_ipminative = ironic.drivers.pxe:PXEAndIPMINativeDriver
pxe_ssh = ironic.drivers.pxe:PXEAndSSHDriver
pxe_vbox = ironic.drivers.pxe:PXEAndVirtualBoxDriver
pxe_seamicro = ironic.drivers.pxe:PXEAndSeaMicroDriver
pxe_iboot = ironic.drivers.pxe:PXEAndIBootDriver
pxe_ilo = ironic.drivers.pxe:PXEAndIloDriver
pxe_drac = ironic.drivers.drac:PXEDracDriver
pxe_snmp = ironic.drivers.pxe:PXEAndSNMPDriver
pxe_irmc = ironic.drivers.pxe:PXEAndIRMCDriver
ironic.database.migration_backend =
sqlalchemy = ironic.db.sqlalchemy.migration
[pbr]
autodoc_index_modules = True
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
[compile_catalog]
directory = ironic/locale
domain = ironic
[update_catalog]
domain = ironic
output_dir = ironic/locale
input_file = ironic/locale/ironic.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = ironic/locale/ironic.pot
[wheel]
universal = 1

test-requirements.txt

@@ -1,22 +1,16 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=0.9.2,<0.10
coverage>=3.6
discover
fixtures>=0.3.14
mock>=1.0
Babel>=1.3
MySQL-python
oslotest>=1.2.0 # Apache-2.0
psycopg2
python-ironicclient>=0.2.1
python-subunit>=0.0.18
testrepository>=0.0.18
testtools>=0.9.36,!=1.2.0
# Doc requirements
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
sphinxcontrib-pecanwsme>=0.8
hacking>=0.10.0,<0.11
oslosphinx>=2.2.0 # Apache-2.0
oslotest>=1.2.0 # Apache-2.0
pylint>=1.3.0 # GNU GPL v2
simplejson>=2.2.0
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
testrepository>=0.0.18
testscenarios>=0.4
testtools>=0.9.36,!=1.2.0
mox3>=0.7.0

tests/__init__.py Normal file


tests/test_disk_partitioner.py Normal file

@@ -0,0 +1,162 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslotest import base as test_base
from testtools.matchers import HasLength
from ironic_lib import disk_partitioner
from ironic_lib import exception
from ironic_lib import utils
class DiskPartitionerTestCase(test_base.BaseTestCase):
def test_add_partition(self):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
dp.add_partition(1024)
dp.add_partition(512, fs_type='linux-swap')
dp.add_partition(2048, bootable=True)
expected = [(1, {'bootable': False,
'fs_type': '',
'type': 'primary',
'size': 1024}),
(2, {'bootable': False,
'fs_type': 'linux-swap',
'type': 'primary',
'size': 512}),
(3, {'bootable': True,
'fs_type': '',
'type': 'primary',
'size': 2048})]
partitions = [(n, p) for n, p in dp.get_partitions()]
self.assertThat(partitions, HasLength(3))
self.assertEqual(expected, partitions)
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec')
@mock.patch.object(utils, 'execute')
def test_commit(self, mock_utils_exc, mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'bootable': False,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'bootable': True,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
with mock.patch.object(dp, 'get_partitions') as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.return_value = (None, None)
dp.commit()
mock_disk_partitioner_exec.assert_called_once_with(
'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on')
mock_utils_exc.assert_called_once_with(
'fuser', '/dev/fake',
run_as_root=True, check_exit_code=[0, 1])
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec')
@mock.patch.object(utils, 'execute')
def test_commit_with_device_is_busy_once(self, mock_utils_exc,
mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'bootable': False,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'bootable': True,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
fuser_outputs = [("/dev/fake: 10000 10001", None), (None, None)]
with mock.patch.object(dp, 'get_partitions') as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.side_effect = fuser_outputs
dp.commit()
mock_disk_partitioner_exec.assert_called_once_with(
'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on')
mock_utils_exc.assert_called_with(
'fuser', '/dev/fake',
run_as_root=True, check_exit_code=[0, 1])
self.assertEqual(2, mock_utils_exc.call_count)
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec')
@mock.patch.object(utils, 'execute')
def test_commit_with_device_is_always_busy(self, mock_utils_exc,
mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'bootable': False,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'bootable': True,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
with mock.patch.object(dp, 'get_partitions') as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.return_value = ("/dev/fake: 10000 10001", None)
self.assertRaises(exception.InstanceDeployFailure, dp.commit)
mock_disk_partitioner_exec.assert_called_once_with(
'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on')
mock_utils_exc.assert_called_with(
'fuser', '/dev/fake',
run_as_root=True, check_exit_code=[0, 1])
self.assertEqual(20, mock_utils_exc.call_count)
@mock.patch.object(disk_partitioner.DiskPartitioner, '_exec')
@mock.patch.object(utils, 'execute')
def test_commit_with_device_disconnected(self, mock_utils_exc,
mock_disk_partitioner_exec):
dp = disk_partitioner.DiskPartitioner('/dev/fake')
fake_parts = [(1, {'bootable': False,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1}),
(2, {'bootable': True,
'fs_type': 'fake-fs-type',
'type': 'fake-type',
'size': 1})]
with mock.patch.object(dp, 'get_partitions') as mock_gp:
mock_gp.return_value = fake_parts
mock_utils_exc.return_value = (None, "Specified filename /dev/fake"
" does not exist.")
self.assertRaises(exception.InstanceDeployFailure, dp.commit)
mock_disk_partitioner_exec.assert_called_once_with(
'mklabel', 'msdos',
'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
'set', '2', 'boot', 'on')
mock_utils_exc.assert_called_with(
'fuser', '/dev/fake',
run_as_root=True, check_exit_code=[0, 1])
self.assertEqual(20, mock_utils_exc.call_count)

tests/test_disk_utils.py Normal file

@@ -0,0 +1,493 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import gzip
import mock
import os
import shutil
import stat
import tempfile
from oslo_concurrency import processutils
from oslotest import base as test_base
import requests
from ironic_lib import disk_partitioner
from ironic_lib import disk_utils
from ironic_lib import exception
from ironic_lib import utils
@mock.patch.object(utils, 'execute')
class ListPartitionsTestCase(test_base.BaseTestCase):
def test_correct(self, execute_mock):
output = """
BYT;
/dev/sda:500107862016B:scsi:512:4096:msdos:ATA HGST HTS725050A7:;
1:1.00MiB:501MiB:500MiB:ext4::boot;
2:501MiB:476940MiB:476439MiB:::;
"""
expected = [
{'start': 1, 'end': 501, 'size': 500,
'filesystem': 'ext4', 'flags': 'boot'},
{'start': 501, 'end': 476940, 'size': 476439,
'filesystem': '', 'flags': ''},
]
execute_mock.return_value = (output, '')
result = disk_utils.list_partitions('/dev/fake')
self.assertEqual(expected, result)
execute_mock.assert_called_once_with(
'parted', '-s', '-m', '/dev/fake', 'unit', 'MiB', 'print',
use_standard_locale=True)
@mock.patch.object(disk_utils.LOG, 'warn')
def test_incorrect(self, log_mock, execute_mock):
output = """
BYT;
/dev/sda:500107862016B:scsi:512:4096:msdos:ATA HGST HTS725050A7:;
1:XX1076MiB:---:524MiB:ext4::boot;
"""
execute_mock.return_value = (output, '')
self.assertEqual([], disk_utils.list_partitions('/dev/fake'))
self.assertEqual(1, log_mock.call_count)
@mock.patch.object(disk_partitioner.DiskPartitioner, 'commit', lambda _: None)
class WorkOnDiskTestCase(test_base.BaseTestCase):
def setUp(self):
super(WorkOnDiskTestCase, self).setUp()
self.image_path = '/tmp/xyz/image'
self.root_mb = 128
self.swap_mb = 64
self.ephemeral_mb = 0
self.ephemeral_format = None
self.configdrive_mb = 0
self.dev = '/dev/fake'
self.swap_part = '/dev/fake-part1'
self.root_part = '/dev/fake-part2'
self.mock_ibd = mock.patch.object(disk_utils,
'is_block_device').start()
self.mock_mp = mock.patch.object(disk_utils,
'make_partitions').start()
self.addCleanup(self.mock_ibd.stop)
self.addCleanup(self.mock_mp.stop)
self.mock_remlbl = mock.patch.object(disk_utils,
'destroy_disk_metadata').start()
self.addCleanup(self.mock_remlbl.stop)
self.mock_mp.return_value = {'swap': self.swap_part,
'root': self.root_part}
def test_no_parent_device(self):
self.mock_ibd.return_value = False
self.assertRaises(exception.InstanceDeployFailure,
disk_utils.work_on_disk, self.dev,
self.root_mb, self.swap_mb, self.ephemeral_mb,
self.ephemeral_format, self.image_path, False)
self.mock_ibd.assert_called_once_with(self.dev)
self.assertFalse(self.mock_mp.called,
"make_partitions mock was unexpectedly called.")
def test_no_root_partition(self):
self.mock_ibd.side_effect = [True, False]
calls = [mock.call(self.dev),
mock.call(self.root_part)]
self.assertRaises(exception.InstanceDeployFailure,
disk_utils.work_on_disk, self.dev,
self.root_mb, self.swap_mb, self.ephemeral_mb,
self.ephemeral_format, self.image_path, False)
self.assertEqual(self.mock_ibd.call_args_list, calls)
self.mock_mp.assert_called_once_with(self.dev, self.root_mb,
self.swap_mb, self.ephemeral_mb,
self.configdrive_mb, commit=True)
def test_no_swap_partition(self):
self.mock_ibd.side_effect = [True, True, False]
calls = [mock.call(self.dev),
mock.call(self.root_part),
mock.call(self.swap_part)]
self.assertRaises(exception.InstanceDeployFailure,
disk_utils.work_on_disk, self.dev,
self.root_mb, self.swap_mb, self.ephemeral_mb,
self.ephemeral_format, self.image_path, False)
self.assertEqual(self.mock_ibd.call_args_list, calls)
self.mock_mp.assert_called_once_with(self.dev, self.root_mb,
self.swap_mb, self.ephemeral_mb,
self.configdrive_mb, commit=True)
def test_no_ephemeral_partition(self):
ephemeral_part = '/dev/fake-part1'
swap_part = '/dev/fake-part2'
root_part = '/dev/fake-part3'
ephemeral_mb = 256
ephemeral_format = 'exttest'
self.mock_mp.return_value = {'ephemeral': ephemeral_part,
'swap': swap_part,
'root': root_part}
self.mock_ibd.side_effect = [True, True, True, False]
calls = [mock.call(self.dev),
mock.call(root_part),
mock.call(swap_part),
mock.call(ephemeral_part)]
self.assertRaises(exception.InstanceDeployFailure,
disk_utils.work_on_disk, self.dev,
self.root_mb, self.swap_mb, ephemeral_mb,
ephemeral_format, self.image_path, False)
self.assertEqual(self.mock_ibd.call_args_list, calls)
self.mock_mp.assert_called_once_with(self.dev, self.root_mb,
self.swap_mb, ephemeral_mb,
self.configdrive_mb, commit=True)
@mock.patch.object(utils, 'unlink_without_raise')
@mock.patch.object(disk_utils, '_get_configdrive')
def test_no_configdrive_partition(self, mock_configdrive, mock_unlink):
mock_configdrive.return_value = (10, 'fake-path')
swap_part = '/dev/fake-part1'
configdrive_part = '/dev/fake-part2'
root_part = '/dev/fake-part3'
configdrive_url = 'http://1.2.3.4/cd'
configdrive_mb = 10
self.mock_mp.return_value = {'swap': swap_part,
'configdrive': configdrive_part,
'root': root_part}
self.mock_ibd.side_effect = [True, True, True, False]
calls = [mock.call(self.dev),
mock.call(root_part),
mock.call(swap_part),
mock.call(configdrive_part)]
self.assertRaises(exception.InstanceDeployFailure,
disk_utils.work_on_disk, self.dev,
self.root_mb, self.swap_mb, self.ephemeral_mb,
self.ephemeral_format, self.image_path, 'fake-uuid',
preserve_ephemeral=False,
configdrive=configdrive_url)
self.assertEqual(self.mock_ibd.call_args_list, calls)
self.mock_mp.assert_called_once_with(self.dev, self.root_mb,
self.swap_mb, self.ephemeral_mb,
configdrive_mb, commit=True)
mock_unlink.assert_called_once_with('fake-path')
@mock.patch.object(utils, 'execute')
class MakePartitionsTestCase(test_base.BaseTestCase):
def setUp(self):
super(MakePartitionsTestCase, self).setUp()
self.dev = 'fake-dev'
self.root_mb = 1024
self.swap_mb = 512
self.ephemeral_mb = 0
self.configdrive_mb = 0
self.parted_static_cmd = ['parted', '-a', 'optimal', '-s', self.dev,
'--', 'unit', 'MiB', 'mklabel', 'msdos']
def test_make_partitions(self, mock_exc):
mock_exc.return_value = (None, None)
disk_utils.make_partitions(self.dev, self.root_mb, self.swap_mb,
self.ephemeral_mb, self.configdrive_mb)
expected_mkpart = ['mkpart', 'primary', 'linux-swap', '1', '513',
'mkpart', 'primary', '', '513', '1537']
parted_cmd = self.parted_static_cmd + expected_mkpart
parted_call = mock.call(*parted_cmd, run_as_root=True,
check_exit_code=[0])
fuser_cmd = ['fuser', 'fake-dev']
fuser_call = mock.call(*fuser_cmd, run_as_root=True,
check_exit_code=[0, 1])
mock_exc.assert_has_calls([parted_call, fuser_call])
def test_make_partitions_with_ephemeral(self, mock_exc):
self.ephemeral_mb = 2048
expected_mkpart = ['mkpart', 'primary', '', '1', '2049',
'mkpart', 'primary', 'linux-swap', '2049', '2561',
'mkpart', 'primary', '', '2561', '3585']
cmd = self.parted_static_cmd + expected_mkpart
mock_exc.return_value = (None, None)
disk_utils.make_partitions(self.dev, self.root_mb, self.swap_mb,
self.ephemeral_mb, self.configdrive_mb)
parted_call = mock.call(*cmd, run_as_root=True, check_exit_code=[0])
mock_exc.assert_has_calls(parted_call)
@mock.patch.object(disk_utils, 'get_dev_block_size')
@mock.patch.object(utils, 'execute')
class DestroyMetaDataTestCase(test_base.BaseTestCase):
def setUp(self):
super(DestroyMetaDataTestCase, self).setUp()
self.dev = 'fake-dev'
self.node_uuid = "12345678-1234-1234-1234-1234567890abcxyz"
def test_destroy_disk_metadata(self, mock_exec, mock_gz):
mock_gz.return_value = 64
expected_calls = [mock.call('dd', 'if=/dev/zero', 'of=fake-dev',
'bs=512', 'count=36', run_as_root=True,
check_exit_code=[0]),
mock.call('dd', 'if=/dev/zero', 'of=fake-dev',
'bs=512', 'count=36', 'seek=28',
run_as_root=True,
check_exit_code=[0])]
disk_utils.destroy_disk_metadata(self.dev, self.node_uuid)
mock_exec.assert_has_calls(expected_calls)
self.assertTrue(mock_gz.called)
def test_destroy_disk_metadata_get_dev_size_fail(self, mock_exec, mock_gz):
mock_gz.side_effect = processutils.ProcessExecutionError
expected_call = [mock.call('dd', 'if=/dev/zero', 'of=fake-dev',
'bs=512', 'count=36', run_as_root=True,
check_exit_code=[0])]
self.assertRaises(processutils.ProcessExecutionError,
disk_utils.destroy_disk_metadata,
self.dev,
self.node_uuid)
mock_exec.assert_has_calls(expected_call)
def test_destroy_disk_metadata_dd_fail(self, mock_exec, mock_gz):
mock_exec.side_effect = processutils.ProcessExecutionError
expected_call = [mock.call('dd', 'if=/dev/zero', 'of=fake-dev',
'bs=512', 'count=36', run_as_root=True,
check_exit_code=[0])]
self.assertRaises(processutils.ProcessExecutionError,
disk_utils.destroy_disk_metadata,
self.dev,
self.node_uuid)
mock_exec.assert_has_calls(expected_call)
self.assertFalse(mock_gz.called)
@mock.patch.object(utils, 'execute')
class GetDeviceBlockSizeTestCase(test_base.BaseTestCase):
def setUp(self):
super(GetDeviceBlockSizeTestCase, self).setUp()
self.dev = 'fake-dev'
self.node_uuid = "12345678-1234-1234-1234-1234567890abcxyz"
def test_get_dev_block_size(self, mock_exec):
mock_exec.return_value = ("64", "")
expected_call = [mock.call('blockdev', '--getsz', self.dev,
run_as_root=True, check_exit_code=[0])]
disk_utils.get_dev_block_size(self.dev)
mock_exec.assert_has_calls(expected_call)
@mock.patch.object(disk_utils, 'dd')
@mock.patch.object(disk_utils, 'qemu_img_info')
@mock.patch.object(disk_utils, 'convert_image')
class PopulateImageTestCase(test_base.BaseTestCase):
def setUp(self):
super(PopulateImageTestCase, self).setUp()
def test_populate_raw_image(self, mock_cg, mock_qinfo, mock_dd):
type(mock_qinfo.return_value).file_format = mock.PropertyMock(
return_value='raw')
disk_utils.populate_image('src', 'dst')
mock_dd.assert_called_once_with('src', 'dst')
self.assertFalse(mock_cg.called)
def test_populate_qcow2_image(self, mock_cg, mock_qinfo, mock_dd):
type(mock_qinfo.return_value).file_format = mock.PropertyMock(
return_value='qcow2')
disk_utils.populate_image('src', 'dst')
mock_cg.assert_called_once_with('src', 'dst', 'raw', True)
self.assertFalse(mock_dd.called)
@mock.patch.object(disk_utils, 'is_block_device', lambda d: True)
@mock.patch.object(disk_utils, 'block_uuid', lambda p: 'uuid')
@mock.patch.object(disk_utils, 'dd', lambda *_: None)
@mock.patch.object(disk_utils, 'convert_image', lambda *_: None)
@mock.patch.object(utils, 'mkfs', lambda *_: None)
# NOTE(dtantsur): destroy_disk_metadata resets file size, disabling it
@mock.patch.object(disk_utils, 'destroy_disk_metadata', lambda *_: None)
class RealFilePartitioningTestCase(test_base.BaseTestCase):
"""This test applies some real-world partitioning scenario to a file.
This test covers the whole partitioning, mocking everything not possible
on a file. That helps us ensure that we do all the partitioning math
properly, and it also serves as integration testing of DiskPartitioner.
"""
def setUp(self):
super(RealFilePartitioningTestCase, self).setUp()
# NOTE(dtantsur): no parted utility on gate-ironic-python26
try:
utils.execute('parted', '--version')
except OSError as exc:
self.skipTest('parted utility was not found: %s' % exc)
self.file = tempfile.NamedTemporaryFile(delete=False)
# NOTE(ifarkas): the file needs to be closed, so fuser won't report
# any usage
self.file.close()
# NOTE(dtantsur): 20 MiB file with zeros
utils.execute('dd', 'if=/dev/zero', 'of=%s' % self.file.name,
'bs=1', 'count=0', 'seek=20MiB')
@staticmethod
def _run_without_root(func, *args, **kwargs):
"""Make sure root is not required when using utils.execute."""
real_execute = utils.execute
def fake_execute(*cmd, **kwargs):
kwargs['run_as_root'] = False
return real_execute(*cmd, **kwargs)
with mock.patch.object(utils, 'execute', fake_execute):
return func(*args, **kwargs)
def test_different_sizes(self):
# NOTE(dtantsur): Keep this list in order with expected partitioning
fields = ['ephemeral_mb', 'swap_mb', 'root_mb']
variants = ((0, 0, 12), (4, 2, 8), (0, 4, 10), (5, 0, 10))
for variant in variants:
kwargs = dict(zip(fields, variant))
self._run_without_root(disk_utils.work_on_disk,
self.file.name, ephemeral_format='ext4',
node_uuid='', image_path='path', **kwargs)
part_table = self._run_without_root(
disk_utils.list_partitions, self.file.name)
for part, expected_size in zip(part_table, filter(None, variant)):
self.assertEqual(expected_size, part['size'],
"comparison failed for %s" % list(variant))
def test_whole_disk(self):
# 6 MiB ephemeral + 3 MiB swap + 9 MiB root + 1 MiB for MBR
# + 1 MiB MAGIC == 20 MiB whole disk
# TODO(dtantsur): figure out why we need 'magic' 1 more MiB
# and why this is different on Ubuntu and Fedora (see below)
self._run_without_root(disk_utils.work_on_disk, self.file.name,
root_mb=9, ephemeral_mb=6, swap_mb=3,
ephemeral_format='ext4', node_uuid='',
image_path='path')
part_table = self._run_without_root(
disk_utils.list_partitions, self.file.name)
sizes = [part['size'] for part in part_table]
# NOTE(dtantsur): parted in Ubuntu 12.04 will occupy the last MiB,
# parted in Fedora 20 won't - thus two possible variants for last part
self.assertEqual([6, 3], sizes[:2],
"unexpected partitioning %s" % part_table)
self.assertIn(sizes[2], (9, 10))
@mock.patch.object(shutil, 'copyfileobj')
@mock.patch.object(requests, 'get')
class GetConfigdriveTestCase(test_base.BaseTestCase):
@mock.patch.object(gzip, 'GzipFile')
def test_get_configdrive(self, mock_gzip, mock_requests, mock_copy):
mock_requests.return_value = mock.MagicMock(content='Zm9vYmFy')
disk_utils._get_configdrive('http://1.2.3.4/cd',
'fake-node-uuid')
mock_requests.assert_called_once_with('http://1.2.3.4/cd')
mock_gzip.assert_called_once_with('configdrive', 'rb',
fileobj=mock.ANY)
mock_copy.assert_called_once_with(mock.ANY, mock.ANY)
@mock.patch.object(gzip, 'GzipFile')
def test_get_configdrive_base64_string(self, mock_gzip, mock_requests,
mock_copy):
disk_utils._get_configdrive('Zm9vYmFy', 'fake-node-uuid')
self.assertFalse(mock_requests.called)
mock_gzip.assert_called_once_with('configdrive', 'rb',
fileobj=mock.ANY)
mock_copy.assert_called_once_with(mock.ANY, mock.ANY)
def test_get_configdrive_bad_url(self, mock_requests, mock_copy):
mock_requests.side_effect = requests.exceptions.RequestException
self.assertRaises(exception.InstanceDeployFailure,
disk_utils._get_configdrive,
'http://1.2.3.4/cd', 'fake-node-uuid')
self.assertFalse(mock_copy.called)
@mock.patch.object(base64, 'b64decode')
def test_get_configdrive_base64_error(self, mock_b64, mock_requests,
mock_copy):
mock_b64.side_effect = TypeError
self.assertRaises(exception.InstanceDeployFailure,
disk_utils._get_configdrive,
'malformed', 'fake-node-uuid')
mock_b64.assert_called_once_with('malformed')
self.assertFalse(mock_copy.called)
@mock.patch.object(gzip, 'GzipFile')
def test_get_configdrive_gzip_error(self, mock_gzip, mock_requests,
mock_copy):
mock_requests.return_value = mock.MagicMock(content='Zm9vYmFy')
mock_copy.side_effect = IOError
self.assertRaises(exception.InstanceDeployFailure,
disk_utils._get_configdrive,
'http://1.2.3.4/cd', 'fake-node-uuid')
mock_requests.assert_called_once_with('http://1.2.3.4/cd')
mock_gzip.assert_called_once_with('configdrive', 'rb',
fileobj=mock.ANY)
mock_copy.assert_called_once_with(mock.ANY, mock.ANY)
@mock.patch('time.sleep', lambda sec: None)
class OtherFunctionTestCase(test_base.BaseTestCase):
@mock.patch.object(os, 'stat')
@mock.patch.object(stat, 'S_ISBLK')
def test_is_block_device_works(self, mock_is_blk, mock_os):
device = '/dev/disk/by-path/ip-1.2.3.4:5678-iscsi-iqn.fake-lun-9'
mock_is_blk.return_value = True
mock_os().st_mode = 10000
self.assertTrue(disk_utils.is_block_device(device))
mock_is_blk.assert_called_once_with(mock_os().st_mode)
@mock.patch.object(os, 'stat')
def test_is_block_device_raises(self, mock_os):
device = '/dev/disk/by-path/ip-1.2.3.4:5678-iscsi-iqn.fake-lun-9'
mock_os.side_effect = OSError
self.assertRaises(exception.InstanceDeployFailure,
disk_utils.is_block_device, device)
mock_os.assert_has_calls([mock.call(device)] * 3)
@mock.patch.object(os.path, 'getsize')
@mock.patch.object(disk_utils, 'qemu_img_info')
def test_get_image_mb(self, mock_qinfo, mock_getsize):
mb = 1024 * 1024
mock_getsize.return_value = 0
type(mock_qinfo.return_value).virtual_size = mock.PropertyMock(
return_value=0)
self.assertEqual(0, disk_utils.get_image_mb('x', False))
self.assertEqual(0, disk_utils.get_image_mb('x', True))
mock_getsize.return_value = 1
type(mock_qinfo.return_value).virtual_size = mock.PropertyMock(
return_value=1)
self.assertEqual(1, disk_utils.get_image_mb('x', False))
self.assertEqual(1, disk_utils.get_image_mb('x', True))
mock_getsize.return_value = mb
type(mock_qinfo.return_value).virtual_size = mock.PropertyMock(
return_value=mb)
self.assertEqual(1, disk_utils.get_image_mb('x', False))
self.assertEqual(1, disk_utils.get_image_mb('x', True))
mock_getsize.return_value = mb + 1
type(mock_qinfo.return_value).virtual_size = mock.PropertyMock(
return_value=mb + 1)
self.assertEqual(2, disk_utils.get_image_mb('x', False))
self.assertEqual(2, disk_utils.get_image_mb('x', True))

tests/test_utils.py Normal file

@@ -0,0 +1,239 @@
# Copyright 2011 Justin Santa Barbara
# Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import errno
import os
import os.path
import tempfile
import mock
from oslo.config import cfg
from oslo_concurrency import processutils
from oslotest import base as test_base
from ironic_lib import exception
from ironic_lib import utils
CONF = cfg.CONF
class BareMetalUtilsTestCase(test_base.BaseTestCase):
def test_unlink(self):
with mock.patch.object(os, "unlink") as unlink_mock:
unlink_mock.return_value = None
utils.unlink_without_raise("/fake/path")
unlink_mock.assert_called_once_with("/fake/path")
def test_unlink_ENOENT(self):
with mock.patch.object(os, "unlink") as unlink_mock:
unlink_mock.side_effect = OSError(errno.ENOENT)
utils.unlink_without_raise("/fake/path")
unlink_mock.assert_called_once_with("/fake/path")
class ExecuteTestCase(test_base.BaseTestCase):
def test_retry_on_failure(self):
fd, tmpfilename = tempfile.mkstemp()
_, tmpfilename2 = tempfile.mkstemp()
try:
fp = os.fdopen(fd, 'w+')
fp.write('''#!/bin/sh
# If stdin fails to get passed during one of the runs, make a note.
if ! grep -q foo
then
echo 'failure' > "$1"
fi
# If stdin has failed to get passed during this or a previous run, exit early.
if grep failure "$1"
then
exit 1
fi
runs="$(cat $1)"
if [ -z "$runs" ]
then
runs=0
fi
runs=$(($runs + 1))
echo $runs > "$1"
exit 1
''')
fp.close()
os.chmod(tmpfilename, 0o755)
try:
self.assertRaises(processutils.ProcessExecutionError,
utils.execute,
tmpfilename, tmpfilename2, attempts=10,
process_input='foo',
delay_on_retry=False)
except OSError as e:
if e.errno == errno.EACCES:
self.skipTest("Permissions error detected. "
"Are you running with a noexec /tmp?")
else:
raise
fp = open(tmpfilename2, 'r')
runs = fp.read()
fp.close()
self.assertNotEqual(runs.strip(), 'failure', 'stdin did not '
'always get passed '
'correctly')
runs = int(runs.strip())
self.assertEqual(10, runs,
'Ran %d times instead of 10.' % (runs,))
finally:
os.unlink(tmpfilename)
os.unlink(tmpfilename2)
def test_unknown_kwargs_raises_error(self):
self.assertRaises(processutils.UnknownArgumentError,
utils.execute,
'/usr/bin/env', 'true',
this_is_not_a_valid_kwarg=True)
def test_check_exit_code_boolean(self):
utils.execute('/usr/bin/env', 'false', check_exit_code=False)
self.assertRaises(processutils.ProcessExecutionError,
utils.execute,
'/usr/bin/env', 'false', check_exit_code=True)
def test_no_retry_on_success(self):
fd, tmpfilename = tempfile.mkstemp()
_, tmpfilename2 = tempfile.mkstemp()
try:
fp = os.fdopen(fd, 'w+')
fp.write('''#!/bin/sh
# If we've already run, bail out.
grep -q foo "$1" && exit 1
# Mark that we've run before.
echo foo > "$1"
# Check that stdin gets passed correctly.
grep foo
''')
fp.close()
os.chmod(tmpfilename, 0o755)
try:
utils.execute(tmpfilename,
tmpfilename2,
process_input='foo',
attempts=2)
except OSError as e:
if e.errno == errno.EACCES:
self.skipTest("Permissions error detected. "
"Are you running with a noexec /tmp?")
else:
raise
finally:
os.unlink(tmpfilename)
os.unlink(tmpfilename2)
@mock.patch.object(processutils, 'execute')
@mock.patch.object(os.environ, 'copy', return_value={})
def test_execute_use_standard_locale_no_env_variables(self, env_mock,
execute_mock):
utils.execute('foo', use_standard_locale=True)
execute_mock.assert_called_once_with('foo',
env_variables={'LC_ALL': 'C'})
@mock.patch.object(processutils, 'execute')
def test_execute_use_standard_locale_with_env_variables(self,
execute_mock):
utils.execute('foo', use_standard_locale=True,
env_variables={'foo': 'bar'})
execute_mock.assert_called_once_with('foo',
env_variables={'LC_ALL': 'C',
'foo': 'bar'})
@mock.patch.object(processutils, 'execute')
def test_execute_not_use_standard_locale(self, execute_mock):
utils.execute('foo', use_standard_locale=False,
env_variables={'foo': 'bar'})
execute_mock.assert_called_once_with('foo',
env_variables={'foo': 'bar'})
def test_execute_get_root_helper(self):
with mock.patch.object(processutils, 'execute') as execute_mock:
helper = utils._get_root_helper()
utils.execute('foo', run_as_root=True)
execute_mock.assert_called_once_with('foo', run_as_root=True,
root_helper=helper)
def test_execute_without_root_helper(self):
with mock.patch.object(processutils, 'execute') as execute_mock:
utils.execute('foo', run_as_root=False)
execute_mock.assert_called_once_with('foo', run_as_root=False)
class MkfsTestCase(test_base.BaseTestCase):
@mock.patch.object(utils, 'execute')
def test_mkfs(self, execute_mock):
utils.mkfs('ext4', '/my/block/dev')
utils.mkfs('msdos', '/my/msdos/block/dev')
utils.mkfs('swap', '/my/swap/block/dev')
expected = [mock.call('mkfs', '-t', 'ext4', '-F', '/my/block/dev',
run_as_root=True,
use_standard_locale=True),
mock.call('mkfs', '-t', 'msdos', '/my/msdos/block/dev',
run_as_root=True,
use_standard_locale=True),
mock.call('mkswap', '/my/swap/block/dev',
run_as_root=True,
use_standard_locale=True)]
self.assertEqual(expected, execute_mock.call_args_list)
@mock.patch.object(utils, 'execute')
def test_mkfs_with_label(self, execute_mock):
utils.mkfs('ext4', '/my/block/dev', 'ext4-vol')
utils.mkfs('msdos', '/my/msdos/block/dev', 'msdos-vol')
utils.mkfs('swap', '/my/swap/block/dev', 'swap-vol')
expected = [mock.call('mkfs', '-t', 'ext4', '-F', '-L', 'ext4-vol',
'/my/block/dev', run_as_root=True,
use_standard_locale=True),
mock.call('mkfs', '-t', 'msdos', '-n', 'msdos-vol',
'/my/msdos/block/dev', run_as_root=True,
use_standard_locale=True),
mock.call('mkswap', '-L', 'swap-vol',
'/my/swap/block/dev', run_as_root=True,
use_standard_locale=True)]
self.assertEqual(expected, execute_mock.call_args_list)
@mock.patch.object(utils, 'execute',
side_effect=processutils.ProcessExecutionError(
stderr=os.strerror(errno.ENOENT)))
def test_mkfs_with_unsupported_fs(self, execute_mock):
self.assertRaises(exception.FileSystemNotSupported,
utils.mkfs, 'foo', '/my/block/dev')
@mock.patch.object(utils, 'execute',
side_effect=processutils.ProcessExecutionError(
stderr='fake'))
def test_mkfs_with_unexpected_error(self, execute_mock):
self.assertRaises(processutils.ProcessExecutionError, utils.mkfs,
'ext4', '/my/block/dev', 'ext4-vol')
class IsHttpUrlTestCase(test_base.BaseTestCase):
def test_is_http_url(self):
self.assertTrue(utils.is_http_url('http://127.0.0.1'))
self.assertTrue(utils.is_http_url('https://127.0.0.1'))
self.assertTrue(utils.is_http_url('HTTP://127.1.2.3'))
self.assertTrue(utils.is_http_url('HTTPS://127.3.2.1'))
self.assertFalse(utils.is_http_url('Zm9vYmFy'))
self.assertFalse(utils.is_http_url('11111111'))

tox.ini

@@ -1,60 +1,26 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = py27,pep8
[testenv]
sitepackages = False
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
PYTHONDONTWRITEBYTECODE = 1
PYTHONHASHSEED=0
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
whitelist_externals = bash
commands =
bash -c "TESTS_DIR=./ironic/tests/ python setup.py testr --slowest --testr-args='{posargs}'"
# Use the lockutils wrapper to ensure that external locking works correctly
lockutils-wrapper python setup.py test --slowest --testr-args='{posargs}'
[tox:jenkins]
downloadcache = ~/cache/pip
[flake8]
show-source = True
ignore = E123,E126,E127,E128,E129,E711,H405,H904
exclude = .venv,.tox,dist,doc,*.egg,.update-venv
[testenv:pep8]
commands =
flake8 {posargs}
# Check that .po and .pot files are valid:
bash -c "find ironic -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null"
[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands =
python setup.py testr --coverage {posargs}
[testenv:checkconfig]
sitepackages = False
envdir = {toxworkdir}/venv
commands =
{toxinidir}/tools/config/check_uptodate.sh
[testenv:genconfig]
sitepackages = False
envdir = {toxworkdir}/venv
commands =
bash tools/config/generate_sample.sh -b . -p ironic -o etc/ironic
[testenv:gendocs]
sitepackages = False
envdir = {toxworkdir}/venv
commands =
python setup.py build_sphinx
commands = flake8 {posargs}
[testenv:venv]
setenv = PYTHONHASHSEED=0
commands = {posargs}
[flake8]
# E711: ignored because it is normal to use "column == None" in sqlalchemy
ignore = E123,E126,E127,E128,E129,E711
exclude = .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools,*ironic/nova*
max-complexity=17
[hacking]
import_exceptions = testtools.matchers, ironic.common.i18n