Initial commit

Change-Id: I34e41fabb648368189b479e4cfbc78747d465b95
Mark Goddard 2016-03-01 11:11:51 +02:00 committed by Oleksandr Berezovskyi
parent 4e2caa7e9c
commit de9c84dc80
30 changed files with 6153 additions and 0 deletions

LICENSE
@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README.rst
@@ -0,0 +1,75 @@
=========================================
Bareon-based deployment driver for Ironic
=========================================
The ``bareon_ironic`` package adds support for Bareon to OpenStack Ironic.
Ironic [#]_ is a bare metal provisioning service with support for multiple
hardware types. The Ironic architecture can delegate node-side work to a
deploy agent: a service, integrated into the bootstrap ramdisk image, that
performs provisioning tasks on the node.
``bareon_ironic`` contains pluggable driver code for Ironic that uses
Bareon [#]_ as the deploy agent. The current implementation requires, and is
tested with, the Ironic/Nova stable Kilo release.
Features overview
=================
Flexible deployment configuration
---------------------------------
A JSON document called deploy_config carries the partitions schema, the
partitioning behavior, the images schema, and various deployment arguments.
It can be passed in several places, such as Nova VM metadata, image metadata,
or node metadata. The resulting JSON is produced by a merge operation based
on the priorities configured in ``/etc/ironic/ironic.conf``. A sketch of such
a document is shown below.
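The exact schema is defined by Bareon, so the keys below are hypothetical and
for illustration only::

    {
        "partitions_policy": "verify",
        "partitions": [...],
        "images": [...]
    }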
LVM support
-----------
The configuration JSON allows defining schemas that mix plain partitions and
LVM logical volumes.
Multiple partitioning behaviors available
-----------------------------------------
- Verify. Reads the schema from the bare metal hardware and compares it with
  the user schema.
- Verify+clean. Compares the bare metal schema with the user schema and wipes
  particular filesystems based on the user schema.
- Clean. Wipes the disk and deploys from scratch.
Multiple image deployment
-------------------------
The configuration JSON allows defining more than one image. The Bareon Ironic
driver provides handles to switch between deployed images, which allows
performing bare metal node upgrades with minimal downtime.
Block-level copy & file-level Image deployment
----------------------------------------------
The Bareon Ironic driver supports both: the bare_swift drivers provide
block-level deployment, and the bare_rsync drivers provide file-level
deployment.
Deployment termination
----------------------
The driver allows terminating a deployment in both the silent (wait-callback)
and the active (deploying) phases.
Post-deployment hooks
---------------------
Two hook mechanisms are available: on_fail_script and deploy actions. The
former is a user-provided shell script executed inside the deploy ramdisk if
the deployment has failed. The latter is JSON-based and allows defining
various actions with associated resources that run after the deployment has
passed; a sketch is shown below.
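The structure below mirrors the actions JSON example documented in the
driver's actions module; the values are illustrative::

    {
        "actions": [
            {
                "name": "test_script_1",
                "cmd": "echo 'test 1 run success!'",
                "args": "",
                "sudo": false,
                "terminate_on_fail": true,
                "resources": [
                    {
                        "name": "swift_prefetched",
                        "mode": "push",
                        "url": "swift:max_container/swift_pref1",
                        "target": "/tmp/swift_prefetch_res"
                    }
                ]
            }
        ]
    }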
Building HTML docs
==================
::

    $ pip install sphinx
    $ cd bareon-ironic/doc && make html
.. [#] https://wiki.openstack.org/wiki/Ironic
.. [#] https://wiki.openstack.org/wiki/Bareon

bareon_ironic/bareon.py
@@ -0,0 +1,108 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ironic.drivers import base
from ironic.drivers.modules import discoverd
from ironic.drivers.modules import ipmitool
from ironic.drivers.modules import ssh
from bareon_ironic.modules import bareon_rsync
from bareon_ironic.modules import bareon_swift
class BareonSwiftAndIPMIToolDriver(base.BaseDriver):
"""Bareon Swift + IPMITool driver.
This driver implements the `core` functionality, combining
:class:`ironic.drivers.modules.ipmitool.IPMIPower` (for power on/off and
reboot) with
:class:`ironic.drivers.modules.bareon_swift.BareonSwiftDeploy`
(for image deployment).
Implementations are in those respective classes; this class is merely the
glue between them.
"""
def __init__(self):
self.power = ipmitool.IPMIPower()
self.deploy = bareon_swift.BareonSwiftDeploy()
self.management = ipmitool.IPMIManagement()
self.vendor = bareon_swift.BareonSwiftVendor()
self.inspect = discoverd.DiscoverdInspect.create_if_enabled(
'BareonSwiftAndIPMIToolDriver')
class BareonSwiftAndSSHDriver(base.BaseDriver):
"""Bareon Swift + SSH driver.
NOTE: This driver is meant only for testing environments.
This driver implements the `core` functionality, combining
:class:`ironic.drivers.modules.ssh.SSH` (for power on/off and reboot of
virtual machines tunneled over SSH), with
:class:`ironic.drivers.modules.bareon_swift.BareonSwiftDeploy`
(for image deployment). Implementations are in those respective classes;
this class is merely the glue between them.
"""
def __init__(self):
self.power = ssh.SSHPower()
self.deploy = bareon_swift.BareonSwiftDeploy()
self.management = ssh.SSHManagement()
self.vendor = bareon_swift.BareonSwiftVendor()
self.inspect = discoverd.DiscoverdInspect.create_if_enabled(
'BareonSwiftAndSSHDriver')
class BareonRsyncAndIPMIToolDriver(base.BaseDriver):
"""Bareon Rsync + IPMITool driver.
This driver implements the `core` functionality, combining
:class:`ironic.drivers.modules.ipmitool.IPMIPower` (for power on/off and
reboot) with
:class:`ironic.drivers.modules.bareon_rsync.BareonRsyncDeploy`
(for image deployment).
Implementations are in those respective classes; this class is merely the
glue between them.
"""
def __init__(self):
self.power = ipmitool.IPMIPower()
self.deploy = bareon_rsync.BareonRsyncDeploy()
self.management = ipmitool.IPMIManagement()
self.vendor = bareon_rsync.BareonRsyncVendor()
self.inspect = discoverd.DiscoverdInspect.create_if_enabled(
'BareonRsyncAndIPMIToolDriver')
class BareonRsyncAndSSHDriver(base.BaseDriver):
"""Bareon Rsync + SSH driver.
NOTE: This driver is meant only for testing environments.
This driver implements the `core` functionality, combining
:class:`ironic.drivers.modules.ssh.SSH` (for power on/off and reboot of
virtual machines tunneled over SSH), with
:class:`ironic.drivers.modules.bareon_rsync.BareonRsyncDeploy`
(for image deployment). Implementations are in those respective classes;
this class is merely the glue between them.
"""
def __init__(self):
self.power = ssh.SSHPower()
self.deploy = bareon_rsync.BareonRsyncDeploy()
self.management = ssh.SSHManagement()
self.vendor = bareon_rsync.BareonRsyncVendor()
self.inspect = discoverd.DiscoverdInspect.create_if_enabled(
'BareonRsyncAndSSHDriver')
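# A minimal sketch of enabling these drivers in ironic.conf. The entry-point
# names below are assumptions; the actual names are defined by this package's
# setup.cfg:
#
#   [DEFAULT]
#   enabled_drivers = bare_swift_ipmi,bare_rsync_ipmi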

File diff suppressed because it is too large

@@ -0,0 +1,6 @@
default deploy
label deploy
kernel {{ pxe_options.deployment_aki_path }}
append initrd={{ pxe_options.deployment_ari_path }} text {{ pxe_options.bareon_pxe_append_params|default("", true) }} deployment_id={{ pxe_options.deployment_id }} api-url={{ pxe_options['api-url'] }}
ipappend 2

@@ -0,0 +1,6 @@
default deploy
label deploy
kernel {{ pxe_options.deployment_aki_path }}
append initrd={{ pxe_options.deployment_ari_path }} root=live:{{ pxe_options['rootfs-url'] }} boot=live text fetch={{ pxe_options['rootfs-url'] }} {{ pxe_options.bareon_pxe_append_params|default("", true) }} deployment_id={{ pxe_options.deployment_id }} api-url={{ pxe_options['api-url'] }}
ipappend 2

@@ -0,0 +1,48 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Bareon driver exceptions"""
from ironic.common import exception
from ironic.common.i18n import _
class IncompatibleRamdiskVersion(exception.IronicException):
message = _("Incompatible node ramdisk version. %(details)s")
class UnsafeUrlError(exception.IronicException):
message = _("URL '%(url)s' is not safe and cannot be used for sensitive "
"data. %(details)s")
class InvalidResourceState(exception.IronicException):
pass
class DeploymentTimeout(exception.IronicException):
message = _("Deployment timeout expired. Timeout: %(timeout)s")
class RetriesException(exception.IronicException):
message = _("Retry count exceeded. Retried %(retry_count)d times.")
class DeployTerminationSucceed(exception.IronicException):
message = _("Deploy termination succeeded.")
class BootSwitchFailed(exception.IronicException):
message = _("Boot switch failed. Error: %(error)s")

@@ -0,0 +1,71 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Bareon Rsync deploy driver.
"""
from oslo_config import cfg
from bareon_ironic.modules import bareon_utils
from bareon_ironic.modules import bareon_base
from bareon_ironic.modules.resources import resources
from bareon_ironic.modules.resources import rsync
rsync_opts = [
cfg.StrOpt('rsync_master_path',
default='/rsync/master_images',
help='Directory where master rsync images are stored on disk.'),
cfg.IntOpt('image_cache_size',
default=20480,
help='Maximum size (in MiB) of cache for master images, '
'including those in use.'),
cfg.IntOpt('image_cache_ttl',
default=10080,
help='Maximum TTL (in minutes) for old master images in '
'cache.'),
]
CONF = cfg.CONF
CONF.register_opts(rsync_opts, group='rsync')
class BareonRsyncDeploy(bareon_base.BareonDeploy):
"""Interface for deploy-related actions."""
def _get_deploy_driver(self):
return 'rsync'
def _get_image_resource_mode(self):
return resources.PullMountResource.MODE
class BareonRsyncVendor(bareon_base.BareonVendor):
def _execute_deploy_script(self, task, ssh, cmd, **kwargs):
if CONF.rsync.rsync_secure_transfer:
user = kwargs.get('username', 'root')
key_file = kwargs.get('key_filename', '/dev/null')
ssh_port = kwargs.get('bareon_ssh_port', 22)
host = (kwargs.get('host') or
bareon_utils.get_node_ip(task))
with bareon_utils.ssh_tunnel(rsync.RSYNC_PORT, user,
key_file, host, ssh_port):
return super(
BareonRsyncVendor, self
)._execute_deploy_script(task, ssh, cmd, **kwargs)
else:
return super(
BareonRsyncVendor, self
)._execute_deploy_script(task, ssh, cmd, **kwargs)
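# A sketch of the corresponding ironic.conf section, using the defaults
# registered above (rsync_secure_transfer is consumed here but registered
# elsewhere in this package):
#
#   [rsync]
#   rsync_master_path = /rsync/master_images
#   image_cache_size = 20480
#   image_cache_ttl = 10080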

@@ -0,0 +1,35 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Bareon Swift deploy driver.
"""
from bareon_ironic.modules import bareon_base
from bareon_ironic.modules.resources import resources
class BareonSwiftDeploy(bareon_base.BareonDeploy):
"""Interface for deploy-related actions."""
def _get_deploy_driver(self):
return 'swift'
def _get_image_resource_mode(self):
return resources.PullSwiftTempurlResource.MODE
class BareonSwiftVendor(bareon_base.BareonVendor):
pass

@@ -0,0 +1,251 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import copy
import hashlib
import os
import subprocess
import tempfile
import six
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import strutils
from ironic.common import dhcp_factory
from ironic.common import exception
from ironic.common import keystone
from ironic.common import utils
from ironic.common.i18n import _, _LW
from ironic.openstack.common import log as logging
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
def get_service_tenant_id():
ksclient = keystone._get_ksclient()
if not keystone._is_apiv3(CONF.keystone_authtoken.auth_uri,
CONF.keystone_authtoken.auth_version):
tenant_name = CONF.keystone_authtoken.admin_tenant_name
if tenant_name:
return ksclient.tenants.find(name=tenant_name).to_dict()['id']
def change_node_dict(node, dict_name, new_data):
"""Workaround for Ironic object model to update dict."""
dict_data = getattr(node, dict_name).copy()
dict_data.update(new_data)
setattr(node, dict_name, dict_data)
def str_to_alnum(s):
if not s.isalnum():
s = ''.join([c for c in s if c.isalnum()])
return s
def str_replace_non_alnum(s, replace_by="_"):
if not s.isalnum():
s = ''.join([(c if c.isalnum() else replace_by) for c in s])
return s
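# Example: str_replace_non_alnum("swift pref-1") -> "swift_pref_1"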
def validate_json(required, raw):
for k in required:
if k not in raw:
raise exception.MissingParameterValue(
"%s object missing %s parameter"
% (str(raw), k)
)
def get_node_ip(task):
provider = dhcp_factory.DHCPFactory()
addresses = provider.provider.get_ip_addresses(task)
if addresses:
return addresses[0]
return None
def get_ssh_connection(task, **kwargs):
ssh = utils.ssh_connect(kwargs)
# Note(oberezovskyi): this is required to prevent printing private_key to
# the conductor log
if kwargs.get('key_contents'):
kwargs['key_contents'] = '*****'
LOG.debug("SSH with params:")
LOG.debug(kwargs)
return ssh
@contextlib.contextmanager
def ssh_tunnel(port, user, key_file, target_host, ssh_port=22):
tunnel = _create_ssh_tunnel(port, port, user, key_file, target_host,
local_forwarding=False, ssh_port=ssh_port)
try:
yield
finally:
tunnel.terminate()
def _create_ssh_tunnel(remote_port, local_port, user, key_file, target_host,
remote_ip='127.0.0.1', local_ip='127.0.0.1',
local_forwarding=True,
ssh_port=22):
cmd = ['ssh', '-N', '-o', 'StrictHostKeyChecking=no', '-o',
'UserKnownHostsFile=/dev/null', '-p', str(ssh_port), '-i', key_file]
if local_forwarding:
cmd += ['-L', '{}:{}:{}:{}'.format(local_ip, local_port, remote_ip,
remote_port)]
else:
cmd += ['-R', '{}:{}:{}:{}'.format(remote_ip, remote_port, local_ip,
local_port)]
cmd.append('@'.join((user, target_host)))
# TODO(lobur): Make this sync, check status. (may use ssh control socket).
return subprocess.Popen(cmd)
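# For the reverse-forwarding case used by ssh_tunnel() above, the resulting
# command looks like (illustrative values):
#   ssh -N -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
#       -p 22 -i /tmp/key -R 127.0.0.1:873:127.0.0.1:873 root@10.0.0.5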
def sftp_write_to(sftp, data, path):
with tempfile.NamedTemporaryFile(dir=CONF.tempdir) as f:
f.write(data)
f.flush()
sftp.put(f.name, path)
def sftp_ensure_tree(sftp, path):
try:
sftp.mkdir(path)
except IOError:
pass
# TODO(oberezovskyi): merge this code with processutils.ssh_execute
def ssh_execute(ssh, cmd, process_input=None,
addl_env=None, check_exit_code=True,
binary=False, timeout=None):
sanitized_cmd = strutils.mask_password(cmd)
LOG.debug('Running cmd (SSH): %s', sanitized_cmd)
if addl_env:
raise exception.InvalidArgumentError(
_('Environment not supported over SSH'))
if process_input:
# This is (probably) fixable if we need it...
raise exception.InvalidArgumentError(
_('process_input not supported over SSH'))
stdin_stream, stdout_stream, stderr_stream = ssh.exec_command(cmd)
channel = stdout_stream.channel
if timeout and not channel.status_event.wait(timeout=timeout):
raise exception.SSHCommandFailed(cmd=cmd)
# NOTE(justinsb): This seems suspicious...
# ...other SSH clients have buffering issues with this approach
stdout = stdout_stream.read()
stderr = stderr_stream.read()
stdin_stream.close()
exit_status = channel.recv_exit_status()
if six.PY3:
# Decode from the locale using the surrogateescape error handler
# (decoding cannot fail). Decode even if binary is True because
# mask_password() requires Unicode on Python 3
stdout = os.fsdecode(stdout)
stderr = os.fsdecode(stderr)
stdout = strutils.mask_password(stdout)
stderr = strutils.mask_password(stderr)
# exit_status == -1 if no exit code was returned
if exit_status != -1:
LOG.debug('Result was %s' % exit_status)
if check_exit_code and exit_status != 0:
raise processutils.ProcessExecutionError(exit_code=exit_status,
stdout=stdout,
stderr=stderr,
cmd=sanitized_cmd)
if binary:
if six.PY2:
# On Python 2, stdout is a bytes string if mask_password() failed
# to decode it, or an Unicode string otherwise. Encode to the
# default encoding (ASCII) because mask_password() decodes from
# the same encoding.
if isinstance(stdout, unicode):
stdout = stdout.encode()
if isinstance(stderr, unicode):
stderr = stderr.encode()
else:
# fsencode() is the reverse operation of fsdecode()
stdout = os.fsencode(stdout)
stderr = os.fsencode(stderr)
return (stdout, stderr)
def umount_without_raise(loc, *args):
"""Helper method to umount without raise."""
try:
utils.umount(loc, *args)
except processutils.ProcessExecutionError as e:
LOG.warn(_LW("umount_without_raise unable to umount dir %(path)s, "
"error: %(e)s"), {'path': loc, 'e': e})
def md5(url):
"""Generate md5 has for the sting."""
return hashlib.md5(url).hexdigest()
class RawToPropertyMixin(object):
"""A helper mixin for json-based entities.
Use it in JSON-based class definitions: if a class corresponds to a JSON
document, this mixin provides a direct json <-> class attribute mapping,
plus out-of-the-box serialization back to JSON.
"""
_raw = {}
def __getattr__(self, item):
if not self._is_special_name(item):
return self._raw.get(item)
def __setattr__(self, key, value):
if (not self._is_special_name(key)) and (key not in self.__dict__):
self._raw[key] = value
else:
self.__dict__[key] = value
def _is_special_name(self, name):
return name.startswith("_") or name.startswith("__")
def to_dict(self):
data = {}
for k, v in six.iteritems(self._raw):
if (isinstance(v, list) and len(v) > 0 and
isinstance(v[0], RawToPropertyMixin)):
data[k] = [r.to_dict() for r in v]
else:
data[k] = v
return copy.deepcopy(data)
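# A minimal usage sketch (the Foo class is hypothetical): attribute access
# transparently maps to the underlying JSON dict.
#
#   class Foo(RawToPropertyMixin):
#       def __init__(self, raw):
#           self._raw = raw
#
#   f = Foo({'name': 'res1'})
#   f.name             # -> 'res1'
#   f.target = '/tmp'  # stored in f._raw
#   f.to_dict()        # -> {'name': 'res1', 'target': '/tmp'}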

@@ -0,0 +1,229 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Deploy driver actions.
"""
import datetime
import os
import tempfile
from oslo_concurrency import processutils
from oslo_config import cfg
from ironic.common import exception
from ironic.openstack.common import log
from bareon_ironic.modules import bareon_utils
from bareon_ironic.modules.resources import resources
from bareon_ironic.modules.resources import rsync
LOG = log.getLogger(__name__)
CONF = cfg.CONF
# This module allows to run a set of json-defined actions on the node.
#
# Actions json structure:
# {
# "actions": [
# {
# "cmd": "echo 'test 1 run success!'",
# "name": "test_script_1",
# "terminate_on_fail": true,
# "args": "",
# "sudo": false,
# "resources": [
# {
# "name": "swift prefetced",
# "mode": "push",
# "url": "swift:max_container/swift_pref1",
# "target": "/tmp/swift_prefetch_res"
# }
# ]
# },
# {
# another action ...
# }
# ]
# }
#
# Each action carries a list of associated resources. Resource, and thus
# action, can be in two states: not_fetched and fetched. See Resource
# documentation. You should always pass a json of not_fetched resources.
#
# The workflow can be one of the following:
# 1. When you need to fetch actions while you have a proper context,
# then serialize them, and run later, deserializing from a file.
# - Create controller from user json -> fetch_action_resources() -> to_dict()
# - Do whatever while actions are serialized, e.g. wait for node boot.
# - Create controller from the serialized json -> execute()
# 2. A workflow when you need to fetch and run actions immediately.
# - Create controller from user json -> execute()
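#
# A sketch of workflow 1 in code (SSH/SFTP setup and the wait for node boot
# are elided):
#
#   ctl = ActionController(task, user_json)
#   ctl.fetch_action_resources()
#   saved = ctl.to_dict()
#   ... node boots ...
#   ctl = ActionController(task, saved)
#   ctl.ssh_and_execute(node_ip, ssh_user, ssh_key_url)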
class Action(resources.ResourceList):
def __init__(self, action, task):
req = ('name', 'cmd', 'args', 'sudo', 'resources',
'terminate_on_fail')
bareon_utils.validate_json(req, action)
super(Action, self).__init__(action, task)
LOG.debug("[%s] Action created from %s"
% (self._task.node.uuid, action))
def execute(self, ssh, sftp):
"""Execute action.
Fetch resources, upload them, and run command.
"""
cmd = ("%s %s" % (self.cmd, self.args))
if self.sudo:
cmd = "sudo %s" % cmd
self.fetch_resources()
self.upload_resources(sftp)
return processutils.ssh_execute(ssh, cmd)
@staticmethod
def from_dict(action, task):
return Action(action, task)
class ActionController(bareon_utils.RawToPropertyMixin):
def __init__(self, task, action_data):
self._raw = action_data
self._task = task
try:
req = ('actions',)
bareon_utils.validate_json(req, action_data)
self.actions = [Action.from_dict(a, self._task)
for a in action_data['actions']]
except Exception as ex:
self._save_exception_result(ex)
raise
def fetch_action_resources(self):
"""Fetch all resources of all actions.
Must be idempotent.
"""
for action in self.actions:
try:
action.fetch_resources()
except Exception as ex:
# Cleanup is already done in ResourceList.fetch_resources()
self._save_exception_result(ex)
raise
def cleanup_action_resources(self):
"""Cleanup all resources of all actions.
Must be idempotent.
Must return None if called when actions resources are not fetched.
"""
for action in self.actions:
action.cleanup_resources()
def _execute(self, ssh, sftp):
results = []
# Clean previous results at the beginning
self._save_results(results)
for action in self.actions:
try:
out, err = action.execute(ssh, sftp)
results.append({'name': action.name, 'passed': True})
LOG.info("[%s] Action '%s' finished with:"
"\n stdout: %s\n stderr: %s" %
(self._task.node.uuid, action.name, out, err))
except Exception as ex:
results.append({'name': action.name, 'passed': False,
'exception': str(ex)})
LOG.info("[%s] Action '%s' failed with error: %s" %
(self._task.node.uuid, action.name, str(ex)))
if action.terminate_on_fail:
raise
finally:
# Save results after each action. Result list will grow until
# all actions are done.
self._save_results(results)
def execute(self, ssh, sftp, **ssh_params):
"""Execute using already opened SSH connection to the node."""
try:
if CONF.rsync.rsync_secure_transfer:
ssh_user = ssh_params.get('username')
ssh_key_file = ssh_params.get('key_filename')
ssh_host = ssh_params.get('host')
ssh_port = ssh_params.get('port', 22)
with bareon_utils.ssh_tunnel(rsync.RSYNC_PORT, ssh_user,
ssh_key_file, ssh_host, ssh_port):
self._execute(ssh, sftp)
else:
self._execute(ssh, sftp)
finally:
self.cleanup_action_resources()
def ssh_and_execute(self, node_ip, ssh_user, ssh_key_url):
"""Open an SSH connection to the node and execute."""
# NOTE(lobur): Security flaw.
# A random-name tempfile with private key contents exists on Conductor
# during the time of execution of tenant-image actions when
# rsync_secure_transfer is True.
# Because we are using a bash command to start a tunnel we need to
# have the private key in a file.
# To fix this we need to start tunnel using Paramiko, which
# is impossible currently. Paramiko would accept raw key contents,
# thus we won't need a file.
with tempfile.NamedTemporaryFile(delete=True) as key_file:
os.chmod(key_file.name, 0o700)
try:
if not (ssh_user and ssh_key_url):
raise exception.MissingParameterValue(
"Need action_user and action_key params to "
"execute actions")
key_contents = resources.url_download_raw_secured(
self._task.context, self._task.node, ssh_key_url)
key_file.file.write(key_contents)
key_file.file.flush()
ssh = bareon_utils.get_ssh_connection(
self._task, username=ssh_user,
key_contents=key_contents, host=node_ip)
sftp = ssh.open_sftp()
except Exception as ex:
self._save_exception_result(ex)
raise
else:
self.execute(ssh, sftp,
username=ssh_user,
key_filename=key_file.name,
host=node_ip)
def _save_results(self, results):
bareon_utils.change_node_dict(
self._task.node, 'instance_info',
{'exec_actions': {
'results': results,
'finished_at': str(datetime.datetime.utcnow())}})
self._task.node.save()
def _save_exception_result(self, ex):
self._save_results({'exception': str(ex)})

@@ -0,0 +1,373 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""An extension for ironic/common/image_service.py"""
import abc
import os
import shutil
import uuid
from oslo_config import cfg
from oslo_concurrency import processutils
from oslo_utils import uuidutils
import requests
import six
import six.moves.urllib.parse as urlparse
from ironic.common import exception
from ironic.common.i18n import _
from ironic.openstack.common import log as logging
from ironic.common import image_service
from ironic.common import keystone
from ironic.common import utils
from ironic.common import swift
from bareon_ironic.modules import bareon_utils
swift_opts = [
cfg.IntOpt('swift_native_temp_url_duration',
default=1200,
help='The length of time in seconds that the temporary URL '
'will be valid for. Defaults to 20 minutes. This option '
'is different from the "swift_temp_url_duration" defined '
'under [glance]. Glance option controls temp urls '
'obtained from Glance while this option controls ones '
'obtained from Swift directly, e.g. when '
'swift:<object> ref is used.')
]
CONF = cfg.CONF
CONF.register_opts(swift_opts, group='swift')
LOG = logging.getLogger(__name__)
IMAGE_CHUNK_SIZE = 1024 * 1024  # 1 MiB
@six.add_metaclass(abc.ABCMeta)
class BaseImageService(image_service.BaseImageService):
"""Provides retrieval of disk images."""
def __init__(self, *args, **kwargs):
super(BaseImageService, self).__init__()
def get_image_unique_id(self, image_href):
"""Get unique ID of the resource.
If possible, the ID should change if resource contents are changed.
:param image_href: Image reference.
:returns: Unique ID of the resource.
"""
# NOTE(vdrok): Doing conversion of href in case it's unicode
# string, UUID cannot be generated for unicode strings on python 2.
return str(uuid.uuid5(uuid.NAMESPACE_URL,
image_href.encode('utf-8')))
@abc.abstractmethod
def get_http_href(self, image_href):
"""Get HTTP ref to the image.
Validate given href and, if possible, convert it to HTTP ref.
Otherwise raise ImageRefValidationFailed with appropriate message.
:param image_href: Image reference.
:raises: exception.ImageRefValidationFailed.
:returns: http reference to the image
"""
def _GlanceImageService(client=None, version=1, context=None):
module = image_service.import_versioned_module(version, 'image_service')
service_class = getattr(module, 'GlanceImageService')
if (context is not None and CONF.glance.auth_strategy == 'keystone' and
not context.auth_token):
context.auth_token = keystone.get_admin_auth_token()
return service_class(client, version, context)
class GlanceImageService(BaseImageService):
def __init__(self, client=None, version=1, context=None):
super(GlanceImageService, self).__init__()
self.glance = _GlanceImageService(client=client,
version=version,
context=context)
def __getattr__(self, attr):
# NOTE(lobur): Known redirects:
# - swift_temp_url
return self.glance.__getattribute__(attr)
def get_image_unique_id(self, image_href):
return self.validate_href(image_href)['id']
def get_http_href(self, image_href):
img_info = self.validate_href(image_href)
return self.glance.swift_temp_url(img_info)
def validate_href(self, image_href):
parsed_ref = urlparse.urlparse(image_href)
# Supporting both glance:UUID and glance://UUID URLs
image_href = parsed_ref.path or parsed_ref.netloc
if not uuidutils.is_uuid_like(image_href):
images = self.glance.detail(filters={'name': image_href})
if len(images) == 0:
raise exception.ImageNotFound(_(
'No Glance images found by name %s') % image_href)
if len(images) > 1:
raise exception.ImageRefValidationFailed(_(
'Multiple Glance images found by name %s') % image_href)
image_href = images[0]['id']
return self.glance.show(image_href)
def download(self, image_href, image_file):
image_href = self.validate_href(image_href)['id']
return self.glance.download(image_href, image_file)
def show(self, image_href):
return self.validate_href(image_href)
class HttpImageService(image_service.HttpImageService, BaseImageService):
"""Provides retrieval of disk images using HTTP."""
def get_http_href(self, image_href):
self.validate_href(image_href)
return image_href
def download(self, image_href, image_file):
"""Downloads image to specified location.
:param image_href: Image reference.
:param image_file: File object to write data to.
:raises: exception.ImageRefValidationFailed if GET request returned
response code not equal to 200.
:raises: exception.ImageDownloadFailed if:
* IOError happened during file write;
* GET request failed.
"""
try:
response = requests.get(image_href, stream=True)
if response.status_code != 200:
raise exception.ImageRefValidationFailed(
image_href=image_href,
reason=_(
"Got HTTP code %s instead of 200 in response to "
"GET request.") % response.status_code)
response.raw.decode_content = True
with response.raw as input_img:
shutil.copyfileobj(input_img, image_file, IMAGE_CHUNK_SIZE)
except (requests.RequestException, IOError) as e:
raise exception.ImageDownloadFailed(image_href=image_href,
reason=e)
class FileImageService(image_service.FileImageService, BaseImageService):
"""Provides retrieval of disk images available locally on the conductor."""
def get_http_href(self, image_href):
raise exception.ImageRefValidationFailed(
"File image store is not able to provide HTTP reference.")
def get_image_unique_id(self, image_href):
"""Get unique ID of the resource.
:param image_href: Image reference.
:raises: exception.ImageRefValidationFailed if source image file
doesn't exist.
:returns: Unique ID of the resource.
"""
path = self.validate_href(image_href)
stat = str(os.stat(path))
return bareon_utils.md5(stat)
class SwiftImageService(BaseImageService):
def __init__(self, context):
self.client = self._get_swiftclient(context)
super(SwiftImageService, self).__init__()
def get_image_unique_id(self, image_href):
return self.show(image_href)['properties']['etag']
def get_http_href(self, image_href):
container, object, headers = self.validate_href(image_href)
return self.client.get_temp_url(
container, object, CONF.swift.swift_native_temp_url_duration)
def validate_href(self, image_href):
path = urlparse.urlparse(image_href).path.lstrip('/')
if not path:
raise exception.ImageRefValidationFailed(
_("No path specified in swift resource reference: %s. "
"Reference must be like swift:container/path")
% str(image_href))
container, s, object = path.partition('/')
try:
headers = self.client.head_object(container, object)
except exception.SwiftOperationError as e:
raise exception.ImageRefValidationFailed(
_("Cannot fetch %(url)s resource. %(exc)s") %
dict(url=str(image_href), exc=str(e)))
return (container, object, headers)
def download(self, image_href, image_file):
try:
container, object, headers = self.validate_href(image_href)
headers, body = self.client.get_object(container, object,
chunk_size=IMAGE_CHUNK_SIZE)
for chunk in body:
image_file.write(chunk)
except exception.SwiftOperationError as ex:
raise exception.ImageDownloadFailed(
_("Cannot fetch %(url)s resource. %(exc)s") %
dict(url=str(image_href), exc=str(ex)))
def show(self, image_href):
container, object, headers = self.validate_href(image_href)
return {
'size': int(headers['content-length']),
'properties': headers
}
@staticmethod
def _get_swiftclient(context):
return swift.SwiftAPI(user=context.user,
preauthtoken=context.auth_token,
preauthtenant=context.tenant)
class RsyncImageService(BaseImageService):
def get_http_href(self, image_href):
raise exception.ImageRefValidationFailed(
"Rsync image store is not able to provide HTTP reference.")
def validate_href(self, image_href):
path = urlparse.urlparse(image_href).path.lstrip('/')
if not path:
raise exception.InvalidParameterValue(
_("No path specified in rsync resource reference: %s. "
"Reference must be like rsync:host::module/path")
% str(image_href))
try:
stdout, stderr = utils.execute(
'rsync', '--stats', '--dry-run',
path,
".",
check_exit_code=[0],
log_errors=processutils.LOG_ALL_ERRORS)
return path, stdout, stderr
except (processutils.ProcessExecutionError, OSError) as ex:
raise exception.ImageRefValidationFailed(
_("Cannot fetch %(url)s resource. %(exc)s") %
dict(url=str(image_href), exc=str(ex)))
def download(self, image_href, image_file):
path, out, err = self.validate_href(image_href)
try:
utils.execute('rsync', '-tvz',
path,
image_file.name,
check_exit_code=[0],
log_errors=processutils.LOG_ALL_ERRORS)
except (processutils.ProcessExecutionError, OSError) as ex:
raise exception.ImageDownloadFailed(
_("Cannot fetch %(url)s resource. %(exc)s") %
dict(url=str(image_href), exc=str(ex)))
def show(self, image_href):
path, out, err = self.validate_href(image_href)
# Example of the size string:
# "Total file size: 2218131456 bytes"
size_str = [l for l in out.splitlines()
if "Total file size" in l][0]
size = [s for s in size_str.split() if s.isdigit()][0]
return {
'size': int(size),
'properties': {}
}
protocol_mapping = {
'http': HttpImageService,
'https': HttpImageService,
'file': FileImageService,
'glance': GlanceImageService,
'swift': SwiftImageService,
'rsync': RsyncImageService,
}
def get_image_service(image_href, client=None, version=1, context=None):
"""Get image service instance to download the image.
:param image_href: String containing href to get image service for.
:param client: Glance client to be used for download, used only if
image_href is Glance href.
:param version: Version of Glance API to use, used only if image_href is
Glance href.
:param context: request context, used only if image_href is Glance href.
:raises: exception.ImageRefValidationFailed if no image service can
handle specified href.
:returns: Instance of an image service class that is able to download
specified image.
"""
scheme = urlparse.urlparse(image_href).scheme.lower()
try:
cls = protocol_mapping[scheme or 'glance']
except KeyError:
raise exception.ImageRefValidationFailed(
image_href=image_href,
reason=_('Image download protocol '
'%s is not supported.') % scheme
)
if cls == GlanceImageService:
return cls(client, version, context)
return cls(context)
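# Usage sketch (assumes a request context `ctx`):
#
#   svc = get_image_service('swift:max_container/swift_pref1', context=ctx)
#   with open('/tmp/swift_prefetch_res', 'wb') as f:
#       svc.download('swift:max_container/swift_pref1', f)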
def _get_glanceclient(context):
return GlanceImageService(version=2, context=context)
def get_glance_image_uuid_name(task, url):
"""Converting glance links.
Links like:
glance:name
glance://name
glance:uuid
glance://uuid
name
uuid
are converted to tuple
uuid, name
"""
urlobj = urlparse.urlparse(url)
if urlobj.scheme and urlobj.scheme != 'glance':
raise exception.InvalidImageRef("Only glance images are supported.")
path = urlobj.path or urlobj.netloc
img_info = _get_glanceclient(task.context).show(path)
return img_info['id'], img_info['name']

@@ -0,0 +1,557 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import StringIO
import os
from oslo_config import cfg
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from ironic.common import exception
from ironic.common import utils
from ironic.common.i18n import _
from ironic.drivers.modules import image_cache
from ironic.openstack.common import fileutils
from ironic.openstack.common import log
from bareon_ironic.modules import bareon_exception
from bareon_ironic.modules import bareon_utils
from bareon_ironic.modules.resources import rsync
from bareon_ironic.modules.resources import image_service
opts = [
cfg.StrOpt('default_resource_storage_prefix',
default='glance:',
help='A prefix that will be added when resource reference '
'is not url-like. E.g. if storage prefix is '
'"rsync:10.0.0.10::module1/" then "resource_1" is treated '
'like rsync:10.0.0.10::module1/resource_1'),
cfg.StrOpt('resource_root_path',
default='/ironic_resources',
help='Directory where per-node resources are stored.'),
cfg.StrOpt('resource_cache_master_path',
default='/ironic_resources/master_resources',
help='Directory where master resources are stored.'),
cfg.IntOpt('resource_cache_size',
default=10240,
help='Maximum size (in MiB) of cache for master resources, '
'including those in use.'),
cfg.IntOpt('resource_cache_ttl',
default=1440,
help='Maximum TTL (in minutes) for old master resources in '
'cache.'),
]
CONF = cfg.CONF
CONF.register_opts(opts, group='resources')
LOG = log.getLogger(__name__)
@image_cache.cleanup(priority=25)
class ResourceCache(image_cache.ImageCache):
def __init__(self, image_service=None):
super(ResourceCache, self).__init__(
CONF.resources.resource_cache_master_path,
# MiB -> B
CONF.resources.resource_cache_size * 1024 * 1024,
# min -> sec
CONF.resources.resource_cache_ttl * 60,
image_service=image_service)
def get_abs_node_workdir_path(node):
return os.path.join(CONF.resources.resource_root_path,
node.uuid)
def get_node_resources_dir(node):
return get_abs_node_workdir_path(node)
def get_node_resources_dir_rsync(node):
return rsync.get_abs_node_workdir_path(node)
def _url_to_filename(url):
basename = bareon_utils.str_replace_non_alnum(os.path.basename(url))
return "%(url_hash)s_%(name)s" % dict(url_hash=bareon_utils.md5(url),
name=basename)
def append_storage_prefix(node, url):
urlobj = parse.urlparse(url)
if not urlobj.scheme:
prefix = CONF.resources.default_resource_storage_prefix
LOG.info('[%(node)s] Plain reference given: "%(ref)s". Adding a '
'resource_storage_prefix "%(pref)s".'
% dict(node=node.uuid, ref=url, pref=prefix))
url = prefix + url
return url
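# Example with the default prefix "glance:": a plain reference "my_image"
# becomes "glance:my_image"; URL-like references pass through unchanged.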
def url_download(context, node, url, dest_path=None):
if not url:
return
url = url.rstrip('/')
url = append_storage_prefix(node, url)
LOG.info(_("[%(node)s] Downloading resource by the following url: %(url)s")
% dict(node=node.uuid, url=url))
if not dest_path:
dest_path = os.path.join(get_node_resources_dir(node),
_url_to_filename(url))
fileutils.ensure_tree(os.path.dirname(dest_path))
resource_storage = image_service.get_image_service(
url, context=context)
# NOTE(lobur): http(s) and rsync resources are cached based on the URL.
# They do not have a per-revision object UUID / object hash, thus the
# cache cannot identify a change of the contents. If the URL doesn't
# change, the resource is assumed to be the same - cache hit.
cache = ResourceCache(resource_storage)
cache.fetch_image(url, dest_path)
return dest_path
def url_download_raw(context, node, url):
if not url:
return
resource_path = url_download(context, node, url)
with open(resource_path) as f:
raw = f.read()
utils.unlink_without_raise(resource_path)
return raw
def url_download_json(context, node, url):
if not url:
return
raw = url_download_raw(context, node, url)
try:
return jsonutils.loads(raw)
except ValueError:
raise exception.InvalidParameterValue(
_('Resource %s is not a JSON.') % url)
def url_download_raw_secured(context, node, url):
"""Download raw contents of the URL bypassing cache and temp files."""
if not url:
return
url = url.rstrip('/')
url = append_storage_prefix(node, url)
scheme = parse.urlparse(url).scheme
sources_with_tenant_isolation = ('glance', 'swift')
if scheme not in sources_with_tenant_isolation:
raise bareon_exception.UnsafeUrlError(
url=url,
details="Use one of the following storages "
"for this resource: %s"
% str(sources_with_tenant_isolation))
resource_storage = image_service.get_image_service(
url, context=context)
out = StringIO.StringIO()
try:
resource_storage.download(url, out)
return out.getvalue()
finally:
out.close()
class Resource(bareon_utils.RawToPropertyMixin):
"""Base class for Ironic Resource
Used to manage a resource which should be fetched from the URL and
then uploaded to the node. Each resource is a single file.
Resource can be in two states: not_fetched and fetched. When fetched,
a resource has an additional attribute, local_path.
"""
def __init__(self, resource, task, resource_list_name):
self._raw = resource
self._resource_list_name = resource_list_name
self._task = task
self.name = bareon_utils.str_replace_non_alnum(resource['name'])
self._validate(resource)
def _validate(self, raw):
"""Validates resource source json depending of the resource state."""
if self.is_fetched():
req = ('name', 'mode', 'target', 'url', 'local_path')
else:
req = ('name', 'mode', 'target', 'url')
bareon_utils.validate_json(req, raw)
def is_fetched(self):
"""Shows whether the resouce has been fetched or not."""
return bool(self._raw.get('local_path'))
def get_dir_names(self):
"""Returns a list of directory names
These directories show where the resource can reside when serialized.
"""
raise NotImplementedError
def fetch(self):
"""Download the resource from the URL and store locally.
Must be idempotent.
"""
raise NotImplementedError
def upload(self, sftp):
"""Take resource stored locally and put to the node at target path."""
raise NotImplementedError
def cleanup(self):
"""Cleanup files used to store the resource locally.
Must be idempotent.
Must return None if called when the resource is not fetched.
"""
if not self.is_fetched():
return
utils.unlink_without_raise(self.local_path)
self._raw.pop('local_path', None)
@staticmethod
def from_dict(resource, task, resource_list_name):
"""Generic method used to instantiate a resource
Returns particular implementation of the resource depending of the
'mode' attribute value.
"""
mode = resource.get('mode')
if not mode:
raise exception.InvalidParameterValue(
"Missing resource 'mode' attribute for %s resource."
% str(resource)
)
mode_to_class = {}
for c in Resource.__subclasses__():
mode_to_class[c.MODE] = c
try:
return mode_to_class[mode](resource, task, resource_list_name)
except KeyError:
raise exception.InvalidParameterValue(
"Unknown resource mode: '%s'. Supported modes are: "
"%s " % (mode, str(mode_to_class.keys()))
)
class PushResource(Resource):
"""A resource with immediate upload
Resource file is uploaded to the node at target path.
"""
MODE = 'push'
def get_dir_names(self):
my_dir = os.path.join(
get_node_resources_dir(self._task.node),
self._resource_list_name
)
return [my_dir]
def fetch(self):
if self.is_fetched():
return
res_dir = self.get_dir_names()[0]
local_path = os.path.join(res_dir, self.name)
url_download(self._task.context, self._task.node, self.url,
dest_path=local_path)
self.local_path = local_path
def upload(self, sftp):
if not self.is_fetched():
raise bareon_exception.InvalidResourceState(
"Cannot upload action '%s' because it is not fetched."
% self.name
)
LOG.info("[%s] Uploading resource %s to the node at %s." % (
self._task.node.uuid, self.name, self.target))
bareon_utils.sftp_ensure_tree(sftp, os.path.dirname(self.target))
sftp.put(self.local_path, self.target)
class PullResource(Resource):
"""A resource with delayed upload
It is fetched onto the rsync share, and an rsync pointer is uploaded to the
node at the target path. The user (or action) can use this pointer to
download the resource from the node shell.
"""
MODE = 'pull'
def _validate(self, raw):
if self.is_fetched():
req = ('name', 'mode', 'target', 'url', 'local_path',
'pull_url')
else:
req = ('name', 'mode', 'target', 'url')
bareon_utils.validate_json(req, raw)
def get_dir_names(self):
my_dir = os.path.join(
get_node_resources_dir_rsync(self._task.node),
self._resource_list_name
)
return [my_dir]
def fetch(self):
if self.is_fetched():
return
# NOTE(lobur): Security issue.
# Resources of all tenants are on the same rsync root, so a tenant
# can change the URL manually, remove the UUID of the node from the path,
# and rsync the whole resource share into its instance.
# To solve this we need to create a per-tenant user on the Conductor
# and separate access controls.
res_dir = self.get_dir_names()[0]
local_path = os.path.join(res_dir, self.name)
url_download(self._task.context, self._task.node, self.url,
dest_path=local_path)
pull_url = rsync.build_rsync_url_from_abs(local_path)
self.local_path = local_path
self.pull_url = pull_url
def upload(self, sftp):
if not self.is_fetched():
raise bareon_exception.InvalidResourceState(
"Cannot upload action '%s' because it is not fetched."
% self.name
)
LOG.info("[%(node)s] Writing resource url '%(url)s' to "
"the '%(path)s' for further pull."
% dict(node=self._task.node.uuid,
url=self.pull_url,
path=self.target))
bareon_utils.sftp_ensure_tree(sftp, os.path.dirname(self.target))
bareon_utils.sftp_write_to(sftp, self.pull_url, self.target)
class PullSwiftTempurlResource(Resource):
"""A resource with delayed upload
The URL of this resource is converted to Swift temp URL, and written
to the node at target path. The user (or action) can use this pointer
to download the resource from the node shell.
"""
MODE = 'pull-swift-tempurl'
def _validate(self, raw):
if self.is_fetched():
req = ('name', 'mode', 'target', 'url', 'local_path',
'pull_url')
else:
req = ('name', 'mode', 'target', 'url')
bareon_utils.validate_json(req, raw)
url = append_storage_prefix(self._task.node, raw['url'])
scheme = parse.urlparse(url).scheme
storages_supporting_tempurl = (
'glance',
'swift'
)
# NOTE(lobur): Even though we could also use the HTTP image service in a
# tempurl manner, this would not meet user expectations. A Swift
# tempurl, in contrast to a plain HTTP reference, is supposed to give
# scalable access, e.g. allow an image to be pulled by a high number
# of nodes simultaneously without speed degradation.
if scheme not in storages_supporting_tempurl:
raise exception.InvalidParameterValue(
"%(res)s resource can be used only with the "
"following storages: %(st)s" %
dict(res=self.__class__.__name__,
st=storages_supporting_tempurl)
)
def get_dir_names(self):
res_dir = os.path.join(
get_node_resources_dir(self._task.node),
self._resource_list_name
)
return [res_dir]
def fetch(self):
if self.is_fetched():
return
url = append_storage_prefix(self._task.node, self.url)
# NOTE(lobur): Only Glance and Swift can be here. See _validate.
img_service = image_service.get_image_service(
url, version=2, context=self._task.context)
temp_url = img_service.get_http_href(url)
res_dir = self.get_dir_names()[0]
fileutils.ensure_tree(res_dir)
local_path = os.path.join(res_dir, self.name)
with open(local_path, 'w') as f:
f.write(temp_url)
self.local_path = local_path
self.pull_url = temp_url
def upload(self, sftp):
if not self.is_fetched():
raise bareon_exception.InvalidResourceState(
"Cannot upload action '%s' because it is not fetched."
% self.name
)
LOG.info("[%s] Writing %s resource tempurl to the node at %s." % (
self._task.node.uuid, self.name, self.target))
bareon_utils.sftp_ensure_tree(sftp, os.path.dirname(self.target))
sftp.put(self.local_path, self.target)
class PullMountResource(Resource):
"""A resource with delayed upload
A resource of this type is supposed to be a raw image. It is fetched
and mounted on the rsync share. The rsync pointer is uploaded to
the node at target path. The user (or action) can use this pointer
to download the resource from the node shell.
"""
MODE = 'pull-mount'
def _validate(self, raw):
if self.is_fetched():
req = ('name', 'mode', 'target', 'url', 'local_path',
'mount_point', 'pull_url')
else:
req = ('name', 'mode', 'target', 'url')
bareon_utils.validate_json(req, raw)
def get_dir_names(self):
res_dir = os.path.join(
get_node_resources_dir(self._task.node),
self._resource_list_name
)
mount_dir = os.path.join(
get_node_resources_dir_rsync(self._task.node),
self._resource_list_name,
)
return [res_dir, mount_dir]
def fetch(self):
if self.is_fetched():
return
res_dir, mount_dir = self.get_dir_names()
local_path = os.path.join(res_dir, self.name)
url_download(self._task.context, self._task.node, self.url,
dest_path=local_path)
mount_point = os.path.join(mount_dir, self.name)
fileutils.ensure_tree(mount_point)
utils.mount(local_path, mount_point, '-o', 'ro')
pull_url = rsync.build_rsync_url_from_abs(mount_point,
trailing_slash=True)
self.local_path = local_path
self.mount_point = mount_point
self.pull_url = pull_url
def upload(self, sftp):
if not self.is_fetched():
raise bareon_exception.InvalidResourceState(
"Cannot upload action '%s' because it is not fetched."
% self.name
)
LOG.info("[%(node)s] Writing resource url '%(url)s' to "
"the '%(path)s' for further pull."
% dict(node=self._task.node.uuid,
url=self.pull_url,
path=self.target))
bareon_utils.sftp_ensure_tree(sftp, os.path.dirname(self.target))
bareon_utils.sftp_write_to(sftp, self.pull_url, self.target)
def cleanup(self):
if not self.is_fetched():
return
bareon_utils.umount_without_raise(self.mount_point, '-fl')
self._raw.pop('mount_point', None)
utils.unlink_without_raise(self.local_path)
self._raw.pop('local_path', None)
class ResourceList(bareon_utils.RawToPropertyMixin):
"""Class representing a list of resources."""
def __init__(self, resource_list, task):
self._raw = resource_list
self._task = task
req = ('name', 'resources')
bareon_utils.validate_json(req, resource_list)
self.name = bareon_utils.str_replace_non_alnum(resource_list['name'])
self.resources = [Resource.from_dict(res, self._task, self.name)
for res in resource_list['resources']]
def fetch_resources(self):
"""Fetch all resources of the list.
Must be idempotent.
"""
try:
for r in self.resources:
r.fetch()
except Exception:
self.cleanup_resources()
raise
def upload_resources(self, sftp):
"""Upload all resources of the list."""
for r in self.resources:
r.upload(sftp)
def cleanup_resources(self):
"""Cleanup all resources of the list.
Must be idempotent.
Must not fail if called when the resources are not fetched.
"""
# Cleanup resources
for res in self.resources:
res.cleanup()
# Cleanup resource dirs
res_dirs = []
for res in self.resources:
res_dirs.extend(res.get_dir_names())
for res_dir in set(res_dirs):
utils.rmtree_without_raise(res_dir)
def __getitem__(self, item):
return self.resources[item]
@staticmethod
def from_dict(resource_list, task):
return ResourceList(resource_list, task)
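# Illustrative input for ResourceList.from_dict() (a sketch; names, target
# and URL are placeholders, required keys per the _validate() methods above):
#
# {
#     "name": "my_resources",
#     "resources": [
#         {
#             "name": "deploy_script",
#             "mode": "push",
#             "target": "/tmp/deploy_script",
#             "url": "swift:deploy_script"
#         }
#     ]
# }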
@ -0,0 +1,67 @@
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_config import cfg
from ironic.openstack.common import log
rsync_opts = [
cfg.StrOpt('rsync_server',
default='$my_ip',
help='IP address of Ironic compute node\'s rsync server.'),
cfg.StrOpt('rsync_root',
default='/rsync',
help='Ironic compute node\'s rsync root path.'),
cfg.StrOpt('rsync_module',
default='ironic_rsync',
help='Ironic compute node\'s rsync module name.'),
cfg.BoolOpt('rsync_secure_transfer',
default=False,
help='Whether the driver will use secure rsync transfer '
'over ssh'),
]
CONF = cfg.CONF
CONF.register_opts(rsync_opts, group='rsync')
LOG = log.getLogger(__name__)
RSYNC_PORT = 873
def get_abs_node_workdir_path(node):
return os.path.join(CONF.rsync.rsync_root, node.uuid)
def build_rsync_url_from_rel(rel_path, trailing_slash=False):
if CONF.rsync.rsync_secure_transfer:
rsync_server_ip = '127.0.0.1'
else:
rsync_server_ip = CONF.rsync.rsync_server
return '%(ip)s::%(mod)s/%(path)s%(sl)s' % dict(
ip=rsync_server_ip,
mod=CONF.rsync.rsync_module,
path=rel_path,
sl='/' if trailing_slash else ''
)
def build_rsync_url_from_abs(abs_path, trailing_slash=False):
rel_path = os.path.relpath(abs_path,
CONF.rsync.rsync_root)
return build_rsync_url_from_rel(rel_path, trailing_slash)
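# Illustrative mapping (a sketch assuming the default option values above):
# an absolute path '/rsync/<node-uuid>/<list-name>/res_1' is relative to
# rsync_root '/rsync', so build_rsync_url_from_abs() returns
# '<rsync_server>::ironic_rsync/<node-uuid>/<list-name>/res_1', a standard
# rsync daemon URL (host::module/path) that rsync clients accept directly.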
193
doc/Makefile Normal file
@ -0,0 +1,193 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
# Default OPENSTACK_RELEASE to kilo
ifndef OPENSTACK_RELEASE
OPENSTACK_RELEASE=kilo
endif
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) \
-D openstack_release=$(OPENSTACK_RELEASE) \
-D version=$(OPENSTACK_RELEASE) \
-D release=$(OPENSTACK_RELEASE)\ $(shell date +%Y-%m-%d) \
source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/bareon-ironic.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/bareon-ironic.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/bareon-ironic"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/bareon-ironic"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
201
doc/source/conf.py Normal file
@ -0,0 +1,201 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Cray Inc., All Rights Reserved
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.ifconfig']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Bareon-Ironic'
copyright = u'Copyright 2016 Cray Inc., All Rights Reserved'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# NOTE: version is set by the Makefile (-D version=)
# version = '1.0'
# The full version, including alpha/beta/rc tags.
# NOTE: version is set by the Makefile (-D release=)
# release = '1.0.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
"headbgcolor" : "#E9F2FF",
"sidebarbgcolor" : "#003D73",
"sidebarlinkcolor" : "#D3E5FF",
"relbarbgcolor" : "#3669A9",
"footerbgcolor" : "#000000"
}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
"**": ['localtoc.html', 'relations.html', 'sourcelink.html']
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'bareon-ironicdoc'
9
doc/source/index.rst Normal file
@ -0,0 +1,9 @@
========================================
Welcome to Bareon-Ironic's documentation
========================================
.. toctree::
:maxdepth: 0
install-guide
user-guide
70
doc/source/install-guide.rst Normal file
@ -0,0 +1,70 @@
Installation Guide
==================
This guide describes installation on top of the OpenStack Kilo release.
The ``bare_swift_ipmi`` driver is used as an example.
1. Install the bareon_ironic package
2. Patch the Nova service with the bareon patch ``(patches/patch-nova-stable-kilo)``
3. Restart the nova-compute service
4. Patch the Ironic service with the bareon patch ``(patches/patch-ironic-stable-kilo)``
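Both patches are git-style diffs, so they can typically be applied with
``patch -p1`` from the root of each service's source tree (a sketch; the
source-tree paths are placeholders, adjust them to your installation):
.. code-block:: console
$ cd <nova source dir> && patch -p1 < patches/patch-nova-stable-kilo
$ cd <ironic source dir> && patch -p1 < patches/patch-ironic-stable-kilo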
5. Enable the driver: add ``bare_swift_ipmi`` to the list of ``enabled_drivers``
in the ``[DEFAULT]`` section of ``/etc/ironic/ironic.conf``.
6. Update ironic.conf using the bareon sample ``(etc/ironic/ironic.conf.bareon_sample)``
7. Restart the ``ironic-api`` and ``ironic-conductor`` services
8. Build a Bareon ramdisk:
8.1 Get the Bareon source code ([1]_)
8.2 Run the build
.. code-block:: console
$ cd bareon && bash bareon/tests_functional/image_build/centos_minimal.sh
The resulting images, and the SSH private key needed to access them, will
appear at /tmp/rft_image_build/.
9. Upload the kernel and initrd images to the Glance image service.
10. Create a node in Ironic with the ``bare_swift_ipmi`` driver and associate a port with the node
.. code-block:: console
$ ironic node-create -d bare_swift_ipmi
$ ironic port-create -n <node uuid> -a <MAC address>
11. Set the IPMI address and credentials as described in the Ironic documentation [2]_.
12. Set up a Nova flavor as described in the Ironic documentation [2]_.
13. Set the Bareon-related driver parameters for the node
.. code-block:: console
$ KERNEL=<UUID_of_kernel_image_in_Glance>
$ INITRD=<UUID_of_initrd_image_in_Glance>
$ PRIVATE_KEY_PATH=/tmp/rft_image_build/fuel_key
$ ironic node-update <node uuid> add driver_info/deploy_kernel=$KERNEL \
driver_info/deploy_ramdisk=$INITRD \
driver_info/bareon_key_filename=$PRIVATE_KEY_PATH
14. Issue the ``ironic node-validate`` command to check for errors
.. code-block:: console
$ ironic node-validate <node uuid>
After the steps above, the node is ready for deployment. The user can invoke
the ``nova boot`` command and link an appropriate deploy_config as described
in the User Guide.
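A minimal sketch (the flavor, image, and deploy_config reference are
placeholders; the exact metadata key and reference formats are described in
the User Guide):
.. code-block:: console
$ nova boot --flavor <baremetal flavor> --image <tenant image uuid> \
--meta deploy_config=<deploy_config ref> my-instance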
.. [1] https://github.com/openstack/bareon
.. [2] http://docs.openstack.org/developer/ironic/deploy/install-guide.html
1029
doc/source/user-guide.rst Normal file
File diff suppressed because it is too large
105
etc/ironic/ironic.conf.bareon_sample Normal file
@ -0,0 +1,105 @@
[bareon]
# Template file for two-disk boot PXE configuration. (string
# value)
#pxe_config_template=$pybasedir/modules/bareon_config.template
# Template file for three-disk (live boot) PXE configuration.
# (string value)
#pxe_config_template_live=$pybasedir/modules/bareon_config_live.template
# Additional append parameters for baremetal PXE boot. (string
# value)
#bareon_pxe_append_params=nofb nomodeset vga=normal
# UUID (from Glance) of the default deployment kernel. (string
# value)
#deploy_kernel=<None>
# UUID (from Glance) of the default deployment ramdisk.
# (string value)
#deploy_ramdisk=<None>
# UUID (from Glance) of the default deployment root FS.
# (string value)
#deploy_squashfs=<None>
# Priority for deploy config (string value)
#deploy_config_priority=instance:node:image:conf
# A uuid or name of glance image representing deploy config.
# (string value)
#deploy_config=<None>
# Timeout in minutes for the node continue-deploy process
# (deployment phase following the callback). (integer value)
#deploy_timeout=15
# Time interval in seconds to check whether the deployment
# driver has responded to the termination signal (integer value)
#check_terminate_interval=5
# Max retries to check whether the node has already terminated
# (integer value)
#check_terminate_max_retries=20
[resources]
# A prefix that will be added when a resource reference is not
# URL-like. E.g. if the storage prefix is
# "rsync:10.0.0.10::module1/" then "resource_1" is treated
# as rsync:10.0.0.10::module1/resource_1 (string value)
#default_resource_storage_prefix=glance:
# Directory where per-node resources are stored. (string
# value)
#resource_root_path=/ironic_resources
# Directory where master resources are stored. (string value)
#resource_cache_master_path=/ironic_resources/master_resources
# Maximum size (in MiB) of cache for master resources,
# including those in use. (integer value)
#resource_cache_size=10240
# Maximum TTL (in minutes) for old master resources in cache.
# (integer value)
#resource_cache_ttl=1440
[rsync]
# Directory where master rsync images are stored on disk.
# (string value)
#rsync_master_path=/rsync/master_images
# Maximum size (in MiB) of cache for master images, including
# those in use. (integer value)
#image_cache_size=20480
# Maximum TTL (in minutes) for old master images in cache.
# (integer value)
#image_cache_ttl=10080
# IP address of Ironic compute node's rsync server. (string
# value)
#rsync_server=$my_ip
# Ironic compute node's rsync root path. (string value)
#rsync_root=/rsync
# Ironic compute node's rsync module name. (string value)
#rsync_module=ironic_rsync
# Whether the driver will use secure rsync transfer over ssh
# (boolean value)
#rsync_secure_transfer=false
[swift]
# The length of time in seconds that the temporary URL will be
# valid for. Defaults to 20 minutes. This option is different
# from the "swift_temp_url_duration" defined under [glance].
# Glance option controls temp urls obtained from Glance while
# this option controls ones obtained from Swift directly, e.g.
# when swift:<object> ref is used. (integer value)
#swift_native_temp_url_duration=1200
311
patches/patch-ironic-stable-kilo Normal file
@ -0,0 +1,311 @@
diff --git a/ironic/api/config.py b/ironic/api/config.py
index 38938c1..18c82fd 100644
--- a/ironic/api/config.py
+++ b/ironic/api/config.py
@@ -31,7 +31,8 @@ app = {
'/',
'/v1',
'/v1/drivers/[a-z_]*/vendor_passthru/lookup',
- '/v1/nodes/[a-z0-9\-]+/vendor_passthru/heartbeat'
+ '/v1/nodes/[a-z0-9\-]+/vendor_passthru/heartbeat',
+ '/v1/nodes/[a-z0-9\-]+/vendor_passthru/pass_deploy_info',
],
}
diff --git a/ironic/api/controllers/v1/node.py b/ironic/api/controllers/v1/node.py
index ce48e09..0df9a3f 100644
--- a/ironic/api/controllers/v1/node.py
+++ b/ironic/api/controllers/v1/node.py
@@ -381,13 +381,17 @@ class NodeStatesController(rest.RestController):
rpc_node = api_utils.get_rpc_node(node_ident)
topic = pecan.request.rpcapi.get_topic_for(rpc_node)
+ driver = api_utils.get_driver_by_name(rpc_node.driver)
+ driver_can_terminate = (driver and
+ driver.deploy.can_terminate_deployment)
# Normally, we let the task manager recognize and deal with
# NodeLocked exceptions. However, that isn't done until the RPC calls
# below. In order to main backward compatibility with our API HTTP
# response codes, we have this check here to deal with cases where
# a node is already being operated on (DEPLOYING or such) and we
# want to continue returning 409. Without it, we'd return 400.
- if rpc_node.reservation:
+ if (not (target == ir_states.DELETED and driver_can_terminate) and
+ rpc_node.reservation):
raise exception.NodeLocked(node=rpc_node.uuid,
host=rpc_node.reservation)
diff --git a/ironic/api/controllers/v1/utils.py b/ironic/api/controllers/v1/utils.py
index 6132e12..91ca0f2 100644
--- a/ironic/api/controllers/v1/utils.py
+++ b/ironic/api/controllers/v1/utils.py
@@ -19,6 +19,7 @@ from oslo_utils import uuidutils
import pecan
import wsme
+from ironic.common import driver_factory
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import utils
@@ -102,3 +103,12 @@ def is_valid_node_name(name):
:returns: True if the name is valid, False otherwise.
"""
return utils.is_hostname_safe(name) and (not uuidutils.is_uuid_like(name))
+
+
+def get_driver_by_name(driver_name):
+ _driver_factory = driver_factory.DriverFactory()
+ try:
+ driver = _driver_factory[driver_name]
+ return driver.obj
+ except Exception:
+ return None
diff --git a/ironic/common/context.py b/ironic/common/context.py
index aaeffb3..d167e26 100644
--- a/ironic/common/context.py
+++ b/ironic/common/context.py
@@ -63,5 +63,4 @@ class RequestContext(context.RequestContext):
@classmethod
def from_dict(cls, values):
values.pop('user', None)
- values.pop('tenant', None)
return cls(**values)
diff --git a/ironic/common/states.py b/ironic/common/states.py
index 7ebd052..df30c2f 100644
--- a/ironic/common/states.py
+++ b/ironic/common/states.py
@@ -218,6 +218,9 @@ machine.add_state(INSPECTFAIL, target=MANAGEABLE, **watchers)
# A deployment may fail
machine.add_transition(DEPLOYING, DEPLOYFAIL, 'fail')
+# A deployment may be terminated
+machine.add_transition(DEPLOYING, DELETING, 'delete')
+
# A failed deployment may be retried
# ironic/conductor/manager.py:do_node_deploy()
machine.add_transition(DEPLOYFAIL, DEPLOYING, 'rebuild')
diff --git a/ironic/common/swift.py b/ironic/common/swift.py
index a4444e2..4cc36c4 100644
--- a/ironic/common/swift.py
+++ b/ironic/common/swift.py
@@ -23,6 +23,7 @@ from swiftclient import utils as swift_utils
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import keystone
+from ironic.common import utils
from ironic.openstack.common import log as logging
swift_opts = [
@@ -36,6 +37,13 @@ swift_opts = [
CONF = cfg.CONF
CONF.register_opts(swift_opts, group='swift')
+CONF.import_opt('swift_endpoint_url',
+ 'ironic.common.glance_service.v2.image_service',
+ group='glance')
+CONF.import_opt('swift_api_version',
+ 'ironic.common.glance_service.v2.image_service',
+ group='glance')
+
CONF.import_opt('admin_user', 'keystonemiddleware.auth_token',
group='keystone_authtoken')
CONF.import_opt('admin_tenant_name', 'keystonemiddleware.auth_token',
@@ -60,7 +68,9 @@ class SwiftAPI(object):
tenant_name=CONF.keystone_authtoken.admin_tenant_name,
key=CONF.keystone_authtoken.admin_password,
auth_url=CONF.keystone_authtoken.auth_uri,
- auth_version=CONF.keystone_authtoken.auth_version):
+ auth_version=CONF.keystone_authtoken.auth_version,
+ preauthtoken=None,
+ preauthtenant=None):
"""Constructor for creating a SwiftAPI object.
:param user: the name of the user for Swift account
@@ -68,15 +78,40 @@ class SwiftAPI(object):
:param key: the 'password' or key to authenticate with
:param auth_url: the url for authentication
:param auth_version: the version of api to use for authentication
+ :param preauthtoken: authentication token (if you have already
+ authenticated) note authurl/user/key/tenant_name
+ are not required when specifying preauthtoken
+ :param preauthtenant: a tenant that will be accessed using the
+ preauthtoken
"""
- auth_url = keystone.get_keystone_url(auth_url, auth_version)
- params = {'retries': CONF.swift.swift_max_retries,
- 'insecure': CONF.keystone_authtoken.insecure,
- 'user': user,
- 'tenant_name': tenant_name,
- 'key': key,
- 'authurl': auth_url,
- 'auth_version': auth_version}
+ params = {
+ 'retries': CONF.swift.swift_max_retries,
+ 'insecure': CONF.keystone_authtoken.insecure
+ }
+
+ if preauthtoken:
+ # Determining swift url for the user's tenant account.
+ tenant_id = utils.get_tenant_id(tenant_name=preauthtenant)
+ url = "{endpoint}/{api_ver}/AUTH_{tenant}".format(
+ endpoint=CONF.glance.swift_endpoint_url,
+ api_ver=CONF.glance.swift_api_version,
+ tenant=tenant_id
+ )
+ # authurl/user/key/tenant_name are not required when specifying
+ # preauthtoken
+ params.update({
+ 'preauthtoken': preauthtoken,
+ 'preauthurl': url
+ })
+ else:
+ auth_url = keystone.get_keystone_url(auth_url, auth_version)
+ params.update({
+ 'user': user,
+ 'tenant_name': tenant_name,
+ 'key': key,
+ 'authurl': auth_url,
+ 'auth_version': auth_version
+ })
self.connection = swift_client.Connection(**params)
@@ -128,8 +163,8 @@ class SwiftAPI(object):
operation = _("head account")
raise exception.SwiftOperationError(operation=operation,
error=e)
-
- storage_url, token = self.connection.get_auth()
+ storage_url = (self.connection.os_options.get('object_storage_url') or
+ self.connection.get_auth()[0])
parse_result = parse.urlparse(storage_url)
swift_object_path = '/'.join((parse_result.path, container, object))
temp_url_key = account_info['x-account-meta-temp-url-key']
@@ -186,3 +221,23 @@ class SwiftAPI(object):
except swift_exceptions.ClientException as e:
operation = _("post object")
raise exception.SwiftOperationError(operation=operation, error=e)
+
+ def get_object(self, container, object, object_headers=None,
+ chunk_size=None):
+ """Get Swift object.
+
+ :param container: The name of the container in which Swift object
+ is placed.
+ :param object: The name of the object in Swift
+ :param object_headers: the headers for the object to pass to Swift
+ :param chunk_size: size of the chunk used read to read from response
+ :returns: Tuple (body, headers)
+ :raises: SwiftOperationError, if operation with Swift fails.
+ """
+ try:
+ return self.connection.get_object(container, object,
+ headers=object_headers,
+ resp_chunk_size=chunk_size)
+ except swift_exceptions.ClientException as e:
+ operation = _("get object")
+ raise exception.SwiftOperationError(operation=operation, error=e)
diff --git a/ironic/common/utils.py b/ironic/common/utils.py
index 3633f82..4d1ca28 100644
--- a/ironic/common/utils.py
+++ b/ironic/common/utils.py
@@ -38,6 +38,7 @@ from ironic.common import exception
from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common.i18n import _LW
+from ironic.common import keystone
from ironic.openstack.common import log as logging
utils_opts = [
@@ -536,3 +537,8 @@ def dd(src, dst, *args):
def is_http_url(url):
url = url.lower()
return url.startswith('http://') or url.startswith('https://')
+
+
+def get_tenant_id(tenant_name):
+ ksclient = keystone._get_ksclient()
+ return ksclient.tenants.find(name=tenant_name).to_dict()['id']
diff --git a/ironic/conductor/manager.py b/ironic/conductor/manager.py
index c2b75bc..53f516b 100644
--- a/ironic/conductor/manager.py
+++ b/ironic/conductor/manager.py
@@ -766,6 +766,11 @@ class ConductorManager(periodic_task.PeriodicTasks):
"""
LOG.debug("RPC do_node_tear_down called for node %s." % node_id)
+ with task_manager.acquire(context, node_id, shared=True) as task:
+ if (task.node.provision_state == states.DEPLOYING and
+ task.driver.deploy.can_terminate_deployment):
+ task.driver.deploy.terminate_deployment(task)
+
with task_manager.acquire(context, node_id, shared=False) as task:
try:
# NOTE(ghe): Valid power driver values are needed to perform
diff --git a/ironic/drivers/base.py b/ironic/drivers/base.py
index e0685d0..d1fa4bc 100644
--- a/ironic/drivers/base.py
+++ b/ironic/drivers/base.py
@@ -318,6 +318,13 @@ class DeployInterface(BaseInterface):
"""
pass
+ def terminate_deployment(self, *args, **kwargs):
+ pass
+
+ @property
+ def can_terminate_deployment(self):
+ return False
+
@six.add_metaclass(abc.ABCMeta)
class PowerInterface(BaseInterface):
diff --git a/ironic/drivers/modules/image_cache.py b/ironic/drivers/modules/image_cache.py
index d7b27c0..eb3ec55 100644
--- a/ironic/drivers/modules/image_cache.py
+++ b/ironic/drivers/modules/image_cache.py
@@ -25,9 +25,9 @@ import uuid
from oslo_concurrency import lockutils
from oslo_config import cfg
+from oslo_utils import uuidutils
from ironic.common import exception
-from ironic.common.glance_service import service_utils
from ironic.common.i18n import _LI
from ironic.common.i18n import _LW
from ironic.common import images
@@ -100,15 +100,15 @@ class ImageCache(object):
# TODO(ghe): have hard links and counts the same behaviour in all fs
- # NOTE(vdrok): File name is converted to UUID if it's not UUID already,
- # so that two images with same file names do not collide
- if service_utils.is_glance_image(href):
- master_file_name = service_utils.parse_image_ref(href)[0]
+ if uuidutils.is_uuid_like(href):
+ master_file_name = href
+ elif (self._image_service and
+ hasattr(self._image_service, 'get_image_unique_id')):
+ master_file_name = self._image_service.get_image_unique_id(href)
else:
- # NOTE(vdrok): Doing conversion of href in case it's unicode
- # string, UUID cannot be generated for unicode strings on python 2.
master_file_name = str(uuid.uuid5(uuid.NAMESPACE_URL,
href.encode('utf-8')))
+
master_path = os.path.join(self.master_dir, master_file_name)
if CONF.parallel_image_downloads:
diff --git a/ironic/tests/test_swift.py b/ironic/tests/test_swift.py
index 9daa06e..aaa1b7c 100644
--- a/ironic/tests/test_swift.py
+++ b/ironic/tests/test_swift.py
@@ -113,6 +113,7 @@ class SwiftTestCase(base.TestCase):
connection_obj_mock.get_auth.return_value = auth
head_ret_val = {'x-account-meta-temp-url-key': 'secretkey'}
connection_obj_mock.head_account.return_value = head_ret_val
+ connection_obj_mock.os_options = {}
gen_temp_url_mock.return_value = 'temp-url-path'
temp_url_returned = swiftapi.get_temp_url('container', 'object', 10)
connection_obj_mock.get_auth.assert_called_once_with()
patches/patch-nova-stable-kilo Normal file
File diff suppressed because it is too large
8
requirements.txt Normal file
@ -0,0 +1,8 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
# The driver uses the Ironic code base and its requirements; no additional
# requirements are needed.
# Since Ironic is not published to pip, Ironic must be installed on the
# system before the tests are run.
#ironic>=4.3.0
23
setup.cfg Normal file
@ -0,0 +1,23 @@
[metadata]
name = bareon-ironic
version = 1.0.0
author = Cray Inc.
summary = Bareon-based deployment driver for Ironic
classifier =
Programming Language :: Python
[files]
packages =
bareon_ironic
extra_files =
bareon_ironic/modules/bareon_config.template
bareon_ironic/modules/bareon_config_live.template
[entry_points]
ironic.drivers =
bare_swift_ssh = bareon_ironic.bareon:BareonSwiftAndSSHDriver
bare_swift_ipmi = bareon_ironic.bareon:BareonSwiftAndIPMIToolDriver
bare_rsync_ssh = bareon_ironic.bareon:BareonRsyncAndSSHDriver
bare_rsync_ipmi = bareon_ironic.bareon:BareonRsyncAndIPMIToolDriver
29
setup.py Normal file
@ -0,0 +1,29 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)
4
test-requirements.txt Normal file
@ -0,0 +1,4 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.2 # Apache-2.0
37
tox.ini Normal file
@ -0,0 +1,37 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8,py27
[testenv]
usedevelop = True
install_command = pip install --allow-external -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
PYTHONDONTWRITEBYTECODE = 1
LANGUAGE=en_US
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
whitelist_externals = bash
commands =
bash -c "TESTS_DIR=./bareon_ironic/tests/ python setup.py testr --slowest --testr-args='{posargs}'"
[tox:jenkins]
downloadcache = ~/cache/pip
[testenv:pep8]
commands =
flake8 {posargs}
[testenv:venv]
commands =
[testenv:py27]
commands =
[flake8]
ignore = H102,H306,H307
exclude = .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools,*ironic/nova*
max-complexity=17
[hacking]
import_exceptions = testtools.matchers, ironic.common.i18n