Retire stackforge/novaimagebuilder

Commit 2c6b018403 by Monty Taylor, 2015-10-17 16:03:50 -04:00
Parent: 65e06f4603
25 changed files with 7 additions and 3314 deletions

.gitignore (vendored, 35 deletions)

@@ -1,35 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject

.gitreview (4 deletions)

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/novaimagebuilder.git

COPYING (161 deletions)

@@ -1,161 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and
configuration files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object
code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form,
made available under the License, as indicated by a copyright notice that is
included in or attached to the work (an example is provided in the Appendix
below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original
version of the Work and any modifications or additions to that Work or
Derivative Works thereof, that is intentionally submitted to Licensor for
inclusion in the Work by the copyright owner or by an individual or Legal
Entity authorized to submit on behalf of the copyright owner. For the purposes
of this definition, "submitted" means any form of electronic, verbal, or
written communication sent to the Licensor or its representatives, including
but not limited to communication on electronic mailing lists, source code
control systems, and issue tracking systems that are managed by, or on behalf
of, the Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise designated in
writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this
License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable copyright license to
reproduce, prepare Derivative Works of, publicly display, publicly perform,
sublicense, and distribute the Work and such Derivative Works in Source or
Object form.
3. Grant of Patent License. Subject to the terms and conditions of this
License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this
section) patent license to make, have made, use, offer to sell, sell, import,
and otherwise transfer the Work, where such license applies only to those
patent claims licensable by such Contributor that are necessarily infringed by
their Contribution(s) alone or by combination of their Contribution(s) with the
Work to which such Contribution(s) was submitted. If You institute patent
litigation against any entity (including a cross-claim or counterclaim in a
lawsuit) alleging that the Work or a Contribution incorporated within the Work
constitutes direct or contributory patent infringement, then any patent
licenses granted to You under this License for that Work shall terminate as of
the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or
Derivative Works thereof in any medium, with or without modifications, and in
Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You
distribute, all copyright, patent, trademark, and attribution notices from the
Source form of the Work, excluding those notices that do not pertain to any
part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License. You may add Your own copyright statement to Your
modifications and may provide additional or different license terms and
conditions for use, reproduction, or distribution of Your modifications, or for
any such Derivative Works as a whole, provided Your use, reproduction, and
distribution of the Work otherwise complies with the conditions stated in this
License.
5. Submission of Contributions. Unless You explicitly state otherwise, any
Contribution intentionally submitted for inclusion in the Work by You to the
Licensor shall be under the terms and conditions of this License, without any
additional terms or conditions. Notwithstanding the above, nothing herein shall
supersede or modify the terms of any separate license agreement you may have
executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names,
trademarks, service marks, or product names of the Licensor, except as required
for reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in
writing, Licensor provides the Work (and each Contributor provides its
Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied, including, without limitation, any warranties
or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any risks
associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in
tort (including negligence), contract, or otherwise, unless required by
applicable law (such as deliberate and grossly negligent acts) or agreed to in
writing, shall any Contributor be liable to You for damages, including any
direct, indirect, special, incidental, or consequential damages of any
character arising as a result of this License or out of the use or inability to
use the Work (including but not limited to damages for loss of goodwill, work
stoppage, computer failure or malfunction, or any and all other commercial
damages or losses), even if such Contributor has been advised of the
possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or
Derivative Works thereof, You may choose to offer, and charge a fee for,
acceptance of support, warranty, indemnity, or other liability obligations
and/or rights consistent with this License. However, in accepting such
obligations, You may act only on Your own behalf and on Your sole
responsibility, not on behalf of any other Contributor, and only if You agree
to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

README.rst (new file, 7 additions)

@@ -0,0 +1,7 @@
This project is no longer maintained.
The contents of this repository are still available in the Git source code
management system. To see the contents of this repository before it reached
its end of life, please check out the previous commit with
"git checkout HEAD^1".

(command-line entry point script, 147 deletions)

@@ -1,147 +0,0 @@
#!/usr/bin/env python
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import signal
import argparse
from novaimagebuilder.Singleton import Singleton
from novaimagebuilder.OSInfo import OSInfo
from novaimagebuilder.Builder import Builder
class Arguments(Singleton):
def _singleton_init(self, *args, **kwargs):
super(Arguments, self)._singleton_init()
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.argparser = self._argparser_setup()
self.args = self.argparser.parse_args()
def _argparser_setup(self):
app_name = sys.argv[0].rpartition('/')[2]
description_text = """Creates a new VM image in Nova using an OS's native installation tools."""
argparser = argparse.ArgumentParser(description=description_text, prog=app_name)
argparser.add_argument('--os', help='The shortid of an OS. Required for both installation types.')
argparser.add_argument('--os_list', action='store_true', default=False,
help='Show the OS list available for image building.')
install_location_group = argparser.add_mutually_exclusive_group()
install_location_group.add_argument('--install_iso', help='Location of the installation media ISO.')
install_location_group.add_argument('--install_tree', help='Location of an installation file tree.')
argparser.add_argument('--install_script', type=argparse.FileType(),
help='Custom install script file to use instead of generating one.')
argparser.add_argument('--admin_pw', help='The password to set for the admin user in the image.')
argparser.add_argument('--license_key', help='License/product key to use if needed.')
argparser.add_argument('--arch', default='x86_64',
help='The architecture the image is built for. (default: %(default)s)')
argparser.add_argument('--disk_size', type=int, default=10,
help='Size of the image root disk in gigabytes. (default: %(default)s)')
argparser.add_argument('--instance_flavor', default='vanilla',
help='The type of instance to use for building the image. (default: %(default)s)')
argparser.add_argument('--name', help='A name to assign to the built image.', default='new-image')
argparser.add_argument('--image_storage', choices=('glance', 'cinder', 'both'), default='glance',
help='Where to store the final image: glance, cinder, both (default: %(default)s)')
argparser.add_argument('--debug', action='store_true', default=False,
help='Print debugging output to the logfile. (default: %(default)s)')
return argparser
class Application(Singleton):
def _singleton_init(self, *args, **kwargs):
super(Application, self)._singleton_init()
self.arguments = Arguments().args
self.log = self._logger(debug=self.arguments.debug)
if not self.log:
print 'No logger!!! stopping...'
sys.exit(1)
signal.signal(signal.SIGTERM, self.signal_handler)
self.osinfo = OSInfo()
self.builder = None
def _logger(self, debug=False):
if debug:
level = logging.DEBUG
else:
level = logging.WARNING
logging.basicConfig(level=level, format='%(asctime)s %(levelname)s %(name)s thread(%(threadName)s) Message: %(message)s')
logger = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
#filehandler = logging.FileHandler('/var/log/%s' % sys.argv[0].rpartition('/')[2])
#formatter = logging.Formatter('%(asctime)s %(levelname)s %(name)s thread(%(threadName)s) Message: %(message)s')
#filehandler.setFormatter(formatter)
#logger.addHandler(filehandler)
return logger
def signal_handler(self, signum, stack):
if signum == signal.SIGTERM:
logging.warning('caught signal SIGTERM, stopping...')
if self.builder:
self.builder.abort()
sys.exit(0)
def main(self):
if self.arguments.os:
if self.arguments.install_iso:
location = self.arguments.install_iso
install_type = 'iso'
elif self.arguments.install_tree:
location = self.arguments.install_tree
install_type = 'tree'
else:
# if iso or tree is missing, print a message and exit non-zero
print('One of --install_iso or --install_tree must be given.')
return 1
install_config = {'admin_password': self.arguments.admin_pw,
'license_key': self.arguments.license_key,
'arch': self.arguments.arch,
'disk_size': self.arguments.disk_size,
'flavor': self.arguments.instance_flavor,
'storage': self.arguments.image_storage,
'name': self.arguments.name}
self.builder = Builder(self.arguments.os,
install_location=location,
install_type=install_type,
install_script=self.arguments.install_script,
install_config=install_config)
# TODO: create a better way to run this.
# The inactivity timeout is 180 seconds
self.builder.run()
self.builder.wait_for_completion(180)
elif self.arguments.os_list:
# possible distro values from libosinfo (for reference):
# 'osx', 'openbsd', 'centos', 'win', 'mandrake', 'sled', 'sles', 'netbsd', 'winnt', 'fedora', 'solaris',
# 'rhel', 'opensuse', 'rhl', 'mes', 'ubuntu', 'debian', 'netware', 'msdos', 'gnome', 'opensolaris',
# 'freebsd', 'mandriva'
os_dict = self.osinfo.os_ids(distros={'fedora': 17, 'rhel': 5, 'ubuntu': 12, 'win': 6})
if len(os_dict) > 0:
for os in sorted(os_dict.keys()):
print '%s - %s' % (os, os_dict[os])
else:
Arguments().argparser.parse_args(['--help'])
if __name__ == '__main__':
sys.exit(Application().main())
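The Singleton base class imported above is not shown in this diff. As a rough, hypothetical sketch of the pattern Arguments and Application rely on (repeated instantiation returns one shared object whose _singleton_init runs only once), not the project's actual Singleton.py:

# Hypothetical reconstruction of the Singleton base class used above.
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            # First instantiation of this class: create the instance and run
            # the one-time initializer that subclasses override.
            cls._instance = super(Singleton, cls).__new__(cls)
            cls._instance._singleton_init(*args, **kwargs)
        return cls._instance

    def _singleton_init(self, *args, **kwargs):
        # Subclasses override this instead of __init__.
        pass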

novaimagebuilder/BaseOS.py (118 deletions)

@@ -1,118 +0,0 @@
# encoding: utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from CacheManager import CacheManager
from StackEnvironment import StackEnvironment
from SyslinuxHelper import SyslinuxHelper
import inspect
import logging
class BaseOS(object):
"""
@param osinfo_dict:
@param install_type:
@param install_media_location:
@param install_config:
@param install_script:
"""
def __init__(self, osinfo_dict, install_type, install_media_location, install_config, install_script = None):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.env = StackEnvironment()
self.cache = CacheManager()
self.syslinux = SyslinuxHelper()
self.osinfo_dict = osinfo_dict
self.install_type = install_type
self.install_media_location = install_media_location
self.install_config = install_config
self.install_script = install_script
self.iso_volume_delete = False
# Subclasses can pull in the above and then do OS specific tasks to fill in missing
# information and determine if the resulting install is possible
def os_ver_arch(self):
"""
@return:
"""
return self.osinfo_dict['shortid'] + "-" + self.install_config['arch']
def prepare_install_instance(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))
def start_install_instance(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))
def update_status(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))
def wants_iso_content(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))
def iso_content_dict(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))
def url_content_dict(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))
def abort(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))
def cleanup(self):
"""
@return:
"""
raise NotImplementedError("Function (%s) not implemented" % (inspect.stack()[0][3]))

novaimagebuilder/Builder.py (176 deletions)

@@ -1,176 +0,0 @@
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from OSInfo import OSInfo
from StackEnvironment import StackEnvironment
from time import sleep
class Builder(object):
def __init__(self, osid, install_location=None, install_type=None, install_script=None, install_config={}):
"""
Builder selects the correct OS object to delegate build activity to.
@param osid: The shortid for an OS record.
@param install_location: The location of an ISO or install tree.
@param install_type: The type of installation (iso or tree)
@param install_script: A custom install script to be used instead of what OSInfo can generate
@param install_config: A dict of various info that may be needed for the build.
(admin_pw, license_key, arch, disk_size, flavor, storage, name)
"""
super(Builder, self).__init__()
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.install_location = install_location
self.install_type = install_type
self.install_script = install_script
self.install_config = install_config
self.os = OSInfo().os_for_shortid(osid)
self.os_delegate = self._delegate_for_os(self.os)
self.env = StackEnvironment()
def _delegate_for_os(self, os):
"""
Select and instantiate the correct OS class for build delegation.
@param os: The dictionary of OS info for a given OS shortid
@return: An instance of an OS class that will control a VM for the image installation
"""
# TODO: Change the way we select what class to instantiate to something that we do not have to touch
# every time we add another OS class
os_classes = {'fedora': 'RedHatOS', 'rhel': 'RedHatOS', 'win': 'WindowsOS', 'ubuntu': 'UbuntuOS'}
os_classname = os_classes.get(os['distro'])
if os_classname:
try:
os_module = __import__("novaimagebuilder." + os_classname, fromlist=[os_classname])
os_class = getattr(os_module, os_classname)
#import pdb; pdb.set_trace()
return os_class(osinfo_dict=self.os,
install_type=self.install_type,
install_media_location=self.install_location,
install_config=self.install_config,
install_script=self.install_script)
except ImportError as e:
self.log.exception(e)
return None
else:
raise Exception("No delegate found for distro (%s)" % os['distro'])
def run(self):
"""
Starts the installation of an OS in an image via the appropriate OS class
@return: Status of the installation.
"""
self.os_delegate.prepare_install_instance()
self.os_delegate.start_install_instance()
return self.os_delegate.update_status()
def wait_for_completion(self, inactivity_timeout):
"""
Waits for the install_instance to enter the SHUTOFF state, then launches a snapshot
@param inactivity_timeout: amount of time, in seconds, to wait for activity before declaring the installation a failure
@return: Success or Failure
"""
# TODO: Timeouts, activity checking
instance = self._wait_for_shutoff(self.os_delegate.install_instance, inactivity_timeout)
# Snapshot with self.install_config['name']
if instance:
finished_image_id = instance.instance.create_image(self.install_config['name'])
self._wait_for_glance_snapshot(finished_image_id)
self._terminate_instance(instance.id)
if self.os_delegate.iso_volume_delete:
self.env.cinder.volumes.get(self.os_delegate.iso_volume).delete()
self.log.debug("Deleted install ISO volume from cinder: %s" % self.os_delegate.iso_volume)
# Leave instance running if install did not finish
def _wait_for_shutoff(self, instance, inactivity_timeout):
inactivity_countdown = inactivity_timeout
for i in range(1200):
status = instance.status
if status == "SHUTOFF":
self.log.debug("Instance (%s) has entered SHUTOFF state" % instance.id)
return instance
if i % 10 == 0:
self.log.debug("Waiting for instance status SHUTOFF - current status (%s): %d/1200" % (status, i))
if not instance.is_active():
inactivity_countdown -= 1
else:
inactivity_countdown = inactivity_timeout
if inactivity_countdown == 0:
self.log.debug("Install instance has become inactive. Instance will remain running so you can investigate what happened.")
return
sleep(1)
def _wait_for_glance_snapshot(self, image_id):
image = self.env.glance.images.get(image_id)
self.log.debug("Waiting for glance image id (%s) to become active" % image_id)
while True:
self.log.debug("Current image status: %s" % image.status)
sleep(2)
image = self.env.glance.images.get(image.id)
if image.status == "error":
raise Exception("Image entered error status while waiting for completion")
elif image.status == 'active':
break
# Remove any direct boot properties if they exist
properties = image.properties
for key in ['kernel_id', 'ramdisk_id', 'command_line']:
if key in properties:
del properties[key]
meta = {'properties': properties}
image.update(**meta)
def _terminate_instance(self, instance_id):
nova = self.env.nova
instance = nova.servers.get(instance_id)
instance.delete()
self.log.debug("Waiting for instance id (%s) to be terminated/delete" % instance_id)
while True:
self.log.debug("Current instance status: %s" % instance.status)
sleep(5)
try:
instance = nova.servers.get(instance_id)
except Exception as e:
self.log.debug("Got exception (%s) assuming deletion complete" % e)
break
def abort(self):
"""
Aborts the installation of an OS in an image.
@return: Status of the installation.
"""
self.os_delegate.abort()
self.os_delegate.cleanup()
return self.os_delegate.update_status()
def status(self):
"""
Returns the status of the installation.
@return: Status of the installation.
"""
# TODO: replace this with a background thread that watches the status and cleans up as needed.
status = self.os_delegate.update_status()
if status in ('COMPLETE', 'FAILED'):
self.os_delegate.cleanup()
return status
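For reference, the command-line script at the top of this diff drives Builder with essentially the following sequence; the OS shortid, ISO URL, and password below are illustrative placeholders, not values from the repository:

# Sketch of the call sequence used by the deleted CLI entry point.
builder = Builder('fedora18',
                  install_location='http://example.com/fedora18.iso',  # placeholder URL
                  install_type='iso',
                  install_script=None,
                  install_config={'admin_password': 'secret',
                                  'license_key': None,
                                  'arch': 'x86_64',
                                  'disk_size': 10,
                                  'flavor': 'vanilla',
                                  'storage': 'glance',
                                  'name': 'new-image'})
builder.run()                     # prepare and boot the install instance
builder.wait_for_completion(180)  # wait, then snapshot the result into glance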

novaimagebuilder/CacheManager.py (302 deletions)

@@ -1,302 +0,0 @@
#!/usr/bin/python
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import json
import os
import os.path
import pycurl
import guestfs
import fcntl
import threading
import time
import StackEnvironment
from Singleton import Singleton
class CacheManager(Singleton):
"""
Class to manage the retrieval and storage of install source objects
Typically the source for these objects are ISO images or install trees
accessible via HTTP. Content is moved into glance and optionally cinder.
Some smaller pieces of content are also cached locally.
Currently items are keyed by os, version, arch and can have arbitrary
names. The name install_iso is special. OS plugins are allowed to
access a local copy before it is sent to glance, even if that local copy
will eventually be deleted.
"""
# TODO: Currently assumes the target environment is static - allow this to change
# TODO: Sane handling of a pending cache item
# TODO: Configurable
CACHE_ROOT = "/var/lib/novaimagebuilder/"
INDEX_THREAD_LOCK = threading.Lock()
INDEX_FILE = "_cache_index"
def _singleton_init(self):
self.env = StackEnvironment.StackEnvironment()
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.index_filename = self.CACHE_ROOT + self.INDEX_FILE
if not os.path.isfile(self.index_filename):
self.log.debug("Creating cache index file (%s)" % self.index_filename)
# TODO: somehow prevent a race here
index_file = open(self.index_filename, 'w')
json.dump({ } , index_file)
index_file.close()
# This should be None except when we are actively working on it and hold a lock
self.index = None
self.index_file = None
self.locked = False
def lock_and_get_index(self):
"""
Obtain an exclusive lock on the cache index and then load it into the
"index" instance variable. Tasks done while holding this lock should be
very brief and non-blocking. Calls to this should be followed by either
write_index_and_unlock() or unlock_index() depending upon whether or not the
index has been modified.
"""
# We acquire a thread lock under all circumstances
# This is the safest approach and should be relatively harmless if we are used
# as a module in a non-threaded Python program
self.INDEX_THREAD_LOCK.acquire()
# atomic create if not present
fd = os.open(self.index_filename, os.O_RDWR | os.O_CREAT)
# blocking
fcntl.flock(fd, fcntl.LOCK_EX)
self.index_file = os.fdopen(fd, "r+")
index = self.index_file.read()
if len(index) == 0:
# Empty - possibly because we created it earlier - create empty dict
self.index = { }
else:
self.index = json.loads(index)
def write_index_and_unlock(self):
"""
Write contents of self.index back to the persistent file and then unlock it
"""
self.index_file.seek(0)
self.index_file.truncate()
json.dump(self.index , self.index_file)
# TODO: Double-check that this is safe
self.index_file.flush()
fcntl.flock(self.index_file, fcntl.LOCK_UN)
self.index_file.close()
self.index = None
self.INDEX_THREAD_LOCK.release()
def unlock_index(self):
"""
Release the cache index lock without updating the persistent file
"""
self.index = None
fcntl.flock(self.index_file, fcntl.LOCK_UN)
self.index_file.close()
self.index_file = None
self.INDEX_THREAD_LOCK.release()
# INDEX looks like
#
# { "fedora-19-x86_64": { "install_iso": { "local": "/blah", "glance": "UUID", "cinder": "UUID" },
# "install_iso_kernel": { "local"
def _get_index_value(self, os_ver_arch, name, location):
"""
Utility function to retrieve the location of the named object for the given OS version and architecture.
Only use this if your thread has obtained the thread-global lock by using the
lock_and_get_index() function above
"""
if self.index is None:
raise Exception("Attempt made to read index values while a locked index is not present")
if not os_ver_arch in self.index:
return None
if not name in self.index[os_ver_arch]:
return None
# If the specific location is not requested, return the whole location dict
if not location:
return self.index[os_ver_arch][name]
if not location in self.index[os_ver_arch][name]:
return None
else:
return self.index[os_ver_arch][name][location]
def _set_index_value(self, os_ver_arch, name, location, value):
"""
Utility function to set the location of the named object for the given OS version and architecture.
Only use this if your thread has obtained the thread-global lock by using the
lock_and_get_index() function above
"""
if self.index is None:
raise Exception("Attempt made to read index values while a locked index is not present")
if not os_ver_arch in self.index:
self.index[os_ver_arch] = {}
if not name in self.index[os_ver_arch]:
self.index[os_ver_arch][name] = {}
# If the specific location is not specified, assume value is the entire dict
# or a string indicating the object is pending
if not location:
self.index[os_ver_arch][name] = value
return
self.index[os_ver_arch][name][location] = value
def retrieve_and_cache_object(self, object_type, os_plugin, source_url, save_local):
"""
Download a file from a URL and store it in the cache. Uses the object_type and
data from the OS delegate/plugin to index the file correctly. Also treats the
object type "install-iso" as a special case, downloading it locally and then allowing
the OS delegate to request individual files from within the ISO for extraction and
caching. This is used to efficiently retrieve the kernel and ramdisk from Linux
install ISOs.
@param object_type: A string indicating the type of object being retrieved
@param os_plugin: Instance of the delegate for the OS associated with the download
@param source_url: Location from which to retrieve the object/file
@param save_local: bool indicating whether a local copy of the object should be saved
@return dict containing the various cached locations of the file
local: Local path to file
glance: Glance object UUID
cinder: Cinder object UUID
"""
# TODO: Gracefully deal with the situation where, for example, we are asked to save_local
# and find that the object is already cached but only exists in glance and/or cinder
# TODO: Allow for local-only caching
pending_countdown = 360
while True:
self.lock_and_get_index()
existing_cache = self._get_index_value(os_plugin.os_ver_arch(), object_type, None)
if existing_cache == None:
# We are the first - mark as pending and then start to retrieve
self._set_index_value(os_plugin.os_ver_arch(), object_type, None, "pending")
self.write_index_and_unlock()
break
if isinstance(existing_cache, dict):
self.log.debug("Found object in cache")
self.unlock_index()
return existing_cache
# TODO: special case when object is ISO and sub-artifacts are not cached
if existing_cache == "pending":
# Another thread or process is currently obtaining this object
# poll every 10 seconds until we get a dict, then return it
# TODO: A graceful event based solution
self.unlock_index()
if pending_countdown == 360:
self.log.debug("Object is being retrieved in another thread or process - Waiting")
pending_countdown -= 1
if pending_countdown == 0:
raise Exception("Waited one hour on pending cache fill for version (%s) - object (%s)- giving up" %
( os_plugin.os_ver_arch(), object_type ) )
sleep(10)
continue
# We should never get here
raise Exception("Got unexpected non-string, non-dict, non-None value when reading cache")
# If we have gotten here the object is not yet in the cache
self.log.debug("Object not in cache")
# TODO: If not save_local and the plugin doesn't need the iso, direct download in glance
object_name = os_plugin.os_ver_arch() + "-" + object_type
local_object_filename = self.CACHE_ROOT + object_name
if not os.path.isfile(local_object_filename):
self._http_download_file(source_url, local_object_filename)
else:
self.log.warning("Local file (%s) is already present - assuming it is valid" % local_object_filename)
if object_type == "install-iso" and os_plugin.wants_iso_content():
self.log.debug("The plugin wants to do something with the ISO - extracting stuff now")
icd = os_plugin.iso_content_dict()
if icd:
self.log.debug("Launching guestfs")
g = guestfs.GuestFS()
g.add_drive_ro(local_object_filename)
g.launch()
g.mount_options ("", "/dev/sda", "/")
for nested_obj_type in icd.keys():
nested_obj_name = os_plugin.os_ver_arch() + "-" + nested_obj_type
nested_object_filename = self.CACHE_ROOT + nested_obj_name
self.log.debug("Downloading ISO file (%s) to local file (%s)" % (icd[nested_obj_type],
nested_object_filename))
g.download(icd[nested_obj_type],nested_object_filename)
if nested_obj_type == "install-iso-kernel":
image_format = "aki"
elif nested_obj_type == "install-iso-initrd":
image_format = "ari"
else:
raise Exception("Nested object of unknown type requested")
(glance_id, cinder_id) = self._do_remote_uploads(nested_obj_name, nested_object_filename,
format=image_format, container_format=image_format,
use_cinder = False)
locations = {"local": nested_object_filename, "glance": str(glance_id), "cinder": str(cinder_id)}
self._do_index_updates(os_plugin.os_ver_arch(), object_type, locations)
g.shutdown()
g.close()
(glance_id, cinder_id) = self._do_remote_uploads(object_name, local_object_filename)
locations = {"local": local_object_filename, "glance": str(glance_id), "cinder": str(cinder_id)}
self._do_index_updates(os_plugin.os_ver_arch(), object_type, locations)
return locations
def _do_index_updates(self, os_ver_arch, object_type, locations):
self.lock_and_get_index()
self._set_index_value(os_ver_arch, object_type, None, locations )
self.write_index_and_unlock()
def _do_remote_uploads(self, object_name, local_object_filename, format='raw', container_format='bare',
use_cinder=True):
if self.env.is_cinder() and use_cinder:
(glance_id, cinder_id) = self.env.upload_volume_to_cinder(object_name, local_path=local_object_filename,
format=format, container_format=container_format)
else:
cinder_id = None
glance_id = self.env.upload_image_to_glance(object_name, local_path=local_object_filename,
format=format, container_format=container_format)
return (glance_id, cinder_id)
def _http_download_file(self, url, filename):
# Function to download a file from url to filename
# Borrowed and modified from Oz by Chris Lalancette
# https://github.com/clalancette/oz
def _data(buf):
# Function that is called back from the pycurl perform() method to
# actually write data to disk.
os.write(fd, buf)
fd = os.open(filename,os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
try:
c = pycurl.Curl()
c.setopt(c.URL, url)
c.setopt(c.CONNECTTIMEOUT, 15)
c.setopt(c.WRITEFUNCTION, _data)
c.setopt(c.FOLLOWLOCATION, 1)
c.perform()
c.close()
finally:
os.close(fd)
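Putting the INDEX comment and the locations dicts above together, a populated cache index file would plausibly look like the following; every path fragment and UUID here is invented for illustration:

# Hypothetical contents of /var/lib/novaimagebuilder/_cache_index after caching
# a Fedora 19 install ISO plus the kernel and ramdisk extracted from it.
{
    "fedora19-x86_64": {
        "install-iso": {
            "local": "/var/lib/novaimagebuilder/fedora19-x86_64-install-iso",
            "glance": "c5b8c2e4-example-uuid",
            "cinder": "f0a1d2c3-example-uuid"
        },
        "install-iso-kernel": {
            "local": "/var/lib/novaimagebuilder/fedora19-x86_64-install-iso-kernel",
            "glance": "9d3e4f5a-example-uuid",
            "cinder": "None"
        }
    }
}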

novaimagebuilder/ISOHelper.py (430 deletions)

@@ -1,430 +0,0 @@
# Copyright (C) 2010,2011 Chris Lalancette <clalance@redhat.com>
# Copyright (C) 2012,2013 Chris Lalancette <clalancette@gmail.com>
# Copyright (C) 2013 Ian McLeod <imcleod@redhat.com>
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation;
# version 2.1 of the License.
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
import struct
import shutil
import os
import guestfs
import logging
import tempfile
import subprocess
import stat
import errno
class ISOHelper():
"""
Class for assisting with the respin of install ISOs.
At present the only purpose for this class is to allow the injection of a custom
autounattend.xml file to Windows install isos.
This class is largely derived from the Guest.py, Windows.py and ozutil.py files
from the Oz project by Chris Lalancette:
https://github.com/clalancette/oz
"""
def __init__(self, original_iso, arch):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.orig_iso = original_iso
self.arch = arch
self.winarch = arch
if self.winarch == "x86_64":
self.winarch = "amd64"
self.iso_contents = tempfile.mkdtemp()
def _validate_primary_volume_descriptor(self, cdfd):
"""
Method to extract the primary volume descriptor from a CD.
"""
# check out the primary volume descriptor to make sure it is sane
cdfd.seek(16*2048)
fmt = "=B5sBB32s32sQLL32sHHHH"
(desc_type, identifier, version, unused1, system_identifier, volume_identifier, unused2, space_size_le, space_size_be, unused3, set_size_le, set_size_be, seqnum_le, seqnum_be) = struct.unpack(fmt, cdfd.read(struct.calcsize(fmt)))
if desc_type != 0x1:
raise Exception("Invalid primary volume descriptor")
if identifier != "CD001":
raise Exception("invalid CD isoIdentification")
if unused1 != 0x0:
raise Exception("data in unused field")
if unused2 != 0x0:
raise Exception("data in 2nd unused field")
def _geteltorito(self, outfile):
"""
Method to extract the El-Torito boot sector off of a CD and write it
to a file.
"""
if outfile is None:
raise Exception("output file is None")
cdfd = open(self.orig_iso, "r")
self._validate_primary_volume_descriptor(cdfd)
# the 17th sector contains the boot specification and the offset of the
# boot sector
cdfd.seek(17*2048)
# NOTE: With "native" alignment (the default for struct), there is
# some padding that happens that causes the unpacking to fail.
# Instead we force "standard" alignment, which has no padding
fmt = "=B5sB23s41sI"
(boot, isoIdent, version, toritoSpec, unused, bootP) = struct.unpack(fmt,
cdfd.read(struct.calcsize(fmt)))
if boot != 0x0:
raise Exception("invalid CD boot sector")
if isoIdent != "CD001":
raise Exception("invalid CD isoIdentification")
if version != 0x1:
raise Exception("invalid CD version")
if toritoSpec != "EL TORITO SPECIFICATION":
raise Exception("invalid CD torito specification")
# OK, this looks like a bootable CD. Seek to the boot sector, and
# look for the header, 0x55, and 0xaa in the first 32 bytes
cdfd.seek(bootP*2048)
fmt = "=BBH24sHBB"
bootdata = cdfd.read(struct.calcsize(fmt))
(header, platform, unused, manu, unused2, five, aa) = struct.unpack(fmt,
bootdata)
if header != 0x1:
raise Exception("invalid CD boot sector header")
if platform != 0x0 and platform != 0x1 and platform != 0x2:
raise Exception("invalid CD boot sector platform")
if unused != 0x0:
raise Exception("invalid CD unused boot sector field")
if five != 0x55 or aa != 0xaa:
raise Exception("invalid CD boot sector footer")
def _checksum(data):
"""
Method to compute the checksum on the ISO. Note that this is *not*
a 1's complement checksum; when an addition overflows, the carry
bit is discarded, not added to the end.
"""
s = 0
for i in range(0, len(data), 2):
w = ord(data[i]) + (ord(data[i+1]) << 8)
s = (s + w) & 0xffff
return s
csum = _checksum(bootdata)
if csum != 0:
raise Exception("invalid CD checksum: expected 0, saw %d" % (csum))
# OK, everything so far has checked out. Read the default/initial
# boot entry
cdfd.seek(bootP*2048+32)
fmt = "=BBHBBHIB"
(boot, media, loadsegment, systemtype, unused, scount, imgstart, unused2) = struct.unpack(fmt, cdfd.read(struct.calcsize(fmt)))
if boot != 0x88:
raise Exception("invalid CD initial boot indicator")
if unused != 0x0 or unused2 != 0x0:
raise Exception("invalid CD initial boot unused field")
if media == 0 or media == 4:
count = scount
elif media == 1:
# 1.2MB floppy in sectors
count = 1200*1024/512
elif media == 2:
# 1.44MB floppy in sectors
count = 1440*1024/512
elif media == 3:
# 2.88MB floppy in sectors
count = 2880*1024/512
else:
raise Exception("invalid CD media type")
# finally, seek to "imgstart", and read "count" sectors, which
# contains the boot image
cdfd.seek(imgstart*2048)
# The eltorito specification section 2.5 says:
#
# Sector Count. This is the number of virtual/emulated sectors the
# system will store at Load Segment during the initial boot
# procedure.
#
# and then Section 1.5 says:
#
# Virtual Disk - A series of sectors on the CD which INT 13 presents
# to the system as a drive with 200 byte virtual sectors. There
# are 4 virtual sectors found in each sector on a CD.
#
# (note that the bytes above are in hex). So we read count*512
eltoritodata = cdfd.read(count*512)
cdfd.close()
out = open(outfile, "w")
out.write(eltoritodata)
out.close()
def _generate_new_iso_win_v5(self, output_iso):
"""
Method to create a new ISO based on the modified CD/DVD.
For Windows versions based on kernel 5.x (2000, XP, and 2003).
"""
self.log.debug("Recreating El Torito boot sector")
os.mkdir(os.path.join(self.iso_contents, "cdboot"))
self._geteltorito(os.path.join(self.iso_contents, "cdboot", "boot.bin"))
self.log.debug("Generating new ISO")
self.subprocess_check_output(["genisoimage",
"-b", "cdboot/boot.bin",
"-no-emul-boot", "-boot-load-seg",
"1984", "-boot-load-size", "4",
"-iso-level", "2", "-J", "-l", "-D",
"-N", "-joliet-long",
"-relaxed-filenames", "-v", "-v",
"-V", "Custom",
"-o", output_iso,
self.iso_contents])
def _modify_iso_win_v5(self, install_script):
"""
Method to copy a Windows v5 install script into the appropriate location
"""
self.log.debug("Copying in Windows v5 winnt.sif file")
outname = os.path.join(self.iso_contents, self.winarch, "winnt.sif")
shutil.copy(install_script, outname)
def _generate_new_iso_win_v6(self, output_iso):
"""
Method to create a new Windows v6 ISO based on the modified CD/DVD.
"""
self.log.debug("Recreating El Torito boot sector")
os.mkdir(os.path.join(self.iso_contents, "cdboot"))
self._geteltorito(os.path.join(self.iso_contents, "cdboot", "boot.bin"))
self.log.debug("Generating new ISO")
# NOTE: Windows 2008 is very picky about which genisoimage arguments
# will generate a bootable CD, so modify these at your own risk
self.subprocess_check_output(["genisoimage",
"-b", "cdboot/boot.bin",
"-no-emul-boot", "-c", "BOOT.CAT",
"-iso-level", "2", "-J", "-l", "-D",
"-N", "-joliet-long",
"-relaxed-filenames", "-v", "-v",
"-V", "Custom", "-udf",
"-o", output_iso,
self.iso_contents])
def _install_script_win_v6(self, install_script):
"""
Method to copy a Windows v6 install script into the appropriate location
"""
self.log.debug("Copying in Windows v6 autounattend.xml file")
outname = os.path.join(self.iso_contents, "autounattend.xml")
shutil.copy(install_script, outname)
def _copy_iso(self):
"""
Method to copy the data out of an ISO onto the local filesystem.
"""
self.log.info("Copying ISO contents for modification")
try:
shutil.rmtree(self.iso_contents)
except OSError as err:
if err.errno != errno.ENOENT:
raise
os.makedirs(self.iso_contents)
self.log.info("Setting up guestfs handle")
gfs = guestfs.GuestFS()
self.log.debug("Adding ISO image %s" % (self.orig_iso))
gfs.add_drive_opts(self.orig_iso, readonly=1, format='raw')
self.log.debug("Launching guestfs")
gfs.launch()
try:
self.log.debug("Mounting ISO")
gfs.mount_options('ro', "/dev/sda", "/")
self.log.debug("Checking if there is enough space on the filesystem")
isostat = gfs.statvfs("/")
outputstat = os.statvfs(self.iso_contents)
if (outputstat.f_bsize*outputstat.f_bavail) < (isostat['blocks']*isostat['bsize']):
raise Exception("Not enough room on %s to extract install media" % (self.iso_contents))
self.log.debug("Extracting ISO contents")
current = os.getcwd()
os.chdir(self.iso_contents)
try:
rd, wr = os.pipe()
try:
# NOTE: it is very, very important that we use temporary
# files for collecting stdout and stderr here. There is a
# nasty bug in python subprocess; if your process produces
# more than 64k of data on an fd that is using
# subprocess.PIPE, the whole thing will hang. To avoid
# this, we use temporary fds to capture the data
stdouttmp = tempfile.TemporaryFile()
stderrtmp = tempfile.TemporaryFile()
try:
tar = subprocess.Popen(["tar", "-x", "-v"], stdin=rd,
stdout=stdouttmp,
stderr=stderrtmp)
try:
gfs.tar_out("/", "/dev/fd/%d" % wr)
except:
# we need this here if gfs.tar_out throws an
# exception. In that case, we need to manually
# kill off the tar process and re-raise the
# exception, otherwise we hang forever
tar.kill()
raise
# FIXME: we really should check tar.poll() here to get
# the return code, and print out stdout and stderr if
# we fail. This will make debugging problems easier
finally:
stdouttmp.close()
stderrtmp.close()
finally:
os.close(rd)
os.close(wr)
# since we extracted from an ISO, there are no write bits
# on any of the directories. Fix that here
for dirpath, dirnames, filenames in os.walk(self.iso_contents):
st = os.stat(dirpath)
os.chmod(dirpath, st.st_mode|stat.S_IWUSR)
for name in filenames:
fullpath = os.path.join(dirpath, name)
try:
# if there are broken symlinks in the ISO,
# then the below might fail. This probably
# isn't fatal, so just allow it and go on
st = os.stat(fullpath)
os.chmod(fullpath, st.st_mode|stat.S_IWUSR)
except OSError as err:
if err.errno != errno.ENOENT:
raise
finally:
os.chdir(current)
finally:
gfs.sync()
gfs.umount_all()
gfs.kill_subprocess()
def _cleanup_iso(self):
"""
Method to cleanup the local ISO contents.
"""
self.log.info("Cleaning up old ISO data")
# if we are running as non-root, then there might be some files left
# around that are not writable, which means that the rmtree below would
# fail. Recurse into the iso_contents tree, doing a chmod +w on
# every file and directory to make sure the rmtree succeeds
for dirpath, dirnames, filenames in os.walk(self.iso_contents):
os.chmod(dirpath, stat.S_IWUSR|stat.S_IXUSR|stat.S_IRUSR)
for name in filenames:
try:
# if there are broken symlinks in the ISO,
# then the below might fail. This probably
# isn't fatal, so just allow it and go on
os.chmod(os.path.join(dirpath, name), stat.S_IRUSR|stat.S_IWUSR)
except OSError as err:
if err.errno != errno.ENOENT:
raise
self.rmtree_and_sync(self.iso_contents)
def rmtree_and_sync(self, directory):
"""
Function to remove a directory tree and do an fsync afterwards. Because
the removal of the directory tree can cause a lot of metadata updates, it
can cause a lot of disk activity. By doing the fsync, we ensure that any
metadata updates caused by us will not cause subsequent steps to fail. This
cannot help if the system is otherwise very busy, but it does ensure that
the problem is not self-inflicted.
"""
shutil.rmtree(directory)
fd = os.open(os.path.dirname(directory), os.O_RDONLY)
try:
os.fsync(fd)
finally:
os.close(fd)
def subprocess_check_output(self, *popenargs, **kwargs):
"""
Function to call a subprocess and gather the output.
"""
if 'stdout' in kwargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
if 'stderr' in kwargs:
raise ValueError('stderr argument not allowed, it will be overridden.')
self.executable_exists(popenargs[0][0])
# NOTE: it is very, very important that we use temporary files for
# collecting stdout and stderr here. There is a nasty bug in python
# subprocess; if your process produces more than 64k of data on an fd that
# is using subprocess.PIPE, the whole thing will hang. To avoid this, we
# use temporary fds to capture the data
stdouttmp = tempfile.TemporaryFile()
stderrtmp = tempfile.TemporaryFile()
process = subprocess.Popen(stdout=stdouttmp, stderr=stderrtmp, *popenargs,
**kwargs)
process.communicate()
retcode = process.poll()
stdouttmp.seek(0, 0)
stdout = stdouttmp.read()
stdouttmp.close()
stderrtmp.seek(0, 0)
stderr = stderrtmp.read()
stderrtmp.close()
if retcode:
cmd = ' '.join(*popenargs)
raise SubprocessException("'%s' failed(%d): %s" % (cmd, retcode, stderr), retcode)
return (stdout, stderr, retcode)
def executable_exists(self, program):
"""
Function to find out whether an executable exists in the PATH
of the user. If so, the absolute path to the executable is returned.
If not, an exception is raised.
"""
def is_exe(fpath):
"""
Helper method to check if a file exists and is executable
"""
return os.path.exists(fpath) and os.access(fpath, os.X_OK)
if program is None:
raise Exception("Invalid program name passed")
fpath, fname = os.path.split(program)
if fpath:
if is_exe(program):
return program
else:
for path in os.environ["PATH"].split(os.pathsep):
exe_file = os.path.join(path, program)
if is_exe(exe_file):
return exe_file
raise Exception("Could not find %s" % (program))

novaimagebuilder/NovaInstance.py (90 deletions)

@@ -1,90 +0,0 @@
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from time import sleep
class NovaInstance:
def __init__(self, instance, stack_env):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.last_disk_activity = 0
self.last_net_activity = 0
self.instance = instance
self.stack_env = stack_env
@property
def id(self):
"""
@return:
"""
return self.instance.id
@property
def status(self):
"""
@return:
"""
self.instance = self.stack_env.nova.servers.get(self.instance.id)
return self.instance.status
def get_disk_and_net_activity(self):
"""
@return:
"""
disk_activity = 0
net_activity = 0
diagnostics = self.instance.diagnostics()[1]
if not diagnostics:
return 0, 0
for key, value in diagnostics.items():
if ('read' in key) or ('write' in key):
disk_activity += int(value)
if ('rx' in key) or ('tx' in key):
net_activity += int(value)
return disk_activity, net_activity
def is_active(self):
"""
@return: True if the instance shows recent disk or network activity, False otherwise.
"""
self.log.debug("checking for inactivity")
try:
current_disk_activity, current_net_activity = self.get_disk_and_net_activity()
except Exception, e:
saved_exception = e
# Since we can't get disk and net activity we assume
# instance is not active (usually before instance finished
# spawning).
return False
if (current_disk_activity == self.last_disk_activity) and \
(current_net_activity < (self.last_net_activity + 4096)):
# if we saw no read or write requests since the last iteration
return False
else:
# if we did see some activity, record it
self.last_disk_activity = current_disk_activity
self.last_net_activity = current_net_activity
return True

novaimagebuilder/OSInfo.py (233 deletions)

@@ -1,233 +0,0 @@
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from gi.repository import Libosinfo as osinfo
from gi.repository import Gio
class OSInfo(object):
"""
OSInfo offers convenience methods for getting information out of libosinfo
@param path: Path (str) to the libosinfo data to use. Defaults to /usr/share/libosinfo/db
"""
def __init__(self, path='/usr/share/libosinfo/db'):
super(OSInfo, self).__init__()
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
loader = osinfo.Loader()
loader.process_path(path)
self.db = loader.get_db()
def os_id_for_shortid(self, shortid):
"""
Get the full libosinfo OS id for a given shortid.
@param shortid: The short form id for an OS record in libosinfo. (Ex. fedora18)
@return: The id for an OS record in libosinfo. (Ex. http://fedoraproject.org/fedora/18)
"""
for an_os in self.db.get_os_list().get_elements():
if an_os.get_short_id() == shortid:
return an_os.get_id()
def os_for_shortid(self, shortid):
"""
Given the shortid for an OS, get a dictionary of information about that OS.
Items in 'media_list' are libosinfo Media objects. Useful methods on these objects include:
get_url() - URL str to the media
get_initrd_path() - Path str to the initrd image within the install tree for Linux installers
get_kernel_path() - Path str to the kernel within the install tree for Linux installers
get_volume_id() - A regular expression for matching the volume ID of an ISO9660 image
get_installer() - Does the media provide an installer for the OS (True or False)
get_installer_reboots() - The number of reboots required to complete an installation
get_live() - Can an OS be booted directly from this media without installation (True or False)
Items in the 'tree_list' are libosinfo Tree objects. Useful methods on these objects include:
get_url() - URL str to the install tree
get_boot_iso_path() - Path str to the boot image iso in the install tree
get_initrd_path() - Path str to the initrd image within the install tree for Linux trees
get_kernel_path() - Path str to the kernel within the install tree for Linux trees
Items in the 'minimum_resources' and 'recommended_resources' lists are libosinfo Resources objects. Useful
methods on these objects include:
get_cpu() - The CPU frequency in Hz or -1 if a value is not available
get_n_cpus() - The number of CPUs or -1 if a value is not available
get_ram() - The amount of RAM in bytes or -1 if a value is not available
get_storage() - The amount of storage in bytes or -1 if a value is not available
Further documentation on the libosinfo API should be found at http://libosinfo.org/api/
@param shortid: A str id for an OS such as rhel5
@return: dict with keys:
name (str)
version (str)
distro (str)
family (str)
shortid (str)
id (str)
media_list (list of libosinfo.Media objects)
tree_list (list of libosinfo.Tree objects)
minimum_resources (list of libosinfo.Resources objects)
recommended_resources (list of libosinfo.Resources objects)
"""
os = self.db.get_os(self.os_id_for_shortid(shortid))
if os:
return {'name': os.get_name(),
'version': os.get_version(),
'distro': os.get_distro(),
'family': os.get_family(),
'shortid': os.get_short_id(),
'id': os.get_id(),
'media_list': os.get_media_list().get_elements(),
'tree_list': os.get_tree_list().get_elements(),
'minimum_resources': os.get_minimum_resources().get_elements(),
'recommended_resources': os.get_recommended_resources().get_elements()}
else:
return None
def os_for_iso(self, iso):
"""
Given an install ISO, get information about the OS.
*** THIS IS ONLY PARTIALLY IMPLEMENTED, USE AT YOUR OWN RISK ***
@param iso: URL of an install iso
@return: dict with keys:
name
version
distro
family
shortid
id
media_list
tree_list
minimum_resources
recommended_resources
"""
# TODO: Figure out the correct way to implement / use this method
media = osinfo.Media().create_from_location(iso)
return self.os_for_shortid(media.get_os().get_shortid())
def os_for_tree(self, tree):
"""
Given an install tree, get information about the OS.
*** THIS IS ONLY PARTIALLY IMPLEMENTED, USE AT YOUR OWN RISK ***
@param tree: URL of an install tree
@return: dict with keys:
name
version
distro
family
shortid
id
media_list
tree_list
minimum_resources
recommended_resources
"""
# TODO: Figure out the correct way to implement / use this method
install_tree = osinfo.Media().create_from_location(tree)
return self.os_for_shortid(install_tree.get_os().get_shortid())
def install_script(self, osid, configuration, profile='jeos'):
"""
Get an install script for a given OS.
@param osid: Either the shortid or id for an OS (str)
@param configuration: A dict of install script customizations with the following keys:
admin_password (required)
arch (required)
license (optional, default: None)
target_disk (optional, default: None)
script_disk (optional, default: None)
preinstall_disk (optional, default: None)
postinstall_disk (optional, default: None)
signed_drivers (optional, default: True)
keyboard (optional, default: 'en_US')
language (optional, default: 'en_US')
timezone (optional, default: 'America/New_York')
@param profile: The profile of the install. (str) 'jeos', 'desktop', etc
@return: install script as a str
"""
if not osid.startswith('http'):
osid = self.os_id_for_shortid(osid)
os = self.db.get_os(osid)
if os:
script = None
# TODO: This seems to be broken. Need to file a bug.
#script = os.find_install_script(profile)
# TODO: remove this once find_install_script() is fixed
script_list = os.get_install_script_list().get_elements()
for a_script in script_list:
if a_script.get_profile() == profile:
script = a_script
config = osinfo.InstallConfig()
config.set_admin_password(configuration['admin_password'])
config.set_hardware_arch(configuration['arch'])
if configuration.get('license'):
config.set_reg_product_key(configuration['license'])
if configuration.get('target_disk'):
config.set_target_disk(configuration['target_disk'])
if configuration.get('script_disk'):
config.set_script_disk(configuration['script_disk'])
if configuration.get('preinstall_disk'):
config.set_pre_install_drivers_disk(configuration['preinstall_disk'])
if configuration.get('postinstall_disk'):
config.set_post_install_drivers_disk(configuration['postinstall_disk'])
config.set_driver_signing(configuration.get('signed_drivers', True))
if configuration.get('keyboard'):
config.set_l10n_keyboard(configuration['keyboard'])
if configuration.get('language'):
config.set_l10n_language(configuration['language'])
if configuration.get('timezone'):
config.set_l10n_timezone(configuration['timezone'])
if script is None:
return None
return script.generate(os, config, Gio.Cancellable())
else:
return None
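# Illustrative usage sketch (not part of the original module): generating a
# 'jeos' kickstart for Fedora 18. The password and architecture values below
# are placeholders.
#
#     info = OSInfo()
#     kickstart = info.install_script('fedora18',
#                                     {'admin_password': 'secret',
#                                      'arch': 'x86_64'})
#     # kickstart holds the rendered install script text, or None when the
#     # OS id or a matching install script profile cannot be found.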
def os_ids(self, distros=None):
"""
List the operating systems available from libosinfo.
@param distros: A dict with keys being distro names and the values being the lowest version to list.
Ex. {'fedora': 17, 'rhel': 5, 'ubuntu':12, 'win':6}
@return: A dict with keys being OS shortid and values being OS name
"""
os_dict = {}
for os in self.db.get_os_list().get_elements():
if distros:
distro = os.get_distro()
version = int(os.get_version().split('.')[0]) # Just compare major versions, ie 2 instead of 2.2.8
for a_distro in distros:
if a_distro == distro and version >= distros[a_distro]:
os_dict[os.get_short_id()] = os.get_name()
else:
os_dict[os.get_short_id()] = os.get_name()
return os_dict
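# Illustrative usage sketch (not part of the original module): restricting the
# listing to the distributions and minimum versions a deployment cares about.
#
#     info = OSInfo()
#     recent = info.os_ids({'fedora': 17, 'rhel': 5, 'ubuntu': 12, 'win': 6})
#     # recent maps shortids such as 'fedora18' to human readable OS names.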

View File

@ -1,165 +0,0 @@
# encoding: utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from CacheManager import CacheManager
from BaseOS import BaseOS
from OSInfo import OSInfo
class RedHatOS(BaseOS):
def __init__(self, osinfo_dict, install_type, install_media_location, install_config, install_script = None):
super(RedHatOS, self).__init__(osinfo_dict, install_type, install_media_location, install_config, install_script)
#TODO: Check for direct boot - for now we are using environments
# where we know it is present
#if not self.env.is_direct_boot():
# raise Exception("Direct Boot feature required - Installs using syslinux stub not yet implemented")
if install_type == "iso" and not self.env.is_cdrom():
raise Exception("ISO installs require a Nova environment that can \
support CDROM block device mapping")
if not install_script:
info = OSInfo()
install_script_string = info.install_script(self.osinfo_dict['shortid'], self.install_config)
install_script_string = install_script_string.replace('reboot','poweroff')
if self.install_type == 'tree':
install_script_string = install_script_string.replace('cdrom','')
if self.install_media_location:
url = self.install_media_location
else:
url = self.osinfo_dict['tree_list'][0].get_url()
self.install_script = "url --url=%s\n%s" % (url,
install_script_string)
else:
self.install_script = install_script
def prepare_install_instance(self):
""" Method to prepare all necessary local and remote images for an
install. This method may require significant local disk or CPU
resource.
"""
self.cmdline = "ks=http://169.254.169.254/latest/user-data"
#If direct boot option is available, prepare kernel and ramdisk
if self.env.is_direct_boot():
if self.install_type == "iso":
iso_locations = self.cache.retrieve_and_cache_object(
"install-iso", self, self.install_media_location, True)
self.iso_volume = iso_locations['cinder']
self.iso_aki = self.cache.retrieve_and_cache_object(
"install-iso-kernel", self, None, True)['glance']
self.iso_ari = self.cache.retrieve_and_cache_object(
"install-iso-initrd", self, None, True)['glance']
self.log.debug ("Prepared cinder iso (%s), aki (%s) and ari \
(%s) for install instance" % (self.iso_volume,
self.iso_aki, self.iso_ari))
if self.install_type == "tree":
kernel_location = "%s%s" % (self.install_media_location,
self.url_content_dict()["install-url-kernel"])
ramdisk_location = "%s%s" % (self.install_media_location,
self.url_content_dict()["install-url-initrd"])
self.tree_aki = self.cache.retrieve_and_cache_object(
"install-url-kernel", self, kernel_location,
True)['glance']
self.tree_ari = self.cache.retrieve_and_cache_object(
"install-url-initrd", self, ramdisk_location,
True)['glance']
self.log.debug ("Prepared glance aki (%s) and ari (%s) for \
install instance" % (self.tree_aki, self.tree_ari))
#Else, download kernel and ramdisk and prepare syslinux image with the two
else:
if self.install_type == "iso":
iso_locations = self.cache.retrieve_and_cache_object(
"install-iso", self, self.install_media_location, True)
self.iso_volume = iso_locations['cinder']
self.iso_aki = self.cache.retrieve_and_cache_object(
"install-iso-kernel", self, None, True)['local']
self.iso_ari = self.cache.retrieve_and_cache_object(
"install-iso-initrd", self, None, True)['local']
self.boot_disk_id = self.syslinux.create_syslinux_stub(
"%s syslinux" % self.os_ver_arch(), self.cmdline,
self.iso_aki, self.iso_ari)
self.log.debug("Prepared syslinux image by extracting kernel \
and ramdisk from ISO")
if self.install_type == "tree":
kernel_location = "%s%s" % (self.install_media_location,
self.url_content_dict()["install-url-kernel"])
ramdisk_location = "%s%s" % (self.install_media_location,
self.url_content_dict()["install-url-initrd"])
self.url_aki = self.cache.retrieve_and_cache_object(
"install-url-kernel", self, kernel_location,
True)['local']
self.url_ari = self.cache.retrieve_and_cache_object(
"install-url-initrd", self, ramdisk_location,
True)['local']
self.boot_disk_id = self.syslinux.create_syslinux_stub(
"%s syslinux" % self.os_ver_arch(), self.cmdline,
self.url_aki, self.url_ari)
self.log.debug("Prepared syslinux image by extracting kernel \
and ramdisk from ISO")
def start_install_instance(self):
if self.env.is_direct_boot():
self.log.debug("Launching direct boot ISO install instance")
if self.install_type == "iso":
self.install_instance = self.env.launch_instance(
root_disk=('blank', 10),
install_iso=('cinder', self.iso_volume),
aki=self.iso_aki, ari=self.iso_ari,
cmdline=self.cmdline, userdata=self.install_script)
if self.install_type == "tree":
self.install_instance = self.env.launch_instance(
root_disk=('blank', 10), aki=self.tree_aki,
ari=self.tree_ari, cmdline=self.cmdline,
userdata=self.install_script)
else:
if self.install_type == "tree":
self.log.debug("Launching syslinux install instance")
self.install_instance = self.env.launch_instance(root_disk=(
'glance', self.boot_disk_id), userdata=self.install_script)
if self.install_type == "iso":
self.install_instance = self.env.launch_instance(root_disk=(
'glance', self.boot_disk_id), install_iso=('cinder',
self.iso_volume), userdata=self.install_script)
def update_status(self):
return "RUNNING"
def wants_iso_content(self):
return True
def iso_content_dict(self):
return { "install-iso-kernel": "/images/pxeboot/vmlinuz",
"install-iso-initrd": "/images/pxeboot/initrd.img" }
def url_content_dict(self):
return { "install-url-kernel": "/images/pxeboot/vmlinuz",
"install-url-initrd": "/images/pxeboot/initrd.img" }
def abort(self):
pass
def cleanup(self):
pass
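# Illustrative usage sketch (not part of the original module): how a caller
# might drive this plugin for a tree based install. The mirror URL and the
# configuration values are placeholders.
#
#     osinfo_dict = OSInfo().os_for_shortid('fedora18')
#     plugin = RedHatOS(osinfo_dict, 'tree',
#                       'http://example.org/fedora/18/x86_64/os/',
#                       {'admin_password': 'secret', 'arch': 'x86_64'})
#     plugin.prepare_install_instance()
#     plugin.start_install_instance()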

View File

@ -1,30 +0,0 @@
# Copyright 2011 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class Singleton(object):
_instance = None
def __new__(cls, *args, **kwargs):
if cls._instance is None:
instance = super(Singleton, cls).__new__(cls)
instance._singleton_init(*args, **kwargs)
cls._instance = instance
return cls._instance
def __init__(self, *args, **kwargs):
pass
def _singleton_init(self, *args, **kwargs):
"""Initialize a singleton instance before it is registered."""
pass
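# Minimal usage sketch (not part of the original module): subclasses place
# their one-time setup in _singleton_init() rather than __init__(), because
# __init__() runs on every instantiation while _singleton_init() runs only
# when the single instance is first created. The subclass below is
# hypothetical.
#
#     class ExampleRegistry(Singleton):
#         def _singleton_init(self, defaults=None):
#             self.settings = defaults or {}
#
#     ExampleRegistry({'debug': True}) is ExampleRegistry()   # True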

View File

@ -1,502 +0,0 @@
# encoding: utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keystoneclient.v2_0 import client as keystone_client
from novaclient.v1_1 import client as nova_client
from glanceclient import client as glance_client
from cinderclient import client as cinder_client
from Singleton import Singleton
from time import sleep
from novaclient.v1_1.contrib.list_extensions import ListExtManager
import os
from NovaInstance import NovaInstance
import logging
class StackEnvironment(Singleton):
"""
StackEnvironment
"""
def _singleton_init(self):
super(StackEnvironment, self)._singleton_init()
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
# We want the following environment variables set: OS_USERNAME, OS_PASSWORD, OS_TENANT, OS_AUTH_URL
try:
username = os.environ['OS_USERNAME']
password = os.environ['OS_PASSWORD']
tenant = os.environ['OS_TENANT_NAME']
auth_url = os.environ['OS_AUTH_URL']
except Exception, e:
raise Exception("Unable to retrieve auth info from environment \
variables. exception: %s" % e.message)
try:
self.keystone = keystone_client.Client(username=username,
password=password, tenant_name=tenant, auth_url=auth_url)
self.keystone.authenticate()
except Exception, e:
raise Exception('Error authenticating with keystone. Original \
exception: %s' % e.message)
try:
self.nova = nova_client.Client(username, password, tenant,
auth_url=auth_url, insecure=True)
except Exception, e:
raise Exception('Error connecting to Nova. Nova is required for \
building images. Original exception: %s' % e.message)
try:
glance_url = self.keystone.service_catalog.get_endpoints()['image'][0]['adminURL']
self.glance = glance_client.Client('1', endpoint=glance_url,
token=self.keystone.auth_token)
except Exception, e:
raise Exception('Error connecting to glance. Glance is required for\
building images. Original exception: %s' % e.message)
try:
self.cinder = cinder_client.Client('1', username, password, tenant,
auth_url)
except:
self.cinder = None
@property
def keystone_server(self):
"""
@return: keystone client
"""
return self.keystone
@property
def glance_server(self):
"""
@return: glance client
"""
return self.glance
@property
def cinder_server(self):
"""
@return: cinder client or None
"""
return self.cinder
def upload_image_to_glance(self, name, local_path=None, location=None, format='raw', min_disk=0, min_ram=0,
container_format='bare', is_public=True, properties={}):
"""
@param name: human readable name for image in glance
@param local_path: path to an image file
@param location: URL for image file
@param format: 'raw', 'vhd', 'vmdk', 'vdi', 'iso', 'qcow2', 'aki',
'ari', 'ami'
@param min_disk: integer of minimum disk size in GB that a nova instance
needs to launch using this image
@param min_ram: integer of minimum amount of RAM in MB that a nova
instance needs to launch using this image
@param container_format: currently not used by OpenStack components, so
'bare' is a good default
@param is_public: boolean to mark an image as being publicly
available
@param properties: dictionary where keys are property names such as
ramdisk_id and kernel_id and values are the property values
@return: glance image id @raise Exception:
"""
image_meta = {'container_format': container_format, 'disk_format':
format, 'is_public': is_public, 'min_disk': min_disk, 'min_ram':
min_ram, 'name': name, 'properties': properties}
try:
image_meta['data'] = open(local_path, "rb")
except Exception, e:
if location:
image_meta['location'] = location
else:
raise e
image = self.glance.images.create(name=name)
self.log.debug("Started uploading to Glance")
image.update(**image_meta)
while image.status != 'active':
image = self.glance.images.get(image.id)
if image.status == 'error':
raise Exception('Error uploading image to Glance.')
sleep(1)
self.log.debug("Finished uploading to Glance")
return image.id
def upload_volume_to_cinder(self, name, volume_size=None, local_path=None,
location=None, format='raw', container_format='bare',
is_public=True, keep_image=True):
"""
@param name: human readable name for volume in cinder
@param volume_size: integer size in GB of volume
@param local_path: path to an image file
@param location: URL to an image file
@param format: 'raw', 'vhd', 'vmdk', 'vdi', 'iso', 'qcow2', 'aki',
'ari', 'ami'
@param container_format: currently not used by OpenStack components, so
'bare' is a good default
@param is_public: boolean to mark an image as being publicly
available
@param keep_image: currently not implemented
@return: tuple (glance image id, cinder volume id)
"""
image_id = self.upload_image_to_glance(name, local_path=local_path,
location=location, format=format, is_public=is_public)
volume_id = self._migrate_from_glance_to_cinder(image_id, volume_size)
if not keep_image:
#TODO: spawn a thread to delete image after volume is created
return volume_id
return (image_id, volume_id)
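# Illustrative usage sketch (not part of the original class): uploading a
# local qcow2 image and also making it available as a bootable cinder volume.
# The image name and file path are placeholders.
#
#     env = StackEnvironment()
#     image_id = env.upload_image_to_glance('fedora18-jeos',
#                                           local_path='/tmp/fedora18.qcow2',
#                                           format='qcow2', min_disk=10)
#     image_id, volume_id = env.upload_volume_to_cinder('fedora18-jeos',
#                                                       local_path='/tmp/fedora18.qcow2',
#                                                       format='qcow2')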
def create_volume_from_image(self, image_id, volume_size=None):
"""
@param image_id: uuid of glance image
@param volume_size: integer size in GB of volume to be created
@return: cinder volume id
"""
return self._migrate_from_glance_to_cinder(image_id, volume_size)
def delete_image(self, image_id):
"""
@param image_id: glance image id
"""
self.glance.images.get(image_id).delete()
def delete_volume(self, volume_id):
"""
@param volume_id: cinder volume id
"""
self.cinder.volumes.get(volume_id).delete()
def _migrate_from_glance_to_cinder(self, image_id, volume_size):
image = self.glance.images.get(image_id)
if not volume_size:
# Gigabytes rounded up
volume_size = int(image.size/(1024*1024*1024)+1)
self.log.debug("Started copying to Cinder")
volume = self.cinder.volumes.create(volume_size,
display_name=image.name, imageRef=image.id)
while volume.status != 'available':
volume = self.cinder.volumes.get(volume.id)
if volume.status == 'error':
volume.delete()
raise Exception('Error occurred copying glance image %s to \
volume %s' % (image_id, volume.id))
sleep(1)
self.log.debug("Finished copying to Cinder")
return volume.id
def get_volume_status(self, volume_id):
"""
@param volume_id: cinder volume id
@return: 'active', 'error', 'saving', 'deleted' (possibly more states
exist, but dkliban could not find documentation where they are all
listed)
"""
volume = self.cinder.volumes.get(volume_id)
return volume.status
def get_image_status(self, image_id):
"""
@param image_id: glance image id
@return: 'queued', 'saving', 'active', 'killed', 'deleted', or
'pending_delete'
"""
image = self.glance.images.get(image_id)
return image.status
def _create_blank_image(self, size):
rc = os.system("qemu-img create -f qcow2 blank_image.tmp %dG" % size)
if rc == 0:
return
else:
raise Exception("Unable to create blank image")
def _remove_blank_image(self):
rc = os.system("rm blank_image.tmp")
if rc == 0:
return
else:
raise Exception("Unable to create blank image")
def launch_instance(self, root_disk=None, install_iso=None,
secondary_iso=None, floppy=None, aki=None, ari=None, cmdline=None,
userdata=None):
"""
@param root_disk: tuple where first element is 'blank', 'cinder', or
'glance' and second element is size, or cinder volume id, or glance
image id.
@param install_iso: install media represented by tuple where first
element is 'cinder' or 'glance' and second element is cinder volume id
or glance image id.
@param secondary_iso: media containing extra drivers represented by
tuple where first element is 'cinder' or 'glance' and second element is
cinder volume id or glance image id.
@param floppy: media to be mounted as a floppy represented by tuple
where first element is 'cinder' or 'glance' and second element is
cinder volume id or glance image id.
@param aki: glance image id for kernel
@param ari: glance image id for ramdisk
@param cmdline: string command line argument for anaconda
@param userdata: string containing kickstart file or preseed file
@return: NovaInstance launched @raise Exception:
"""
if root_disk:
#if root disk needs to be created
if root_disk[0] == 'blank':
root_disk_size = root_disk[1]
#Create a blank qcow2 image and upload it
self._create_blank_image(root_disk_size)
if aki and ari and cmdline:
root_disk_properties = {'kernel_id': aki,
'ramdisk_id': ari, 'command_line': cmdline}
else:
root_disk_properties = {}
root_disk_image_id = self.upload_image_to_glance(
'blank %dG disk' % root_disk_size,
local_path='./blank_image.tmp', format='qcow2',
properties=root_disk_properties)
self._remove_blank_image()
elif root_disk[0] == 'glance':
root_disk_image_id = root_disk[1]
else:
raise Exception("Boot disk must be of type 'blank' or 'glance'")
if install_iso:
if install_iso[0] == 'cinder':
install_iso_id = install_iso[1]
elif install_iso[0] == 'glance':
install_iso_id = self.create_volume_from_image(install_iso[1])
else:
raise Exception("Install ISO must be of type 'cinder' or \
'glance'")
if secondary_iso:
if secondary_iso[0] == 'cinder':
secondary_iso_id = secondary_iso[1]
elif secondary_iso[0] == 'glance':
secondary_iso_id = self.create_volume_from_image(secondary_iso[1])
else:
raise Exception("Secondary ISO must be of type 'cinder' or\
'glance'")
if floppy:
if floppy[0] == 'cinder':
floppy_id = floppy[1]
elif floppy[0] == 'glance':
floppy_id = self.create_volume_from_image(floppy[1])
else:
raise Exception("Floppy must be of type 'cinder' or 'glance'")
# if direct boot is not available (Havana):
if not self.is_direct_boot():
instance = None
# 0 cdrom drives are needed
if not install_iso and not secondary_iso and not floppy:
instance = self._launch_network_install(root_disk_image_id,
userdata)
# 1 cdrom drive is needed
elif install_iso and not secondary_iso and not floppy:
instance = self._launch_single_cdrom_install(root_disk_image_id,
userdata, install_iso_id)
# 2 cdrom drives are needed
elif install_iso and secondary_iso and not floppy:
instance = self._launch_instance_with_dual_cdrom(root_disk_image_id,
install_iso_id, secondary_iso_id)
if instance:
return NovaInstance(instance, self)
#blank root disk with ISO, ISO2 and Floppy - Windows
if install_iso and secondary_iso and floppy:
instance = self._launch_windows_install(root_disk_image_id,
install_iso_id, secondary_iso_id, floppy_id)
return NovaInstance(instance, self)
#blank root disk with aki, ari and cmdline. install iso is optional.
if aki and ari and cmdline and userdata:
instance = self._launch_direct_boot(root_disk_image_id, userdata,
install_iso=install_iso_id)
return NovaInstance(instance, self)
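# Illustrative usage sketch (not part of the original class): launching an
# ISO based install onto a blank 10 GB root disk, with the install ISO
# already copied to a cinder volume. iso_volume_id and kickstart_text are
# placeholders.
#
#     env = StackEnvironment()
#     nova_instance = env.launch_instance(
#         root_disk=('blank', 10),
#         install_iso=('cinder', iso_volume_id),
#         userdata=kickstart_text)
#     # nova_instance wraps the underlying server in a NovaInstance object.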
def _launch_network_install(self, root_disk, userdata):
#TODO: check the kickstart file in userdata for sanity
self.log.debug("Starting instance for network install")
image = self.glance.images.get(root_disk)
instance = self.nova.servers.create("Install from network", image, "2",
userdata=userdata)
return instance
def _launch_single_cdrom_install(self, root_disk, userdata, install_iso):
image = self.glance.images.get(root_disk)
self.log.debug("Starting instance for single cdrom install")
if install_iso:
if self.is_cdrom():
block_device_mapping_v2 = [
{"source_type": "volume",
"destination_type": "volume",
"uuid": install_iso,
"boot_index": "1",
"device_type": "cdrom",
"disk_bus": "ide",
},
]
instance = self.nova.servers.create("Install with single cdrom",
image, "2",
block_device_mapping_v2=block_device_mapping_v2,
userdata=userdata)
return instance
else:
#TODO: use BDM mappings from grizzly to launch instance
pass
else:
raise Exception("Install ISO image id is required for single cdrom\
drive installations.")
def _launch_instance_with_dual_cdrom(self, root_disk, install_iso,
secondary_iso):
block_device_mapping_v2 = [
{"source_type": "volume",
"destination_type": "volume",
"uuid": install_iso,
"boot_index": "1",
"device_type": "cdrom",
"disk_bus": "ide",
},
{"source_type": "volume",
"destination_type": "volume",
"uuid": secondary_iso,
"boot_index": "2",
"device_type": "cdrom",
"disk_bus": "ide",
},
]
image = self.glance.images.get(root_disk)
instance = self.nova.servers.create("Install with dual cdroms", image, "2",
meta={}, block_device_mapping_v2=block_device_mapping_v2)
return instance
def _launch_direct_boot(self, root_disk, userdata, install_iso=None):
image = self.glance.images.get(root_disk)
if install_iso:
#assume that install iso is already a cinder volume
block_device_mapping_v2 = [
{"source_type": "volume",
"destination_type": "volume",
"uuid": install_iso,
"boot_index": "1",
"device_type": "cdrom",
"disk_bus": "ide",
},
]
else:
#must be a network install
block_device_mapping_v2 = None
instance = self.nova.servers.create("direct-boot-linux", image, "2",
block_device_mapping_v2=block_device_mapping_v2,
userdata=userdata)
return instance
def _launch_windows_install(self, root_disk, install_cdrom, drivers_cdrom,
autounattend_floppy):
block_device_mapping_v2 = [
{"source_type": "volume",
"destination_type": "volume",
"uuid": install_cdrom,
"boot_index": "1",
"device_type": "cdrom",
"disk_bus": "ide",
},
{"source_type": "volume",
"destination_type": "volume",
"uuid": drivers_cdrom,
"boot_index": "3",
"device_type": "cdrom",
"disk_bus": "ide",
},
{"source_type": "volume",
"destination_type": "volume",
"uuid": autounattend_floppy,
"boot_index": "2",
"device_type": "floppy",
},
]
image = self.glance.images.get(root_disk)
instance = self.nova.servers.create("windows-volume-backed", image, "2",
meta={}, block_device_mapping_v2=block_device_mapping_v2)
return instance
def is_cinder(self):
"""
Checks if cinder is available.
@return: True if cinder service is available
"""
if not self.cinder:
return False
else:
return True
def is_cdrom(self):
"""
Checks if nova allows mapping a volume as cdrom drive.
This is only available starting with Havana
@return: True if volume can be attached as cdrom
"""
nova_extension_manager = ListExtManager(self.nova)
for ext in nova_extension_manager.show_all():
if ext.name == "VolumeAttachmentUpdate" and ext.is_loaded():
return True
return False
def is_floppy(self):
#TODO: check if floppy is available.
"""
Checks if nova allows mapping a volume as a floppy drive.
This will not be available until Icehouse
@return: Currently this always returns False.
"""
return False
def is_direct_boot(self):
#TODO: check if direct boot is available
"""
Checks if nova allows booting an instance with a command line argument
This will not be available until Icehouse
@return: Currently this always returns False
"""
return False

View File

@ -1,181 +0,0 @@
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from tempfile import NamedTemporaryFile, TemporaryFile, mkdtemp
import guestfs
import shutil
import os
import subprocess
from StackEnvironment import StackEnvironment
class SyslinuxHelper:
def __init__(self):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.env = StackEnvironment()
def create_syslinux_stub(self, image_name, cmdline, kernel_filename, ramdisk_filename):
"""
@param cmdline: kernel command line
@param kernel_filename: path to kernel file
@param ramdisk_filename: path to ramdisk file
@return glance image id
"""
raw_fs_image = NamedTemporaryFile(delete=False)
raw_image_name = raw_fs_image.name
tmp_content_dir = None
glance_image_id = None
try:
qcow2_image_name = "%s.qcow2" % raw_image_name
# 200 MB sparse file
self.log.debug("Creating sparse 200 MB file")
outsize = 1024 * 1024 * 200
raw_fs_image.truncate(outsize)
raw_fs_image.close()
# Partition, format and add DOS MBR
g = guestfs.GuestFS()
g.add_drive(raw_image_name)
g.launch()
g.part_disk("/dev/sda","msdos")
g.part_set_mbr_id("/dev/sda",1,0xb)
g.mkfs("vfat", "/dev/sda1")
g.part_set_bootable("/dev/sda", 1, 1)
dosmbr = open("/usr/share/syslinux/mbr.bin").read()
ws = g.pwrite_device("/dev/sda", dosmbr, 0)
if ws != len(dosmbr):
raise Exception("Failed to write entire MBR")
# Install syslinux
g.syslinux("/dev/sda1")
#Insert kernel, ramdisk and syslinux.cfg file
tmp_content_dir = mkdtemp()
kernel_dest = os.path.join(tmp_content_dir,"vmlinuz")
shutil.copy(kernel_filename, kernel_dest)
initrd_dest = os.path.join(tmp_content_dir,"initrd.img")
shutil.copy(ramdisk_filename, initrd_dest)
syslinux_conf="""default customhd
timeout 30
prompt 1
label customhd
kernel vmlinuz
append initrd=initrd.img %s
""" % (cmdline)
f = open(os.path.join(tmp_content_dir, "syslinux.cfg"),"w")
f.write(syslinux_conf)
f.close()
# copy the tmp content to the image
g.mount_options ("", "/dev/sda1", "/")
for filename in os.listdir(tmp_content_dir):
g.upload(os.path.join(tmp_content_dir,filename),"/" + filename)
g.sync()
g.close()
try:
self.log.debug("Converting syslinux stub image from raw to qcow2")
self._subprocess_check_output(["qemu-img","convert","-c","-O","qcow2",raw_image_name, qcow2_image_name])
self.log.debug("Uploading syslinux qcow2 image to glance")
glance_image_id = self.env.upload_image_to_glance(image_name, local_path=qcow2_image_name, format='qcow2')
except Exception, e:
self.log.debug("Exception while converting syslinux image to qcow2: %s" % e)
self.log.debug("Uploading syslinux raw image to glance.")
glance_image_id = self.env.upload_image_to_glance(image_name, local_path=raw_image_name, format='raw')
finally:
self.log.debug("Removing temporary file.")
if os.path.exists(raw_image_name):
os.remove(raw_image_name)
if os.path.exists(qcow2_image_name):
os.remove(qcow2_image_name)
if tmp_content_dir:
shutil.rmtree(tmp_content_dir)
return glance_image_id
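# Illustrative usage sketch (not part of the original class): wrapping a
# previously cached kernel and ramdisk in a bootable syslinux stub image. The
# file paths and kernel command line are placeholders.
#
#     helper = SyslinuxHelper()
#     stub_image_id = helper.create_syslinux_stub(
#         'fedora18-x86_64 syslinux',
#         'ks=http://169.254.169.254/latest/user-data',
#         '/path/to/cached/vmlinuz',
#         '/path/to/cached/initrd.img')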
### Utility functions borrowed from Oz and lightly modified
def _executable_exists(self, program):
"""
Function to find out whether an executable exists in the PATH
of the user. If so, the absolute path to the executable is returned.
If not, an exception is raised.
"""
def is_exe(fpath):
"""
Helper method to check if a file exists and is executable
"""
return os.path.exists(fpath) and os.access(fpath, os.X_OK)
if program is None:
raise Exception("Invalid program name passed")
fpath, fname = os.path.split(program)
if fpath:
if is_exe(program):
return program
else:
for path in os.environ["PATH"].split(os.pathsep):
exe_file = os.path.join(path, program)
if is_exe(exe_file):
return exe_file
raise Exception("Could not find %s" % (program))
def _subprocess_check_output(self, *popenargs, **kwargs):
"""
Function to call a subprocess and gather the output.
Addresses a lack of check_output() prior to Python 2.7
"""
if 'stdout' in kwargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
if 'stderr' in kwargs:
raise ValueError('stderr argument not allowed, it will be overridden.')
self._executable_exists(popenargs[0][0])
# NOTE: it is very, very important that we use temporary files for
# collecting stdout and stderr here. There is a nasty bug in python
# subprocess; if your process produces more than 64k of data on an fd that
# is using subprocess.PIPE, the whole thing will hang. To avoid this, we
# use temporary fds to capture the data
stdouttmp = TemporaryFile()
stderrtmp = TemporaryFile()
process = subprocess.Popen(stdout=stdouttmp, stderr=stderrtmp, *popenargs,
**kwargs)
process.communicate()
retcode = process.poll()
stdouttmp.seek(0, 0)
stdout = stdouttmp.read()
stdouttmp.close()
stderrtmp.seek(0, 0)
stderr = stderrtmp.read()
stderrtmp.close()
if retcode:
cmd = ' '.join(*popenargs)
raise Exception("'%s' failed(%d): %s" % (cmd, retcode, stderr), retcode)
return (stdout, stderr, retcode)
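# Illustrative usage sketch (not part of the original class): the helper
# returns a (stdout, stderr, returncode) tuple and raises when the command
# exits non-zero, so callers can rely on the output without extra checks.
#
#     stdout, stderr, rc = self._subprocess_check_output(
#         ["qemu-img", "info", raw_image_name])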

View File

@ -1,141 +0,0 @@
# encoding: utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import guestfs
import uuid
from CacheManager import CacheManager
from ISOHelper import ISOHelper
from BaseOS import BaseOS
from tempfile import NamedTemporaryFile
from shutil import copyfile
from os import remove
class WindowsOS(BaseOS):
BLANK_FLOPPY = "/usr/share/novaimagebuilder/disk.img"
def __init__(self, osinfo_dict, install_type, install_media_location, install_config, install_script = None):
super(WindowsOS, self).__init__(osinfo_dict, install_type, install_media_location, install_config, install_script)
#TODO: Check for direct boot - for now we are using environments
# where we know it is present
#if not self.env.is_direct_boot():
# raise Exception("Direct Boot feature required - Installs using syslinux stub not yet implemented")
if install_type != "iso":
raise Exception("Only ISO installs supported for Windows installs")
if not self.env.is_cdrom():
raise Exception("ISO installs require a Nova environment that can support CDROM block device mapping")
# TODO: Remove these
self.install_artifacts = [ ]
def prepare_install_instance(self):
""" Method to prepare all necessary local and remote images for an install
This method may require significant local disk or CPU resource
"""
# These must be created and cached beforehand
# TODO: Automate
driver_locations = self.cache.retrieve_and_cache_object("driver-iso", self, None, True)
self.driver_iso_volume = driver_locations['cinder']
iso_locations = self.cache.retrieve_and_cache_object("install-iso",
self, self.install_media_location, True)
if self.env.is_floppy():
self.iso_volume = iso_locations['cinder']
self._prepare_floppy()
self.log.debug ("Prepared cinder iso (%s), driver_iso (%s) and\
floppy (%s) for install instance" % (self.iso_volume,
self.driver_iso_volume, self.floppy_volume))
else:
self._respin_iso(iso_locations['local'], "x86_64")
self.iso_volume_delete = True
def start_install_instance(self):
if self.install_type == "iso":
self.log.debug("Launching windows install instance")
if self.env.is_floppy():
self.install_instance = self.env.launch_instance(root_disk=('blank', 10),
install_iso=('cinder', self.iso_volume),
secondary_iso=('cinder',self.driver_iso_volume),
floppy=('cinder',self.floppy_volume))
else:
self.install_instance = self.env.launch_instance(root_disk=('blank', 10), install_iso=('cinder', self.iso_volume), secondary_iso=('cinder', self.driver_iso_volume))
def _respin_iso(self, iso_path, arch):
new_install_iso_name = None
try:
new_install_iso = NamedTemporaryFile(delete=False)
new_install_iso_name = new_install_iso.name
new_install_iso.close()
ih = ISOHelper(iso_path, arch)
ih._copy_iso()
ih._install_script_win_v6(self.install_script.name)
ih._generate_new_iso_win_v6(new_install_iso_name)
image_name = "install-iso-%s-%s" % (self.osinfo_dict['shortid'],
str(uuid.uuid4())[:8])
self.iso_volume = self.env.upload_volume_to_cinder(image_name,
local_path=new_install_iso_name, keep_image=False)
finally:
if new_install_iso_name:
remove(new_install_iso_name)
def _prepare_floppy(self):
self.log.debug("Preparing floppy with autounattend.xml")
unattend_floppy_name = None
unattend_file = None
try:
# Use tempfile to get a known unique temporary location for floppy image copy
unattend_floppy = NamedTemporaryFile(delete=False)
unattend_floppy_name = unattend_floppy.name
unattend_floppy.close()
copyfile(self.BLANK_FLOPPY, unattend_floppy_name)
# Create a real file copy of the unattend content for use by guestfs
unattend_file = NamedTemporaryFile()
unattend_file.write(self.install_script.read())
unattend_file.flush()
# Copy unattend into floppy via guestfs
g = guestfs.GuestFS()
g.add_drive(unattend_floppy_name)
g.launch()
g.mount_options ("", "/dev/sda", "/")
g.upload(unattend_file.name,"/autounattend.xml")
shutdown_result = g.shutdown()
g.close()
# Upload it to glance and copy to cinder
# Unique-ish name
image_name = "unattend-floppy-%s-%s" % ( self.osinfo_dict['shortid'], str(uuid.uuid4())[:8] )
self.floppy_volume = self.env.upload_volume_to_cinder(image_name, local_path=unattend_floppy_name, keep_image = False)
self.install_artifacts.append( ('cinder', self.floppy_volume ) )
finally:
if unattend_floppy_name:
remove(unattend_floppy_name)
if unattend_file:
unattend_file.close()
def update_status(self):
return "RUNNING"
def wants_iso_content(self):
return False
def abort(self):
pass
def cleanup(self):
# TODO: Remove self.install_artifacts
pass

View File

@ -1 +0,0 @@
__author__ = 'sloranz'

View File

@ -1,155 +0,0 @@
# encoding: utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import os.path
import uuid
import json
from novaimagebuilder.Singleton import Singleton
class MockCacheManager(Singleton):
"""
Mock implementation of CacheManager for unit testing.
* To test against locked or unlocked state, set the attribute 'locked' to True or False.
* To test with a populated index, set the attribute 'index' to a populated dict.
"""
CACHE_ROOT = "/tmp/MockCacheManager/"
def _singleton_init(self, *args, **kwargs):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.index = {}
self.index_update = {}
self.locked = False
if not os.path.exists(self.CACHE_ROOT):
os.mkdir(self.CACHE_ROOT)
def lock_and_get_index(self):
"""
Sets the 'locked' attribute to True.
"""
if self.locked:
pass # Should be throwing an exception
else:
self.locked = True
def write_index_and_unlock(self):
"""
Updates the 'index' dict with whatever is in 'index_update' and sets 'locked' to False.
"""
if self.locked:
if len(self.index_update) > 0:
self.index.update(self.index_update)
self.index_update = {}
self.locked = False
else:
pass # Should throw an exception telling user to lock first
def unlock_index(self):
"""
Sets 'index_update' to an empty dict and sets 'locked' to False.
"""
self.index_update = {}
self.locked = False
def retrieve_and_cache_object(self, object_type, os_plugin, source_url, save_local):
"""
Writes out a mock cache file to '/tmp/MockCacheManager' with the same naming convention used by
CacheManager.
@param object_type: A string indicating the type of object being retrieved
@param os_plugin: Instance of the delegate for the OS associated with the download
@param source_url: Location from which to retrieve the object/file
@param save_local: bool indicating whether a local copy of the object should be saved
@return: dict containing the various cached locations of the file
local: Local path to file (contents are this dict)
glance: Glance object UUID (does not correlate to a real Glance object)
cinder: Cinder object UUID (does not correlate to a real Cinder object)
"""
self.lock_and_get_index()
existing_cache = self._get_index_value(os_plugin.os_ver_arch(), object_type, None)
if existing_cache:
self.log.debug("Found object in cache")
self.unlock_index()
return existing_cache
self.unlock_index()
self.log.debug("Object not in cache")
object_name = os_plugin.os_ver_arch() + "-" + object_type
local_object_filename = self.CACHE_ROOT + object_name
locations = {"local": local_object_filename, "glance": str(uuid.uuid4()), "cinder": str(uuid.uuid4())}
if not os.path.isfile(local_object_filename):
object_file = open(local_object_filename, 'w')
json.dump(locations, object_file)
object_file.close()
else:
self.log.warning("Local file (%s) is already present - assuming it is valid" % local_object_filename)
self._do_index_updates(os_plugin.os_ver_arch(), object_type, locations)
return locations
def _get_index_value(self, os_ver_arch, name, location):
if self.index is None:
raise Exception("Attempt made to read index values while a locked index is not present")
if not os_ver_arch in self.index:
return None
if not name in self.index[os_ver_arch]:
return None
# If the specific location is not requested, return the whole location dict
if not location:
return self.index[os_ver_arch][name]
if not location in self.index[os_ver_arch][name]:
return None
else:
return self.index[os_ver_arch][name][location]
def _set_index_value(self, os_ver_arch, name, location, value):
if self.index is None:
raise Exception("Attempt made to read index values while a locked index is not present")
if not os_ver_arch in self.index:
self.index_update[os_ver_arch] = {}
if not name in self.index[os_ver_arch]:
self.index_update[os_ver_arch][name] = {}
# If the specific location is not specified, assume value is the entire dict
if not location:
if type(value) is not dict:
raise Exception("When setting a value without a location, the value must be a dict")
self.index_update[os_ver_arch][name] = value
return
self.index[os_ver_arch][name][location] = value
def _do_index_updates(self, os_ver_arch, object_type, locations):
self.lock_and_get_index()
self._set_index_value(os_ver_arch, object_type, None, locations )
self.write_index_and_unlock()

View File

@ -1,47 +0,0 @@
# encoding: utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import uuid
class MockNovaInstance(object):
INSTANCE_STATUS_LIST = ('status placeholder',)
def __init__(self, instance, stack_env):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.last_disk_activity = 0
self.last_net_activity = 0
self.instance = instance
self.instance_id = str(uuid.uuid4())
self.instance_status_index = 0
self.stack_env = stack_env
self.active = True
@property
def id(self):
return self.instance_id
@property
def status(self):
return self.INSTANCE_STATUS_LIST[self.instance_status_index]
def get_disk_and_net_activity(self):
return self.last_disk_activity, self.last_net_activity
def is_active(self):
return self.active

View File

@ -1,62 +0,0 @@
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from MockStackEnvironment import MockStackEnvironment
from MockCacheManager import MockCacheManager
class MockOS(object):
def __init__(self, osinfo_dict, install_type, install_media_location, install_config, install_script=None):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.status = 'RUNNING' # Possible return values: INPROGRESS, FAILED, COMPLETE
self.env = MockStackEnvironment()
self.cache = MockCacheManager()
self.osinfo_dict = osinfo_dict
self.install_type = install_type
self.install_media_location = install_media_location
self.install_config = install_config
self.install_script = install_script
self.iso_content_flag = False
self.iso_content_dict = {}
self.url_content_dict = {}
def os_ver_arch(self):
return self.osinfo_dict['shortid'] + "-" + self.install_config['arch']
def prepare_install_instance(self):
pass
def start_install_instance(self):
pass
def update_status(self):
return self.status
def wants_iso_content(self):
return self.iso_content_flag
def iso_content_dict(self):
return self.iso_content_dict
def url_content_dict(self):
return self.url_content_dict
def abort(self):
pass
def cleanup(self):
pass

View File

@ -1,113 +0,0 @@
# encoding: utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO: add failures
import uuid
import logging
from novaimagebuilder.Singleton import Singleton
from MockNovaInstance import MockNovaInstance
class MockStackEnvironment(Singleton):
# From http://docs.openstack.org/api/openstack-block-storage/2.0/content/Volumes.html
# this does not match the docstring in novaimagebuilder.StackEnvironment.get_volume_status()
VOLUME_STATUS_LIST = ('CREATING',
'AVAILABLE',
'ATTACHING',
'IN-USE',
'DELETING',
'ERROR',
'ERROR_DELETING',
'BACKING-UP',
'RESTORING-BACKUP',
'ERROR_RESTORING')
# From the docstring in novaimagebuilder.StackEnvironment.get_image_status()
IMAGE_STATUS_LIST = ('QUEUED', 'SAVING', 'ACTIVE', 'KILLED', 'DELETED', 'PENDING_DELETE')
def _singleton_init(self):
super(MockStackEnvironment, self)._singleton_init()
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
# Attributes controlling Mock behavior
self.cinder = False
self.cdrom = False
self.floppy = False
self.direct_boot = False
self.keystone_srvr = None
self.glance_srvr = None
self.cinder_srvr = None
self.failure = {'status': False, 'timeout': 0}
self.volume_status_index = 1
self.image_status_index = 2
@property
def keystone_server(self):
return self.keystone_srvr
@property
def glance_server(self):
return self.glance_srvr
@property
def cinder_server(self):
return self.cinder_srvr
def is_cinder(self):
return self.cinder
def is_cdrom(self):
return self.cdrom
def is_floppy(self):
return self.floppy
def is_direct_boot(self):
return self.direct_boot
def upload_image_to_glance(self, name, local_path=None, location=None, format='raw', min_disk=0, min_ram=0,
container_format='bare', is_public=True, properties={}):
#self.log.debug("Doing mock glance upload")
#self.log.debug("File: (%s) - Name (%s) - Format (%s) - Container (%s)" %
# (local_path, name, format, container_format))
return uuid.uuid4()
def upload_volume_to_cinder(self, name, volume_size=None, local_path=None, location=None, format='raw',
container_format='bare', is_public=True, keep_image=True):
#self.log.debug("Doing mock glance upload and cinder copy")
#self.log.debug("File: (%s) - Name (%s) - Format (%s) - Container (%s)" %
# (local_path, name, format, container_format))
return uuid.uuid4(), uuid.uuid4()
def create_volume_from_image(self, image_id, volume_size=None):
return uuid.uuid4()
def delete_image(self, image_id):
pass
def delete_volume(self, volume_id):
pass
def get_volume_status(self, volume_id):
return self.VOLUME_STATUS_LIST[self.volume_status_index]
def get_image_status(self, image_id):
return self.IMAGE_STATUS_LIST[self.image_status_index]
def launch_instance(self, root_disk=None, install_iso=None, secondary_iso=None, floppy=None, aki=None, ari=None,
cmdline=None, userdata=None):
return MockNovaInstance(object(), self)

View File

View File

@ -1,85 +0,0 @@
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import TestCase
from novaimagebuilder.OSInfo import OSInfo
class TestOSInfo(TestCase):
def setUp(self):
self.osinfo = OSInfo()
def test_os_id_for_shortid(self):
os_list = self.osinfo.db.get_os_list().get_elements()
for os in os_list:
self.assertEqual(self.osinfo.os_id_for_shortid(os.get_short_id()), os.get_id())
def test_os_for_shortid(self):
os = self.osinfo.os_for_shortid('fedora18')
expected_keys = {'name': str, 'version': str, 'distro': str, 'family': str, 'shortid': str, 'id': str,
'media_list': list, 'tree_list': list, 'minimum_resources': list,
'recommended_resources': list}
self.assertIsNotNone(os)
self.assertIsInstance(os, dict)
# check that the correct items are in the dict (as defined in OSInfo)
# and that the values are the correct type
for key in expected_keys.keys():
self.assertIn(key, os)
self.assertIsInstance(os[key], expected_keys[key])
def test_os_for_iso(self):
# TODO: implement test
self.skipTest('%s is only partially implemented and unused.' % __name__)
def test_os_for_tree(self):
# TODO: implement test
self.skipTest('%s is only partially implemented and unused.' % __name__)
def test_install_script(self):
config = {'admin_password': 'test_pw',
'arch': 'test_arch',
'license': 'test_license_key',
'target_disk': 'C',
'script_disk': 'A',
'preinstall_disk': 'test-preinstall',
'postinstall_disk': 'test-postinstall',
'signed_drivers': False,
'keyboard': 'en_TEST',
'language': 'en_TEST',
'timezone': 'America/Chicago'}
fedora_script = self.osinfo.install_script('fedora18', config)
windows_script = self.osinfo.install_script('win2k8r2', config)
# TODO: actually check that config values were set in the script(s)
self.assertIsNotNone(fedora_script)
self.assertIsInstance(fedora_script, str)
self.assertIsNotNone(windows_script)
self.assertIsInstance(windows_script, str)
self.assertNotEqual(fedora_script, windows_script)
def test_os_ids(self):
all_ids = self.osinfo.os_ids()
fedora_ids = self.osinfo.os_ids({'fedora': 17})
self.assertIsNotNone(all_ids)
self.assertIsNotNone(fedora_ids)
self.assertIsInstance(all_ids, dict)
self.assertIsInstance(fedora_ids, dict)
self.assertLess(len(fedora_ids), len(all_ids))

View File

@ -1,31 +0,0 @@
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import TestCase
class TestCacheManager(TestCase):
def test_lock_and_get_index(self):
self.fail()
def test_write_index_and_unlock(self):
self.fail()
def test_unlock_index(self):
self.fail()
def test_retrieve_and_cache_object(self):
self.fail()

View File

@ -1,105 +0,0 @@
#!/usr/bin/python
# coding=utf-8
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
sys.path.append("../")
import MockStackEnvironment
sys.modules['StackEnvironment'] = sys.modules.pop('MockStackEnvironment')
sys.modules['StackEnvironment'].StackEnvironment = sys.modules['StackEnvironment'].MockStackEnvironment
import StackEnvironment
import novaimagebuilder.CacheManager
novaimagebuilder.CacheManager.StackEnvironment = StackEnvironment
import logging
import threading
import multiprocessing
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(levelname)s %(name)s thread(%(threadName)s) Message: %(message)s')
se = StackEnvironment.StackEnvironment()
class MockOSPlugin(object):
def __init__(self, os_ver_arch = "fedora19-x86_64", wants_iso = True ):
self.nameverarch = os_ver_arch
self.wantscdrom = wants_iso
def os_ver_arch(self):
return self.nameverarch
def wants_iso(self):
return self.wantscdrom
print "---- the following should do a glance and cinder upload"
mosp = MockOSPlugin(os_ver_arch = "fedora18-x86_64", wants_iso = False)
#mse = StackEnvironment("username","password","tenant","auth_url")
mse = StackEnvironment.StackEnvironment()
cm = novaimagebuilder.CacheManager.CacheManager()
# Create our bogus entry in the cache index and set it to 0
cm.lock_and_get_index()
cm._set_index_value("testobjOS", "testobjname", "testloc", "0")
cm.write_index_and_unlock()
class UpdateThread():
def __call__(self):
#print "about to run 20 updates"
for i in range(0,20):
cm.lock_and_get_index()
#print "--------- three lines below"
#print "In the lock - 1 next line should always show value"
value = cm._get_index_value("testobjOS", "testobjname", "testloc")
#print "In the lock - 2 value %s" % (value)
newvalue = int(value) + 1
cm._set_index_value("testobjOS", "testobjname", "testloc", str(newvalue))
#print "In the lock - 3 did update - leaving"
#print "--------- three lines above"
cm.write_index_and_unlock()
class MultiThreadProcess():
def __call__(self):
#print "Here I run 20 threads"
threads = [ ]
for i in range (0,20):
thread = threading.Thread(group=None, target=UpdateThread())
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
# Fork 20 copies of myself
processes = [ ]
for i in range(0,20):
proc = multiprocessing.Process(group=None, target=MultiThreadProcess())
processes.append(proc)
proc.start()
for proc in processes:
proc.join()
cm.lock_and_get_index()
value = cm._get_index_value("testobjOS", "testobjname", "testloc")
cm.unlock_index()
print "Final value should be 8000 and is %s" % (value)
# Have each process create 20 threads
# Have each
#cm.retrieve_and_cache_object("install-iso2", mosp, "http://repos.fedorapeople.org/repos/aeolus/imagefactory/testing/repos/rhel/imagefactory.repo",
# True)