diff --git a/.gitignore b/.gitignore deleted file mode 100644 index 1babc56..0000000 --- a/.gitignore +++ /dev/null @@ -1,59 +0,0 @@ -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] - -# C extensions -*.so - -# Distribution / packaging -.Python -env/ -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -*.egg-info/ -.installed.cfg -*.egg - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover - -# Translations -*.mo -*.pot - -# Django stuff: -*.log - -# Sphinx documentation -docs/_build/ - -# PyBuilder -target/ -*.db -*.~vsd diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 100644 index 40e57ff..0000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1,53 +0,0 @@ -# Contributions - -The Fuel plugin for ScaleIO project is licensed under the [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0) License. In order to contribute to the project you will need to do two things: - - -1. License your contribution under the [DCO](http://elinux.org/Developer_Certificate_Of_Origin "Developer Certificate of Origin") + [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0) -2. Identify the type of contribution in the commit message - - -### 1. Licensing your Contribution: - -As part of the contribution, the code comments (or license file) associated with the contribution must include the following: - -Copyright (c) 2015, EMC Corporation - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -This code is provided under the Developer Certificate of Origin - [Insert Name], [Date (e.g., 1/1/15)] - - -**For example:** - -A contribution from **Joe Developer**, an **independent developer**, submitted on **May 15th, 2015** should have an associated license (as a file and/or code comments) like this: - -Copyright (c) 2015, Joe Developer - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -This code is provided under the Developer Certificate of Origin - Joe Developer, May 15th 2015 - -### 2. Identifying the Type of Contribution - -In addition to identifying an open source license in the documentation, **all Git commit messages** associated with a contribution must identify the type of contribution (i.e., Bug Fix, Patch, Script, Enhancement, Tool Creation, or Other).
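As an illustration of the commit-message rule above (the subject line, change description, and wording are hypothetical, not taken from the project's history), a bug-fix contribution could record its type and a DCO sign-off like this:

```
$ git commit -s -m "Fix volume lookup retry handling

Type of contribution: Bug Fix"
```

The `-s` (`--signoff`) flag appends a standard `Signed-off-by:` line, which is the usual way of asserting the Developer Certificate of Origin; the exact phrasing of the contribution type is left to the contributor as long as one of the listed types is named.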
diff --git a/LICENSE b/LICENSE deleted file mode 100644 index 81fa3e6..0000000 --- a/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ -Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. 
Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "{}" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright (c) 2015, EMC Corporation - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/README.md b/README.md deleted file mode 100644 index eaef333..0000000 --- a/README.md +++ /dev/null @@ -1,112 +0,0 @@ -# ScaleIO-Cinder Plugin for Fuel - -## Overview - -The `ScaleIO-Cinder` plugin leverages an existing ScaleIO cluster by configuring Cinder to use ScaleIO as the block storage backend. - -If you are looking to deploy a new ScaleIO cluster on Fuel slave nodes and replace the default OpenStack volume backend with ScaleIO, please take a look at the [ScaleIO](https://github.com/openstack/fuel-plugin-scaleio) plugin. - - -## Requirements - -| Requirement | Version/Comment | |----------------------------------|-----------------| | Mirantis OpenStack compatibility | 6.1/7.0 | - - -## Recommendations - -None. - -## Limitations - -The following table shows the current limitations: - -![ScaleIOSupport](https://github.com/openstack/fuel-plugin-scaleio-cinder/blob/master/doc/source/images/SIO_Support.png) - - -# Installation Guide - -## ScaleIO-Cinder Plugin install from RPM file - -To install the ScaleIO-Cinder plugin, follow these steps: - -1. Download the plugin from the [Fuel Plugins Catalog](https://software.mirantis.com/download-mirantis-openstack-fuel-plug-ins/). - -2. Copy the plugin file to the Fuel Master node. Follow the [Quick start guide](https://software.mirantis.com/quick-start/) if you don't have a running Fuel Master node yet. - ``` - $ scp scaleio-cinder-1.5-1.5.0-1.noarch.rpm root@:/tmp/ - ``` - -3. Log into the Fuel Master node and install the plugin using the fuel command line. - ``` - $ fuel plugins --install /tmp/scaleio-cinder-1.5-1.5.0-1.noarch.rpm - ``` - -4. Verify that the plugin is installed correctly. - ``` - $ fuel plugins - ``` - -## ScaleIO-Cinder Plugin install from source code - -To install the ScaleIO-Cinder Plugin from source code, you first need to prepare an environment to build the RPM file of the plugin. The recommended approach is to build the RPM file directly on the Fuel Master node so that you won't have to copy that file later. - -Prepare an environment for building the plugin on the **Fuel Master node**. - -1. Install the standard Linux development tools: - ``` - $ yum install createrepo rpm rpm-build dpkg-devel - ``` - -2. Install the Fuel Plugin Builder. To do that, you should first get pip: - ``` - $ easy_install pip - ``` - -3. Then install the Fuel Plugin Builder (the `fpb` command line) with `pip`: - ``` - $ pip install fuel-plugin-builder - ``` - -*Note: You may also have to build the Fuel Plugin Builder if the package version of the -plugin is higher than the package version supported by the Fuel Plugin Builder you get from `pypi`. -In this case, please refer to the section "Preparing an environment for plugin development" -of the [Fuel Plugins wiki](https://wiki.openstack.org/wiki/Fuel/Plugins) if you -need further instructions about how to build the Fuel Plugin Builder.* - -4. Clone the ScaleIO Plugin git repository: - ``` - $ git clone --recursive git@github.com:openstack/fuel-plugin-scaleio-cinder.git - ``` - -5. Check that the plugin is valid: - ``` - $ fpb --check ./fuel-plugin-scaleio-cinder - ``` - -6.
Build the plugin: - ``` - $ fpb --build ./fuel-plugin-scaleio-cinder - ``` - -7. Now you have created an RPM file that you can install using the steps described above. The RPM file will be located in: - ``` - $ ./fuel-plugin-scaleio-cinder/scaleio-cinder-1.5-1.5.0-1.noarch.rpm - ``` - -# User Guide - -Please read the [ScaleIO Plugin User Guide](doc). - -# Contributions - -Please read the [CONTRIBUTING.md](CONTRIBUTING.md) document for the latest information about contributions. - -# Bugs, requests, questions - -Please use the [Launchpad project site](https://launchpad.net/fuel-plugin-scaleio-cinder) to report bugs, request features, ask questions, etc. - -# License - -Please read the [LICENSE](LICENSE) document for the latest licensing information. diff --git a/README.rst b/README.rst new file mode 100644 index 0000000..86e34d6 --- /dev/null +++ b/README.rst @@ -0,0 +1,10 @@ +This project is no longer maintained. + +The contents of this repository are still available in the Git +source code management system. To see the contents of this +repository before it reached its end of life, please check out the +previous commit with "git checkout HEAD^1". + +For any further questions, please email +openstack-discuss@lists.openstack.org or join #openstack-dev on +Freenode. diff --git a/checksums.sha1 b/checksums.sha1 deleted file mode 100644 index 0166ea8..0000000 --- a/checksums.sha1 +++ /dev/null @@ -1,35 +0,0 @@ -c700a8b9312d24bdc57570f7d6a131cf63d89016 LICENSE -f15e1db1b640c3ae73cc129934f9d914440b0250 README.md -bffb5460de132beba188914cb0dcc14e9dc4e36b deployment_scripts/deploy.sh -7eb8ef1f761b13f04c8beb1596bdb95bc281b54a deployment_scripts/install_scaleio_compute.pp -2f62d18be6c2a612f8513815c95d2ca3a20600e6 deployment_scripts/install_scaleio_controller.pp -76f95bec5ebca8a29423a51be08241f890cccc24 deployment_scripts/puppet/install_scaleio_compute/files/scaleio.filters -c9efb58603cc6e448c2974c4a095cda14f13b431 deployment_scripts/puppet/install_scaleio_compute/files/scaleiolibvirtdriver.py -056e1427d651c8991c7caa3d82e62a3f0a5959d3 deployment_scripts/puppet/install_scaleio_compute/manifests/init.pp -76f95bec5ebca8a29423a51be08241f890cccc24 deployment_scripts/puppet/install_scaleio_controller/files/scaleio.filters -8dbfc34be2c4b4348736306de01769f3daf78b11 deployment_scripts/puppet/install_scaleio_controller/files/scaleio.py -97151193ca7cafc39bb99e11175c2bd8e07410e1 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/parser/functions/get_fault_set.rb -ea0175506182a5d1265060ebca7c3c439e446042 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/parser/functions/get_sds_device_pairs.rb -fe5b7322b0f4d1b18959d6d399b20f98568e30eb deployment_scripts/puppet/install_scaleio_controller/lib/puppet/provider/scaleio_cluster_create/init_cluster.rb -015351dfe07ca666e4c50186b09b89abcaab5959 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/type/scaleio_cluster_create.rb -f46c0cf37a4e7c3a9f0d8a4d1d5c9b0fdd567692 deployment_scripts/puppet/install_scaleio_controller/manifests/init.pp -2cf472314221fbc1520f9ec76c0eb47570a2f444 deployment_scripts/puppet/install_scaleio_controller/templates/CentOS-Base.repo -d280a227cfb05d67795d1a03bacfd781900b6134 deployment_scripts/puppet/install_scaleio_controller/templates/cinder_scaleio.config.erb -29162123f2ad50753d7f5cb3be9d5af5687de10b deployment_scripts/puppet/install_scaleio_controller/templates/epel.repo -d501258114fecc5e677b42486f91436ce24bf912 deployment_scripts/puppet/install_scaleio_controller/templates/gatewayUser.properties.erb 
-e644fa23da337234dfa78e03fd2f2be8162e5617 deployment_scripts/puppet/remove_scaleio_repo/manifests/init.pp -73f9232026d4bd9e74f8e97afadfc972044c64cf deployment_scripts/remove_scaleio_repo.pp -0b552e1a89b852857efe0b6fe1c368feb3870dd9 environment_config.yaml -ede2ec1bf0bdb1455f3a0b56901ef27ed214645d metadata.yaml -83c3d6d1526da89aed2fc1e24ec8bfacb4a3ea1e pre_build_hook -da39a3ee5e6b4b0d3255bfef95601890afd80709 repositories/centos/.gitkeep -7246778c54a204f01064348b067abeb4a766a24b repositories/centos/repodata/2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee-filelists.sqlite.bz2 -ad0107628e9b6dd7a82553ba5cb447388e50900a repositories/centos/repodata/401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93-filelists.xml.gz -1eb13a25318339d9e8157f0bf80419c019fa5000 repositories/centos/repodata/6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670-other.xml.gz -ebe841ac4c94ae950cfc8f5f80bc6707eb39e456 repositories/centos/repodata/ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331-primary.sqlite.bz2 -f7affeb9ed7e353556e43caf162660cae95d8d19 repositories/centos/repodata/d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69-other.sqlite.bz2 -84b124bc4de1c04613859bdb7af8d5fef021e3bb repositories/centos/repodata/dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9-primary.xml.gz -e50be018e61c5d5479cd6734fc748a821440daf8 repositories/centos/repodata/repomd.xml -da39a3ee5e6b4b0d3255bfef95601890afd80709 repositories/ubuntu/.gitkeep -cbecb9edd9e08fbebf280633bc72c69ff735b8c7 repositories/ubuntu/Packages.gz -25e9290ad1ca50f8346c3beb59c6cdcdef7ecca2 tasks.yaml diff --git a/deployment_scripts/deploy.sh b/deployment_scripts/deploy.sh deleted file mode 100644 index b74477f..0000000 --- a/deployment_scripts/deploy.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/bash - -# It's a script which deploys your plugin -echo scaleio > /tmp/scaleio diff --git a/deployment_scripts/install_scaleio_compute.pp b/deployment_scripts/install_scaleio_compute.pp deleted file mode 100644 index 1e143d1..0000000 --- a/deployment_scripts/install_scaleio_compute.pp +++ /dev/null @@ -1,2 +0,0 @@ -$plugin_settings = hiera('scaleio-cinder') -class {'install_scaleio_compute': } diff --git a/deployment_scripts/install_scaleio_controller.pp b/deployment_scripts/install_scaleio_controller.pp deleted file mode 100644 index 557ba86..0000000 --- a/deployment_scripts/install_scaleio_controller.pp +++ /dev/null @@ -1,2 +0,0 @@ -$plugin_settings = hiera('scaleio-cinder') -class {'install_scaleio_controller': } diff --git a/deployment_scripts/puppet/install_scaleio_compute/files/6.1/scaleiolibvirtdriver.py b/deployment_scripts/puppet/install_scaleio_compute/files/6.1/scaleiolibvirtdriver.py deleted file mode 100644 index 547c7fa..0000000 --- a/deployment_scripts/puppet/install_scaleio_compute/files/6.1/scaleiolibvirtdriver.py +++ /dev/null @@ -1,329 +0,0 @@ - -# Copyright (c) 2013 EMC Corporation -# All Rights Reserved - -# This software contains the intellectual property of EMC Corporation -# or is licensed to EMC Corporation from third parties. Use of this -# software and the intellectual property contained therein is expressly -# limited to the terms and conditions of the License Agreement under which -# it is provided by or on behalf of EMC. 
- - -import glob -import hashlib -import os -import time -import urllib2 -import urlparse -import requests -import json -import re -import sys -import urllib - -from oslo.config import cfg - -from nova import exception -from nova.openstack.common.gettextutils import _ -from nova.openstack.common import log as logging -from nova.openstack.common import loopingcall -from nova.openstack.common import processutils -from nova import paths -from nova.storage import linuxscsi -from nova import utils -from nova.virt.libvirt import config as vconfig -from nova.virt.libvirt import utils as virtutils -from nova.virt.libvirt.volume import LibvirtBaseVolumeDriver - -LOG = logging.getLogger(__name__) - -volume_opts = [ - cfg.IntOpt('num_iscsi_scan_tries', - default=3, - help='number of times to rescan iSCSI target to find volume'), - cfg.IntOpt('num_iser_scan_tries', - default=3, - help='number of times to rescan iSER target to find volume'), - cfg.StrOpt('rbd_user', - help='the RADOS client name for accessing rbd volumes'), - cfg.StrOpt('rbd_secret_uuid', - help='the libvirt uuid of the secret for the rbd_user' - 'volumes'), - cfg.StrOpt('nfs_mount_point_base', - default=paths.state_path_def('mnt'), - help='Dir where the nfs volume is mounted on the compute node'), - cfg.StrOpt('nfs_mount_options', - help='Mount options passed to the nfs client. See section ' - 'of the nfs man page for details'), - cfg.IntOpt('num_aoe_discover_tries', - default=3, - help='number of times to rediscover AoE target to find volume'), - cfg.StrOpt('glusterfs_mount_point_base', - default=paths.state_path_def('mnt'), - help='Dir where the glusterfs volume is mounted on the ' - 'compute node'), - cfg.BoolOpt('libvirt_iscsi_use_multipath', - default=False, - help='use multipath connection of the iSCSI volume'), - cfg.BoolOpt('libvirt_iser_use_multipath', - default=False, - help='use multipath connection of the iSER volume'), - cfg.StrOpt('scality_sofs_config', - help='Path or URL to Scality SOFS configuration file'), - cfg.StrOpt('scality_sofs_mount_point', - default='$state_path/scality', - help='Base dir where Scality SOFS shall be mounted'), - cfg.ListOpt('qemu_allowed_storage_drivers', - default=[], - help='Protocols listed here will be accessed directly ' - 'from QEMU. Currently supported protocols: [gluster]') - ] - -CONF = cfg.CONF -CONF.register_opts(volume_opts) - -OK_STATUS_CODE=200 -VOLUME_NOT_MAPPED_ERROR=84 -VOLUME_ALREADY_MAPPED_ERROR=81 - -class LibvirtScaleIOVolumeDriver(LibvirtBaseVolumeDriver): - """Class implements libvirt part of volume driver for ScaleIO cinder driver.""" - local_sdc_id = None - mdm_id = None - pattern3 = None - - def __init__(self, connection): - """Create back-end to nfs.""" - LOG.warning("ScaleIO libvirt volume driver INIT") - super(LibvirtScaleIOVolumeDriver, - self).__init__(connection, is_block_dev=False) - - def find_volume_path(self, volume_id): - - LOG.info("looking for volume %s" % volume_id) - #look for the volume in /dev/disk/by-id directory - disk_filename = "" - tries = 0 - while not disk_filename: - if (tries > 15): - raise exception.NovaException("scaleIO volume {0} not found at expected path ".format(volume_id)) - by_id_path = "/dev/disk/by-id" - if not os.path.isdir(by_id_path): - LOG.warn("scaleIO volume {0} not yet found (no directory /dev/disk/by-id yet). 
Try number: {1} ".format(volume_id, tries)) - tries = tries + 1 - time.sleep(1) - continue - filenames = os.listdir(by_id_path) - LOG.warning("Files found in {0} path: {1} ".format(by_id_path, filenames)) - for filename in filenames: - if (filename.startswith("emc-vol") and filename.endswith(volume_id)): - disk_filename = filename - if not disk_filename: - LOG.warn("scaleIO volume {0} not yet found. Try number: {1} ".format(volume_id, tries)) - tries = tries + 1 - time.sleep(1) - - if (tries != 0): - LOG.warning("Found scaleIO device {0} after {1} retries ".format(disk_filename, tries)) - full_disk_name = by_id_path + "/" + disk_filename - LOG.warning("Full disk name is " + full_disk_name) - return full_disk_name -# path = os.path.realpath(full_disk_name) -# LOG.warning("Path is " + path) -# return path - - def _get_client_id(self, server_ip, server_port, server_username, server_password, server_token, sdc_ip): - request = "https://" + server_ip + ":" + server_port + "/api/types/Client/instances/getByIp::" + sdc_ip + "/" - LOG.info("ScaleIO get client id by ip request: %s" % request) - r = requests.get(request, auth=(server_username, server_token), verify=False) - r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token) - - sdc_id = r.json() - if (sdc_id == '' or sdc_id is None): - msg = ("Client with ip %s wasn't found " % (sdc_ip)) - LOG.error(msg) - raise exception.NovaException(data=msg) - if (r.status_code != 200 and "errorCode" in sdc_id): - msg = ("Error getting sdc id from ip %s: %s " % (sdc_ip, sdc_id['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) - LOG.info("ScaleIO sdc id is %s" % sdc_id) - return sdc_id - - def _get_volume_id(self, server_ip, server_port, server_username, server_password, server_token, volname): - volname_encoded = urllib.quote(volname, '') - volname_double_encoded = urllib.quote(volname_encoded, '') -# volname = volname.replace('/', '%252F') - LOG.info("volume name after double encoding is %s " % volname_double_encoded) - request = "https://" + server_ip + ":" + server_port + "/api/types/Volume/instances/getByName::" + volname_double_encoded - LOG.info("ScaleIO get volume id by name request: %s" % request) - r = requests.get(request, auth=(server_username, server_token), verify=False) - r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token) - - volume_id = r.json() - if (volume_id == '' or volume_id is None): - msg = ("Volume with name %s wasn't found " % (volname)) - LOG.error(msg) - raise exception.NovaException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in volume_id): - msg = ("Error getting volume id from name %s: %s " % (volname, volume_id['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) - LOG.info("ScaleIO volume id is %s" % volume_id) - return volume_id - - def _check_response(self, response, request, server_ip, server_port, server_username, server_password, server_token): - if (response.status_code == 401 or response.status_code == 403): - LOG.info("Token is invalid, going to re-login and get a new one") - login_request = "https://" + server_ip + ":" + server_port + "/api/login" - r = requests.get(login_request, auth=(server_username, server_password), verify=False) - token = r.json() - #repeat request with valid token - LOG.debug("going to perform request again {0} with valid token".format(request)) - res = requests.get(request, auth=(server_username, token), verify=False) - 
return res - return response - - - def connect_volume(self, connection_info, disk_info): - """Connect the volume. Returns xml for libvirt.""" - conf = super(LibvirtScaleIOVolumeDriver, - self).connect_volume(connection_info, - disk_info) - LOG.info("scaleIO connect volume in scaleio libvirt volume driver") - data = connection_info - LOG.info("scaleIO connect to stuff "+str(data)) - data = connection_info['data'] - LOG.info("scaleIO connect to joined "+str(data)) - LOG.info("scaleIO Dsk info "+str(disk_info)) - volname = connection_info['data']['scaleIO_volname'] - #sdc ip here is wrong, probably not retrieved properly in cinder driver. Currently not used. - sdc_ip = connection_info['data']['hostIP'] - server_ip = connection_info['data']['serverIP'] - server_port = connection_info['data']['serverPort'] - server_username = connection_info['data']['serverUsername'] - server_password = connection_info['data']['serverPassword'] - server_token = connection_info['data']['serverToken'] - iops_limit = connection_info['data']['iopsLimit'] - bandwidth_limit = connection_info['data']['bandwidthLimit'] - LOG.debug("scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}, REST Server username: {3}, REST Server password: {4}, iops limit: {5}, bandwidth limit: {6}".format(volname, sdc_ip, server_ip, server_username, server_password, iops_limit, bandwidth_limit)) - - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: "+str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.NovaException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - -# sdc_id = self._get_client_id(server_ip, server_port, server_username, server_password, server_token, sdc_ip) - -# params = {'sdcId' : sdc_id} - - params = {'guid' : guid, 'allowMultipleMappings' : 'TRUE'} - - volume_id = self._get_volume_id(server_ip, server_port, server_username, server_password, server_token, volname) - headers = {'content-type': 'application/json'} - request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/addMappedSdc" - LOG.info("map volume request: %s" % request) - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False) - r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token) -# LOG.info("map volume response: %s" % r.text) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == VOLUME_ALREADY_MAPPED_ERROR): - msg = ("Ignoring error mapping volume %s: volume already mapped" % (volname)) - LOG.warning(msg) - else: - msg = ("Error mapping volume %s: %s" % (volname, response['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) - -# convert id to hex -# val = int(volume_id) -# id_in_hex = hex((val + (1 << 64)) % (1 << 64)) -# formated_id = id_in_hex.rstrip("L").lstrip("0x") or "0" - formated_id = volume_id - - conf.source_path = self.find_volume_path(formated_id) - conf.source_type = 'block' - -# set QoS settings after map was performed - if (iops_limit != None and bandwidth_limit != None): - params = {'sdcId' : sdc_id, 'iopsLimit': iops_limit, 'bandwidthLimitInKbps': bandwidth_limit} - request = "https://" + 
server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/setMappedSdcLimits" - LOG.info("set client limit request: %s" % request) - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False) - r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token) - if (r.status_code != OK_STATUS_CODE): - response = r.json() - LOG.info("set client limit response: %s" % response) - msg = ("Error setting client limits for volume %s: %s" % (volname, response['message'])) - LOG.error(msg) - - return conf - - def disconnect_volume(self, connection_info, disk_info): - conf = super(LibvirtScaleIOVolumeDriver, - self).disconnect_volume(connection_info, - disk_info) - LOG.info("scaleIO disconnect volume in scaleio libvirt volume driver") - volname = connection_info['data']['scaleIO_volname'] - sdc_ip = connection_info['data']['hostIP'] - server_ip = connection_info['data']['serverIP'] - server_port = connection_info['data']['serverPort'] - server_username = connection_info['data']['serverUsername'] - server_password = connection_info['data']['serverPassword'] - server_token = connection_info['data']['serverToken'] - LOG.debug("scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}".format(volname, sdc_ip, server_ip)) - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: "+str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.NovaException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - - params = {'guid' : guid} - - headers = {'content-type': 'application/json'} - - volume_id = self._get_volume_id(server_ip, server_port, server_username, server_password, server_token, volname) - request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/removeMappedSdc" - LOG.info("unmap volume request: %s" % request) - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False) - r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == VOLUME_NOT_MAPPED_ERROR): - msg = ("Ignoring error unmapping volume %s: volume not mapped" % (volname)) - LOG.warning(msg) - else: - msg = ("Error unmapping volume %s: %s" % (volname, response['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) - diff --git a/deployment_scripts/puppet/install_scaleio_compute/files/7.0/driver.py b/deployment_scripts/puppet/install_scaleio_compute/files/7.0/driver.py deleted file mode 100644 index f5d87ec..0000000 --- a/deployment_scripts/puppet/install_scaleio_compute/files/7.0/driver.py +++ /dev/null @@ -1,6809 +0,0 @@ -# Copyright 2010 United States Government as represented by the -# Administrator of the National Aeronautics and Space Administration. -# All Rights Reserved. -# Copyright (c) 2010 Citrix Systems, Inc. -# Copyright (c) 2011 Piston Cloud Computing, Inc -# Copyright (c) 2012 University Of Minho -# (c) Copyright 2013 Hewlett-Packard Development Company, L.P. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -A connection to a hypervisor through libvirt. - -Supports KVM, LXC, QEMU, UML, XEN and Parallels. - -""" - -import collections -import contextlib -import errno -import functools -import glob -import mmap -import operator -import os -import platform -import shutil -import sys -import tempfile -import time -import uuid - -import eventlet -from eventlet import greenthread -from eventlet import tpool -from lxml import etree -from oslo_concurrency import processutils -from oslo_config import cfg -from oslo_log import log as logging -from oslo_serialization import jsonutils -from oslo_utils import encodeutils -from oslo_utils import excutils -from oslo_utils import importutils -from oslo_utils import strutils -from oslo_utils import timeutils -from oslo_utils import units -import six - -from nova.api.metadata import base as instance_metadata -from nova import block_device -from nova.compute import arch -from nova.compute import hv_type -from nova.compute import power_state -from nova.compute import task_states -from nova.compute import utils as compute_utils -from nova.compute import vm_mode -from nova.console import serial as serial_console -from nova.console import type as ctype -from nova import context as nova_context -from nova import exception -from nova.i18n import _ -from nova.i18n import _LE -from nova.i18n import _LI -from nova.i18n import _LW -from nova import image -from nova.network import model as network_model -from nova import objects -from nova.openstack.common import fileutils -from nova.openstack.common import loopingcall -from nova.pci import manager as pci_manager -from nova.pci import utils as pci_utils -from nova import utils -from nova import version -from nova.virt import block_device as driver_block_device -from nova.virt import configdrive -from nova.virt import diagnostics -from nova.virt.disk import api as disk -from nova.virt.disk.vfs import guestfs -from nova.virt import driver -from nova.virt import firewall -from nova.virt import hardware -from nova.virt.libvirt import blockinfo -from nova.virt.libvirt import config as vconfig -from nova.virt.libvirt import dmcrypt -from nova.virt.libvirt import firewall as libvirt_firewall -from nova.virt.libvirt import host -from nova.virt.libvirt import imagebackend -from nova.virt.libvirt import imagecache -from nova.virt.libvirt import instancejobtracker -from nova.virt.libvirt import lvm -from nova.virt.libvirt import rbd_utils -from nova.virt.libvirt import utils as libvirt_utils -from nova.virt.libvirt import vif as libvirt_vif -from nova.virt import netutils -from nova.virt import watchdog_actions -from nova import volume -from nova.volume import encryptors - -libvirt = None - -LOG = logging.getLogger(__name__) - -libvirt_opts = [ - cfg.StrOpt('rescue_image_id', - help='Rescue ami image. 
This will not be used if an image id ' - 'is provided by the user.'), - cfg.StrOpt('rescue_kernel_id', - help='Rescue aki image'), - cfg.StrOpt('rescue_ramdisk_id', - help='Rescue ari image'), - cfg.StrOpt('virt_type', - default='kvm', - help='Libvirt domain type (valid options are: ' - 'kvm, lxc, qemu, uml, xen and parallels)'), - cfg.StrOpt('connection_uri', - default='', - help='Override the default libvirt URI ' - '(which is dependent on virt_type)'), - cfg.BoolOpt('inject_password', - default=False, - help='Inject the admin password at boot time, ' - 'without an agent.'), - cfg.BoolOpt('inject_key', - default=False, - help='Inject the ssh public key at boot time'), - cfg.IntOpt('inject_partition', - default=-2, - help='The partition to inject to : ' - '-2 => disable, -1 => inspect (libguestfs only), ' - '0 => not partitioned, >0 => partition number'), - cfg.BoolOpt('use_usb_tablet', - default=True, - help='Sync virtual and real mouse cursors in Windows VMs'), - cfg.StrOpt('live_migration_uri', - default="qemu+tcp://%s/system", - help='Migration target URI ' - '(any included "%s" is replaced with ' - 'the migration target hostname)'), - cfg.StrOpt('live_migration_flag', - default='VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, ' - 'VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED', - help='Migration flags to be set for live migration'), - cfg.StrOpt('block_migration_flag', - default='VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, ' - 'VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, ' - 'VIR_MIGRATE_NON_SHARED_INC', - help='Migration flags to be set for block migration'), - cfg.IntOpt('live_migration_bandwidth', - default=0, - help='Maximum bandwidth to be used during migration, in Mbps'), - cfg.StrOpt('snapshot_image_format', - help='Snapshot image format (valid options are : ' - 'raw, qcow2, vmdk, vdi). ' - 'Defaults to same as source image'), - cfg.StrOpt('disk_prefix', - help='Override the default disk prefix for the devices attached' - ' to a server, which is dependent on virt_type. ' - '(valid options are: sd, xvd, uvd, vd)'), - cfg.IntOpt('wait_soft_reboot_seconds', - default=120, - help='Number of seconds to wait for instance to shut down after' - ' soft reboot request is made. We fall back to hard reboot' - ' if instance does not shutdown within this window.'), - cfg.StrOpt('cpu_mode', - help='Set to "host-model" to clone the host CPU feature flags; ' - 'to "host-passthrough" to use the host CPU model exactly; ' - 'to "custom" to use a named CPU model; ' - 'to "none" to not set any CPU model. ' - 'If virt_type="kvm|qemu", it will default to ' - '"host-model", otherwise it will default to "none"'), - cfg.StrOpt('cpu_model', - help='Set to a named libvirt CPU model (see names listed ' - 'in /usr/share/libvirt/cpu_map.xml). Only has effect if ' - 'cpu_mode="custom" and virt_type="kvm|qemu"'), - cfg.StrOpt('snapshots_directory', - default='$instances_path/snapshots', - help='Location where libvirt driver will store snapshots ' - 'before uploading them to image service'), - cfg.StrOpt('xen_hvmloader_path', - default='/usr/lib/xen/boot/hvmloader', - help='Location where the Xen hvmloader is kept'), - cfg.ListOpt('disk_cachemodes', - default=[], - help='Specific cachemodes to use for different disk types ' - 'e.g: file=directsync,block=none'), - cfg.StrOpt('rng_dev_path', - help='A path to a device that will be used as source of ' - 'entropy on the host. 
Permitted options are: ' - '/dev/random or /dev/hwrng'), - cfg.ListOpt('hw_machine_type', - help='For qemu or KVM guests, set this option to specify ' - 'a default machine type per host architecture. ' - 'You can find a list of supported machine types ' - 'in your environment by checking the output of ' - 'the "virsh capabilities"command. The format of the ' - 'value for this config option is host-arch=machine-type. ' - 'For example: x86_64=machinetype1,armv7l=machinetype2'), - cfg.StrOpt('sysinfo_serial', - default='auto', - help='The data source used to the populate the host "serial" ' - 'UUID exposed to guest in the virtual BIOS. Permitted ' - 'options are "hardware", "os", "none" or "auto" ' - '(default).'), - cfg.IntOpt('mem_stats_period_seconds', - default=10, - help='A number of seconds to memory usage statistics period. ' - 'Zero or negative value mean to disable memory usage ' - 'statistics.'), - cfg.ListOpt('uid_maps', - default=[], - help='List of uid targets and ranges.' - 'Syntax is guest-uid:host-uid:count' - 'Maximum of 5 allowed.'), - cfg.ListOpt('gid_maps', - default=[], - help='List of guid targets and ranges.' - 'Syntax is guest-gid:host-gid:count' - 'Maximum of 5 allowed.') - ] - -CONF = cfg.CONF -CONF.register_opts(libvirt_opts, 'libvirt') -CONF.import_opt('host', 'nova.netconf') -CONF.import_opt('my_ip', 'nova.netconf') -CONF.import_opt('default_ephemeral_format', 'nova.virt.driver') -CONF.import_opt('use_cow_images', 'nova.virt.driver') -CONF.import_opt('enabled', 'nova.compute.api', - group='ephemeral_storage_encryption') -CONF.import_opt('cipher', 'nova.compute.api', - group='ephemeral_storage_encryption') -CONF.import_opt('key_size', 'nova.compute.api', - group='ephemeral_storage_encryption') -CONF.import_opt('live_migration_retry_count', 'nova.compute.manager') -CONF.import_opt('vncserver_proxyclient_address', 'nova.vnc') -CONF.import_opt('server_proxyclient_address', 'nova.spice', group='spice') -CONF.import_opt('vcpu_pin_set', 'nova.virt.hardware') -CONF.import_opt('vif_plugging_is_fatal', 'nova.virt.driver') -CONF.import_opt('vif_plugging_timeout', 'nova.virt.driver') -CONF.import_opt('enabled', 'nova.console.serial', group='serial_console') -CONF.import_opt('proxyclient_address', 'nova.console.serial', - group='serial_console') -CONF.import_opt('hw_disk_discard', 'nova.virt.libvirt.imagebackend', - group='libvirt') -CONF.import_group('workarounds', 'nova.utils') - -DEFAULT_FIREWALL_DRIVER = "%s.%s" % ( - libvirt_firewall.__name__, - libvirt_firewall.IptablesFirewallDriver.__name__) - -MAX_CONSOLE_BYTES = 100 * units.Ki - -# The libvirt driver will prefix any disable reason codes with this string. 
-DISABLE_PREFIX = 'AUTO: ' -# Disable reason for the service which was enabled or disabled without reason -DISABLE_REASON_UNDEFINED = None - -# Guest config console string -CONSOLE = "console=tty0 console=ttyS0" - -GuestNumaConfig = collections.namedtuple( - 'GuestNumaConfig', ['cpuset', 'cputune', 'numaconfig', 'numatune']) - -libvirt_volume_drivers = [ - 'iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver', - 'iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver', - 'local=nova.virt.libvirt.volume.LibvirtVolumeDriver', - 'fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver', - 'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver', - 'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver', - 'nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver', - 'smbfs=nova.virt.libvirt.volume.LibvirtSMBFSVolumeDriver', - 'aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver', - 'glusterfs=nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver', - 'fibre_channel=nova.virt.libvirt.volume.LibvirtFibreChannelVolumeDriver', - 'scality=nova.virt.libvirt.volume.LibvirtScalityVolumeDriver', - 'gpfs=nova.virt.libvirt.volume.LibvirtGPFSVolumeDriver', - 'quobyte=nova.virt.libvirt.volume.LibvirtQuobyteVolumeDriver', - 'scaleio=nova.virt.libvirt.volume.LibvirtScaleIOVolumeDriver', -] - - -def patch_tpool_proxy(): - """eventlet.tpool.Proxy doesn't work with old-style class in __str__() - or __repr__() calls. See bug #962840 for details. - We perform a monkey patch to replace those two instance methods. - """ - def str_method(self): - return str(self._obj) - - def repr_method(self): - return repr(self._obj) - - tpool.Proxy.__str__ = str_method - tpool.Proxy.__repr__ = repr_method - - -patch_tpool_proxy() - -VIR_DOMAIN_NOSTATE = 0 -VIR_DOMAIN_RUNNING = 1 -VIR_DOMAIN_BLOCKED = 2 -VIR_DOMAIN_PAUSED = 3 -VIR_DOMAIN_SHUTDOWN = 4 -VIR_DOMAIN_SHUTOFF = 5 -VIR_DOMAIN_CRASHED = 6 -VIR_DOMAIN_PMSUSPENDED = 7 - -LIBVIRT_POWER_STATE = { - VIR_DOMAIN_NOSTATE: power_state.NOSTATE, - VIR_DOMAIN_RUNNING: power_state.RUNNING, - # NOTE(maoy): The DOMAIN_BLOCKED state is only valid in Xen. - # It means that the VM is running and the vCPU is idle. So, - # we map it to RUNNING - VIR_DOMAIN_BLOCKED: power_state.RUNNING, - VIR_DOMAIN_PAUSED: power_state.PAUSED, - # NOTE(maoy): The libvirt API doc says that DOMAIN_SHUTDOWN - # means the domain is being shut down. So technically the domain - # is still running. SHUTOFF is the real powered off state. - # But we will map both to SHUTDOWN anyway. 
- # http://libvirt.org/html/libvirt-libvirt.html - VIR_DOMAIN_SHUTDOWN: power_state.SHUTDOWN, - VIR_DOMAIN_SHUTOFF: power_state.SHUTDOWN, - VIR_DOMAIN_CRASHED: power_state.CRASHED, - VIR_DOMAIN_PMSUSPENDED: power_state.SUSPENDED, -} - -MIN_LIBVIRT_VERSION = (0, 9, 11) -# When the above version matches/exceeds this version -# delete it & corresponding code using it -MIN_LIBVIRT_DEVICE_CALLBACK_VERSION = (1, 1, 1) -# Live snapshot requirements -REQ_HYPERVISOR_LIVESNAPSHOT = "QEMU" -MIN_LIBVIRT_LIVESNAPSHOT_VERSION = (1, 0, 0) -MIN_QEMU_LIVESNAPSHOT_VERSION = (1, 3, 0) -# block size tuning requirements -MIN_LIBVIRT_BLOCKIO_VERSION = (0, 10, 2) -# BlockJobInfo management requirement -MIN_LIBVIRT_BLOCKJOBINFO_VERSION = (1, 1, 1) -# Relative block commit (feature is detected, -# this version is only used for messaging) -MIN_LIBVIRT_BLOCKCOMMIT_RELATIVE_VERSION = (1, 2, 7) -# libvirt discard feature -MIN_LIBVIRT_DISCARD_VERSION = (1, 0, 6) -MIN_QEMU_DISCARD_VERSION = (1, 6, 0) -REQ_HYPERVISOR_DISCARD = "QEMU" -# While earlier versions could support NUMA reporting and -# NUMA placement, not until 1.2.7 was there the ability -# to pin guest nodes to host nodes, so mandate that. Without -# this the scheduler cannot make guaranteed decisions, as the -# guest placement may not match what was requested -MIN_LIBVIRT_NUMA_VERSION = (1, 2, 7) -# While earlier versions could support hugepage backed -# guests, not until 1.2.8 was there the ability to request -# a particular huge page size. Without this the scheduler -# cannot make guaranteed decisions, as the huge page size -# used by the guest may not match what was requested -MIN_LIBVIRT_HUGEPAGE_VERSION = (1, 2, 8) -# missing libvirt cpu pinning support -BAD_LIBVIRT_CPU_POLICY_VERSIONS = [(1, 2, 9, 2), (1, 2, 10)] -# fsFreeze/fsThaw requirement -MIN_LIBVIRT_FSFREEZE_VERSION = (1, 2, 5) - -# Hyper-V paravirtualized time source -MIN_LIBVIRT_HYPERV_TIMER_VERSION = (1, 2, 2) -MIN_QEMU_HYPERV_TIMER_VERSION = (2, 0, 0) - -MIN_LIBVIRT_HYPERV_FEATURE_VERSION = (1, 0, 0) -MIN_LIBVIRT_HYPERV_FEATURE_EXTRA_VERSION = (1, 1, 0) -MIN_QEMU_HYPERV_FEATURE_VERSION = (1, 1, 0) - -# parallels driver support -MIN_LIBVIRT_PARALLELS_VERSION = (1, 2, 12) - - -class LibvirtDriver(driver.ComputeDriver): - - capabilities = { - "has_imagecache": True, - "supports_recreate": True, - } - - def __init__(self, virtapi, read_only=False): - super(LibvirtDriver, self).__init__(virtapi) - - global libvirt - if libvirt is None: - libvirt = importutils.import_module('libvirt') - - self._host = host.Host(self.uri(), read_only, - lifecycle_event_handler=self.emit_event, - conn_event_handler=self._handle_conn_event) - self._initiator = None - self._fc_wwnns = None - self._fc_wwpns = None - self._caps = None - self._vcpu_total = 0 - self.firewall_driver = firewall.load_driver( - DEFAULT_FIREWALL_DRIVER, - self.virtapi, - host=self._host) - - self.vif_driver = libvirt_vif.LibvirtGenericVIFDriver() - - self.volume_drivers = driver.driver_dict_from_config( - self._get_volume_drivers(), self) - - self._disk_cachemode = None - self.image_cache_manager = imagecache.ImageCacheManager() - self.image_backend = imagebackend.Backend(CONF.use_cow_images) - - self.disk_cachemodes = {} - - self.valid_cachemodes = ["default", - "none", - "writethrough", - "writeback", - "directsync", - "unsafe", - ] - self._conn_supports_start_paused = CONF.libvirt.virt_type in ('kvm', - 'qemu') - - for mode_str in CONF.libvirt.disk_cachemodes: - disk_type, sep, cache_mode = mode_str.partition('=') - if cache_mode not in 
self.valid_cachemodes: - LOG.warn(_LW('Invalid cachemode %(cache_mode)s specified ' - 'for disk type %(disk_type)s.'), - {'cache_mode': cache_mode, 'disk_type': disk_type}) - continue - self.disk_cachemodes[disk_type] = cache_mode - - self._volume_api = volume.API() - self._image_api = image.API() - self._events_delayed = {} - # Note(toabctl): During a reboot of a Xen domain, STOPPED and - # STARTED events are sent. To prevent shutting - # down the domain during a reboot, delay the - # STOPPED lifecycle event some seconds. - if CONF.libvirt.virt_type == "xen": - self._lifecycle_delay = 15 - else: - self._lifecycle_delay = 0 - - sysinfo_serial_funcs = { - 'none': lambda: None, - 'hardware': self._get_host_sysinfo_serial_hardware, - 'os': self._get_host_sysinfo_serial_os, - 'auto': self._get_host_sysinfo_serial_auto, - } - - self._sysinfo_serial_func = sysinfo_serial_funcs.get( - CONF.libvirt.sysinfo_serial) - if not self._sysinfo_serial_func: - raise exception.NovaException( - _("Unexpected sysinfo_serial setting '%(actual)s'. " - "Permitted values are %(expect)s'") % - {'actual': CONF.libvirt.sysinfo_serial, - 'expect': ', '.join("'%s'" % k for k in - sysinfo_serial_funcs.keys())}) - - self.job_tracker = instancejobtracker.InstanceJobTracker() - - def _get_volume_drivers(self): - return libvirt_volume_drivers - - @property - def disk_cachemode(self): - if self._disk_cachemode is None: - # We prefer 'none' for consistent performance, host crash - # safety & migration correctness by avoiding host page cache. - # Some filesystems (eg GlusterFS via FUSE) don't support - # O_DIRECT though. For those we fallback to 'writethrough' - # which gives host crash safety, and is safe for migration - # provided the filesystem is cache coherent (cluster filesystems - # typically are, but things like NFS are not). - self._disk_cachemode = "none" - if not self._supports_direct_io(CONF.instances_path): - self._disk_cachemode = "writethrough" - return self._disk_cachemode - - def _set_cache_mode(self, conf): - """Set cache mode on LibvirtConfigGuestDisk object.""" - try: - source_type = conf.source_type - driver_cache = conf.driver_cache - except AttributeError: - return - - cache_mode = self.disk_cachemodes.get(source_type, - driver_cache) - conf.driver_cache = cache_mode - - def _do_quality_warnings(self): - """Warn about untested driver configurations. - - This will log a warning message about untested driver or host arch - configurations to indicate to administrators that the quality is - unknown. Currently, only qemu or kvm on intel 32- or 64-bit systems - is tested upstream. - """ - caps = self._host.get_capabilities() - hostarch = caps.host.cpu.arch - if (CONF.libvirt.virt_type not in ('qemu', 'kvm') or - hostarch not in (arch.I686, arch.X86_64)): - LOG.warn(_LW('The libvirt driver is not tested on ' - '%(type)s/%(arch)s by the OpenStack project and ' - 'thus its quality can not be ensured. 
For more ' - 'information, see: https://wiki.openstack.org/wiki/' - 'HypervisorSupportMatrix'), - {'type': CONF.libvirt.virt_type, 'arch': hostarch}) - - def _handle_conn_event(self, enabled, reason): - LOG.info(_LI("Connection event '%(enabled)d' reason '%(reason)s'"), - {'enabled': enabled, 'reason': reason}) - self._set_host_enabled(enabled, reason) - - def _version_to_string(self, version): - return '.'.join([str(x) for x in version]) - - def init_host(self, host): - self._host.initialize() - - self._do_quality_warnings() - - if (CONF.libvirt.virt_type == 'lxc' and - not (CONF.libvirt.uid_maps and CONF.libvirt.gid_maps)): - LOG.warn(_LW("Running libvirt-lxc without user namespaces is " - "dangerous. Containers spawned by Nova will be run " - "as the host's root user. It is highly suggested " - "that user namespaces be used in a public or " - "multi-tenant environment.")) - - # Stop libguestfs using KVM unless we're also configured - # to use this. This solves problem where people need to - # stop Nova use of KVM because nested-virt is broken - if CONF.libvirt.virt_type != "kvm": - guestfs.force_tcg() - - if not self._host.has_min_version(MIN_LIBVIRT_VERSION): - raise exception.NovaException( - _('Nova requires libvirt version %s or greater.') % - self._version_to_string(MIN_LIBVIRT_VERSION)) - - if (CONF.libvirt.virt_type == 'parallels' and - not self._host.has_min_version(MIN_LIBVIRT_PARALLELS_VERSION)): - raise exception.NovaException( - _('Running Nova with parallels virt_type requires ' - 'libvirt version %s') % - self._version_to_string(MIN_LIBVIRT_PARALLELS_VERSION)) - - def _get_connection(self): - return self._host.get_connection() - - _conn = property(_get_connection) - - @staticmethod - def uri(): - if CONF.libvirt.virt_type == 'uml': - uri = CONF.libvirt.connection_uri or 'uml:///system' - elif CONF.libvirt.virt_type == 'xen': - uri = CONF.libvirt.connection_uri or 'xen:///' - elif CONF.libvirt.virt_type == 'lxc': - uri = CONF.libvirt.connection_uri or 'lxc:///' - elif CONF.libvirt.virt_type == 'parallels': - uri = CONF.libvirt.connection_uri or 'parallels:///system' - else: - uri = CONF.libvirt.connection_uri or 'qemu:///system' - return uri - - def instance_exists(self, instance): - """Efficient override of base instance_exists method.""" - try: - self._host.get_domain(instance) - return True - except exception.NovaException: - return False - - def list_instances(self): - names = [] - for dom in self._host.list_instance_domains(only_running=False): - names.append(dom.name()) - - return names - - def list_instance_uuids(self): - uuids = [] - for dom in self._host.list_instance_domains(only_running=False): - uuids.append(dom.UUIDString()) - - return uuids - - def plug_vifs(self, instance, network_info): - """Plug VIFs into networks.""" - for vif in network_info: - self.vif_driver.plug(instance, vif) - - def _unplug_vifs(self, instance, network_info, ignore_errors): - """Unplug VIFs from networks.""" - for vif in network_info: - try: - self.vif_driver.unplug(instance, vif) - except exception.NovaException: - if not ignore_errors: - raise - - def unplug_vifs(self, instance, network_info): - self._unplug_vifs(instance, network_info, False) - - def _teardown_container(self, instance): - inst_path = libvirt_utils.get_instance_path(instance) - container_dir = os.path.join(inst_path, 'rootfs') - rootfs_dev = instance.system_metadata.get('rootfs_device_name') - disk.teardown_container(container_dir, rootfs_dev) - - def _destroy(self, instance, attempt=1): - try: - virt_dom = 
self._host.get_domain(instance) - except exception.InstanceNotFound: - virt_dom = None - - # If the instance is already terminated, we're still happy - # Otherwise, destroy it - old_domid = -1 - if virt_dom is not None: - try: - old_domid = virt_dom.ID() - virt_dom.destroy() - - except libvirt.libvirtError as e: - is_okay = False - errcode = e.get_error_code() - if errcode == libvirt.VIR_ERR_NO_DOMAIN: - # Domain already gone. This can safely be ignored. - is_okay = True - elif errcode == libvirt.VIR_ERR_OPERATION_INVALID: - # If the instance is already shut off, we get this: - # Code=55 Error=Requested operation is not valid: - # domain is not running - state = self._get_power_state(virt_dom) - if state == power_state.SHUTDOWN: - is_okay = True - elif errcode == libvirt.VIR_ERR_INTERNAL_ERROR: - errmsg = e.get_error_message() - if (CONF.libvirt.virt_type == 'lxc' and - errmsg == 'internal error: ' - 'Some processes refused to die'): - # Some processes in the container didn't die - # fast enough for libvirt. The container will - # eventually die. For now, move on and let - # the wait_for_destroy logic take over. - is_okay = True - elif errcode == libvirt.VIR_ERR_OPERATION_TIMEOUT: - LOG.warn(_LW("Cannot destroy instance, operation time " - "out"), - instance=instance) - reason = _("operation time out") - raise exception.InstancePowerOffFailure(reason=reason) - elif errcode == libvirt.VIR_ERR_SYSTEM_ERROR: - if e.get_int1() == errno.EBUSY: - # NOTE(danpb): When libvirt kills a process it sends it - # SIGTERM first and waits 10 seconds. If it hasn't gone - # it sends SIGKILL and waits another 5 seconds. If it - # still hasn't gone then you get this EBUSY error. - # Usually when a QEMU process fails to go away upon - # SIGKILL it is because it is stuck in an - # uninterruptable kernel sleep waiting on I/O from - # some non-responsive server. - # Given the CPU load of the gate tests though, it is - # conceivable that the 15 second timeout is too short, - # particularly if the VM running tempest has a high - # steal time from the cloud host. ie 15 wallclock - # seconds may have passed, but the VM might have only - # have a few seconds of scheduled run time. - LOG.warn(_LW('Error from libvirt during destroy. ' - 'Code=%(errcode)s Error=%(e)s; ' - 'attempt %(attempt)d of 3'), - {'errcode': errcode, 'e': e, - 'attempt': attempt}, - instance=instance) - with excutils.save_and_reraise_exception() as ctxt: - # Try up to 3 times before giving up. - if attempt < 3: - ctxt.reraise = False - self._destroy(instance, attempt + 1) - return - - if not is_okay: - with excutils.save_and_reraise_exception(): - LOG.error(_LE('Error from libvirt during destroy. ' - 'Code=%(errcode)s Error=%(e)s'), - {'errcode': errcode, 'e': e}, - instance=instance) - - def _wait_for_destroy(expected_domid): - """Called at an interval until the VM is gone.""" - # NOTE(vish): If the instance disappears during the destroy - # we ignore it so the cleanup can still be - # attempted because we would prefer destroy to - # never fail. 
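# --- Editorial sketch (not part of the deleted driver code) ---------------
# The NOTE(danpb) comment above describes a bounded retry: when libvirt
# reports EBUSY because the QEMU process has not gone away yet, _destroy()
# calls itself again, up to three attempts in total.  A minimal stand-alone
# illustration of that pattern; FakeLibvirtError and destroy_with_retry are
# hypothetical names, not part of nova or libvirt.

import errno
import time


class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError carrying an errno-style code."""
    def __init__(self, int1):
        super(FakeLibvirtError, self).__init__('destroy failed')
        self.int1 = int1


def destroy_with_retry(destroy_domain, attempt=1, max_attempts=3, delay=0.5):
    """Call destroy_domain(), retrying up to max_attempts times on EBUSY."""
    try:
        destroy_domain()
    except FakeLibvirtError as e:
        if e.int1 == errno.EBUSY and attempt < max_attempts:
            time.sleep(delay)
            return destroy_with_retry(destroy_domain, attempt + 1,
                                      max_attempts, delay)
        raise  # out of attempts, or a different error: propagate


# Example: the first two calls hit EBUSY, the third one succeeds.
_calls = {'n': 0}


def _flaky_destroy():
    _calls['n'] += 1
    if _calls['n'] < 3:
        raise FakeLibvirtError(errno.EBUSY)


destroy_with_retry(_flaky_destroy, delay=0)
assert _calls['n'] == 3
# ---------------------------------------------------------------------------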
- try: - dom_info = self.get_info(instance) - state = dom_info.state - new_domid = dom_info.id - except exception.InstanceNotFound: - LOG.info(_LI("During wait destroy, instance disappeared."), - instance=instance) - raise loopingcall.LoopingCallDone() - - if state == power_state.SHUTDOWN: - LOG.info(_LI("Instance destroyed successfully."), - instance=instance) - raise loopingcall.LoopingCallDone() - - # NOTE(wangpan): If the instance was booted again after destroy, - # this may be a endless loop, so check the id of - # domain here, if it changed and the instance is - # still running, we should destroy it again. - # see https://bugs.launchpad.net/nova/+bug/1111213 for more details - if new_domid != expected_domid: - LOG.info(_LI("Instance may be started again."), - instance=instance) - kwargs['is_running'] = True - raise loopingcall.LoopingCallDone() - - kwargs = {'is_running': False} - timer = loopingcall.FixedIntervalLoopingCall(_wait_for_destroy, - old_domid) - timer.start(interval=0.5).wait() - if kwargs['is_running']: - LOG.info(_LI("Going to destroy instance again."), - instance=instance) - self._destroy(instance) - else: - # NOTE(GuanQiang): teardown container to avoid resource leak - if CONF.libvirt.virt_type == 'lxc': - self._teardown_container(instance) - - def destroy(self, context, instance, network_info, block_device_info=None, - destroy_disks=True, migrate_data=None): - self._destroy(instance) - self.cleanup(context, instance, network_info, block_device_info, - destroy_disks, migrate_data) - - def _undefine_domain(self, instance): - try: - virt_dom = self._host.get_domain(instance) - except exception.InstanceNotFound: - virt_dom = None - if virt_dom: - try: - try: - virt_dom.undefineFlags( - libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE) - except libvirt.libvirtError: - LOG.debug("Error from libvirt during undefineFlags." - " Retrying with undefine", instance=instance) - virt_dom.undefine() - except AttributeError: - # NOTE(vish): Older versions of libvirt don't support - # undefine flags, so attempt to do the - # right thing. - try: - if virt_dom.hasManagedSaveImage(0): - virt_dom.managedSaveRemove(0) - except AttributeError: - pass - virt_dom.undefine() - except libvirt.libvirtError as e: - with excutils.save_and_reraise_exception(): - errcode = e.get_error_code() - LOG.error(_LE('Error from libvirt during undefine. ' - 'Code=%(errcode)s Error=%(e)s'), - {'errcode': errcode, 'e': e}, instance=instance) - - def cleanup(self, context, instance, network_info, block_device_info=None, - destroy_disks=True, migrate_data=None, destroy_vifs=True): - if destroy_vifs: - self._unplug_vifs(instance, network_info, True) - - retry = True - while retry: - try: - self.unfilter_instance(instance, network_info) - except libvirt.libvirtError as e: - try: - state = self.get_info(instance).state - except exception.InstanceNotFound: - state = power_state.SHUTDOWN - - if state != power_state.SHUTDOWN: - LOG.warn(_LW("Instance may be still running, destroy " - "it again."), instance=instance) - self._destroy(instance) - else: - retry = False - errcode = e.get_error_code() - LOG.exception(_LE('Error from libvirt during unfilter. ' - 'Code=%(errcode)s Error=%(e)s'), - {'errcode': errcode, 'e': e}, - instance=instance) - reason = "Error unfiltering instance." 
- raise exception.InstanceTerminationFailure(reason=reason) - except Exception: - retry = False - raise - else: - retry = False - - # FIXME(wangpan): if the instance is booted again here, such as the - # the soft reboot operation boot it here, it will - # become "running deleted", should we check and destroy - # it at the end of this method? - - # NOTE(vish): we disconnect from volumes regardless - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - for vol in block_device_mapping: - connection_info = vol['connection_info'] - disk_dev = vol['mount_device'] - if disk_dev is not None: - disk_dev = disk_dev.rpartition("/")[2] - - if ('data' in connection_info and - 'volume_id' in connection_info['data']): - volume_id = connection_info['data']['volume_id'] - encryption = encryptors.get_encryption_metadata( - context, self._volume_api, volume_id, connection_info) - - if encryption: - # The volume must be detached from the VM before - # disconnecting it from its encryptor. Otherwise, the - # encryptor may report that the volume is still in use. - encryptor = self._get_volume_encryptor(connection_info, - encryption) - encryptor.detach_volume(**encryption) - - try: - self._disconnect_volume(connection_info, disk_dev) - except Exception as exc: - with excutils.save_and_reraise_exception() as ctxt: - if destroy_disks: - # Don't block on Volume errors if we're trying to - # delete the instance as we may be partially created - # or deleted - ctxt.reraise = False - LOG.warn(_LW("Ignoring Volume Error on vol %(vol_id)s " - "during delete %(exc)s"), - {'vol_id': vol.get('volume_id'), 'exc': exc}, - instance=instance) - - if destroy_disks: - # NOTE(haomai): destroy volumes if needed - if CONF.libvirt.images_type == 'lvm': - self._cleanup_lvm(instance) - if CONF.libvirt.images_type == 'rbd': - self._cleanup_rbd(instance) - - if destroy_disks or ( - migrate_data and migrate_data.get('is_shared_block_storage', - False)): - attempts = int(instance.system_metadata.get('clean_attempts', - '0')) - success = self.delete_instance_files(instance) - # NOTE(mriedem): This is used in the _run_pending_deletes periodic - # task in the compute manager. The tight coupling is not great... - instance.system_metadata['clean_attempts'] = str(attempts + 1) - if success: - instance.cleaned = True - instance.save() - - if CONF.serial_console.enabled: - try: - serials = self._get_serial_ports_from_instance(instance) - except exception.InstanceNotFound: - # Serial ports already gone. Nothing to release. - serials = () - for hostname, port in serials: - serial_console.release_port(host=hostname, port=port) - - self._undefine_domain(instance) - - def _detach_encrypted_volumes(self, instance): - """Detaches encrypted volumes attached to instance.""" - disks = jsonutils.loads(self.get_instance_disk_info(instance)) - encrypted_volumes = filter(dmcrypt.is_encrypted, - [disk['path'] for disk in disks]) - for path in encrypted_volumes: - dmcrypt.delete_volume(path) - - def _get_serial_ports_from_instance(self, instance, mode=None): - """Returns an iterator over serial port(s) configured on instance. - - :param mode: Should be a value in (None, bind, connect) - """ - virt_dom = self._host.get_domain(instance) - xml = virt_dom.XMLDesc(0) - tree = etree.fromstring(xml) - - # The 'serial' device is the base for x86 platforms. Other platforms - # (e.g. kvm on system z = arch.S390X) can only use 'console' devices. 
- xpath_mode = "[@mode='%s']" % mode if mode else "" - serial_tcp = "./devices/serial[@type='tcp']/source" + xpath_mode - console_tcp = "./devices/console[@type='tcp']/source" + xpath_mode - - tcp_devices = tree.findall(serial_tcp) - if len(tcp_devices) == 0: - tcp_devices = tree.findall(console_tcp) - for source in tcp_devices: - yield (source.get("host"), int(source.get("service"))) - - @staticmethod - def _get_rbd_driver(): - return rbd_utils.RBDDriver( - pool=CONF.libvirt.images_rbd_pool, - ceph_conf=CONF.libvirt.images_rbd_ceph_conf, - rbd_user=CONF.libvirt.rbd_user) - - def _cleanup_rbd(self, instance): - LibvirtDriver._get_rbd_driver().cleanup_volumes(instance) - - def _cleanup_lvm(self, instance): - """Delete all LVM disks for given instance object.""" - if instance.get('ephemeral_key_uuid') is not None: - self._detach_encrypted_volumes(instance) - - disks = self._lvm_disks(instance) - if disks: - lvm.remove_volumes(disks) - - def _lvm_disks(self, instance): - """Returns all LVM disks for given instance object.""" - if CONF.libvirt.images_volume_group: - vg = os.path.join('/dev', CONF.libvirt.images_volume_group) - if not os.path.exists(vg): - return [] - pattern = '%s_' % instance.uuid - - def belongs_to_instance(disk): - return disk.startswith(pattern) - - def fullpath(name): - return os.path.join(vg, name) - - logical_volumes = lvm.list_volumes(vg) - - disk_names = filter(belongs_to_instance, logical_volumes) - disks = map(fullpath, disk_names) - return disks - return [] - - def get_volume_connector(self, instance): - if self._initiator is None: - self._initiator = libvirt_utils.get_iscsi_initiator() - if not self._initiator: - LOG.debug('Could not determine iscsi initiator name', - instance=instance) - - if self._fc_wwnns is None: - self._fc_wwnns = libvirt_utils.get_fc_wwnns() - if not self._fc_wwnns or len(self._fc_wwnns) == 0: - LOG.debug('Could not determine fibre channel ' - 'world wide node names', - instance=instance) - - if self._fc_wwpns is None: - self._fc_wwpns = libvirt_utils.get_fc_wwpns() - if not self._fc_wwpns or len(self._fc_wwpns) == 0: - LOG.debug('Could not determine fibre channel ' - 'world wide port names', - instance=instance) - - connector = {'ip': CONF.my_block_storage_ip, - 'host': CONF.host} - - if self._initiator: - connector['initiator'] = self._initiator - - if self._fc_wwnns and self._fc_wwpns: - connector["wwnns"] = self._fc_wwnns - connector["wwpns"] = self._fc_wwpns - - connector['platform'] = platform.machine() - connector['os_type'] = sys.platform - return connector - - def _cleanup_resize(self, instance, network_info): - # NOTE(wangpan): we get the pre-grizzly instance path firstly, - # so the backup dir of pre-grizzly instance can - # be deleted correctly with grizzly or later nova. - pre_grizzly_name = libvirt_utils.get_instance_path(instance, - forceold=True) - target = pre_grizzly_name + '_resize' - if not os.path.exists(target): - target = libvirt_utils.get_instance_path(instance) + '_resize' - - if os.path.exists(target): - # Deletion can fail over NFS, so retry the deletion as required. - # Set maximum attempt as 5, most test can remove the directory - # for the second time. 
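# --- Editorial sketch (not part of the deleted driver code) ---------------
# _get_serial_ports_from_instance() above is a plain XPath walk over the
# domain XML: look for <serial type='tcp'> sources first and fall back to
# <console type='tcp'> for platforms that only expose console devices.  A
# self-contained illustration using the standard library ElementTree and a
# toy domain definition (SAMPLE_DOMAIN_XML and tcp_console_sources are
# hypothetical names used only here).

import xml.etree.ElementTree as etree

SAMPLE_DOMAIN_XML = """
<domain>
  <devices>
    <console type='tcp'>
      <source mode='bind' host='127.0.0.1' service='10000'/>
    </console>
  </devices>
</domain>
"""


def tcp_console_sources(domain_xml, mode=None):
    """Yield (host, port) for every TCP-backed serial/console device."""
    tree = etree.fromstring(domain_xml)
    xpath_mode = "[@mode='%s']" % mode if mode else ""
    serial_tcp = "./devices/serial[@type='tcp']/source" + xpath_mode
    console_tcp = "./devices/console[@type='tcp']/source" + xpath_mode

    sources = tree.findall(serial_tcp) or tree.findall(console_tcp)
    for source in sources:
        yield source.get('host'), int(source.get('service'))


assert list(tcp_console_sources(SAMPLE_DOMAIN_XML)) == [('127.0.0.1', 10000)]
assert list(tcp_console_sources(SAMPLE_DOMAIN_XML, mode='connect')) == []
# ---------------------------------------------------------------------------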
- utils.execute('rm', '-rf', target, delay_on_retry=True, - attempts=5) - - if instance.host != CONF.host: - self._undefine_domain(instance) - self.unplug_vifs(instance, network_info) - self.unfilter_instance(instance, network_info) - - def _get_volume_driver(self, connection_info): - driver_type = connection_info.get('driver_volume_type') - if driver_type not in self.volume_drivers: - raise exception.VolumeDriverNotFound(driver_type=driver_type) - return self.volume_drivers[driver_type] - - def _connect_volume(self, connection_info, disk_info): - driver = self._get_volume_driver(connection_info) - driver.connect_volume(connection_info, disk_info) - - def _disconnect_volume(self, connection_info, disk_dev): - driver = self._get_volume_driver(connection_info) - return driver.disconnect_volume(connection_info, disk_dev) - - def _get_volume_config(self, connection_info, disk_info): - driver = self._get_volume_driver(connection_info) - return driver.get_config(connection_info, disk_info) - - def _get_volume_encryptor(self, connection_info, encryption): - encryptor = encryptors.get_volume_encryptor(connection_info, - **encryption) - return encryptor - - def attach_volume(self, context, connection_info, instance, mountpoint, - disk_bus=None, device_type=None, encryption=None): - image_meta = utils.get_image_from_system_metadata( - instance.system_metadata) - virt_dom = self._host.get_domain(instance) - disk_dev = mountpoint.rpartition("/")[2] - bdm = { - 'device_name': disk_dev, - 'disk_bus': disk_bus, - 'device_type': device_type} - - # Note(cfb): If the volume has a custom block size, check - # that we are using QEMU/KVM and libvirt >= 0.10.2. The - # presence of a block size is considered mandatory by - # cinder so we fail if we can't honor the request. - data = {} - if ('data' in connection_info): - data = connection_info['data'] - if ('logical_block_size' in data or 'physical_block_size' in data): - if ((CONF.libvirt.virt_type != "kvm" and - CONF.libvirt.virt_type != "qemu")): - msg = _("Volume sets block size, but the current " - "libvirt hypervisor '%s' does not support custom " - "block size") % CONF.libvirt.virt_type - raise exception.InvalidHypervisorType(msg) - - if not self._host.has_min_version(MIN_LIBVIRT_BLOCKIO_VERSION): - ver = ".".join([str(x) for x in MIN_LIBVIRT_BLOCKIO_VERSION]) - msg = _("Volume sets block size, but libvirt '%s' or later is " - "required.") % ver - raise exception.Invalid(msg) - - disk_info = blockinfo.get_info_from_bdm(CONF.libvirt.virt_type, - image_meta, bdm) - LOG.debug("disk_info: %s", disk_info) - LOG.debug("connection_info: %s", connection_info) - - self._connect_volume(connection_info, disk_info) - conf = self._get_volume_config(connection_info, disk_info) - self._set_cache_mode(conf) - LOG.debug("volume disk config: %s", conf.to_xml()) - - try: - # NOTE(vish): We can always affect config because our - # domains are persistent, but we should only - # affect live if the domain is running. 
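# --- Editorial sketch (not part of the deleted driver code) ---------------
# The self._set_cache_mode(conf) call above (the helper is defined near the
# top of this file) is just a dictionary lookup: per-source-type overrides
# taken from the disk_cachemodes option, falling back to whatever cache mode
# the volume driver already chose.  A minimal stand-alone version with
# made-up example values; pick_cache_mode is a hypothetical name.

def pick_cache_mode(disk_cachemodes, source_type, driver_cache):
    """Return the configured override for source_type, else driver_cache."""
    return disk_cachemodes.get(source_type, driver_cache)


overrides = {'block': 'none', 'network': 'writeback'}
assert pick_cache_mode(overrides, 'block', 'writethrough') == 'none'
assert pick_cache_mode(overrides, 'file', 'writethrough') == 'writethrough'
# ---------------------------------------------------------------------------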
- flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG - state = self._get_power_state(virt_dom) - if state in (power_state.RUNNING, power_state.PAUSED): - flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE - - if encryption: - encryptor = self._get_volume_encryptor(connection_info, - encryption) - encryptor.attach_volume(context, **encryption) - - virt_dom.attachDeviceFlags(conf.to_xml(), flags) - except Exception as ex: - LOG.exception(_LE('Failed to attach volume at mountpoint: %s'), - mountpoint, instance=instance) - if isinstance(ex, libvirt.libvirtError): - errcode = ex.get_error_code() - if errcode == libvirt.VIR_ERR_OPERATION_FAILED: - self._disconnect_volume(connection_info, disk_dev) - raise exception.DeviceIsBusy(device=disk_dev) - - with excutils.save_and_reraise_exception(): - self._disconnect_volume(connection_info, disk_dev) - - def _swap_volume(self, domain, disk_path, new_path, resize_to): - """Swap existing disk with a new block device.""" - # Save a copy of the domain's persistent XML file - xml = domain.XMLDesc( - libvirt.VIR_DOMAIN_XML_INACTIVE | - libvirt.VIR_DOMAIN_XML_SECURE) - - # Abort is an idempotent operation, so make sure any block - # jobs which may have failed are ended. - try: - domain.blockJobAbort(disk_path, 0) - except Exception: - pass - - try: - # NOTE (rmk): blockRebase cannot be executed on persistent - # domains, so we need to temporarily undefine it. - # If any part of this block fails, the domain is - # re-defined regardless. - if domain.isPersistent(): - domain.undefine() - - # Start copy with VIR_DOMAIN_REBASE_REUSE_EXT flag to - # allow writing to existing external volume file - domain.blockRebase(disk_path, new_path, 0, - libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY | - libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT) - - while self._wait_for_block_job(domain, disk_path): - time.sleep(0.5) - - domain.blockJobAbort(disk_path, - libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT) - if resize_to: - # NOTE(alex_xu): domain.blockJobAbort isn't sync call. This - # is bug in libvirt. So we need waiting for the pivot is - # finished. 
libvirt bug #1119173 - while self._wait_for_block_job(domain, disk_path, - wait_for_job_clean=True): - time.sleep(0.5) - domain.blockResize(disk_path, resize_to * units.Gi / units.Ki) - finally: - self._conn.defineXML(xml) - - def swap_volume(self, old_connection_info, - new_connection_info, instance, mountpoint, resize_to): - virt_dom = self._host.get_domain(instance) - disk_dev = mountpoint.rpartition("/")[2] - xml = self._get_disk_xml(virt_dom.XMLDesc(0), disk_dev) - if not xml: - raise exception.DiskNotFound(location=disk_dev) - disk_info = { - 'dev': disk_dev, - 'bus': blockinfo.get_disk_bus_for_disk_dev( - CONF.libvirt.virt_type, disk_dev), - 'type': 'disk', - } - self._connect_volume(new_connection_info, disk_info) - conf = self._get_volume_config(new_connection_info, disk_info) - if not conf.source_path: - self._disconnect_volume(new_connection_info, disk_dev) - raise NotImplementedError(_("Swap only supports host devices")) - - # Save updates made in connection_info when connect_volume was called - volume_id = new_connection_info.get('serial') - bdm = objects.BlockDeviceMapping.get_by_volume_id( - nova_context.get_admin_context(), volume_id) - driver_bdm = driver_block_device.DriverVolumeBlockDevice(bdm) - driver_bdm['connection_info'] = new_connection_info - driver_bdm.save() - - self._swap_volume(virt_dom, disk_dev, conf.source_path, resize_to) - self._disconnect_volume(old_connection_info, disk_dev) - - @staticmethod - def _get_disk_xml(xml, device): - """Returns the xml for the disk mounted at device.""" - try: - doc = etree.fromstring(xml) - except Exception: - return None - node = doc.find("./devices/disk/target[@dev='%s'].." % device) - if node is not None: - return etree.tostring(node) - - def _get_existing_domain_xml(self, instance, network_info, - block_device_info=None): - try: - virt_dom = self._host.get_domain(instance) - xml = virt_dom.XMLDesc(0) - except exception.InstanceNotFound: - image_meta = utils.get_image_from_system_metadata( - instance.system_metadata) - disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, - instance, - image_meta, - block_device_info) - xml = self._get_guest_xml(nova_context.get_admin_context(), - instance, network_info, disk_info, - image_meta, - block_device_info=block_device_info) - return xml - - def detach_volume(self, connection_info, instance, mountpoint, - encryption=None): - disk_dev = mountpoint.rpartition("/")[2] - try: - virt_dom = self._host.get_domain(instance) - xml = self._get_disk_xml(virt_dom.XMLDesc(0), disk_dev) - if not xml: - raise exception.DiskNotFound(location=disk_dev) - else: - # NOTE(vish): We can always affect config because our - # domains are persistent, but we should only - # affect live if the domain is running. - flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG - state = self._get_power_state(virt_dom) - if state in (power_state.RUNNING, power_state.PAUSED): - flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE - virt_dom.detachDeviceFlags(xml, flags) - - if encryption: - # The volume must be detached from the VM before - # disconnecting it from its encryptor. Otherwise, the - # encryptor may report that the volume is still in use. - encryptor = self._get_volume_encryptor(connection_info, - encryption) - encryptor.detach_volume(**encryption) - except exception.InstanceNotFound: - # NOTE(zhaoqin): If the instance does not exist, _lookup_by_name() - # will throw InstanceNotFound exception. Need to - # disconnect volume under this circumstance. 
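# --- Editorial sketch (not part of the deleted driver code) ---------------
# _get_disk_xml() above selects the <disk> element whose <target dev='...'/>
# matches the requested device and returns it re-serialized.  The same
# parent-selection trick works with the standard library ElementTree; this
# stand-alone version (disk_xml_for_device and SAMPLE_XML are hypothetical
# names) uses an explicit '/..' step.

import xml.etree.ElementTree as etree

SAMPLE_XML = """
<domain>
  <devices>
    <disk type='file'><target dev='vda' bus='virtio'/></disk>
    <disk type='block'><target dev='vdb' bus='virtio'/></disk>
  </devices>
</domain>
"""


def disk_xml_for_device(domain_xml, device):
    """Return the serialized <disk> element mounted at `device`, or None."""
    try:
        doc = etree.fromstring(domain_xml)
    except etree.ParseError:
        return None
    node = doc.find("./devices/disk/target[@dev='%s']/.." % device)
    if node is not None:
        return etree.tostring(node)
    return None


assert b'vdb' in disk_xml_for_device(SAMPLE_XML, 'vdb')
assert disk_xml_for_device(SAMPLE_XML, 'vdz') is None
# ---------------------------------------------------------------------------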
- LOG.warn(_LW("During detach_volume, instance disappeared.")) - except libvirt.libvirtError as ex: - # NOTE(vish): This is called to cleanup volumes after live - # migration, so we should still disconnect even if - # the instance doesn't exist here anymore. - error_code = ex.get_error_code() - if error_code == libvirt.VIR_ERR_NO_DOMAIN: - # NOTE(vish): - LOG.warn(_LW("During detach_volume, instance disappeared.")) - else: - raise - - self._disconnect_volume(connection_info, disk_dev) - - def post_connection_terminated(self, bdm, connection_info=None, - block_device_info=None): - if hasattr(bdm, 'device_name'): - mountpoint = bdm.device_name - else: - return - if isinstance(connection_info, dict) and connection_info.get( - 'driver_volume_type', '') == "iscsi": - disk_dev = mountpoint.rpartition("/")[2] - self._disconnect_volume(connection_info, disk_dev) - elif block_device_info: - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - for vol in block_device_mapping: - connection_info = vol['connection_info'] - disk_dev = vol['mount_device'].rpartition("/")[2] - if (disk_dev == mountpoint.rpartition("/")[2] and - connection_info.get( - 'driver_volume_type', '') == "iscsi"): - self._disconnect_volume(connection_info, disk_dev) - - def attach_interface(self, instance, image_meta, vif): - virt_dom = self._host.get_domain(instance) - self.vif_driver.plug(instance, vif) - self.firewall_driver.setup_basic_filtering(instance, [vif]) - cfg = self.vif_driver.get_config(instance, vif, image_meta, - instance.flavor, - CONF.libvirt.virt_type) - try: - flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG - state = self._get_power_state(virt_dom) - if state == power_state.RUNNING or state == power_state.PAUSED: - flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE - virt_dom.attachDeviceFlags(cfg.to_xml(), flags) - except libvirt.libvirtError: - LOG.error(_LE('attaching network adapter failed.'), - instance=instance, exc_info=True) - self.vif_driver.unplug(instance, vif) - raise exception.InterfaceAttachFailed( - instance_uuid=instance.uuid) - - def detach_interface(self, instance, vif): - virt_dom = self._host.get_domain(instance) - cfg = self.vif_driver.get_config(instance, vif, None, instance.flavor, - CONF.libvirt.virt_type) - try: - self.vif_driver.unplug(instance, vif) - flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG - state = self._get_power_state(virt_dom) - if state == power_state.RUNNING or state == power_state.PAUSED: - flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE - virt_dom.detachDeviceFlags(cfg.to_xml(), flags) - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - if error_code == libvirt.VIR_ERR_NO_DOMAIN: - LOG.warn(_LW("During detach_interface, " - "instance disappeared."), - instance=instance) - else: - LOG.error(_LE('detaching network adapter failed.'), - instance=instance, exc_info=True) - raise exception.InterfaceDetachFailed( - instance_uuid=instance.uuid) - - def _create_snapshot_metadata(self, base, instance, img_fmt, snp_name): - metadata = {'is_public': False, - 'status': 'active', - 'name': snp_name, - 'properties': { - 'kernel_id': instance.kernel_id, - 'image_location': 'snapshot', - 'image_state': 'available', - 'owner_id': instance.project_id, - 'ramdisk_id': instance.ramdisk_id, - } - } - if instance.os_type: - metadata['properties']['os_type'] = instance.os_type - - # NOTE(vish): glance forces ami disk format to be ami - if base.get('disk_format') == 'ami': - metadata['disk_format'] = 'ami' - else: - metadata['disk_format'] = img_fmt - - 
metadata['container_format'] = base.get('container_format', 'bare') - - return metadata - - def snapshot(self, context, instance, image_id, update_task_state): - """Create snapshot from a running VM instance. - - This command only works with qemu 0.14+ - """ - try: - virt_dom = self._host.get_domain(instance) - except exception.InstanceNotFound: - raise exception.InstanceNotRunning(instance_id=instance.uuid) - - base_image_ref = instance.image_ref - - base = compute_utils.get_image_metadata( - context, self._image_api, base_image_ref, instance) - - snapshot = self._image_api.get(context, image_id) - - disk_path = libvirt_utils.find_disk(virt_dom) - source_format = libvirt_utils.get_disk_type(disk_path) - - image_format = CONF.libvirt.snapshot_image_format or source_format - - # NOTE(bfilippov): save lvm and rbd as raw - if image_format == 'lvm' or image_format == 'rbd': - image_format = 'raw' - - metadata = self._create_snapshot_metadata(base, - instance, - image_format, - snapshot['name']) - - snapshot_name = uuid.uuid4().hex - - state = self._get_power_state(virt_dom) - - # NOTE(rmk): Live snapshots require QEMU 1.3 and Libvirt 1.0.0. - # These restrictions can be relaxed as other configurations - # can be validated. - # NOTE(dgenin): Instances with LVM encrypted ephemeral storage require - # cold snapshots. Currently, checking for encryption is - # redundant because LVM supports only cold snapshots. - # It is necessary in case this situation changes in the - # future. - if (self._host.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION, - MIN_QEMU_LIVESNAPSHOT_VERSION, - REQ_HYPERVISOR_LIVESNAPSHOT) - and source_format not in ('lvm', 'rbd') - and not CONF.ephemeral_storage_encryption.enabled - and not CONF.workarounds.disable_libvirt_livesnapshot): - live_snapshot = True - # Abort is an idempotent operation, so make sure any block - # jobs which may have failed are ended. This operation also - # confirms the running instance, as opposed to the system as a - # whole, has a new enough version of the hypervisor (bug 1193146). - try: - virt_dom.blockJobAbort(disk_path, 0) - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - if error_code == libvirt.VIR_ERR_CONFIG_UNSUPPORTED: - live_snapshot = False - else: - pass - else: - live_snapshot = False - - # NOTE(rmk): We cannot perform live snapshots when a managedSave - # file is present, so we will use the cold/legacy method - # for instances which are shutdown. 
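# --- Editorial sketch (not part of the deleted driver code) ---------------
# The live-vs-cold decision in snapshot() above (together with the SHUTDOWN
# guard that immediately follows) reduces to a handful of boolean checks.  A
# simplified pure-function version: the libvirt/QEMU version checks and the
# blockJobAbort probe are folded into a single hypervisor_ok flag here, and
# use_live_snapshot is a hypothetical name.

def use_live_snapshot(hypervisor_ok, source_format,
                      ephemeral_encryption_enabled, livesnapshot_disabled,
                      instance_shutdown):
    """Mirror the guards used to pick a live snapshot over a cold one."""
    if not hypervisor_ok:
        return False
    if source_format in ('lvm', 'rbd'):       # always saved as raw, cold only
        return False
    if ephemeral_encryption_enabled:          # encrypted ephemerals: cold only
        return False
    if livesnapshot_disabled:                 # workarounds config knob
        return False
    if instance_shutdown:                     # managedSave file would exist
        return False
    return True


assert use_live_snapshot(True, 'qcow2', False, False, False) is True
assert use_live_snapshot(True, 'rbd', False, False, False) is False
assert use_live_snapshot(True, 'qcow2', False, False, True) is False
# ---------------------------------------------------------------------------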
- if state == power_state.SHUTDOWN: - live_snapshot = False - - # NOTE(dkang): managedSave does not work for LXC - if CONF.libvirt.virt_type != 'lxc' and not live_snapshot: - if state == power_state.RUNNING or state == power_state.PAUSED: - self._detach_pci_devices(virt_dom, - pci_manager.get_instance_pci_devs(instance)) - self._detach_sriov_ports(context, instance, virt_dom) - virt_dom.managedSave(0) - - snapshot_backend = self.image_backend.snapshot(instance, - disk_path, - image_type=source_format) - - if live_snapshot: - LOG.info(_LI("Beginning live snapshot process"), - instance=instance) - else: - LOG.info(_LI("Beginning cold snapshot process"), - instance=instance) - - update_task_state(task_state=task_states.IMAGE_PENDING_UPLOAD) - snapshot_directory = CONF.libvirt.snapshots_directory - fileutils.ensure_tree(snapshot_directory) - with utils.tempdir(dir=snapshot_directory) as tmpdir: - try: - out_path = os.path.join(tmpdir, snapshot_name) - if live_snapshot: - # NOTE(xqueralt): libvirt needs o+x in the temp directory - os.chmod(tmpdir, 0o701) - self._live_snapshot(context, instance, virt_dom, disk_path, - out_path, image_format, base) - else: - snapshot_backend.snapshot_extract(out_path, image_format) - finally: - new_dom = None - # NOTE(dkang): because previous managedSave is not called - # for LXC, _create_domain must not be called. - if CONF.libvirt.virt_type != 'lxc' and not live_snapshot: - if state == power_state.RUNNING: - new_dom = self._create_domain(domain=virt_dom) - elif state == power_state.PAUSED: - new_dom = self._create_domain(domain=virt_dom, - launch_flags=libvirt.VIR_DOMAIN_START_PAUSED) - if new_dom is not None: - self._attach_pci_devices(new_dom, - pci_manager.get_instance_pci_devs(instance)) - self._attach_sriov_ports(context, instance, new_dom) - LOG.info(_LI("Snapshot extracted, beginning image upload"), - instance=instance) - - # Upload that image to the image service - - update_task_state(task_state=task_states.IMAGE_UPLOADING, - expected_state=task_states.IMAGE_PENDING_UPLOAD) - with libvirt_utils.file_open(out_path) as image_file: - self._image_api.update(context, - image_id, - metadata, - image_file) - LOG.info(_LI("Snapshot image upload complete"), - instance=instance) - - @staticmethod - def _wait_for_block_job(domain, disk_path, abort_on_error=False, - wait_for_job_clean=False): - """Wait for libvirt block job to complete. - - Libvirt may return either cur==end or an empty dict when - the job is complete, depending on whether the job has been - cleaned up by libvirt yet, or not. 
- - :returns: True if still in progress - False if completed - """ - - status = domain.blockJobInfo(disk_path, 0) - if status == -1 and abort_on_error: - msg = _('libvirt error while requesting blockjob info.') - raise exception.NovaException(msg) - try: - cur = status.get('cur', 0) - end = status.get('end', 0) - except Exception: - return False - - if wait_for_job_clean: - job_ended = not status - else: - job_ended = cur == end - - return not job_ended - - def _can_quiesce(self, image_meta): - if CONF.libvirt.virt_type not in ('kvm', 'qemu'): - return (False, _('Only KVM and QEMU are supported')) - - if not self._host.has_min_version(MIN_LIBVIRT_FSFREEZE_VERSION): - ver = ".".join([str(x) for x in MIN_LIBVIRT_FSFREEZE_VERSION]) - return (False, _('Quiescing requires libvirt version %(version)s ' - 'or greater') % {'version': ver}) - - img_meta_prop = image_meta.get('properties', {}) if image_meta else {} - hw_qga = img_meta_prop.get('hw_qemu_guest_agent', '') - if not strutils.bool_from_string(hw_qga): - return (False, _('QEMU guest agent is not enabled')) - - return (True, None) - - def _set_quiesced(self, context, instance, image_meta, quiesced): - supported, reason = self._can_quiesce(image_meta) - if not supported: - raise exception.InstanceQuiesceNotSupported( - instance_id=instance.uuid, reason=reason) - - try: - domain = self._host.get_domain(instance) - if quiesced: - domain.fsFreeze() - else: - domain.fsThaw() - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - msg = (_('Error from libvirt while quiescing %(instance_name)s: ' - '[Error Code %(error_code)s] %(ex)s') - % {'instance_name': instance.name, - 'error_code': error_code, 'ex': ex}) - raise exception.NovaException(msg) - - def quiesce(self, context, instance, image_meta): - """Freeze the guest filesystems to prepare for snapshot. - - The qemu-guest-agent must be setup to execute fsfreeze. - """ - self._set_quiesced(context, instance, image_meta, True) - - def unquiesce(self, context, instance, image_meta): - """Thaw the guest filesystems after snapshot.""" - self._set_quiesced(context, instance, image_meta, False) - - def _live_snapshot(self, context, instance, domain, disk_path, out_path, - image_format, image_meta): - """Snapshot an instance without downtime.""" - # Save a copy of the domain's persistent XML file - xml = domain.XMLDesc( - libvirt.VIR_DOMAIN_XML_INACTIVE | - libvirt.VIR_DOMAIN_XML_SECURE) - - # Abort is an idempotent operation, so make sure any block - # jobs which may have failed are ended. - try: - domain.blockJobAbort(disk_path, 0) - except Exception: - pass - - # NOTE (rmk): We are using shallow rebases as a workaround to a bug - # in QEMU 1.3. In order to do this, we need to create - # a destination image with the original backing file - # and matching size of the instance root disk. - src_disk_size = libvirt_utils.get_disk_size(disk_path) - src_back_path = libvirt_utils.get_disk_backing_file(disk_path, - basename=False) - disk_delta = out_path + '.delta' - libvirt_utils.create_cow_image(src_back_path, disk_delta, - src_disk_size) - - img_meta_prop = image_meta.get('properties', {}) if image_meta else {} - require_quiesce = strutils.bool_from_string( - img_meta_prop.get('os_require_quiesce', '')) - if require_quiesce: - self.quiesce(context, instance, image_meta) - - try: - # NOTE (rmk): blockRebase cannot be executed on persistent - # domains, so we need to temporarily undefine it. - # If any part of this block fails, the domain is - # re-defined regardless. 
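# --- Editorial sketch (not part of the deleted driver code) ---------------
# _wait_for_block_job() earlier in this hunk returns True while a block job
# is still running.  libvirt reports completion either as cur == end or, once
# it has already cleaned the job up, as an empty dict; the wait_for_job_clean
# variant therefore keys off emptiness.  A tiny stand-alone predicate
# (job_in_progress is a hypothetical name) exercised with canned
# blockJobInfo-style results:

def job_in_progress(status, wait_for_job_clean=False):
    """Return True while the dict-shaped block job status says 'running'."""
    try:
        cur = status.get('cur', 0)
        end = status.get('end', 0)
    except AttributeError:
        return False              # e.g. an integer error value: treat as done
    if wait_for_job_clean:
        return bool(status)       # done only once libvirt has forgotten it
    return cur != end


assert job_in_progress({'cur': 10, 'end': 100}) is True
assert job_in_progress({'cur': 100, 'end': 100}) is False
assert job_in_progress({'cur': 100, 'end': 100}, wait_for_job_clean=True)
assert job_in_progress({}) is False
# ---------------------------------------------------------------------------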
- if domain.isPersistent(): - domain.undefine() - - # NOTE (rmk): Establish a temporary mirror of our root disk and - # issue an abort once we have a complete copy. - domain.blockRebase(disk_path, disk_delta, 0, - libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY | - libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT | - libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW) - - while self._wait_for_block_job(domain, disk_path): - time.sleep(0.5) - - domain.blockJobAbort(disk_path, 0) - libvirt_utils.chown(disk_delta, os.getuid()) - finally: - self._conn.defineXML(xml) - if require_quiesce: - self.unquiesce(context, instance, image_meta) - - # Convert the delta (CoW) image with a backing file to a flat - # image with no backing file. - libvirt_utils.extract_snapshot(disk_delta, 'qcow2', - out_path, image_format) - - def _volume_snapshot_update_status(self, context, snapshot_id, status): - """Send a snapshot status update to Cinder. - - This method captures and logs exceptions that occur - since callers cannot do anything useful with these exceptions. - - Operations on the Cinder side waiting for this will time out if - a failure occurs sending the update. - - :param context: security context - :param snapshot_id: id of snapshot being updated - :param status: new status value - - """ - - try: - self._volume_api.update_snapshot_status(context, - snapshot_id, - status) - except Exception: - LOG.exception(_LE('Failed to send updated snapshot status ' - 'to volume service.')) - - def _volume_snapshot_create(self, context, instance, domain, - volume_id, new_file): - """Perform volume snapshot. - - :param domain: VM that volume is attached to - :param volume_id: volume UUID to snapshot - :param new_file: relative path to new qcow2 file present on share - - """ - - xml = domain.XMLDesc(0) - xml_doc = etree.fromstring(xml) - - device_info = vconfig.LibvirtConfigGuest() - device_info.parse_dom(xml_doc) - - disks_to_snap = [] # to be snapshotted by libvirt - network_disks_to_snap = [] # network disks (netfs, gluster, etc.) 
- disks_to_skip = [] # local disks not snapshotted - - for guest_disk in device_info.devices: - if (guest_disk.root_name != 'disk'): - continue - - if (guest_disk.target_dev is None): - continue - - if (guest_disk.serial is None or guest_disk.serial != volume_id): - disks_to_skip.append(guest_disk.target_dev) - continue - - # disk is a Cinder volume with the correct volume_id - - disk_info = { - 'dev': guest_disk.target_dev, - 'serial': guest_disk.serial, - 'current_file': guest_disk.source_path, - 'source_protocol': guest_disk.source_protocol, - 'source_name': guest_disk.source_name, - 'source_hosts': guest_disk.source_hosts, - 'source_ports': guest_disk.source_ports - } - - # Determine path for new_file based on current path - if disk_info['current_file'] is not None: - current_file = disk_info['current_file'] - new_file_path = os.path.join(os.path.dirname(current_file), - new_file) - disks_to_snap.append((current_file, new_file_path)) - elif disk_info['source_protocol'] in ('gluster', 'netfs'): - network_disks_to_snap.append((disk_info, new_file)) - - if not disks_to_snap and not network_disks_to_snap: - msg = _('Found no disk to snapshot.') - raise exception.NovaException(msg) - - snapshot = vconfig.LibvirtConfigGuestSnapshot() - - for current_name, new_filename in disks_to_snap: - snap_disk = vconfig.LibvirtConfigGuestSnapshotDisk() - snap_disk.name = current_name - snap_disk.source_path = new_filename - snap_disk.source_type = 'file' - snap_disk.snapshot = 'external' - snap_disk.driver_name = 'qcow2' - - snapshot.add_disk(snap_disk) - - for disk_info, new_filename in network_disks_to_snap: - snap_disk = vconfig.LibvirtConfigGuestSnapshotDisk() - snap_disk.name = disk_info['dev'] - snap_disk.source_type = 'network' - snap_disk.source_protocol = disk_info['source_protocol'] - snap_disk.snapshot = 'external' - snap_disk.source_path = new_filename - old_dir = disk_info['source_name'].split('/')[0] - snap_disk.source_name = '%s/%s' % (old_dir, new_filename) - snap_disk.source_hosts = disk_info['source_hosts'] - snap_disk.source_ports = disk_info['source_ports'] - - snapshot.add_disk(snap_disk) - - for dev in disks_to_skip: - snap_disk = vconfig.LibvirtConfigGuestSnapshotDisk() - snap_disk.name = dev - snap_disk.snapshot = 'no' - - snapshot.add_disk(snap_disk) - - snapshot_xml = snapshot.to_xml() - LOG.debug("snap xml: %s", snapshot_xml) - - snap_flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY | - libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA | - libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT) - - QUIESCE = libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE - - try: - domain.snapshotCreateXML(snapshot_xml, - snap_flags | QUIESCE) - - return - except libvirt.libvirtError: - LOG.exception(_LE('Unable to create quiesced VM snapshot, ' - 'attempting again with quiescing disabled.')) - - try: - domain.snapshotCreateXML(snapshot_xml, snap_flags) - except libvirt.libvirtError: - LOG.exception(_LE('Unable to create VM snapshot, ' - 'failing volume_snapshot operation.')) - - raise - - def _volume_refresh_connection_info(self, context, instance, volume_id): - bdm = objects.BlockDeviceMapping.get_by_volume_id(context, - volume_id) - - driver_bdm = driver_block_device.convert_volume(bdm) - if driver_bdm: - driver_bdm.refresh_connection_info(context, instance, - self._volume_api, self) - - def volume_snapshot_create(self, context, instance, volume_id, - create_info): - """Create snapshots of a Cinder volume via libvirt. 
- - :param instance: VM instance object reference - :param volume_id: id of volume being snapshotted - :param create_info: dict of information used to create snapshots - - snapshot_id : ID of snapshot - - type : qcow2 / - - new_file : qcow2 file created by Cinder which - becomes the VM's active image after - the snapshot is complete - """ - - LOG.debug("volume_snapshot_create: create_info: %(c_info)s", - {'c_info': create_info}, instance=instance) - - try: - virt_dom = self._host.get_domain(instance) - except exception.InstanceNotFound: - raise exception.InstanceNotRunning(instance_id=instance.uuid) - - if create_info['type'] != 'qcow2': - raise exception.NovaException(_('Unknown type: %s') % - create_info['type']) - - snapshot_id = create_info.get('snapshot_id', None) - if snapshot_id is None: - raise exception.NovaException(_('snapshot_id required ' - 'in create_info')) - - try: - self._volume_snapshot_create(context, instance, virt_dom, - volume_id, create_info['new_file']) - except Exception: - with excutils.save_and_reraise_exception(): - LOG.exception(_LE('Error occurred during ' - 'volume_snapshot_create, ' - 'sending error status to Cinder.')) - self._volume_snapshot_update_status( - context, snapshot_id, 'error') - - self._volume_snapshot_update_status( - context, snapshot_id, 'creating') - - def _wait_for_snapshot(): - snapshot = self._volume_api.get_snapshot(context, snapshot_id) - - if snapshot.get('status') != 'creating': - self._volume_refresh_connection_info(context, instance, - volume_id) - raise loopingcall.LoopingCallDone() - - timer = loopingcall.FixedIntervalLoopingCall(_wait_for_snapshot) - timer.start(interval=0.5).wait() - - def _volume_snapshot_delete(self, context, instance, volume_id, - snapshot_id, delete_info=None): - """Note: - if file being merged into == active image: - do a blockRebase (pull) operation - else: - do a blockCommit operation - Files must be adjacent in snap chain. - - :param instance: instance object reference - :param volume_id: volume UUID - :param snapshot_id: snapshot UUID (unused currently) - :param delete_info: { - 'type': 'qcow2', - 'file_to_merge': 'a.img', - 'merge_target_file': 'b.img' or None (if merging file_to_merge into - active image) - } - - - Libvirt blockjob handling required for this method is broken - in versions of libvirt that do not contain: - http://libvirt.org/git/?p=libvirt.git;h=0f9e67bfad (1.1.1) - (Patch is pending in 1.0.5-maint branch as well, but we cannot detect - libvirt 1.0.5.5 vs. 1.0.5.6 here.) 
- """ - - if not self._host.has_min_version(MIN_LIBVIRT_BLOCKJOBINFO_VERSION): - ver = '.'.join([str(x) for x in MIN_LIBVIRT_BLOCKJOBINFO_VERSION]) - msg = _("Libvirt '%s' or later is required for online deletion " - "of volume snapshots.") % ver - raise exception.Invalid(msg) - - LOG.debug('volume_snapshot_delete: delete_info: %s', delete_info) - - if delete_info['type'] != 'qcow2': - msg = _('Unknown delete_info type %s') % delete_info['type'] - raise exception.NovaException(msg) - - try: - virt_dom = self._host.get_domain(instance) - except exception.InstanceNotFound: - raise exception.InstanceNotRunning(instance_id=instance.uuid) - - # Find dev name - my_dev = None - active_disk = None - - xml = virt_dom.XMLDesc(0) - xml_doc = etree.fromstring(xml) - - device_info = vconfig.LibvirtConfigGuest() - device_info.parse_dom(xml_doc) - - active_disk_object = None - - for guest_disk in device_info.devices: - if (guest_disk.root_name != 'disk'): - continue - - if (guest_disk.target_dev is None or guest_disk.serial is None): - continue - - if guest_disk.serial == volume_id: - my_dev = guest_disk.target_dev - - active_disk = guest_disk.source_path - active_protocol = guest_disk.source_protocol - active_disk_object = guest_disk - break - - if my_dev is None or (active_disk is None and active_protocol is None): - msg = _('Disk with id: %s ' - 'not found attached to instance.') % volume_id - LOG.debug('Domain XML: %s', xml) - raise exception.NovaException(msg) - - LOG.debug("found device at %s", my_dev) - - def _get_snap_dev(filename, backing_store): - if filename is None: - msg = _('filename cannot be None') - raise exception.NovaException(msg) - - # libgfapi delete - LOG.debug("XML: %s" % xml) - - LOG.debug("active disk object: %s" % active_disk_object) - - # determine reference within backing store for desired image - filename_to_merge = filename - matched_name = None - b = backing_store - index = None - - current_filename = active_disk_object.source_name.split('/')[1] - if current_filename == filename_to_merge: - return my_dev + '[0]' - - while b is not None: - source_filename = b.source_name.split('/')[1] - if source_filename == filename_to_merge: - LOG.debug('found match: %s' % b.source_name) - matched_name = b.source_name - index = b.index - break - - b = b.backing_store - - if matched_name is None: - msg = _('no match found for %s') % (filename_to_merge) - raise exception.NovaException(msg) - - LOG.debug('index of match (%s) is %s' % (b.source_name, index)) - - my_snap_dev = '%s[%s]' % (my_dev, index) - return my_snap_dev - - if delete_info['merge_target_file'] is None: - # pull via blockRebase() - - # Merge the most recent snapshot into the active image - - rebase_disk = my_dev - rebase_flags = 0 - rebase_base = delete_info['file_to_merge'] # often None - if active_protocol is not None: - rebase_base = _get_snap_dev(delete_info['file_to_merge'], - active_disk_object.backing_store) - rebase_bw = 0 - - LOG.debug('disk: %(disk)s, base: %(base)s, ' - 'bw: %(bw)s, flags: %(flags)s', - {'disk': rebase_disk, - 'base': rebase_base, - 'bw': rebase_bw, - 'flags': rebase_flags}) - - result = virt_dom.blockRebase(rebase_disk, rebase_base, - rebase_bw, rebase_flags) - - if result == 0: - LOG.debug('blockRebase started successfully') - - while self._wait_for_block_job(virt_dom, my_dev, - abort_on_error=True): - LOG.debug('waiting for blockRebase job completion') - time.sleep(0.5) - - else: - # commit with blockCommit() - my_snap_base = None - my_snap_top = None - commit_disk = my_dev - commit_flags = 0 - 
- if active_protocol is not None: - my_snap_base = _get_snap_dev(delete_info['merge_target_file'], - active_disk_object.backing_store) - my_snap_top = _get_snap_dev(delete_info['file_to_merge'], - active_disk_object.backing_store) - try: - commit_flags |= libvirt.VIR_DOMAIN_BLOCK_COMMIT_RELATIVE - except AttributeError: - ver = '.'.join( - [str(x) for x in - MIN_LIBVIRT_BLOCKCOMMIT_RELATIVE_VERSION]) - msg = _("Relative blockcommit support was not detected. " - "Libvirt '%s' or later is required for online " - "deletion of network storage-backed volume " - "snapshots.") % ver - raise exception.Invalid(msg) - - commit_base = my_snap_base or delete_info['merge_target_file'] - commit_top = my_snap_top or delete_info['file_to_merge'] - bandwidth = 0 - - LOG.debug('will call blockCommit with commit_disk=%(commit_disk)s ' - 'commit_base=%(commit_base)s ' - 'commit_top=%(commit_top)s ' - % {'commit_disk': commit_disk, - 'commit_base': commit_base, - 'commit_top': commit_top}) - - result = virt_dom.blockCommit(commit_disk, commit_base, commit_top, - bandwidth, commit_flags) - - if result == 0: - LOG.debug('blockCommit started successfully') - - while self._wait_for_block_job(virt_dom, my_dev, - abort_on_error=True): - LOG.debug('waiting for blockCommit job completion') - time.sleep(0.5) - - def volume_snapshot_delete(self, context, instance, volume_id, snapshot_id, - delete_info): - try: - self._volume_snapshot_delete(context, instance, volume_id, - snapshot_id, delete_info=delete_info) - except Exception: - with excutils.save_and_reraise_exception(): - LOG.exception(_LE('Error occurred during ' - 'volume_snapshot_delete, ' - 'sending error status to Cinder.')) - self._volume_snapshot_update_status( - context, snapshot_id, 'error_deleting') - - self._volume_snapshot_update_status(context, snapshot_id, 'deleting') - self._volume_refresh_connection_info(context, instance, volume_id) - - def reboot(self, context, instance, network_info, reboot_type, - block_device_info=None, bad_volumes_callback=None): - """Reboot a virtual machine, given an instance reference.""" - if reboot_type == 'SOFT': - # NOTE(vish): This will attempt to do a graceful shutdown/restart. - try: - soft_reboot_success = self._soft_reboot(instance) - except libvirt.libvirtError as e: - LOG.debug("Instance soft reboot failed: %s", e) - soft_reboot_success = False - - if soft_reboot_success: - LOG.info(_LI("Instance soft rebooted successfully."), - instance=instance) - return - else: - LOG.warn(_LW("Failed to soft reboot instance. " - "Trying hard reboot."), - instance=instance) - return self._hard_reboot(context, instance, network_info, - block_device_info) - - def _soft_reboot(self, instance): - """Attempt to shutdown and restart the instance gracefully. - - We use shutdown and create here so we can return if the guest - responded and actually rebooted. Note that this method only - succeeds if the guest responds to acpi. Therefore we return - success or failure so we can fall back to a hard reboot if - necessary. - - :returns: True if the reboot succeeded - """ - dom = self._host.get_domain(instance) - state = self._get_power_state(dom) - old_domid = dom.ID() - # NOTE(vish): This check allows us to reboot an instance that - # is already shutdown. - if state == power_state.RUNNING: - dom.shutdown() - # NOTE(vish): This actually could take slightly longer than the - # FLAG defines depending on how long the get_info - # call takes to return. 
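# --- Editorial sketch (not part of the deleted driver code) ---------------
# reboot() above first tries an ACPI soft reboot and falls back to a hard
# reboot when the guest ignores it or libvirt errors out.  Reduced to its
# control flow with injectable callables (reboot_flow, soft_reboot and
# hard_reboot are hypothetical names):

def reboot_flow(reboot_type, soft_reboot, hard_reboot):
    """Try a graceful reboot for 'SOFT'; otherwise, or on failure, go hard."""
    if reboot_type == 'SOFT':
        try:
            if soft_reboot():
                return 'soft'
        except RuntimeError:      # stands in for libvirt.libvirtError
            pass                  # fall through to the hard reboot
    hard_reboot()
    return 'hard'


assert reboot_flow('SOFT', lambda: True, lambda: None) == 'soft'
assert reboot_flow('SOFT', lambda: False, lambda: None) == 'hard'
assert reboot_flow('HARD', lambda: True, lambda: None) == 'hard'
# ---------------------------------------------------------------------------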
- self._prepare_pci_devices_for_use( - pci_manager.get_instance_pci_devs(instance, 'all')) - for x in xrange(CONF.libvirt.wait_soft_reboot_seconds): - dom = self._host.get_domain(instance) - state = self._get_power_state(dom) - new_domid = dom.ID() - - # NOTE(ivoks): By checking domain IDs, we make sure we are - # not recreating domain that's already running. - if old_domid != new_domid: - if state in [power_state.SHUTDOWN, - power_state.CRASHED]: - LOG.info(_LI("Instance shutdown successfully."), - instance=instance) - self._create_domain(domain=dom) - timer = loopingcall.FixedIntervalLoopingCall( - self._wait_for_running, instance) - timer.start(interval=0.5).wait() - return True - else: - LOG.info(_LI("Instance may have been rebooted during soft " - "reboot, so return now."), instance=instance) - return True - greenthread.sleep(1) - return False - - def _hard_reboot(self, context, instance, network_info, - block_device_info=None): - """Reboot a virtual machine, given an instance reference. - - Performs a Libvirt reset (if supported) on the domain. - - If Libvirt reset is unavailable this method actually destroys and - re-creates the domain to ensure the reboot happens, as the guest - OS cannot ignore this action. - - If xml is set, it uses the passed in xml in place of the xml from the - existing domain. - """ - - self._destroy(instance) - - # Convert the system metadata to image metadata - image_meta = utils.get_image_from_system_metadata( - instance.system_metadata) - # NOTE(stpierre): In certain cases -- e.g., when booting a - # guest to restore its state after restarting - # Nova compute -- the context is not - # populated, which causes this (and - # _create_images_and_backing below) to error. - if not image_meta and context.auth_token is not None: - image_ref = instance.get('image_ref') - image_meta = compute_utils.get_image_metadata(context, - self._image_api, - image_ref, - instance) - - instance_dir = libvirt_utils.get_instance_path(instance) - fileutils.ensure_tree(instance_dir) - - disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, - instance, - image_meta, - block_device_info) - # NOTE(vish): This could generate the wrong device_format if we are - # using the raw backend and the images don't exist yet. - # The create_images_and_backing below doesn't properly - # regenerate raw backend images, however, so when it - # does we need to (re)generate the xml after the images - # are in place. - xml = self._get_guest_xml(context, instance, network_info, disk_info, - image_meta, - block_device_info=block_device_info, - write_to_disk=True) - - # NOTE (rmk): Re-populate any missing backing files. - disk_info_json = self._get_instance_disk_info(instance.name, xml, - block_device_info) - - if context.auth_token is not None: - self._create_images_and_backing(context, instance, instance_dir, - disk_info_json) - - # Initialize all the necessary networking, block devices and - # start the instance. 
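# --- Editorial sketch (not part of the deleted driver code) ---------------
# The _soft_reboot() wait loop above decides the guest really rebooted either
# when the domain id changed and the old domain reached SHUTDOWN/CRASHED
# (so it must be started again), or when it is already running under a new
# id.  A compact stand-alone version of that per-iteration decision
# (soft_reboot_outcome is a hypothetical name):

SHUTDOWN, CRASHED, RUNNING = 'shutdown', 'crashed', 'running'


def soft_reboot_outcome(old_domid, new_domid, state):
    """Return 'recreate', 'rebooted' or None (keep waiting)."""
    if old_domid == new_domid:
        return None               # same domain: the guest is still going down
    if state in (SHUTDOWN, CRASHED):
        return 'recreate'         # halted under a new id: start it again
    return 'rebooted'             # already running again, nothing left to do


assert soft_reboot_outcome(5, 5, RUNNING) is None
assert soft_reboot_outcome(5, 6, SHUTDOWN) == 'recreate'
assert soft_reboot_outcome(5, 6, RUNNING) == 'rebooted'
# ---------------------------------------------------------------------------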
- self._create_domain_and_network(context, xml, instance, network_info, - disk_info, - block_device_info=block_device_info, - reboot=True, - vifs_already_plugged=True) - self._prepare_pci_devices_for_use( - pci_manager.get_instance_pci_devs(instance, 'all')) - - def _wait_for_reboot(): - """Called at an interval until the VM is running again.""" - state = self.get_info(instance).state - - if state == power_state.RUNNING: - LOG.info(_LI("Instance rebooted successfully."), - instance=instance) - raise loopingcall.LoopingCallDone() - - timer = loopingcall.FixedIntervalLoopingCall(_wait_for_reboot) - timer.start(interval=0.5).wait() - - def pause(self, instance): - """Pause VM instance.""" - dom = self._host.get_domain(instance) - dom.suspend() - - def unpause(self, instance): - """Unpause paused VM instance.""" - dom = self._host.get_domain(instance) - dom.resume() - - def _clean_shutdown(self, instance, timeout, retry_interval): - """Attempt to shutdown the instance gracefully. - - :param instance: The instance to be shutdown - :param timeout: How long to wait in seconds for the instance to - shutdown - :param retry_interval: How often in seconds to signal the instance - to shutdown while waiting - - :returns: True if the shutdown succeeded - """ - - # List of states that represent a shutdown instance - SHUTDOWN_STATES = [power_state.SHUTDOWN, - power_state.CRASHED] - - try: - dom = self._host.get_domain(instance) - except exception.InstanceNotFound: - # If the instance has gone then we don't need to - # wait for it to shutdown - return True - - state = self._get_power_state(dom) - if state in SHUTDOWN_STATES: - LOG.info(_LI("Instance already shutdown."), - instance=instance) - return True - - LOG.debug("Shutting down instance from state %s", state, - instance=instance) - dom.shutdown() - retry_countdown = retry_interval - - for sec in six.moves.range(timeout): - - dom = self._host.get_domain(instance) - state = self._get_power_state(dom) - - if state in SHUTDOWN_STATES: - LOG.info(_LI("Instance shutdown successfully after %d " - "seconds."), sec, instance=instance) - return True - - # Note(PhilD): We can't assume that the Guest was able to process - # any previous shutdown signal (for example it may - # have still been startingup, so within the overall - # timeout we re-trigger the shutdown every - # retry_interval - if retry_countdown == 0: - retry_countdown = retry_interval - # Instance could shutdown at any time, in which case we - # will get an exception when we call shutdown - try: - LOG.debug("Instance in state %s after %d seconds - " - "resending shutdown", state, sec, - instance=instance) - dom.shutdown() - except libvirt.libvirtError: - # Assume this is because its now shutdown, so loop - # one more time to clean up. - LOG.debug("Ignoring libvirt exception from shutdown " - "request.", instance=instance) - continue - else: - retry_countdown -= 1 - - time.sleep(1) - - LOG.info(_LI("Instance failed to shutdown in %d seconds."), - timeout, instance=instance) - return False - - def power_off(self, instance, timeout=0, retry_interval=0): - """Power off the specified instance.""" - if timeout: - self._clean_shutdown(instance, timeout, retry_interval) - self._destroy(instance) - - def power_on(self, context, instance, network_info, - block_device_info=None): - """Power on the specified instance.""" - # We use _hard_reboot here to ensure that all backing files, - # network, and block device connections, etc. are established - # and available before we attempt to start the instance. 
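# --- Editorial sketch (not part of the deleted driver code) ---------------
# _clean_shutdown() above sends an ACPI shutdown request, then polls once a
# second for up to `timeout` seconds, re-sending the request every
# `retry_interval` seconds in case the guest was not ready for the first
# one.  The same shape with injectable callables instead of libvirt calls
# (clean_shutdown, get_state and send_shutdown are hypothetical names):

import time


def clean_shutdown(get_state, send_shutdown, timeout, retry_interval,
                   sleep=time.sleep):
    """Return True if get_state() reports 'shutdown' within timeout seconds."""
    if get_state() == 'shutdown':
        return True
    send_shutdown()
    retry_countdown = retry_interval
    for _sec in range(timeout):
        if get_state() == 'shutdown':
            return True
        if retry_countdown == 0:
            retry_countdown = retry_interval
            send_shutdown()          # the guest may have missed the first one
        else:
            retry_countdown -= 1
        sleep(1)
    return False


# Guest that powers off at the third poll inside the loop:
states = iter(['running', 'running', 'running', 'shutdown'])
assert clean_shutdown(lambda: next(states), lambda: None,
                      timeout=5, retry_interval=2, sleep=lambda s: None)
# ---------------------------------------------------------------------------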
- self._hard_reboot(context, instance, network_info, block_device_info) - - def suspend(self, context, instance): - """Suspend the specified instance.""" - dom = self._host.get_domain(instance) - self._detach_pci_devices(dom, - pci_manager.get_instance_pci_devs(instance)) - self._detach_sriov_ports(context, instance, dom) - dom.managedSave(0) - - def resume(self, context, instance, network_info, block_device_info=None): - """resume the specified instance.""" - image_meta = compute_utils.get_image_metadata(context, - self._image_api, instance.image_ref, instance) - - disk_info = blockinfo.get_disk_info( - CONF.libvirt.virt_type, instance, image_meta, - block_device_info=block_device_info) - - xml = self._get_existing_domain_xml(instance, network_info, - block_device_info) - dom = self._create_domain_and_network(context, xml, instance, - network_info, disk_info, - block_device_info=block_device_info, - vifs_already_plugged=True) - self._attach_pci_devices(dom, - pci_manager.get_instance_pci_devs(instance)) - self._attach_sriov_ports(context, instance, dom, network_info) - - def resume_state_on_host_boot(self, context, instance, network_info, - block_device_info=None): - """resume guest state when a host is booted.""" - # Check if the instance is running already and avoid doing - # anything if it is. - try: - domain = self._host.get_domain(instance) - state = self._get_power_state(domain) - - ignored_states = (power_state.RUNNING, - power_state.SUSPENDED, - power_state.NOSTATE, - power_state.PAUSED) - - if state in ignored_states: - return - except exception.NovaException: - pass - - # Instance is not up and could be in an unknown state. - # Be as absolute as possible about getting it back into - # a known and running state. - self._hard_reboot(context, instance, network_info, block_device_info) - - def rescue(self, context, instance, network_info, image_meta, - rescue_password): - """Loads a VM using rescue images. - - A rescue is normally performed when something goes wrong with the - primary images and data needs to be corrected/recovered. Rescuing - should not edit or over-ride the original image, only allow for - data recovery. - - """ - instance_dir = libvirt_utils.get_instance_path(instance) - unrescue_xml = self._get_existing_domain_xml(instance, network_info) - unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml') - libvirt_utils.write_to_file(unrescue_xml_path, unrescue_xml) - - if image_meta is not None: - rescue_image_id = image_meta.get('id') - else: - rescue_image_id = None - - rescue_images = { - 'image_id': (rescue_image_id or - CONF.libvirt.rescue_image_id or instance.image_ref), - 'kernel_id': (CONF.libvirt.rescue_kernel_id or - instance.kernel_id), - 'ramdisk_id': (CONF.libvirt.rescue_ramdisk_id or - instance.ramdisk_id), - } - disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, - instance, - image_meta, - rescue=True) - self._create_image(context, instance, disk_info['mapping'], - suffix='.rescue', disk_images=rescue_images, - network_info=network_info, - admin_pass=rescue_password) - xml = self._get_guest_xml(context, instance, network_info, disk_info, - image_meta, rescue=rescue_images, - write_to_disk=True) - self._destroy(instance) - self._create_domain(xml) - - def unrescue(self, instance, network_info): - """Reboot the VM which is being rescued back into primary images. 
- """ - instance_dir = libvirt_utils.get_instance_path(instance) - unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml') - xml = libvirt_utils.load_file(unrescue_xml_path) - virt_dom = self._host.get_domain(instance) - self._destroy(instance) - self._create_domain(xml, virt_dom) - libvirt_utils.file_delete(unrescue_xml_path) - rescue_files = os.path.join(instance_dir, "*.rescue") - for rescue_file in glob.iglob(rescue_files): - libvirt_utils.file_delete(rescue_file) - # cleanup rescue volume - lvm.remove_volumes([lvmdisk for lvmdisk in self._lvm_disks(instance) - if lvmdisk.endswith('.rescue')]) - - def poll_rebooting_instances(self, timeout, instances): - pass - - def _enable_hairpin(self, xml): - interfaces = self._get_interfaces(xml) - for interface in interfaces: - utils.execute('tee', - '/sys/class/net/%s/brport/hairpin_mode' % interface, - process_input='1', - run_as_root=True, - check_exit_code=[0, 1]) - - # NOTE(ilyaalekseyev): Implementation like in multinics - # for xenapi(tr3buchet) - def spawn(self, context, instance, image_meta, injected_files, - admin_password, network_info=None, block_device_info=None): - disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, - instance, - image_meta, - block_device_info) - self._create_image(context, instance, - disk_info['mapping'], - network_info=network_info, - block_device_info=block_device_info, - files=injected_files, - admin_pass=admin_password) - xml = self._get_guest_xml(context, instance, network_info, - disk_info, image_meta, - block_device_info=block_device_info, - write_to_disk=True) - self._create_domain_and_network(context, xml, instance, network_info, - disk_info, - block_device_info=block_device_info) - LOG.debug("Instance is running", instance=instance) - - def _wait_for_boot(): - """Called at an interval until the VM is running.""" - state = self.get_info(instance).state - - if state == power_state.RUNNING: - LOG.info(_LI("Instance spawned successfully."), - instance=instance) - raise loopingcall.LoopingCallDone() - - timer = loopingcall.FixedIntervalLoopingCall(_wait_for_boot) - timer.start(interval=0.5).wait() - - def _flush_libvirt_console(self, pty): - out, err = utils.execute('dd', - 'if=%s' % pty, - 'iflag=nonblock', - run_as_root=True, - check_exit_code=False) - return out - - def _append_to_file(self, data, fpath): - LOG.info(_LI('data: %(data)r, fpath: %(fpath)r'), - {'data': data, 'fpath': fpath}) - with open(fpath, 'a+') as fp: - fp.write(data) - - return fpath - - def get_console_output(self, context, instance): - virt_dom = self._host.get_domain(instance) - xml = virt_dom.XMLDesc(0) - tree = etree.fromstring(xml) - - console_types = {} - - # NOTE(comstud): We want to try 'file' types first, then try 'pty' - # types. We can't use Python 2.7 syntax of: - # tree.find("./devices/console[@type='file']/source") - # because we need to support 2.6. - console_nodes = tree.findall('./devices/console') - for console_node in console_nodes: - console_type = console_node.get('type') - console_types.setdefault(console_type, []) - console_types[console_type].append(console_node) - - # If the guest has a console logging to a file prefer to use that - if console_types.get('file'): - for file_console in console_types.get('file'): - source_node = file_console.find('./source') - if source_node is None: - continue - path = source_node.get("path") - if not path: - continue - - if not os.path.exists(path): - LOG.info(_LI('Instance is configured with a file console, ' - 'but the backing file is not (yet?) 
present'), - instance=instance) - return "" - - libvirt_utils.chown(path, os.getuid()) - - with libvirt_utils.file_open(path, 'rb') as fp: - log_data, remaining = utils.last_bytes(fp, - MAX_CONSOLE_BYTES) - if remaining > 0: - LOG.info(_LI('Truncated console log returned, ' - '%d bytes ignored'), remaining, - instance=instance) - return log_data - - # Try 'pty' types - if console_types.get('pty'): - for pty_console in console_types.get('pty'): - source_node = pty_console.find('./source') - if source_node is None: - continue - pty = source_node.get("path") - if not pty: - continue - break - else: - msg = _("Guest does not have a console available") - raise exception.NovaException(msg) - - self._chown_console_log_for_instance(instance) - data = self._flush_libvirt_console(pty) - console_log = self._get_console_log_path(instance) - fpath = self._append_to_file(data, console_log) - - with libvirt_utils.file_open(fpath, 'rb') as fp: - log_data, remaining = utils.last_bytes(fp, MAX_CONSOLE_BYTES) - if remaining > 0: - LOG.info(_LI('Truncated console log returned, ' - '%d bytes ignored'), - remaining, instance=instance) - return log_data - - @staticmethod - def get_host_ip_addr(): - ips = compute_utils.get_machine_ips() - if CONF.my_ip not in ips: - LOG.warn(_LW('my_ip address (%(my_ip)s) was not found on ' - 'any of the interfaces: %(ifaces)s'), - {'my_ip': CONF.my_ip, 'ifaces': ", ".join(ips)}) - return CONF.my_ip - - def get_vnc_console(self, context, instance): - def get_vnc_port_for_instance(instance_name): - virt_dom = self._host.get_domain(instance) - xml = virt_dom.XMLDesc(0) - xml_dom = etree.fromstring(xml) - - graphic = xml_dom.find("./devices/graphics[@type='vnc']") - if graphic is not None: - return graphic.get('port') - # NOTE(rmk): We had VNC consoles enabled but the instance in - # question is not actually listening for connections. - raise exception.ConsoleTypeUnavailable(console_type='vnc') - - port = get_vnc_port_for_instance(instance.name) - host = CONF.vncserver_proxyclient_address - - return ctype.ConsoleVNC(host=host, port=port) - - def get_spice_console(self, context, instance): - def get_spice_ports_for_instance(instance_name): - virt_dom = self._host.get_domain(instance) - xml = virt_dom.XMLDesc(0) - xml_dom = etree.fromstring(xml) - - graphic = xml_dom.find("./devices/graphics[@type='spice']") - if graphic is not None: - return (graphic.get('port'), graphic.get('tlsPort')) - # NOTE(rmk): We had Spice consoles enabled but the instance in - # question is not actually listening for connections. 
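(A rough illustration of the XML lookup that get_vnc_console and get_spice_console perform above, using the standard-library ElementTree on a made-up domain XML fragment; the real code parses the output of virt_dom.XMLDesc(0).)

from xml.etree import ElementTree

SAMPLE_DOMAIN_XML = """
<domain>
  <devices>
    <graphics type='vnc' port='5900' listen='0.0.0.0'/>
    <graphics type='spice' port='5901' tlsPort='5902'/>
  </devices>
</domain>
"""

def graphics_ports(domain_xml, gfx_type):
    # Return (port, tlsPort) of the first <graphics> element of the requested
    # type, or None if the guest exposes no such console.
    root = ElementTree.fromstring(domain_xml)
    node = root.find("./devices/graphics[@type='%s']" % gfx_type)
    if node is None:
        return None
    return node.get('port'), node.get('tlsPort')

print(graphics_ports(SAMPLE_DOMAIN_XML, 'vnc'))    # ('5900', None)
print(graphics_ports(SAMPLE_DOMAIN_XML, 'spice'))  # ('5901', '5902')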
- raise exception.ConsoleTypeUnavailable(console_type='spice') - - ports = get_spice_ports_for_instance(instance.name) - host = CONF.spice.server_proxyclient_address - - return ctype.ConsoleSpice(host=host, port=ports[0], tlsPort=ports[1]) - - def get_serial_console(self, context, instance): - for hostname, port in self._get_serial_ports_from_instance( - instance, mode='bind'): - return ctype.ConsoleSerial(host=hostname, port=port) - raise exception.ConsoleTypeUnavailable(console_type='serial') - - @staticmethod - def _supports_direct_io(dirpath): - - if not hasattr(os, 'O_DIRECT'): - LOG.debug("This python runtime does not support direct I/O") - return False - - testfile = os.path.join(dirpath, ".directio.test") - - hasDirectIO = True - try: - f = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT) - # Check is the write allowed with 512 byte alignment - align_size = 512 - m = mmap.mmap(-1, align_size) - m.write(r"x" * align_size) - os.write(f, m) - os.close(f) - LOG.debug("Path '%(path)s' supports direct I/O", - {'path': dirpath}) - except OSError as e: - if e.errno == errno.EINVAL: - LOG.debug("Path '%(path)s' does not support direct I/O: " - "'%(ex)s'", {'path': dirpath, 'ex': e}) - hasDirectIO = False - else: - with excutils.save_and_reraise_exception(): - LOG.error(_LE("Error on '%(path)s' while checking " - "direct I/O: '%(ex)s'"), - {'path': dirpath, 'ex': e}) - except Exception as e: - with excutils.save_and_reraise_exception(): - LOG.error(_LE("Error on '%(path)s' while checking direct I/O: " - "'%(ex)s'"), {'path': dirpath, 'ex': e}) - finally: - try: - os.unlink(testfile) - except Exception: - pass - - return hasDirectIO - - @staticmethod - def _create_local(target, local_size, unit='G', - fs_format=None, label=None): - """Create a blank image of specified size.""" - - libvirt_utils.create_image('raw', target, - '%d%c' % (local_size, unit)) - - def _create_ephemeral(self, target, ephemeral_size, - fs_label, os_type, is_block_dev=False, - max_size=None, context=None, specified_fs=None): - if not is_block_dev: - self._create_local(target, ephemeral_size) - - # Run as root only for block devices. - disk.mkfs(os_type, fs_label, target, run_as_root=is_block_dev, - specified_fs=specified_fs) - - @staticmethod - def _create_swap(target, swap_mb, max_size=None, context=None): - """Create a swap file of specified size.""" - libvirt_utils.create_image('raw', target, '%dM' % swap_mb) - utils.mkfs('swap', target) - - @staticmethod - def _get_console_log_path(instance): - return os.path.join(libvirt_utils.get_instance_path(instance), - 'console.log') - - @staticmethod - def _get_disk_config_path(instance, suffix=''): - return os.path.join(libvirt_utils.get_instance_path(instance), - 'disk.config' + suffix) - - def _chown_console_log_for_instance(self, instance): - console_log = self._get_console_log_path(instance) - if os.path.exists(console_log): - libvirt_utils.chown(console_log, os.getuid()) - - def _chown_disk_config_for_instance(self, instance): - disk_config = self._get_disk_config_path(instance) - if os.path.exists(disk_config): - libvirt_utils.chown(disk_config, os.getuid()) - - @staticmethod - def _is_booted_from_volume(instance, disk_mapping): - """Determines whether the VM is booting from volume - - Determines whether the disk mapping indicates that the VM - is booting from a volume. 
- """ - return ((not bool(instance.get('image_ref'))) - or 'disk' not in disk_mapping) - - def _inject_data(self, instance, network_info, admin_pass, files, suffix): - """Injects data in a disk image - - Helper used for injecting data in a disk image file system. - - Keyword arguments: - instance -- a dict that refers instance specifications - network_info -- a dict that refers network speficications - admin_pass -- a string used to set an admin password - files -- a list of files needs to be injected - suffix -- a string used as an image name suffix - """ - # Handles the partition need to be used. - target_partition = None - if not instance.kernel_id: - target_partition = CONF.libvirt.inject_partition - if target_partition == 0: - target_partition = None - if CONF.libvirt.virt_type == 'lxc': - target_partition = None - - # Handles the key injection. - if CONF.libvirt.inject_key and instance.get('key_data'): - key = str(instance.key_data) - else: - key = None - - # Handles the admin password injection. - if not CONF.libvirt.inject_password: - admin_pass = None - - # Handles the network injection. - net = netutils.get_injected_network_template( - network_info, libvirt_virt_type=CONF.libvirt.virt_type) - - # Handles the metadata injection - metadata = instance.get('metadata') - - image_type = CONF.libvirt.images_type - if any((key, net, metadata, admin_pass, files)): - injection_image = self.image_backend.image( - instance, - 'disk' + suffix, - image_type) - img_id = instance.image_ref - - if not injection_image.check_image_exists(): - LOG.warn(_LW('Image %s not found on disk storage. ' - 'Continue without injecting data'), - injection_image.path, instance=instance) - return - try: - disk.inject_data(injection_image.path, - key, net, metadata, admin_pass, files, - partition=target_partition, - use_cow=CONF.use_cow_images, - mandatory=('files',)) - except Exception as e: - with excutils.save_and_reraise_exception(): - LOG.error(_LE('Error injecting data into image ' - '%(img_id)s (%(e)s)'), - {'img_id': img_id, 'e': e}, - instance=instance) - - def _create_image(self, context, instance, - disk_mapping, suffix='', - disk_images=None, network_info=None, - block_device_info=None, files=None, - admin_pass=None, inject_files=True, - fallback_from_host=None): - booted_from_volume = self._is_booted_from_volume( - instance, disk_mapping) - - def image(fname, image_type=CONF.libvirt.images_type): - return self.image_backend.image(instance, - fname + suffix, image_type) - - def raw(fname): - return image(fname, image_type='raw') - - # ensure directories exist and are writable - fileutils.ensure_tree(libvirt_utils.get_instance_path(instance)) - - LOG.info(_LI('Creating image'), instance=instance) - - # NOTE(dprince): for rescue console.log may already exist... chown it. - self._chown_console_log_for_instance(instance) - - # NOTE(yaguang): For evacuate disk.config already exist in shared - # storage, chown it. 
- self._chown_disk_config_for_instance(instance) - - # NOTE(vish): No need add the suffix to console.log - libvirt_utils.write_to_file( - self._get_console_log_path(instance), '', 7) - - if not disk_images: - disk_images = {'image_id': instance.image_ref, - 'kernel_id': instance.kernel_id, - 'ramdisk_id': instance.ramdisk_id} - - if disk_images['kernel_id']: - fname = imagecache.get_cache_fname(disk_images, 'kernel_id') - raw('kernel').cache(fetch_func=libvirt_utils.fetch_image, - context=context, - filename=fname, - image_id=disk_images['kernel_id'], - user_id=instance.user_id, - project_id=instance.project_id) - if disk_images['ramdisk_id']: - fname = imagecache.get_cache_fname(disk_images, 'ramdisk_id') - raw('ramdisk').cache(fetch_func=libvirt_utils.fetch_image, - context=context, - filename=fname, - image_id=disk_images['ramdisk_id'], - user_id=instance.user_id, - project_id=instance.project_id) - - inst_type = instance.get_flavor() - - # NOTE(ndipanov): Even if disk_mapping was passed in, which - # currently happens only on rescue - we still don't want to - # create a base image. - if not booted_from_volume: - root_fname = imagecache.get_cache_fname(disk_images, 'image_id') - size = instance.root_gb * units.Gi - - if size == 0 or suffix == '.rescue': - size = None - - backend = image('disk') - if backend.SUPPORTS_CLONE: - def clone_fallback_to_fetch(*args, **kwargs): - try: - backend.clone(context, disk_images['image_id']) - except exception.ImageUnacceptable: - libvirt_utils.fetch_image(*args, **kwargs) - fetch_func = clone_fallback_to_fetch - else: - fetch_func = libvirt_utils.fetch_image - self._try_fetch_image_cache(backend, fetch_func, context, - root_fname, disk_images['image_id'], - instance, size, fallback_from_host) - - # Lookup the filesystem type if required - os_type_with_default = disk.get_fs_type_for_os_type(instance.os_type) - # Generate a file extension based on the file system - # type and the mkfs commands configured if any - file_extension = disk.get_file_extension_for_os_type( - os_type_with_default) - - ephemeral_gb = instance.ephemeral_gb - if 'disk.local' in disk_mapping: - disk_image = image('disk.local') - fn = functools.partial(self._create_ephemeral, - fs_label='ephemeral0', - os_type=instance.os_type, - is_block_dev=disk_image.is_block_dev) - fname = "ephemeral_%s_%s" % (ephemeral_gb, file_extension) - size = ephemeral_gb * units.Gi - disk_image.cache(fetch_func=fn, - context=context, - filename=fname, - size=size, - ephemeral_size=ephemeral_gb) - - for idx, eph in enumerate(driver.block_device_info_get_ephemerals( - block_device_info)): - disk_image = image(blockinfo.get_eph_disk(idx)) - - specified_fs = eph.get('guest_format') - if specified_fs and not self.is_supported_fs_format(specified_fs): - msg = _("%s format is not supported") % specified_fs - raise exception.InvalidBDMFormat(details=msg) - - fn = functools.partial(self._create_ephemeral, - fs_label='ephemeral%d' % idx, - os_type=instance.os_type, - is_block_dev=disk_image.is_block_dev) - size = eph['size'] * units.Gi - fname = "ephemeral_%s_%s" % (eph['size'], file_extension) - disk_image.cache(fetch_func=fn, - context=context, - filename=fname, - size=size, - ephemeral_size=eph['size'], - specified_fs=specified_fs) - - if 'disk.swap' in disk_mapping: - mapping = disk_mapping['disk.swap'] - swap_mb = 0 - - swap = driver.block_device_info_get_swap(block_device_info) - if driver.swap_is_usable(swap): - swap_mb = swap['swap_size'] - elif (inst_type['swap'] > 0 and - not 
block_device.volume_in_mapping( - mapping['dev'], block_device_info)): - swap_mb = inst_type['swap'] - - if swap_mb > 0: - size = swap_mb * units.Mi - image('disk.swap').cache(fetch_func=self._create_swap, - context=context, - filename="swap_%s" % swap_mb, - size=size, - swap_mb=swap_mb) - - # Config drive - if configdrive.required_by(instance): - LOG.info(_LI('Using config drive'), instance=instance) - extra_md = {} - if admin_pass: - extra_md['admin_pass'] = admin_pass - - inst_md = instance_metadata.InstanceMetadata(instance, - content=files, extra_md=extra_md, network_info=network_info) - with configdrive.ConfigDriveBuilder(instance_md=inst_md) as cdb: - configdrive_path = self._get_disk_config_path(instance, suffix) - LOG.info(_LI('Creating config drive at %(path)s'), - {'path': configdrive_path}, instance=instance) - - try: - cdb.make_drive(configdrive_path) - except processutils.ProcessExecutionError as e: - with excutils.save_and_reraise_exception(): - LOG.error(_LE('Creating config drive failed ' - 'with error: %s'), - e, instance=instance) - - # File injection only if needed - elif inject_files and CONF.libvirt.inject_partition != -2: - if booted_from_volume: - LOG.warn(_LW('File injection into a boot from volume ' - 'instance is not supported'), instance=instance) - self._inject_data( - instance, network_info, admin_pass, files, suffix) - - if CONF.libvirt.virt_type == 'uml': - libvirt_utils.chown(image('disk').path, 'root') - - def _prepare_pci_devices_for_use(self, pci_devices): - # kvm , qemu support managed mode - # In managed mode, the configured device will be automatically - # detached from the host OS drivers when the guest is started, - # and then re-attached when the guest shuts down. - if CONF.libvirt.virt_type != 'xen': - # we do manual detach only for xen - return - try: - for dev in pci_devices: - libvirt_dev_addr = dev['hypervisor_name'] - libvirt_dev = \ - self._conn.nodeDeviceLookupByName(libvirt_dev_addr) - # Note(yjiang5) Spelling for 'dettach' is correct, see - # http://libvirt.org/html/libvirt-libvirt.html. - libvirt_dev.dettach() - - # Note(yjiang5): A reset of one PCI device may impact other - # devices on the same bus, thus we need two separated loops - # to detach and then reset it. 
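# (Illustrative aside, not part of the original code: with two passthrough
#  devices A and B that share a bus, a single combined loop would run
#  detach(A); reset(A); detach(B); reset(B), and the reset of A can disturb B
#  while B is still bound to its host driver. Running one full detach pass and
#  only then a reset pass, as the two separate loops here do, closes that
#  window.)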
- for dev in pci_devices: - libvirt_dev_addr = dev['hypervisor_name'] - libvirt_dev = \ - self._conn.nodeDeviceLookupByName(libvirt_dev_addr) - libvirt_dev.reset() - - except libvirt.libvirtError as exc: - raise exception.PciDevicePrepareFailed(id=dev['id'], - instance_uuid= - dev['instance_uuid'], - reason=six.text_type(exc)) - - def _detach_pci_devices(self, dom, pci_devs): - - # for libvirt version < 1.1.1, this is race condition - # so forbid detach if not had this version - if not self._host.has_min_version(MIN_LIBVIRT_DEVICE_CALLBACK_VERSION): - if pci_devs: - reason = (_("Detaching PCI devices with libvirt < %(ver)s" - " is not permitted") % - {'ver': MIN_LIBVIRT_DEVICE_CALLBACK_VERSION}) - raise exception.PciDeviceDetachFailed(reason=reason, - dev=pci_devs) - try: - for dev in pci_devs: - dom.detachDeviceFlags(self._get_guest_pci_device(dev).to_xml(), - libvirt.VIR_DOMAIN_AFFECT_LIVE) - # after detachDeviceFlags returned, we should check the dom to - # ensure the detaching is finished - xml = dom.XMLDesc(0) - xml_doc = etree.fromstring(xml) - guest_config = vconfig.LibvirtConfigGuest() - guest_config.parse_dom(xml_doc) - - for hdev in [d for d in guest_config.devices - if isinstance(d, vconfig.LibvirtConfigGuestHostdevPCI)]: - hdbsf = [hdev.domain, hdev.bus, hdev.slot, hdev.function] - dbsf = pci_utils.parse_address(dev['address']) - if [int(x, 16) for x in hdbsf] ==\ - [int(x, 16) for x in dbsf]: - raise exception.PciDeviceDetachFailed(reason= - "timeout", - dev=dev) - - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - if error_code == libvirt.VIR_ERR_NO_DOMAIN: - LOG.warn(_LW("Instance disappeared while detaching " - "a PCI device from it.")) - else: - raise - - def _attach_pci_devices(self, dom, pci_devs): - try: - for dev in pci_devs: - dom.attachDevice(self._get_guest_pci_device(dev).to_xml()) - - except libvirt.libvirtError: - LOG.error(_LE('Attaching PCI devices %(dev)s to %(dom)s failed.'), - {'dev': pci_devs, 'dom': dom.ID()}) - raise - - def _prepare_args_for_get_config(self, context, instance): - image_ref = instance.image_ref - image_meta = compute_utils.get_image_metadata( - context, self._image_api, image_ref, instance) - return instance.flavor, image_meta - - @staticmethod - def _has_sriov_port(network_info): - for vif in network_info: - if vif['vnic_type'] == network_model.VNIC_TYPE_DIRECT: - return True - return False - - def _attach_sriov_ports(self, context, instance, dom, network_info=None): - if network_info is None: - network_info = instance.info_cache.network_info - if network_info is None: - return - - if self._has_sriov_port(network_info): - flavor, image_meta = self._prepare_args_for_get_config(context, - instance) - for vif in network_info: - if vif['vnic_type'] == network_model.VNIC_TYPE_DIRECT: - cfg = self.vif_driver.get_config(instance, - vif, - image_meta, - flavor, - CONF.libvirt.virt_type) - LOG.debug('Attaching SR-IOV port %(port)s to %(dom)s', - {'port': vif, 'dom': dom.ID()}) - dom.attachDevice(cfg.to_xml()) - - def _detach_sriov_ports(self, context, instance, dom): - network_info = instance.info_cache.network_info - if network_info is None: - return - - if self._has_sriov_port(network_info): - # for libvirt version < 1.1.1, this is race condition - # so forbid detach if it's an older version - if not self._host.has_min_version( - MIN_LIBVIRT_DEVICE_CALLBACK_VERSION): - reason = (_("Detaching SR-IOV ports with" - " libvirt < %(ver)s is not permitted") % - {'ver': MIN_LIBVIRT_DEVICE_CALLBACK_VERSION}) - raise 
exception.PciDeviceDetachFailed(reason=reason, - dev=network_info) - - flavor, image_meta = self._prepare_args_for_get_config(context, - instance) - for vif in network_info: - if vif['vnic_type'] == network_model.VNIC_TYPE_DIRECT: - cfg = self.vif_driver.get_config(instance, - vif, - image_meta, - flavor, - CONF.libvirt.virt_type) - dom.detachDeviceFlags(cfg.to_xml(), - libvirt.VIR_DOMAIN_AFFECT_LIVE) - - def _set_host_enabled(self, enabled, - disable_reason=DISABLE_REASON_UNDEFINED): - """Enables / Disables the compute service on this host. - - This doesn't override non-automatic disablement with an automatic - setting; thereby permitting operators to keep otherwise - healthy hosts out of rotation. - """ - - status_name = {True: 'disabled', - False: 'enabled'} - - disable_service = not enabled - - ctx = nova_context.get_admin_context() - try: - service = objects.Service.get_by_compute_host(ctx, CONF.host) - - if service.disabled != disable_service: - # Note(jang): this is a quick fix to stop operator- - # disabled compute hosts from re-enabling themselves - # automatically. We prefix any automatic reason code - # with a fixed string. We only re-enable a host - # automatically if we find that string in place. - # This should probably be replaced with a separate flag. - if not service.disabled or ( - service.disabled_reason and - service.disabled_reason.startswith(DISABLE_PREFIX)): - service.disabled = disable_service - service.disabled_reason = ( - DISABLE_PREFIX + disable_reason - if disable_service else DISABLE_REASON_UNDEFINED) - service.save() - LOG.debug('Updating compute service status to %s', - status_name[disable_service]) - else: - LOG.debug('Not overriding manual compute service ' - 'status with: %s', - status_name[disable_service]) - except exception.ComputeHostNotFound: - LOG.warn(_LW('Cannot update service status on host "%s" ' - 'since it is not registered.'), CONF.host) - except Exception: - LOG.warn(_LW('Cannot update service status on host "%s" ' - 'due to an unexpected exception.'), CONF.host, - exc_info=True) - - def _get_guest_cpu_model_config(self): - mode = CONF.libvirt.cpu_mode - model = CONF.libvirt.cpu_model - - if (CONF.libvirt.virt_type == "kvm" or - CONF.libvirt.virt_type == "qemu"): - if mode is None: - mode = "host-model" - if mode == "none": - return vconfig.LibvirtConfigGuestCPU() - else: - if mode is None or mode == "none": - return None - - if ((CONF.libvirt.virt_type != "kvm" and - CONF.libvirt.virt_type != "qemu")): - msg = _("Config requested an explicit CPU model, but " - "the current libvirt hypervisor '%s' does not " - "support selecting CPU models") % CONF.libvirt.virt_type - raise exception.Invalid(msg) - - if mode == "custom" and model is None: - msg = _("Config requested a custom CPU model, but no " - "model name was provided") - raise exception.Invalid(msg) - elif mode != "custom" and model is not None: - msg = _("A CPU model name should not be set when a " - "host CPU model is requested") - raise exception.Invalid(msg) - - LOG.debug("CPU mode '%(mode)s' model '%(model)s' was chosen", - {'mode': mode, 'model': (model or "")}) - - cpu = vconfig.LibvirtConfigGuestCPU() - cpu.mode = mode - cpu.model = model - - return cpu - - def _get_guest_cpu_config(self, flavor, image, - guest_cpu_numa_config, instance_numa_topology): - cpu = self._get_guest_cpu_model_config() - - if cpu is None: - return None - - topology = hardware.get_best_cpu_topology( - flavor, image, numa_topology=instance_numa_topology) - - cpu.sockets = topology.sockets - cpu.cores = 
topology.cores - cpu.threads = topology.threads - cpu.numa = guest_cpu_numa_config - - return cpu - - def _get_guest_disk_config(self, instance, name, disk_mapping, inst_type, - image_type=None): - if CONF.libvirt.hw_disk_discard: - if not self._host.has_min_version(MIN_LIBVIRT_DISCARD_VERSION, - MIN_QEMU_DISCARD_VERSION, - REQ_HYPERVISOR_DISCARD): - msg = (_('Volume sets discard option, but libvirt %(libvirt)s' - ' or later is required, qemu %(qemu)s' - ' or later is required.') % - {'libvirt': MIN_LIBVIRT_DISCARD_VERSION, - 'qemu': MIN_QEMU_DISCARD_VERSION}) - raise exception.Invalid(msg) - - image = self.image_backend.image(instance, - name, - image_type) - disk_info = disk_mapping[name] - return image.libvirt_info(disk_info['bus'], - disk_info['dev'], - disk_info['type'], - self.disk_cachemode, - inst_type['extra_specs'], - self._host.get_version()) - - def _get_guest_fs_config(self, instance, name, image_type=None): - image = self.image_backend.image(instance, - name, - image_type) - return image.libvirt_fs_info("/", "ploop") - - def _get_guest_storage_config(self, instance, image_meta, - disk_info, - rescue, block_device_info, - inst_type, os_type): - devices = [] - disk_mapping = disk_info['mapping'] - - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - mount_rootfs = CONF.libvirt.virt_type == "lxc" - if mount_rootfs: - fs = vconfig.LibvirtConfigGuestFilesys() - fs.source_type = "mount" - fs.source_dir = os.path.join( - libvirt_utils.get_instance_path(instance), 'rootfs') - devices.append(fs) - elif os_type == vm_mode.EXE and CONF.libvirt.virt_type == "parallels": - fs = self._get_guest_fs_config(instance, "disk") - devices.append(fs) - else: - - if rescue: - diskrescue = self._get_guest_disk_config(instance, - 'disk.rescue', - disk_mapping, - inst_type) - devices.append(diskrescue) - - diskos = self._get_guest_disk_config(instance, - 'disk', - disk_mapping, - inst_type) - devices.append(diskos) - else: - if 'disk' in disk_mapping: - diskos = self._get_guest_disk_config(instance, - 'disk', - disk_mapping, - inst_type) - devices.append(diskos) - - if 'disk.local' in disk_mapping: - disklocal = self._get_guest_disk_config(instance, - 'disk.local', - disk_mapping, - inst_type) - devices.append(disklocal) - instance.default_ephemeral_device = ( - block_device.prepend_dev(disklocal.target_dev)) - - for idx, eph in enumerate( - driver.block_device_info_get_ephemerals( - block_device_info)): - diskeph = self._get_guest_disk_config( - instance, - blockinfo.get_eph_disk(idx), - disk_mapping, inst_type) - devices.append(diskeph) - - if 'disk.swap' in disk_mapping: - diskswap = self._get_guest_disk_config(instance, - 'disk.swap', - disk_mapping, - inst_type) - devices.append(diskswap) - instance.default_swap_device = ( - block_device.prepend_dev(diskswap.target_dev)) - - if 'disk.config' in disk_mapping: - diskconfig = self._get_guest_disk_config(instance, - 'disk.config', - disk_mapping, - inst_type, - 'raw') - devices.append(diskconfig) - - for vol in block_device.get_bdms_to_connect(block_device_mapping, - mount_rootfs): - connection_info = vol['connection_info'] - vol_dev = block_device.prepend_dev(vol['mount_device']) - info = disk_mapping[vol_dev] - self._connect_volume(connection_info, info) - cfg = self._get_volume_config(connection_info, info) - devices.append(cfg) - vol['connection_info'] = connection_info - vol.save() - - for d in devices: - self._set_cache_mode(d) - - if (image_meta and - image_meta.get('properties', {}).get('hw_scsi_model')): - 
hw_scsi_model = image_meta['properties']['hw_scsi_model'] - scsi_controller = vconfig.LibvirtConfigGuestController() - scsi_controller.type = 'scsi' - scsi_controller.model = hw_scsi_model - devices.append(scsi_controller) - - return devices - - def _get_host_sysinfo_serial_hardware(self): - """Get a UUID from the host hardware - - Get a UUID for the host hardware reported by libvirt. - This is typically from the SMBIOS data, unless it has - been overridden in /etc/libvirt/libvirtd.conf - """ - caps = self._host.get_capabilities() - return caps.host.uuid - - def _get_host_sysinfo_serial_os(self): - """Get a UUID from the host operating system - - Get a UUID for the host operating system. Modern Linux - distros based on systemd provide a /etc/machine-id - file containing a UUID. This is also provided inside - systemd based containers and can be provided by other - init systems too, since it is just a plain text file. - """ - with open("/etc/machine-id") as f: - # We want to have '-' in the right place - # so we parse & reformat the value - return str(uuid.UUID(f.read().split()[0])) - - def _get_host_sysinfo_serial_auto(self): - if os.path.exists("/etc/machine-id"): - return self._get_host_sysinfo_serial_os() - else: - return self._get_host_sysinfo_serial_hardware() - - def _get_guest_config_sysinfo(self, instance): - sysinfo = vconfig.LibvirtConfigGuestSysinfo() - - sysinfo.system_manufacturer = version.vendor_string() - sysinfo.system_product = version.product_string() - sysinfo.system_version = version.version_string_with_package() - - sysinfo.system_serial = self._sysinfo_serial_func() - sysinfo.system_uuid = instance.uuid - - return sysinfo - - def _get_guest_pci_device(self, pci_device): - - dbsf = pci_utils.parse_address(pci_device['address']) - dev = vconfig.LibvirtConfigGuestHostdevPCI() - dev.domain, dev.bus, dev.slot, dev.function = dbsf - - # only kvm support managed mode - if CONF.libvirt.virt_type in ('xen', 'parallels',): - dev.managed = 'no' - if CONF.libvirt.virt_type in ('kvm', 'qemu'): - dev.managed = 'yes' - - return dev - - def _get_guest_config_meta(self, context, instance): - """Get metadata config for guest.""" - - meta = vconfig.LibvirtConfigGuestMetaNovaInstance() - meta.package = version.version_string_with_package() - meta.name = instance.display_name - meta.creationTime = time.time() - - if instance.image_ref not in ("", None): - meta.roottype = "image" - meta.rootid = instance.image_ref - - if context is not None: - ometa = vconfig.LibvirtConfigGuestMetaNovaOwner() - ometa.userid = context.user_id - ometa.username = context.user_name - ometa.projectid = context.project_id - ometa.projectname = context.project_name - meta.owner = ometa - - fmeta = vconfig.LibvirtConfigGuestMetaNovaFlavor() - flavor = instance.flavor - fmeta.name = flavor.name - fmeta.memory = flavor.memory_mb - fmeta.vcpus = flavor.vcpus - fmeta.ephemeral = flavor.ephemeral_gb - fmeta.disk = flavor.root_gb - fmeta.swap = flavor.swap - - meta.flavor = fmeta - - return meta - - def _machine_type_mappings(self): - mappings = {} - for mapping in CONF.libvirt.hw_machine_type: - host_arch, _, machine_type = mapping.partition('=') - mappings[host_arch] = machine_type - return mappings - - def _get_machine_type(self, image_meta, caps): - # The underlying machine type can be set as an image attribute, - # or otherwise based on some architecture specific defaults - - mach_type = None - - if (image_meta is not None and image_meta.get('properties') and - image_meta['properties'].get('hw_machine_type') - is 
not None): - mach_type = image_meta['properties']['hw_machine_type'] - else: - # For ARM systems we will default to vexpress-a15 for armv7 - # and virt for aarch64 - if caps.host.cpu.arch == arch.ARMV7: - mach_type = "vexpress-a15" - - if caps.host.cpu.arch == arch.AARCH64: - mach_type = "virt" - - if caps.host.cpu.arch in (arch.S390, arch.S390X): - mach_type = 's390-ccw-virtio' - - # If set in the config, use that as the default. - if CONF.libvirt.hw_machine_type: - mappings = self._machine_type_mappings() - mach_type = mappings.get(caps.host.cpu.arch) - - return mach_type - - @staticmethod - def _create_idmaps(klass, map_strings): - idmaps = [] - if len(map_strings) > 5: - map_strings = map_strings[0:5] - LOG.warn(_LW("Too many id maps, only included first five.")) - for map_string in map_strings: - try: - idmap = klass() - values = [int(i) for i in map_string.split(":")] - idmap.start = values[0] - idmap.target = values[1] - idmap.count = values[2] - idmaps.append(idmap) - except (ValueError, IndexError): - LOG.warn(_LW("Invalid value for id mapping %s"), map_string) - return idmaps - - def _get_guest_idmaps(self): - id_maps = [] - if CONF.libvirt.virt_type == 'lxc' and CONF.libvirt.uid_maps: - uid_maps = self._create_idmaps(vconfig.LibvirtConfigGuestUIDMap, - CONF.libvirt.uid_maps) - id_maps.extend(uid_maps) - if CONF.libvirt.virt_type == 'lxc' and CONF.libvirt.gid_maps: - gid_maps = self._create_idmaps(vconfig.LibvirtConfigGuestGIDMap, - CONF.libvirt.gid_maps) - id_maps.extend(gid_maps) - return id_maps - - def _update_guest_cputune(self, guest, flavor, virt_type): - if virt_type in ('lxc', 'kvm', 'qemu'): - if guest.cputune is None: - guest.cputune = vconfig.LibvirtConfigGuestCPUTune() - # Setting the default cpu.shares value to be a value - # dependent on the number of vcpus - guest.cputune.shares = 1024 * guest.vcpus - - cputuning = ['shares', 'period', 'quota'] - for name in cputuning: - key = "quota:cpu_" + name - if key in flavor.extra_specs: - setattr(guest.cputune, name, - int(flavor.extra_specs[key])) - - def _get_cpu_numa_config_from_instance(self, instance_numa_topology): - if instance_numa_topology: - guest_cpu_numa = vconfig.LibvirtConfigGuestCPUNUMA() - for instance_cell in instance_numa_topology.cells: - guest_cell = vconfig.LibvirtConfigGuestCPUNUMACell() - guest_cell.id = instance_cell.id - guest_cell.cpus = instance_cell.cpuset - guest_cell.memory = instance_cell.memory * units.Ki - guest_cpu_numa.cells.append(guest_cell) - return guest_cpu_numa - - def _has_cpu_policy_support(self): - for ver in BAD_LIBVIRT_CPU_POLICY_VERSIONS: - if self._host.has_version(ver): - ver_ = self._version_to_string(ver) - raise exception.CPUPinningNotSupported(reason=_( - 'Invalid libvirt version %(version)s') % {'version': ver_}) - return True - - def _get_guest_numa_config(self, instance_numa_topology, flavor, pci_devs, - allowed_cpus=None): - """Returns the config objects for the guest NUMA specs. - - Determines the CPUs that the guest can be pinned to if the guest - specifies a cell topology and the host supports it. Constructs the - libvirt XML config object representing the NUMA topology selected - for the guest. Returns a tuple of: - - (cpu_set, guest_cpu_tune, guest_cpu_numa, guest_numa_tune) - - With the following caveats: - - a) If there is no specified guest NUMA topology, then - all tuple elements except cpu_set shall be None. 
cpu_set - will be populated with the chosen CPUs that the guest - allowed CPUs fit within, which could be the supplied - allowed_cpus value if the host doesn't support NUMA - topologies. - - b) If there is a specified guest NUMA topology, then - cpu_set will be None and guest_cpu_numa will be the - LibvirtConfigGuestCPUNUMA object representing the guest's - NUMA topology. If the host supports NUMA, then guest_cpu_tune - will contain a LibvirtConfigGuestCPUTune object representing - the optimized chosen cells that match the host capabilities - with the instance's requested topology. If the host does - not support NUMA, then guest_cpu_tune and guest_numa_tune - will be None. - """ - - if (not self._has_numa_support() and - instance_numa_topology is not None): - # We should not get here, since we should have avoided - # reporting NUMA topology from _get_host_numa_topology - # in the first place. Just in case of a scheduler - # mess up though, raise an exception - raise exception.NUMATopologyUnsupported() - - topology = self._get_host_numa_topology() - # We have instance NUMA so translate it to the config class - guest_cpu_numa_config = self._get_cpu_numa_config_from_instance( - instance_numa_topology) - - if not guest_cpu_numa_config: - # No NUMA topology defined for instance - let the host kernel deal - # with the NUMA effects. - # TODO(ndipanov): Attempt to spread the instance - # across NUMA nodes and expose the topology to the - # instance as an optimisation - return GuestNumaConfig(allowed_cpus, None, None, None) - else: - if topology: - # Now get the CpuTune configuration from the numa_topology - guest_cpu_tune = vconfig.LibvirtConfigGuestCPUTune() - guest_numa_tune = vconfig.LibvirtConfigGuestNUMATune() - allpcpus = [] - - numa_mem = vconfig.LibvirtConfigGuestNUMATuneMemory() - numa_memnodes = [vconfig.LibvirtConfigGuestNUMATuneMemNode() - for _ in guest_cpu_numa_config.cells] - - for host_cell in topology.cells: - for guest_node_id, guest_config_cell in enumerate( - guest_cpu_numa_config.cells): - if guest_config_cell.id == host_cell.id: - node = numa_memnodes[guest_node_id] - node.cellid = guest_config_cell.id - node.nodeset = [host_cell.id] - node.mode = "strict" - - numa_mem.nodeset.append(host_cell.id) - - object_numa_cell = ( - instance_numa_topology.cells[guest_node_id] - ) - for cpu in guest_config_cell.cpus: - pin_cpuset = ( - vconfig.LibvirtConfigGuestCPUTuneVCPUPin()) - pin_cpuset.id = cpu - # If there is pinning information in the cell - # we pin to individual CPUs, otherwise we float - # over the whole host NUMA node - - if (object_numa_cell.cpu_pinning and - self._has_cpu_policy_support()): - pcpu = object_numa_cell.cpu_pinning[cpu] - pin_cpuset.cpuset = set([pcpu]) - else: - pin_cpuset.cpuset = host_cell.cpuset - allpcpus.extend(pin_cpuset.cpuset) - guest_cpu_tune.vcpupin.append(pin_cpuset) - - # TODO(berrange) When the guest has >1 NUMA node, it will - # span multiple host NUMA nodes. By pinning emulator threads - # to the union of all nodes, we guarantee there will be - # cross-node memory access by the emulator threads when - # responding to guest I/O operations. The only way to avoid - # this would be to pin emulator threads to a single node and - # tell the guest OS to only do I/O from one of its virtual - # NUMA nodes. This is not even remotely practical. - # - # The long term solution is to make use of a new QEMU feature - # called "I/O Threads" which will let us configure an explicit - # I/O thread for each guest vCPU or guest NUMA node. 
It is - # still TBD how to make use of this feature though, especially - # how to associate IO threads with guest devices to eliminiate - # cross NUMA node traffic. This is an area of investigation - # for QEMU community devs. - emulatorpin = vconfig.LibvirtConfigGuestCPUTuneEmulatorPin() - emulatorpin.cpuset = set(allpcpus) - guest_cpu_tune.emulatorpin = emulatorpin - # Sort the vcpupin list per vCPU id for human-friendlier XML - guest_cpu_tune.vcpupin.sort(key=operator.attrgetter("id")) - - guest_numa_tune.memory = numa_mem - guest_numa_tune.memnodes = numa_memnodes - - # normalize cell.id - for i, (cell, memnode) in enumerate( - zip(guest_cpu_numa_config.cells, - guest_numa_tune.memnodes)): - cell.id = i - memnode.cellid = i - - return GuestNumaConfig(None, guest_cpu_tune, - guest_cpu_numa_config, - guest_numa_tune) - else: - return GuestNumaConfig(allowed_cpus, None, - guest_cpu_numa_config, None) - - def _get_guest_os_type(self, virt_type): - """Returns the guest OS type based on virt type.""" - if virt_type == "lxc": - ret = vm_mode.EXE - elif virt_type == "uml": - ret = vm_mode.UML - elif virt_type == "xen": - ret = vm_mode.XEN - else: - ret = vm_mode.HVM - return ret - - def _set_guest_for_rescue(self, rescue, guest, inst_path, virt_type, - root_device_name): - if rescue.get('kernel_id'): - guest.os_kernel = os.path.join(inst_path, "kernel.rescue") - if virt_type == "xen": - guest.os_cmdline = "ro root=%s" % root_device_name - else: - guest.os_cmdline = ("root=%s %s" % (root_device_name, CONSOLE)) - if virt_type == "qemu": - guest.os_cmdline += " no_timer_check" - if rescue.get('ramdisk_id'): - guest.os_initrd = os.path.join(inst_path, "ramdisk.rescue") - - def _set_guest_for_inst_kernel(self, instance, guest, inst_path, virt_type, - root_device_name, image_meta): - guest.os_kernel = os.path.join(inst_path, "kernel") - if virt_type == "xen": - guest.os_cmdline = "ro root=%s" % root_device_name - else: - guest.os_cmdline = ("root=%s %s" % (root_device_name, CONSOLE)) - if virt_type == "qemu": - guest.os_cmdline += " no_timer_check" - if instance.ramdisk_id: - guest.os_initrd = os.path.join(inst_path, "ramdisk") - # we only support os_command_line with images with an explicit - # kernel set and don't want to break nova if there's an - # os_command_line property without a specified kernel_id param - if image_meta: - img_props = image_meta.get('properties', {}) - if img_props.get('os_command_line'): - guest.os_cmdline = img_props.get('os_command_line') - - def _set_clock(self, guest, os_type, image_meta, virt_type): - # NOTE(mikal): Microsoft Windows expects the clock to be in - # "localtime". 
If the clock is set to UTC, then you can use a - # registry key to let windows know, but Microsoft says this is - # buggy in http://support.microsoft.com/kb/2687252 - clk = vconfig.LibvirtConfigGuestClock() - if os_type == 'windows': - LOG.info(_LI('Configuring timezone for windows instance to ' - 'localtime')) - clk.offset = 'localtime' - else: - clk.offset = 'utc' - guest.set_clock(clk) - - if virt_type == "kvm": - self._set_kvm_timers(clk, os_type, image_meta) - - def _set_kvm_timers(self, clk, os_type, image_meta): - # TODO(berrange) One day this should be per-guest - # OS type configurable - tmpit = vconfig.LibvirtConfigGuestTimer() - tmpit.name = "pit" - tmpit.tickpolicy = "delay" - - tmrtc = vconfig.LibvirtConfigGuestTimer() - tmrtc.name = "rtc" - tmrtc.tickpolicy = "catchup" - - clk.add_timer(tmpit) - clk.add_timer(tmrtc) - - guestarch = libvirt_utils.get_arch(image_meta) - if guestarch in (arch.I686, arch.X86_64): - # NOTE(rfolco): HPET is a hardware timer for x86 arch. - # qemu -no-hpet is not supported on non-x86 targets. - tmhpet = vconfig.LibvirtConfigGuestTimer() - tmhpet.name = "hpet" - tmhpet.present = False - clk.add_timer(tmhpet) - - # With new enough QEMU we can provide Windows guests - # with the paravirtualized hyperv timer source. This - # is the windows equiv of kvm-clock, allowing Windows - # guests to accurately keep time. - if (os_type == 'windows' and - self._host.has_min_version(MIN_LIBVIRT_HYPERV_TIMER_VERSION, - MIN_QEMU_HYPERV_TIMER_VERSION)): - tmhyperv = vconfig.LibvirtConfigGuestTimer() - tmhyperv.name = "hypervclock" - tmhyperv.present = True - clk.add_timer(tmhyperv) - - def _set_features(self, guest, os_type, caps, virt_type): - if virt_type == "xen": - # PAE only makes sense in X86 - if caps.host.cpu.arch in (arch.I686, arch.X86_64): - guest.features.append(vconfig.LibvirtConfigGuestFeaturePAE()) - - if (virt_type not in ("lxc", "uml", "parallels", "xen") or - (virt_type == "xen" and guest.os_type == vm_mode.HVM)): - guest.features.append(vconfig.LibvirtConfigGuestFeatureACPI()) - guest.features.append(vconfig.LibvirtConfigGuestFeatureAPIC()) - - if (virt_type in ("qemu", "kvm") and - os_type == 'windows' and - self._host.has_min_version(MIN_LIBVIRT_HYPERV_FEATURE_VERSION, - MIN_QEMU_HYPERV_FEATURE_VERSION)): - hv = vconfig.LibvirtConfigGuestFeatureHyperV() - hv.relaxed = True - - if self._host.has_min_version( - MIN_LIBVIRT_HYPERV_FEATURE_EXTRA_VERSION): - hv.spinlocks = True - # Increase spinlock retries - value recommended by - # KVM maintainers who certify Windows guests - # with Microsoft - hv.spinlock_retries = 8191 - hv.vapic = True - guest.features.append(hv) - - def _create_serial_console_devices(self, guest, instance, flavor, - image_meta): - guest_arch = libvirt_utils.get_arch(image_meta) - - if CONF.serial_console.enabled: - num_ports = hardware.get_number_of_serial_ports( - flavor, image_meta) - for port in six.moves.range(num_ports): - if guest_arch in (arch.S390, arch.S390X): - console = vconfig.LibvirtConfigGuestConsole() - else: - console = vconfig.LibvirtConfigGuestSerial() - console.port = port - console.type = "tcp" - console.listen_host = ( - CONF.serial_console.proxyclient_address) - console.listen_port = ( - serial_console.acquire_port( - console.listen_host)) - guest.add_device(console) - else: - # The QEMU 'pty' driver throws away any data if no - # client app is connected. Thus we can't get away - # with a single type=pty console. Instead we have - # to configure two separate consoles. 
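# (Illustrative aside, not original code: the file-backed log device added
#  just below, together with the pty console that _get_guest_config adds
#  later, come out in the domain XML as roughly
#      <serial type='file'><source path='.../console.log'/></serial>
#      <serial type='pty'/>
#  i.e. one device that keeps capturing the boot log and one for interactive
#  access; s390/s390x guests get <console> elements with sclp targets
#  instead.)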
- if guest_arch in (arch.S390, arch.S390X): - consolelog = vconfig.LibvirtConfigGuestConsole() - consolelog.target_type = "sclplm" - else: - consolelog = vconfig.LibvirtConfigGuestSerial() - consolelog.type = "file" - consolelog.source_path = self._get_console_log_path(instance) - guest.add_device(consolelog) - - def _add_video_driver(self, guest, image_meta, img_meta_prop, flavor): - VALID_VIDEO_DEVICES = ("vga", "cirrus", "vmvga", "xen", "qxl") - video = vconfig.LibvirtConfigGuestVideo() - # NOTE(ldbragst): The following logic sets the video.type - # depending on supported defaults given the architecture, - # virtualization type, and features. The video.type attribute can - # be overridden by the user with image_meta['properties'], which - # is carried out in the next if statement below this one. - guestarch = libvirt_utils.get_arch(image_meta) - if guest.os_type == vm_mode.XEN: - video.type = 'xen' - elif CONF.libvirt.virt_type == 'parallels': - video.type = 'vga' - elif guestarch in (arch.PPC, arch.PPC64): - # NOTE(ldbragst): PowerKVM doesn't support 'cirrus' be default - # so use 'vga' instead when running on Power hardware. - video.type = 'vga' - elif CONF.spice.enabled: - video.type = 'qxl' - if img_meta_prop.get('hw_video_model'): - video.type = img_meta_prop.get('hw_video_model') - if (video.type not in VALID_VIDEO_DEVICES): - raise exception.InvalidVideoMode(model=video.type) - - # Set video memory, only if the flavor's limit is set - video_ram = int(img_meta_prop.get('hw_video_ram', 0)) - max_vram = int(flavor.extra_specs.get('hw_video:ram_max_mb', 0)) - if video_ram > max_vram: - raise exception.RequestedVRamTooHigh(req_vram=video_ram, - max_vram=max_vram) - if max_vram and video_ram: - video.vram = video_ram * units.Mi / units.Ki - guest.add_device(video) - - def _add_qga_device(self, guest, instance): - qga = vconfig.LibvirtConfigGuestChannel() - qga.type = "unix" - qga.target_name = "org.qemu.guest_agent.0" - qga.source_path = ("/var/lib/libvirt/qemu/%s.%s.sock" % - ("org.qemu.guest_agent.0", instance.name)) - guest.add_device(qga) - - def _add_rng_device(self, guest, flavor): - rng_device = vconfig.LibvirtConfigGuestRng() - rate_bytes = flavor.extra_specs.get('hw_rng:rate_bytes', 0) - period = flavor.extra_specs.get('hw_rng:rate_period', 0) - if rate_bytes: - rng_device.rate_bytes = int(rate_bytes) - rng_device.rate_period = int(period) - rng_path = CONF.libvirt.rng_dev_path - if (rng_path and not os.path.exists(rng_path)): - raise exception.RngDeviceNotExist(path=rng_path) - rng_device.backend = rng_path - guest.add_device(rng_device) - - def _set_qemu_guest_agent(self, guest, flavor, instance, img_meta_prop): - qga_enabled = False - # Enable qga only if the 'hw_qemu_guest_agent' is equal to yes - hw_qga = img_meta_prop.get('hw_qemu_guest_agent', '') - if strutils.bool_from_string(hw_qga): - LOG.debug("Qemu guest agent is enabled through image " - "metadata", instance=instance) - qga_enabled = True - if qga_enabled: - self._add_qga_device(guest, instance) - rng_is_virtio = img_meta_prop.get('hw_rng_model') == 'virtio' - rng_allowed_str = flavor.extra_specs.get('hw_rng:allowed', '') - rng_allowed = strutils.bool_from_string(rng_allowed_str) - if rng_is_virtio and rng_allowed: - self._add_rng_device(guest, flavor) - - def _get_guest_memory_backing_config(self, inst_topology, numatune): - wantsmempages = False - if inst_topology: - for cell in inst_topology.cells: - if cell.pagesize: - wantsmempages = True - - if not wantsmempages: - return - - if not 
self._has_hugepage_support(): - # We should not get here, since we should have avoided - # reporting NUMA topology from _get_host_numa_topology - # in the first place. Just in case of a scheduler - # mess up though, raise an exception - raise exception.MemoryPagesUnsupported() - - host_topology = self._get_host_numa_topology() - - if host_topology is None: - # As above, we should not get here but just in case... - raise exception.MemoryPagesUnsupported() - - # Currently libvirt does not support the smallest - # pagesize set as a backend memory. - # https://bugzilla.redhat.com/show_bug.cgi?id=1173507 - avail_pagesize = [page.size_kb - for page in host_topology.cells[0].mempages] - avail_pagesize.sort() - smallest = avail_pagesize[0] - - pages = [] - for guest_cellid, inst_cell in enumerate(inst_topology.cells): - if inst_cell.pagesize and inst_cell.pagesize > smallest: - for memnode in numatune.memnodes: - if guest_cellid == memnode.cellid: - page = ( - vconfig.LibvirtConfigGuestMemoryBackingPage()) - page.nodeset = [guest_cellid] - page.size_kb = inst_cell.pagesize - pages.append(page) - break # Quit early... - - if pages: - membacking = vconfig.LibvirtConfigGuestMemoryBacking() - membacking.hugepages = pages - return membacking - - def _get_flavor(self, ctxt, instance, flavor): - if flavor is not None: - return flavor - return instance.flavor - - def _configure_guest_by_virt_type(self, guest, virt_type, caps, instance, - image_meta, flavor, root_device_name): - if virt_type == "xen": - if guest.os_type == vm_mode.HVM: - guest.os_loader = CONF.libvirt.xen_hvmloader_path - elif virt_type in ("kvm", "qemu"): - if caps.host.cpu.arch in (arch.I686, arch.X86_64): - guest.sysinfo = self._get_guest_config_sysinfo(instance) - guest.os_smbios = vconfig.LibvirtConfigGuestSMBIOS() - guest.os_mach_type = self._get_machine_type(image_meta, caps) - guest.os_bootmenu = strutils.bool_from_string( - flavor.extra_specs.get( - 'hw:boot_menu', image_meta.get('properties', {}).get( - 'hw_boot_menu', 'no'))) - elif virt_type == "lxc": - guest.os_init_path = "/sbin/init" - guest.os_cmdline = CONSOLE - elif virt_type == "uml": - guest.os_kernel = "/usr/bin/linux" - guest.os_root = root_device_name - elif virt_type == "parallels": - if guest.os_type == vm_mode.EXE: - guest.os_init_path = "/sbin/init" - - def _conf_non_lxc_uml(self, virt_type, guest, root_device_name, rescue, - instance, inst_path, image_meta, disk_info): - if rescue: - self._set_guest_for_rescue(rescue, guest, inst_path, virt_type, - root_device_name) - elif instance.kernel_id: - self._set_guest_for_inst_kernel(instance, guest, inst_path, - virt_type, root_device_name, - image_meta) - else: - guest.os_boot_dev = blockinfo.get_boot_order(disk_info) - - def _create_consoles(self, virt_type, guest, instance, flavor, image_meta, - caps): - if virt_type in ("qemu", "kvm"): - # Create the serial console char devices - self._create_serial_console_devices(guest, instance, flavor, - image_meta) - if caps.host.cpu.arch in (arch.S390, arch.S390X): - consolepty = vconfig.LibvirtConfigGuestConsole() - consolepty.target_type = "sclp" - else: - consolepty = vconfig.LibvirtConfigGuestSerial() - else: - consolepty = vconfig.LibvirtConfigGuestConsole() - return consolepty - - def _cpu_config_to_vcpu_model(self, cpu_config, vcpu_model): - """Update VirtCPUModel object according to libvirt CPU config. - - :param:cpu_config: vconfig.LibvirtConfigGuestCPU presenting the - instance's virtual cpu configuration. - :param:vcpu_model: VirtCPUModel object. 
A new object will be created - if None. - - :return: Updated VirtCPUModel object, or None if cpu_config is None - - """ - - if not cpu_config: - return - if not vcpu_model: - vcpu_model = objects.VirtCPUModel() - - vcpu_model.arch = cpu_config.arch - vcpu_model.vendor = cpu_config.vendor - vcpu_model.model = cpu_config.model - vcpu_model.mode = cpu_config.mode - vcpu_model.match = cpu_config.match - - if cpu_config.sockets: - vcpu_model.topology = objects.VirtCPUTopology( - sockets=cpu_config.sockets, - cores=cpu_config.cores, - threads=cpu_config.threads) - else: - vcpu_model.topology = None - - features = [objects.VirtCPUFeature( - name=f.name, - policy=f.policy) for f in cpu_config.features] - vcpu_model.features = features - - return vcpu_model - - def _vcpu_model_to_cpu_config(self, vcpu_model): - """Create libvirt CPU config according to VirtCPUModel object. - - :param:vcpu_model: VirtCPUModel object. - - :return: vconfig.LibvirtConfigGuestCPU. - - """ - - cpu_config = vconfig.LibvirtConfigGuestCPU() - cpu_config.arch = vcpu_model.arch - cpu_config.model = vcpu_model.model - cpu_config.mode = vcpu_model.mode - cpu_config.match = vcpu_model.match - cpu_config.vendor = vcpu_model.vendor - if vcpu_model.topology: - cpu_config.sockets = vcpu_model.topology.sockets - cpu_config.cores = vcpu_model.topology.cores - cpu_config.threads = vcpu_model.topology.threads - if vcpu_model.features: - for f in vcpu_model.features: - xf = vconfig.LibvirtConfigGuestCPUFeature() - xf.name = f.name - xf.policy = f.policy - cpu_config.features.add(xf) - return cpu_config - - def _get_guest_config(self, instance, network_info, image_meta, - disk_info, rescue=None, block_device_info=None, - context=None): - """Get config data for parameters. - - :param rescue: optional dictionary that should contain the key - 'ramdisk_id' if a ramdisk is needed for the rescue image and - 'kernel_id' if a kernel is needed for the rescue image. - """ - flavor = instance.flavor - inst_path = libvirt_utils.get_instance_path(instance) - disk_mapping = disk_info['mapping'] - img_meta_prop = image_meta.get('properties', {}) if image_meta else {} - - virt_type = CONF.libvirt.virt_type - guest = vconfig.LibvirtConfigGuest() - guest.virt_type = virt_type - guest.name = instance.name - guest.uuid = instance.uuid - # We are using default unit for memory: KiB - guest.memory = flavor.memory_mb * units.Ki - guest.vcpus = flavor.vcpus - allowed_cpus = hardware.get_vcpu_pin_set() - pci_devs = pci_manager.get_instance_pci_devs(instance, 'all') - - guest_numa_config = self._get_guest_numa_config( - instance.numa_topology, flavor, pci_devs, allowed_cpus) - - guest.cpuset = guest_numa_config.cpuset - guest.cputune = guest_numa_config.cputune - guest.numatune = guest_numa_config.numatune - - guest.membacking = self._get_guest_memory_backing_config( - instance.numa_topology, guest_numa_config.numatune) - - guest.metadata.append(self._get_guest_config_meta(context, - instance)) - guest.idmaps = self._get_guest_idmaps() - - self._update_guest_cputune(guest, flavor, virt_type) - - guest.cpu = self._get_guest_cpu_config( - flavor, image_meta, guest_numa_config.numaconfig, - instance.numa_topology) - - # Notes(yjiang5): we always sync the instance's vcpu model with - # the corresponding config file. 
- instance.vcpu_model = self._cpu_config_to_vcpu_model( - guest.cpu, instance.vcpu_model) - - if 'root' in disk_mapping: - root_device_name = block_device.prepend_dev( - disk_mapping['root']['dev']) - else: - root_device_name = None - - if root_device_name: - # NOTE(yamahata): - # for nova.api.ec2.cloud.CloudController.get_metadata() - instance.root_device_name = root_device_name - - guest.os_type = (vm_mode.get_from_instance(instance) or - self._get_guest_os_type(virt_type)) - caps = self._host.get_capabilities() - - self._configure_guest_by_virt_type(guest, virt_type, caps, instance, - image_meta, flavor, - root_device_name) - if virt_type not in ('lxc', 'uml'): - self._conf_non_lxc_uml(virt_type, guest, root_device_name, rescue, - instance, inst_path, image_meta, disk_info) - - self._set_features(guest, instance.os_type, caps, virt_type) - self._set_clock(guest, instance.os_type, image_meta, virt_type) - - storage_configs = self._get_guest_storage_config( - instance, image_meta, disk_info, rescue, block_device_info, - flavor, guest.os_type) - for config in storage_configs: - guest.add_device(config) - - for vif in network_info: - config = self.vif_driver.get_config( - instance, vif, image_meta, - flavor, virt_type) - guest.add_device(config) - - consolepty = self._create_consoles(virt_type, guest, instance, flavor, - image_meta, caps) - if virt_type != 'parallels': - consolepty.type = "pty" - guest.add_device(consolepty) - - # We want a tablet if VNC is enabled, or SPICE is enabled and - # the SPICE agent is disabled. If the SPICE agent is enabled - # it provides a paravirt mouse which drastically reduces - # overhead (by eliminating USB polling). - # - # NB: this implies that if both SPICE + VNC are enabled - # at the same time, we'll get the tablet whether the - # SPICE agent is used or not. - need_usb_tablet = False - if CONF.vnc_enabled: - need_usb_tablet = CONF.libvirt.use_usb_tablet - elif CONF.spice.enabled and not CONF.spice.agent_enabled: - need_usb_tablet = CONF.libvirt.use_usb_tablet - - if need_usb_tablet and guest.os_type == vm_mode.HVM: - tablet = vconfig.LibvirtConfigGuestInput() - tablet.type = "tablet" - tablet.bus = "usb" - guest.add_device(tablet) - - if (CONF.spice.enabled and CONF.spice.agent_enabled and - virt_type not in ('lxc', 'uml', 'xen')): - channel = vconfig.LibvirtConfigGuestChannel() - channel.target_name = "com.redhat.spice.0" - guest.add_device(channel) - - # NB some versions of libvirt support both SPICE and VNC - # at the same time. We're not trying to second guess which - # those versions are. We'll just let libvirt report the - # errors appropriately if the user enables both. 
- add_video_driver = False - if ((CONF.vnc_enabled and - virt_type not in ('lxc', 'uml'))): - graphics = vconfig.LibvirtConfigGuestGraphics() - graphics.type = "vnc" - graphics.keymap = CONF.vnc_keymap - graphics.listen = CONF.vncserver_listen - guest.add_device(graphics) - add_video_driver = True - - if (CONF.spice.enabled and - virt_type not in ('lxc', 'uml', 'xen')): - graphics = vconfig.LibvirtConfigGuestGraphics() - graphics.type = "spice" - graphics.keymap = CONF.spice.keymap - graphics.listen = CONF.spice.server_listen - guest.add_device(graphics) - add_video_driver = True - - if add_video_driver: - self._add_video_driver(guest, image_meta, img_meta_prop, flavor) - - # Qemu guest agent only support 'qemu' and 'kvm' hypervisor - if virt_type in ('qemu', 'kvm'): - self._set_qemu_guest_agent(guest, flavor, instance, img_meta_prop) - - if virt_type in ('xen', 'qemu', 'kvm'): - for pci_dev in pci_manager.get_instance_pci_devs(instance): - guest.add_device(self._get_guest_pci_device(pci_dev)) - else: - if len(pci_devs) > 0: - raise exception.PciDeviceUnsupportedHypervisor( - type=virt_type) - - if 'hw_watchdog_action' in flavor.extra_specs: - LOG.warn(_LW('Old property name "hw_watchdog_action" is now ' - 'deprecated and will be removed in the next release. ' - 'Use updated property name ' - '"hw:watchdog_action" instead')) - # TODO(pkholkin): accepting old property name 'hw_watchdog_action' - # should be removed in the next release - watchdog_action = (flavor.extra_specs.get('hw_watchdog_action') or - flavor.extra_specs.get('hw:watchdog_action') - or 'disabled') - if (image_meta is not None and - image_meta.get('properties', {}).get('hw_watchdog_action')): - watchdog_action = image_meta['properties']['hw_watchdog_action'] - - # NB(sross): currently only actually supported by KVM/QEmu - if watchdog_action != 'disabled': - if watchdog_actions.is_valid_watchdog_action(watchdog_action): - bark = vconfig.LibvirtConfigGuestWatchdog() - bark.action = watchdog_action - guest.add_device(bark) - else: - raise exception.InvalidWatchdogAction(action=watchdog_action) - - # Memory balloon device only support 'qemu/kvm' and 'xen' hypervisor - if (virt_type in ('xen', 'qemu', 'kvm') and - CONF.libvirt.mem_stats_period_seconds > 0): - balloon = vconfig.LibvirtConfigMemoryBalloon() - if virt_type in ('qemu', 'kvm'): - balloon.model = 'virtio' - else: - balloon.model = 'xen' - balloon.period = CONF.libvirt.mem_stats_period_seconds - guest.add_device(balloon) - - return guest - - def _get_guest_xml(self, context, instance, network_info, disk_info, - image_meta, rescue=None, - block_device_info=None, write_to_disk=False): - # NOTE(danms): Stringifying a NetworkInfo will take a lock. Do - # this ahead of time so that we don't acquire it while also - # holding the logging lock. - network_info_str = str(network_info) - msg = ('Start _get_guest_xml ' - 'network_info=%(network_info)s ' - 'disk_info=%(disk_info)s ' - 'image_meta=%(image_meta)s rescue=%(rescue)s ' - 'block_device_info=%(block_device_info)s' % - {'network_info': network_info_str, 'disk_info': disk_info, - 'image_meta': image_meta, 'rescue': rescue, - 'block_device_info': block_device_info}) - # NOTE(mriedem): block_device_info can contain auth_password so we - # need to sanitize the password in the message. 
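The watchdog handling above resolves the action from the deprecated `hw_watchdog_action` flavor key, the newer `hw:watchdog_action` key, and an image property that overrides both, falling back to 'disabled'. A standalone sketch of that precedence; the helper name and the set of valid actions here are illustrative, not the driver's own:

VALID_WATCHDOG_ACTIONS = ('reset', 'pause', 'poweroff', 'none', 'disabled')


def resolve_watchdog_action(flavor_extra_specs, image_properties):
    """Resolve the effective watchdog action.

    Precedence, as in the block above: old flavor key, then new flavor
    key, then 'disabled'; an image property, if present, wins over all.
    """
    action = (flavor_extra_specs.get('hw_watchdog_action') or
              flavor_extra_specs.get('hw:watchdog_action') or
              'disabled')
    if image_properties.get('hw_watchdog_action'):
        action = image_properties['hw_watchdog_action']
    if action not in VALID_WATCHDOG_ACTIONS:
        raise ValueError('invalid watchdog action: %s' % action)
    return action


# The image property overrides both flavor keys.
assert resolve_watchdog_action({'hw:watchdog_action': 'pause'},
                               {'hw_watchdog_action': 'reset'}) == 'reset'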
- LOG.debug(strutils.mask_password(msg), instance=instance) - conf = self._get_guest_config(instance, network_info, image_meta, - disk_info, rescue, block_device_info, - context) - xml = conf.to_xml() - - if write_to_disk: - instance_dir = libvirt_utils.get_instance_path(instance) - xml_path = os.path.join(instance_dir, 'libvirt.xml') - libvirt_utils.write_to_file(xml_path, xml) - - LOG.debug('End _get_guest_xml xml=%(xml)s', - {'xml': xml}, instance=instance) - return xml - - def get_info(self, instance): - """Retrieve information from libvirt for a specific instance name. - - If a libvirt error is encountered during lookup, we might raise a - NotFound exception or Error exception depending on how severe the - libvirt error is. - - """ - virt_dom = self._host.get_domain(instance) - try: - dom_info = self._host.get_domain_info(virt_dom) - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - if error_code == libvirt.VIR_ERR_NO_DOMAIN: - raise exception.InstanceNotFound(instance_id=instance.name) - - msg = (_('Error from libvirt while getting domain info for ' - '%(instance_name)s: [Error Code %(error_code)s] %(ex)s') % - {'instance_name': instance.name, - 'error_code': error_code, - 'ex': ex}) - raise exception.NovaException(msg) - - return hardware.InstanceInfo(state=LIBVIRT_POWER_STATE[dom_info[0]], - max_mem_kb=dom_info[1], - mem_kb=dom_info[2], - num_cpu=dom_info[3], - cpu_time_ns=dom_info[4], - id=virt_dom.ID()) - - def _create_domain_setup_lxc(self, instance, image_meta, - block_device_info, disk_info): - inst_path = libvirt_utils.get_instance_path(instance) - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - disk_info = disk_info or {} - disk_mapping = disk_info.get('mapping', []) - - if self._is_booted_from_volume(instance, disk_mapping): - root_disk = block_device.get_root_bdm(block_device_mapping) - disk_path = root_disk['connection_info']['data']['device_path'] - disk_info = blockinfo.get_info_from_bdm( - CONF.libvirt.virt_type, image_meta, root_disk) - self._connect_volume(root_disk['connection_info'], disk_info) - - # Get the system metadata from the instance - use_cow = instance.system_metadata['image_disk_format'] == 'qcow2' - else: - image = self.image_backend.image(instance, 'disk') - disk_path = image.path - use_cow = CONF.use_cow_images - - container_dir = os.path.join(inst_path, 'rootfs') - fileutils.ensure_tree(container_dir) - rootfs_dev = disk.setup_container(disk_path, - container_dir=container_dir, - use_cow=use_cow) - - try: - # Save rootfs device to disconnect it when deleting the instance - if rootfs_dev: - instance.system_metadata['rootfs_device_name'] = rootfs_dev - if CONF.libvirt.uid_maps or CONF.libvirt.gid_maps: - id_maps = self._get_guest_idmaps() - libvirt_utils.chown_for_id_maps(container_dir, id_maps) - except Exception: - with excutils.save_and_reraise_exception(): - self._create_domain_cleanup_lxc(instance) - - def _create_domain_cleanup_lxc(self, instance): - inst_path = libvirt_utils.get_instance_path(instance) - container_dir = os.path.join(inst_path, 'rootfs') - - try: - state = self.get_info(instance).state - except exception.InstanceNotFound: - # The domain may not be present if the instance failed to start - state = None - - if state == power_state.RUNNING: - # NOTE(uni): Now the container is running with its own private - # mount namespace and so there is no need to keep the container - # rootfs mounted in the host namespace - disk.clean_lxc_namespace(container_dir=container_dir) - else: - 
disk.teardown_container(container_dir=container_dir) - - @contextlib.contextmanager - def _lxc_disk_handler(self, instance, image_meta, - block_device_info, disk_info): - """Context manager to handle the pre and post instance boot, - LXC specific disk operations. - - An image or a volume path will be prepared and setup to be - used by the container, prior to starting it. - The disk will be disconnected and unmounted if a container has - failed to start. - """ - - if CONF.libvirt.virt_type != 'lxc': - yield - return - - self._create_domain_setup_lxc(instance, image_meta, - block_device_info, disk_info) - - try: - yield - finally: - self._create_domain_cleanup_lxc(instance) - - def _create_domain(self, xml=None, domain=None, - instance=None, launch_flags=0, power_on=True): - """Create a domain. - - Either domain or xml must be passed in. If both are passed, then - the domain definition is overwritten from the xml. - """ - err = None - try: - if xml: - err = (_LE('Error defining a domain with XML: %s') % - encodeutils.safe_decode(xml, errors='ignore')) - domain = self._conn.defineXML(xml) - - if power_on: - err = _LE('Error launching a defined domain with XML: %s') \ - % encodeutils.safe_decode(domain.XMLDesc(0), - errors='ignore') - domain.createWithFlags(launch_flags) - - if not utils.is_neutron(): - err = _LE('Error enabling hairpin mode with XML: %s') \ - % encodeutils.safe_decode(domain.XMLDesc(0), - errors='ignore') - self._enable_hairpin(domain.XMLDesc(0)) - except Exception: - with excutils.save_and_reraise_exception(): - if err: - LOG.error(err) - - return domain - - def _neutron_failed_callback(self, event_name, instance): - LOG.error(_LE('Neutron Reported failure on event ' - '%(event)s for instance %(uuid)s'), - {'event': event_name, 'uuid': instance.uuid}) - if CONF.vif_plugging_is_fatal: - raise exception.VirtualInterfaceCreateException() - - def _get_neutron_events(self, network_info): - # NOTE(danms): We need to collect any VIFs that are currently - # down that we expect a down->up event for. Anything that is - # already up will not undergo that transition, and for - # anything that might be stale (cache-wise) assume it's - # already up so we don't block on it. 
- return [('network-vif-plugged', vif['id']) - for vif in network_info if vif.get('active', True) is False] - - def _create_domain_and_network(self, context, xml, instance, network_info, - disk_info, block_device_info=None, - power_on=True, reboot=False, - vifs_already_plugged=False): - - """Do required network setup and create domain.""" - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - image_meta = utils.get_image_from_system_metadata( - instance.system_metadata) - - for vol in block_device_mapping: - connection_info = vol['connection_info'] - - if (not reboot and 'data' in connection_info and - 'volume_id' in connection_info['data']): - volume_id = connection_info['data']['volume_id'] - encryption = encryptors.get_encryption_metadata( - context, self._volume_api, volume_id, connection_info) - - if encryption: - encryptor = self._get_volume_encryptor(connection_info, - encryption) - encryptor.attach_volume(context, **encryption) - - timeout = CONF.vif_plugging_timeout - if (self._conn_supports_start_paused and - utils.is_neutron() and not - vifs_already_plugged and power_on and timeout): - events = self._get_neutron_events(network_info) - else: - events = [] - - launch_flags = events and libvirt.VIR_DOMAIN_START_PAUSED or 0 - domain = None - try: - with self.virtapi.wait_for_instance_event( - instance, events, deadline=timeout, - error_callback=self._neutron_failed_callback): - self.plug_vifs(instance, network_info) - self.firewall_driver.setup_basic_filtering(instance, - network_info) - self.firewall_driver.prepare_instance_filter(instance, - network_info) - with self._lxc_disk_handler(instance, image_meta, - block_device_info, disk_info): - domain = self._create_domain( - xml, instance=instance, - launch_flags=launch_flags, - power_on=power_on) - - self.firewall_driver.apply_instance_filter(instance, - network_info) - except exception.VirtualInterfaceCreateException: - # Neutron reported failure and we didn't swallow it, so - # bail here - with excutils.save_and_reraise_exception(): - if domain: - domain.destroy() - self.cleanup(context, instance, network_info=network_info, - block_device_info=block_device_info) - except eventlet.timeout.Timeout: - # We never heard from Neutron - LOG.warn(_LW('Timeout waiting for vif plugging callback for ' - 'instance %(uuid)s'), {'uuid': instance.uuid}) - if CONF.vif_plugging_is_fatal: - if domain: - domain.destroy() - self.cleanup(context, instance, network_info=network_info, - block_device_info=block_device_info) - raise exception.VirtualInterfaceCreateException() - - # Resume only if domain has been paused - if launch_flags & libvirt.VIR_DOMAIN_START_PAUSED: - domain.resume() - return domain - - def _get_all_block_devices(self): - """Return all block devices in use on this node.""" - devices = [] - for dom in self._host.list_instance_domains(): - try: - doc = etree.fromstring(dom.XMLDesc(0)) - except libvirt.libvirtError as e: - LOG.warn(_LW("couldn't obtain the XML from domain:" - " %(uuid)s, exception: %(ex)s") % - {"uuid": dom.UUIDString(), "ex": e}) - continue - except Exception: - continue - sources = doc.findall("./devices/disk[@type='block']/source") - for source in sources: - devices.append(source.get('dev')) - return devices - - def _get_interfaces(self, xml): - """Note that this function takes a domain xml. - - Returns a list of all network interfaces for this instance. 
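_create_domain_and_network above waits only for network-vif-plugged events on VIFs that are currently down, and asks libvirt to start the guest paused only when there is something to wait for. A standalone sketch of those two decisions with simplified inputs (plain dicts instead of Nova's network model; helper names are hypothetical):

def pending_vif_events(network_info):
    """Collect (event_name, vif_id) pairs for VIFs that are still down.

    VIFs that are already active, or whose state is unknown and possibly
    stale, are skipped: no down->up transition is expected for them.
    """
    return [('network-vif-plugged', vif['id'])
            for vif in network_info if vif.get('active', True) is False]


def start_paused(events, conn_supports_paused, timeout):
    """Start the guest paused only when we will actually wait on events."""
    return bool(events) and conn_supports_paused and timeout > 0


vifs = [{'id': 'vif-a', 'active': False}, {'id': 'vif-b', 'active': True}]
events = pending_vif_events(vifs)
assert events == [('network-vif-plugged', 'vif-a')]
assert start_paused(events, True, 300) is True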
- """ - doc = None - - try: - doc = etree.fromstring(xml) - except Exception: - return [] - - interfaces = [] - - nodes = doc.findall('./devices/interface/target') - for target in nodes: - interfaces.append(target.get('dev')) - - return interfaces - - def _get_vcpu_total(self): - """Get available vcpu number of physical computer. - - :returns: the number of cpu core instances can be used. - - """ - if self._vcpu_total != 0: - return self._vcpu_total - - try: - total_pcpus = self._conn.getInfo()[2] - except libvirt.libvirtError: - LOG.warn(_LW("Cannot get the number of cpu, because this " - "function is not implemented for this platform. ")) - return 0 - - if CONF.vcpu_pin_set is None: - self._vcpu_total = total_pcpus - return self._vcpu_total - - available_ids = hardware.get_vcpu_pin_set() - if sorted(available_ids)[-1] >= total_pcpus: - raise exception.Invalid(_("Invalid vcpu_pin_set config, " - "out of hypervisor cpu range.")) - self._vcpu_total = len(available_ids) - return self._vcpu_total - - def _get_memory_mb_total(self): - """Get the total memory size(MB) of physical computer. - - :returns: the total amount of memory(MB). - - """ - - return self._conn.getInfo()[1] - - @staticmethod - def _get_local_gb_info(): - """Get local storage info of the compute node in GB. - - :returns: A dict containing: - :total: How big the overall usable filesystem is (in gigabytes) - :free: How much space is free (in gigabytes) - :used: How much space is used (in gigabytes) - """ - - if CONF.libvirt.images_type == 'lvm': - info = lvm.get_volume_group_info( - CONF.libvirt.images_volume_group) - elif CONF.libvirt.images_type == 'rbd': - info = LibvirtDriver._get_rbd_driver().get_pool_info() - else: - info = libvirt_utils.get_fs_info(CONF.instances_path) - - for (k, v) in info.iteritems(): - info[k] = v / units.Gi - - return info - - def _get_vcpu_used(self): - """Get vcpu usage number of physical computer. - - :returns: The total number of vcpu(s) that are currently being used. - - """ - - total = 0 - if CONF.libvirt.virt_type == 'lxc': - return total + 1 - - for dom in self._host.list_instance_domains(): - try: - vcpus = dom.vcpus() - except libvirt.libvirtError as e: - LOG.warn(_LW("couldn't obtain the vpu count from domain id:" - " %(uuid)s, exception: %(ex)s") % - {"uuid": dom.UUIDString(), "ex": e}) - else: - if vcpus is not None and len(vcpus) > 1: - total += len(vcpus[1]) - # NOTE(gtt116): give other tasks a chance. - greenthread.sleep(0) - return total - - def _get_memory_mb_used(self): - """Get the used memory size(MB) of physical computer. - - :returns: the total usage of memory(MB). 
- - """ - - if sys.platform.upper() not in ['LINUX2', 'LINUX3']: - return 0 - - with open('/proc/meminfo') as fp: - m = fp.read().split() - idx1 = m.index('MemFree:') - idx2 = m.index('Buffers:') - idx3 = m.index('Cached:') - if CONF.libvirt.virt_type == 'xen': - used = 0 - for dom in self._host.list_instance_domains(only_guests=False): - try: - dom_mem = int(self._host.get_domain_info(dom)[2]) - except libvirt.libvirtError as e: - LOG.warn(_LW("couldn't obtain the memory from domain:" - " %(uuid)s, exception: %(ex)s") % - {"uuid": dom.UUIDString(), "ex": e}) - continue - # skip dom0 - if dom.ID() != 0: - used += dom_mem - else: - # the mem reported by dom0 is be greater of what - # it is being used - used += (dom_mem - - (int(m[idx1 + 1]) + - int(m[idx2 + 1]) + - int(m[idx3 + 1]))) - # Convert it to MB - return used / units.Ki - else: - avail = (int(m[idx1 + 1]) + int(m[idx2 + 1]) + int(m[idx3 + 1])) - # Convert it to MB - return self._get_memory_mb_total() - avail / units.Ki - - def _get_instance_capabilities(self): - """Get hypervisor instance capabilities - - Returns a list of tuples that describe instances the - hypervisor is capable of hosting. Each tuple consists - of the triplet (arch, hypervisor_type, vm_mode). - - :returns: List of tuples describing instance capabilities - """ - caps = self._host.get_capabilities() - instance_caps = list() - for g in caps.guests: - for dt in g.domtype: - instance_cap = ( - arch.canonicalize(g.arch), - hv_type.canonicalize(dt), - vm_mode.canonicalize(g.ostype)) - instance_caps.append(instance_cap) - - return instance_caps - - def _get_cpu_info(self): - """Get cpuinfo information. - - Obtains cpu feature from virConnect.getCapabilities. - - :return: see above description - - """ - - caps = self._host.get_capabilities() - cpu_info = dict() - - cpu_info['arch'] = caps.host.cpu.arch - cpu_info['model'] = caps.host.cpu.model - cpu_info['vendor'] = caps.host.cpu.vendor - - topology = dict() - topology['sockets'] = caps.host.cpu.sockets - topology['cores'] = caps.host.cpu.cores - topology['threads'] = caps.host.cpu.threads - cpu_info['topology'] = topology - - features = set() - for f in caps.host.cpu.features: - features.add(f.name) - cpu_info['features'] = features - return cpu_info - - def _get_pcidev_info(self, devname): - """Returns a dict of PCI device.""" - - def _get_device_type(cfgdev): - """Get a PCI device's device type. - - An assignable PCI device can be a normal PCI device, - a SR-IOV Physical Function (PF), or a SR-IOV Virtual - Function (VF). Only normal PCI devices or SR-IOV VFs - are assignable, while SR-IOV PFs are always owned by - hypervisor. - - Please notice that a PCI device with SR-IOV - capability but not enabled is reported as normal PCI device. 
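The non-Xen branch above computes used memory as total minus the sum of MemFree, Buffers and Cached from /proc/meminfo, converting KiB to MiB. A standalone sketch of the same arithmetic, fed a meminfo snippet instead of the live file:

def used_memory_mb(meminfo_text, total_mb):
    """Used memory in MiB, reading the same fields as the code above."""
    fields = meminfo_text.split()
    free_kb = sum(int(fields[fields.index(key) + 1])
                  for key in ('MemFree:', 'Buffers:', 'Cached:'))
    return total_mb - free_kb // 1024


sample = ("MemTotal: 4096000 kB\nMemFree: 1024000 kB\n"
          "Buffers: 512000 kB\nCached: 512000 kB\n")
# 4000 MiB total, 2000 MiB free+buffers+cached -> 2000 MiB used.
assert used_memory_mb(sample, 4000) == 2000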
- """ - for fun_cap in cfgdev.pci_capability.fun_capability: - if len(fun_cap.device_addrs) != 0: - if fun_cap.type == 'virt_functions': - return {'dev_type': 'type-PF'} - if fun_cap.type == 'phys_function': - phys_address = "%04x:%02x:%02x.%01x" % ( - fun_cap.device_addrs[0][0], - fun_cap.device_addrs[0][1], - fun_cap.device_addrs[0][2], - fun_cap.device_addrs[0][3]) - return {'dev_type': 'type-VF', - 'phys_function': phys_address} - return {'dev_type': 'type-PCI'} - - virtdev = self._conn.nodeDeviceLookupByName(devname) - xmlstr = virtdev.XMLDesc(0) - cfgdev = vconfig.LibvirtConfigNodeDevice() - cfgdev.parse_str(xmlstr) - - address = "%04x:%02x:%02x.%1x" % ( - cfgdev.pci_capability.domain, - cfgdev.pci_capability.bus, - cfgdev.pci_capability.slot, - cfgdev.pci_capability.function) - - device = { - "dev_id": cfgdev.name, - "address": address, - "product_id": "%04x" % cfgdev.pci_capability.product_id, - "vendor_id": "%04x" % cfgdev.pci_capability.vendor_id, - } - - device["numa_node"] = cfgdev.pci_capability.numa_node - - # requirement by DataBase Model - device['label'] = 'label_%(vendor_id)s_%(product_id)s' % device - device.update(_get_device_type(cfgdev)) - return device - - def _get_pci_passthrough_devices(self): - """Get host PCI devices information. - - Obtains pci devices information from libvirt, and returns - as a JSON string. - - Each device information is a dictionary, with mandatory keys - of 'address', 'vendor_id', 'product_id', 'dev_type', 'dev_id', - 'label' and other optional device specific information. - - Refer to the objects/pci_device.py for more idea of these keys. - - :returns: a JSON string containaing a list of the assignable PCI - devices information - """ - # Bail early if we know we can't support `listDevices` to avoid - # repeated warnings within a periodic task - if not getattr(self, '_list_devices_supported', True): - return jsonutils.dumps([]) - - try: - dev_names = self._conn.listDevices('pci', 0) or [] - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - if error_code == libvirt.VIR_ERR_NO_SUPPORT: - self._list_devices_supported = False - LOG.warn(_LW("URI %(uri)s does not support " - "listDevices: " "%(error)s"), - {'uri': self.uri(), 'error': ex}) - return jsonutils.dumps([]) - else: - raise - - pci_info = [] - for name in dev_names: - pci_info.append(self._get_pcidev_info(name)) - - return jsonutils.dumps(pci_info) - - def _has_numa_support(self): - if not self._host.has_min_version(MIN_LIBVIRT_NUMA_VERSION): - return False - - if CONF.libvirt.virt_type not in ['qemu', 'kvm']: - return False - - return True - - def _has_hugepage_support(self): - if not self._host.has_min_version(MIN_LIBVIRT_HUGEPAGE_VERSION): - return False - - if CONF.libvirt.virt_type not in ['qemu', 'kvm']: - return False - - return True - - def _get_host_numa_topology(self): - if not self._has_numa_support(): - return - - caps = self._host.get_capabilities() - topology = caps.host.topology - - if topology is None or not topology.cells: - return - - cells = [] - allowed_cpus = hardware.get_vcpu_pin_set() - online_cpus = self._host.get_online_cpus() - if allowed_cpus: - allowed_cpus &= online_cpus - else: - allowed_cpus = online_cpus - - for cell in topology.cells: - cpuset = set(cpu.id for cpu in cell.cpus) - siblings = sorted(map(set, - set(tuple(cpu.siblings) - if cpu.siblings else () - for cpu in cell.cpus) - )) - cpuset &= allowed_cpus - siblings = [sib & allowed_cpus for sib in siblings] - # Filter out singles and empty sibling sets that may be left - 
siblings = [sib for sib in siblings if len(sib) > 1] - - mempages = [] - if self._has_hugepage_support(): - mempages = [ - objects.NUMAPagesTopology( - size_kb=pages.size, - total=pages.total, - used=0) - for pages in cell.mempages] - - cell = objects.NUMACell(id=cell.id, cpuset=cpuset, - memory=cell.memory / units.Ki, - cpu_usage=0, memory_usage=0, - siblings=siblings, - pinned_cpus=set([]), - mempages=mempages) - cells.append(cell) - - return objects.NUMATopology(cells=cells) - - def get_all_volume_usage(self, context, compute_host_bdms): - """Return usage info for volumes attached to vms on - a given host. - """ - vol_usage = [] - - for instance_bdms in compute_host_bdms: - instance = instance_bdms['instance'] - - for bdm in instance_bdms['instance_bdms']: - vol_stats = [] - mountpoint = bdm['device_name'] - if mountpoint.startswith('/dev/'): - mountpoint = mountpoint[5:] - volume_id = bdm['volume_id'] - - LOG.debug("Trying to get stats for the volume %s", - volume_id) - vol_stats = self.block_stats(instance, mountpoint) - - if vol_stats: - stats = dict(volume=volume_id, - instance=instance, - rd_req=vol_stats[0], - rd_bytes=vol_stats[1], - wr_req=vol_stats[2], - wr_bytes=vol_stats[3]) - LOG.debug( - "Got volume usage stats for the volume=%(volume)s," - " rd_req=%(rd_req)d, rd_bytes=%(rd_bytes)d, " - "wr_req=%(wr_req)d, wr_bytes=%(wr_bytes)d", - stats, instance=instance) - vol_usage.append(stats) - - return vol_usage - - def block_stats(self, instance, disk_id): - """Note that this function takes an instance name.""" - try: - domain = self._host.get_domain(instance) - return domain.blockStats(disk_id) - except libvirt.libvirtError as e: - errcode = e.get_error_code() - LOG.info(_LI('Getting block stats failed, device might have ' - 'been detached. Instance=%(instance_name)s ' - 'Disk=%(disk)s Code=%(errcode)s Error=%(e)s'), - {'instance_name': instance.name, 'disk': disk_id, - 'errcode': errcode, 'e': e}) - except exception.InstanceNotFound: - LOG.info(_LI('Could not find domain in libvirt for instance %s. ' - 'Cannot get block stats for device'), instance.name) - - def get_console_pool_info(self, console_type): - # TODO(mdragon): console proxy should be implemented for libvirt, - # in case someone wants to use it with kvm or - # such. For now return fake data. - return {'address': '127.0.0.1', - 'username': 'fakeuser', - 'password': 'fakepassword'} - - def refresh_security_group_rules(self, security_group_id): - self.firewall_driver.refresh_security_group_rules(security_group_id) - - def refresh_security_group_members(self, security_group_id): - self.firewall_driver.refresh_security_group_members(security_group_id) - - def refresh_instance_security_rules(self, instance): - self.firewall_driver.refresh_instance_security_rules(instance) - - def refresh_provider_fw_rules(self): - self.firewall_driver.refresh_provider_fw_rules() - - def get_available_resource(self, nodename): - """Retrieve resource information. - - This method is called when nova-compute launches, and - as part of a periodic task that records the results in the DB. - - :param nodename: will be put in PCI device - :returns: dictionary containing resource info - """ - - disk_info_dict = self._get_local_gb_info() - data = {} - - # NOTE(dprince): calling capabilities before getVersion works around - # an initialization issue with some versions of Libvirt (1.0.5.5). 
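get_all_volume_usage above unpacks libvirt's blockStats result, which is ordered (read requests, read bytes, write requests, write bytes, errors). A standalone sketch of mapping such a tuple onto the named fields used in the stats dict; the values are made up:

def volume_usage_from_block_stats(volume_id, stats):
    """Map a blockStats-style tuple onto named usage fields."""
    rd_req, rd_bytes, wr_req, wr_bytes = stats[:4]  # 5th element is errors
    return {'volume': volume_id,
            'rd_req': rd_req, 'rd_bytes': rd_bytes,
            'wr_req': wr_req, 'wr_bytes': wr_bytes}


usage = volume_usage_from_block_stats('vol-1', (10, 4096, 5, 2048, 0))
assert usage['rd_bytes'] == 4096 and usage['wr_req'] == 5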
- # See: https://bugzilla.redhat.com/show_bug.cgi?id=1000116 - # See: https://bugs.launchpad.net/nova/+bug/1215593 - - # Temporary convert supported_instances into a string, while keeping - # the RPC version as JSON. Can be changed when RPC broadcast is removed - data["supported_instances"] = jsonutils.dumps( - self._get_instance_capabilities()) - - data["vcpus"] = self._get_vcpu_total() - data["memory_mb"] = self._get_memory_mb_total() - data["local_gb"] = disk_info_dict['total'] - data["vcpus_used"] = self._get_vcpu_used() - data["memory_mb_used"] = self._get_memory_mb_used() - data["local_gb_used"] = disk_info_dict['used'] - data["hypervisor_type"] = self._host.get_driver_type() - data["hypervisor_version"] = self._host.get_version() - data["hypervisor_hostname"] = self._host.get_hostname() - # TODO(berrange): why do we bother converting the - # libvirt capabilities XML into a special JSON format ? - # The data format is different across all the drivers - # so we could just return the raw capabilities XML - # which 'compare_cpu' could use directly - # - # That said, arch_filter.py now seems to rely on - # the libvirt drivers format which suggests this - # data format needs to be standardized across drivers - data["cpu_info"] = jsonutils.dumps(self._get_cpu_info()) - - disk_free_gb = disk_info_dict['free'] - disk_over_committed = self._get_disk_over_committed_size_total() - available_least = disk_free_gb * units.Gi - disk_over_committed - data['disk_available_least'] = available_least / units.Gi - - data['pci_passthrough_devices'] = \ - self._get_pci_passthrough_devices() - - numa_topology = self._get_host_numa_topology() - if numa_topology: - data['numa_topology'] = numa_topology._to_json() - else: - data['numa_topology'] = None - - return data - - def check_instance_shared_storage_local(self, context, instance): - """Check if instance files located on shared storage. - - This runs check on the destination host, and then calls - back to the source host to check the results. - - :param context: security context - :param instance: nova.objects.instance.Instance object - :returns - :tempfile: A dict containing the tempfile info on the destination - host - :None: 1. If the instance path is not existing. - 2. If the image backend is shared block storage type. - """ - if self.image_backend.backend().is_shared_block_storage(): - return None - - dirpath = libvirt_utils.get_instance_path(instance) - - if not os.path.exists(dirpath): - return None - - fd, tmp_file = tempfile.mkstemp(dir=dirpath) - LOG.debug("Creating tmpfile %s to verify with other " - "compute node that the instance is on " - "the same shared storage.", - tmp_file, instance=instance) - os.close(fd) - return {"filename": tmp_file} - - def check_instance_shared_storage_remote(self, context, data): - return os.path.exists(data['filename']) - - def check_instance_shared_storage_cleanup(self, context, data): - fileutils.delete_if_exists(data["filename"]) - - def check_can_live_migrate_destination(self, context, instance, - src_compute_info, dst_compute_info, - block_migration=False, - disk_over_commit=False): - """Check if it is possible to execute live migration. - - This runs checks on the destination host, and then calls - back to the source host to check the results. 
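get_available_resource above reports disk_available_least as free space minus the amount already promised to over-committed (sparse/qcow2) images, doing the subtraction in bytes and reporting GiB. A standalone sketch of that arithmetic:

GiB = 1024 ** 3


def disk_available_least_gb(free_gb, over_committed_bytes):
    """Free space minus over-committed bytes, reported in whole GiB."""
    return (free_gb * GiB - over_committed_bytes) // GiB


# 100 GiB free, 30 GiB promised to sparse images but not yet written.
assert disk_available_least_gb(100, 30 * GiB) == 70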
- - :param context: security context - :param instance: nova.db.sqlalchemy.models.Instance - :param block_migration: if true, prepare for block migration - :param disk_over_commit: if true, allow disk over commit - :returns: a dict containing: - :filename: name of the tmpfile under CONF.instances_path - :block_migration: whether this is block migration - :disk_over_commit: disk-over-commit factor on dest host - :disk_available_mb: available disk space on dest host - """ - disk_available_mb = None - if block_migration: - disk_available_gb = dst_compute_info['disk_available_least'] - disk_available_mb = \ - (disk_available_gb * units.Ki) - CONF.reserved_host_disk_mb - - # Compare CPU - if not instance.vcpu_model or not instance.vcpu_model.model: - source_cpu_info = src_compute_info['cpu_info'] - self._compare_cpu(None, source_cpu_info) - else: - self._compare_cpu(instance.vcpu_model, None) - - # Create file on storage, to be checked on source host - filename = self._create_shared_storage_test_file() - - return {"filename": filename, - "image_type": CONF.libvirt.images_type, - "block_migration": block_migration, - "disk_over_commit": disk_over_commit, - "disk_available_mb": disk_available_mb} - - def check_can_live_migrate_destination_cleanup(self, context, - dest_check_data): - """Do required cleanup on dest host after check_can_live_migrate calls - - :param context: security context - """ - filename = dest_check_data["filename"] - self._cleanup_shared_storage_test_file(filename) - - def check_can_live_migrate_source(self, context, instance, - dest_check_data, - block_device_info=None): - """Check if it is possible to execute live migration. - - This checks if the live migration can succeed, based on the - results from check_can_live_migrate_destination. - - :param context: security context - :param instance: nova.db.sqlalchemy.models.Instance - :param dest_check_data: result of check_can_live_migrate_destination - :param block_device_info: result of _get_instance_block_device_info - :returns: a dict containing migration info - """ - # Checking shared storage connectivity - # if block migration, instances_paths should not be on shared storage. - source = CONF.host - - dest_check_data.update({'is_shared_instance_path': - self._check_shared_storage_test_file( - dest_check_data['filename'])}) - - dest_check_data.update({'is_shared_block_storage': - self._is_shared_block_storage(instance, dest_check_data, - block_device_info)}) - - if dest_check_data['block_migration']: - if (dest_check_data['is_shared_block_storage'] or - dest_check_data['is_shared_instance_path']): - reason = _("Block migration can not be used " - "with shared storage.") - raise exception.InvalidLocalStorage(reason=reason, path=source) - self._assert_dest_node_has_enough_disk(context, instance, - dest_check_data['disk_available_mb'], - dest_check_data['disk_over_commit'], - block_device_info) - - elif not (dest_check_data['is_shared_block_storage'] or - dest_check_data['is_shared_instance_path']): - reason = _("Live migration can not be used " - "without shared storage.") - raise exception.InvalidSharedStorage(reason=reason, path=source) - - # NOTE(mikal): include the instance directory name here because it - # doesn't yet exist on the destination but we want to force that - # same name to be used - instance_path = libvirt_utils.get_instance_path(instance, - relative=True) - dest_check_data['instance_relative_path'] = instance_path - - # NOTE(danms): Emulate this old flag in case we're talking to - # an older client (<= Juno). 
We can remove this when we bump the - # compute RPC API to 4.0. - dest_check_data['is_shared_storage'] = ( - dest_check_data['is_shared_instance_path']) - - return dest_check_data - - def _is_shared_block_storage(self, instance, dest_check_data, - block_device_info=None): - """Check if all block storage of an instance can be shared - between source and destination of a live migration. - - Returns true if the instance is volume backed and has no local disks, - or if the image backend is the same on source and destination and the - backend shares block storage between compute nodes. - - :param instance: nova.objects.instance.Instance object - :param dest_check_data: dict with boolean fields image_type, - is_shared_instance_path, and is_volume_backed - """ - if (CONF.libvirt.images_type == dest_check_data.get('image_type') and - self.image_backend.backend().is_shared_block_storage()): - # NOTE(dgenin): currently true only for RBD image backend - return True - - if (dest_check_data.get('is_shared_instance_path') and - self.image_backend.backend().is_file_in_instance_path()): - # NOTE(angdraug): file based image backends (Raw, Qcow2) - # place block device files under the instance path - return True - - if (dest_check_data.get('is_volume_backed') and - not bool(jsonutils.loads( - self.get_instance_disk_info(instance, - block_device_info)))): - return True - - return False - - def _assert_dest_node_has_enough_disk(self, context, instance, - available_mb, disk_over_commit, - block_device_info=None): - """Checks if destination has enough disk for block migration.""" - # Libvirt supports qcow2 disk format,which is usually compressed - # on compute nodes. - # Real disk image (compressed) may enlarged to "virtual disk size", - # that is specified as the maximum disk size. - # (See qemu-img -f path-to-disk) - # Scheduler recognizes destination host still has enough disk space - # if real disk size < available disk size - # if disk_over_commit is True, - # otherwise virtual disk size < available disk size. - - available = 0 - if available_mb: - available = available_mb * units.Mi - - ret = self.get_instance_disk_info(instance, - block_device_info=block_device_info) - disk_infos = jsonutils.loads(ret) - - necessary = 0 - if disk_over_commit: - for info in disk_infos: - necessary += int(info['disk_size']) - else: - for info in disk_infos: - necessary += int(info['virt_disk_size']) - - # Check that available disk > necessary disk - if (available - necessary) < 0: - reason = (_('Unable to migrate %(instance_uuid)s: ' - 'Disk of instance is too large(available' - ' on destination host:%(available)s ' - '< need:%(necessary)s)') % - {'instance_uuid': instance.uuid, - 'available': available, - 'necessary': necessary}) - raise exception.MigrationPreCheckError(reason=reason) - - def _compare_cpu(self, guest_cpu, host_cpu_str): - """Check the host is compatible with the requested CPU - - :param guest_cpu: nova.objects.VirtCPUModel or None - :param host_cpu_str: JSON from _get_cpu_info() method - - If the 'guest_cpu' parameter is not None, this will be - validated for migration compatibility with the host. - Otherwise the 'host_cpu_str' JSON string will be used for - validation. - - :returns: - None. if given cpu info is not compatible to this server, - raise exception. 
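_assert_dest_node_has_enough_disk above sums either the real on-disk sizes or the virtual sizes, depending on whether disk over-commit is allowed, and compares the sum with the destination's available space. A standalone sketch of that check with hypothetical disk-info dicts:

def has_enough_disk(disk_infos, available_mb, disk_over_commit):
    """True when the destination can hold the instance's disks.

    With over-commit, only the bytes actually written (disk_size) must
    fit; without it, the full virtual sizes must fit.
    """
    key = 'disk_size' if disk_over_commit else 'virt_disk_size'
    necessary = sum(int(info[key]) for info in disk_infos)
    return necessary <= available_mb * 1024 * 1024


disks = [{'disk_size': 2 * 1024 ** 3, 'virt_disk_size': 20 * 1024 ** 3}]
assert has_enough_disk(disks, 10 * 1024, True) is True    # 2 GiB real fits in 10 GiB
assert has_enough_disk(disks, 10 * 1024, False) is False  # 20 GiB virtual does not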
- """ - - # NOTE(berendt): virConnectCompareCPU not working for Xen - if CONF.libvirt.virt_type not in ['qemu', 'kvm']: - return - - if guest_cpu is None: - info = jsonutils.loads(host_cpu_str) - LOG.info(_LI('Instance launched has CPU info: %s'), host_cpu_str) - cpu = vconfig.LibvirtConfigCPU() - cpu.arch = info['arch'] - cpu.model = info['model'] - cpu.vendor = info['vendor'] - cpu.sockets = info['topology']['sockets'] - cpu.cores = info['topology']['cores'] - cpu.threads = info['topology']['threads'] - for f in info['features']: - cpu.add_feature(vconfig.LibvirtConfigCPUFeature(f)) - else: - cpu = self._vcpu_model_to_cpu_config(guest_cpu) - - u = "http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult" - m = _("CPU doesn't have compatibility.\n\n%(ret)s\n\nRefer to %(u)s") - # unknown character exists in xml, then libvirt complains - try: - ret = self._conn.compareCPU(cpu.to_xml(), 0) - except libvirt.libvirtError as e: - error_code = e.get_error_code() - if error_code == libvirt.VIR_ERR_NO_SUPPORT: - LOG.debug("URI %(uri)s does not support cpu comparison. " - "It will be proceeded though. Error: %(error)s", - {'uri': self.uri(), 'error': e}) - return - else: - LOG.error(m, {'ret': e, 'u': u}) - raise exception.MigrationPreCheckError( - reason=m % {'ret': e, 'u': u}) - - if ret <= 0: - LOG.error(m, {'ret': ret, 'u': u}) - raise exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u}) - - def _create_shared_storage_test_file(self): - """Makes tmpfile under CONF.instances_path.""" - dirpath = CONF.instances_path - fd, tmp_file = tempfile.mkstemp(dir=dirpath) - LOG.debug("Creating tmpfile %s to notify to other " - "compute nodes that they should mount " - "the same storage.", tmp_file) - os.close(fd) - return os.path.basename(tmp_file) - - def _check_shared_storage_test_file(self, filename): - """Confirms existence of the tmpfile under CONF.instances_path. - - Cannot confirm tmpfile return False. - """ - tmp_file = os.path.join(CONF.instances_path, filename) - if not os.path.exists(tmp_file): - return False - else: - return True - - def _cleanup_shared_storage_test_file(self, filename): - """Removes existence of the tmpfile under CONF.instances_path.""" - tmp_file = os.path.join(CONF.instances_path, filename) - os.remove(tmp_file) - - def ensure_filtering_rules_for_instance(self, instance, network_info): - """Ensure that an instance's filtering rules are enabled. - - When migrating an instance, we need the filtering rules to - be configured on the destination host before starting the - migration. - - Also, when restarting the compute service, we need to ensure - that filtering rules exist for all running services. 
- """ - - self.firewall_driver.setup_basic_filtering(instance, network_info) - self.firewall_driver.prepare_instance_filter(instance, - network_info) - - # nwfilters may be defined in a separate thread in the case - # of libvirt non-blocking mode, so we wait for completion - timeout_count = range(CONF.live_migration_retry_count) - while timeout_count: - if self.firewall_driver.instance_filter_exists(instance, - network_info): - break - timeout_count.pop() - if len(timeout_count) == 0: - msg = _('The firewall filter for %s does not exist') - raise exception.NovaException(msg % instance.name) - greenthread.sleep(1) - - def filter_defer_apply_on(self): - self.firewall_driver.filter_defer_apply_on() - - def filter_defer_apply_off(self): - self.firewall_driver.filter_defer_apply_off() - - def live_migration(self, context, instance, dest, - post_method, recover_method, block_migration=False, - migrate_data=None): - """Spawning live_migration operation for distributing high-load. - - :param context: security context - :param instance: - nova.db.sqlalchemy.models.Instance object - instance object that is migrated. - :param dest: destination host - :param post_method: - post operation method. - expected nova.compute.manager._post_live_migration. - :param recover_method: - recovery method when any exception occurs. - expected nova.compute.manager._rollback_live_migration. - :param block_migration: if true, do block migration. - :param migrate_data: implementation specific params - - """ - - # 'dest' will be substituted into 'migration_uri' so ensure - # it does't contain any characters that could be used to - # exploit the URI accepted by libivrt - if not libvirt_utils.is_valid_hostname(dest): - raise exception.InvalidHostname(hostname=dest) - - utils.spawn(self._live_migration, context, instance, dest, - post_method, recover_method, block_migration, - migrate_data) - - def _update_xml(self, xml_str, volume, listen_addrs): - xml_doc = etree.fromstring(xml_str) - - if volume: - xml_doc = self._update_volume_xml(xml_doc, volume) - if listen_addrs: - xml_doc = self._update_graphics_xml(xml_doc, listen_addrs) - else: - self._check_graphics_addresses_can_live_migrate(listen_addrs) - - return etree.tostring(xml_doc) - - def _update_graphics_xml(self, xml_doc, listen_addrs): - - # change over listen addresses - for dev in xml_doc.findall('./devices/graphics'): - gr_type = dev.get('type') - listen_tag = dev.find('listen') - if gr_type in ('vnc', 'spice'): - if listen_tag is not None: - listen_tag.set('address', listen_addrs[gr_type]) - if dev.get('listen') is not None: - dev.set('listen', listen_addrs[gr_type]) - - return xml_doc - - def _update_volume_xml(self, xml_doc, volume): - """Update XML using device information of destination host.""" - - # Update volume xml - parser = etree.XMLParser(remove_blank_text=True) - disk_nodes = xml_doc.findall('./devices/disk') - for pos, disk_dev in enumerate(disk_nodes): - serial_source = disk_dev.findtext('serial') - if serial_source is None or volume.get(serial_source) is None: - continue - - if ('connection_info' not in volume[serial_source] or - 'disk_info' not in volume[serial_source]): - continue - - conf = self._get_volume_config( - volume[serial_source]['connection_info'], - volume[serial_source]['disk_info']) - xml_doc2 = etree.XML(conf.to_xml(), parser) - serial_dest = xml_doc2.findtext('serial') - - # Compare source serial and destination serial number. - # If these serial numbers match, continue the process. 
- if (serial_dest and (serial_source == serial_dest)): - LOG.debug("Find same serial number: pos=%(pos)s, " - "serial=%(num)s", - {'pos': pos, 'num': serial_source}) - for cnt, item_src in enumerate(disk_dev): - # If source and destination have same item, update - # the item using destination value. - for item_dst in xml_doc2.findall(item_src.tag): - disk_dev.remove(item_src) - item_dst.tail = None - disk_dev.insert(cnt, item_dst) - - # If destination has additional items, thses items should be - # added here. - for item_dst in list(xml_doc2): - item_dst.tail = None - disk_dev.insert(cnt, item_dst) - - return xml_doc - - def _check_graphics_addresses_can_live_migrate(self, listen_addrs): - LOCAL_ADDRS = ('0.0.0.0', '127.0.0.1', '::', '::1') - - local_vnc = CONF.vncserver_listen in LOCAL_ADDRS - local_spice = CONF.spice.server_listen in LOCAL_ADDRS - - if ((CONF.vnc_enabled and not local_vnc) or - (CONF.spice.enabled and not local_spice)): - - msg = _('Your libvirt version does not support the' - ' VIR_DOMAIN_XML_MIGRATABLE flag or your' - ' destination node does not support' - ' retrieving listen addresses. In order' - ' for live migration to work properly, you' - ' must configure the graphics (VNC and/or' - ' SPICE) listen addresses to be either' - ' the catch-all address (0.0.0.0 or ::) or' - ' the local address (127.0.0.1 or ::1).') - raise exception.MigrationError(reason=msg) - - if listen_addrs is not None: - dest_local_vnc = listen_addrs['vnc'] in LOCAL_ADDRS - dest_local_spice = listen_addrs['spice'] in LOCAL_ADDRS - - if ((CONF.vnc_enabled and not dest_local_vnc) or - (CONF.spice.enabled and not dest_local_spice)): - - LOG.warn(_LW('Your libvirt version does not support the' - ' VIR_DOMAIN_XML_MIGRATABLE flag, and the ' - ' graphics (VNC and/or SPICE) listen' - ' addresses on the destination node do not' - ' match the addresses on the source node.' - ' Since the source node has listen' - ' addresses set to either the catch-all' - ' address (0.0.0.0 or ::) or the local' - ' address (127.0.0.1 or ::1), the live' - ' migration will succeed, but the VM will' - ' continue to listen on the current' - ' addresses.')) - - def _live_migration_operation(self, context, instance, dest, - block_migration, migrate_data, dom): - """Invoke the live migration operation - - :param context: security context - :param instance: - nova.db.sqlalchemy.models.Instance object - instance object that is migrated. - :param dest: destination host - :param block_migration: if true, do block migration. - :param migrate_data: implementation specific params - :param dom: the libvirt domain object - - This method is intended to be run in a background thread and will - block that thread until the migration is finished or failed. 
- """ - - try: - if block_migration: - flaglist = CONF.libvirt.block_migration_flag.split(',') - else: - flaglist = CONF.libvirt.live_migration_flag.split(',') - flagvals = [getattr(libvirt, x.strip()) for x in flaglist] - logical_sum = reduce(lambda x, y: x | y, flagvals) - - pre_live_migrate_data = (migrate_data or {}).get( - 'pre_live_migration_result', {}) - listen_addrs = pre_live_migrate_data.get('graphics_listen_addrs') - volume = pre_live_migrate_data.get('volume') - - migratable_flag = getattr(libvirt, 'VIR_DOMAIN_XML_MIGRATABLE', - None) - - if (migratable_flag is None or - (listen_addrs is None and not volume)): - self._check_graphics_addresses_can_live_migrate(listen_addrs) - dom.migrateToURI(CONF.libvirt.live_migration_uri % dest, - logical_sum, - None, - CONF.libvirt.live_migration_bandwidth) - else: - old_xml_str = dom.XMLDesc(migratable_flag) - new_xml_str = self._update_xml(old_xml_str, - volume, - listen_addrs) - try: - dom.migrateToURI2(CONF.libvirt.live_migration_uri % dest, - None, - new_xml_str, - logical_sum, - None, - CONF.libvirt.live_migration_bandwidth) - except libvirt.libvirtError as ex: - # NOTE(mriedem): There is a bug in older versions of - # libvirt where the VIR_DOMAIN_XML_MIGRATABLE flag causes - # virDomainDefCheckABIStability to not compare the source - # and target domain xml's correctly for the CPU model. - # We try to handle that error here and attempt the legacy - # migrateToURI path, which could fail if the console - # addresses are not correct, but in that case we have the - # _check_graphics_addresses_can_live_migrate check in place - # to catch it. - # TODO(mriedem): Remove this workaround when - # Red Hat BZ #1141838 is closed. - error_code = ex.get_error_code() - if error_code == libvirt.VIR_ERR_CONFIG_UNSUPPORTED: - LOG.warn(_LW('An error occurred trying to live ' - 'migrate. Falling back to legacy live ' - 'migrate flow. Error: %s'), ex, - instance=instance) - self._check_graphics_addresses_can_live_migrate( - listen_addrs) - dom.migrateToURI( - CONF.libvirt.live_migration_uri % dest, - logical_sum, - None, - CONF.libvirt.live_migration_bandwidth) - else: - raise - except Exception as e: - with excutils.save_and_reraise_exception(): - LOG.error(_LE("Live Migration failure: %s"), e, - instance=instance) - - # If 'migrateToURI' fails we don't know what state the - # VM instances on each host are in. Possibilities include - # - # 1. src==running, dst==none - # - # Migration failed & rolled back, or never started - # - # 2. src==running, dst==paused - # - # Migration started but is still ongoing - # - # 3. src==paused, dst==paused - # - # Migration data transfer completed, but switchover - # is still ongoing, or failed - # - # 4. src==paused, dst==running - # - # Migration data transfer completed, switchover - # happened but cleanup on source failed - # - # 5. src==none, dst==running - # - # Migration fully succeeded. - # - # Libvirt will aim to complete any migration operation - # or roll it back. So even if the migrateToURI call has - # returned an error, if the migration was not finished - # libvirt should clean up. - # - # So we take the error raise here with a pinch of salt - # and rely on the domain job info status to figure out - # what really happened to the VM, which is a much more - # reliable indicator. - # - # In particular we need to try very hard to ensure that - # Nova does not "forget" about the guest. 
ie leaving it - # running on a different host to the one recorded in - # the database, as that would be a serious resource leak - - LOG.debug("Migration operation thread has finished", - instance=instance) - - def _live_migration_monitor(self, context, instance, dest, post_method, - recover_method, block_migration, - migrate_data, dom, finish_event): - n = 0 - while True: - info = host.DomainJobInfo.for_domain(dom) - - if info.type == libvirt.VIR_DOMAIN_JOB_NONE: - # Annoyingly this could indicate many possible - # states, so we must fix the mess: - # - # 1. Migration has not yet begun - # 2. Migration has stopped due to failure - # 3. Migration has stopped due to completion - # - # We can detect option 1 by seeing if thread is still - # running. We can distinguish 2 vs 3 by seeing if the - # VM still exists & running on the current host - # - if not finish_event.ready(): - LOG.debug("Operation thread is still running", - instance=instance) - # Leave type untouched - else: - try: - if dom.isActive(): - LOG.debug("VM running on src, migration failed", - instance=instance) - info.type = libvirt.VIR_DOMAIN_JOB_FAILED - else: - LOG.debug("VM is shutoff, migration finished", - instance=instance) - info.type = libvirt.VIR_DOMAIN_JOB_COMPLETED - except libvirt.libvirtError as ex: - LOG.debug("Error checking domain status %(ex)s", - ex, instance=instance) - if ex.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN: - LOG.debug("VM is missing, migration finished", - instance=instance) - info.type = libvirt.VIR_DOMAIN_JOB_COMPLETED - else: - LOG.info(_LI("Error %(ex)s, migration failed"), - instance=instance) - info.type = libvirt.VIR_DOMAIN_JOB_FAILED - - if info.type != libvirt.VIR_DOMAIN_JOB_NONE: - LOG.debug("Fixed incorrect job type to be %d", - info.type, instance=instance) - - if info.type == libvirt.VIR_DOMAIN_JOB_NONE: - # Migration is not yet started - LOG.debug("Migration not running yet", - instance=instance) - elif info.type == libvirt.VIR_DOMAIN_JOB_UNBOUNDED: - # We loop every 500ms, so don't log on every - # iteration to avoid spamming logs for long - # running migrations. Just once every 5 secs - # is sufficient for developers to debug problems. - # We log once every 30 seconds at info to help - # admins see slow running migration operations - # when debug logs are off. - if (n % 10) == 0: - # Ignoring memory_processed, as due to repeated - # dirtying of data, this can be way larger than - # memory_total. Best to just look at what's - # remaining to copy and ignore what's done already - # - # TODO(berrange) perhaps we could include disk - # transfer stats in the progress too, but it - # might make memory info more obscure as large - # disk sizes might dwarf memory size - progress = 0 - if info.memory_total != 0: - progress = round(info.memory_remaining * - 100 / info.memory_total) - instance.progress = 100 - progress - instance.save() - - lg = LOG.debug - if (n % 60) == 0: - lg = LOG.info - - lg(_LI("Migration running for %(secs)d secs, " - "memory %(progress)d%% remaining; " - "(bytes processed=%(processed)d, " - "remaining=%(remaining)d, " - "total=%(total)d)"), - {"secs": n / 2, "progress": progress, - "processed": info.memory_processed, - "remaining": info.memory_remaining, - "total": info.memory_total}, instance=instance) - - # Migration is still running - # - # This is where we'd wire up calls to change live - # migration status. 
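The monitor above derives progress from the memory still to be copied rather than from bytes processed, since repeated page dirtying can push the processed counter well past the total. A standalone sketch of that calculation:

def migration_progress(memory_remaining, memory_total):
    """Percent complete, derived from what is left to copy."""
    remaining_pct = 0
    if memory_total != 0:
        remaining_pct = round(memory_remaining * 100 / memory_total)
    return 100 - remaining_pct


assert migration_progress(0, 4096) == 100    # nothing left -> done
assert migration_progress(1024, 4096) == 75  # a quarter still to copy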
eg change max downtime, cancel - # the operation, change max bandwidth - n = n + 1 - elif info.type == libvirt.VIR_DOMAIN_JOB_COMPLETED: - # Migration is all done - LOG.info(_LI("Migration operation has completed"), - instance=instance) - post_method(context, instance, dest, block_migration, - migrate_data) - break - elif info.type == libvirt.VIR_DOMAIN_JOB_FAILED: - # Migration did not succeed - LOG.error(_LE("Migration operation has aborted"), - instance=instance) - recover_method(context, instance, dest, block_migration, - migrate_data) - break - elif info.type == libvirt.VIR_DOMAIN_JOB_CANCELLED: - # Migration was stopped by admin - LOG.warn(_LW("Migration operation was cancelled"), - instance=instance) - recover_method(context, instance, dest, block_migration, - migrate_data) - break - else: - LOG.warn(_LW("Unexpected migration job type: %d"), - info.type, instance=instance) - - time.sleep(0.5) - - def _live_migration(self, context, instance, dest, post_method, - recover_method, block_migration, - migrate_data): - """Do live migration. - - :param context: security context - :param instance: - nova.db.sqlalchemy.models.Instance object - instance object that is migrated. - :param dest: destination host - :param post_method: - post operation method. - expected nova.compute.manager._post_live_migration. - :param recover_method: - recovery method when any exception occurs. - expected nova.compute.manager._rollback_live_migration. - :param block_migration: if true, do block migration. - :param migrate_data: implementation specific params - - This fires off a new thread to run the blocking migration - operation, and then this thread monitors the progress of - migration and controls its operation - """ - - dom = self._host.get_domain(instance) - - opthread = utils.spawn(self._live_migration_operation, - context, instance, dest, - block_migration, - migrate_data, dom) - - finish_event = eventlet.event.Event() - - def thread_finished(thread, event): - LOG.debug("Migration operation thread notification", - instance=instance) - event.send() - opthread.link(thread_finished, finish_event) - - # Let eventlet schedule the new thread right away - time.sleep(0) - - try: - LOG.debug("Starting monitoring of live migration", - instance=instance) - self._live_migration_monitor(context, instance, dest, - post_method, recover_method, - block_migration, migrate_data, - dom, finish_event) - except Exception as ex: - LOG.warn(_LW("Error monitoring migration: %(ex)s"), - {"ex": ex}, instance=instance, exc_info=True) - raise - finally: - LOG.debug("Live migration monitoring is all done", - instance=instance) - - def _try_fetch_image(self, context, path, image_id, instance, - fallback_from_host=None): - try: - libvirt_utils.fetch_image(context, path, - image_id, - instance.user_id, - instance.project_id) - except exception.ImageNotFound: - if not fallback_from_host: - raise - LOG.debug("Image %(image_id)s doesn't exist anymore on " - "image service, attempting to copy image " - "from %(host)s", - {'image_id': image_id, 'host': fallback_from_host}) - libvirt_utils.copy_image(src=path, dest=path, - host=fallback_from_host, - receive=True) - - def _fetch_instance_kernel_ramdisk(self, context, instance, - fallback_from_host=None): - """Download kernel and ramdisk for instance in instance directory.""" - instance_dir = libvirt_utils.get_instance_path(instance) - if instance.kernel_id: - self._try_fetch_image(context, - os.path.join(instance_dir, 'kernel'), - instance.kernel_id, - instance, fallback_from_host) - if 
instance.ramdisk_id: - self._try_fetch_image(context, - os.path.join(instance_dir, 'ramdisk'), - instance.ramdisk_id, - instance, fallback_from_host) - - def rollback_live_migration_at_destination(self, context, instance, - network_info, - block_device_info, - destroy_disks=True, - migrate_data=None): - """Clean up destination node after a failed live migration.""" - try: - self.destroy(context, instance, network_info, block_device_info, - destroy_disks, migrate_data) - finally: - # NOTE(gcb): Failed block live migration may leave instance - # directory at destination node, ensure it is always deleted. - is_shared_instance_path = True - if migrate_data: - is_shared_instance_path = migrate_data.get( - 'is_shared_instance_path', True) - if not is_shared_instance_path: - instance_dir = libvirt_utils.get_instance_path_at_destination( - instance, migrate_data) - if os.path.exists(instance_dir): - shutil.rmtree(instance_dir) - - def pre_live_migration(self, context, instance, block_device_info, - network_info, disk_info, migrate_data=None): - """Preparation live migration.""" - # Steps for volume backed instance live migration w/o shared storage. - is_shared_block_storage = True - is_shared_instance_path = True - is_block_migration = True - if migrate_data: - is_shared_block_storage = migrate_data.get( - 'is_shared_block_storage', True) - is_shared_instance_path = migrate_data.get( - 'is_shared_instance_path', True) - is_block_migration = migrate_data.get('block_migration', True) - - image_meta = utils.get_image_from_system_metadata( - instance.system_metadata) - - if not (is_shared_instance_path and is_shared_block_storage): - # NOTE(dims): Using config drive with iso format does not work - # because of a bug in libvirt with read only devices. However - # one can use vfat as config_drive_format which works fine. - # Please see bug/1246201 for details on the libvirt bug. - if CONF.config_drive_format != 'vfat': - if configdrive.required_by(instance): - raise exception.NoLiveMigrationForConfigDriveInLibVirt() - - if not is_shared_instance_path: - instance_dir = libvirt_utils.get_instance_path_at_destination( - instance, migrate_data) - - if os.path.exists(instance_dir): - raise exception.DestinationDiskExists(path=instance_dir) - os.mkdir(instance_dir) - - if not is_shared_block_storage: - # Ensure images and backing files are present. - self._create_images_and_backing( - context, instance, instance_dir, disk_info, - fallback_from_host=instance.host) - - if not (is_block_migration or is_shared_instance_path): - # NOTE(angdraug): when block storage is shared between source and - # destination and instance path isn't (e.g. volume backed or rbd - # backed instance), instance path on destination has to be prepared - - # Touch the console.log file, required by libvirt. - console_file = self._get_console_log_path(instance) - libvirt_utils.file_open(console_file, 'a').close() - - # if image has kernel and ramdisk, just download - # following normal way. - self._fetch_instance_kernel_ramdisk(context, instance) - - # Establishing connection to volume server. 
- block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - for vol in block_device_mapping: - connection_info = vol['connection_info'] - disk_info = blockinfo.get_info_from_bdm( - CONF.libvirt.virt_type, image_meta, vol) - self._connect_volume(connection_info, disk_info) - - if is_block_migration and len(block_device_mapping): - # NOTE(stpierre): if this instance has mapped volumes, - # we can't do a block migration, since that will - # result in volumes being copied from themselves to - # themselves, which is a recipe for disaster. - LOG.error( - _LE('Cannot block migrate instance %s with mapped volumes'), - instance.uuid) - msg = (_('Cannot block migrate instance %s with mapped volumes') % - instance.uuid) - raise exception.MigrationError(reason=msg) - - # We call plug_vifs before the compute manager calls - # ensure_filtering_rules_for_instance, to ensure bridge is set up - # Retry operation is necessary because continuously request comes, - # concurrent request occurs to iptables, then it complains. - max_retry = CONF.live_migration_retry_count - for cnt in range(max_retry): - try: - self.plug_vifs(instance, network_info) - if utils.is_neutron(): - # Sleep here after plugging vifs to let neutron ovs agent - # handle new ports and assign proper tags. This is needed - # in order to not block RARP packets sent by qemu right - # after migration. Only side effect is more time needed - # for live migration. Positive effect however is less - # packet loss during live migration - # TODO(obondarev): leverage external event framework - time.sleep(10) - break - except processutils.ProcessExecutionError: - if cnt == max_retry - 1: - raise - else: - LOG.warn(_LW('plug_vifs() failed %(cnt)d. Retry up to ' - '%(max_retry)d.'), - {'cnt': cnt, - 'max_retry': max_retry}, - instance=instance) - greenthread.sleep(1) - - # Store vncserver_listen and latest disk device info - res_data = {'graphics_listen_addrs': {}, 'volume': {}} - res_data['graphics_listen_addrs']['vnc'] = CONF.vncserver_listen - res_data['graphics_listen_addrs']['spice'] = CONF.spice.server_listen - for vol in block_device_mapping: - connection_info = vol['connection_info'] - if connection_info.get('serial'): - serial = connection_info['serial'] - res_data['volume'][serial] = {'connection_info': {}, - 'disk_info': {}} - res_data['volume'][serial]['connection_info'] = \ - connection_info - disk_info = blockinfo.get_info_from_bdm( - CONF.libvirt.virt_type, image_meta, vol) - res_data['volume'][serial]['disk_info'] = disk_info - - return res_data - - def _try_fetch_image_cache(self, image, fetch_func, context, filename, - image_id, instance, size, - fallback_from_host=None): - try: - image.cache(fetch_func=fetch_func, - context=context, - filename=filename, - image_id=image_id, - user_id=instance.user_id, - project_id=instance.project_id, - size=size) - except exception.ImageNotFound: - if not fallback_from_host: - raise - LOG.debug("Image %(image_id)s doesn't exist anymore " - "on image service, attempting to copy " - "image from %(host)s", - {'image_id': image_id, 'host': fallback_from_host}) - - def copy_from_host(target, max_size): - libvirt_utils.copy_image(src=target, - dest=target, - host=fallback_from_host, - receive=True) - image.cache(fetch_func=copy_from_host, - filename=filename) - - def _create_images_and_backing(self, context, instance, instance_dir, - disk_info_json, fallback_from_host=None): - """:param context: security context - :param instance: - nova.db.sqlalchemy.models.Instance object - 
instance object that is migrated. - :param instance_dir: - instance path to use, calculated externally to handle block - migrating an instance with an old style instance path - :param disk_info_json: - json strings specified in get_instance_disk_info - :param fallback_from_host: - host where we can retrieve images if the glance images are - not available. - - """ - if not disk_info_json: - disk_info = [] - else: - disk_info = jsonutils.loads(disk_info_json) - - for info in disk_info: - base = os.path.basename(info['path']) - # Get image type and create empty disk image, and - # create backing file in case of qcow2. - instance_disk = os.path.join(instance_dir, base) - if not info['backing_file'] and not os.path.exists(instance_disk): - libvirt_utils.create_image(info['type'], instance_disk, - info['virt_disk_size']) - elif info['backing_file']: - # Creating backing file follows same way as spawning instances. - cache_name = os.path.basename(info['backing_file']) - - image = self.image_backend.image(instance, - instance_disk, - CONF.libvirt.images_type) - if cache_name.startswith('ephemeral'): - image.cache(fetch_func=self._create_ephemeral, - fs_label=cache_name, - os_type=instance.os_type, - filename=cache_name, - size=info['virt_disk_size'], - ephemeral_size=instance.ephemeral_gb) - elif cache_name.startswith('swap'): - inst_type = instance.get_flavor() - swap_mb = inst_type.swap - image.cache(fetch_func=self._create_swap, - filename="swap_%s" % swap_mb, - size=swap_mb * units.Mi, - swap_mb=swap_mb) - else: - self._try_fetch_image_cache(image, - libvirt_utils.fetch_image, - context, cache_name, - instance.image_ref, - instance, - info['virt_disk_size'], - fallback_from_host) - - # if image has kernel and ramdisk, just download - # following normal way. - self._fetch_instance_kernel_ramdisk( - context, instance, fallback_from_host=fallback_from_host) - - def post_live_migration(self, context, instance, block_device_info, - migrate_data=None): - # Disconnect from volume server - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - connector = self.get_volume_connector(instance) - volume_api = self._volume_api - for vol in block_device_mapping: - # Retrieve connection info from Cinder's initialize_connection API. - # The info returned will be accurate for the source server. - volume_id = vol['connection_info']['serial'] - connection_info = volume_api.initialize_connection(context, - volume_id, - connector) - - # Pull out multipath_id from the bdm information. The - # multipath_id can be placed into the connection info - # because it is based off of the volume and will be the - # same on the source and destination hosts. - if 'multipath_id' in vol['connection_info']['data']: - multipath_id = vol['connection_info']['data']['multipath_id'] - connection_info['data']['multipath_id'] = multipath_id - - disk_dev = vol['mount_device'].rpartition("/")[2] - self._disconnect_volume(connection_info, disk_dev) - - def post_live_migration_at_source(self, context, instance, network_info): - """Unplug VIFs from networks at source. - - :param context: security context - :param instance: instance object reference - :param network_info: instance network information - """ - self.unplug_vifs(instance, network_info) - - def post_live_migration_at_destination(self, context, - instance, - network_info, - block_migration=False, - block_device_info=None): - """Post operation of live migration at destination host. 
- - :param context: security context - :param instance: - nova.db.sqlalchemy.models.Instance object - instance object that is migrated. - :param network_info: instance network information - :param block_migration: if true, post operation of block_migration. - """ - # Define the migrated instance; otherwise, suspend/destroy does not work. - dom_list = self._conn.listDefinedDomains() - if instance.name not in dom_list: - image_meta = utils.get_image_from_system_metadata( - instance.system_metadata) - # In case of block migration, destination does not have - # libvirt.xml - disk_info = blockinfo.get_disk_info( - CONF.libvirt.virt_type, instance, - image_meta, block_device_info) - xml = self._get_guest_xml(context, instance, - network_info, disk_info, - image_meta, - block_device_info=block_device_info, - write_to_disk=True) - self._conn.defineXML(xml) - - def _get_instance_disk_info(self, instance_name, xml, - block_device_info=None): - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - - volume_devices = set() - for vol in block_device_mapping: - disk_dev = vol['mount_device'].rpartition("/")[2] - volume_devices.add(disk_dev) - - disk_info = [] - doc = etree.fromstring(xml) - disk_nodes = doc.findall('.//devices/disk') - path_nodes = doc.findall('.//devices/disk/source') - driver_nodes = doc.findall('.//devices/disk/driver') - target_nodes = doc.findall('.//devices/disk/target') - - for cnt, path_node in enumerate(path_nodes): - disk_type = disk_nodes[cnt].get('type') - path = path_node.get('file') or path_node.get('dev') - target = target_nodes[cnt].attrib['dev'] - - if not path: - LOG.debug('skipping disk for %s as it does not have a path', - instance_name) - continue - - if disk_type not in ['file', 'block']: - LOG.debug('skipping disk %s because it looks like a volume', path) - continue - - if target in volume_devices: - LOG.debug('skipping disk %(path)s (%(target)s) as it is a ' - 'volume', {'path': path, 'target': target}) - continue - - # get the real disk size or - # raise a localized error if image is unavailable - if disk_type == 'file': - dk_size = int(os.path.getsize(path)) - elif disk_type == 'block': - dk_size = lvm.get_volume_size(path) - - disk_type = driver_nodes[cnt].get('type') - if disk_type == "qcow2": - backing_file = libvirt_utils.get_disk_backing_file(path) - virt_size = disk.get_disk_size(path) - over_commit_size = int(virt_size) - dk_size - else: - backing_file = "" - virt_size = dk_size - over_commit_size = 0 - - disk_info.append({'type': disk_type, - 'path': path, - 'virt_disk_size': virt_size, - 'backing_file': backing_file, - 'disk_size': dk_size, - 'over_committed_disk_size': over_commit_size}) - return jsonutils.dumps(disk_info) - - def get_instance_disk_info(self, instance, - block_device_info=None): - try: - dom = self._host.get_domain(instance) - xml = dom.XMLDesc(0) - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - msg = (_('Error from libvirt while getting description of ' - '%(instance_name)s: [Error Code %(error_code)s] ' - '%(ex)s') % - {'instance_name': instance.name, - 'error_code': error_code, - 'ex': ex}) - LOG.warn(msg) - raise exception.InstanceNotFound(instance_id=instance.name) - - return self._get_instance_disk_info(instance.name, xml, - block_device_info) - - def _get_disk_over_committed_size_total(self): - """Return total over committed disk size for all instances.""" - # Disk size that all instances use: virtual_size - disk_size - disk_over_committed_size = 0 - for dom in 
self._host.list_instance_domains(): - try: - xml = dom.XMLDesc(0) - disk_infos = jsonutils.loads( - self._get_instance_disk_info(dom.name(), xml)) - for info in disk_infos: - disk_over_committed_size += int( - info['over_committed_disk_size']) - except libvirt.libvirtError as ex: - error_code = ex.get_error_code() - LOG.warn(_LW( - 'Error from libvirt while getting description of ' - '%(instance_name)s: [Error Code %(error_code)s] %(ex)s' - ) % {'instance_name': dom.name(), - 'error_code': error_code, - 'ex': ex}) - except OSError as e: - if e.errno == errno.ENOENT: - LOG.warn(_LW('Periodic task is updating the host stat, ' - 'it is trying to get disk %(i_name)s, ' - 'but disk file was removed by concurrent ' - 'operations such as resize.'), - {'i_name': dom.name()}) - elif e.errno == errno.EACCES: - LOG.warn(_LW('Periodic task is updating the host stat, ' - 'it is trying to get disk %(i_name)s, ' - 'but access is denied. It is most likely ' - 'due to a VM that exists on the compute ' - 'node but is not managed by Nova.'), - {'i_name': dom.name()}) - else: - raise - except exception.VolumeBDMPathNotFound as e: - LOG.warn(_LW('Periodic task is updating the host stats, ' - 'it is trying to get disk info for %(i_name)s, ' - 'but the backing volume block device was removed ' - 'by concurrent operations such as resize. ' - 'Error: %(error)s'), - {'i_name': dom.name(), - 'error': e}) - # NOTE(gtt116): give other tasks a chance. - greenthread.sleep(0) - return disk_over_committed_size - - def unfilter_instance(self, instance, network_info): - """See comments of same method in firewall_driver.""" - self.firewall_driver.unfilter_instance(instance, - network_info=network_info) - - def get_available_nodes(self, refresh=False): - return [self._host.get_hostname()] - - def get_host_cpu_stats(self): - """Return the current CPU state of the host.""" - # Extract node's CPU statistics. - stats = self._conn.getCPUStats(libvirt.VIR_NODE_CPU_STATS_ALL_CPUS, 0) - # getInfo() returns various information about the host node - # No. 3 is the expected CPU frequency. - stats["frequency"] = self._conn.getInfo()[3] - return stats - - def get_host_uptime(self): - """Returns the result of calling "uptime".""" - out, err = utils.execute('env', 'LANG=C', 'uptime') - return out - - def manage_image_cache(self, context, all_instances): - """Manage the local cache of images.""" - self.image_cache_manager.update(context, all_instances) - - def _cleanup_remote_migration(self, dest, inst_base, inst_base_resize, - shared_storage=False): - """Used only for cleanup in case migrate_disk_and_power_off fails.""" - try: - if os.path.exists(inst_base_resize): - utils.execute('rm', '-rf', inst_base) - utils.execute('mv', inst_base_resize, inst_base) - if not shared_storage: - utils.execute('ssh', dest, 'rm', '-rf', inst_base) - except Exception: - pass - - def _is_storage_shared_with(self, dest, inst_base): - # NOTE (rmk): There are two methods of determining whether we are - # on the same filesystem: the source and dest IP are the - # same, or we create a file on the dest system via SSH - # and check whether the source system can also see it. 
- shared_storage = (dest == self.get_host_ip_addr()) - if not shared_storage: - tmp_file = uuid.uuid4().hex + '.tmp' - tmp_path = os.path.join(inst_base, tmp_file) - - try: - utils.execute('ssh', dest, 'touch', tmp_path) - if os.path.exists(tmp_path): - shared_storage = True - os.unlink(tmp_path) - else: - utils.execute('ssh', dest, 'rm', tmp_path) - except Exception: - pass - return shared_storage - - def migrate_disk_and_power_off(self, context, instance, dest, - flavor, network_info, - block_device_info=None, - timeout=0, retry_interval=0): - LOG.debug("Starting migrate_disk_and_power_off", - instance=instance) - - ephemerals = driver.block_device_info_get_ephemerals(block_device_info) - - # get_bdm_ephemeral_disk_size() will return 0 if the new - # instance's requested block device mapping contain no - # ephemeral devices. However, we still want to check if - # the original instance's ephemeral_gb property was set and - # ensure that the new requested flavor ephemeral size is greater - eph_size = (block_device.get_bdm_ephemeral_disk_size(ephemerals) or - instance.ephemeral_gb) - - # Checks if the migration needs a disk resize down. - root_down = flavor['root_gb'] < instance.root_gb - ephemeral_down = flavor['ephemeral_gb'] < eph_size - disk_info_text = self.get_instance_disk_info( - instance, block_device_info=block_device_info) - booted_from_volume = self._is_booted_from_volume(instance, - disk_info_text) - if (root_down and not booted_from_volume) or ephemeral_down: - reason = _("Unable to resize disk down.") - raise exception.InstanceFaultRollback( - exception.ResizeError(reason=reason)) - - disk_info = jsonutils.loads(disk_info_text) - - # NOTE(dgenin): Migration is not implemented for LVM backed instances. - if CONF.libvirt.images_type == 'lvm' and not booted_from_volume: - reason = _("Migration is not supported for LVM backed instances") - raise exception.InstanceFaultRollback( - exception.MigrationPreCheckError(reason=reason)) - - # copy disks to destination - # rename instance dir to +_resize at first for using - # shared storage for instance dir (eg. NFS). - inst_base = libvirt_utils.get_instance_path(instance) - inst_base_resize = inst_base + "_resize" - shared_storage = self._is_storage_shared_with(dest, inst_base) - - # try to create the directory on the remote compute node - # if this fails we pass the exception up the stack so we can catch - # failures here earlier - if not shared_storage: - try: - utils.execute('ssh', dest, 'mkdir', '-p', inst_base) - except processutils.ProcessExecutionError as e: - reason = _("not able to execute ssh command: %s") % e - raise exception.InstanceFaultRollback( - exception.ResizeError(reason=reason)) - - self.power_off(instance, timeout, retry_interval) - - block_device_mapping = driver.block_device_info_get_mapping( - block_device_info) - for vol in block_device_mapping: - connection_info = vol['connection_info'] - disk_dev = vol['mount_device'].rpartition("/")[2] - self._disconnect_volume(connection_info, disk_dev) - - try: - utils.execute('mv', inst_base, inst_base_resize) - # if we are migrating the instance with shared storage then - # create the directory. 
If it is a remote node the directory - # has already been created - if shared_storage: - dest = None - utils.execute('mkdir', '-p', inst_base) - - active_flavor = instance.get_flavor() - for info in disk_info: - # assume inst_base == dirname(info['path']) - img_path = info['path'] - fname = os.path.basename(img_path) - from_path = os.path.join(inst_base_resize, fname) - - if (fname == 'disk.swap' and - active_flavor.get('swap', 0) != flavor.get('swap', 0)): - # To properly resize the swap partition, it must be - # re-created with the proper size. This is acceptable - # because when an OS is shut down, the contents of the - # swap space are just garbage; the OS does not care about - # what is in it. - - # We will not copy over the swap disk here, and rely on - # finish_migration/_create_image to re-create it for us. - continue - - on_execute = lambda process: self.job_tracker.add_job( - instance, process.pid) - on_completion = lambda process: self.job_tracker.remove_job( - instance, process.pid) - - if info['type'] == 'qcow2' and info['backing_file']: - tmp_path = from_path + "_rbase" - # merge backing file - utils.execute('qemu-img', 'convert', '-f', 'qcow2', - '-O', 'qcow2', from_path, tmp_path) - - if shared_storage: - utils.execute('mv', tmp_path, img_path) - else: - libvirt_utils.copy_image(tmp_path, img_path, host=dest, - on_execute=on_execute, - on_completion=on_completion) - utils.execute('rm', '-f', tmp_path) - - else: # raw or qcow2 with no backing file - libvirt_utils.copy_image(from_path, img_path, host=dest, - on_execute=on_execute, - on_completion=on_completion) - except Exception: - with excutils.save_and_reraise_exception(): - self._cleanup_remote_migration(dest, inst_base, - inst_base_resize, - shared_storage) - - return disk_info_text - - def _wait_for_running(self, instance): - state = self.get_info(instance).state - - if state == power_state.RUNNING: - LOG.info(_LI("Instance running successfully."), instance=instance) - raise loopingcall.LoopingCallDone() - - @staticmethod - def _disk_size_from_instance(instance, info): - """Determines the disk size from instance properties - - Returns the disk size by using the disk name to determine whether it - is a root or an ephemeral disk, then by checking properties of the - instance returns the size converted to bytes. - - Returns 0 if the disk name does not match (disk, disk.local). - """ - fname = os.path.basename(info['path']) - if fname == 'disk': - size = instance.root_gb - elif fname == 'disk.local': - size = instance.ephemeral_gb - else: - size = 0 - return size * units.Gi - - @staticmethod - def _disk_raw_to_qcow2(path): - """Converts a raw disk to qcow2.""" - path_qcow = path + '_qcow' - utils.execute('qemu-img', 'convert', '-f', 'raw', - '-O', 'qcow2', path, path_qcow) - utils.execute('mv', path_qcow, path) - - @staticmethod - def _disk_qcow2_to_raw(path): - """Converts a qcow2 disk to raw.""" - path_raw = path + '_raw' - utils.execute('qemu-img', 'convert', '-f', 'qcow2', - '-O', 'raw', path, path_raw) - utils.execute('mv', path_raw, path) - - def _disk_resize(self, info, size): - """Attempts to resize a disk to size - - Attempts to resize a disk by checking the capabilities and - preparing the format, then calling disk.api.extend. - - Note: Currently only disk extend is supported. - """ - # If we have a non partitioned image that we can extend - # then ensure we're in 'raw' format so we can extend file system. 
- fmt, org = [info['type']] * 2 - pth = info['path'] - if (size and fmt == 'qcow2' and - disk.can_resize_image(pth, size) and - disk.is_image_extendable(pth, use_cow=True)): - self._disk_qcow2_to_raw(pth) - fmt = 'raw' - - if size: - use_cow = fmt == 'qcow2' - disk.extend(pth, size, use_cow=use_cow) - - if fmt != org: - # back to qcow2 (no backing_file though) so that snapshot - # will be available - self._disk_raw_to_qcow2(pth) - - def finish_migration(self, context, migration, instance, disk_info, - network_info, image_meta, resize_instance, - block_device_info=None, power_on=True): - LOG.debug("Starting finish_migration", instance=instance) - - # resize disks. only "disk" and "disk.local" are necessary. - disk_info = jsonutils.loads(disk_info) - for info in disk_info: - size = self._disk_size_from_instance(instance, info) - if resize_instance: - self._disk_resize(info, size) - if info['type'] == 'raw' and CONF.use_cow_images: - self._disk_raw_to_qcow2(info['path']) - - disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, - instance, - image_meta, - block_device_info) - # assume _create_image does nothing if a target file exists. - self._create_image(context, instance, disk_info['mapping'], - network_info=network_info, - block_device_info=None, inject_files=False, - fallback_from_host=migration.source_compute) - xml = self._get_guest_xml(context, instance, network_info, disk_info, - image_meta, - block_device_info=block_device_info, - write_to_disk=True) - # NOTE(mriedem): vifs_already_plugged=True here, regardless of whether - # or not we've migrated to another host, because we unplug VIFs locally - # and the status change in the port might go undetected by the neutron - # L2 agent (or neutron server) so neutron may not know that the VIF was - # unplugged in the first place and never send an event. - self._create_domain_and_network(context, xml, instance, network_info, - disk_info, - block_device_info=block_device_info, - power_on=power_on, - vifs_already_plugged=True) - if power_on: - timer = loopingcall.FixedIntervalLoopingCall( - self._wait_for_running, - instance) - timer.start(interval=0.5).wait() - - def _cleanup_failed_migration(self, inst_base): - """Make sure that a failed migrate doesn't prevent us from rolling - back in a revert. - """ - try: - shutil.rmtree(inst_base) - except OSError as e: - if e.errno != errno.ENOENT: - raise - - def finish_revert_migration(self, context, instance, network_info, - block_device_info=None, power_on=True): - LOG.debug("Starting finish_revert_migration", - instance=instance) - - inst_base = libvirt_utils.get_instance_path(instance) - inst_base_resize = inst_base + "_resize" - - # NOTE(danms): if we're recovering from a failed migration, - # make sure we don't have a left-over same-host base directory - # that would conflict. Also, don't fail on the rename if the - # failure happened early. 
- if os.path.exists(inst_base_resize): - self._cleanup_failed_migration(inst_base) - utils.execute('mv', inst_base_resize, inst_base) - - image_ref = instance.get('image_ref') - image_meta = compute_utils.get_image_metadata(context, - self._image_api, - image_ref, - instance) - - disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, - instance, - image_meta, - block_device_info) - xml = self._get_guest_xml(context, instance, network_info, disk_info, - image_meta, - block_device_info=block_device_info) - self._create_domain_and_network(context, xml, instance, network_info, - disk_info, - block_device_info=block_device_info, - power_on=power_on, - vifs_already_plugged=True) - - if power_on: - timer = loopingcall.FixedIntervalLoopingCall( - self._wait_for_running, - instance) - timer.start(interval=0.5).wait() - - def confirm_migration(self, migration, instance, network_info): - """Confirms a resize, destroying the source VM.""" - self._cleanup_resize(instance, network_info) - - @staticmethod - def _get_io_devices(xml_doc): - """get the list of io devices from the xml document.""" - result = {"volumes": [], "ifaces": []} - try: - doc = etree.fromstring(xml_doc) - except Exception: - return result - blocks = [('./devices/disk', 'volumes'), - ('./devices/interface', 'ifaces')] - for block, key in blocks: - section = doc.findall(block) - for node in section: - for child in node.getchildren(): - if child.tag == 'target' and child.get('dev'): - result[key].append(child.get('dev')) - return result - - def get_diagnostics(self, instance): - domain = self._host.get_domain(instance) - output = {} - # get cpu time, might launch an exception if the method - # is not supported by the underlying hypervisor being - # used by libvirt - try: - cputime = domain.vcpus()[0] - for i in range(len(cputime)): - output["cpu" + str(i) + "_time"] = cputime[i][2] - except libvirt.libvirtError: - pass - # get io status - xml = domain.XMLDesc(0) - dom_io = LibvirtDriver._get_io_devices(xml) - for guest_disk in dom_io["volumes"]: - try: - # blockStats might launch an exception if the method - # is not supported by the underlying hypervisor being - # used by libvirt - stats = domain.blockStats(guest_disk) - output[guest_disk + "_read_req"] = stats[0] - output[guest_disk + "_read"] = stats[1] - output[guest_disk + "_write_req"] = stats[2] - output[guest_disk + "_write"] = stats[3] - output[guest_disk + "_errors"] = stats[4] - except libvirt.libvirtError: - pass - for interface in dom_io["ifaces"]: - try: - # interfaceStats might launch an exception if the method - # is not supported by the underlying hypervisor being - # used by libvirt - stats = domain.interfaceStats(interface) - output[interface + "_rx"] = stats[0] - output[interface + "_rx_packets"] = stats[1] - output[interface + "_rx_errors"] = stats[2] - output[interface + "_rx_drop"] = stats[3] - output[interface + "_tx"] = stats[4] - output[interface + "_tx_packets"] = stats[5] - output[interface + "_tx_errors"] = stats[6] - output[interface + "_tx_drop"] = stats[7] - except libvirt.libvirtError: - pass - output["memory"] = domain.maxMemory() - # memoryStats might launch an exception if the method - # is not supported by the underlying hypervisor being - # used by libvirt - try: - mem = domain.memoryStats() - for key in mem.keys(): - output["memory-" + key] = mem[key] - except (libvirt.libvirtError, AttributeError): - pass - return output - - def get_instance_diagnostics(self, instance): - domain = self._host.get_domain(instance) - xml = domain.XMLDesc(0) - 
xml_doc = etree.fromstring(xml) - - (state, max_mem, mem, num_cpu, cpu_time) = \ - self._host.get_domain_info(domain) - config_drive = configdrive.required_by(instance) - launched_at = timeutils.normalize_time(instance.launched_at) - uptime = timeutils.delta_seconds(launched_at, - timeutils.utcnow()) - diags = diagnostics.Diagnostics(state=power_state.STATE_MAP[state], - driver='libvirt', - config_drive=config_drive, - hypervisor_os='linux', - uptime=uptime) - diags.memory_details.maximum = max_mem / units.Mi - diags.memory_details.used = mem / units.Mi - - # get cpu time, might launch an exception if the method - # is not supported by the underlying hypervisor being - # used by libvirt - try: - cputime = domain.vcpus()[0] - num_cpus = len(cputime) - for i in range(num_cpus): - diags.add_cpu(time=cputime[i][2]) - except libvirt.libvirtError: - pass - # get io status - dom_io = LibvirtDriver._get_io_devices(xml) - for guest_disk in dom_io["volumes"]: - try: - # blockStats might launch an exception if the method - # is not supported by the underlying hypervisor being - # used by libvirt - stats = domain.blockStats(guest_disk) - diags.add_disk(read_bytes=stats[1], - read_requests=stats[0], - write_bytes=stats[3], - write_requests=stats[2]) - except libvirt.libvirtError: - pass - for interface in dom_io["ifaces"]: - try: - # interfaceStats might launch an exception if the method - # is not supported by the underlying hypervisor being - # used by libvirt - stats = domain.interfaceStats(interface) - diags.add_nic(rx_octets=stats[0], - rx_errors=stats[2], - rx_drop=stats[3], - rx_packets=stats[1], - tx_octets=stats[4], - tx_errors=stats[6], - tx_drop=stats[7], - tx_packets=stats[5]) - except libvirt.libvirtError: - pass - - # Update mac addresses of interface if stats have been reported - if diags.nic_details: - nodes = xml_doc.findall('./devices/interface/mac') - for index, node in enumerate(nodes): - diags.nic_details[index].mac_address = node.get('address') - return diags - - def instance_on_disk(self, instance): - # ensure directories exist and are writable - instance_path = libvirt_utils.get_instance_path(instance) - LOG.debug('Checking instance files accessibility %s', instance_path) - shared_instance_path = os.access(instance_path, os.W_OK) - # NOTE(flwang): For shared block storage scenario, the file system is - # not really shared by the two hosts, but the volume of evacuated - # instance is reachable. - shared_block_storage = (self.image_backend.backend(). - is_shared_block_storage()) - return shared_instance_path or shared_block_storage - - def inject_network_info(self, instance, nw_info): - self.firewall_driver.setup_basic_filtering(instance, nw_info) - - def delete_instance_files(self, instance): - target = libvirt_utils.get_instance_path(instance) - # A resize may be in progress - target_resize = target + '_resize' - # Other threads may attempt to rename the path, so renaming the path - # to target + '_del' (because it is atomic) and iterating through - # twice in the unlikely event that a concurrent rename occurs between - # the two rename attempts in this method. In general this method - # should be fairly thread-safe without these additional checks, since - # other operations involving renames are not permitted when the task - # state is not None and the task state should be set to something - # other than None by the time this method is invoked. 
- target_del = target + '_del' - for i in six.moves.range(2): - try: - utils.execute('mv', target, target_del) - break - except Exception: - pass - try: - utils.execute('mv', target_resize, target_del) - break - except Exception: - pass - # Either the target or target_resize path may still exist if all - # rename attempts failed. - remaining_path = None - for p in (target, target_resize): - if os.path.exists(p): - remaining_path = p - break - - # A previous delete attempt may have been interrupted, so target_del - # may exist even if all rename attempts during the present method - # invocation failed due to the absence of both target and - # target_resize. - if not remaining_path and os.path.exists(target_del): - self.job_tracker.terminate_jobs(instance) - - LOG.info(_LI('Deleting instance files %s'), target_del, - instance=instance) - remaining_path = target_del - try: - shutil.rmtree(target_del) - except OSError as e: - LOG.error(_LE('Failed to cleanup directory %(target)s: ' - '%(e)s'), {'target': target_del, 'e': e}, - instance=instance) - - # It is possible that the delete failed, if so don't mark the instance - # as cleaned. - if remaining_path and os.path.exists(remaining_path): - LOG.info(_LI('Deletion of %s failed'), remaining_path, - instance=instance) - return False - - LOG.info(_LI('Deletion of %s complete'), target_del, instance=instance) - return True - - @property - def need_legacy_block_device_info(self): - return False - - def default_root_device_name(self, instance, image_meta, root_bdm): - - disk_bus = blockinfo.get_disk_bus_for_device_type( - CONF.libvirt.virt_type, image_meta, "disk") - cdrom_bus = blockinfo.get_disk_bus_for_device_type( - CONF.libvirt.virt_type, image_meta, "cdrom") - root_info = blockinfo.get_root_info( - CONF.libvirt.virt_type, image_meta, root_bdm, disk_bus, - cdrom_bus) - return block_device.prepend_dev(root_info['dev']) - - def default_device_names_for_instance(self, instance, root_device_name, - *block_device_lists): - image_meta = utils.get_image_from_system_metadata( - instance.system_metadata) - - ephemerals, swap, block_device_mapping = block_device_lists[:3] - - blockinfo.default_device_names(CONF.libvirt.virt_type, - nova_context.get_admin_context(), - instance, root_device_name, - ephemerals, swap, - block_device_mapping, - image_meta) - - def is_supported_fs_format(self, fs_type): - return fs_type in [disk.FS_FORMAT_EXT2, disk.FS_FORMAT_EXT3, - disk.FS_FORMAT_EXT4, disk.FS_FORMAT_XFS] - - def _get_power_state(self, virt_dom): - dom_info = self._host.get_domain_info(virt_dom) - return LIBVIRT_POWER_STATE[dom_info[0]] diff --git a/deployment_scripts/puppet/install_scaleio_compute/files/7.0/volume.py b/deployment_scripts/puppet/install_scaleio_compute/files/7.0/volume.py deleted file mode 100644 index 04456c6..0000000 --- a/deployment_scripts/puppet/install_scaleio_compute/files/7.0/volume.py +++ /dev/null @@ -1,2026 +0,0 @@ -# -# Copyright (c) 2011-2015 EMC Corporation -# All Rights Reserved -# EMC Confidential: Restricted Internal Distribution -# 4ebcffbc4faf87cb4da8841bbf214d32f045c8a8.ScaleIO -# - -"""Volume drivers for libvirt.""" - -import errno -import glob -import os -import platform -import re -import time -import urllib2 -import urlparse -import requests -import json -import sys -import urllib - -from oslo_concurrency import processutils -from oslo_config import cfg -from oslo_log import log as logging -from oslo_utils import strutils -import six -import six.moves.urllib.parse as urlparse - -from nova.compute import arch -from 
nova import exception -from nova.i18n import _ -from nova.i18n import _LE -from nova.i18n import _LI -from nova.i18n import _LW -from nova.openstack.common import loopingcall -from nova import paths -from nova.storage import linuxscsi -from nova import utils -from nova.virt.libvirt import config as vconfig -from nova.virt.libvirt import quobyte -from nova.virt.libvirt import remotefs -from nova.virt.libvirt import utils as libvirt_utils - -LOG = logging.getLogger(__name__) - -volume_opts = [ - cfg.IntOpt('num_iscsi_scan_tries', - default=5, - help='Number of times to rescan iSCSI target to find volume'), - cfg.IntOpt('num_iser_scan_tries', - default=5, - help='Number of times to rescan iSER target to find volume'), - cfg.StrOpt('rbd_user', - help='The RADOS client name for accessing rbd volumes'), - cfg.StrOpt('rbd_secret_uuid', - help='The libvirt UUID of the secret for the rbd_user' - 'volumes'), - cfg.StrOpt('nfs_mount_point_base', - default=paths.state_path_def('mnt'), - help='Directory where the NFS volume is mounted on the' - ' compute node'), - cfg.StrOpt('nfs_mount_options', - help='Mount options passed to the NFS client. See section ' - 'of the nfs man page for details'), - cfg.StrOpt('smbfs_mount_point_base', - default=paths.state_path_def('mnt'), - help='Directory where the SMBFS shares are mounted on the ' - 'compute node'), - cfg.StrOpt('smbfs_mount_options', - default='', - help='Mount options passed to the SMBFS client. See ' - 'mount.cifs man page for details. Note that the ' - 'libvirt-qemu uid and gid must be specified.'), - cfg.IntOpt('num_aoe_discover_tries', - default=3, - help='Number of times to rediscover AoE target to find volume'), - cfg.StrOpt('glusterfs_mount_point_base', - default=paths.state_path_def('mnt'), - help='Directory where the glusterfs volume is mounted on the ' - 'compute node'), - cfg.BoolOpt('iscsi_use_multipath', - default=False, - help='Use multipath connection of the iSCSI volume'), - cfg.BoolOpt('iser_use_multipath', - default=False, - help='Use multipath connection of the iSER volume'), - cfg.StrOpt('scality_sofs_config', - help='Path or URL to Scality SOFS configuration file'), - cfg.StrOpt('scality_sofs_mount_point', - default='$state_path/scality', - help='Base dir where Scality SOFS shall be mounted'), - cfg.ListOpt('qemu_allowed_storage_drivers', - default=[], - help='Protocols listed here will be accessed directly ' - 'from QEMU. Currently supported protocols: [gluster]'), - cfg.StrOpt('quobyte_mount_point_base', - default=paths.state_path_def('mnt'), - help='Directory where the Quobyte volume is mounted on the ' - 'compute node'), - cfg.StrOpt('quobyte_client_cfg', - help='Path to a Quobyte Client configuration file.'), - cfg.StrOpt('iscsi_iface', - deprecated_name='iscsi_transport', - help='The iSCSI transport iface to use to connect to target in ' - 'case offload support is desired. Supported transports ' - 'are be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx and ocs. 
' - 'Default format is transport_name.hwaddress and can be ' - 'generated manually or via iscsiadm -m iface'), - # iser is also supported, but use LibvirtISERVolumeDriver - # instead - ] - -CONF = cfg.CONF -CONF.register_opts(volume_opts, 'libvirt') - - -class LibvirtBaseVolumeDriver(object): - """Base class for volume drivers.""" - def __init__(self, connection, is_block_dev): - self.connection = connection - self.is_block_dev = is_block_dev - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = vconfig.LibvirtConfigGuestDisk() - conf.driver_name = libvirt_utils.pick_disk_driver_name( - self.connection._host.get_version(), - self.is_block_dev - ) - - conf.source_device = disk_info['type'] - conf.driver_format = "raw" - conf.driver_cache = "none" - conf.target_dev = disk_info['dev'] - conf.target_bus = disk_info['bus'] - conf.serial = connection_info.get('serial') - - # Support for block size tuning - data = {} - if 'data' in connection_info: - data = connection_info['data'] - if 'logical_block_size' in data: - conf.logical_block_size = data['logical_block_size'] - if 'physical_block_size' in data: - conf.physical_block_size = data['physical_block_size'] - - # Extract rate_limit control parameters - if 'qos_specs' in data and data['qos_specs']: - tune_opts = ['total_bytes_sec', 'read_bytes_sec', - 'write_bytes_sec', 'total_iops_sec', - 'read_iops_sec', 'write_iops_sec'] - specs = data['qos_specs'] - if isinstance(specs, dict): - for k, v in specs.iteritems(): - if k in tune_opts: - new_key = 'disk_' + k - setattr(conf, new_key, v) - else: - LOG.warn(_LW('Unknown content in connection_info/' - 'qos_specs: %s'), specs) - - # Extract access_mode control parameters - if 'access_mode' in data and data['access_mode']: - access_mode = data['access_mode'] - if access_mode in ('ro', 'rw'): - conf.readonly = access_mode == 'ro' - else: - LOG.error(_LE('Unknown content in ' - 'connection_info/access_mode: %s'), - access_mode) - raise exception.InvalidVolumeAccessMode( - access_mode=access_mode) - - return conf - - def _get_secret_uuid(self, conf, password=None): - secret = self.connection._host.find_secret(conf.source_protocol, - conf.source_name) - if secret is None: - secret = self.connection._host.create_secret(conf.source_protocol, - conf.source_name, - password) - return secret.UUIDString() - - def _delete_secret_by_name(self, connection_info): - source_protocol = connection_info['driver_volume_type'] - netdisk_properties = connection_info['data'] - if source_protocol == 'rbd': - return - elif source_protocol == 'iscsi': - usage_type = 'iscsi' - usage_name = ("%(target_iqn)s/%(target_lun)s" % - netdisk_properties) - self.connection._host.delete_secret(usage_type, usage_name) - - def connect_volume(self, connection_info, disk_info): - """Connect the volume. 
Returns xml for libvirt.""" - pass - - def disconnect_volume(self, connection_info, disk_dev): - """Disconnect the volume.""" - pass - - -class LibvirtVolumeDriver(LibvirtBaseVolumeDriver): - """Class for volumes backed by local file.""" - def __init__(self, connection): - super(LibvirtVolumeDriver, - self).__init__(connection, is_block_dev=True) - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtVolumeDriver, - self).get_config(connection_info, disk_info) - conf.source_type = "block" - conf.source_path = connection_info['data']['device_path'] - return conf - - -class LibvirtFakeVolumeDriver(LibvirtBaseVolumeDriver): - """Driver to attach fake volumes to libvirt.""" - def __init__(self, connection): - super(LibvirtFakeVolumeDriver, - self).__init__(connection, is_block_dev=True) - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtFakeVolumeDriver, - self).get_config(connection_info, disk_info) - conf.source_type = "network" - conf.source_protocol = "fake" - conf.source_name = "fake" - return conf - - -class LibvirtNetVolumeDriver(LibvirtBaseVolumeDriver): - """Driver to attach Network volumes to libvirt.""" - def __init__(self, connection): - super(LibvirtNetVolumeDriver, - self).__init__(connection, is_block_dev=False) - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtNetVolumeDriver, - self).get_config(connection_info, disk_info) - - netdisk_properties = connection_info['data'] - conf.source_type = "network" - conf.source_protocol = connection_info['driver_volume_type'] - conf.source_name = netdisk_properties.get('name') - conf.source_hosts = netdisk_properties.get('hosts', []) - conf.source_ports = netdisk_properties.get('ports', []) - auth_enabled = netdisk_properties.get('auth_enabled') - if (conf.source_protocol == 'rbd' and - CONF.libvirt.rbd_secret_uuid): - conf.auth_secret_uuid = CONF.libvirt.rbd_secret_uuid - auth_enabled = True # Force authentication locally - if CONF.libvirt.rbd_user: - conf.auth_username = CONF.libvirt.rbd_user - if conf.source_protocol == 'iscsi': - try: - conf.source_name = ("%(target_iqn)s/%(target_lun)s" % - netdisk_properties) - target_portal = netdisk_properties['target_portal'] - except KeyError: - raise exception.NovaException(_("Invalid volume source data")) - - ip, port = utils.parse_server_string(target_portal) - if ip == '' or port == '': - raise exception.NovaException(_("Invalid target_lun")) - conf.source_hosts = [ip] - conf.source_ports = [port] - if netdisk_properties.get('auth_method') == 'CHAP': - auth_enabled = True - conf.auth_secret_type = 'iscsi' - password = netdisk_properties.get('auth_password') - conf.auth_secret_uuid = self._get_secret_uuid(conf, password) - if auth_enabled: - conf.auth_username = (conf.auth_username or - netdisk_properties['auth_username']) - conf.auth_secret_type = (conf.auth_secret_type or - netdisk_properties['secret_type']) - conf.auth_secret_uuid = (conf.auth_secret_uuid or - netdisk_properties['secret_uuid']) - return conf - - def disconnect_volume(self, connection_info, disk_dev): - """Detach the volume from instance_name.""" - super(LibvirtNetVolumeDriver, - self).disconnect_volume(connection_info, disk_dev) - self._delete_secret_by_name(connection_info) - - -class LibvirtISCSIVolumeDriver(LibvirtBaseVolumeDriver): - """Driver to attach Network volumes to libvirt.""" - supported_transports = ['be2iscsi', 'bnx2i', 'cxgb3i', - 'cxgb4i', 
'qla4xxx', 'ocs'] - - def __init__(self, connection): - super(LibvirtISCSIVolumeDriver, self).__init__(connection, - is_block_dev=True) - self.num_scan_tries = CONF.libvirt.num_iscsi_scan_tries - self.use_multipath = CONF.libvirt.iscsi_use_multipath - if CONF.libvirt.iscsi_iface: - self.transport = CONF.libvirt.iscsi_iface - else: - self.transport = 'default' - - def _get_transport(self): - if self._validate_transport(self.transport): - return self.transport - else: - return 'default' - - def _validate_transport(self, transport_iface): - """Check that given iscsi_iface uses only supported transports - - Accepted transport names for provided iface param are - be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx and ocs. iSER uses its - own separate driver. Note the difference between transport and - iface; unlike iscsi_tcp/iser, this is not one and the same for - offloaded transports, where the default format is - transport_name.hwaddress - """ - # We can support iser here as well, but currently reject it as the - # separate iser driver has not yet been deprecated. - if transport_iface == 'default': - return True - # Will return (6) if iscsi_iface file was not found, or (2) if iscsid - # could not be contacted - out = self._run_iscsiadm_bare(['-m', - 'iface', - '-I', - transport_iface], - check_exit_code=[0, 2, 6])[0] or "" - LOG.debug("iscsiadm %(iface)s configuration: stdout=%(out)s", - {'iface': transport_iface, 'out': out}) - for data in [line.split() for line in out.splitlines()]: - if data[0] == 'iface.transport_name': - if data[2] in self.supported_transports: - return True - - LOG.warn(_LW("No usable transport found for iscsi iface %s. " - "Falling back to default transport"), - transport_iface) - return False - - def _run_iscsiadm(self, iscsi_properties, iscsi_command, **kwargs): - check_exit_code = kwargs.pop('check_exit_code', 0) - (out, err) = utils.execute('iscsiadm', '-m', 'node', '-T', - iscsi_properties['target_iqn'], - '-p', iscsi_properties['target_portal'], - *iscsi_command, run_as_root=True, - check_exit_code=check_exit_code) - msg = ('iscsiadm %(command)s: stdout=%(out)s stderr=%(err)s' % - {'command': iscsi_command, 'out': out, 'err': err}) - # NOTE(bpokorny): iscsi_command can contain passwords so we need to - # sanitize the password in the message. - LOG.debug(strutils.mask_password(msg)) - return (out, err) - - def _iscsiadm_update(self, iscsi_properties, property_key, property_value, - **kwargs): - iscsi_command = ('--op', 'update', '-n', property_key, - '-v', property_value) - return self._run_iscsiadm(iscsi_properties, iscsi_command, **kwargs) - - def _get_target_portals_from_iscsiadm_output(self, output): - # return both portals and iqns - # - # as we are parsing a command line utility, allow for the - # possibility that additional debug data is spewed in the - # stream, and only grab actual ip / iqn lines. 
- targets = [] - for data in [line.split() for line in output.splitlines()]: - if len(data) == 2 and data[1].startswith('iqn.'): - targets.append(data) - return targets - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtISCSIVolumeDriver, - self).get_config(connection_info, disk_info) - conf.source_type = "block" - conf.source_path = connection_info['data']['device_path'] - return conf - - @utils.synchronized('connect_volume') - def connect_volume(self, connection_info, disk_info): - """Attach the volume to instance_name.""" - iscsi_properties = connection_info['data'] - - # multipath installed, discovering other targets if available - # multipath should be configured on the nova-compute node, - # in order to fit storage vendor - out = None - if self.use_multipath: - out = self._run_iscsiadm_discover(iscsi_properties) - - # There are two types of iSCSI multipath devices. One which shares - # the same iqn between multiple portals, and the other which use - # different iqns on different portals. Try to identify the type by - # checking the iscsiadm output if the iqn is used by multiple - # portals. If it is, it's the former, so use the supplied iqn. - # Otherwise, it's the latter, so try the ip,iqn combinations to - # find the targets which constitutes the multipath device. - ips_iqns = self._get_target_portals_from_iscsiadm_output(out) - same_portal = False - all_portals = set() - match_portals = set() - for ip, iqn in ips_iqns: - all_portals.add(ip) - if iqn == iscsi_properties['target_iqn']: - match_portals.add(ip) - if len(all_portals) == len(match_portals): - same_portal = True - - for ip, iqn in ips_iqns: - props = iscsi_properties.copy() - props['target_portal'] = ip.split(",")[0] - if not same_portal: - props['target_iqn'] = iqn - self._connect_to_iscsi_portal(props) - - self._rescan_iscsi() - else: - self._connect_to_iscsi_portal(iscsi_properties) - - # Detect new/resized LUNs for existing sessions - self._run_iscsiadm(iscsi_properties, ("--rescan",)) - - host_device = self._get_host_device(iscsi_properties) - - # The /dev/disk/by-path/... node is not always present immediately - # TODO(justinsb): This retry-with-delay is a pattern, move to utils? - tries = 0 - disk_dev = disk_info['dev'] - - # Check host_device only when transport is used, since otherwise it is - # directly derived from properties. Only needed for unit tests - while ((self._get_transport() != "default" and not host_device) - or not os.path.exists(host_device)): - if tries >= self.num_scan_tries: - raise exception.NovaException(_("iSCSI device not found at %s") - % (host_device)) - - LOG.warn(_LW("ISCSI volume not yet found at: %(disk_dev)s. " - "Will rescan & retry. Try number: %(tries)s"), - {'disk_dev': disk_dev, 'tries': tries}) - - # The rescan isn't documented as being necessary(?), but it helps - self._run_iscsiadm(iscsi_properties, ("--rescan",)) - - # For offloaded open-iscsi transports, host_device cannot be - # guessed unlike iscsi_tcp where it can be obtained from - # properties, so try and get it again. 
- if not host_device and self._get_transport() != "default": - host_device = self._get_host_device(iscsi_properties) - - tries = tries + 1 - if not host_device or not os.path.exists(host_device): - time.sleep(tries ** 2) - - if tries != 0: - LOG.debug("Found iSCSI node %(disk_dev)s " - "(after %(tries)s rescans)", - {'disk_dev': disk_dev, - 'tries': tries}) - - if self.use_multipath: - # we use the multipath device instead of the single path device - self._rescan_multipath() - - multipath_device = self._get_multipath_device_name(host_device) - - if multipath_device is not None: - host_device = multipath_device - connection_info['data']['multipath_id'] = \ - multipath_device.split('/')[-1] - - connection_info['data']['device_path'] = host_device - - def _run_iscsiadm_discover(self, iscsi_properties): - def run_iscsiadm_update_discoverydb(): - return utils.execute( - 'iscsiadm', - '-m', 'discoverydb', - '-t', 'sendtargets', - '-p', iscsi_properties['target_portal'], - '--op', 'update', - '-n', "discovery.sendtargets.auth.authmethod", - '-v', iscsi_properties['discovery_auth_method'], - '-n', "discovery.sendtargets.auth.username", - '-v', iscsi_properties['discovery_auth_username'], - '-n', "discovery.sendtargets.auth.password", - '-v', iscsi_properties['discovery_auth_password'], - run_as_root=True) - - out = None - if iscsi_properties.get('discovery_auth_method'): - try: - run_iscsiadm_update_discoverydb() - except processutils.ProcessExecutionError as exc: - # iscsiadm returns 6 for "db record not found" - if exc.exit_code == 6: - (out, err) = utils.execute( - 'iscsiadm', - '-m', 'discoverydb', - '-t', 'sendtargets', - '-p', iscsi_properties['target_portal'], - '--op', 'new', - run_as_root=True) - run_iscsiadm_update_discoverydb() - else: - raise - - out = self._run_iscsiadm_bare( - ['-m', - 'discoverydb', - '-t', - 'sendtargets', - '-p', - iscsi_properties['target_portal'], - '--discover'], - check_exit_code=[0, 255])[0] or "" - else: - out = self._run_iscsiadm_bare( - ['-m', - 'discovery', - '-t', - 'sendtargets', - '-p', - iscsi_properties['target_portal']], - check_exit_code=[0, 255])[0] or "" - return out - - @utils.synchronized('connect_volume') - def disconnect_volume(self, connection_info, disk_dev): - """Detach the volume from instance_name.""" - iscsi_properties = connection_info['data'] - host_device = self._get_host_device(iscsi_properties) - multipath_device = None - if self.use_multipath: - if 'multipath_id' in iscsi_properties: - multipath_device = ('/dev/mapper/%s' % - iscsi_properties['multipath_id']) - else: - multipath_device = self._get_multipath_device_name(host_device) - - super(LibvirtISCSIVolumeDriver, - self).disconnect_volume(connection_info, disk_dev) - - if self.use_multipath and multipath_device: - return self._disconnect_volume_multipath_iscsi(iscsi_properties, - multipath_device) - - # NOTE(vish): Only disconnect from the target if no luns from the - # target are in use. 
- device_byname = ("ip-%s-iscsi-%s-lun-" % - (iscsi_properties['target_portal'], - iscsi_properties['target_iqn'])) - devices = self.connection._get_all_block_devices() - devices = [dev for dev in devices if (device_byname in dev - and - dev.startswith( - '/dev/disk/by-path/'))] - if not devices: - self._disconnect_from_iscsi_portal(iscsi_properties) - elif host_device not in devices: - # Delete device if LUN is not in use by another instance - self._delete_device(host_device) - - def _delete_device(self, device_path): - device_name = os.path.basename(os.path.realpath(device_path)) - delete_control = '/sys/block/' + device_name + '/device/delete' - if os.path.exists(delete_control): - # Copy '1' from stdin to the device delete control file - utils.execute('cp', '/dev/stdin', delete_control, - process_input='1', run_as_root=True) - else: - LOG.warn(_LW("Unable to delete volume device %s"), device_name) - - def _remove_multipath_device_descriptor(self, disk_descriptor): - disk_descriptor = disk_descriptor.replace('/dev/mapper/', '') - try: - self._run_multipath(['-f', disk_descriptor], - check_exit_code=[0, 1]) - except processutils.ProcessExecutionError as exc: - # Because not all cinder drivers need to remove the dev mapper, - # here just logs a warning to avoid affecting those drivers in - # exceptional cases. - LOG.warn(_LW('Failed to remove multipath device descriptor ' - '%(dev_mapper)s. Exception message: %(msg)s') - % {'dev_mapper': disk_descriptor, - 'msg': exc.message}) - - def _disconnect_volume_multipath_iscsi(self, iscsi_properties, - multipath_device): - self._rescan_iscsi() - self._rescan_multipath() - block_devices = self.connection._get_all_block_devices() - devices = [] - for dev in block_devices: - if "/mapper/" in dev: - devices.append(dev) - else: - mpdev = self._get_multipath_device_name(dev) - if mpdev: - devices.append(mpdev) - - # Do a discovery to find all targets. - # Targets for multiple paths for the same multipath device - # may not be the same. - out = self._run_iscsiadm_discover(iscsi_properties) - - # Extract targets for the current multipath device. 
- ips_iqns = [] - entries = self._get_iscsi_devices() - for ip, iqn in self._get_target_portals_from_iscsiadm_output(out): - ip_iqn = "%s-iscsi-%s" % (ip.split(",")[0], iqn) - for entry in entries: - entry_ip_iqn = entry.split("-lun-")[0] - if entry_ip_iqn[:3] == "ip-": - entry_ip_iqn = entry_ip_iqn[3:] - elif entry_ip_iqn[:4] == "pci-": - # Look at an offset of len('pci-0000:00:00.0') - offset = entry_ip_iqn.find("ip-", 16, 21) - entry_ip_iqn = entry_ip_iqn[(offset + 3):] - if (ip_iqn != entry_ip_iqn): - continue - entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % - entry) - entry_mpdev = self._get_multipath_device_name(entry_real_path) - if entry_mpdev == multipath_device: - ips_iqns.append([ip, iqn]) - break - - if not devices: - # disconnect if no other multipath devices - self._disconnect_mpath(iscsi_properties, ips_iqns) - return - - # Get a target for all other multipath devices - other_iqns = [self._get_multipath_iqn(device) - for device in devices] - # Get all the targets for the current multipath device - current_iqns = [iqn for ip, iqn in ips_iqns] - - in_use = False - for current in current_iqns: - if current in other_iqns: - in_use = True - break - - # If no other multipath device attached has the same iqn - # as the current device - if not in_use: - # disconnect if no other multipath devices with same iqn - self._disconnect_mpath(iscsi_properties, ips_iqns) - return - elif multipath_device not in devices: - # delete the devices associated w/ the unused multipath - self._delete_mpath(iscsi_properties, multipath_device, ips_iqns) - - # else do not disconnect iscsi portals, - # as they are used for other luns, - # just remove multipath mapping device descriptor - self._remove_multipath_device_descriptor(multipath_device) - return - - def _connect_to_iscsi_portal(self, iscsi_properties): - # NOTE(vish): If we are on the same host as nova volume, the - # discovery makes the target so we don't need to - # run --op new. Therefore, we check to see if the - # target exists, and if we get 255 (Not Found), then - # we run --op new. This will also happen if another - # volume is using the same target. - try: - self._run_iscsiadm(iscsi_properties, ()) - except processutils.ProcessExecutionError as exc: - # iscsiadm returns 21 for "No records found" after version 2.0-871 - if exc.exit_code in [21, 255]: - self._reconnect(iscsi_properties) - else: - raise - - if iscsi_properties.get('auth_method'): - self._iscsiadm_update(iscsi_properties, - "node.session.auth.authmethod", - iscsi_properties['auth_method']) - self._iscsiadm_update(iscsi_properties, - "node.session.auth.username", - iscsi_properties['auth_username']) - self._iscsiadm_update(iscsi_properties, - "node.session.auth.password", - iscsi_properties['auth_password']) - - # duplicate logins crash iscsiadm after load, - # so we scan active sessions to see if the node is logged in. 
- out = self._run_iscsiadm_bare(["-m", "session"], - run_as_root=True, - check_exit_code=[0, 1, 21])[0] or "" - - portals = [{'portal': p.split(" ")[2], 'iqn': p.split(" ")[3]} - for p in out.splitlines() if p.startswith("tcp:")] - - stripped_portal = iscsi_properties['target_portal'].split(",")[0] - if len(portals) == 0 or len([s for s in portals - if stripped_portal == - s['portal'].split(",")[0] - and - s['iqn'] == - iscsi_properties['target_iqn']] - ) == 0: - try: - self._run_iscsiadm(iscsi_properties, - ("--login",), - check_exit_code=[0, 255]) - except processutils.ProcessExecutionError as err: - # as this might be one of many paths, - # only set successful logins to startup automatically - if err.exit_code in [15]: - self._iscsiadm_update(iscsi_properties, - "node.startup", - "automatic") - return - - self._iscsiadm_update(iscsi_properties, - "node.startup", - "automatic") - - def _disconnect_from_iscsi_portal(self, iscsi_properties): - self._iscsiadm_update(iscsi_properties, "node.startup", "manual", - check_exit_code=[0, 21, 255]) - self._run_iscsiadm(iscsi_properties, ("--logout",), - check_exit_code=[0, 21, 255]) - self._run_iscsiadm(iscsi_properties, ('--op', 'delete'), - check_exit_code=[0, 21, 255]) - - def _get_multipath_device_name(self, single_path_device): - device = os.path.realpath(single_path_device) - - out = self._run_multipath(['-ll', - device], - check_exit_code=[0, 1])[0] - mpath_line = [line for line in out.splitlines() - if "scsi_id" not in line] # ignore udev errors - if len(mpath_line) > 0 and len(mpath_line[0]) > 0: - return "/dev/mapper/%s" % mpath_line[0].split(" ")[0] - - return None - - def _get_iscsi_devices(self): - try: - devices = list(os.walk('/dev/disk/by-path'))[0][-1] - except IndexError: - return [] - iscsi_devs = [] - for entry in devices: - if (entry.startswith("ip-") or - (entry.startswith('pci-') and 'ip-' in entry)): - iscsi_devs.append(entry) - - return iscsi_devs - - def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns): - entries = self._get_iscsi_devices() - # Loop through ips_iqns to construct all paths - iqn_luns = [] - for ip, iqn in ips_iqns: - iqn_lun = '%s-lun-%s' % (iqn, - iscsi_properties.get('target_lun', 0)) - iqn_luns.append(iqn_lun) - for dev in ['/dev/disk/by-path/%s' % dev for dev in entries]: - for iqn_lun in iqn_luns: - if iqn_lun in dev: - self._delete_device(dev) - - self._rescan_multipath() - - def _disconnect_mpath(self, iscsi_properties, ips_iqns): - for ip, iqn in ips_iqns: - props = iscsi_properties.copy() - props['target_portal'] = ip - props['target_iqn'] = iqn - self._disconnect_from_iscsi_portal(props) - - self._rescan_multipath() - - def _get_multipath_iqn(self, multipath_device): - entries = self._get_iscsi_devices() - for entry in entries: - entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry) - entry_multipath = self._get_multipath_device_name(entry_real_path) - if entry_multipath == multipath_device: - return entry.split("iscsi-")[1].split("-lun")[0] - return None - - def _run_iscsiadm_bare(self, iscsi_command, **kwargs): - check_exit_code = kwargs.pop('check_exit_code', 0) - (out, err) = utils.execute('iscsiadm', - *iscsi_command, - run_as_root=True, - check_exit_code=check_exit_code) - LOG.debug("iscsiadm %(command)s: stdout=%(out)s stderr=%(err)s", - {'command': iscsi_command, 'out': out, 'err': err}) - return (out, err) - - def _run_multipath(self, multipath_command, **kwargs): - check_exit_code = kwargs.pop('check_exit_code', 0) - (out, err) = utils.execute('multipath', - 
*multipath_command,
- run_as_root=True,
- check_exit_code=check_exit_code)
- LOG.debug("multipath %(command)s: stdout=%(out)s stderr=%(err)s",
- {'command': multipath_command, 'out': out, 'err': err})
- return (out, err)
-
- def _rescan_iscsi(self):
- self._run_iscsiadm_bare(('-m', 'node', '--rescan'),
- check_exit_code=[0, 1, 21, 255])
- self._run_iscsiadm_bare(('-m', 'session', '--rescan'),
- check_exit_code=[0, 1, 21, 255])
-
- def _rescan_multipath(self):
- self._run_multipath(['-r'], check_exit_code=[0, 1, 21])
-
- def _get_host_device(self, transport_properties):
- """Find device path in devtmpfs."""
- device = ("ip-%s-iscsi-%s-lun-%s" %
- (transport_properties['target_portal'],
- transport_properties['target_iqn'],
- transport_properties.get('target_lun', 0)))
- if self._get_transport() == "default":
- return ("/dev/disk/by-path/%s" % device)
- else:
- host_device = None
- look_for_device = glob.glob('/dev/disk/by-path/*%s' % device)
- if look_for_device:
- host_device = look_for_device[0]
- return host_device
-
- def _reconnect(self, iscsi_properties):
- # Note: iscsiadm does not support changing iface.iscsi_ifacename
- # via --op update, so we do this at creation time
- self._run_iscsiadm(iscsi_properties,
- ('--interface', self._get_transport(),
- '--op', 'new'))
-
-
-class LibvirtISERVolumeDriver(LibvirtISCSIVolumeDriver):
- """Driver to attach iSER network volumes to libvirt."""
- def __init__(self, connection):
- super(LibvirtISERVolumeDriver, self).__init__(connection)
- self.num_scan_tries = CONF.libvirt.num_iser_scan_tries
- self.use_multipath = CONF.libvirt.iser_use_multipath
-
- def _get_transport(self):
- return 'iser'
-
- def _get_multipath_iqn(self, multipath_device):
- entries = self._get_iscsi_devices()
- for entry in entries:
- entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry)
- entry_multipath = self._get_multipath_device_name(entry_real_path)
- if entry_multipath == multipath_device:
- return entry.split("iser-")[1].split("-lun")[0]
- return None
-
-
-class LibvirtNFSVolumeDriver(LibvirtBaseVolumeDriver):
- """Class implements libvirt part of volume driver for NFS."""
-
- def __init__(self, connection):
- """Create back-end to nfs."""
- super(LibvirtNFSVolumeDriver,
- self).__init__(connection, is_block_dev=False)
-
- def _get_device_path(self, connection_info):
- path = os.path.join(CONF.libvirt.nfs_mount_point_base,
- utils.get_hash_str(connection_info['data']['export']))
- path = os.path.join(path, connection_info['data']['name'])
- return path
-
- def get_config(self, connection_info, disk_info):
- """Returns xml for libvirt."""
- conf = super(LibvirtNFSVolumeDriver,
- self).get_config(connection_info, disk_info)
-
- conf.source_type = 'file'
- conf.source_path = connection_info['data']['device_path']
- conf.driver_format = connection_info['data'].get('format', 'raw')
- return conf
-
- def connect_volume(self, connection_info, disk_info):
- """Connect the volume.
Returns xml for libvirt.""" - options = connection_info['data'].get('options') - self._ensure_mounted(connection_info['data']['export'], options) - - connection_info['data']['device_path'] = \ - self._get_device_path(connection_info) - - def disconnect_volume(self, connection_info, disk_dev): - """Disconnect the volume.""" - - export = connection_info['data']['export'] - mount_path = os.path.join(CONF.libvirt.nfs_mount_point_base, - utils.get_hash_str(export)) - - try: - utils.execute('umount', mount_path, run_as_root=True) - except processutils.ProcessExecutionError as exc: - if ('device is busy' in exc.message or - 'target is busy' in exc.message): - LOG.debug("The NFS share %s is still in use.", export) - else: - LOG.exception(_LE("Couldn't unmount the NFS share %s"), export) - - def _ensure_mounted(self, nfs_export, options=None): - """@type nfs_export: string - @type options: string - """ - mount_path = os.path.join(CONF.libvirt.nfs_mount_point_base, - utils.get_hash_str(nfs_export)) - if not libvirt_utils.is_mounted(mount_path, nfs_export): - self._mount_nfs(mount_path, nfs_export, options, ensure=True) - return mount_path - - def _mount_nfs(self, mount_path, nfs_share, options=None, ensure=False): - """Mount nfs export to mount path.""" - utils.execute('mkdir', '-p', mount_path) - - # Construct the NFS mount command. - nfs_cmd = ['mount', '-t', 'nfs'] - if CONF.libvirt.nfs_mount_options is not None: - nfs_cmd.extend(['-o', CONF.libvirt.nfs_mount_options]) - if options: - nfs_cmd.extend(options.split(' ')) - nfs_cmd.extend([nfs_share, mount_path]) - - try: - utils.execute(*nfs_cmd, run_as_root=True) - except processutils.ProcessExecutionError as exc: - if ensure and 'already mounted' in exc.message: - LOG.warn(_LW("%s is already mounted"), nfs_share) - else: - raise - - -class LibvirtSMBFSVolumeDriver(LibvirtBaseVolumeDriver): - """Class implements libvirt part of volume driver for SMBFS.""" - - def __init__(self, connection): - super(LibvirtSMBFSVolumeDriver, - self).__init__(connection, is_block_dev=False) - self.username_regex = re.compile( - r"(user(?:name)?)=(?:[^ ,]+\\)?([^ ,]+)") - - def _get_device_path(self, connection_info): - smbfs_share = connection_info['data']['export'] - mount_path = self._get_mount_path(smbfs_share) - volume_path = os.path.join(mount_path, - connection_info['data']['name']) - return volume_path - - def _get_mount_path(self, smbfs_share): - mount_path = os.path.join(CONF.libvirt.smbfs_mount_point_base, - utils.get_hash_str(smbfs_share)) - return mount_path - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtSMBFSVolumeDriver, - self).get_config(connection_info, disk_info) - - conf.source_type = 'file' - conf.driver_cache = 'writethrough' - conf.source_path = connection_info['data']['device_path'] - conf.driver_format = connection_info['data'].get('format', 'raw') - return conf - - def connect_volume(self, connection_info, disk_info): - """Connect the volume.""" - smbfs_share = connection_info['data']['export'] - mount_path = self._get_mount_path(smbfs_share) - - if not libvirt_utils.is_mounted(mount_path, smbfs_share): - mount_options = self._parse_mount_options(connection_info) - remotefs.mount_share(mount_path, smbfs_share, - export_type='cifs', options=mount_options) - - device_path = self._get_device_path(connection_info) - connection_info['data']['device_path'] = device_path - - def disconnect_volume(self, connection_info, disk_dev): - """Disconnect the volume.""" - smbfs_share = 
connection_info['data']['export'] - mount_path = self._get_mount_path(smbfs_share) - remotefs.unmount_share(mount_path, smbfs_share) - - def _parse_mount_options(self, connection_info): - mount_options = " ".join( - [connection_info['data'].get('options') or '', - CONF.libvirt.smbfs_mount_options]) - - if not self.username_regex.findall(mount_options): - mount_options = mount_options + ' -o username=guest' - else: - # Remove the Domain Name from user name - mount_options = self.username_regex.sub(r'\1=\2', mount_options) - return mount_options.strip(", ").split(' ') - - -class LibvirtAOEVolumeDriver(LibvirtBaseVolumeDriver): - """Driver to attach AoE volumes to libvirt.""" - def __init__(self, connection): - super(LibvirtAOEVolumeDriver, - self).__init__(connection, is_block_dev=True) - - def _aoe_discover(self): - """Call aoe-discover (aoe-tools) AoE Discover.""" - (out, err) = utils.execute('aoe-discover', - run_as_root=True, check_exit_code=0) - return (out, err) - - def _aoe_revalidate(self, aoedev): - """Revalidate the LUN Geometry (When an AoE ID is reused).""" - (out, err) = utils.execute('aoe-revalidate', aoedev, - run_as_root=True, check_exit_code=0) - return (out, err) - - def _get_device_path(self, connection_info): - shelf = connection_info['data']['target_shelf'] - lun = connection_info['data']['target_lun'] - aoedev = 'e%s.%s' % (shelf, lun) - aoedevpath = '/dev/etherd/%s' % (aoedev) - return aoedevpath - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtAOEVolumeDriver, - self).get_config(connection_info, disk_info) - - conf.source_type = "block" - conf.source_path = connection_info['data']['device_path'] - return conf - - def connect_volume(self, connection_info, mount_device): - shelf = connection_info['data']['target_shelf'] - lun = connection_info['data']['target_lun'] - aoedev = 'e%s.%s' % (shelf, lun) - aoedevpath = '/dev/etherd/%s' % (aoedev) - - if os.path.exists(aoedevpath): - # NOTE(jbr_): If aoedevpath already exists, revalidate the LUN. - self._aoe_revalidate(aoedev) - else: - # NOTE(jbr_): If aoedevpath does not exist, do a discover. - self._aoe_discover() - - # NOTE(jbr_): Device path is not always present immediately - def _wait_for_device_discovery(aoedevpath, mount_device): - tries = self.tries - if os.path.exists(aoedevpath): - raise loopingcall.LoopingCallDone() - - if self.tries >= CONF.libvirt.num_aoe_discover_tries: - raise exception.NovaException(_("AoE device not found at %s") % - (aoedevpath)) - LOG.warn(_LW("AoE volume not yet found at: %(aoedevpath)s. 
" - "Try number: %(tries)s"), - {'aoedevpath': aoedevpath, 'tries': tries}) - - self._aoe_discover() - self.tries = self.tries + 1 - - self.tries = 0 - timer = loopingcall.FixedIntervalLoopingCall( - _wait_for_device_discovery, aoedevpath, mount_device) - timer.start(interval=2).wait() - - tries = self.tries - if tries != 0: - LOG.debug("Found AoE device %(aoedevpath)s " - "(after %(tries)s rediscover)", - {'aoedevpath': aoedevpath, - 'tries': tries}) - - connection_info['data']['device_path'] = \ - self._get_device_path(connection_info) - - -class LibvirtGlusterfsVolumeDriver(LibvirtBaseVolumeDriver): - """Class implements libvirt part of volume driver for GlusterFS.""" - - def __init__(self, connection): - """Create back-end to glusterfs.""" - super(LibvirtGlusterfsVolumeDriver, - self).__init__(connection, is_block_dev=False) - - def _get_device_path(self, connection_info): - path = os.path.join(CONF.libvirt.glusterfs_mount_point_base, - utils.get_hash_str(connection_info['data']['export'])) - path = os.path.join(path, connection_info['data']['name']) - return path - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtGlusterfsVolumeDriver, - self).get_config(connection_info, disk_info) - - data = connection_info['data'] - - if 'gluster' in CONF.libvirt.qemu_allowed_storage_drivers: - vol_name = data['export'].split('/')[1] - source_host = data['export'].split('/')[0][:-1] - - conf.source_ports = ['24007'] - conf.source_type = 'network' - conf.source_protocol = 'gluster' - conf.source_hosts = [source_host] - conf.source_name = '%s/%s' % (vol_name, data['name']) - else: - conf.source_type = 'file' - conf.source_path = connection_info['data']['device_path'] - - conf.driver_format = connection_info['data'].get('format', 'raw') - - return conf - - def connect_volume(self, connection_info, mount_device): - data = connection_info['data'] - - if 'gluster' not in CONF.libvirt.qemu_allowed_storage_drivers: - self._ensure_mounted(data['export'], data.get('options')) - connection_info['data']['device_path'] = \ - self._get_device_path(connection_info) - - def disconnect_volume(self, connection_info, disk_dev): - """Disconnect the volume.""" - - if 'gluster' in CONF.libvirt.qemu_allowed_storage_drivers: - return - - export = connection_info['data']['export'] - mount_path = os.path.join(CONF.libvirt.glusterfs_mount_point_base, - utils.get_hash_str(export)) - - try: - utils.execute('umount', mount_path, run_as_root=True) - except processutils.ProcessExecutionError as exc: - if 'target is busy' in exc.message: - LOG.debug("The GlusterFS share %s is still in use.", export) - else: - LOG.exception(_LE("Couldn't unmount the GlusterFS share %s"), - export) - - def _ensure_mounted(self, glusterfs_export, options=None): - """@type glusterfs_export: string - @type options: string - """ - mount_path = os.path.join(CONF.libvirt.glusterfs_mount_point_base, - utils.get_hash_str(glusterfs_export)) - if not libvirt_utils.is_mounted(mount_path, glusterfs_export): - self._mount_glusterfs(mount_path, glusterfs_export, - options, ensure=True) - return mount_path - - def _mount_glusterfs(self, mount_path, glusterfs_share, - options=None, ensure=False): - """Mount glusterfs export to mount path.""" - utils.execute('mkdir', '-p', mount_path) - - gluster_cmd = ['mount', '-t', 'glusterfs'] - if options is not None: - gluster_cmd.extend(options.split(' ')) - gluster_cmd.extend([glusterfs_share, mount_path]) - - try: - utils.execute(*gluster_cmd, run_as_root=True) - 
except processutils.ProcessExecutionError as exc:
- if ensure and 'already mounted' in exc.message:
- LOG.warn(_LW("%s is already mounted"), glusterfs_share)
- else:
- raise
-
-
-class LibvirtFibreChannelVolumeDriver(LibvirtBaseVolumeDriver):
- """Driver to attach Fibre Channel Network volumes to libvirt."""
-
- def __init__(self, connection):
- super(LibvirtFibreChannelVolumeDriver,
- self).__init__(connection, is_block_dev=False)
-
- def _get_pci_num(self, hba):
- # NOTE(walter-boring)
- # device path is in format of
- # /sys/devices/pci0000:00/0000:00:03.0/0000:05:00.3/host2/fc_host/host2
- # sometimes an extra entry exists before the host2 value
- # we always want the value prior to the host2 value
- pci_num = None
- if hba is not None:
- if "device_path" in hba:
- index = 0
- device_path = hba['device_path'].split('/')
- for value in device_path:
- if value.startswith('host'):
- break
- index = index + 1
-
- if index > 0:
- pci_num = device_path[index - 1]
-
- return pci_num
-
- def get_config(self, connection_info, disk_info):
- """Returns xml for libvirt."""
- conf = super(LibvirtFibreChannelVolumeDriver,
- self).get_config(connection_info, disk_info)
-
- conf.source_type = "block"
- conf.source_path = connection_info['data']['device_path']
- return conf
-
- def _get_lun_string_for_s390(self, lun):
- target_lun = 0
- if lun < 256:
- target_lun = "0x00%02x000000000000" % lun
- elif lun <= 0xffffffff:
- target_lun = "0x%08x00000000" % lun
- return target_lun
-
- def _get_device_file_path_s390(self, pci_num, target_wwn, lun):
- """Return the device file path."""
- # NOTE the format of device file paths depends on the system
- # architecture. Most architectures use a PCI based format.
- # Systems following the s390 or s390x architecture use a format
- # which is based upon the inherent channel architecture (ccw).
- host_device = ("/dev/disk/by-path/ccw-%s-zfcp-%s:%s" %
- (pci_num,
- target_wwn,
- lun))
- return host_device
-
- def _remove_lun_from_s390(self, connection_info):
- """Remove a LUN from the s390 configuration."""
- # If LUN scanning is turned off on systems following the s390 or
- # s390x architecture, LUNs need to be removed from the configuration
- # using the unit_remove call. The unit_remove call needs to be issued
- # for each (virtual) HBA and target_port.
- fc_properties = connection_info['data']
- lun = int(fc_properties.get('target_lun', 0))
- target_lun = self._get_lun_string_for_s390(lun)
- ports = fc_properties['target_wwn']
-
- for device_num, target_wwn in self._get_possible_devices(ports):
- libvirt_utils.perform_unit_remove_for_s390(device_num,
- target_wwn,
- target_lun)
-
- def _get_possible_devices(self, wwnports):
- """Compute the possible valid fibre channel device options.
-
- :param wwnports: possible wwn addresses. Can either be string
- or list of strings.
-
- :returns: list of (pci_id, wwn) tuples
-
- Given one or more wwn (mac addresses for fibre channel) ports
- do the matrix math to figure out a set of pci device, wwn
- tuples that are potentially valid (they won't all be). This
- provides a search space for the device connection.
-
- """
- # the wwn (think mac addresses for fibre channel devices) can
- # either be a single value or a list. Normalize it to a list
- # for further operations.
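- # Illustrative example (hypothetical values): for
- # wwnports='500507680245cac3' and one HBA whose device_path yields the
- # PCI address "0000:05:00.3", this returns
- # [('0000:05:00.3', '0x500507680245cac3')], which connect_volume() then
- # expands into /dev/disk/by-path candidate paths.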
- wwns = [] - if isinstance(wwnports, list): - for wwn in wwnports: - wwns.append(str(wwn)) - elif isinstance(wwnports, six.string_types): - wwns.append(str(wwnports)) - - raw_devices = [] - hbas = libvirt_utils.get_fc_hbas_info() - for hba in hbas: - pci_num = self._get_pci_num(hba) - if pci_num is not None: - for wwn in wwns: - target_wwn = "0x%s" % wwn.lower() - raw_devices.append((pci_num, target_wwn)) - return raw_devices - - @utils.synchronized('connect_volume') - def connect_volume(self, connection_info, disk_info): - """Attach the volume to instance_name.""" - fc_properties = connection_info['data'] - mount_device = disk_info["dev"] - - possible_devs = self._get_possible_devices(fc_properties['target_wwn']) - # map the raw device possibilities to possible host device paths - host_devices = [] - for device in possible_devs: - pci_num, target_wwn = device - if platform.machine() in (arch.S390, arch.S390X): - target_lun = self._get_lun_string_for_s390( - fc_properties.get('target_lun', 0)) - host_device = self._get_device_file_path_s390( - pci_num, - target_wwn, - target_lun) - libvirt_utils.perform_unit_add_for_s390( - pci_num, target_wwn, target_lun) - else: - host_device = ("/dev/disk/by-path/pci-%s-fc-%s-lun-%s" % - (pci_num, - target_wwn, - fc_properties.get('target_lun', 0))) - host_devices.append(host_device) - - if len(host_devices) == 0: - # this is empty because we don't have any FC HBAs - msg = _("We are unable to locate any Fibre Channel devices") - raise exception.NovaException(msg) - - # The /dev/disk/by-path/... node is not always present immediately - # We only need to find the first device. Once we see the first device - # multipath will have any others. - def _wait_for_device_discovery(host_devices, mount_device): - tries = self.tries - for device in host_devices: - LOG.debug("Looking for Fibre Channel dev %(device)s", - {'device': device}) - if os.path.exists(device): - self.host_device = device - # get the /dev/sdX device. This is used - # to find the multipath device. - self.device_name = os.path.realpath(device) - raise loopingcall.LoopingCallDone() - - if self.tries >= CONF.libvirt.num_iscsi_scan_tries: - msg = _("Fibre Channel device not found.") - raise exception.NovaException(msg) - - LOG.warn(_LW("Fibre volume not yet found at: %(mount_device)s. " - "Will rescan & retry. Try number: %(tries)s"), - {'mount_device': mount_device, 'tries': tries}) - - linuxscsi.rescan_hosts(libvirt_utils.get_fc_hbas_info()) - self.tries = self.tries + 1 - - self.host_device = None - self.device_name = None - self.tries = 0 - timer = loopingcall.FixedIntervalLoopingCall( - _wait_for_device_discovery, host_devices, mount_device) - timer.start(interval=2).wait() - - tries = self.tries - if self.host_device is not None and self.device_name is not None: - LOG.debug("Found Fibre Channel volume %(mount_device)s " - "(after %(tries)s rescans)", - {'mount_device': mount_device, - 'tries': tries}) - - # see if the new drive is part of a multipath - # device. If so, we'll use the multipath device. - mdev_info = linuxscsi.find_multipath_device(self.device_name) - if mdev_info is not None: - LOG.debug("Multipath device discovered %(device)s", - {'device': mdev_info['device']}) - device_path = mdev_info['device'] - connection_info['data']['device_path'] = device_path - connection_info['data']['devices'] = mdev_info['devices'] - connection_info['data']['multipath_id'] = mdev_info['id'] - else: - # we didn't find a multipath device. 
- # so we assume the kernel only sees 1 device - device_path = self.host_device - device_info = linuxscsi.get_device_info(self.device_name) - connection_info['data']['device_path'] = device_path - connection_info['data']['devices'] = [device_info] - - @utils.synchronized('connect_volume') - def disconnect_volume(self, connection_info, mount_device): - """Detach the volume from instance_name.""" - super(LibvirtFibreChannelVolumeDriver, - self).disconnect_volume(connection_info, mount_device) - - # If this is a multipath device, we need to search again - # and make sure we remove all the devices. Some of them - # might not have shown up at attach time. - if 'multipath_id' in connection_info['data']: - multipath_id = connection_info['data']['multipath_id'] - mdev_info = linuxscsi.find_multipath_device(multipath_id) - devices = mdev_info['devices'] - LOG.debug("devices to remove = %s", devices) - else: - # only needed when multipath-tools work improperly - devices = connection_info['data'].get('devices', []) - LOG.warn(_LW("multipath-tools probably work improperly. " - "devices to remove = %s.") % devices) - - # There may have been more than 1 device mounted - # by the kernel for this volume. We have to remove - # all of them - for device in devices: - linuxscsi.remove_device(device) - if platform.machine() in (arch.S390, arch.S390X): - self._remove_lun_from_s390(connection_info) - - -class LibvirtScalityVolumeDriver(LibvirtBaseVolumeDriver): - """Scality SOFS Nova driver. Provide hypervisors with access - to sparse files on SOFS. - """ - - def __init__(self, connection): - """Create back-end to SOFS and check connection.""" - super(LibvirtScalityVolumeDriver, - self).__init__(connection, is_block_dev=False) - - def _get_device_path(self, connection_info): - path = os.path.join(CONF.libvirt.scality_sofs_mount_point, - connection_info['data']['sofs_path']) - return path - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtScalityVolumeDriver, - self).get_config(connection_info, disk_info) - conf.source_type = 'file' - conf.source_path = connection_info['data']['device_path'] - - # The default driver cache policy is 'none', and this causes - # qemu/kvm to open the volume file with O_DIRECT, which is - # rejected by FUSE (on kernels older than 3.3). Scality SOFS - # is FUSE based, so we must provide a more sensible default. - conf.driver_cache = 'writethrough' - - return conf - - def connect_volume(self, connection_info, disk_info): - """Connect the volume. 
Returns xml for libvirt.""" - self._check_prerequisites() - self._mount_sofs() - - connection_info['data']['device_path'] = \ - self._get_device_path(connection_info) - - def _check_prerequisites(self): - """Sanity checks before attempting to mount SOFS.""" - - # config is mandatory - config = CONF.libvirt.scality_sofs_config - if not config: - msg = _LW("Value required for 'scality_sofs_config'") - LOG.warn(msg) - raise exception.NovaException(msg) - - # config can be a file path or a URL, check it - if urlparse.urlparse(config).scheme == '': - # turn local path into URL - config = 'file://%s' % config - try: - urllib2.urlopen(config, timeout=5).close() - except urllib2.URLError as e: - msg = _LW("Cannot access 'scality_sofs_config': %s") % e - LOG.warn(msg) - raise exception.NovaException(msg) - - # mount.sofs must be installed - if not os.access('/sbin/mount.sofs', os.X_OK): - msg = _LW("Cannot execute /sbin/mount.sofs") - LOG.warn(msg) - raise exception.NovaException(msg) - - def _mount_sofs(self): - config = CONF.libvirt.scality_sofs_config - mount_path = CONF.libvirt.scality_sofs_mount_point - sysdir = os.path.join(mount_path, 'sys') - - if not os.path.isdir(mount_path): - utils.execute('mkdir', '-p', mount_path) - if not os.path.isdir(sysdir): - utils.execute('mount', '-t', 'sofs', config, mount_path, - run_as_root=True) - if not os.path.isdir(sysdir): - msg = _LW("Cannot mount Scality SOFS, check syslog for errors") - LOG.warn(msg) - raise exception.NovaException(msg) - - -class LibvirtGPFSVolumeDriver(LibvirtBaseVolumeDriver): - """Class for volumes backed by gpfs volume.""" - def __init__(self, connection): - super(LibvirtGPFSVolumeDriver, - self).__init__(connection, is_block_dev=False) - - def get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtGPFSVolumeDriver, - self).get_config(connection_info, disk_info) - conf.source_type = "file" - conf.source_path = connection_info['data']['device_path'] - return conf - - -class LibvirtQuobyteVolumeDriver(LibvirtBaseVolumeDriver): - """Class implements libvirt part of volume driver for Quobyte.""" - - def __init__(self, connection): - """Create back-end to Quobyte.""" - super(LibvirtQuobyteVolumeDriver, - self).__init__(connection, is_block_dev=False) - - def get_config(self, connection_info, disk_info): - conf = super(LibvirtQuobyteVolumeDriver, - self).get_config(connection_info, disk_info) - data = connection_info['data'] - conf.source_protocol = quobyte.SOURCE_PROTOCOL - conf.source_type = quobyte.SOURCE_TYPE - conf.driver_cache = quobyte.DRIVER_CACHE - conf.driver_io = quobyte.DRIVER_IO - conf.driver_format = data.get('format', 'raw') - - quobyte_volume = self._normalize_url(data['export']) - path = os.path.join(self._get_mount_point_for_share(quobyte_volume), - data['name']) - conf.source_path = path - - return conf - - @utils.synchronized('connect_volume') - def connect_volume(self, connection_info, disk_info): - """Connect the volume.""" - data = connection_info['data'] - quobyte_volume = self._normalize_url(data['export']) - mount_path = self._get_mount_point_for_share(quobyte_volume) - mounted = libvirt_utils.is_mounted(mount_path, - quobyte.SOURCE_PROTOCOL - + '@' + quobyte_volume) - if mounted: - try: - os.stat(mount_path) - except OSError as exc: - if exc.errno == errno.ENOTCONN: - mounted = False - LOG.info(_LI('Fixing previous mount %s which was not' - ' unmounted correctly.'), mount_path) - quobyte.umount_volume(mount_path) - - if not mounted: - 
quobyte.mount_volume(quobyte_volume,
- mount_path,
- CONF.libvirt.quobyte_client_cfg)
-
- quobyte.validate_volume(mount_path)
-
- @utils.synchronized('connect_volume')
- def disconnect_volume(self, connection_info, disk_dev):
- """Disconnect the volume."""
-
- quobyte_volume = self._normalize_url(connection_info['data']['export'])
- mount_path = self._get_mount_point_for_share(quobyte_volume)
-
- if libvirt_utils.is_mounted(mount_path, 'quobyte@' + quobyte_volume):
- quobyte.umount_volume(mount_path)
- else:
- LOG.info(_LI("Trying to disconnect an unmounted volume at %s"),
- mount_path)
-
- def _normalize_url(self, export):
- protocol = quobyte.SOURCE_PROTOCOL + "://"
- if export.startswith(protocol):
- export = export[len(protocol):]
- return export
-
- def _get_mount_point_for_share(self, quobyte_volume):
- """Return mount point for Quobyte volume.
-
- :param quobyte_volume: Example: storage-host/openstack-volumes
- """
- return os.path.join(CONF.libvirt.quobyte_mount_point_base,
- utils.get_hash_str(quobyte_volume))
-
-
-class LibvirtScaleIOVolumeDriver(LibvirtBaseVolumeDriver):
-
- """Class implements libvirt part of volume driver
- for ScaleIO cinder driver."""
- local_sdc_id = None
- mdm_id = None
- pattern3 = None
-
- def __init__(self, connection):
- """Create back-end to ScaleIO."""
- LOG.warning("ScaleIO libvirt volume driver INIT")
- super(LibvirtScaleIOVolumeDriver,
- self).__init__(connection, is_block_dev=False)
-
- def find_volume_path(self, volume_id):
-
- LOG.info("looking for volume %s" % volume_id)
- # look for the volume in /dev/disk/by-id directory
- disk_filename = ""
- tries = 0
- while not disk_filename:
- if (tries > 15):
- raise exception.NovaException(
- "scaleIO volume {0} not found at expected \
- path ".format(volume_id))
- by_id_path = "/dev/disk/by-id"
- if not os.path.isdir(by_id_path):
- LOG.warn(
- "scaleIO volume {0} not yet found (no directory \
- /dev/disk/by-id yet). Try number: {1} ".format(
- volume_id,
- tries))
- tries = tries + 1
- time.sleep(1)
- continue
- filenames = os.listdir(by_id_path)
- LOG.warning(
- "Files found in {0} path: {1} ".format(
- by_id_path,
- filenames))
- for filename in filenames:
- if (filename.startswith("emc-vol") and
- filename.endswith(volume_id)):
- disk_filename = filename
- if not disk_filename:
- LOG.warn(
- "scaleIO volume {0} not yet found.
\ - Try number: {1} ".format( - volume_id, - tries)) - tries = tries + 1 - time.sleep(1) - - if (tries != 0): - LOG.warning( - "Found scaleIO device {0} after {1} retries ".format( - disk_filename, - tries)) - full_disk_name = by_id_path + "/" + disk_filename - LOG.warning("Full disk name is " + full_disk_name) - return full_disk_name -# path = os.path.realpath(full_disk_name) -# LOG.warning("Path is " + path) -# return path - - def _get_client_id(self, server_ip, server_port, - server_username, server_password, server_token, sdc_ip): - request = "https://" + server_ip + ":" + server_port + \ - "/api/types/Client/instances/getByIp::" + sdc_ip + "/" - LOG.info("ScaleIO get client id by ip request: %s" % request) - r = requests.get( - request, - auth=( - server_username, - server_token), - verify=False) - r = self._check_response( - r, - request, - server_ip, - server_port, - server_username, - server_password, - server_token) - - sdc_id = r.json() - if (sdc_id == '' or sdc_id is None): - msg = ("Client with ip %s wasn't found " % (sdc_ip)) - LOG.error(msg) - raise exception.NovaException(data=msg) - if (r.status_code != 200 and "errorCode" in sdc_id): - msg = ( - "Error getting sdc id from ip %s: %s " % - (sdc_ip, sdc_id['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) - LOG.info("ScaleIO sdc id is %s" % sdc_id) - return sdc_id - - def _get_volume_id(self, server_ip, server_port, - server_username, server_password, - server_token, volname): - volname_encoded = urllib.quote(volname, '') - volname_double_encoded = urllib.quote(volname_encoded, '') -# volname = volname.replace('/', '%252F') - LOG.info( - "volume name after double encoding is %s " % - volname_double_encoded) - request = "https://" + server_ip + ":" + server_port + \ - "/api/types/Volume/instances/getByName::" + volname_double_encoded - LOG.info("ScaleIO get volume id by name request: %s" % request) - r = requests.get( - request, - auth=( - server_username, - server_token), - verify=False) - r = self._check_response( - r, - request, - server_ip, - server_port, - server_username, - server_password, - server_token) - - volume_id = r.json() - if (volume_id == '' or volume_id is None): - msg = ("Volume with name %s wasn't found " % (volname)) - LOG.error(msg) - raise exception.NovaException(data=msg) - if (r.status_code != 200 and "errorCode" in volume_id): - msg = ( - "Error getting volume id from name %s: %s " % - (volname, volume_id['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) - LOG.info("ScaleIO volume id is %s" % volume_id) - return volume_id - - def _check_response(self, response, request, server_ip, - server_port, server_username, - server_password, server_token, isGetRequest=True, params=None): - if (response.status_code == 401 or response.status_code == 403): - LOG.info("Token is invalid, going to re-login and get a new one") - login_request = "https://" + server_ip + \ - ":" + server_port + "/api/login" - r = requests.get( - login_request, - auth=( - server_username, - server_password), - verify=False) - token = r.json() - # repeat request with valid token - LOG.debug( - "going to perform request again {0} \ - with valid token".format(request)) - if isGetRequest: - res = requests.get( - request, - auth=( - server_username, - token), - verify=False) - else: - headers = {'content-type': 'application/json'} - res = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - server_username, - token), - verify=False) - return res - return response - - def 
get_config(self, connection_info, disk_info): - """Returns xml for libvirt.""" - conf = super(LibvirtScaleIOVolumeDriver, - self).get_config(connection_info, - disk_info) - conf.source_type = 'block' - volname = connection_info['data']['scaleIO_volname'] - server_ip = connection_info['data']['serverIP'] - server_port = connection_info['data']['serverPort'] - server_username = connection_info['data']['serverUsername'] - server_password = connection_info['data']['serverPassword'] - server_token = connection_info['data']['serverToken'] - volume_id = self._get_volume_id( - server_ip, - server_port, - server_username, - server_password, - server_token, - volname) - conf.source_path = self.find_volume_path(volume_id) - return conf - - def connect_volume(self, connection_info, disk_info): - """Connect the volume. Returns xml for libvirt.""" - conf = super(LibvirtScaleIOVolumeDriver, - self).get_config(connection_info, - disk_info) - LOG.info("scaleIO connect volume in scaleio libvirt volume driver") - data = connection_info - LOG.info("scaleIO connect to stuff " + str(data)) - data = connection_info['data'] -# LOG.info("scaleIO connect to joined "+str(data)) -# LOG.info("scaleIO Dsk info "+str(disk_info)) - volname = connection_info['data']['scaleIO_volname'] - # sdc ip here is wrong, probably not retrieved properly in cinder - # driver. Currently not used. - sdc_ip = connection_info['data']['hostIP'] - server_ip = connection_info['data']['serverIP'] - server_port = connection_info['data']['serverPort'] - server_username = connection_info['data']['serverUsername'] - server_password = connection_info['data']['serverPassword'] - server_token = connection_info['data']['serverToken'] - iops_limit = connection_info['data']['iopsLimit'] - bandwidth_limit = connection_info['data']['bandwidthLimit'] - LOG.debug( - "scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}, \ - REST Server username: {3}, REST Server password: {4}, iops limit: \ - {5}, bandwidth limit: {6}".format( - volname, - sdc_ip, - server_ip, - server_username, - server_password, - iops_limit, - bandwidth_limit)) - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: " + str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.NovaException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - -# sdc_id = self._get_client_id(server_ip, server_port, \ -# server_username, server_password, server_token, sdc_ip) - -# params = {'sdcId' : sdc_id} - - params = {'guid': guid, 'allowMultipleMappings': 'TRUE'} - - volume_id = self._get_volume_id( - server_ip, - server_port, - server_username, - server_password, - server_token, - volname) - headers = {'content-type': 'application/json'} - request = "https://" + server_ip + ":" + server_port + \ - "/api/instances/Volume::" + str(volume_id) + "/action/addMappedSdc" - LOG.info("map volume request: %s" % request) - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - server_username, - server_token), - verify=False) - r = self._check_response( - r, - request, - server_ip, - server_port, - server_username, - server_password, - server_token, - False, - params) -# LOG.info("map volume response: %s" % r.text) - - if (r.status_code != 200): - response = r.json() - error_code = 
response['errorCode'] - if (error_code == 81): - msg = ( - "Ignoring error mapping volume %s: volume already mapped" % - (volname)) - LOG.warning(msg) - else: - msg = ( - "Error mapping volume %s: %s" % - (volname, response['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) - -# convert id to hex -# val = int(volume_id) -# id_in_hex = hex((val + (1 << 64)) % (1 << 64)) -# formated_id = id_in_hex.rstrip("L").lstrip("0x") or "0" - formated_id = volume_id - - conf.source_path = self.find_volume_path(formated_id) - conf.source_type = 'block' - -# set QoS settings after map was performed - if (iops_limit is not None or bandwidth_limit is not None): - params = { - 'guid': guid} - if (bandwidth_limit is not None): - params['bandwidthLimitInKbps'] = bandwidth_limit - if (iops_limit is not None): - params['iops_limit'] = iops_limit - request = "https://" + server_ip + ":" + server_port + \ - "/api/instances/Volume::" + \ - str(volume_id) + "/action/setMappedSdcLimits" - LOG.info("set client limit request: %s" % request) - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - server_username, - server_token), - verify=False) - r = self._check_response( - r, - request, - server_ip, - server_port, - server_username, - server_password, - server_token, - False, - params) - if (r.status_code != 200): - response = r.json() - LOG.info("set client limit response: %s" % response) - msg = ( - "Error setting client limits for volume %s: %s" % - (volname, response['message'])) - LOG.error(msg) - - return conf - - def disconnect_volume(self, connection_info, disk_info): - conf = super(LibvirtScaleIOVolumeDriver, - self).disconnect_volume(connection_info, - disk_info) - LOG.info("scaleIO disconnect volume in scaleio libvirt volume driver") - volname = connection_info['data']['scaleIO_volname'] - sdc_ip = connection_info['data']['hostIP'] - server_ip = connection_info['data']['serverIP'] - server_port = connection_info['data']['serverPort'] - server_username = connection_info['data']['serverUsername'] - server_password = connection_info['data']['serverPassword'] - server_token = connection_info['data']['serverToken'] - LOG.debug( - "scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: \ - {2}".format( - volname, - sdc_ip, - server_ip)) - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: " + str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.NovaException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - - params = {'guid': guid} - - headers = {'content-type': 'application/json'} - - volume_id = self._get_volume_id( - server_ip, - server_port, - server_username, - server_password, - server_token, - volname) - request = "https://" + server_ip + ":" + server_port + \ - "/api/instances/Volume::" + \ - str(volume_id) + "/action/removeMappedSdc" - LOG.info("unmap volume request: %s" % request) - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - server_username, - server_token), - verify=False) - r = self._check_response( - r, - request, - server_ip, - server_port, - server_username, - server_password, - server_token, - False, - params) - - if (r.status_code != 200): - response = r.json() - error_code = response['errorCode'] - if 
(error_code == 84): - msg = ( - "Ignoring error unmapping volume %s: volume not mapped" % - (volname)) - LOG.warning(msg) - else: - msg = ( - "Error unmapping volume %s: %s" % - (volname, response['message'])) - LOG.error(msg) - raise exception.NovaException(data=msg) diff --git a/deployment_scripts/puppet/install_scaleio_compute/files/scaleio.filters b/deployment_scripts/puppet/install_scaleio_compute/files/scaleio.filters deleted file mode 100644 index 5179ff7..0000000 --- a/deployment_scripts/puppet/install_scaleio_compute/files/scaleio.filters +++ /dev/null @@ -1,3 +0,0 @@ -[Filters] -drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root - diff --git a/deployment_scripts/puppet/install_scaleio_compute/manifests/centos.pp b/deployment_scripts/puppet/install_scaleio_compute/manifests/centos.pp deleted file mode 100644 index e335360..0000000 --- a/deployment_scripts/puppet/install_scaleio_compute/manifests/centos.pp +++ /dev/null @@ -1,50 +0,0 @@ -# ScaleIO Puppet Manifest for Compute Nodes for Centos - -class install_scaleio_compute::centos -{ - $nova_service = 'openstack-nova-compute' - $mdm_ip_1 = $plugin_settings['scaleio_mdm1'] - $mdm_ip_2 = $plugin_settings['scaleio_mdm2'] - - -#install ScaleIO SDC package - - exec { "install_sdc": - command => "/bin/bash -c \"MDM_IP=$mdm_ip_1,$mdm_ip_2 yum install -y EMC-ScaleIO-sdc\"", - path => "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin", - } - - #Configure nova-compute - ini_subsetting { 'nova-volume_driver': - ensure => present, - path => '/etc/nova/nova.conf', - subsetting_separator => ',', - section => 'libvirt', - setting => 'volume_drivers', - subsetting => 'scaleio=nova.virt.libvirt.scaleiolibvirtdriver.LibvirtScaleIOVolumeDriver', - notify => Service[$nova_service], - } - - file { 'scaleiolibvirtdriver.py': - path => '/usr/lib/python2.6/site-packages/nova/virt/libvirt/scaleiolibvirtdriver.py', - source => 'puppet:///modules/install_scaleio_compute/6.1/scaleiolibvirtdriver.py', - mode => '644', - owner => 'root', - group => 'root', - notify => Service[$nova_service], - } - - file { 'scaleio.filters': - path => '/usr/share/nova/rootwrap/scaleio.filters', - source => 'puppet:///modules/install_scaleio_compute/scaleio.filters', - mode => '644', - owner => 'root', - group => 'root', - notify => Service[$nova_service], - } - - service { $nova_service: - ensure => 'running', - } -} - diff --git a/deployment_scripts/puppet/install_scaleio_compute/manifests/init.pp b/deployment_scripts/puppet/install_scaleio_compute/manifests/init.pp deleted file mode 100644 index b18ddc1..0000000 --- a/deployment_scripts/puppet/install_scaleio_compute/manifests/init.pp +++ /dev/null @@ -1,11 +0,0 @@ -# ScaleIO Puppet Manifest for Compute Nodes - -class install_scaleio_compute -{ - if($::operatingsystem == 'Ubuntu') { - include install_scaleio_compute::ubuntu - }else { - include install_scaleio_compute::centos - } -} - diff --git a/deployment_scripts/puppet/install_scaleio_compute/manifests/ubuntu.pp b/deployment_scripts/puppet/install_scaleio_compute/manifests/ubuntu.pp deleted file mode 100644 index dec85c2..0000000 --- a/deployment_scripts/puppet/install_scaleio_compute/manifests/ubuntu.pp +++ /dev/null @@ -1,84 +0,0 @@ -# ScaleIO Puppet Manifest for Compute Nodes Ubuntu - -class install_scaleio_compute::ubuntu -{ - $nova_service = 'nova-compute' - $mdm_ip_1 = $plugin_settings['scaleio_mdm1'] - $mdm_ip_2 = $plugin_settings['scaleio_mdm2'] - $version = hiera('fuel_version') - -#install ScaleIO SDC package - -if($version == 
'6.1') { - file { 'scaleiolibvirtdriver.py': - path => '/usr/lib/python2.7/dist-packages/nova/virt/libvirt/scaleiolibvirtdriver.py', - source => 'puppet:///modules/install_scaleio_compute/6.1/scaleiolibvirtdriver.py', - mode => '644', - owner => 'root', - group => 'root', - notify => Service[$nova_service], - } - -} -else -{ - file { 'driver.py': - path => '/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py', - source => 'puppet:///modules/install_scaleio_compute/7.0/driver.py', - mode => '644', - owner => 'root', - group => 'root', - notify => Service[$nova_service], - } -> - file { 'volume.py': - path => '/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py', - source => 'puppet:///modules/install_scaleio_compute/7.0/volume.py', - mode => '644', - owner => 'root', - group => 'root', - notify => Service[$nova_service], - } - } - -package{'emc-scaleio-sdc': - ensure => installed, -}-> - exec {"Add MDM to drv-cfg": - command => "bash -c 'echo mdm ${mdm_ip_1},${mdm_ip_2} >> /bin/emc/scaleio/drv_cfg.txt'", - path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ], - }-> - - - exec {"Start SDC": - command => "bash -c '/etc/init.d/scini restart'", - path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ], - }-> - - - - #Configure nova-compute - ini_subsetting { 'nova-volume_driver': - ensure => present, - path => '/etc/nova/nova.conf', - subsetting_separator => ',', - section => 'libvirt', - setting => 'volume_drivers', - subsetting => 'scaleio=nova.virt.libvirt.scaleiolibvirtdriver.LibvirtScaleIOVolumeDriver', - notify => Service[$nova_service], - }-> - - - file { 'scaleio.filters': - path => '/etc/nova/rootwrap.d/scaleio.filters', - source => 'puppet:///modules/install_scaleio_compute/scaleio.filters', - mode => '644', - owner => 'root', - group => 'root', - notify => Service[$nova_service], - }~> - - service { $nova_service: - ensure => 'running', - } -} - diff --git a/deployment_scripts/puppet/install_scaleio_controller/files/6.1/scaleio.py b/deployment_scripts/puppet/install_scaleio_controller/files/6.1/scaleio.py deleted file mode 100644 index cdc9693..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/files/6.1/scaleio.py +++ /dev/null @@ -1,1037 +0,0 @@ -# Copyright (c) 2013 EMC Corporation -# All Rights Reserved -from requests.exceptions import ConnectionError -from cinder.version import version_info - -# This software contains the intellectual property of EMC Corporation -# or is licensed to EMC Corporation from third parties. Use of this -# software and the intellectual property contained therein is expressly -# limited to the terms and conditions of the License Agreement under which -# it is provided by or on behalf of EMC. - -""" -Driver for EMC ScaleIO based on ScaleIO remote CLI. 
- -""" - -import requests -import base64 -import re -import os -import time -import sys -import ConfigParser -import json -import urllib -from cinder import exception -from cinder.openstack.common import log as logging -from cinder.volume import driver -from cinder.image import image_utils -from cinder import utils -from cinder.openstack.common import processutils -from cinder import context -from cinder.volume import volume_types -from cinder import version -from oslo.config import cfg -from xml.dom.minidom import parseString - -LOG = logging.getLogger(__name__) - -opt = cfg.StrOpt('cinder_scaleio_config_file', - default='/etc/cinder/cinder_scaleio.config', - help='use this file for cinder scaleio driver config data') -CONFIG_SECTION_NAME = 'scaleio' -STORAGE_POOL_NAME = 'sio:sp_name' -STORAGE_POOL_ID = 'sio:sp_id' -PROTECTION_DOMAIN_NAME = 'sio:pd_name' -PROTECTION_DOMAIN_ID = 'sio:pd_id' -PROVISIONING_KEY = 'sio:provisioning' -IOPS_LIMIT_KEY = 'sio:iops_limit' -BANDWIDTH_LIMIT = 'sio:bandwidth_limit' - -BLOCK_SIZE=8 -OK_STATUS_CODE=200 -VOLUME_NOT_FOUND_ERROR=3 -VOLUME_NOT_MAPPED_ERROR=84 -VOLUME_ALREADY_MAPPED_ERROR=81 - - - -class ScaleIODriver(driver.VolumeDriver): - """EMC ScaleIO Driver.""" - server_ip = None - server_username = None - server_password = None - server_token = None - storage_pool_name = None - storage_pool_id = None - protection_domain_name = None - protection_domain_id = None - config = None - - VERSION = "2.0" - - def __init__(self, *args, **kwargs): - super(ScaleIODriver, self).__init__(*args, **kwargs) - self.configuration.append_config_values([opt]) - - self.config = ConfigParser.ConfigParser() - filename = self.configuration.cinder_scaleio_config_file - dataset = self.config.read(filename) - # throw exception in case the config file doesn't exist - if (len(dataset) == 0): - raise RuntimeError("Failed to find configuration file") - - self.server_ip = self._get_rest_server_ip(self.config) - LOG.info("REST Server IP: %s" % self.server_ip) - self.server_port = self._get_rest_server_port(self.config) - LOG.info("REST Server port: %s" % self.server_port) - self.server_username = self._get_rest_server_username(self.config) - LOG.info("REST Server username: %s" % self.server_username) - self.server_password = self._get_rest_server_password(self.config) - LOG.info("REST Server password: %s" % self.server_password) - self.verify_server_certificate = self._get_verify_server_certificate(self.config) - LOG.info("verify server's certificate: %s" % self.verify_server_certificate) - if (self.verify_server_certificate == 'True'): - self.server_certificate_path = self._get_certificate_path(self.config) - - self.storage_pools = self._get_storage_pools(self.config); - LOG.info("storage pools names: %s" % self.storage_pools) - - self.storage_pool_name = self._get_storage_pool_name(self.config) - LOG.info("storage pool name: %s" % self.storage_pool_name) - self.storage_pool_id = self._get_storage_pool_id(self.config) - LOG.info("storage pool id: %s" % self.storage_pool_id) - - if (self.storage_pool_name == None and self.storage_pool_id == None): - LOG.warning("No storage pool name or id was found, using default storage pool") -# self.storage_pool_name = 'Default' - self.protection_domain_name = self._get_protection_domain_name(self.config) - LOG.info("protection domain name: %s" % self.protection_domain_name) - self.protection_domain_id = self._get_protection_domain_id(self.config) - LOG.info("protection domain id: %s" % self.protection_domain_id) - if (self.protection_domain_name 
== None and self.protection_domain_id == None): - LOG.warning("No protection domain name or id was specified in configuration") -# raise RuntimeError("Must specify protection domain name or id") - if (self.protection_domain_name != None and self.protection_domain_id != None): - raise RuntimeError("Cannot specify both protection domain name and protection domain id") - - - - def _get_rest_server_ip(self, config): - try: - server_ip = config.get(CONFIG_SECTION_NAME, 'rest_server_ip') - if server_ip == '' or server_ip is None: - LOG.debug("REST Server IP not found") - return server_ip - except: - raise RuntimeError("REST Server ip must by specified") - - def _get_rest_server_port(self, config): - warn_msg = "REST port is not set, using default 443" - try: - server_port = config.get(CONFIG_SECTION_NAME, 'rest_server_port') - if server_port == '' or server_port is None: - LOG.warning(warn_msg) - server_port = '443' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - server_port = '443' - return server_port - - def _get_rest_server_username(self, config): - try: - server_username = config.get(CONFIG_SECTION_NAME, 'rest_server_username') - if server_username == '' or server_username is None: - raise RuntimeError("REST Server username not found in conf file") - return server_username - except: - raise RuntimeError("REST Server username must be specified") - - def _get_rest_server_password(self, config): - try: - server_password = config.get(CONFIG_SECTION_NAME, 'rest_server_password') - return server_password - except: - raise RuntimeError("REST Server password must be specified") - - def _get_verify_server_certificate(self, config): - warn_msg = "verify certificate is not set, using default of false" - try: - verify_server_certificate = config.get(CONFIG_SECTION_NAME, 'verify_server_certificate') - if verify_server_certificate == '' or verify_server_certificate is None: - LOG.warning(warn_msg) - verify_server_certificate = 'False' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - verify_server_certificate = 'False' - return verify_server_certificate - - def _get_certificate_path(self, config): - try: - certificate_path = config.get(CONFIG_SECTION_NAME, 'server_certificate_path') - return certificate_path - except: - raise RuntimeError("Path to REST server's certificate must be specified") - - def _get_round_capacity(self, config): - warn_msg = "round_volume_capacity is not set, using default of True" - try: - round_volume_capacity = self.config.get(CONFIG_SECTION_NAME, 'round_volume_capacity') - if (round_volume_capacity == '' or round_volume_capacity is None): - LOG.warning(warn_msg) - round_volume_capacity = 'True' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - round_volume_capacity = 'True' - return round_volume_capacity - - def _get_force_delete(self, config): - warn_msg = "force_delete is not set, using default of False" - try: - force_delete = self.config.get(CONFIG_SECTION_NAME, 'force_delete') - if (force_delete == '' or force_delete is None): - LOG.warning(warn_msg) - force_delete = 'False' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - force_delete = 'False' - return force_delete - - def _get_unmap_volume_before_deletion(self, config): - warn_msg = "unmap_volume_before_deletion is not set, using default of False" - try: - unmap_before_delete = self.config.get(CONFIG_SECTION_NAME, 'unmap_volume_before_deletion') - if (unmap_before_delete == '' or unmap_before_delete is None): - LOG.warning(warn_msg) - unmap_before_delete = 'False' - except 
ConfigParser.Error as e: - LOG.warning(warn_msg) - unmap_before_delete = 'False' - return unmap_before_delete - - def _get_protection_domain_id(self, config): - warn_msg = "protection domain id not found" - try: - protection_domain_id = config.get(CONFIG_SECTION_NAME, 'protection_domain_id') - if protection_domain_id == '' or protection_domain_id is None: - LOG.warning(warn_msg) - protection_domain_id = None - except ConfigParser.Error as e: - LOG.warning(warn_msg) - protection_domain_id = None - return protection_domain_id; - - def _get_protection_domain_name(self, config): - warn_msg = "protection domain name not found" - try: - protection_domain_name = config.get(CONFIG_SECTION_NAME, 'protection_domain_name') - if protection_domain_name == '' or protection_domain_name is None: - LOG.warning(warn_msg) - protection_domain_name = None - except ConfigParser.Error as e: - LOG.warning(warn_msg) - protection_domain_name = None - return protection_domain_name; - - def _get_storage_pools(self, config): - - storage_pools = [e.strip() for e in config.get(CONFIG_SECTION_NAME, 'storage_pools').split(',')] - # SPYS = [e.strip() for e in parser.get('global', 'spys').split(',')] - - # storage_pools = config.get(CONFIG_SECTION_NAME, 'storage_pools') - LOG.warning("storage pools are {0}".format(storage_pools)) - return storage_pools; - - def _get_storage_pool_name(self, config): - warn_msg = "storage pool name not found" - try: - storage_pool_name = config.get(CONFIG_SECTION_NAME, 'storage_pool_name') - if storage_pool_name == '' or storage_pool_name is None: - LOG.warning(warn_msg) - storage_pool_name = None - except ConfigParser.Error as e: - LOG.warning(warn_msg) - storage_pool_name = None - return storage_pool_name; - - def _get_storage_pool_id(self, config): - warn_msg = "storage pool id not found" - try: - storage_pool_id = config.get(CONFIG_SECTION_NAME, 'storage_pool_id') - if storage_pool_id == '' or storage_pool_id is None: - LOG.warning(warn_msg) - storage_pool_id = None - except ConfigParser.Error as e: - LOG.warning(warn_msg) - storage_pool_id = None - return storage_pool_id; - - def _find_storage_pool_id_from_storage_type(self, storage_type): - try: - pool_id = storage_type[STORAGE_POOL_ID] - except KeyError: - # Default to what was configured in configuration file if not defined - pool_id = None - return pool_id - - def _find_storage_pool_name_from_storage_type(self, storage_type): - try: - name = storage_type[STORAGE_POOL_NAME] - except KeyError: - # Default to what was configured in configuration file if not defined - name = None - return name - - def _find_protection_domain_id_from_storage_type(self, storage_type): - try: - domain_id = storage_type[PROTECTION_DOMAIN_ID] - except KeyError: - # Default to what was configured in configuration file if not defined - domain_id = None - return domain_id - - def _find_protection_domain_name_from_storage_type(self, storage_type): - try: - domain_name = storage_type[PROTECTION_DOMAIN_NAME] - except KeyError: - # Default to what was configured in configuration file if not defined - domain_name = None - return domain_name - - def _find_provisioning_type(self, storage_type): - try: - provisioning_type = storage_type[PROVISIONING_KEY] - except KeyError: - provisioning_type = None - return provisioning_type - - def _find_iops_limit(self, storage_type): - try: - iops_limit = storage_type[IOPS_LIMIT_KEY] - except KeyError: - iops_limit = None - return iops_limit - - def _find_bandwidth_limit(self, storage_type): - try: - bandwidth_limit = 
storage_type[BANDWIDTH_LIMIT] - except KeyError: - bandwidth_limit = None - return bandwidth_limit - - - def check_for_setup_error(self): - pass - - def id_to_base64(self, id): - # Base64 encode the id to get a volume name less than 32 characters due to ScaleIO limitation - name = str(id).translate(None, "-") - name = base64.b16decode(name.upper()) - encoded_name = base64.b64encode(name) - LOG.debug("Converted id {0} to scaleio name {1}".format(id, encoded_name)) - return encoded_name - - def create_volume(self, volume): - """Creates a scaleIO volume.""" - self._check_volume_size(volume.size) - - volname = self.id_to_base64(volume.id) - - storage_type = self._get_volumetype_extraspecs(volume) - LOG.info("volume type in create volume is %s" % storage_type) - storage_pool_name = self._find_storage_pool_name_from_storage_type(storage_type) - LOG.info("storage pool name: %s" % storage_pool_name) - storage_pool_id = self._find_storage_pool_id_from_storage_type(storage_type) - LOG.info("storage pool id: %s" % storage_pool_id) - protection_domain_id = self._find_protection_domain_id_from_storage_type(storage_type) - LOG.info("protection domain id: %s" % protection_domain_id) - protection_domain_name = self._find_protection_domain_name_from_storage_type(storage_type) - LOG.info("protection domain name: %s" % protection_domain_name) - provisioning_type = self._find_provisioning_type(storage_type) - - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - - if (storage_pool_name != None and storage_pool_id != None): - raise RuntimeError("Cannot specify both storage pool name and storage pool id") - if (storage_pool_name != None): - self.storage_pool_name = storage_pool_name - self.storage_pool_id = None - if (storage_pool_id != None): - self.storage_pool_id = storage_pool_id - self.storage_pool_name = None - if (protection_domain_name != None and protection_domain_id != None): - raise RuntimeError("Cannot specify both protection domain name and protection domain id") - if (protection_domain_name != None): - self.protection_domain_name = protection_domain_name - self.protection_domain_id = None - if (protection_domain_id != None): - self.protection_domain_id = protection_domain_id - self.protection_domain_name = None - if (self.protection_domain_name == None and self.protection_domain_id == None): - raise RuntimeError("Must specify protection domain name or id") - - domain_id = self.protection_domain_id - if (domain_id == None): -# TODO: add /api - request = "https://" + self.server_ip + ":" + self.server_port + "/api/types/Domain/instances/getByName::" + self.protection_domain_name - LOG.info("ScaleIO get domain id by name request: %s" % request) - r = requests.get(request, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) -# LOG.info("Get domain by name response: %s" % r.text) - domain_id = r.json() - if (domain_id == '' or domain_id is None): - msg = ("Domain with name %s wasn't found " % (self.protection_domain_name)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in domain_id): - msg = ("Error getting domain id from name %s: %s " % (self.protection_domain_name, domain_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("domain id is %s" % domain_id) - pool_name = self.storage_pool_name - pool_id = self.storage_pool_id - if (pool_name != None): - # 
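
The id_to_base64 helper in the deleted driver works around ScaleIO's 31-character limit on volume names by compressing the Cinder UUID. A minimal standalone sketch of the same idea (Python 3 syntax; the deleted driver itself targets Python 2, and the function name here is illustrative):

import base64
import uuid

def cinder_id_to_scaleio_name(volume_id):
    # Strip the dashes from the Cinder UUID (32 hex characters remain),
    # decode them to 16 raw bytes, then base64-encode those bytes.
    # 16 bytes encode to 24 characters, comfortably under ScaleIO's
    # 31-character volume-name limit.
    hex_chars = str(volume_id).replace("-", "")
    raw_bytes = base64.b16decode(hex_chars.upper())
    return base64.b64encode(raw_bytes).decode("ascii")

# Example: any Cinder volume UUID maps to a short ScaleIO name.
print(cinder_id_to_scaleio_name(uuid.uuid4()))
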
TODO: add /api - request = "https://" + self.server_ip + ":" + self.server_port + "/api/types/Pool/instances/getByName::" + domain_id + "," + pool_name - LOG.info("ScaleIO get pool id by name request: %s" % request) - r = requests.get(request, auth=(self.server_username, self.server_token), verify=verify_cert) - pool_id = r.json() - if (pool_id == '' or pool_id is None): - msg = ("Pool with name %s wasn't found in domain %s " % (pool_name, domain_id)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in pool_id): - msg = ("Error getting pool id from name %s: %s " % (pool_name, pool_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("pool id is %s" % pool_id) - if (provisioning_type == 'thin'): - provisioning = "ThinProvisioned" -# default volume type is thick - else: - provisioning = "ThickProvisioned" - - LOG.info("ScaleIO create volume command ") - volume_size_kb = volume.size * 1048576 - params = {'protectionDomainId' : domain_id, 'volumeSizeInKb' : str(volume_size_kb), 'name' : volname, 'volumeType' : provisioning} - # add pool id to request params if it was specified, otherwise the default storage pool will be used. - if (pool_id != None): - params['storagePoolId'] = pool_id - LOG.info("Params for add volume request: %s" % params) - headers = {'content-type': 'application/json'} - - r = requests.post("https://" + self.server_ip + ":" + self.server_port + "/api/types/Volume/instances", data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - response = r.json() - LOG.info("add volume response: %s" % response) - - if (r.status_code != OK_STATUS_CODE and "errorCode" in response): - msg = ("Error creating volume: %s " % (response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("Created volume: " + volname) - - def _check_volume_size(self, size): - if (size % 8 != 0): - round_volume_capacity = self._get_round_capacity(self.config) - if (round_volume_capacity == 'False'): - exception_msg = ("Cannot create volume of size %s (not multiply of 8GB)" % (size)) - LOG.error(exception_msg) - raise exception.VolumeBackendAPIException(data=exception_msg) - - - - def create_snapshot(self, snapshot): - """Creates a scaleio snapshot.""" - volname = self.id_to_base64(snapshot.volume_id) - snapname = self.id_to_base64(snapshot.id) - self._snapshot_volume(volname, snapname) - - - def _snapshot_volume(self, volname, snapname): - vol_id = self._get_volume_id(volname); - params = {'snapshotDefs' : [{"volumeId" : vol_id, "snapshotName" : snapname}]} - headers = {'content-type': 'application/json'} - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - request = "https://" + self.server_ip + ":" + self.server_port + "/api/instances/System/action/snapshotVolumes" - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - response = r.json() - LOG.info("snapshot volume response: %s" % response) - if (r.status_code != OK_STATUS_CODE and "errorCode" in response): - msg = ("Failed creating snapshot for volume %s: %s" % (volname, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - def _check_response(self, response, request): - if (response.status_code == 401 
or response.status_code == 403): - LOG.info("Token is invalid, going to re-login and get a new one") - login_request = "https://" + self.server_ip + ":" + self.server_port + "/api/login" - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.get(login_request, auth=(self.server_username, self.server_password), verify=verify_cert) - token = r.json() - self.server_token = token - #repeat request with valid token - LOG.info("going to perform request again {0} with valid token".format(request)) - res = requests.get(request, auth=(self.server_username, self.server_token), verify=verify_cert) - return res - return response - - - def create_volume_from_snapshot(self, volume, snapshot): - """Creates a volume from a snapshot.""" - #We interchange 'volume' and 'snapshot' because in ScaleIO snapshot is a volume: - #once a snapshot is generated it becomes a new unmapped volume in the system - #and the user may manipulate it in the same manner as any other volume exposed by the system - volname = self.id_to_base64(snapshot.id) - snapname = self.id_to_base64(volume.id) - LOG.info("ScaleIO create volume from snapshot: snapshot {0} to volume {1}".format(volname, snapname)) - self._snapshot_volume(volname, snapname) - - def _get_volume_id(self, volname): -# add /api - volname_encoded = urllib.quote(volname, '') - volname_double_encoded = urllib.quote(volname_encoded, '') - LOG.info("volume name after double encoding is %s " % volname_double_encoded) - - request = "https://" + self.server_ip + ":" + self.server_port + "/api/types/Volume/instances/getByName::" + volname_double_encoded - LOG.info("ScaleIO get volume id by name request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.get(request, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - - vol_id = r.json() - - if (vol_id == '' or vol_id is None): - msg = ("Volume with name %s wasn't found " % (volname)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in vol_id): - msg = ("Error getting volume id from name %s: %s" % (volname, vol_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("volume id is %s" % vol_id) - return vol_id - - def extend_volume(self, volume, new_size): - """Extends the size of an existing available ScaleIO volume.""" - - self._check_volume_size(new_size) - - volname = self.id_to_base64(volume.id) - - LOG.info("ScaleIO extend volume: volume {0} to size {1}".format(volname, new_size)) - - vol_id = self._get_volume_id(volname) - - request = "https://" + self.server_ip + ":" + self.server_port + "/api/instances/Volume::" + vol_id + "/action/setVolumeSize" - LOG.info("change volume capacity request: %s" % request) - volume_new_size = new_size - params = {'sizeInGB' : str(volume_new_size)} - headers = {'content-type': 'application/json'} - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) -# LOG.info("change volume response: %s" % r.text) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - msg = ("Error extending 
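
The _check_response method above implements the gateway's token-refresh handshake: a 401/403 means the session token has expired, so the driver logs in again with the admin password, keeps the returned token, and replays the original request using that token as the basic-auth password. A hedged standalone sketch of that retry flow (endpoint paths follow the deleted code; the requests usage is illustrative, not a verified ScaleIO API reference):

import requests

def get_with_relogin(url, username, password, token, verify=False):
    # First attempt, using the cached token as the basic-auth "password".
    r = requests.get(url, auth=(username, token), verify=verify)
    if r.status_code in (401, 403):
        # Token expired: re-login with the real password to obtain a new token...
        base = url.split("/api/", 1)[0]
        login = requests.get(base + "/api/login",
                             auth=(username, password), verify=verify)
        token = login.json()
        # ...and replay the original request with the fresh token.
        r = requests.get(url, auth=(username, token), verify=verify)
    return r, token
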
volume %s: %s" % (volname, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - - def create_cloned_volume(self, volume, src_vref): - """Creates a cloned volume.""" - volname = self.id_to_base64(src_vref.id) - snapname = self.id_to_base64(volume.id) - LOG.info("ScaleIO create cloned volume: volume {0} to volume {1}".format(volname, snapname)) - self._snapshot_volume(volname, snapname) - - - def delete_volume(self, volume): - """Deletes a logical volume""" - volname = self.id_to_base64(volume.id) - self._delete_volume(volname) - - - - def _delete_volume(self, volname): - volname_encoded = urllib.quote(volname, '') - volname_double_encoded = urllib.quote(volname_encoded, '') -# volname = volname.replace('/', '%252F') - LOG.info("volume name after double encoding is %s " % volname_double_encoded) - - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - - #convert volume name to id - request = "https://" + self.server_ip + ":" + self.server_port + "/api/types/Volume/instances/getByName::" + volname_double_encoded - LOG.info("ScaleIO get volume id by name request: %s" % request) - r = requests.get(request, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - LOG.info("get by name response: %s" % r.text) - vol_id = r.json() - LOG.info("ScaleIO volume id to delete is %s" % vol_id) - - if (r.status_code != OK_STATUS_CODE and "errorCode" in vol_id): - msg = ("Error getting volume id from name %s: %s " % (volname, vol_id['message'])) - LOG.error(msg) - - error_code = vol_id['errorCode'] - if (error_code == VOLUME_NOT_FOUND_ERROR): - force_delete = self._get_force_delete(self.config) - if (force_delete == 'True'): - msg = ("Ignoring error in delete volume %s: volume not found due to force delete settings" % (volname)) - LOG.warning(msg) - return - - raise exception.VolumeBackendAPIException(data=msg) - - headers = {'content-type': 'application/json'} - - unmap_before_delete = self._get_unmap_volume_before_deletion(self.config) - # ensure that the volume is not mapped to any SDC before deletion in case unmap_before_deletion is enabled - if (unmap_before_delete == 'True'): - params = {'allSdcs' : ''} - request = "https://" + self.server_ip + ":" + self.server_port + "/api/instances/Volume::" + str(vol_id) + "/action/removeMappedSdc" - LOG.info("Trying to unmap volume from all sdcs before deletion: %s" % request) - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - LOG.debug("Unmap volume response: %s " % r.text) - - - LOG.info("ScaleIO delete volume command ") - params = {'removeMode' : 'ONLY_ME'} - r = requests.post("https://" + self.server_ip + ":" + self.server_port + "/api/instances/Volume::" + str(vol_id) + "/action/removeVolume", data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - -# LOG.info("delete volume response: %s" % r.json()) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == 78): - force_delete = self._get_force_delete(self.config) - if (force_delete == 'True'): - msg = ("Ignoring error in delete volume %s: volume not found due to force delete settings" % (vol_id)) - LOG.warning(msg) - else: - msg = ("Error deleting volume %s: 
volume not found" % (vol_id)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - else: - msg = ("Error deleting volume %s: %s" % (vol_id, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - - def delete_snapshot(self, snapshot): - """Deletes a ScaleIO snapshot.""" - snapname = self.id_to_base64(snapshot.id) - LOG.info("ScaleIO delete snapshot") - self._delete_volume(snapname) - - - def initialize_connection(self, volume, connector): - """Initializes the connection and returns connection info. - - The scaleio driver returns a driver_volume_type of 'scaleio'. """ - - LOG.debug("connector is {0} ".format(connector)) - volname = self.id_to_base64(volume.id) - properties = {} - - properties['scaleIO_volname'] = volname - properties['hostIP'] = connector['ip'] - properties['serverIP'] = self.server_ip - properties['serverPort'] = self.server_port - properties['serverUsername'] = self.server_username - properties['serverPassword'] = self.server_password - properties['serverToken'] = self.server_token - - storage_type = self._get_volumetype_extraspecs(volume) - LOG.info("volume type in create volume is %s" % storage_type) - iops_limit = self._find_iops_limit(storage_type) - LOG.info("iops limit is: %s" % iops_limit) - bandwidth_limit = self._find_bandwidth_limit(storage_type) - LOG.info("bandwidth limit is: %s" % bandwidth_limit) - properties['iopsLimit'] = iops_limit - properties['bandwidthLimit'] = bandwidth_limit - - return { - 'driver_volume_type': 'scaleio', - 'data': properties - } - - def terminate_connection(self, volume, connector, **kwargs): - LOG.info("scaleio driver terminate connection") - pass - - def _update_volume_stats(self): - stats = {} - - backend_name = self.configuration.safe_get('volume_backend_name') - stats['volume_backend_name'] = backend_name or 'scaleio' - stats['vendor_name'] = 'EMC' - stats['driver_version'] = self.VERSION - stats['storage_protocol'] = 'scaleio' - stats['total_capacity_gb'] = 'unknown' - stats['free_capacity_gb'] = 'unknown' - stats['reserved_percentage'] = 0 - stats['QoS_support'] = False - - pools = [] - - headers = {'content-type': 'application/json'} - - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - - for sp_name in self.storage_pools: - splitted_name = sp_name.split(':') - domain_name = splitted_name[0] - pool_name = splitted_name[1] - LOG.debug("domain name is {0}, pool name is {1}".format(domain_name, pool_name)) - #get domain id from name - request = "https://" + self.server_ip + ":" + self.server_port + "/api/types/Domain/instances/getByName::" + domain_name - LOG.info("ScaleIO get domain id by name request: %s" % request) - LOG.info("username: %s, password: %s, verify_cert: %s " % (self.server_username, self.server_token, verify_cert)) - r = requests.get(request, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - LOG.info("Get domain by name response: %s" % r.text) - domain_id = r.json() - if (domain_id == '' or domain_id is None): - msg = ("Domain with name %s wasn't found " % (self.protection_domain_name)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in domain_id): - msg = ("Error getting domain id from name %s: %s " % (self.protection_domain_name, domain_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - LOG.info("domain id 
is %s" % domain_id) - - #get pool id from name - request = "https://" + self.server_ip + ":" + self.server_port + "/api/types/Pool/instances/getByName::" + domain_id + "," + pool_name - LOG.info("ScaleIO get pool id by name request: %s" % request) - r = requests.get(request, auth=(self.server_username, self.server_token), verify=verify_cert) - pool_id = r.json() - if (pool_id == '' or pool_id is None): - msg = ("Pool with name %s wasn't found in domain %s " % (pool_name, domain_id)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in pool_id): - msg = ("Error getting pool id from name %s: %s " % (pool_name, pool_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - LOG.info("pool id is %s" % pool_id) - - request = "https://" + self.server_ip + ":" + self.server_port + "/api/types/StoragePool/instances/action/querySelectedStatistics" - params = {'ids' : [pool_id], 'properties' : ["capacityInUseInKb","capacityLimitInKb"]} - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - response = r.json() - LOG.info("query capacity stats response: %s" % response) - for res in response.itervalues(): - capacityInUse = res['capacityInUseInKb'] - capacityLimit = res['capacityLimitInKb'] - total_capacity_gb = capacityLimit/1048576 - used_capacity_gb = capacityInUse/1048576 - free_capacity_gb = total_capacity_gb - used_capacity_gb - LOG.info("free capacity of pool {0} is: {1}, total capacity: {2}".format(pool_name, free_capacity_gb, total_capacity_gb)) - pool = {'pool_name': sp_name, - 'total_capacity_gb': total_capacity_gb, - 'free_capacity_gb': free_capacity_gb, -# 'total_capacity_gb': 100000, -# 'free_capacity_gb': 100000, - 'QoS_support': False, - 'reserved_percentage': 0 - } - - pools.append(pool) - - stats['volume_backend_name'] = backend_name or 'scaleio' - stats['vendor_name'] = 'EMC' - stats['driver_version'] = self.VERSION - stats['storage_protocol'] = 'scaleio' - # Use zero capacities here so we always use a pool. - stats['total_capacity_gb'] = 0 - stats['free_capacity_gb'] = 0 - stats['reserved_percentage'] = 0 - stats['QoS_support'] = False - stats['pools'] = pools - - LOG.info("Backend name is "+stats["volume_backend_name"]) - - self._stats = stats - - def get_volume_stats(self, refresh=False): - """Get volume stats. - - If 'refresh' is True, run update the stats first. - """ - if refresh: - self._update_volume_stats() - - return self._stats - - - def _get_volumetype_extraspecs(self, volume): - specs = {} - ctxt = context.get_admin_context() - type_id = volume['volume_type_id'] - if type_id is not None: - volume_type = volume_types.get_volume_type(ctxt, type_id) - specs = volume_type.get('extra_specs') - for key, value in specs.iteritems(): - specs[key] = value - - return specs - - def find_volume_path(self, volume_id): - - LOG.info("looking for volume %s" % volume_id) - #look for the volume in /dev/disk/by-id directory - disk_filename = "" - tries = 0 - while not disk_filename: - if (tries > 15): - raise exception.VolumeBackendAPIException("scaleIO volume {0} not found at expected path ".format(volume_id)) - by_id_path = "/dev/disk/by-id" - if not os.path.isdir(by_id_path): - LOG.warn("scaleIO volume {0} not yet found (no directory /dev/disk/by-id yet). 
Try number: {1} ".format(volume_id, tries)) - tries = tries + 1 - time.sleep(1) - continue - filenames = os.listdir(by_id_path) - LOG.warning("Files found in {0} path: {1} ".format(by_id_path, filenames)) - for filename in filenames: - if (filename.startswith("emc-vol") and filename.endswith(volume_id)): - disk_filename = filename - if not disk_filename: - LOG.warn("scaleIO volume {0} not yet found. Try number: {1} ".format(volume_id, tries)) - tries = tries + 1 - time.sleep(1) - - if (tries != 0): - LOG.warning("Found scaleIO device {0} after {1} retries ".format(disk_filename, tries)) - full_disk_name = by_id_path + "/" + disk_filename - LOG.warning("Full disk name is " + full_disk_name) - return full_disk_name -# path = os.path.realpath(full_disk_name) -# LOG.warning("Path is " + path) -# return path - - - def _get_client_id(self, server_ip, server_username, server_password, sdc_ip): - request = "https://" + server_ip + ":" + self.server_port + "/api/types/Client/instances/getByIp::" + sdc_ip + "/" - LOG.info("ScaleIO get client id by ip request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.get(request, auth=(server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - - sdc_id = r.json() - if (sdc_id == '' or sdc_id is None): - msg = ("Client with ip %s wasn't found " % (sdc_ip)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != 200 and "errorCode" in sdc_id): - msg = ("Error getting sdc id from ip %s: %s " % (sdc_ip, sdc_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - LOG.info("ScaleIO sdc id is %s" % sdc_id) - return sdc_id - - - def _attach_volume(self, volume, sdc_ip): - # We need to make sure we even *have* a local path - LOG.info("ScaleIO attach volume in scaleio cinder driver") - volname = self.id_to_base64(volume.id) - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: "+str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - - params = {'guid' : guid} - - volume_id = self._get_volume_id(volname) - headers = {'content-type': 'application/json'} - request = "https://" + self.server_ip + ":" + self.server_port + "/api/instances/Volume::" + str(volume_id) + "/action/addMappedSdc" - LOG.info("map volume request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) -# LOG.info("map volume response: %s" % r.text) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == VOLUME_ALREADY_MAPPED_ERROR): - msg = ("Ignoring error mapping volume %s: volume already mapped" % (volname)) - LOG.warning(msg) - else: - msg = ("Error mapping volume %s: %s" % (volname, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - -# convert id to hex -# 
val = int(volume_id) -# id_in_hex = hex((val + (1 << 64)) % (1 << 64)) -# formated_id = id_in_hex.rstrip("L").lstrip("0x") or "0" - formated_id = volume_id - - return self.find_volume_path(formated_id) - - def _detach_volume(self, volume, sdc_ip): - LOG.info("ScaleIO detach volume in scaleio cinder driver") - volname = self.id_to_base64(volume.id) - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: "+str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - - params = {'guid' : guid} - headers = {'content-type': 'application/json'} - - volume_id = self._get_volume_id(volname) - request = "https://" + self.server_ip + ":" + self.server_port + "/api/instances/Volume::" + str(volume_id) + "/action/removeMappedSdc" - LOG.info("unmap volume request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.post(request, data=json.dumps(params), headers=headers, auth=(self.server_username, self.server_token), verify=verify_cert) - r = self._check_response(r, request) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == VOLUME_NOT_MAPPED_ERROR): - msg = ("Ignoring error unmapping volume %s: volume not mapped" % (volname)) - LOG.warning(msg) - else: - msg = ("Error unmapping volume %s: %s" % (volname, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - - def copy_image_to_volume(self, context, volume, image_service, image_id): - """Fetch the image from image_service and write it to the volume.""" - LOG.info("ScaleIO copy_image_to_volume volume: "+str(volume) + " image service: " + str(image_service) + " image id: " + str(image_id)) - properties = utils.brick_get_connector_properties() - sdc_ip = properties['ip'] - LOG.debug("SDC ip is: {0}".format(sdc_ip)) - - cinder_version = version.version_info.version_string() - # cinder_version = str(version.version_info) - LOG.debug("Cinder version is %s " % cinder_version) -# check if openstack version ia icehouse - if (cinder_version.startswith("2014")): - LOG.info("Cinder version is Icehouse ") - icehouse = 'True' - else: - LOG.info("Cinder version is Havana or older ") - icehouse = 'False' - - try: - if (icehouse == 'True'): - image_utils.fetch_to_raw(context, - image_service, - image_id, - self._attach_volume(volume, sdc_ip), - BLOCK_SIZE, - size=volume['size']) - else: - image_utils.fetch_to_raw(context, - image_service, - image_id, - self._attach_volume(volume, sdc_ip)) - - finally: - self._detach_volume(volume, sdc_ip) - - def copy_volume_to_image(self, context, volume, image_service, image_meta): - """Copy the volume to the specified image.""" - LOG.info("ScaleIO copy_volume_to_image volume: "+str(volume) + " image service: " + str(image_service) + " image meta: " + str(image_meta)) - properties = utils.brick_get_connector_properties() - sdc_ip = properties['ip'] - LOG.debug("SDC ip is: {0}".format(sdc_ip)) - try: - image_utils.upload_volume(context, - image_service, - image_meta, - self._attach_volume (volume, sdc_ip)) - finally: - self._detach_volume(volume, sdc_ip) - - - def 
ensure_export(self, context, volume): - """Driver entry point to get the export info for an existing volume.""" - pass - - def create_export(self, context, volume): - """Driver entry point to get the export info for a new volume.""" - pass - - def remove_export(self, context, volume): - """Driver entry point to remove an export for a volume.""" - pass - - def check_for_export(self, context, volume_id): - """Make sure volume is exported.""" - pass - diff --git a/deployment_scripts/puppet/install_scaleio_controller/files/7.0/scaleio.py b/deployment_scripts/puppet/install_scaleio_controller/files/7.0/scaleio.py deleted file mode 100644 index 30f1638..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/files/7.0/scaleio.py +++ /dev/null @@ -1,1298 +0,0 @@ -# -# Copyright (c) 2011-2015 EMC Corporation -# All Rights Reserved -# EMC Confidential: Restricted Internal Distribution -# 4ebcffbc4faf87cb4da8841bbf214d32f045c8a8.ScaleIO -# - -from requests.exceptions import ConnectionError -from cinder.version import version_info - -# This software contains the intellectual property of EMC Corporation -# or is licensed to EMC Corporation from third parties. Use of this -# software and the intellectual property contained therein is expressly -# limited to the terms and conditions of the License Agreement under which -# it is provided by or on behalf of EMC. - -#OpenStack Version = Kilo -""" -Driver for EMC ScaleIO - -""" - -import requests -import base64 -import re -import os -import time -import sys -import ConfigParser -import json -import urllib -from cinder import exception -from oslo_log import log as logging -from cinder.volume import driver -from cinder.image import image_utils -from cinder import utils -from oslo_concurrency import processutils -from cinder import context -from cinder.volume import volume_types -from cinder import version -from oslo.config import cfg -from xml.dom.minidom import parseString - -LOG = logging.getLogger(__name__) - -opt = cfg.StrOpt('cinder_scaleio_config_file', - default='/etc/cinder/cinder_scaleio.config', - help='use this file for cinder scaleio driver config data') -CONFIG_SECTION_NAME = 'scaleio' -STORAGE_POOL_NAME = 'sio:sp_name' -STORAGE_POOL_ID = 'sio:sp_id' -PROTECTION_DOMAIN_NAME = 'sio:pd_name' -PROTECTION_DOMAIN_ID = 'sio:pd_id' -PROVISIONING_KEY = 'sio:provisioning_type' -IOPS_LIMIT_KEY = 'sio:iops_limit' -BANDWIDTH_LIMIT = 'sio:bandwidth_limit' - -BLOCK_SIZE = 8 -OK_STATUS_CODE = 200 -VOLUME_NOT_FOUND_ERROR = 3 -VOLUME_NOT_MAPPED_ERROR = 84 -VOLUME_ALREADY_MAPPED_ERROR = 81 - - -class ScaleIODriver(driver.VolumeDriver): - - """EMC ScaleIO Driver.""" - server_ip = None - server_username = None - server_password = None - server_token = None - storage_pool_name = None - storage_pool_id = None - protection_domain_name = None - protection_domain_id = None - config = None - - VERSION = "2.0" - - def __init__(self, *args, **kwargs): - super(ScaleIODriver, self).__init__(*args, **kwargs) - self.configuration.append_config_values([opt]) - - self.config = ConfigParser.ConfigParser() - filename = self.configuration.cinder_scaleio_config_file - dataset = self.config.read(filename) - # throw exception in case the config file doesn't exist - if (len(dataset) == 0): - raise RuntimeError("Failed to find configuration file") - - self.server_ip = self._get_rest_server_ip(self.config) - LOG.info("REST Server IP: %s" % self.server_ip) - self.server_port = self._get_rest_server_port(self.config) - LOG.info("REST Server port: %s" % self.server_port) - 
self.server_username = self._get_rest_server_username(self.config) - LOG.info("REST Server username: %s" % self.server_username) - self.server_password = self._get_rest_server_password(self.config) - LOG.info("REST Server password: %s" % self.server_password) - self.verify_server_certificate = self._get_verify_server_certificate( - self.config) - LOG.info("verify server's certificate: %s" % - self.verify_server_certificate) - if (self.verify_server_certificate == 'True'): - self.server_certificate_path = self._get_certificate_path( - self.config) - - self.storage_pools = self._get_storage_pools(self.config) - LOG.info("storage pools names: %s" % self.storage_pools) - - self.storage_pool_name = self._get_storage_pool_name(self.config) - LOG.info("storage pool name: %s" % self.storage_pool_name) - self.storage_pool_id = self._get_storage_pool_id(self.config) - LOG.info("storage pool id: %s" % self.storage_pool_id) - - if (self.storage_pool_name is None and self.storage_pool_id is None): - LOG.warning( - "No storage pool name or id was found") - self.protection_domain_name = self._get_protection_domain_name( - self.config) - LOG.info("protection domain name: %s" % self.protection_domain_name) - self.protection_domain_id = self._get_protection_domain_id(self.config) - LOG.info("protection domain id: %s" % self.protection_domain_id) - if (self.protection_domain_name is None and - self.protection_domain_id is None): - LOG.warning( - "No protection domain name or id \ - was specified in configuration") -# raise RuntimeError("Must specify protection domain name or id") - if (self.protection_domain_name is not None and - self.protection_domain_id is not None): - raise RuntimeError( - "Cannot specify both protection domain name \ - and protection domain id") - - def _get_rest_server_ip(self, config): - try: - server_ip = config.get(CONFIG_SECTION_NAME, 'rest_server_ip') - if server_ip == '' or server_ip is None: - LOG.debug("REST Server IP not found") - return server_ip - except: - raise RuntimeError("REST Server ip must by specified") - - def _get_rest_server_port(self, config): - warn_msg = "REST port is not set, using default 443" - try: - server_port = config.get(CONFIG_SECTION_NAME, 'rest_server_port') - if server_port == '' or server_port is None: - LOG.warning(warn_msg) - server_port = '443' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - server_port = '443' - return server_port - - def _get_rest_server_username(self, config): - try: - server_username = config.get( - CONFIG_SECTION_NAME, 'rest_server_username') - if server_username == '' or server_username is None: - raise RuntimeError( - "REST Server username not found in conf file") - return server_username - except: - raise RuntimeError("REST Server username must be specified") - - def _get_rest_server_password(self, config): - try: - server_password = config.get( - CONFIG_SECTION_NAME, 'rest_server_password') - return server_password - except: - raise RuntimeError("REST Server password must be specified") - - def _get_verify_server_certificate(self, config): - warn_msg = "verify certificate is not set, using default of false" - try: - verify_server_certificate = config.get( - CONFIG_SECTION_NAME, 'verify_server_certificate') - if verify_server_certificate == '' or \ - verify_server_certificate is None: - LOG.warning(warn_msg) - verify_server_certificate = 'False' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - verify_server_certificate = 'False' - return verify_server_certificate - - def _get_certificate_path(self, 
config): - try: - certificate_path = config.get( - CONFIG_SECTION_NAME, 'server_certificate_path') - return certificate_path - except: - raise RuntimeError( - "Path to REST server's certificate must be specified") - - def _get_round_capacity(self, config): - warn_msg = "round_volume_capacity is not set, using default of True" - try: - round_volume_capacity = self.config.get( - CONFIG_SECTION_NAME, 'round_volume_capacity') - if (round_volume_capacity == '' or round_volume_capacity is None): - LOG.warning(warn_msg) - round_volume_capacity = 'True' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - round_volume_capacity = 'True' - return round_volume_capacity - - def _round_to_8_gran(self, size): - if size%8==0: - return size - return size + 8 - (size%8) - - def _get_force_delete(self, config): - warn_msg = "force_delete is not set, using default of False" - try: - force_delete = self.config.get(CONFIG_SECTION_NAME, 'force_delete') - if (force_delete == '' or force_delete is None): - LOG.warning(warn_msg) - force_delete = 'False' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - force_delete = 'False' - return force_delete - - def _get_unmap_volume_before_deletion(self, config): - warn_msg = "unmap_volume_before_deletion is not set, \ - using default of False" - try: - unmap_before_delete = self.config.get( - CONFIG_SECTION_NAME, 'unmap_volume_before_deletion') - if (unmap_before_delete == '' or unmap_before_delete is None): - LOG.warning(warn_msg) - unmap_before_delete = 'False' - except ConfigParser.Error as e: - LOG.warning(warn_msg) - unmap_before_delete = 'False' - return unmap_before_delete - - def _get_protection_domain_id(self, config): - warn_msg = "protection domain id not found" - try: - protection_domain_id = config.get( - CONFIG_SECTION_NAME, 'protection_domain_id') - if protection_domain_id == '' or protection_domain_id is None: - LOG.warning(warn_msg) - protection_domain_id = None - except ConfigParser.Error as e: - LOG.warning(warn_msg) - protection_domain_id = None - return protection_domain_id - - def _get_protection_domain_name(self, config): - warn_msg = "protection domain name not found" - try: - protection_domain_name = config.get( - CONFIG_SECTION_NAME, 'protection_domain_name') - if protection_domain_name == '' or protection_domain_name is None: - LOG.warning(warn_msg) - protection_domain_name = None - except ConfigParser.Error as e: - LOG.warning(warn_msg) - protection_domain_name = None - return protection_domain_name - - def _get_storage_pools(self, config): - - storage_pools = [e.strip() for e in config.get( - CONFIG_SECTION_NAME, 'storage_pools').split(',')] - # SPYS = [e.strip() for e in parser.get('global', 'spys').split(',')] - - # storage_pools = config.get(CONFIG_SECTION_NAME, 'storage_pools') - LOG.warning("storage pools are {0}".format(storage_pools)) - return storage_pools - - def _get_storage_pool_name(self, config): - warn_msg = "storage pool name not found" - try: - storage_pool_name = config.get( - CONFIG_SECTION_NAME, 'storage_pool_name') - if storage_pool_name == '' or storage_pool_name is None: - LOG.warning(warn_msg) - storage_pool_name = None - except ConfigParser.Error as e: - LOG.warning(warn_msg) - storage_pool_name = None - return storage_pool_name - - def _get_storage_pool_id(self, config): - warn_msg = "storage pool id not found" - try: - storage_pool_id = config.get( - CONFIG_SECTION_NAME, 'storage_pool_id') - if storage_pool_id == '' or storage_pool_id is None: - LOG.warning(warn_msg) - storage_pool_id = None - except 
ConfigParser.Error as e: - LOG.warning(warn_msg) - storage_pool_id = None - return storage_pool_id - - def _find_storage_pool_id_from_storage_type(self, storage_type): - try: - pool_id = storage_type[STORAGE_POOL_ID] - except KeyError: - # Default to what was configured in configuration file if not - # defined - pool_id = None - return pool_id - - def _find_storage_pool_name_from_storage_type(self, storage_type): - try: - name = storage_type[STORAGE_POOL_NAME] - except KeyError: - # Default to what was configured in configuration file if not - # defined - name = None - return name - - def _find_protection_domain_id_from_storage_type(self, storage_type): - try: - domain_id = storage_type[PROTECTION_DOMAIN_ID] - except KeyError: - # Default to what was configured in configuration file if not - # defined - domain_id = None - return domain_id - - def _find_protection_domain_name_from_storage_type(self, storage_type): - try: - domain_name = storage_type[PROTECTION_DOMAIN_NAME] - except KeyError: - # Default to what was configured in configuration file if not - # defined - domain_name = None - return domain_name - - def _find_provisioning_type(self, storage_type): - try: - provisioning_type = storage_type[PROVISIONING_KEY] - except KeyError: - provisioning_type = None - return provisioning_type - - def _find_iops_limit(self, storage_type): - try: - iops_limit = storage_type[IOPS_LIMIT_KEY] - except KeyError: - iops_limit = None - return iops_limit - - def _find_bandwidth_limit(self, storage_type): - try: - bandwidth_limit = storage_type[BANDWIDTH_LIMIT] - except KeyError: - bandwidth_limit = None - return bandwidth_limit - - def check_for_setup_error(self): - pass - - def id_to_base64(self, id): - # Base64 encode the id to get a volume name less than 32 characters due - # to ScaleIO limitation - name = str(id).translate(None, "-") - name = base64.b16decode(name.upper()) - encoded_name = base64.b64encode(name) - LOG.debug( - "Converted id {0} to scaleio name {1}".format(id, encoded_name)) - return encoded_name - - def create_volume(self, volume): - """Creates a scaleIO volume.""" - self._check_volume_size(volume.size) - - volname = self.id_to_base64(volume.id) - - storage_type = self._get_volumetype_extraspecs(volume) - LOG.info("volume type in create volume is %s" % storage_type) - storage_pool_name = self._find_storage_pool_name_from_storage_type( - storage_type) - LOG.info("storage pool name: %s" % storage_pool_name) - storage_pool_id = self._find_storage_pool_id_from_storage_type( - storage_type) - LOG.info("storage pool id: %s" % storage_pool_id) - protection_domain_id = \ - self._find_protection_domain_id_from_storage_type(storage_type) - LOG.info("protection domain id: %s" % protection_domain_id) - protection_domain_name = \ - self._find_protection_domain_name_from_storage_type(storage_type) - LOG.info("protection domain name: %s" % protection_domain_name) - provisioning_type = self._find_provisioning_type(storage_type) - - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - - if (storage_pool_name is not None and storage_pool_id is not None): - raise RuntimeError( - "Cannot specify both storage pool name and storage pool id") - if (storage_pool_name is not None): - self.storage_pool_name = storage_pool_name - self.storage_pool_id = None - if (storage_pool_id is not None): - self.storage_pool_id = storage_pool_id - self.storage_pool_name = None - if (self.storage_pool_name is None and - self.storage_pool_id is None): - 
raise RuntimeError("Must specify storage pool name or id") - - if (protection_domain_name is not None and - protection_domain_id is not None): - raise RuntimeError( - "Cannot specify both protection domain name and \ - protection domain id") - if (protection_domain_name is not None): - self.protection_domain_name = protection_domain_name - self.protection_domain_id = None - if (protection_domain_id is not None): - self.protection_domain_id = protection_domain_id - self.protection_domain_name = None - if (self.protection_domain_name is None and - self.protection_domain_id is None): - raise RuntimeError("Must specify protection domain name or id") - - domain_id = self.protection_domain_id - if (domain_id is None): - encoded_domain_name = urllib.quote(self.protection_domain_name, '') - - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/types/Domain/instances/getByName::" + encoded_domain_name - LOG.info("ScaleIO get domain id by name request: %s" % request) - r = requests.get( - request, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request) -# LOG.info("Get domain by name response: %s" % r.text) - domain_id = r.json() - if (domain_id == '' or domain_id is None): - msg = ("Domain with name %s wasn't found " % - (self.protection_domain_name)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in domain_id): - msg = ("Error getting domain id from name %s: %s " % - (self.protection_domain_name, domain_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("domain id is %s" % domain_id) - pool_name = self.storage_pool_name - pool_id = self.storage_pool_id - if (pool_name is not None): - encoded_domain_name = urllib.quote(pool_name, '') - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/types/Pool/instances/getByName::" + \ - domain_id + "," + encoded_domain_name - LOG.info("ScaleIO get pool id by name request: %s" % request) - r = requests.get( - request, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - pool_id = r.json() - if (pool_id == '' or pool_id is None): - msg = ("Pool with name %s wasn't found in domain %s " % - (pool_name, domain_id)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in pool_id): - msg = ("Error getting pool id from name %s: %s " % - (pool_name, pool_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("pool id is %s" % pool_id) - if (provisioning_type == 'thin'): - provisioning = "ThinProvisioned" -# default volume type is thick - else: - provisioning = "ThickProvisioned" - - LOG.info("ScaleIO create volume command ") - volume_size_kb = volume.size * 1048576 - params = {'protectionDomainId': domain_id, 'volumeSizeInKb': str( - volume_size_kb), 'name': volname, 'volumeType': provisioning, - 'storagePoolId': pool_id} - - LOG.info("Params for add volume request: %s" % params) - headers = {'content-type': 'application/json'} - - r = requests.post( - "https://" + - self.server_ip + - ":" + - self.server_port + - "/api/types/Volume/instances", - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - response = r.json() - LOG.info("add volume response: %s" % response) - - if (r.status_code != OK_STATUS_CODE and "errorCode" in 
response): - msg = ("Error creating volume: %s " % (response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("Created volume: " + volname) - - def _check_volume_size(self, size): - if (size % 8 != 0): - round_volume_capacity = self._get_round_capacity(self.config) - if (round_volume_capacity == 'False'): - exception_msg = ( - "Cannot create volume of size %s (not multiply of 8GB)" % - (size)) - LOG.error(exception_msg) - raise exception.VolumeBackendAPIException(data=exception_msg) - - def create_snapshot(self, snapshot): - """Creates a scaleio snapshot.""" - volname = self.id_to_base64(snapshot.volume_id) - snapname = self.id_to_base64(snapshot.id) - self._snapshot_volume(volname, snapname) - - def _snapshot_volume(self, volname, snapname): - vol_id = self._get_volume_id(volname) - params = { - 'snapshotDefs': [{"volumeId": vol_id, "snapshotName": snapname}]} - headers = {'content-type': 'application/json'} - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/instances/System/action/snapshotVolumes" - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request, False, params) - response = r.json() - LOG.info("snapshot volume response: %s" % response) - if (r.status_code != OK_STATUS_CODE and "errorCode" in response): - msg = ("Failed creating snapshot for volume %s: %s" % - (volname, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - def _check_response(self, response, request, isGetRequest=True, params=None): - if (response.status_code == 401 or response.status_code == 403): - LOG.info("Token is invalid, going to re-login and get a new one") - login_request = "https://" + self.server_ip + \ - ":" + self.server_port + "/api/login" - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.get( - login_request, - auth=( - self.server_username, - self.server_password), - verify=verify_cert) - token = r.json() - self.server_token = token - # repeat request with valid token - LOG.info( - "going to perform request again {0} \ - with valid token".format(request)) - if isGetRequest: - res = requests.get( - request, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - else: - headers = {'content-type': 'application/json'} - res = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - - return res - return response - - def create_volume_from_snapshot(self, volume, snapshot): - """Creates a volume from a snapshot.""" - # We interchange 'volume' and 'snapshot' because in ScaleIO - # snapshot is a volume: once a snapshot is generated it - # becomes a new unmapped volume in the system and the user - # may manipulate it in the same manner as any other volume - # exposed by the system - volname = self.id_to_base64(snapshot.id) - snapname = self.id_to_base64(volume.id) - LOG.info( - "ScaleIO create volume from snapshot: snapshot {0} \ - to volume {1}".format( - volname, - snapname)) - self._snapshot_volume(volname, snapname) - - def _get_volume_id(self, volname): - # add /api - volname_encoded = urllib.quote(volname, '') - 
volname_double_encoded = urllib.quote(volname_encoded, '') - LOG.info("volume name after double encoding is %s " % - volname_double_encoded) - - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/types/Volume/instances/getByName::" + volname_double_encoded - LOG.info("ScaleIO get volume id by name request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.get( - request, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request) - - vol_id = r.json() - - if (vol_id == '' or vol_id is None): - msg = ("Volume with name %s wasn't found " % (volname)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in vol_id): - msg = ("Error getting volume id from name %s: %s" % - (volname, vol_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - LOG.info("volume id is %s" % vol_id) - return vol_id - - def extend_volume(self, volume, new_size): - """Extends the size of an existing available ScaleIO volume.""" - - self._check_volume_size(new_size) - - volname = self.id_to_base64(volume.id) - - LOG.info( - "ScaleIO extend volume: volume {0} to size {1}".format( - volname, - new_size)) - - vol_id = self._get_volume_id(volname) - - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/instances/Volume::" + vol_id + "/action/setVolumeSize" - LOG.info("change volume capacity request: %s" % request) - volume_new_size = self._round_to_8_gran(new_size) - if self._get_round_capacity(self.config) == 'False': - LOG.warning("Warning: Your volume will be extended in granularity of 8 GBs.") - params = {'sizeInGB': str(volume_new_size)} - headers = {'content-type': 'application/json'} - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request, False, params) -# LOG.info("change volume response: %s" % r.text) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - msg = ("Error extending volume %s: %s" % - (volname, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - def create_cloned_volume(self, volume, src_vref): - """Creates a cloned volume.""" - volname = self.id_to_base64(src_vref.id) - snapname = self.id_to_base64(volume.id) - LOG.info( - "ScaleIO create cloned volume: volume {0} to volume {1}".format( - volname, - snapname)) - self._snapshot_volume(volname, snapname) - - def delete_volume(self, volume): - """Deletes a logical volume""" - volname = self.id_to_base64(volume.id) - self._delete_volume(volname) - - def _delete_volume(self, volname): - volname_encoded = urllib.quote(volname, '') - volname_double_encoded = urllib.quote(volname_encoded, '') -# volname = volname.replace('/', '%252F') - LOG.info("volume name after double encoding is %s " % - volname_double_encoded) - - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - - # convert volume name to id - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/types/Volume/instances/getByName::" + volname_double_encoded - LOG.info("ScaleIO get volume id 
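
The extend_volume path above rounds the requested size with _round_to_8_gran because ScaleIO allocates volumes in 8 GB increments. The rounding rule is simple enough to show in isolation (a sketch under that assumption; the sample values are made up):

def round_up_to_8gb(size_gb):
    # ScaleIO provisions in 8 GB granularity: sizes that are already a
    # multiple of 8 pass through, anything else is rounded up to the next
    # multiple, mirroring the deleted _round_to_8_gran() helper.
    if size_gb % 8 == 0:
        return size_gb
    return size_gb + 8 - (size_gb % 8)

assert round_up_to_8gb(8) == 8    # exact multiples are unchanged
assert round_up_to_8gb(10) == 16  # 10 GB is provisioned as 16 GB
assert round_up_to_8gb(17) == 24
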
by name request: %s" % request) - r = requests.get( - request, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request) - LOG.info("get by name response: %s" % r.text) - vol_id = r.json() - LOG.info("ScaleIO volume id to delete is %s" % vol_id) - - if (r.status_code != OK_STATUS_CODE and "errorCode" in vol_id): - msg = ("Error getting volume id from name %s: %s " % - (volname, vol_id['message'])) - LOG.error(msg) - - error_code = vol_id['errorCode'] - if (error_code == VOLUME_NOT_FOUND_ERROR): - force_delete = self._get_force_delete(self.config) - if (force_delete == 'True'): - msg = ( - "Ignoring error in delete volume %s: volume not found due to \ - force delete settings" % - (volname)) - LOG.warning(msg) - return - - raise exception.VolumeBackendAPIException(data=msg) - - headers = {'content-type': 'application/json'} - - unmap_before_delete = self._get_unmap_volume_before_deletion( - self.config) - # ensure that the volume is not mapped to any SDC before deletion in - # case unmap_before_deletion is enabled - if (unmap_before_delete == 'True'): - params = {'allSdcs': ''} - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/instances/Volume::" + \ - str(vol_id) + "/action/removeMappedSdc" - LOG.info( - "Trying to unmap volume from all sdcs before deletion: %s" % - request) - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request, False, params) - LOG.debug("Unmap volume response: %s " % r.text) - - LOG.info("ScaleIO delete volume command ") - params = {'removeMode': 'ONLY_ME'} - r = requests.post( - "https://" + - self.server_ip + - ":" + - self.server_port + - "/api/instances/Volume::" + - str(vol_id) + - "/action/removeVolume", - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request, False, params) - -# LOG.info("delete volume response: %s" % r.json()) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == 78): - force_delete = self._get_force_delete(self.config) - if (force_delete == 'True'): - msg = ( - "Ignoring error in delete volume %s: volume not found \ - due to force delete settings" % - (vol_id)) - LOG.warning(msg) - else: - msg = ("Error deleting volume %s: volume not found" % - (vol_id)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - else: - msg = ("Error deleting volume %s: %s" % - (vol_id, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - def delete_snapshot(self, snapshot): - """Deletes a ScaleIO snapshot.""" - snapname = self.id_to_base64(snapshot.id) - LOG.info("ScaleIO delete snapshot") - self._delete_volume(snapname) - - def initialize_connection(self, volume, connector): - """Initializes the connection and returns connection info. - - The scaleio driver returns a driver_volume_type of 'scaleio'. 
""" - - LOG.debug("connector is {0} ".format(connector)) - volname = self.id_to_base64(volume.id) - properties = {} - - properties['scaleIO_volname'] = volname - properties['hostIP'] = connector['ip'] - properties['serverIP'] = self.server_ip - properties['serverPort'] = self.server_port - properties['serverUsername'] = self.server_username - properties['serverPassword'] = self.server_password - properties['serverToken'] = self.server_token - - storage_type = self._get_volumetype_extraspecs(volume) - LOG.info("volume type in create volume is %s" % storage_type) - iops_limit = self._find_iops_limit(storage_type) - LOG.info("iops limit is: %s" % iops_limit) - bandwidth_limit = self._find_bandwidth_limit(storage_type) - LOG.info("bandwidth limit is: %s" % bandwidth_limit) - properties['iopsLimit'] = iops_limit - properties['bandwidthLimit'] = bandwidth_limit - - return { - 'driver_volume_type': 'scaleio', - 'data': properties - } - - def terminate_connection(self, volume, connector, **kwargs): - LOG.info("scaleio driver terminate connection") - pass - - def _update_volume_stats(self): - stats = {} - - backend_name = self.configuration.safe_get('volume_backend_name') - stats['volume_backend_name'] = backend_name or 'scaleio' - stats['vendor_name'] = 'EMC' - stats['driver_version'] = self.VERSION - stats['storage_protocol'] = 'scaleio' - stats['total_capacity_gb'] = 'unknown' - stats['free_capacity_gb'] = 'unknown' - stats['reserved_percentage'] = 0 - stats['QoS_support'] = False - - pools = [] - - headers = {'content-type': 'application/json'} - - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - - max_free_capacity = 0 - total_capacity = 0 - - for sp_name in self.storage_pools: - splitted_name = sp_name.split(':') - domain_name = splitted_name[0] - pool_name = splitted_name[1] - LOG.debug( - "domain name is {0}, pool name is {1}".format( - domain_name, - pool_name)) - # get domain id from name - encoded_domain_name = urllib.quote(domain_name, '') - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/types/Domain/instances/getByName::" + encoded_domain_name - LOG.info("ScaleIO get domain id by name request: %s" % request) - LOG.info("username: %s, password: %s, verify_cert: %s " % - (self.server_username, self.server_token, verify_cert)) - r = requests.get( - request, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request) - LOG.info("Get domain by name response: %s" % r.text) - domain_id = r.json() - if (domain_id == '' or domain_id is None): - msg = ("Domain with name %s wasn't found " % - (self.protection_domain_name)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in domain_id): - msg = ("Error getting domain id from name %s: %s " % - (self.protection_domain_name, domain_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - LOG.info("domain id is %s" % domain_id) - - # get pool id from name - encoded_pool_name = urllib.quote(pool_name, '') - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/types/Pool/instances/getByName::" + \ - domain_id + "," + encoded_pool_name - LOG.info("ScaleIO get pool id by name request: %s" % request) - r = requests.get( - request, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - pool_id = r.json() - if (pool_id == '' or pool_id is None): - msg 
= ("Pool with name %s wasn't found in domain %s " % - (pool_name, domain_id)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != OK_STATUS_CODE and "errorCode" in pool_id): - msg = ("Error getting pool id from name %s: %s " % - (pool_name, pool_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - LOG.info("pool id is %s" % pool_id) - - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/types/StoragePool/instances/action/" + \ - "querySelectedStatistics" - params = {'ids': [pool_id], 'properties': [ - "capacityInUseInKb", "capacityLimitInKb"]} - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - response = r.json() - LOG.info("query capacity stats response: %s" % response) - for res in response.itervalues(): - capacityInUse = res['capacityInUseInKb'] - capacityLimit = res['capacityLimitInKb'] - total_capacity_gb = capacityLimit / 1048576 - used_capacity_gb = capacityInUse / 1048576 - free_capacity_gb = total_capacity_gb - used_capacity_gb - LOG.info( - "free capacity of pool {0} is: {1}, \ - total capacity: {2}".format( - pool_name, - free_capacity_gb, - total_capacity_gb)) - pool = {'pool_name': sp_name, - 'total_capacity_gb': total_capacity_gb, - 'free_capacity_gb': free_capacity_gb, - # 'total_capacity_gb': 100000, - # 'free_capacity_gb': 100000, - 'QoS_support': False, - 'reserved_percentage': 0 - } - - pools.append(pool) - if (free_capacity_gb > max_free_capacity): - max_free_capacity = free_capacity_gb - total_capacity = total_capacity + total_capacity_gb - - stats['volume_backend_name'] = backend_name or 'scaleio' - stats['vendor_name'] = 'EMC' - stats['driver_version'] = self.VERSION - stats['storage_protocol'] = 'scaleio' - # Use zero capacities here so we always use a pool. - stats['total_capacity_gb'] = total_capacity - stats['free_capacity_gb'] = max_free_capacity - LOG.info( - "free capacity for backend is: {0}, total capacity: {1}".format( - max_free_capacity, - total_capacity)) - stats['reserved_percentage'] = 0 - stats['QoS_support'] = False - stats['pools'] = pools - - LOG.info("Backend name is " + stats["volume_backend_name"]) - - self._stats = stats - - def get_volume_stats(self, refresh=False): - """Get volume stats. - - If 'refresh' is True, run update the stats first. - """ - if refresh: - self._update_volume_stats() - - return self._stats - - def _get_volumetype_extraspecs(self, volume): - specs = {} - ctxt = context.get_admin_context() - type_id = volume['volume_type_id'] - if type_id is not None: - volume_type = volume_types.get_volume_type(ctxt, type_id) - specs = volume_type.get('extra_specs') - for key, value in specs.iteritems(): - specs[key] = value - - return specs - - def find_volume_path(self, volume_id): - - LOG.info("looking for volume %s" % volume_id) - # look for the volume in /dev/disk/by-id directory - disk_filename = "" - tries = 0 - while not disk_filename: - if (tries > 15): - raise exception.VolumeBackendAPIException( - "scaleIO volume {0} not found \ - at expected path ".format(volume_id)) - by_id_path = "/dev/disk/by-id" - if not os.path.isdir(by_id_path): - LOG.warn( - "scaleIO volume {0} not yet found (no directory \ - /dev/disk/by-id yet). 
Try number: {1} ".format( - volume_id, - tries)) - tries = tries + 1 - time.sleep(1) - continue - filenames = os.listdir(by_id_path) - LOG.warning( - "Files found in {0} path: {1} ".format(by_id_path, filenames)) - for filename in filenames: - if (filename.startswith("emc-vol") and - filename.endswith(volume_id)): - disk_filename = filename - if not disk_filename: - LOG.warn( - "scaleIO volume {0} not yet found. \ - Try number: {1} ".format( - volume_id, - tries)) - tries = tries + 1 - time.sleep(1) - - if (tries != 0): - LOG.warning( - "Found scaleIO device {0} after {1} retries ".format( - disk_filename, - tries)) - full_disk_name = by_id_path + "/" + disk_filename - LOG.warning("Full disk name is " + full_disk_name) - return full_disk_name -# path = os.path.realpath(full_disk_name) -# LOG.warning("Path is " + path) -# return path - - def _get_client_id( - self, server_ip, server_username, server_password, sdc_ip): - request = "https://" + server_ip + ":" + self.server_port + \ - "/api/types/Client/instances/getByIp::" + sdc_ip + "/" - LOG.info("ScaleIO get client id by ip request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.get( - request, - auth=( - server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request) - - sdc_id = r.json() - if (sdc_id == '' or sdc_id is None): - msg = ("Client with ip %s wasn't found " % (sdc_ip)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - if (r.status_code != 200 and "errorCode" in sdc_id): - msg = ("Error getting sdc id from ip %s: %s " % - (sdc_ip, sdc_id['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - LOG.info("ScaleIO sdc id is %s" % sdc_id) - return sdc_id - - def _attach_volume(self, volume, sdc_ip): - # We need to make sure we even *have* a local path - LOG.info("ScaleIO attach volume in scaleio cinder driver") - volname = self.id_to_base64(volume.id) - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: " + str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - - params = {'guid': guid} - - volume_id = self._get_volume_id(volname) - headers = {'content-type': 'application/json'} - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/instances/Volume::" + str(volume_id) + "/action/addMappedSdc" - LOG.info("map volume request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request, False, params) -# LOG.info("map volume response: %s" % r.text) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == VOLUME_ALREADY_MAPPED_ERROR): - msg = ( - "Ignoring error mapping volume %s: volume already mapped" % - (volname)) - LOG.warning(msg) - else: - msg = ("Error mapping volume %s: %s" % - (volname, response['message'])) - 
LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - -# convert id to hex -# val = int(volume_id) -# id_in_hex = hex((val + (1 << 64)) % (1 << 64)) -# formated_id = id_in_hex.rstrip("L").lstrip("0x") or "0" - formated_id = volume_id - - return self.find_volume_path(formated_id) - - def _detach_volume(self, volume, sdc_ip): - LOG.info("ScaleIO detach volume in scaleio cinder driver") - volname = self.id_to_base64(volume.id) - - cmd = ['drv_cfg'] - cmd += ["--query_guid"] - - LOG.info("ScaleIO sdc query guid command: " + str(cmd)) - - try: - (out, err) = utils.execute(*cmd, run_as_root=True) - LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err)) - except processutils.ProcessExecutionError as e: - msg = ("Error querying sdc guid: %s" % (e.stderr)) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - guid = out - msg = ("Current sdc guid: %s" % (guid)) - LOG.info(msg) - - params = {'guid': guid} - headers = {'content-type': 'application/json'} - - volume_id = self._get_volume_id(volname) - request = "https://" + self.server_ip + ":" + self.server_port + \ - "/api/instances/Volume::" + \ - str(volume_id) + "/action/removeMappedSdc" - LOG.info("unmap volume request: %s" % request) - if (self.verify_server_certificate == 'True'): - verify_cert = self.server_certificate_path - else: - verify_cert = False - r = requests.post( - request, - data=json.dumps(params), - headers=headers, - auth=( - self.server_username, - self.server_token), - verify=verify_cert) - r = self._check_response(r, request, False, params) - - if (r.status_code != OK_STATUS_CODE): - response = r.json() - error_code = response['errorCode'] - if (error_code == VOLUME_NOT_MAPPED_ERROR): - msg = ( - "Ignoring error unmapping volume %s: volume not mapped" % - (volname)) - LOG.warning(msg) - else: - msg = ("Error unmapping volume %s: %s" % - (volname, response['message'])) - LOG.error(msg) - raise exception.VolumeBackendAPIException(data=msg) - - def copy_image_to_volume(self, context, volume, image_service, image_id): - """Fetch the image from image_service and write it to the volume.""" - LOG.info( - "ScaleIO copy_image_to_volume volume: " + - str(volume) + - " image service: " + - str(image_service) + - " image id: " + - str(image_id)) - properties = utils.brick_get_connector_properties() - sdc_ip = properties['ip'] - LOG.debug("SDC ip is: {0}".format(sdc_ip)) - try: - image_utils.fetch_to_raw(context, - image_service, - image_id, - self._attach_volume(volume, sdc_ip), - BLOCK_SIZE, - size=volume['size']) - finally: - self._detach_volume(volume, sdc_ip) - - def copy_volume_to_image(self, context, volume, image_service, image_meta): - """Copy the volume to the specified image.""" - LOG.info( - "ScaleIO copy_volume_to_image volume: " + - str(volume) + - " image service: " + - str(image_service) + - " image meta: " + - str(image_meta)) - properties = utils.brick_get_connector_properties() - sdc_ip = properties['ip'] - LOG.debug("SDC ip is: {0}".format(sdc_ip)) - try: - image_utils.upload_volume(context, - image_service, - image_meta, - self._attach_volume(volume, sdc_ip)) - finally: - self._detach_volume(volume, sdc_ip) - - def ensure_export(self, context, volume): - """Driver entry point to get the export info for an existing volume.""" - pass - - def create_export(self, context, volume): - """Driver entry point to get the export info for a new volume.""" - pass - - def remove_export(self, context, volume): - """Driver entry point to remove an export for a volume.""" - pass - - def 
check_for_export(self, context, volume_id): - """Make sure volume is exported.""" - pass diff --git a/deployment_scripts/puppet/install_scaleio_controller/files/scaleio.filters b/deployment_scripts/puppet/install_scaleio_controller/files/scaleio.filters deleted file mode 100644 index 5179ff7..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/files/scaleio.filters +++ /dev/null @@ -1,3 +0,0 @@ -[Filters] -drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root - diff --git a/deployment_scripts/puppet/install_scaleio_controller/manifests/centos.pp b/deployment_scripts/puppet/install_scaleio_controller/manifests/centos.pp deleted file mode 100644 index 6f92928..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/manifests/centos.pp +++ /dev/null @@ -1,104 +0,0 @@ -class install_scaleio_controller::centos -{ - $services = ['openstack-cinder-volume', 'openstack-cinder-api', 'openstack-cinder-scheduler', 'openstack-nova-scheduler'] - - $gw_ip = $plugin_settings['scaleio_GW'] - $mdm_ip_1 = $plugin_settings['scaleio_mdm1'] - $mdm_ip_2 = $plugin_settings['scaleio_mdm2'] - $admin = $plugin_settings['scaleio_Admin'] - $password = $plugin_settings['scaleio_Password'] - $volume_type = "scaleio-thin" -#1. Install SDC package - exec { "install_sdc1": - command => "/bin/bash -c \"MDM_IP=$mdm_ip_1,$mdm_ip_2 yum install -y EMC-ScaleIO-sdc\"", - path => "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin", - } -> -#2. Copy ScaleIO Files - file { 'scaleio.py': - path => '/usr/lib/python2.6/site-packages/cinder/volume/drivers/emc/scaleio.py', - source => 'puppet:///modules/install_scaleio_controller/6.1/scaleio.py', - mode => '644', - owner => 'root', - group => 'root', - } -> - - file { 'scaleio.filters': - path => '/usr/share/cinder/rootwrap/scaleio.filters', - source => 'puppet:///modules/install_scaleio_controller/scaleio.filters', - mode => '644', - owner => 'root', - group => 'root', - before => File['cinder_scaleio.config'], - } - -# 3. Create config for ScaleIO - $cinder_scaleio_config = "[scaleio] -rest_server_ip=$gw_ip -rest_server_username=$admin -rest_server_password=$password -protection_domain_name=${plugin_settings['protection_domain']} -storage_pools=${plugin_settings['protection_domain']}:${plugin_settings['storage_pool_1']} -storage_pool_name=${plugin_settings['storage_pool_1']} -round_volume_capacity=True -force_delete=True -verify_server_certificate=False -" - - file { 'cinder_scaleio.config': - ensure => present, - path => '/etc/cinder/cinder_scaleio.config', - content => $cinder_scaleio_config, - mode => 0644, - owner => root, - group => root, - before => Ini_setting['cinder_conf_enabled_backeds'], - } -> - - # 4. 
To /etc/cinder/cinder.conf add - ini_setting { 'cinder_conf_enabled_backeds': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'DEFAULT', - setting => 'enabled_backends', - value => 'ScaleIO', - before => Ini_setting['cinder_conf_volume_driver'], - } -> - ini_setting { 'cinder_conf_volume_driver': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'ScaleIO', - setting => 'volume_driver', - value => 'cinder.volume.drivers.emc.scaleio.ScaleIODriver', - before => Ini_setting['cinder_conf_scio_config'], - } -> - ini_setting { 'cinder_conf_scio_config': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'ScaleIO', - setting => 'cinder_scaleio_config_file', - value => '/etc/cinder/cinder_scaleio.config', - before => Ini_setting['cinder_conf_volume_backend_name'], - } -> - ini_setting { 'cinder_conf_volume_backend_name': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'ScaleIO', - setting => 'volume_backend_name', - value => 'ScaleIO', - }~> - service { $services: - ensure => running, - }-> - exec { "Create Cinder volume type \'${volume_type}\'": - command => "bash -c 'source /root/openrc; cinder type-create ${volume_type}'", - path => ['/usr/bin', '/bin'], - unless => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'", - } -> - - exec { "Create Cinder volume type extra specs for \'${volume_type}\'": - command => "bash -c 'source /root/openrc; cinder type-key ${volume_type} set sio:pd_name=${plugin_settings['protection_domain']} sio:provisioning=thin sio:sp_name=${plugin_settings['storage_pool_1']}'", - path => ['/usr/bin', '/bin'], - onlyif => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'", - } - -} diff --git a/deployment_scripts/puppet/install_scaleio_controller/manifests/init.pp b/deployment_scripts/puppet/install_scaleio_controller/manifests/init.pp deleted file mode 100644 index 7a97a5b..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/manifests/init.pp +++ /dev/null @@ -1,10 +0,0 @@ -class install_scaleio_controller -{ - if($::operatingsystem == 'Ubuntu') { - include install_scaleio_controller::ubuntu - }else { - include install_scaleio_controller::centos - } -} - - diff --git a/deployment_scripts/puppet/install_scaleio_controller/manifests/ubuntu.pp b/deployment_scripts/puppet/install_scaleio_controller/manifests/ubuntu.pp deleted file mode 100644 index 5500e3c..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/manifests/ubuntu.pp +++ /dev/null @@ -1,132 +0,0 @@ -class install_scaleio_controller::ubuntu -{ - $services = ['cinder-volume', 'cinder-api', 'cinder-scheduler', 'nova-scheduler'] - - $gw_ip = $plugin_settings['scaleio_GW'] - $mdm_ip_1 = $plugin_settings['scaleio_mdm1'] - $mdm_ip_2 = $plugin_settings['scaleio_mdm2'] - $admin = $plugin_settings['scaleio_Admin'] - $password = $plugin_settings['scaleio_Password'] - $volume_type = "scaleio-thin" - $version = hiera('fuel_version') - -# 3. 
Create config for ScaleIO - $cinder_scaleio_config = "[scaleio] -rest_server_ip=$gw_ip -rest_server_username=$admin -rest_server_password=$password -protection_domain_name=${plugin_settings['protection_domain']} -storage_pools=${plugin_settings['protection_domain']}:${plugin_settings['storage_pool_1']} -storage_pool_name=${plugin_settings['storage_pool_1']} -round_volume_capacity=True -force_delete=True -verify_server_certificate=False -" - -if($version == '6.1') { - file { 'scaleio_6.py': - path => '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/scaleio.py', - source => 'puppet:///modules/install_scaleio_controller/6.1/scaleio.py', - mode => '644', - owner => 'root', - group => 'root', - } - -} -else -{ - file { 'scaleio_7.py': - path => '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/scaleio.py', - source => 'puppet:///modules/install_scaleio_controller/7.0/scaleio.py', - mode => '644', - owner => 'root', - group => 'root', - - } - } - - package {'emc-scaleio-sdc': - ensure => installed , - }-> - - exec {"Add MDM to drv-cfg": - command => "bash -c 'echo mdm ${mdm_ip_1} >>/bin/emc/scaleio/drv_cfg.txt'", - path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ], - - }-> - -exec {"Start SDC": - command => "bash -c '/etc/init.d/scini restart'", - path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ], - - }-> - -file { 'scaleio.filters': - path => '/etc/cinder/rootwrap.d/scaleio.filters', - source => 'puppet:///modules/install_scaleio_controller/scaleio.filters', - mode => '644', - owner => 'root', - group => 'root', - before => File['cinder_scaleio.config'], - }-> - -# 3. Create config for ScaleIO - - file { 'cinder_scaleio.config': - ensure => present, - path => '/etc/cinder/cinder_scaleio.config', - content => $cinder_scaleio_config, - mode => 0644, - owner => root, - group => root, - before => Ini_setting['cinder_conf_enabled_backeds'], - } -> - - # 4. 
To /etc/cinder/cinder.conf add - ini_setting { 'cinder_conf_enabled_backeds': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'DEFAULT', - setting => 'enabled_backends', - value => 'ScaleIO', - before => Ini_setting['cinder_conf_volume_driver'], - } -> - ini_setting { 'cinder_conf_volume_driver': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'ScaleIO', - setting => 'volume_driver', - value => 'cinder.volume.drivers.emc.scaleio.ScaleIODriver', - before => Ini_setting['cinder_conf_scio_config'], - } -> - ini_setting { 'cinder_conf_scio_config': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'ScaleIO', - setting => 'cinder_scaleio_config_file', - value => '/etc/cinder/cinder_scaleio.config', - before => Ini_setting['cinder_conf_volume_backend_name'], - } -> - ini_setting { 'cinder_conf_volume_backend_name': - ensure => present, - path => '/etc/cinder/cinder.conf', - section => 'ScaleIO', - setting => 'volume_backend_name', - value => 'ScaleIO', - }~> - service { $services: - ensure => running, - }-> - exec { "Create Cinder volume type \'${volume_type}\'": - command => "bash -c 'source /root/openrc; cinder type-create ${volume_type}'", - path => ['/usr/bin', '/bin'], - unless => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'", - } -> - - exec { "Create Cinder volume type extra specs for \'${volume_type}\'": - command => "bash -c 'source /root/openrc; cinder type-key ${volume_type} set sio:pd_name=${plugin_settings['protection_domain']} sio:provisioning=thin sio:sp_name=${plugin_settings['storage_pool_1']}'", - path => ['/usr/bin', '/bin'], - onlyif => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'", - } - -} diff --git a/deployment_scripts/puppet/install_scaleio_controller/templates/CentOS-Base.repo b/deployment_scripts/puppet/install_scaleio_controller/templates/CentOS-Base.repo deleted file mode 100644 index eeb729a..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/templates/CentOS-Base.repo +++ /dev/null @@ -1,53 +0,0 @@ -# CentOS-Base.repo -# -# The mirror system uses the connecting IP address of the client and the -# update status of each mirror to pick mirrors that are updated to and -# geographically close to the client. You should use this for CentOS updates -# unless you are manually picking other mirrors. -# -# If the mirrorlist= does not work for you, as a fall back you can try the -# remarked out baseurl= line instead. 
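
Taken together, the CentOS and Ubuntu manifests above converge on the same two pieces of Cinder configuration: a standalone ``/etc/cinder/cinder_scaleio.config`` file and a handful of ``ini_setting`` entries in ``/etc/cinder/cinder.conf``. The short Python sketch below is illustrative only (the values in ``settings`` are made-up stand-ins for the Fuel plugin settings, not real addresses or credentials); it simply renders both fragments, which can be handy when checking a deployed controller by hand::

    # Illustrative sketch, not part of the plugin: renders the Cinder
    # configuration the centos.pp/ubuntu.pp manifests above produce.
    # All values below are made-up examples standing in for $plugin_settings.
    settings = {
        'scaleio_GW': '192.168.0.10',     # assumed example gateway address
        'scaleio_Admin': 'admin',         # assumed example user
        'scaleio_Password': 'password',   # assumed example password
        'protection_domain': 'default',
        'storage_pool_1': 'default',
    }

    # Contents written to /etc/cinder/cinder_scaleio.config by the manifests.
    cinder_scaleio_config = """[scaleio]
    rest_server_ip=%(scaleio_GW)s
    rest_server_username=%(scaleio_Admin)s
    rest_server_password=%(scaleio_Password)s
    protection_domain_name=%(protection_domain)s
    storage_pools=%(protection_domain)s:%(storage_pool_1)s
    storage_pool_name=%(storage_pool_1)s
    round_volume_capacity=True
    force_delete=True
    verify_server_certificate=False
    """ % settings

    # Settings the ini_setting resources add to /etc/cinder/cinder.conf.
    cinder_conf_additions = """[DEFAULT]
    enabled_backends=ScaleIO

    [ScaleIO]
    volume_driver=cinder.volume.drivers.emc.scaleio.ScaleIODriver
    cinder_scaleio_config_file=/etc/cinder/cinder_scaleio.config
    volume_backend_name=ScaleIO
    """

    if __name__ == '__main__':
        print(cinder_scaleio_config)
        print(cinder_conf_additions)
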
-# -# - -[base] -name=CentOS-$releasever - Base -mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os -#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ -gpgcheck=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 - -#released updates -[updates] -name=CentOS-$releasever - Updates -mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates -#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/ -gpgcheck=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 - -#additional packages that may be useful -[extras] -name=CentOS-$releasever - Extras -mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras -#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/ -gpgcheck=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 - -#additional packages that extend functionality of existing packages -[centosplus] -name=CentOS-$releasever - Plus -mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus -#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/ -gpgcheck=0 -enabled=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 - -#contrib - packages by Centos Users -[contrib] -name=CentOS-$releasever - Contrib -mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib -#baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/ -gpgcheck=0 -enabled=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 - diff --git a/deployment_scripts/puppet/install_scaleio_controller/templates/cinder_scaleio.config.erb b/deployment_scripts/puppet/install_scaleio_controller/templates/cinder_scaleio.config.erb deleted file mode 100644 index bc3cbf0..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/templates/cinder_scaleio.config.erb +++ /dev/null @@ -1,10 +0,0 @@ -[scaleio] -rest_server_ip=10.225.25.200 -rest_server_username=admin -rest_server_password=Scaleio123 -protection_domain_name=default -#storage_pools=use-ash1-pd1:use-ash1-sp-tier1,use-ash1-pd1:use-ash1-sp-tier2 -storage_pool_name=default -round_volume_capacity=True -force_delete=True -verify_server_certificate=False \ No newline at end of file diff --git a/deployment_scripts/puppet/install_scaleio_controller/templates/epel.repo b/deployment_scripts/puppet/install_scaleio_controller/templates/epel.repo deleted file mode 100644 index 032c2e9..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/templates/epel.repo +++ /dev/null @@ -1,26 +0,0 @@ -[epel] -name=Extra Packages for Enterprise Linux 6 - $basearch -#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch -failovermethod=priority -enabled=1 -gpgcheck=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 - -[epel-debuginfo] -name=Extra Packages for Enterprise Linux 6 - $basearch - Debug -#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch -failovermethod=priority -enabled=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 -gpgcheck=0 - -[epel-source] -name=Extra Packages for Enterprise Linux 6 - $basearch - Source -#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch -failovermethod=priority -enabled=0 
-gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 -gpgcheck=0 diff --git a/deployment_scripts/puppet/install_scaleio_controller/templates/gatewayUser.properties.erb b/deployment_scripts/puppet/install_scaleio_controller/templates/gatewayUser.properties.erb deleted file mode 100644 index 1a2ba56..0000000 --- a/deployment_scripts/puppet/install_scaleio_controller/templates/gatewayUser.properties.erb +++ /dev/null @@ -1,16 +0,0 @@ -mdm.ip.addresses=<%= @mdm_ip_1 %>,<%= @mdm_ip_2 %> -system.id= -mdm.port=6611 -gateway-admin.password=Scaleio123 -vmware-mode=false -im.parallelism=100 -do.not.update.user.properties.with.values.before.upgrade=false -features.enable_gateway_and_IM=true -features.enable_snmp=false -snmp.mdm.username= -snmp.mdm.password= -snmp.sampling_frequency=30 -snmp.traps_receiver_ip= -snmp.port=162 -im.ip.ignore.list= -lia.password=Scaleio123 \ No newline at end of file diff --git a/doc/TestPlanforScaleIOCinderFuelPlugin.docx b/doc/TestPlanforScaleIOCinderFuelPlugin.docx deleted file mode 100644 index 2cbcdef..0000000 Binary files a/doc/TestPlanforScaleIOCinderFuelPlugin.docx and /dev/null differ diff --git a/doc/TestReportforScaleio-Cinder-FuelPlugin.docx b/doc/TestReportforScaleio-Cinder-FuelPlugin.docx deleted file mode 100644 index eabc7b3..0000000 Binary files a/doc/TestReportforScaleio-Cinder-FuelPlugin.docx and /dev/null differ diff --git a/doc/fuel-plugin-scaleio-cinder.vsd b/doc/fuel-plugin-scaleio-cinder.vsd deleted file mode 100644 index 057bdc8..0000000 Binary files a/doc/fuel-plugin-scaleio-cinder.vsd and /dev/null differ diff --git a/doc/source/Makefile.txt b/doc/source/Makefile.txt deleted file mode 100644 index 0e782b8..0000000 --- a/doc/source/Makefile.txt +++ /dev/null @@ -1,177 +0,0 @@ -# Makefile for Sphinx documentation -# - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = sphinx-build -PAPER = -BUILDDIR = _build - -# User-friendly check for sphinx-build -ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) -$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) -endif - -# Internal variables. -PAPEROPT_a4 = -D latex_paper_size=a4 -PAPEROPT_letter = -D latex_paper_size=letter -ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . -# the i18n builder cannot share the environment and doctrees with the others -I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
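
The ``gatewayUser.properties`` template above configures the ScaleIO Gateway that both the deployment scripts and the Cinder driver talk to over REST. A quick way to confirm the gateway is reachable with the credentials from the plugin settings is to request a token and reuse it as the password for a follow-up call, which is how the driver itself authenticates. This is only a sketch under assumptions: the address, port and credentials are placeholders, and it presumes the standard ScaleIO gateway ``/api/login`` endpoint::

    # Illustrative connectivity check against the ScaleIO Gateway REST API.
    # GATEWAY_IP, GATEWAY_PORT, USER and PASSWORD are placeholders; the real
    # values come from the plugin settings (scaleio_GW, scaleio_Admin, ...).
    import requests

    GATEWAY_IP = '192.168.0.10'   # assumed example address
    GATEWAY_PORT = '443'          # assumed port; use the gateway's REST port
    USER = 'admin'
    PASSWORD = 'password'

    base = 'https://%s:%s' % (GATEWAY_IP, GATEWAY_PORT)

    # 1. Log in with basic auth; the gateway returns a token string.
    #    verify=False mirrors verify_server_certificate=False in the config.
    r = requests.get(base + '/api/login', auth=(USER, PASSWORD), verify=False)
    r.raise_for_status()
    token = r.json()

    # 2. Reuse the token as the password, exactly as the driver does, e.g. to
    #    resolve a protection domain id by name (endpoint used in scaleio.py).
    domain = 'default'            # assumed example protection domain name
    r = requests.get(base + '/api/types/Domain/instances/getByName::' + domain,
                     auth=(USER, token), verify=False)
    print('protection domain id:', r.json())
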
- -.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext - -help: - @echo "Please use \`make ' where is one of" - @echo " html to make standalone HTML files" - @echo " dirhtml to make HTML files named index.html in directories" - @echo " singlehtml to make a single large HTML file" - @echo " pickle to make pickle files" - @echo " json to make JSON files" - @echo " htmlhelp to make HTML files and a HTML help project" - @echo " qthelp to make HTML files and a qthelp project" - @echo " devhelp to make HTML files and a Devhelp project" - @echo " epub to make an epub" - @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" - @echo " latexpdf to make LaTeX files and run them through pdflatex" - @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" - @echo " text to make text files" - @echo " man to make manual pages" - @echo " texinfo to make Texinfo files" - @echo " info to make Texinfo files and run them through makeinfo" - @echo " gettext to make PO message catalogs" - @echo " changes to make an overview of all changed/added/deprecated items" - @echo " xml to make Docutils-native XML files" - @echo " pseudoxml to make pseudoxml-XML files for display purposes" - @echo " linkcheck to check all external links for integrity" - @echo " doctest to run all doctests embedded in the documentation (if enabled)" - -clean: - rm -rf $(BUILDDIR)/* - -html: - $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html - @echo - @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." - -dirhtml: - $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml - @echo - @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." - -singlehtml: - $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml - @echo - @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." - -pickle: - $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle - @echo - @echo "Build finished; now you can process the pickle files." - -json: - $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json - @echo - @echo "Build finished; now you can process the JSON files." - -htmlhelp: - $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp - @echo - @echo "Build finished; now you can run HTML Help Workshop with the" \ - ".hhp project file in $(BUILDDIR)/htmlhelp." - -qthelp: - $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp - @echo - @echo "Build finished; now you can run "qcollectiongenerator" with the" \ - ".qhcp project file in $(BUILDDIR)/qthelp, like this:" - @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/FuelNSXvplugin.qhcp" - @echo "To view the help file:" - @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/FuelNSXvplugin.qhc" - -devhelp: - $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp - @echo - @echo "Build finished." - @echo "To view the help file:" - @echo "# mkdir -p $$HOME/.local/share/devhelp/FuelNSXvplugin" - @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/FuelNSXvplugin" - @echo "# devhelp" - -epub: - $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub - @echo - @echo "Build finished. The epub file is in $(BUILDDIR)/epub." - -latex: - $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex - @echo - @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 
- @echo "Run \`make' in that directory to run these through (pdf)latex" \ - "(use \`make latexpdf' here to do that automatically)." - -latexpdf: - $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex - @echo "Running LaTeX files through pdflatex..." - $(MAKE) -C $(BUILDDIR)/latex all-pdf - @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." - -latexpdfja: - $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex - @echo "Running LaTeX files through platex and dvipdfmx..." - $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja - @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." - -text: - $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text - @echo - @echo "Build finished. The text files are in $(BUILDDIR)/text." - -man: - $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man - @echo - @echo "Build finished. The manual pages are in $(BUILDDIR)/man." - -texinfo: - $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo - @echo - @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." - @echo "Run \`make' in that directory to run these through makeinfo" \ - "(use \`make info' here to do that automatically)." - -info: - $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo - @echo "Running Texinfo files through makeinfo..." - make -C $(BUILDDIR)/texinfo info - @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." - -gettext: - $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale - @echo - @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." - -changes: - $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes - @echo - @echo "The overview file is in $(BUILDDIR)/changes." - -linkcheck: - $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck - @echo - @echo "Link check complete; look for any errors in the above output " \ - "or in $(BUILDDIR)/linkcheck/output.txt." - -doctest: - $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest - @echo "Testing of doctests in the sources finished, look at the " \ - "results in $(BUILDDIR)/doctest/output.txt." - -xml: - $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml - @echo - @echo "Build finished. The XML files are in $(BUILDDIR)/xml." - -pseudoxml: - $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml - @echo - @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." \ No newline at end of file diff --git a/doc/source/appendix.rst b/doc/source/appendix.rst deleted file mode 100644 index 88f1935..0000000 --- a/doc/source/appendix.rst +++ /dev/null @@ -1,8 +0,0 @@ -Appendix -======== - -- `ScaleIO Web Site `_ -- `ScaleIO Documentation `_ -- `ScaleIO Download `_ -- `Fuel Enable Experimental Features `_ -- `Fuel Plugins Catalog `_ diff --git a/doc/source/conf.py b/doc/source/conf.py deleted file mode 100644 index fa7bac0..0000000 --- a/doc/source/conf.py +++ /dev/null @@ -1,340 +0,0 @@ -# -*- coding: utf-8 -*- -# -# fuel-plugin-scaleio-cinder documentation build configuration file, created by -# sphinx-quickstart on Wed Oct 7 12:48:35 2015. -# -# This file is execfile()d with the current directory set to its -# containing dir. -# -# Note that not all possible configuration values are present in this -# autogenerated file. -# -# All configuration values have a default; values that are commented out -# serve to show the default. - -import sys -import os - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. 
If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -#sys.path.insert(0, os.path.abspath('.')) - -# -- General configuration ------------------------------------------------ - -# If your documentation needs a minimal Sphinx version, state it here. -#needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ -# 'sphinx.ext.todo', -# 'sphinx.ext.coverage', -] - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# The suffix of source filenames. -source_suffix = '.rst' - -# The encoding of source files. -#source_encoding = 'utf-8-sig' - -# The master toctree document. -master_doc = 'index' - -# General information about the project. -project = u'The ScaleIO Cinder plugin for Fuel' -copyright = u'2015, EMC' - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -version = '1.0' -# The full version, including alpha/beta/rc tags. -release = '1.0-1.0.0-1' - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -#language = None - -# There are two options for replacing |today|: either, you set today to some -# non-false value, then it is used: -#today = '' -# Else, today_fmt is used as the format for a strftime call. -#today_fmt = '%B %d, %Y' - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -exclude_patterns = [] - -# The reST default role (used for this markup: `text`) to use for all -# documents. -#default_role = None - -# If true, '()' will be appended to :func: etc. cross-reference text. -#add_function_parentheses = True - -# If true, the current module name will be prepended to all description -# unit titles (such as .. function::). -#add_module_names = True - -# If true, sectionauthor and moduleauthor directives will be shown in the -# output. They are ignored by default. -#show_authors = False - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = 'sphinx' - -# A list of ignored prefixes for module index sorting. -#modindex_common_prefix = [] - -# If true, keep warnings as "system message" paragraphs in the built documents. -#keep_warnings = False - - -# -- Options for HTML output ---------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -html_theme = 'default' - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -#html_theme_options = {} - -# Add any paths that contain custom themes here, relative to this directory. -#html_theme_path = [] - -# The name for this set of Sphinx documents. If None, it defaults to -# " v documentation". -#html_title = None - -# A shorter title for the navigation bar. Default is the same as html_title. -#html_short_title = None - -# The name of an image file (relative to this directory) to place at the top -# of the sidebar. -#html_logo = None - -# The name of an image file (within the static path) to use as favicon of the -# docs. 
This file should be a Windows icon file (.ico) being 16x16 or 32x32 -# pixels large. -#html_favicon = None - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ['_static'] - -# Add any extra paths that contain custom files (such as robots.txt or -# .htaccess) here, relative to this directory. These files are copied -# directly to the root of the documentation. -#html_extra_path = [] - -# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, -# using the given strftime format. -#html_last_updated_fmt = '%b %d, %Y' - -# If true, SmartyPants will be used to convert quotes and dashes to -# typographically correct entities. -#html_use_smartypants = True - -# Custom sidebar templates, maps document names to template names. -#html_sidebars = {} - -# Additional templates that should be rendered to pages, maps page names to -# template names. -#html_additional_pages = {} - -# If false, no module index is generated. -#html_domain_indices = True - -# If false, no index is generated. -#html_use_index = True - -# If true, the index is split into individual pages for each letter. -#html_split_index = False - -# If true, links to the reST sources are added to the pages. -#html_show_sourcelink = True - -# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. -#html_show_sphinx = True - -# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. -#html_show_copyright = True - -# If true, an OpenSearch description file will be output, and all pages will -# contain a tag referring to it. The value of this option must be the -# base URL from which the finished HTML is served. -#html_use_opensearch = '' - -# This is the file name suffix for HTML files (e.g. ".xhtml"). -#html_file_suffix = None - -# Output file base name for HTML help builder. -htmlhelp_basename = 'fuel-plugin-scaleio-cinderdoc' - - -# -- Options for LaTeX output --------------------------------------------- - -latex_elements = { -# The paper size ('letterpaper' or 'a4paper'). -#'papersize': 'letterpaper', - -# The font size ('10pt', '11pt' or '12pt'). -#'pointsize': '10pt', - -# Additional stuff for the LaTeX preamble. -#'preamble': '', -} - -# Grouping the document tree into LaTeX files. List of tuples -# (source start file, target name, title, -# author, documentclass [howto, manual, or own class]). -latex_documents = [ - ('index', 'fuel-plugin-scaleio-cinder.tex', u'The ScaleIO Cinder Plugin for Fuel Documentation', - u'EMC', 'manual'), -] - -# The name of an image file (relative to this directory) to place at the top of -# the title page. -#latex_logo = None - -# For "manual" documents, if this is true, then toplevel headings are parts, -# not chapters. -#latex_use_parts = False - -# If true, show page references after internal links. -#latex_show_pagerefs = False - -# If true, show URL addresses after external links. -#latex_show_urls = False - -# Documents to append as an appendix to all manuals. -#latex_appendices = [] - -# If false, no module index is generated. 
-#latex_domain_indices = True - -# make latex stop printing blank pages between sections -# http://stackoverflow.com/questions/5422997/sphinx-docs-remove-blank-pages-from-generated-pdfs -latex_elements = { 'classoptions': ',openany,oneside', 'babel' : '\\usepackage[english]{babel}' } - - -# -- Options for manual page output --------------------------------------- - -# One entry per manual page. List of tuples -# (source start file, name, description, authors, manual section). -man_pages = [ - ('index', 'fuel-plugin-scaleio-cinder', u'Guide to the ScaleIO Cinder Plugin ver. 1.0-1.0.0-1 for Fuel', - [u'EMC'], 1) -] - -# If true, show URL addresses after external links. -#man_show_urls = False - - -# -- Options for Texinfo output ------------------------------------------- - -# Grouping the document tree into Texinfo files. List of tuples -# (source start file, target name, title, author, -# dir menu entry, description, category) -texinfo_documents = [ - ('index', 'fuel-plugin-scaleio-cinder', u'The ScaleIO Cinder Plugin for Fuel Documentation', - u'EMC', 'fuel-plugin-scaleio-cinder', 'The ScaleIO Cinder Plugin for Fuel Documentation', - 'Miscellaneous'), -] - -# Documents to append as an appendix to all manuals. -#texinfo_appendices = [] - -# If false, no module index is generated. -#texinfo_domain_indices = True - -# How to display URL addresses: 'footnote', 'no', or 'inline'. -#texinfo_show_urls = 'footnote' - -# If true, do not generate a @detailmenu in the "Top" node's menu. -#texinfo_no_detailmenu = False - -# Insert footnotes where they are defined instead of -# at the end. -pdf_inline_footnotes = True - - - -# -- Options for Epub output ---------------------------------------------- - -# Bibliographic Dublin Core info. -epub_title = u'The ScaleIO Cinder Plugin for Fuel' -epub_author = u'EMC' -epub_publisher = u'EMC' -epub_copyright = u'2015, EMC' - -# The basename for the epub file. It defaults to the project name. -#epub_basename = u'fuel-plugin-mellanox' - -# The HTML theme for the epub output. Since the default themes are not optimized -# for small screen space, using the same theme for HTML and epub output is -# usually not wise. This defaults to 'epub', a theme designed to save visual -# space. -#epub_theme = 'epub' - -# The language of the text. It defaults to the language option -# or en if the language is not set. -#epub_language = '' - -# The scheme of the identifier. Typical schemes are ISBN or URL. -#epub_scheme = '' - -# The unique identifier of the text. This can be a ISBN number -# or the project homepage. -#epub_identifier = '' - -# A unique identification for the text. -#epub_uid = '' - -# A tuple containing the cover image and cover page html template filenames. -#epub_cover = () - -# A sequence of (type, uri, title) tuples for the guide element of content.opf. -#epub_guide = () - -# HTML files that should be inserted before the pages created by sphinx. -# The format is a list of tuples containing the path and title. -#epub_pre_files = [] - -# HTML files shat should be inserted after the pages created by sphinx. -# The format is a list of tuples containing the path and title. -#epub_post_files = [] - -# A list of files that should not be packed into the epub file. -epub_exclude_files = ['search.html'] - -# The depth of the table of contents in toc.ncx. -#epub_tocdepth = 3 - -# Allow duplicate toc entries. -#epub_tocdup = True - -# Choose between 'default' and 'includehidden'. -#epub_tocscope = 'default' - -# Fix unsupported image types using the PIL. 
-#epub_fix_images = False - -# Scale large images. -#epub_max_image_width = 0 - -# How to display URL addresses: 'footnote', 'no', or 'inline'. -#epub_show_urls = 'inline' - -# If false, no index is generated. -#epub_use_index = True diff --git a/doc/source/images/SIO_Support.png b/doc/source/images/SIO_Support.png deleted file mode 100644 index 1daac64..0000000 Binary files a/doc/source/images/SIO_Support.png and /dev/null differ diff --git a/doc/source/images/fuel-plugin-scaleio-cinder-1.jpg b/doc/source/images/fuel-plugin-scaleio-cinder-1.jpg deleted file mode 100644 index 4c3d2a0..0000000 Binary files a/doc/source/images/fuel-plugin-scaleio-cinder-1.jpg and /dev/null differ diff --git a/doc/source/images/fuel-plugin-scaleio-cinder-2.jpg b/doc/source/images/fuel-plugin-scaleio-cinder-2.jpg deleted file mode 100644 index 6773b2e..0000000 Binary files a/doc/source/images/fuel-plugin-scaleio-cinder-2.jpg and /dev/null differ diff --git a/doc/source/images/installation/image001.png b/doc/source/images/installation/image001.png deleted file mode 100644 index 883c7b8..0000000 Binary files a/doc/source/images/installation/image001.png and /dev/null differ diff --git a/doc/source/images/installation/image002.png b/doc/source/images/installation/image002.png deleted file mode 100644 index 5f55cff..0000000 Binary files a/doc/source/images/installation/image002.png and /dev/null differ diff --git a/doc/source/images/installation/image003.png b/doc/source/images/installation/image003.png deleted file mode 100644 index 622c040..0000000 Binary files a/doc/source/images/installation/image003.png and /dev/null differ diff --git a/doc/source/images/installation/image004.png b/doc/source/images/installation/image004.png deleted file mode 100644 index 7a2609f..0000000 Binary files a/doc/source/images/installation/image004.png and /dev/null differ diff --git a/doc/source/images/installation/image005.png b/doc/source/images/installation/image005.png deleted file mode 100644 index c1488b9..0000000 Binary files a/doc/source/images/installation/image005.png and /dev/null differ diff --git a/doc/source/images/installation/image006.png b/doc/source/images/installation/image006.png deleted file mode 100644 index 57ac45b..0000000 Binary files a/doc/source/images/installation/image006.png and /dev/null differ diff --git a/doc/source/images/installation/image007.png b/doc/source/images/installation/image007.png deleted file mode 100644 index d76fb29..0000000 Binary files a/doc/source/images/installation/image007.png and /dev/null differ diff --git a/doc/source/images/installation/image008-o.png b/doc/source/images/installation/image008-o.png deleted file mode 100644 index dfe4abe..0000000 Binary files a/doc/source/images/installation/image008-o.png and /dev/null differ diff --git a/doc/source/images/installation/image008.png b/doc/source/images/installation/image008.png deleted file mode 100644 index 42750e8..0000000 Binary files a/doc/source/images/installation/image008.png and /dev/null differ diff --git a/doc/source/images/installation/image009.png b/doc/source/images/installation/image009.png deleted file mode 100644 index 147c59d..0000000 Binary files a/doc/source/images/installation/image009.png and /dev/null differ diff --git a/doc/source/images/installation/image010.png b/doc/source/images/installation/image010.png deleted file mode 100644 index 79495be..0000000 Binary files a/doc/source/images/installation/image010.png and /dev/null differ diff --git a/doc/source/images/installation/image011.png 
b/doc/source/images/installation/image011.png deleted file mode 100644 index 69d861b..0000000 Binary files a/doc/source/images/installation/image011.png and /dev/null differ diff --git a/doc/source/images/installation/image012.png b/doc/source/images/installation/image012.png deleted file mode 100644 index 0d23979..0000000 Binary files a/doc/source/images/installation/image012.png and /dev/null differ diff --git a/doc/source/images/installation/image013.png b/doc/source/images/installation/image013.png deleted file mode 100644 index 485ed9d..0000000 Binary files a/doc/source/images/installation/image013.png and /dev/null differ diff --git a/doc/source/images/installation/image014.png b/doc/source/images/installation/image014.png deleted file mode 100644 index fb08cb9..0000000 Binary files a/doc/source/images/installation/image014.png and /dev/null differ diff --git a/doc/source/images/installation/image015.png b/doc/source/images/installation/image015.png deleted file mode 100644 index 4ee6ec4..0000000 Binary files a/doc/source/images/installation/image015.png and /dev/null differ diff --git a/doc/source/images/installation/image016.png b/doc/source/images/installation/image016.png deleted file mode 100644 index c8aeb92..0000000 Binary files a/doc/source/images/installation/image016.png and /dev/null differ diff --git a/doc/source/images/installation/image017.png b/doc/source/images/installation/image017.png deleted file mode 100644 index b2ca602..0000000 Binary files a/doc/source/images/installation/image017.png and /dev/null differ diff --git a/doc/source/images/installation/image018.png b/doc/source/images/installation/image018.png deleted file mode 100644 index 35a349f..0000000 Binary files a/doc/source/images/installation/image018.png and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-1.png b/doc/source/images/scaleio-cinder-install-1.png deleted file mode 100644 index b571062..0000000 Binary files a/doc/source/images/scaleio-cinder-install-1.png and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-10.PNG b/doc/source/images/scaleio-cinder-install-10.PNG deleted file mode 100644 index 0bfb310..0000000 Binary files a/doc/source/images/scaleio-cinder-install-10.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-11.PNG b/doc/source/images/scaleio-cinder-install-11.PNG deleted file mode 100644 index bd76333..0000000 Binary files a/doc/source/images/scaleio-cinder-install-11.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-12.png b/doc/source/images/scaleio-cinder-install-12.png deleted file mode 100644 index 9ec2ddb..0000000 Binary files a/doc/source/images/scaleio-cinder-install-12.png and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-2.png b/doc/source/images/scaleio-cinder-install-2.png deleted file mode 100644 index f08f181..0000000 Binary files a/doc/source/images/scaleio-cinder-install-2.png and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-3.PNG b/doc/source/images/scaleio-cinder-install-3.PNG deleted file mode 100644 index 81af72d..0000000 Binary files a/doc/source/images/scaleio-cinder-install-3.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-4.PNG b/doc/source/images/scaleio-cinder-install-4.PNG deleted file mode 100644 index 7c05721..0000000 Binary files a/doc/source/images/scaleio-cinder-install-4.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-5.PNG 
b/doc/source/images/scaleio-cinder-install-5.PNG deleted file mode 100644 index b0c8b28..0000000 Binary files a/doc/source/images/scaleio-cinder-install-5.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-6.PNG b/doc/source/images/scaleio-cinder-install-6.PNG deleted file mode 100644 index e713073..0000000 Binary files a/doc/source/images/scaleio-cinder-install-6.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-7.PNG b/doc/source/images/scaleio-cinder-install-7.PNG deleted file mode 100644 index 54de442..0000000 Binary files a/doc/source/images/scaleio-cinder-install-7.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-8.PNG b/doc/source/images/scaleio-cinder-install-8.PNG deleted file mode 100644 index c4a8799..0000000 Binary files a/doc/source/images/scaleio-cinder-install-8.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-9.PNG b/doc/source/images/scaleio-cinder-install-9.PNG deleted file mode 100644 index f9fa6f1..0000000 Binary files a/doc/source/images/scaleio-cinder-install-9.PNG and /dev/null differ diff --git a/doc/source/images/scaleio-cinder-install-complete.jpg b/doc/source/images/scaleio-cinder-install-complete.jpg deleted file mode 100644 index e83ddb0..0000000 Binary files a/doc/source/images/scaleio-cinder-install-complete.jpg and /dev/null differ diff --git a/doc/source/index.rst b/doc/source/index.rst deleted file mode 100644 index a560be4..0000000 --- a/doc/source/index.rst +++ /dev/null @@ -1,22 +0,0 @@ - -.. fuel-plugin-scaleio-cinder documentation master file, created by - sphinx-quickstart on Wed Oct 7 12:48:35 2015. - You can adapt this file completely to your liking, but it should at least - contain the root `toctree` directive. - -============================================================ -Guide to the ScaleIO Cinder Plugin ver. 1.0-1.0.0-1 for Fuel -============================================================ - -User documentation -================== - -.. toctree:: - :maxdepth: 2 - - - introduction.rst - installation.rst - user-guide.rst - appendix.rst - diff --git a/doc/source/installation.rst b/doc/source/installation.rst deleted file mode 100644 index 1bd0d62..0000000 --- a/doc/source/installation.rst +++ /dev/null @@ -1,92 +0,0 @@ -Install ScaleIO Cinder Plugin -============================= -To install the ScaleIO-Cinder Fuel plugin: - -#. Download it from the - `Fuel Plugins Catalog `_. - -#. Copy the *rpm* file to the Fuel Master node: - :: - - [root@home ~]# scp scaleio-cinder-1.5-1.5.0-1.noarch.rpm root@fuel:/tmp - -#. Log into Fuel Master node and install the plugin using the - `Fuel CLI `_: - - :: - - [root@fuel ~]# fuel plugins --install scaleio-cinder-1.5-1.5.0-1.noarch.rpm - -#. Verify that the plugin is installed correctly: - :: - - [root@fuel-master ~]# fuel plugins - id | name | version | package_version - ---|---------------|---------|---------------- - 1 | scaleio-cinder| 1.5.0 | 2.0.0 - - -.. raw:: pdf - - PageBreak - -Configure ScaleIO plugin ------------------------- -Once the plugin has been copied and installed at the -Fuel Master node, you can configure the nodes and set the parameters for the plugin: - -#. Start by creating a new OpenStack environment following the - `Mirantis OpenStack User Guide `_. - -#. `Configure your environment `_. - - .. image:: images/scaleio-cinder-install-2.png - - -#. Open the **Settings tab** of the Fuel web UI and scroll down the page. 
- Select the Fuel plugin checkbox to enable ScaleIO Cinder plugin for Fuel: - - .. image:: images/scaleio-cinder-install-4.PNG - - +----------------------------+----------------------------------------------------+ - | Parameter name | Parameter description | - | | | - +============================+====================================================+ - | userName | The ScaleIO User name | - +----------------------------+----------------------------------------------------+ - | Password | The ScaleIO password for the selected user name | - +----------------------------+----------------------------------------------------+ - | ScaleIO GW IP | The IP address of the the ScaleIO Gateway service | - +----------------------------+----------------------------------------------------+ - | ScaleIO Primary IP | The ScaleIO cluster's primary IP address | - +----------------------------+----------------------------------------------------+ - | ScaleIO Secondary IP | The ScaleIO cluster's secondary IP address | - +----------------------------+----------------------------------------------------+ - | ScaleIO protection domain | Name of the ScaleIO's protection domain | - +----------------------------+----------------------------------------------------+ - | ScaleIO storage pool 1 | Name of the first storage pool | - +----------------------------+----------------------------------------------------+ - - .. note:: Please refer to the ScaleIO documentation for more information on these parameters. - - This is an example of the ScaleIO configuration parameters populated: - - .. image:: images/scaleio-cinder-install-5.PNG - - -#. After the configuration is done, you can add the nodes to the Openstack Deployment. - - .. image:: images/scaleio-cinder-install-3.png - - -#. You can run the network verification check and - `deploy changes `_ then. - -#. After deployment is completed, you should see a success message: - - .. image:: images/scaleio-cinder-install-complete.jpg - - -.. note:: It may take an hour or more for the OpenStack deployment - to complete, depending on your hardware configuration. - diff --git a/doc/source/introduction.rst b/doc/source/introduction.rst deleted file mode 100644 index d670318..0000000 --- a/doc/source/introduction.rst +++ /dev/null @@ -1,104 +0,0 @@ - -Overview -========= - -The following diagram shows the plugin's high level architecture: - -.. image:: images/fuel-plugin-scaleio-cinder-2.jpg - :width: 100% - -From the figure we can see that we need the following OpenStack roles -and services: - -.. csv-table:: OpenStack roles and services - :header: "Service Role/Name", "Description", "Installed in" - :widths: 50, 50, 50 - - "Controller Node + Cinder Host", "A node that runs network, volume, API, scheduler, and image services. Each service may be broken out into separate nodes for scalability or availability. In addition this node is a Cinder Host, that contains the Cinder Volume Manager", "OpenStack Cluster" - "Compute Node", "A node that runs the nova-compute daemon that manages Virtual Machine (VM) instances that provide a wide range of services, such as web applications and analytics", "OpenStack Cluster" - - - -In the **external ScaleIO cluster** we have installed the following -roles and services: - -.. 
csv-table:: ScaleIO cluster roles and services - :header: "Service Role", "Description", "Installed in" - :widths: 50, 50, 50 - - "ScaleIO Gateway (REST API)", "The ScaleIO Gateway service includes the REST API used to communicate storage commands to the ScaleIO cluster; in addition, this service is used for authentication and certificate management.", "ScaleIO Cluster" - "Meta-data Manager (MDM)", "Configures and monitors the ScaleIO system. The MDM can be configured in redundant Cluster Mode, with three members on three servers, or in Single Mode on a single server.", "ScaleIO Cluster" - "Tie Breaker (TB)", "The Tie Breaker service helps determine which service runs as the master and which as the slave.", "ScaleIO Cluster" - "Storage Data Server (SDS)", "Manages the capacity of a single server and acts as a back-end for data access. The SDS is installed on all servers contributing storage devices to the ScaleIO system. These devices are accessed through the SDS.", "ScaleIO Cluster" - "Storage Data Client (SDC)", "A lightweight device driver that exposes ScaleIO volumes as block devices to the application that resides on the same server on which the SDC is installed.", "OpenStack Cluster" - -**Note:** for more information on how to deploy a ScaleIO cluster, -please refer to the ScaleIO manuals located in the -`download packages `_ for -your platform and `watch the demo `__. - -Requirements -============ - -These are the plugin requirements: - -+--------------------------------------------------------------------------------+--------------------------------+ -| Requirement | Version/Comment | -+================================================================================+================================+ -| Mirantis OpenStack compatibility | 6.1 / 7.0 | -+--------------------------------------------------------------------------------+--------------------------------+ -| ScaleIO Version | >= 1.32 | -+--------------------------------------------------------------------------------+--------------------------------+ -| Controller and Compute nodes' operating system | CentOS 6.5/Ubuntu 14.04 LTS | -+--------------------------------------------------------------------------------+--------------------------------+ -| OpenStack Cluster (Controller/cinder-volume node) can access ScaleIO Cluster | via a TCP/IP Network | -+--------------------------------------------------------------------------------+--------------------------------+ -| OpenStack Cluster (Compute nodes) can access ScaleIO Cluster | via a TCP/IP Network | -+--------------------------------------------------------------------------------+--------------------------------+ -| Install ScaleIO Storage Data Client (SDC) on Controller and Compute nodes | Plugin takes care of install | -+--------------------------------------------------------------------------------+--------------------------------+ - -Limitations -=========== - -Currently, Fuel doesn't support multi-backend storage. The following table shows the currently supported versions and limitations: - -..
image:: images/SIO_Support.png - :width: 100% - -Configuration -============= - -Plugin files and directories: - -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| File/Directory | Description | -+==============================+==============================================================================================================+ -| deployment\_scripts | Folder that includes the bash/puppet manifests for deploying the services and roles required by the plugin | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| deployment\_scripts/puppet | Puppet modules and manifests used by the deployment tasks | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| environment\_config.yaml | Contains the ScaleIO plugin parameters/fields for the Fuel web UI | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| metadata.yaml | Contains the name, version and compatibility information for the ScaleIO plugin | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| pre\_build\_hook | Mandatory file - blank for the ScaleIO plugin | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| repositories/centos | Empty directory; the plugin scripts will download the required CentOS packages | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| repositories/ubuntu | Empty directory, not used | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ -| tasks.yaml | Contains the information about what scripts to run and how to run them | -+------------------------------+--------------------------------------------------------------------------------------------------------------+ - -Before starting a deployment, verify the following: - -#. Your ScaleIO cluster can route the 10G storage network to all Compute - nodes as well as to the Cinder control/manager node. - -#. An account on the ScaleIO cluster is created to use as the OpenStack - Administrator account (use the login/password for this account as - san\_login/password settings). - -#. The IP addresses of the ScaleIO cluster are obtained. diff --git a/doc/source/user-guide.rst b/doc/source/user-guide.rst deleted file mode 100644 index 7ff763a..0000000 --- a/doc/source/user-guide.rst +++ /dev/null @@ -1,121 +0,0 @@ -.. raw:: pdf - - PageBreak - -========== -User Guide -========== - -#. Install the ScaleIO-Cinder plugin using the `Installation Guide <./installation.rst>`_. - -#. Create an environment with the plugin enabled in the Fuel UI, launch the Fuel - site and check the Settings section to make sure the ScaleIO-Cinder - section exists. - -#. Add 3 nodes with the Controller role and 1 node with the Compute and another - role: - - .. image:: images/installation/image006.png - -#. A picture of the external ScaleIO cluster running: - - .. image:: images/installation/image007.png - -#. Retrieve the external ScaleIO cluster information.
For our example, these are the configuration settings: - - .. image:: images/installation/image007.png - - -#. Use the ScaleIO cluster information to update the ScaleIO plugin - configuration: - - .. image:: images/installation/image009.png - - -#. Apply the network settings. - -#. Use the networking settings that are appropriate for your - environment. For our example, we used the default settings provided - by Fuel: - - .. image:: images/installation/image010.png - - -#. Run the network verification check: - - .. image:: images/installation/image011.png - - -#. Deploy the cluster: - - .. image:: images/installation/image012.png - - -#. Once the deployment has finished successfully, open the OpenStack Dashboard (Horizon): - - .. image:: images/installation/image013.png - - -#. Check the Storage tab under System Information and make sure the ScaleIO - service is up and running: - - .. image:: images/installation/image014.png - - -ScaleIO Cinder plugin OpenStack operations -========================================== - -Once the OpenStack cluster is set up, we can set up ScaleIO volumes. This -is an example of how to attach a volume to a running VM: - -#. Log in to the OpenStack cluster: - - .. image:: images/scaleio-cinder-install-6.PNG - :alt: OpenStack Login - - -#. Review the block storage services by navigating to: Admin -> System -> - System Information section. You should see the ScaleIO Cinder - volume service. - - .. image:: images/scaleio-cinder-install-7.PNG - :alt: Block Storage Services Verification - - -#. Review the System Volumes by navigating to: Admin -> System -> - Volumes. You should see the ScaleIO Volume Type: - - .. image:: images/scaleio-cinder-install-8.PNG - :alt: Volume Type Verification - - -#. Create a new OpenStack Volume: - - .. image:: images/scaleio-cinder-install-9.PNG - :alt: Volume Creation - - -#. View the newly created Volume: - - .. image:: images/scaleio-cinder-install-10.PNG - :alt: Volume Listing - - -#. In the ScaleIO Control Panel, you will see that no Volumes have been - mapped yet: - - .. image:: images/scaleio-cinder-install-11.PNG - :alt: ScaleIO UI No mapped Volumes - - - -#. Once the Volume is attached to a VM, the ScaleIO UI will reflect the - mapping: - - ..
image:: images/scaleio-cinder-install-12.png - :alt: ScaleIO UI Mapped Volume - - - - diff --git a/environment_config.yaml b/environment_config.yaml deleted file mode 100644 index a4634e2..0000000 --- a/environment_config.yaml +++ /dev/null @@ -1,64 +0,0 @@ -attributes: - scaleio_Admin: - value: '' - label: 'Admin username' - description: 'Type the ScaleIO Admin username' - weight: 5 - type: "text" - regex: - source: '^[\S]{4,}$' - error: "You must provide a username with at least 4 characters" - scaleio_Password: - value: '' - label: 'Admin Password' - description: 'Type the ScaleIO Admin password' - weight: 10 - type: "password" - regex: - source: '^[\S]{4,}$' - error: "You must provide a password with at least 4 characters" - scaleio_GW: - value: '' - label: 'Gateway IP' - description: 'Type the ScaleIO Gateway IP or hostname' - weight: 15 - type: "text" - regex: - source: '\S' - error: "Gateway IP cannot be empty" - scaleio_mdm1: - value: '' - label: 'Primary MDM IP' - description: 'Type the primary MDM IP or hostname' - weight: 16 - type: "text" - regex: - source: '\S' - error: "Primary MDM IP cannot be empty" - scaleio_mdm2: - value: '' - label: 'Secondary MDM IP' - description: 'Type the secondary MDM IP or hostname' - weight: 17 - type: "text" - regex: - source: '\S' - error: "Secondary MDM IP cannot be empty" - protection_domain: - value: '' - label: 'Protection Domain' - description: 'Type the Protection Domain you want to use for OpenStack' - weight: 35 - type: "text" - regex: - source: '\S' - error: "Protection Domain cannot be empty" - storage_pool_1: - value: '' - label: 'Storage Pool' - description: 'Type the Storage Pool you want to use for OpenStack' - weight: 45 - type: "text" - regex: - source: '\S' - error: "Storage Pool cannot be empty" diff --git a/metadata.yaml b/metadata.yaml deleted file mode 100644 index cf536c4..0000000 --- a/metadata.yaml +++ /dev/null @@ -1,39 +0,0 @@ -# Plugin name -name: scaleio-cinder -# Human-readable name for your plugin -title: ScaleIO Cinder plugin -# Plugin version -version: '1.5.0' -# Description -description: Enable EMC ScaleIO as the block storage backend -# Required fuel version -fuel_version: ['6.1','7.0'] -# Specify license of your plugin -licenses: ['Apache License Version 2.0'] -# Specify author or company name -authors: ['Magdy Salem, EMC', 'Adrian Moreno Martinez, EMC'] -# A link to the plugin's page -homepage: 'https://github.com/stackforge/fuel-scaleio-cinder' -# Specify a group which your plugin implements, possible options: -# network, storage, storage::cinder, storage::glance, hypervisor -groups: [] -# The plugin is compatible with releases in the list -releases: - - os: centos - version: 2014.2.2-6.1 - mode: ['ha', 'multinode'] - deployment_scripts_path: deployment_scripts/ - repository_path: repositories/centos - - os: ubuntu - version: 2014.2.2-6.1 - mode: ['ha', 'multinode'] - deployment_scripts_path: deployment_scripts/ - repository_path: repositories/ubuntu - - os: ubuntu - version: 2015.1.0-7.0 - mode: ['ha', 'multinode'] - deployment_scripts_path: deployment_scripts/ - repository_path: repositories/ubuntu -#Version of plugin package -package_version: '2.0.0' - diff --git a/pre_build_hook b/pre_build_hook deleted file mode 100644 index dc05e98..0000000 --- a/pre_build_hook +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash - -# Add here any actions which are required before the plugin build, -# such as building packages, downloading packages from mirrors, and so on. -# The script should return 0 if there were no errors.
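The pre_build_hook above is intentionally left blank: the SDC packages and the CentOS repository metadata were committed directly under repositories/ instead (see the deletions that follow). Had the hook been used for the package downloading its comments describe, a minimal sketch might look like the following; the mirror URL is a placeholder assumption, not a real download location::

    #!/bin/bash
    set -eu

    # Illustrative sketch only: fetch the ScaleIO SDC packages into the
    # plugin's repository directories before the plugin is built.
    SDC_MIRROR="${SDC_MIRROR:-http://example.com/scaleio}"

    mkdir -p repositories/centos/Packages repositories/ubuntu
    curl -fLo repositories/centos/Packages/EMC-ScaleIO-sdc.rpm "${SDC_MIRROR}/EMC-ScaleIO-sdc.rpm"
    curl -fLo repositories/ubuntu/emc-scaleio-sdc.deb "${SDC_MIRROR}/emc-scaleio-sdc.deb"

    # Regenerate the yum metadata for the CentOS repository when createrepo is available.
    if command -v createrepo >/dev/null 2>&1; then
        createrepo repositories/centos
    fi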
diff --git a/repositories/centos/.gitkeep b/repositories/centos/.gitkeep deleted file mode 100644 index e69de29..0000000 diff --git a/repositories/centos/Packages/EMC-ScaleIO-sdc.rpm b/repositories/centos/Packages/EMC-ScaleIO-sdc.rpm deleted file mode 100644 index 6ba6b8e..0000000 Binary files a/repositories/centos/Packages/EMC-ScaleIO-sdc.rpm and /dev/null differ diff --git a/repositories/centos/repodata/1c08af82e8ad2d64539ab49b62c54e5e6c6a0f0756a943ba6c0f89d4ece5136c-filelists.xml.gz b/repositories/centos/repodata/1c08af82e8ad2d64539ab49b62c54e5e6c6a0f0756a943ba6c0f89d4ece5136c-filelists.xml.gz deleted file mode 100644 index 3299df1..0000000 Binary files a/repositories/centos/repodata/1c08af82e8ad2d64539ab49b62c54e5e6c6a0f0756a943ba6c0f89d4ece5136c-filelists.xml.gz and /dev/null differ diff --git a/repositories/centos/repodata/2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee-filelists.sqlite.bz2 b/repositories/centos/repodata/2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee-filelists.sqlite.bz2 deleted file mode 100644 index 8a57fe2..0000000 Binary files a/repositories/centos/repodata/2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee-filelists.sqlite.bz2 and /dev/null differ diff --git a/repositories/centos/repodata/3f2418a8019919b6eb1b825da000b90abb7a26ba1741e4f3b64fab9a4c76802e-primary.xml.gz b/repositories/centos/repodata/3f2418a8019919b6eb1b825da000b90abb7a26ba1741e4f3b64fab9a4c76802e-primary.xml.gz deleted file mode 100644 index 282c9e4..0000000 Binary files a/repositories/centos/repodata/3f2418a8019919b6eb1b825da000b90abb7a26ba1741e4f3b64fab9a4c76802e-primary.xml.gz and /dev/null differ diff --git a/repositories/centos/repodata/401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93-filelists.xml.gz b/repositories/centos/repodata/401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93-filelists.xml.gz deleted file mode 100644 index 995719d..0000000 Binary files a/repositories/centos/repodata/401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93-filelists.xml.gz and /dev/null differ diff --git a/repositories/centos/repodata/49e88ece5d5d1e36f7c3ce9f6e94e2bf7b0cf942f33cf114a0558a41e70be6d3-other.sqlite.bz2 b/repositories/centos/repodata/49e88ece5d5d1e36f7c3ce9f6e94e2bf7b0cf942f33cf114a0558a41e70be6d3-other.sqlite.bz2 deleted file mode 100644 index 50da1ce..0000000 Binary files a/repositories/centos/repodata/49e88ece5d5d1e36f7c3ce9f6e94e2bf7b0cf942f33cf114a0558a41e70be6d3-other.sqlite.bz2 and /dev/null differ diff --git a/repositories/centos/repodata/54d2af84584243256c5c31c0aaa1c99d826d5ec305f362b0842bc56ed0a31ac8-filelists.sqlite.bz2 b/repositories/centos/repodata/54d2af84584243256c5c31c0aaa1c99d826d5ec305f362b0842bc56ed0a31ac8-filelists.sqlite.bz2 deleted file mode 100644 index 482c4ce..0000000 Binary files a/repositories/centos/repodata/54d2af84584243256c5c31c0aaa1c99d826d5ec305f362b0842bc56ed0a31ac8-filelists.sqlite.bz2 and /dev/null differ diff --git a/repositories/centos/repodata/6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670-other.xml.gz b/repositories/centos/repodata/6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670-other.xml.gz deleted file mode 100644 index d44692a..0000000 Binary files a/repositories/centos/repodata/6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670-other.xml.gz and /dev/null differ diff --git a/repositories/centos/repodata/735377c76ec7cb64030af7e0fd56137693a250d78d9be9f46c80cd75ebda41a9-primary.sqlite.bz2 
b/repositories/centos/repodata/735377c76ec7cb64030af7e0fd56137693a250d78d9be9f46c80cd75ebda41a9-primary.sqlite.bz2 deleted file mode 100644 index 0e86c70..0000000 Binary files a/repositories/centos/repodata/735377c76ec7cb64030af7e0fd56137693a250d78d9be9f46c80cd75ebda41a9-primary.sqlite.bz2 and /dev/null differ diff --git a/repositories/centos/repodata/a9f0c021ced8ac9ee2233b6f88f204e72db1bbe17103fbb048ad9fbb5e5095b1-other.xml.gz b/repositories/centos/repodata/a9f0c021ced8ac9ee2233b6f88f204e72db1bbe17103fbb048ad9fbb5e5095b1-other.xml.gz deleted file mode 100644 index 1e4d67c..0000000 Binary files a/repositories/centos/repodata/a9f0c021ced8ac9ee2233b6f88f204e72db1bbe17103fbb048ad9fbb5e5095b1-other.xml.gz and /dev/null differ diff --git a/repositories/centos/repodata/ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331-primary.sqlite.bz2 b/repositories/centos/repodata/ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331-primary.sqlite.bz2 deleted file mode 100644 index c18b20d..0000000 Binary files a/repositories/centos/repodata/ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331-primary.sqlite.bz2 and /dev/null differ diff --git a/repositories/centos/repodata/d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69-other.sqlite.bz2 b/repositories/centos/repodata/d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69-other.sqlite.bz2 deleted file mode 100644 index ec5369a..0000000 Binary files a/repositories/centos/repodata/d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69-other.sqlite.bz2 and /dev/null differ diff --git a/repositories/centos/repodata/dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9-primary.xml.gz b/repositories/centos/repodata/dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9-primary.xml.gz deleted file mode 100644 index 2e5f2cf..0000000 Binary files a/repositories/centos/repodata/dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9-primary.xml.gz and /dev/null differ diff --git a/repositories/centos/repodata/repomd.xml b/repositories/centos/repodata/repomd.xml deleted file mode 100644 index dfd1579..0000000 --- a/repositories/centos/repodata/repomd.xml +++ /dev/null @@ -1,55 +0,0 @@ - - - 1450754058 - - 1c08af82e8ad2d64539ab49b62c54e5e6c6a0f0756a943ba6c0f89d4ece5136c - 35f31c37eeaf596efa22987d57d227a7bb369fc5d47ecd1d640b0c7f5835fa28 - - 1450754059 - 378 - 1003 - - - 3f2418a8019919b6eb1b825da000b90abb7a26ba1741e4f3b64fab9a4c76802e - 96a50c0d1b14905953cc6c59710879768b723cb4d1ef1b6c65e101685ac72f65 - - 1450754059 - 1069 - 3553 - - - 735377c76ec7cb64030af7e0fd56137693a250d78d9be9f46c80cd75ebda41a9 - fb5dd5c72a77eb670c83b9bd3564787553525157a8eb55a6a5120590fc3582e4 - - 1450754059.26 - 10 - 3169 - 24576 - - - 49e88ece5d5d1e36f7c3ce9f6e94e2bf7b0cf942f33cf114a0558a41e70be6d3 - af551d95cb3e21338f3aef8538595842a3615537cf43e12000ec0b7ffbb6eaac - - 1450754059.14 - 10 - 652 - 6144 - - - a9f0c021ced8ac9ee2233b6f88f204e72db1bbe17103fbb048ad9fbb5e5095b1 - 2b428c4c2cd6e583c89d0613aaaf4de4fac1c6216e35fd43ca260915be748572 - - 1450754059 - 249 - 306 - - - 54d2af84584243256c5c31c0aaa1c99d826d5ec305f362b0842bc56ed0a31ac8 - ae7b880e69c74359a00a090a2ebdf21dc7e27d73c495a9f15ecbf643d810b78e - - 1450754059.17 - 10 - 1050 - 7168 - - diff --git a/repositories/ubuntu/.gitkeep b/repositories/ubuntu/.gitkeep deleted file mode 100644 index e69de29..0000000 diff --git a/repositories/ubuntu/Packages.gz b/repositories/ubuntu/Packages.gz deleted file mode 100644 index 70f5178..0000000 Binary files 
a/repositories/ubuntu/Packages.gz and /dev/null differ diff --git a/repositories/ubuntu/Release b/repositories/ubuntu/Release deleted file mode 100644 index 3795a92..0000000 --- a/repositories/ubuntu/Release +++ /dev/null @@ -1,3 +0,0 @@ -Origin: Magdy Salem, EMC; Adrian Moreno Martinez, EMC -Label: scaleio-cinder -Version: 1.0 diff --git a/repositories/ubuntu/emc-scaleio-sdc.deb b/repositories/ubuntu/emc-scaleio-sdc.deb deleted file mode 100644 index 5d8a2c2..0000000 Binary files a/repositories/ubuntu/emc-scaleio-sdc.deb and /dev/null differ diff --git a/spec/fuel-plugin-scaleio-cinder-1-5-0-spec.rst b/spec/fuel-plugin-scaleio-cinder-1-5-0-spec.rst deleted file mode 100644 index 0528e35..0000000 --- a/spec/fuel-plugin-scaleio-cinder-1-5-0-spec.rst +++ /dev/null @@ -1,120 +0,0 @@ -=========================================================== -Fuel plugin for ScaleIO-Cinder clusters as a Cinder backend -=========================================================== - -The ScaleIO-Cinder plugin for Fuel extends Mirantis OpenStack functionality by adding -support for ScaleIO clusters as a Cinder backend. - -Problem description -=================== - -Currently, Fuel has no support for ScaleIO clusters as block storage for -OpenStack environments. The ScaleIO-Cinder plugin aims to provide this support. -This plugin will configure OpenStack environments to use an existing ScaleIO cluster. - -Proposed change -=============== - -Implement a Fuel plugin that will configure the ScaleIO-Cinder driver for -Cinder on all Controller nodes and Compute nodes. All Cinder services run -on controllers; no additional Cinder node is required in the environment. - -Alternatives ------------- -None - -Data model impact ------------------ - -None - -REST API impact ---------------- - -None - -Upgrade impact --------------- - -None - -Security impact ---------------- - -None - -Notifications impact --------------------- - -None - -Other end user impact --------------------- - -None - -Performance Impact ------------------- - -ScaleIO storage clusters provide high-performance block storage for the -OpenStack environment, and therefore enabling the ScaleIO-Cinder driver in OpenStack -will greatly improve the performance of OpenStack storage. - -Other deployer impact --------------------- - -None - -Developer impact ----------------- - -None - -Implementation -============== - -The plugin generates the appropriate cinder.conf to enable the ScaleIO -cluster within OpenStack (an illustrative cinder.conf fragment is sketched after the -tasks.yaml section at the end of this diff). The ScaleIO driver and filter files are -required; the plugin contains these files and copies them to the correct locations. - -The cinder-volume service is installed on all Controller nodes and Compute nodes. -All instances of cinder-volume have the same host parameter in the cinder.conf -file. This is required so that any cinder-volume instance can manage all volumes -in the environment. - -Assignee(s) ------------ -| Patrick Butler Monterde -| Magdy Salem -| Adrian Moreno - -Work Items ----------- - -* Implement the Fuel plugin. -* Implement the Puppet manifests. -* Testing. -* Write the documentation. - -Dependencies ============ - -* Fuel 6.1 and higher. - -Testing -======= - -* Prepare a test plan. -* Test the plugin by deploying environments with all Fuel deployment modes. - -Documentation Impact -==================== - -* Deployment Guide (how to install the storage backends, how to prepare an - environment for installation, how to install the plugin, how to deploy an - OpenStack environment with the plugin).
-* User Guide (which features the plugin provides, how to use them in the - deployed OpenStack environment). -* Test Plan. -* Test Report. - diff --git a/tasks.yaml b/tasks.yaml deleted file mode 100644 index 49db20e..0000000 --- a/tasks.yaml +++ /dev/null @@ -1,28 +0,0 @@ -# These tasks will be applied on controller nodes; -# here you can also specify several roles, for example -# ['cinder', 'compute'] will be applied only on -# cinder and compute nodes -#Install ScaleIO on compute nodes -- role: ['compute'] - stage: post_deployment/2010 - type: puppet - parameters: - puppet_manifest: install_scaleio_compute.pp - puppet_modules: puppet/:/etc/puppet/modules - timeout: 600 -#Install ScaleIO on controllers -- role: ['controller'] - stage: post_deployment/2110 - type: puppet - parameters: - puppet_manifest: install_scaleio_controller.pp - puppet_modules: puppet/:/etc/puppet/modules - timeout: 600 -#Install ScaleIO on the primary controller -- role: ['primary-controller'] - stage: post_deployment/2120 - type: puppet - parameters: - puppet_manifest: install_scaleio_controller.pp - puppet_modules: puppet/:/etc/puppet/modules - timeout: 600
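The spec's Implementation section states that the plugin renders a cinder.conf enabling the ScaleIO backend and that every cinder-volume instance shares the same host value so any instance can manage any volume. Purely as an illustration of that idea (the section layout and values below are assumptions for this sketch; the real file is generated by the plugin's Puppet manifests from the Fuel UI fields, including the protection domain and storage pool), the relevant cinder.conf fragment might look like this::

    [DEFAULT]
    # Identical on every node running cinder-volume, so any instance
    # can manage all volumes in the environment.
    host = scaleio-cinder
    enabled_backends = scaleio

    [scaleio]
    volume_backend_name = scaleio
    # Gateway address and credentials correspond to the Fuel UI fields
    # (Gateway IP, Admin username/password); placeholders are not real values.
    san_ip = <ScaleIO Gateway IP>
    san_login = <ScaleIO admin username>
    san_password = <ScaleIO admin password>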