Retire PowerVMStacker SIG: Remove Project Content

Depends-on: https://review.opendev.org/c/openstack/project-config/+/909535
Change-Id: Icb1894348ef7b1602a3181dad3162df6d6ad53af
Takashi Kajinami 2024-02-20 22:49:19 +09:00
parent 376d9493e2
commit 04e053c5cb
157 changed files with 6 additions and 32980 deletions

.gitignore
View File

@ -1,29 +0,0 @@
# Add patterns in here to exclude files created by tools integrated with this
# repository, such as test frameworks from the project's recommended workflow,
# rendered documentation and package builds.
#
# Don't add patterns to exclude files created by preferred personal tools
# (editors, IDEs, even your operating system itself). These should instead be
# maintained outside the repository, for example in a ~/.gitignore file added
# with:
#
# git config --global core.excludesfile '~/.gitignore'
# Bytecompiled Python
*.py[cod]
# Packages
*.egg-info
# Unit test / coverage reports
.coverage
cover/
.stestr/
.tox/
# Sphinx
doc/build/
# pbr generates these
AUTHORS
ChangeLog

View File

@ -1,3 +0,0 @@
[DEFAULT]
test_path=./nova_powervm/tests
top_dir=./

View File

@ -1,7 +0,0 @@
- project:
templates:
- check-requirements
- openstack-lower-constraints-jobs
- openstack-python-jobs
- openstack-python36-jobs
- periodic-stable-jobs

View File

@ -1,19 +0,0 @@
Contributing to Nova-PowerVM
============================
If you would like to contribute to the development of OpenStack,
you must follow the steps in the "If you're a developer"
section of this page:
https://wiki.openstack.org/wiki/How_To_Contribute
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/nova-powervm

View File

@ -1,4 +0,0 @@
Nova-PowerVM Style Commandments
===============================
- Follow the Nova HACKING.rst

LICENSE
View File

@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

View File

@ -1,329 +1,7 @@
===================
PowerVM Nova Driver
===================
The contents of this repository are still available in the Git source
code management system. To see the contents of this repository before it
reached its end of life, please check out the previous commit with
"git checkout HEAD^1".
The IBM PowerVM hypervisor provides virtualization on POWER hardware. PowerVM
admins can see benefits in their environments by making use of OpenStack.
This driver (along with a Neutron ML2 compatible agent and Ceilometer agent)
provides the capability for operators of PowerVM to use OpenStack natively.
Problem Description
===================
As ecosystems continue to evolve around the POWER platform, a single OpenStack
driver does not meet all of the needs for the various hypervisors. The
standard libvirt driver provides support for KVM on POWER systems. This nova
driver provides PowerVM support to an OpenStack environment.
This driver meets the following goals:
* Built within the community
* Fits the OpenStack model
* Utilizes automated functional and unit tests
* Enables use of PowerVM systems through the OpenStack APIs
* Allows attachment of volumes from Cinder over supported protocols
This driver makes the following use cases available for PowerVM:
* As a deployer, all of the standard lifecycle operations (start, stop,
reboot, migrate, destroy, etc.) should be supported on a PowerVM based
instance.
* As a deployer, I should be able to capture an instance to an image.
* VNC console access to deployed instances.
Usage
=====
To use the driver, install the nova-powervm project on your NovaLink-based
PowerVM system. The nova-powervm project requires only minimal configuration.
See the configuration options section of the dev-ref for more information.
It is recommended that operators also make use of the networking-powervm
project. The project ensures that the network bridge supports the VLAN-based
networks required for the workloads.
There is also a ceilometer-powervm project that can be included.
Future work will be done to include PowerVM into the various OpenStack
deployment models.
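For illustration, a minimal sketch of the ``nova.conf`` changes involved
(assuming the out-of-tree driver and the default localdisk setup; see the
dev-ref for the authoritative option list)::
    [DEFAULT]
    compute_driver = powervm_ext.driver.PowerVMDriver
    [powervm]
    disk_driver = localdisk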
Overview of Architecture
========================
The driver enables the following:
* Provide deployments that work with the OpenStack model.
* Driver is implemented using a new version of the PowerVM REST API.
* Ephemeral disks are supported either with Virtual I/O Server (VIOS)
hosted local disks or via Shared Storage Pools (a PowerVM cluster file
system).
* Volume support is provided via Cinder through supported protocols for the
Hypervisor (virtual SCSI and N-Port ID Virtualization).
* Live migration support is available when using Shared Storage Pools or boot
from volume.
* Network integration is supported via the ML2 compatible Neutron Agent. This
is the openstack/networking-powervm project.
* Automated Functional Testing is provided to validate changes from the broader
OpenStack community against the PowerVM driver.
* Thorough unit, syntax, and style testing is provided and enforced for the
driver.
The intention is that this driver follows the OpenStack Nova model.
The driver is being promoted into the nova core project in stages, the first of
which is represented by blueprint `powervm-nova-compute-driver`_. The
coexistence of these two incarnations of the driver raises some `Upgrade
Considerations`_.
.. _`powervm-nova-compute-driver`: https://blueprints.launchpad.net/nova/+spec/powervm-nova-compute-driver
Data Model Impact
-----------------
* The evacuate API is supported as part of the PowerVM driver. It optionally
allows for the NVRAM data to be stored in a Swift object store. However, this
does not impact the data model itself. It simply provides a location to
optionally store the VM's NVRAM metadata in the event of a rebuild,
evacuate, shelve, migration or resize.
REST API Impact
---------------
No REST API impacts.
Security Impact
---------------
No known security impacts.
Notifications Impact
--------------------
No new notifications. The driver does expect that the Neutron agent will
return an event when the VIF plug has occurred, assuming that Neutron is
the network service.
Other End User Impact
---------------------
The administrator may notice new logging messages in the nova compute logs.
Performance Impact
------------------
The driver has a similar deployment speed and agility to other hypervisors.
It has been tested with up to 10 concurrent deploys with several hundred VMs
on a given server.
Most operations are comparable in speed; deployment, volume attach/detach,
and lifecycle operations are quick.
Due to the nature of the project, any performance impacts are limited to the
Compute Driver. The API processes for instance are not impacted.
Other Deployer Impact
---------------------
The cloud administrator will need to refer to documentation on how to
configure OpenStack for use with a PowerVM hypervisor.
A 'powervm' configuration group is used to contain all the PowerVM specific
configuration settings. Existing configuration file attributes will be
reused as much as possible (e.g. vif_plugging_timeout). This reduces the number
of PowerVM specific items that will be needed.
It is the goal of the project to only require minimal additional attributes.
The deployer may specify additional attributes to fit their configuration.
Developer Impact
----------------
The code for this driver is currently contained within a powervm project.
The driver is within the /nova/virt/powervm_ext/ package and extends the
nova.virt.driver.ComputeDriver class.
The code interacts with PowerVM through the pypowervm library. This python
binding is a wrapper to the PowerVM REST API. All hypervisor operations
interact with the PowerVM REST API via this binding. The driver is
maintained to support future revisions of the PowerVM REST API as needed.
For ephemeral disk support, either a Virtual I/O Server hosted local disk or a
Shared Storage Pool (a PowerVM clustered file system) is supported. For
volume attachments, the driver supports Cinder-based attachments via
protocols supported by the hypervisor (e.g. Fibre Channel).
For networking, the networking-powervm project provides Neutron ML2 Agents.
The agents provide the necessary configuration on the Virtual I/O Server for
networking. The PowerVM Nova driver code creates the VIF for the client VM,
but the Neutron agent creates the VIF for VLANs.
Automated functional testing is provided through a third party continuous
integration system. It monitors for incoming Nova change sets, runs a set
of functional tests (lifecycle operations) against the incoming change, and
provides a non-gating vote (+1 or -1).
Developers should not be impacted by these changes unless they wish to try the
driver.
Community Impact
----------------
The intent of this project is to bring another driver to OpenStack that
aligns with the ideals and vision of the community. The intention is to
promote this to core Nova.
Alternatives
------------
No alternatives appear viable to bring PowerVM support into the OpenStack
community.
Implementation
==============
Assignee(s)
-----------
Primary assignees:
adreznec
efried
kyleh
thorst
Other contributors:
multiple
Dependencies
============
* Utilizes the PowerVM REST API specification for management. Will
utilize future versions of this specification as it becomes available:
http://ibm.co/1lThV9R
* Builds on top of the `pypowervm library`_. This is a prerequisite to
utilizing the driver.
.. _pypowervm library: https://github.com/powervm/pypowervm
Upgrade Considerations
======================
Prior to Ocata, only the out-of-tree nova_powervm driver existed. The in-tree
driver is introduced in Ocata.
Namespaces
----------
In Liberty and Mitaka, the namespace of the out-of-tree driver is
``nova_powervm.virt.powervm``. In Newton, it was moved to
``nova.virt.powervm``. In Ocata, the new in-tree driver occupies the
``nova.virt.powervm`` namespace, and the out-of-tree driver is moved to
``nova.virt.powervm_ext``. Ocata consumers have the option of using the
in-tree driver, which will provide limited functionality until it is fully
integrated; or the out-of-tree driver, which provides full functionality.
Refer to the documentation for the ``nova.conf`` settings required to load
the desired driver.
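For example, a deployer selects a driver by setting one of the following in
``nova.conf`` (a sketch; the out-of-tree entry point comes from this project,
while the in-tree entry point is assumed and should be verified against the
Nova documentation)::
    # Full-function out-of-tree driver
    compute_driver = powervm_ext.driver.PowerVMDriver
    # Limited in-tree driver (assumed entry point)
    compute_driver = powervm.PowerVMDriver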
Live Migrate Data Object
------------------------
In order to use live migration prior to Ocata, it was necessary to run the
customized nova_powervm conductor to bring in the ``PowerVMLiveMigrateData``
object. In Ocata, this object is included in core nova, so no custom conductor
is necessary.
Testing
=======
Tempest Tests
-------------
Since the tempest tests should be implementation agnostic, the existing
tempest tests should be able to run against the PowerVM driver without issue.
Tempest tests that require functionality that the platform does not yet
support (e.g. iSCSI or Floating IPs) will not pass. These should be omitted from
the Tempest test suite.
A `sample Tempest test configuration`_ for the PowerVM driver has been provided.
Thorough unit tests exist within the project to validate specific functions
within this implementation.
.. _`sample Tempest test configuration`: https://github.com/powervm/powervm-ci/tree/master/tempest
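A sketch of one way to omit such tests, using Tempest's exclusion support
(the file name and regular expressions here are illustrative, not an official
PowerVM skip list)::
    # powervm-skip-list.txt
    .*floating_ip.*
    .*iscsi.*
    $ tempest run --blacklist-file powervm-skip-list.txt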
Functional Tests
----------------
A third party functional test environment has been created. It monitors
for incoming nova change sets. Once it detects a new change set, it will
execute the existing lifecycle API tests. A non-gating vote (+1 or -1) will
be provided with information provided (logs) based on the result.
API Tests
---------
Existing APIs should be valid. All testing is planned within the functional
testing system and via unit tests.
Documentation Impact
====================
User Documentation
------------------
See the dev-ref for documentation on how to configure, use, and contribute to
this driver implementation.
Developer Documentation
-----------------------
The existing Nova developer documentation should typically suffice. However,
until merge into Nova, we will maintain a subset of dev-ref documentation.
References
==========
* PowerVM REST API Specification (may require newer versions as they
become available): http://ibm.co/1lThV9R
* PowerVM Virtualization Introduction and Configuration:
http://www.redbooks.ibm.com/abstracts/sg247940.html
* PowerVM Best Practices: http://www.redbooks.ibm.com/abstracts/sg248062.html
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on OFTC.

View File

@ -1,2 +0,0 @@
[python: **.py]

View File

@ -1,126 +0,0 @@
========================
Installing with DevStack
========================
What is DevStack?
--------------------------
DevStack is a script to quickly create an OpenStack development environment.
Find out more `here <https://docs.openstack.org/devstack/latest/>`_.
What are DevStack plugins?
--------------------------
DevStack plugins act as project-specific extensions of DevStack. They allow external projects to
execute code directly in the DevStack run, supporting configuration and installation changes as
part of the normal local.conf and stack.sh execution. For NovaLink, we have DevStack plugins for
each of our three projects - nova-powervm, networking-powervm, and ceilometer-powervm. These
plugins, with the appropriate local.conf settings for your environment, will allow you to simply
clone DevStack, configure it, run stack.sh, and end up with a working OpenStack/NovaLink PowerVM
environment with no other scripting required.
More details can be `found here. <https://docs.openstack.org/devstack/latest/plugins.html>`_
How to use the NovaLink DevStack plugins:
-----------------------------------------
1. Download DevStack::
$ git clone https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack
2. Set up your local.conf file to pull in our projects:
1. If you have an existing DevStack local.conf, modify it to pull in this project by adding::
[[local|localrc]]
enable_plugin nova-powervm http://git.openstack.org/openstack/nova-powervm
and following the instructions for networking-powervm and ceilometer-powervm
as needed for your environment.
2. If you're setting up DevStack for the first time, example files are available
in the nova-powervm project to provide reference on using this driver with the
corresponding networking-powervm and ceilometer-powervm drivers. Following these
example files will enable the appropriate drivers and services for each node type.
Example config files for all-in-one, compute, and control nodes
`can be found here. <https://github.com/openstack/nova-powervm/tree/master/devstack>`_
The nova-powervm project provides different sample local.conf files as a
starting point for devstack.
* local.conf.aio-sea-localdisk
* Runs on the NovaLink VM of the PowerVM system
* Provides a full 'all in one' devstack VM
* Uses Shared Ethernet Adapter networking (networking-powervm)
* Uses localdisk disk driver
* local.conf.aio-ovs-ssp
* Runs on the NovaLink VM of the PowerVM system
* Provides a full 'all in one' devstack VM
* Uses Open vSwitch networking (neutron)
* Uses Shared Storage Pool disk driver
* local.conf.control
* Can run on any devstack capable machine (POWER or x86)
* Provides the controller node for devstack. Typically paired with the local.conf.compute
* local.conf.compute
* Runs on the NovaLink VM of the PowerVM system
* Provides the compute node for a devstack. Typically paired with the local.conf.control
3. See our devrefs and plugin references for the configuration options for each driver,
then configure the installation in local.conf as needed for your environment.
* nova-powervm
* http://nova-powervm.readthedocs.org/en/latest/devref/index.html
* https://github.com/openstack/nova-powervm/blob/master/devstack/README.rst
* networking-powervm
* http://networking-powervm.readthedocs.io/en/latest/devref/index.html
* https://github.com/openstack/networking-powervm/blob/master/devstack/README.rst
* ceilometer-powervm
* http://ceilometer-powervm.readthedocs.org/en/latest/devref/index.html
* https://github.com/openstack/ceilometer-powervm/blob/master/devstack/README.rst
4. For nova-powervm, changing the DISK_DRIVER settings for your environment will be required.
The default configuration for other settings will be sufficient for most installs. ::
[[local|localrc]]
...
DISK_DRIVER =
VOL_GRP_NAME =
CLUSTER_NAME =
[[post-config|$NOVA_CONF]]
[powervm]
...
5. A few notes:
* By default this will pull in the latest/trunk versions of all the projects. If you want to
run a stable version instead, you can either check out that stable branch in the DevStack
repo (git checkout stable/liberty), which is the preferred method, or pin it on a
project-by-project basis in the local.conf file as needed (see the sketch after this list).
* If you need any special services enabled for your environment, you can also specify those
in your local.conf file. In our example files we demonstrate enabling and disabling services
(n-cpu, q-agt, etc) required for our drivers.
6. Run ``stack.sh`` from DevStack::
$ cd /opt/stack/devstack
$ FORCE=yes ./stack.sh
``FORCE=yes`` is needed on Ubuntu 15.10 since only Ubuntu LTS releases are officially supported
by DevStack. If you're running a control only node on a different, supported OS version you can
skip using ``FORCE=yes``.
7. At this point DevStack will run through stack.sh, and barring any DevStack issues, you should
end up with a standard link to your Horizon portal at the end of the stack run. Congratulations!
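Regarding the stable-branch note in step 5, a sketch of pinning a single
plugin in ``local.conf`` (the branch name is an example)::
    enable_plugin nova-powervm https://git.openstack.org/openstack/nova-powervm stable/ocata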

View File

@ -1,69 +0,0 @@
# This is an example devstack local.conf for an all-in-one stack using
# Open vSwitch networking.
[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=~/screen_log/
LOGDAYS=1
LOG_COLOR=True
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_PASSWORD=admin
SERVICE_TOKEN=service
MULTI_HOST=0
HOST_NAME=$(hostname)
# Networking configuration. Update these values based on your network.
PUBLIC_INTERFACE=
FLOATING_RANGE=
FIXED_RANGE=
NETWORK_GATEWAY=
PUBLIC_NETWORK_GATEWAY=
Q_FLOATING_ALLOCATION_POOL=
HOST_IP=
# ML2 Configuration
Q_ML2_TENANT_NETWORK_TYPE=vlan,vxlan,flat
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,vxlan,flat
# Forces nova to use config drive
FORCE_CONFIG_DRIVE=True
# TODO: The default version for etcd3 is 3.1.7. Power is not supported for this version.
# Using the 3.2.0 RC until 3.2.0 is released, at which point this can be removed.
ETCD_VERSION=v3.2.0-rc.1
ETCD_SHA256="c2d846326586afe169e6ca81266815196d6c14bc023f9c7d0c9d622f3c14505c"
# Use the common SSP pool on the system.
DISK_DRIVER=ssp
# Enable plugins
enable_plugin nova-powervm https://git.openstack.org/openstack/nova-powervm.git
enable_plugin neutron https://git.openstack.org/openstack/neutron
# Enable services
enable_service n-novnc neutron neutron-api neutron-agent neutron-l3 neutron-dhcp neutron-metadata-agent
disable_service cinder n-net ceilometer-aipmi q-agt q-svc q-l3 q-dhcp q-meta
[[post-config|$NOVA_CONF]]
[DEFAULT]
debug=False
default_log_levels=pypowervm=DEBUG,nova_powervm=DEBUG,nova=DEBUG,iamqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN
use_rootwrap_daemon = True
[powervm]
use_rmc_ipv6_scheme=False
[[post-config|$NEUTRON_CONF]]
[DEFAULT]
debug=False
verbose=False
default_log_levels=pypowervm=DEBUG,neutron=DEBUG,iamqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN
[[post-config|$KEYSTONE_CONF]]
[DEFAULT]
debug=False

View File

@ -1,63 +0,0 @@
[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=~/screen_log/
LOGDAYS=1
LOG_COLOR=True
DATA_DIR=/var/stack
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_PASSWORD=admin
SERVICE_TOKEN=service
MULTI_HOST=0
HOST_NAME=$(hostname)
# Networking Configuration
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vlan
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
Q_USE_PROVIDERNET_FOR_PUBLIC=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=default
TENANT_VLAN_RANGE=1000:2000
Q_AGENT=pvm_sea
NEUTRON_AGENT=pvm_sea
Q_ML2_PLUGIN_MECHANISM_DRIVERS=pvm_sea
ML2_L3_PLUGIN=
Q_USE_PROVIDER_NETWORKING=False
NEUTRON_CREATE_INITIAL_NETWORKS=False
NEUTRON_CORE_PLUGIN=ml2
Q_PLUGIN_CONF_FILE=etc/neutron/plugins/ml2/ml2_conf.ini
# Forces nova to use config drive
FORCE_CONFIG_DRIVE=True
# localdisk or ssp. localdisk requires VOL_GRP_NAME. Set to the
# volume group that will host the volumes. Must not be rootvg.
DISK_DRIVER=localdisk
VOL_GRP_NAME=devstackvg
# TODO: The default version for etcd3 is 3.1.7. Power is not supported for this version.
# Using a 3.2.0 RC until 3.2.0 is released, at which point this can be removed.
ETCD_VERSION=v3.2.0-rc.1
ETCD_SHA256="c2d846326586afe169e6ca81266815196d6c14bc023f9c7d0c9d622f3c14505c"
# Enable plugins
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git
enable_plugin ceilometer-powervm https://git.openstack.org/openstack/ceilometer-powervm.git
enable_plugin nova-powervm https://git.openstack.org/openstack/nova-powervm.git
enable_plugin networking-powervm https://git.openstack.org/openstack/networking-powervm.git
enable_plugin neutron https://git.openstack.org/openstack/neutron
# Enable services
enable_service n-novnc neutron neutron-api pvm-q-sea-agt
disable_service cinder n-net neutron-metering neutron-l3 neutron-dhcp neutron-agent
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2_type_vlan]
network_vlan_ranges=default:1:4094
[ml2]
tenant_network_types=vlan
extension_drivers=port_security

View File

@ -1,57 +0,0 @@
[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=~/screen_log/
LOGDAYS=1
ADMIN_PASSWORD=labstack
MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_PASSWORD=admin
SERVICE_TOKEN=service
MULTI_HOST=1
HOST_IP=192.168.42.12 #Change this for each compute node
HOST_NAME=$(hostname)
SERVICE_HOST=192.168.42.11 #Change this to your controller IP
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
FLAT_INTERFACE=eth0
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vlan
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=default
TENANT_VLAN_RANGE=1000:1999
# TODO: Set disk driver details for your environment
# DISK_DRIVER: localdisk or ssp. localdisk requires VOL_GRP_NAME. Set to the
# volume group that will host the volumes. Must not be rootvg.
DISK_DRIVER=localdisk
VOL_GRP_NAME=devstackvg
NOVA_VNC_ENABLED=True
NOVNCPROXY_BASE_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
# Set enabled services (pvm-q-agt and pvm-ceilometer-acompute started by their plugins)
ENABLED_SERVICES=n-cpu,neutron,n-api-meta
# Enable plugins
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git
enable_plugin nova-powervm https://git.openstack.org/openstack/nova-powervm.git
enable_plugin networking-powervm https://git.openstack.org/openstack/networking-powervm.git
enable_plugin ceilometer-powervm https://git.openstack.org/openstack/ceilometer-powervm.git
# Disable services
disable_service ceilometer-acentral ceilometer-collector ceilometer-api
[[post-config|$NOVA_CONF]]
[DEFAULT]
debug=False
default_log_levels=nova_powervm=DEBUG,nova=DEBUG,pypowervm=INFO,iamqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN

View File

@ -1,42 +0,0 @@
[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=~/screen_log/
LOGDAYS=1
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_PASSWORD=admin
SERVICE_TOKEN=service
MULTI_HOST=1
HOST_NAME=$(hostname)
FLOATING_RANGE=192.168.2.0/24
FIXED_RANGE=10.11.12.0/24
NETWORK_GATEWAY=10.11.12.1
PUBLIC_NETWORK_GATEWAY=192.168.2.1
Q_FLOATING_ALLOCATION_POOL=start=192.168.2.225,end=192.168.2.250
FLAT_INTERFACE=eth0
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vlan
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=default
TENANT_VLAN_RANGE=1000:1999
# Enable services
enable_service n-novnc neutron q-svc q-l3 q-dhcp q-meta
disable_service n-net n-cpu q-agt c-vol
# Enable plugins
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git
enable_plugin networking-powervm https://git.openstack.org/openstack/networking-powervm.git
# Disable ceilometer-acompute, as it's not needed on a control-only node
disable_service ceilometer-acompute
[[post-config|$NOVA_CONF]]
[DEFAULT]
debug=False
default_log_levels=nova=DEBUG,iamqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN

View File

@ -1,3 +0,0 @@
# Plug-in overrides
VIRT_DRIVER=powervm

View File

@ -1,144 +0,0 @@
#!/bin/bash
#
# plugin.sh - Devstack extras script to install and configure the nova compute
# driver for powervm
# This driver is enabled in override-defaults with:
# VIRT_DRIVER=${VIRT_DRIVER:-powervm}
# The following entry points are called in this order for nova-powervm:
#
# - install_nova_powervm
# - configure_nova_powervm
# - start_nova_powervm
# - stop_nova_powervm
# - cleanup_nova_powervm
# Save trace setting
MY_XTRACE=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
# Set up base directories
NOVA_DIR=${NOVA_DIR:-$DEST/nova}
NOVA_CONF_DIR=${NOVA_CONF_DIR:-/etc/nova}
NOVA_CONF=${NOVA_CONF:-$NOVA_CONF_DIR/nova.conf}
# nova-powervm directories
NOVA_POWERVM_DIR=${NOVA_POWERVM_DIR:-${DEST}/nova-powervm}
NOVA_POWERVM_PLUGIN_DIR=$(readlink -f $(dirname ${BASH_SOURCE[0]}))
# Support entry points installation of console scripts
if [[ -d $NOVA_DIR/bin ]]; then
NOVA_BIN_DIR=$NOVA_DIR/bin
else
NOVA_BIN_DIR=$(get_python_exec_prefix)
fi
# Source functions
source $NOVA_POWERVM_PLUGIN_DIR/powervm-functions.sh
# Entry Points
# ------------
# configure_nova_powervm() - Configure the system to use nova_powervm
function configure_nova_powervm {
# Default configuration
iniset $NOVA_CONF DEFAULT compute_driver $PVM_DRIVER
iniset $NOVA_CONF DEFAULT instance_name_template $INSTANCE_NAME_TEMPLATE
iniset $NOVA_CONF DEFAULT compute_available_monitors $COMPUTE_MONITORS
iniset $NOVA_CONF DEFAULT compute_monitors ComputeDriverCPUMonitor
iniset $NOVA_CONF DEFAULT force_config_drive $FORCE_CONFIG_DRIVE
iniset $NOVA_CONF DEFAULT injected_network_template $INJECTED_NETWORK_TEMPLATE
iniset $NOVA_CONF DEFAULT flat_injected $FLAT_INJECTED
iniset $NOVA_CONF DEFAULT use_ipv6 $USE_IPV6
iniset $NOVA_CONF DEFAULT firewall_driver $FIREWALL_DRIVER
# PowerVM specific configuration
iniset $NOVA_CONF powervm disk_driver $DISK_DRIVER
if [[ -n $VOL_GRP_NAME ]]; then
iniset $NOVA_CONF powervm volume_group_name $VOL_GRP_NAME
fi
if [[ -n $CLUSTER_NAME ]]; then
iniset $NOVA_CONF powervm cluster_name $CLUSTER_NAME
fi
}
# install_nova_powervm() - Install nova_powervm and necessary dependencies
function install_nova_powervm {
# Install the nova-powervm package
setup_develop $NOVA_POWERVM_DIR
}
# start_nova_powervm() - Start the nova_powervm process
function start_nova_powervm {
# Check that NovaLink is installed and running
check_novalink_install
# The remainder of this function is intentionally left blank, as the
# compute service will start normally
}
# stop_nova_powervm() - Stop the nova_powervm process
function stop_nova_powervm {
# This function intentionally left blank as the
# compute service will stop normally
:
}
# cleanup_nova_powervm() - Cleanup the nova_powervm process
function cleanup_nova_powervm {
# This function intentionally left blank
:
}
# Core Dispatch
# -------------
if is_service_enabled nova-powervm; then
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
# Install NovaLink if set
if [[ "$INSTALL_NOVALINK" = "True" ]]; then
echo_summary "Installing NovaLink"
install_novalink
fi
fi
if [[ "$1" == "stack" && "$2" == "install" ]]; then
# Perform installation of nova-powervm
echo_summary "Installing nova-powervm"
install_nova_powervm
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
# Lay down configuration post install
echo_summary "Configuring nova-powervm"
configure_nova_powervm
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
# Initialize and start the nova-powervm/nova-compute service
echo_summary "Starting nova-powervm"
start_nova_powervm
fi
if [[ "$1" == "unstack" ]]; then
# Shut down nova-powervm/nova-compute
echo_summary "Stopping nova-powervm"
stop_nova_powervm
fi
if [[ "$1" == "clean" ]]; then
# Remove any lingering configuration data
# clean.sh first calls unstack.sh
echo_summary "Cleaning up nova-powervm and associated data"
cleanup_nova_powervm
fi
fi
# Restore xtrace
$MY_XTRACE
# Local variables:
# mode: shell-script
# End:

View File

@ -1,38 +0,0 @@
#!/bin/bash
# devstack/powervm-functions.sh
# Functions to control the installation and configuration of the PowerVM compute services
# TODO (adreznec) Uncomment when public NovaLink PPA available
# NOVALINK_PPA=${NOVALINK_PPA:-TBD}
function check_novalink_install {
echo_summary "Checking NovaLink installation"
if ! ( is_package_installed pvm-novalink ); then
echo "WARNING: You are using the NovaLink drivers, but NovaLink is not installed on this system."
fi
# The user that nova runs as should be a member of the **pvm_admin** group
if ! getent group $PVM_ADMIN_GROUP >/dev/null; then
sudo groupadd $PVM_ADMIN_GROUP
fi
add_user_to_group $STACK_USER $PVM_ADMIN_GROUP
}
function install_novalink {
echo_summary "Installing NovaLink"
if is_ubuntu; then
# Set up the NovaLink PPA
# TODO (adreznec) Uncomment when public NovaLink PPA available
# echo "deb ${NOVALINK_PPA} ${DISTRO} main" | sudo tee /etc/apt/sources.list.d/novalink-${DISTRO}.list
# echo "deb-src ${NOVALINK_PPA} ${DISTRO} main" | sudo tee --append /etc/apt/sources.list.d/novalink-${DISTRO}.list
NO_UPDATE_REPOS=FALSE
REPOS_UPDATED=FALSE
else
die $LINENO "NovaLink is currently supported only on Ubuntu platforms"
fi
install_package pvm-novalink
echo_summary "NovaLink install complete"
}

View File

@ -1,28 +0,0 @@
# Devstack settings
# These defaults can be overridden in the localrc section of the local.conf file
# Add nova-powervm to enabled services
enable_service nova-powervm
# NovaLink install/upgrade settings
INSTALL_NOVALINK=$(trueorfalse False INSTALL_NOVALINK)
PVM_ADMIN_GROUP=${PVM_ADMIN_GROUP:-pvm_admin}
# Nova settings
PVM_DRIVER=powervm_ext.driver.PowerVMDriver
INSTANCE_NAME_TEMPLATE=${INSTANCE_NAME_TEMPLATE:-"%(display_name).13s-%(uuid).8s-pvm"}
COMPUTE_MONITORS=${COMPUTE_MONITORS:-nova.compute.monitors.all_monitors}
FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-True}
INJECTED_NETWORK_TEMPLATE=${INJECTED_NETWORK_TEMPLATE:-$DEST/nova/nova/virt/interfaces.template}
FLAT_INJECTED=${FLAT_INJECTED:-true}
# This is required to be true to support the PowerVM RMC management network
USE_IPV6=${USE_IPV6:-True}
FIREWALL_DRIVER=${FIREWALL_DRIVER:-"nova.virt.firewall.NoopFirewallDriver"}
# PowerVM settings
# DISK_DRIVER : 'localdisk' or 'ssp' (the default)
DISK_DRIVER=${DISK_DRIVER:-ssp}
# VOL_GRP_NAME only required for localdisk driver
# VOL_GRP_NAME=${VOL_GRP_NAME:-devstackvg}
# CLUSTER_NAME used by SSP driver
# CLUSTER_NAME=${CLUSTER_NAME:-devstack_cluster}

View File

@ -1,4 +0,0 @@
sphinx!=1.6.6,!=1.6.7,>=1.6.2,<2.0.0;python_version=='2.7' # BSD
sphinx!=1.6.6,!=1.6.7,>=1.6.2,!=2.1.0;python_version>='3.4' # BSD
openstackdocstheme>=1.19.0 # Apache-2.0
sphinx-feature-classification>=0.2.0 # Apache-2.0

View File

@ -1,84 +0,0 @@
# nova-powervm documentation build configuration file
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../'))
# -- General configuration ------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'openstackdocstheme',
'sphinx_feature_classification.support_matrix'
]
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'nova-powervm'
copyright = u'2015, IBM'
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'openstackdocs'
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# -- Options for LaTeX output ---------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'IBM', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index',
'%s' % project,
u'%s Documentation' % project,
u'IBM', 1)
]
# -- Options for openstackdocstheme ---------------------------------------
repository_name = 'openstack/nova-powervm'
bug_project = 'nova-powervm'
bug_tag = ''

View File

@ -1,55 +0,0 @@
..
Copyright 2015 IBM
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Setting Up a Development Environment
====================================
This page describes how to set up a working Python development
environment that can be used in developing Nova-PowerVM.
These instructions assume you're already familiar with
Git and Gerrit, a code review toolset. If you aren't, please see
`this Git tutorial`_ for an introduction to using Git and `this guide`_
for a tutorial on using Gerrit and Git for code contribution to OpenStack
projects.
.. _this Git tutorial: http://git-scm.com/book/en/Getting-Started
.. _this guide: http://docs.openstack.org/infra/manual/developers.html#development-workflow
Getting the code
----------------
Grab the code::
git clone https://git.openstack.org/openstack/nova-powervm
cd nova-powervm
Setting up your environment
---------------------------
The purpose of this project is to provide the 'glue' between OpenStack
Compute (Nova) and PowerVM. The `pypowervm`_ project is used to control
PowerVM systems.
It is recommended that you clone the OpenStack Nova project along with
pypowervm into your development environment.
Running the tox test targets will automatically pull these in via the
project requirements.
Additional project requirements may be found in the requirements.txt file.
.. _pypowervm: https://github.com/powervm/pypowervm
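A minimal sketch of cloning the sibling projects alongside nova-powervm (the
Nova URL follows the same git.openstack.org pattern and is assumed here)::
    git clone https://git.openstack.org/openstack/nova
    git clone https://github.com/powervm/pypowervm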

View File

@ -1,48 +0,0 @@
..
Copyright 2015 IBM
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Developer Guide
===============
In the Developer Guide, you will find information on how to develop for
Nova-PowerVM and how it interacts with Nova compute. You will also find
information on the setup and usage of Nova-PowerVM.
Internals and Programming
-------------------------
.. toctree::
:maxdepth: 3
project_structure
development_environment
usage
Testing
-------
.. toctree::
:maxdepth: 3
testing
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -1,117 +0,0 @@
..
Copyright 2015 IBM
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Source Code Structure
=====================
Since nova-powervm strives to be integrated into the upstream Nova project,
the source code structure matches a standard driver.
::
nova_powervm/
virt/
powervm/
disk/
tasks/
volume/
...
tests/
virt/
powervm/
disk/
tasks/
volume/
...
nova_powervm/virt/powervm
~~~~~~~~~~~~~~~~~~~~~~~~~
The main directory for the overall driver. Provides the driver
implementation, image support, and some high-level classes to interact with
the PowerVM system (e.g. host, vios, vm).
The driver attempts to utilize `TaskFlow`_ for major actions such as spawn.
This allows the driver to create atomic elements (within the tasks) to
drive operations against the system (with revert capabilities).
.. _TaskFlow: https://wiki.openstack.org/wiki/TaskFlow
nova_powervm/virt/powervm/disk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The disk folder contains the various 'nova ephemeral' disk implementations.
These are basic images that do not involve Cinder.
Two disk implementations exist currently.
* localdisk - supports Virtual I/O Server Volume Groups. This configuration
uses any Volume Group on the system, allowing operators to make use of the
physical disks local to their system. Images will be cached on the same
volume group as the VMs. The cached images will be periodically cleaned up
by the Nova imagecache manager, at a rate determined by the ``nova.conf``
setting: image_cache_manager_interval. Also supports file-backed ephemeral
storage, which is specified by using the ``QCOW VG - default`` volume group.
Note: Resizing instances with file-backed ephemeral is not currently
supported.
* Shared Storage Pool - utilizes PowerVM's distributed storage. As such this
implementation allows operators to make use of live migration capabilities.
The standard interface between these two implementations is defined in the
driver.py. This ensures that the nova-powervm compute driver does not need
to know the specifics about which disk implementation it is using.
nova_powervm/virt/powervm/tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The task folder contains `TaskFlow`_ classes. These implementations simply
wrap around other methods, providing logical units that the compute
driver can use when building a string of actions.
For instance, spawning an instance may require several atomic tasks:
- Create VM
- Plug Networking
- Create Disk from Glance
- Attach Disk to VM
- Power On
The tasks in this directory encapsulate this. If anything fails, they have
corresponding reverts. The logic to perform these operations is contained
elsewhere; these are simple wrappers that enable embedding into Taskflow.
.. _TaskFlow: https://wiki.openstack.org/wiki/TaskFlow
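As a hedged illustration of the pattern (the task and flow names below are
invented for this example and are not the project's actual classes), TaskFlow
tasks pair an ``execute`` with a ``revert``, and the driver chains them into
a flow::
    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class CreateVM(task.Task):
        def execute(self):
            print('creating VM')  # would call into pypowervm here

        def revert(self, *args, **kwargs):
            print('removing partially created VM')

    class PlugNetworking(task.Task):
        def execute(self):
            print('plugging VIFs')

        def revert(self, *args, **kwargs):
            print('unplugging VIFs')

    # Chain the atomic tasks; if a later task fails, the completed
    # tasks' reverts run in reverse order.
    spawn_flow = linear_flow.Flow('spawn').add(CreateVM(), PlugNetworking())
    engines.run(spawn_flow)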
nova_powervm/virt/powervm/volume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The volume folder contains the Cinder volume connectors. A volume connector
is the code that connects a Cinder volume (which is visible to the host) to
the Virtual Machine.
The PowerVM Compute Driver has an interface for the volume connectors defined
in this folder's `driver.py`.
The PowerVM Compute Driver provides two implementations for Fibre Channel
attached disks.
* Virtual SCSI (vSCSI): The disk is presented to a Virtual I/O Server and
the data is passed through to the VM through a virtualized SCSI
connection.
* N-Port ID Virtualization (NPIV): The disk is presented directly to the
VM. The VM will have virtual Fibre Channel connections to the disk, and
the Virtual I/O Server will not have the disk visible to it.

View File

@ -1,64 +0,0 @@
..
Copyright 2015 IBM
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Running Nova-PowerVM Tests
==========================
This page describes how to run the Nova-PowerVM tests. This page assumes you
have already set up a working Python environment for Nova-PowerVM development.
With `tox`
~~~~~~~~~~
Nova-PowerVM, like other OpenStack projects, uses `tox`_ for managing the
virtual environments in which test cases run, and `Testr`_ for managing the
execution of those test cases.
Tox handles the creation of a series of `virtualenvs`_ that target specific
versions of Python.
Testr handles the parallel execution of series of test cases as well as
the tracking of long-running tests and other things.
For more information on the standard tox-based test infrastructure used by
OpenStack and how to do some common test/debugging procedures with Testr,
see this wiki page:
https://wiki.openstack.org/wiki/Testr
.. _Testr: https://wiki.openstack.org/wiki/Testr
.. _tox: http://tox.readthedocs.org/en/latest/
.. _virtualenvs: https://pypi.org/project/virtualenv/
PEP8 and Unit Tests
+++++++++++++++++++
Running pep8 and unit tests is as easy as executing this in the root
directory of the Nova-PowerVM source code::
tox
To run only pep8::
tox -e pep8
To restrict the pep8 check to only the files altered by the latest patch::
tox -e pep8 HEAD~1
To run only the unit tests::
tox -e py27,py34

View File

@ -1,212 +0,0 @@
..
Copyright 2015, 2016 IBM
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Usage
=====
To make use of the PowerVM drivers, a PowerVM system set up with `NovaLink`_ is
required. The nova-powervm driver should be installed on the management VM.
.. _NovaLink: http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS215-262&appname=USN
**Note:** Installing the NovaLink software creates the ``pvm_admin`` group. In
order to function properly, the user executing the Nova compute service must
be a member of this group. Use the ``usermod`` command to add the user. For
example, to add the user ``stacker`` to the ``pvm_admin`` group, execute::
sudo usermod -a -G pvm_admin stacker
The user must re-login for the change to take effect.
The NovaLink architecture is such that the compute driver runs directly on the
PowerVM system. No external management element (e.g. Hardware Management
Console or PowerVC) is needed. Management of the virtualization is driven
through a thin virtual machine running on the PowerVM system.
Configuration of the PowerVM system and NovaLink is required ahead of time. If
the operator is using volumes or Shared Storage Pools, they are required to be
configured ahead of time.
Configuration File Options
--------------------------
After nova-powervm has been installed the user must enable PowerVM as the
compute driver. To do so, set the ``compute_driver`` value in the ``nova.conf``
file to ``compute_driver = powervm_ext.driver.PowerVMDriver``.
The standard nova configuration options are supported. In particular, to use
PowerVM SR-IOV vNIC for networking, the ``pci_passthrough_whitelist`` option
must be set. See the `networking-powervm usage devref`_ for details.
.. _`networking-powervm usage devref`: http://networking-powervm.readthedocs.io/en/latest/devref/usage.html
Additionally, a ``[powervm]`` section is used to provide additional
customization to the driver.
By default, no additional inputs are needed. The base configuration allows
the Nova driver to place ephemeral disks on a local volume group (only
one can be on the system in the default config). Connecting Fibre Channel
hosted disks via Cinder will use the Virtual SCSI connections through the
Virtual I/O Servers.
Operators may change the disk driver (nova-based disks - NOT Cinder) via the
``disk_driver`` property.
All of these values are under the ``[powervm]`` section. The tables are broken
out into logical sections.
To generate a sample config file for ``[powervm]`` run::
oslo-config-generator --namespace nova_powervm > nova_powervm_sample.conf
The ``[powervm]`` section of the sample can then be edited and pasted into the
full nova.conf file.
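For illustration, a minimal ``[powervm]`` section built from the options in
the tables below (the values are examples only)::
    [powervm]
    disk_driver = ssp
    cluster_name = devstack_cluster
    proc_units_factor = 0.1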
VM Processor Options
~~~~~~~~~~~~~~~~~~~~
+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description |
+======================================+============================================================+
| proc_units_factor = 0.1 | (FloatOpt) Factor used to calculate the processor units |
| | per vcpu. Valid values are: 0.05 - 1.0 |
+--------------------------------------+------------------------------------------------------------+
| uncapped_proc_weight = 64 | (IntOpt) The processor weight to assign to newly created |
| | VMs. Value should be between 1 and 255. Represents the |
| | relative share of the uncapped processor cycles the |
| | Virtual Machine will receive when unused processor cycles |
| | are available. |
+--------------------------------------+------------------------------------------------------------+
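As a worked example: with the default ``proc_units_factor`` of 0.1, a VM with
4 vcpus is assigned 4 x 0.1 = 0.4 processor units; raising the factor to 0.5
would assign 2.0 processor units instead.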
Disk Options
~~~~~~~~~~~~
+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description |
+======================================+============================================================+
| disk_driver = localdisk | (StrOpt) The disk driver to use for PowerVM disks. Valid |
| | options are: localdisk, ssp |
| | |
| | If localdisk is specified and only one non-rootvg Volume |
| | Group exists on one of the Virtual I/O Servers, then no |
| | further config is needed. If multiple volume groups exist,|
| | then further specification can be done via the |
| | volume_group_name option. |
| | |
| | Live migration is not supported with a localdisk config. |
| | |
| | If ssp is specified, then a Shared Storage Pool will be |
| | used. If only one SSP exists on the system, no further |
| | configuration is needed. If multiple SSPs exist, then the |
| | cluster_name property must be specified. Live migration |
| | can be done within a SSP cluster. |
+--------------------------------------+------------------------------------------------------------+
| cluster_name = None | (StrOpt) Cluster hosting the Shared Storage Pool to use |
| | for storage operations. If none specified, the host is |
| | queried; if a single Cluster is found, it is used. Not |
| | used unless disk_driver option is set to ssp. |
+--------------------------------------+------------------------------------------------------------+
| volume_group_name = None | (StrOpt) Volume Group to use for block device operations. |
| | Must not be rootvg. If disk_driver is localdisk, and more |
| | than one non-rootvg volume group exists across the |
| | Virtual I/O Servers, then this attribute must be specified.|
+--------------------------------------+------------------------------------------------------------+
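As an illustrative example, an operator with multiple volume groups who wants
ephemeral disks placed on a specific one might set the following (``data_vg``
is a placeholder name)::

    [powervm]
    disk_driver = localdisk
    volume_group_name = data_vg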
Volume Options
~~~~~~~~~~~~~~
+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description |
+======================================+============================================================+
| fc_attach_strategy = vscsi | (StrOpt) The Fibre Channel Volume Strategy defines how FC |
| | Cinder volumes should be attached to the Virtual Machine. |
| | The options are: npiv or vscsi. |
| | |
| | It should be noted that if NPIV is chosen, the WWPNs will |
| | not be active on the backing fabric during the deploy. |
| | Some Cinder drivers will operate without issue. Others |
| | may query the fabric and thus will fail attachment. It is |
| | advised that if an issue occurs using NPIV, the operator |
| | fall back to vscsi based deploys. |
+--------------------------------------+------------------------------------------------------------+
| vscsi_vios_connections_required = 1 | (IntOpt) Indicates a minimum number of Virtual I/O Servers |
| | that are required to support a Cinder volume attach with |
| | the vSCSI volume connector. |
+--------------------------------------+------------------------------------------------------------+
| ports_per_fabric = 1 | (IntOpt) (NPIV only) The number of physical ports that |
| | should be connected directly to the Virtual Machine, per |
| | fabric. |
| | |
| | Example: 2 fabrics and ports_per_fabric set to 2 will |
| | result in 4 NPIV ports being created, two per fabric. If |
| | multiple Virtual I/O Servers are available, will attempt |
| | to span ports across I/O Servers. |
+--------------------------------------+------------------------------------------------------------+
| fabrics = A | (StrOpt) (NPIV only) Unique identifier for each physical |
| | FC fabric that is available. This is a comma separated |
| | list. If there are two fabrics for multi-pathing, then |
| | this could be set to A,B. |
| | |
| | The fabric identifiers are used for the |
| | 'fabric_<identifier>_port_wwpns' key. |
+--------------------------------------+------------------------------------------------------------+
| fabric_<name>_port_wwpns | (StrOpt) (NPIV only) A comma delimited list of all the |
| | physical FC port WWPNs that support the specified fabric. |
| | Is tied to the NPIV 'fabrics' key. |
+--------------------------------------+------------------------------------------------------------+
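Tying the NPIV options together, a hypothetical dual-fabric configuration
(the WWPN values are placeholders) could look like::

    [powervm]
    fc_attach_strategy = npiv
    ports_per_fabric = 2
    fabrics = A,B
    fabric_A_port_wwpns = 10000090FA1B2C3D,10000090FA1B2C3E
    fabric_B_port_wwpns = 10000090FA1B2C4D,10000090FA1B2C4E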
Config Drive Options
~~~~~~~~~~~~~~~~~~~~
+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description |
+======================================+============================================================+
| vopt_media_volume_group = root_vg | (StrOpt) The volume group on the system that should be |
| | used to store the config drive metadata that will be |
| | attached to the VMs. |
+--------------------------------------+------------------------------------------------------------+
| vopt_media_rep_size = 1 | (IntOpt) The size of the media repository (in GB) for the |
| | metadata for config drive. Only used if the media |
| | repository needs to be created. |
+--------------------------------------+------------------------------------------------------------+
| image_meta_local_path = /tmp/cfgdrv/ | (StrOpt) The location where the config drive ISO files |
| | should be built. |
+--------------------------------------+------------------------------------------------------------+
LPAR Detailed Settings
~~~~~~~~~~~~~~~~~~~~~~
Fine-grained control over LPAR settings can be achieved by setting
PowerVM-specific properties (``extra-specs``) on the flavors being used to
instantiate a VM. For the complete list of PowerVM properties, see the
`IBM PowerVC documentation`_.
.. _`IBM PowerVC documentation`: https://www.ibm.com/support/knowledgecenter/en/SSXK2N_1.4.2/com.ibm.powervc.standard.help.doc/powervc_pg_flavorsextraspecs_hmc.html
For example, to create a VM with one VCPU and 0.7 entitlement (0.7 of the physical
CPU resource), a user could use a flavor created as follows::
openstack flavor create --vcpus 1 --ram 6144 --property \
powervm:proc_units=0.7 pvm-6-1-0.7
In the example above, the ``powervm:proc_units`` property is used to specify
the CPU entitlement for the VM.
Remarks For IBM i Users
~~~~~~~~~~~~~~~~~~~~~~~
By default, all VMs are created as ``AIX/Linux`` type LPARs. In order to
create an IBM i VM (LPAR type ``OS400``), the user must add the ``os_distro``
property with value ``ibmi`` to the Glance image being used to create the
instance. For example, to add the property to the sample image ``i5OSR730``,
execute::
openstack image set --property os_distro=ibmi i5OSR730
@ -1,64 +0,0 @@
..
Copyright 2015 IBM
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Welcome to nova-powervm's documentation!
========================================
This project provides a Nova-compatible compute driver for `PowerVM`_ systems.
The project aims to integrate into OpenStack's Nova project. Initial
development is occurring in a separate project until it has matured and met the
Nova core team's requirements. As such, all development practices should
mirror those of the Nova project.
Documentation on Nova can be found at the `Nova Devref`_.
.. _`PowerVM`: http://www-03.ibm.com/systems/power/software/virtualization/
.. _`Nova Devref`: https://docs.openstack.org/nova/latest/
Overview
--------
.. toctree::
:maxdepth: 1
readme
support-matrix
Policies
--------
.. toctree::
:maxdepth: 1
policies/index
Devref
------
.. toctree::
:maxdepth: 1
devref/index
Specifications
--------------
.. toctree::
:maxdepth: 1
specs/template
specs/index
@ -1,26 +0,0 @@
Nova-PowerVM Bugs
=================
Nova-PowerVM maintains all of its bugs in `Launchpad <https://bugs.launchpad.net/nova-powervm>`_.
All of the currently open Nova-PowerVM bugs can be found at that link.
Bug Triage Process
------------------
The process of bug triaging consists of the following steps:
1. Check if a bug was filed for a correct component (project). If not, either change the project
or mark it as "Invalid".
2. Add appropriate tags. Even if the bug is not valid or is a duplicate of another one, it still
may help bug submitters and corresponding sub-teams.
3. Check if a similar bug was filed before. If so, mark it as a duplicate of the previous bug.
4. Check if the bug description is complete, e.g. it has enough information
for developers to reproduce the issue. If it does not, ask the submitter to
provide more info and mark the bug as "Incomplete".
5. Depending on ease of reproduction (or if the issue can be spotted in the code), mark it as
"Confirmed".
6. Assign the importance. Bugs that obviously break core and widely used functionality should get
assigned as "High" or "Critical" importance. The same applies to bugs that were filed for gate
failures.
7. (Optional). Add comments explaining the issue and possible strategy of fixing/working around
the bug.
@ -1,13 +0,0 @@
Code Reviews
============
Code reviews are a critical component of all OpenStack projects. Code reviews provide a
way to enforce a level of consistency across the project, and also allow for the careful
onboarding of contributions from new contributors.
Code Review Practices
---------------------
Nova-PowerVM follows the `code review guidelines <https://wiki.openstack.org/wiki/ReviewChecklist>`_ as
set forth for all OpenStack projects. It is expected that all reviewers are following the guidelines
set forth on that page.
@ -1 +0,0 @@
.. include:: ../../../CONTRIBUTING.rst
@ -1,39 +0,0 @@
..
Copyright 2015 IBM
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Nova-PowerVM Policies
=====================
In the Policies Guide, you will find documented policies for developing with
Nova-PowerVM. This includes the processes we use for blueprints and specs,
bugs, contributor onboarding, and other procedural items.
Policies
--------
.. toctree::
:maxdepth: 3
bugs
contributing
code-reviews
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@ -1 +0,0 @@
.. include:: ../../README.rst
@ -1,12 +0,0 @@
Nova-PowerVM Specifications
===========================
Contents:
.. toctree::
:maxdepth: 2
:glob:
:reversed:
*/index
@ -1,7 +0,0 @@
Newton Specifications
=====================
.. toctree::
:glob:
*
@ -1,183 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================
Linux Bridge and OVS VIF Support
================================
`Launchpad BluePrint`_
.. _`Launchpad BluePrint` : https://blueprints.launchpad.net/nova-powervm/+spec/powervm-addl-vif-types
Currently the PowerVM driver requires a PowerVM-specific Neutron agent. This
blueprint will add support for additional agent types - specifically the Open
vSwitch and Linux Bridge agents provided by Neutron.
Problem description
===================
PowerVM supports virtualizing an Ethernet port using the Virtual I/O Server
and Shared Ethernet. This is provided by the networking-powervm Shared
Ethernet Agent, which delivers key PowerVM use cases such as I/O redundancy.
A subset of operators have asked for VIF support in line with other
hypervisors: support for the Neutron Linux Bridge agent and the Open vSwitch
agent. While these agents do not provide use cases such as I/O redundancy,
they do enable operators to utilize common upstream networking solutions when
deploying PowerVM with OpenStack.
Use Cases
---------
An operator should be able to deploy an environment using Linux Bridge or
Open vSwitch Neutron agents. In order to do this, the physical I/O must be
assigned to the NovaLink partition on the PowerVM system (the partition with
virtualization admin authority).
A user should be able to do the standard VIF use cases with either of these
agents:
* Add NIC
* Remove NIC
* Security Groups
* Multiple network types (flat, VLAN, VXLAN)
* Bandwidth limiting
The existing Neutron agents should be used without any changes for PowerVM.
All of the changes that should occur will be in nova-powervm. Any limitations
of the agents themselves will also be limitations of the PowerVM
implementation.
There is one exception to the use case support: the Open vSwitch support will
enable live migration. There is no plan for Linux Bridge live migration
support.
Proposed change
===============
* Create a parent VIF driver for NovaLink based I/O. This will hold the code
that is common between the Linux Bridge VIFs and OVS VIFs. There will be
common code due to both needing to run on the NovaLink management VM.
* The VIF drivers should create a Trunk VEA on the NovaLink partition for
each VIF. It will be given a unique channel of communication to the VM.
The device will be named according to the Neutron device name.
* The OVS VIF driver will use the nova linux_net code to set the metadata on
the trunk adapter.
* Live migration will suspend the VIF on the target host until it has been
treated. Treating means ensuring that the communication to the VM is on
a unique channel (its own VLAN on a vSwitch).
* A private PowerVM virtual switch named 'NovaLinkVEABridge' will be created
to support the private communication between the trunk adapters and the
VMs.
* Live migration on the source will need to clean up the remaining trunk
adapter for Open vSwitch that is left around on the management VM.
It should be noted that Hybrid VIF plugging will not be supported. Instead,
PowerVM will use the conntrack integration in Ubuntu 16.04/OVS 2.5 to support
the OVSFirewallDriver. As of OVS 2.5, that allows the firewall function
without needing Hybrid VIF Plugging.
Alternatives
------------
None.
Security impact
---------------
None.
End user impact
---------------
None.
Performance Impact
------------------
Performance will not be impacted for the deployment of VMs. However, the
end user performance may change as it is a new networking technology. Both
the Linux Bridge and Open vSwitch support should operate with similar
performance characteristics as other platforms that support these technologies.
Deployer impact
---------------
The deployer will need to do the following:
* Attach an Ethernet I/O Card to the NovaLink partition. Configure the ports
in accordance with the Open vSwitch or Linux Bridge Neutron Agent's
requirements.
* Run the agent on their NovaLink management VM.
No major changes are anticipated outside of this. The Shared Ethernet
Adapter Neutron agent will not work in conjunction with this on the same
system.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
thorst
Other contributors:
kriskend
tjakobs
Work Items
----------
See Proposed Change
Dependencies
============
* NovaLink core changes will be needed with regard to the live migration flows.
This requires NovaLink 1.0.0.3 or later.
Testing
=======
Testing will be done on live systems. Future work will integrate this into the
PowerVM Third-Party CI; however, this will not be done initially, as the LB
and OVS agents are already heavily tested. The SEA agent continues to need to
be tested.
Documentation Impact
====================
Deployer documentation will be built around how to configure this.
References
==========
`Neutron Networking Guide`_
.. _`Neutron Networking Guide`: https://docs.openstack.org/newton/networking-guide/
@ -1,350 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=================================
Nova support for SR-IOV VIF Types
=================================
https://blueprints.launchpad.net/nova-powervm/+spec/powervm-sriov-nova
This blueprint addresses nova-powervm support for SR-IOV, with SR-IOV VFs
attached to VMs via PowerVM vNIC. SR-IOV support was added in the Juno release
of OpenStack; this blueprint fits this scenario's implementation into it.
A separate `blueprint for networking-powervm`_ has been made available for
design elements regarding networking-powervm.
These blueprints will be implemented during the Newton cycle of OpenStack
development. Per the Newton schedule, development should be completed during
newton-3.
Refer to glossary section for explanation of terms.
.. _`blueprint for networking-powervm`: https://review.openstack.org/#/c/322210/
Problem Description
===================
The OpenStack PowerVM drivers currently support the networking aspect of
PowerVM virtualization using Shared Ethernet Adapter, Open vSwitch and Linux
Bridge. There is a need to support SR-IOV ports with redundancy/failover and
migration. It is possible to associate an SR-IOV VF with a VM directly, but
that path will not be supported by this design; such a setup would not provide
migration support anyway. Support for this configuration will be added in the
future. That path also does not utilize the advantages of the hardware-level
virtualization offered by the SR-IOV architecture.
Users should be able to manage a VM with SR-IOV vNIC as a network interface.
This management should include migration of VM with SR-IOV vNIC attached to it.
PowerVM has a feature called vNIC which is tied in with SR-IOV. By using
vNIC the following use cases are supported:
- Fail over I/O to a different I/O Server and physical function
- Live Migration with SR-IOV, without significant intervention
The vNIC is exposed to the VM, and the MAC address of the client vNIC will
match that of the neutron port.
In summary, this blueprint will solve support of SR-IOV in nova-powervm for
these scenarios:
1. Ability to attach/detach a SR-IOV VF to a VM as a network interface using
vNIC intermediary during and after deployment, including migration.
2. Ability to provide redundancy/failover support across VFs from Physical Ports
within or across SR-IOV cards using vNIC intermediary.
3. Ability to associate a VLAN with vNIC backed by SR-IOV VF.
The ability to associate an SR-IOV VF directly with a VM will be added in the
future.
Refer to separate `blueprint for networking-powervm`_ for changes in
networking-powervm component. This blueprint will focus on changes to
nova-powervm only.
Use Cases
---------
1. Attach vNIC backed by SR-IOV VF(s) to a VM during boot time
2. Attach vNIC backed by SR-IOV VF(s) to a VM after it is deployed
3. Detach vNIC backed by SR-IOV VF(s) from a VM
4. When a VM with vNIC backed by SR-IOV is deleted, perform detach and cleanup
5. Live migrate a VM if using vNIC backed SR-IOV support
6. Provide redundancy/failover support of vNIC backed by SR-IOV VF attached to
a VM during both deploy and post deploy scenarios.
Proposed changes
================
The changes will be made in two areas:
1. **Compute virt driver.**
The PowerVM compute driver is nova_powervm.virt.powervm.driver.PowerVMDriver,
and it will be enhanced for SR-IOV vNIC support. A dictionary is maintained in
the virt driver vif code to map between vif types and vif driver classes.
Based on the vif type of the vif object that needs to be plugged, the
appropriate vif driver will be invoked. This dictionary will be modified to
include a new vif driver class and its vif type (pvm_sriov).
The PCI Claims process expects to be able to "claim" a VF from the
``pci_passthrough_devices`` list each time a vNIC is plugged, and return it to
the pool on unplug. Thus the ``get_available_resource`` API will be enhanced to
populate this device list with a suitable number of VFs.
2. **VIF driver.**
The PowerVM VIF driver is in nova_powervm.virt.powervm.vif.PvmVifDriver. A VIF
driver to attach network interfaces via vNIC (PvmVnicSriovVifDriver), with its
plug/unplug methods, will be implemented. The plug and unplug methods will use
pypowervm code to create VF/vNIC server/vNIC clients and attach/detach them. A
neutron port carries binding:vif_type and binding:vnic_type attributes. The
vif type for this implementation will be pvm_sriov. The vnic_type will be
'direct'.
A VIF driver (PvmVFSriovVifDriver) for VFs directly attached to VMs will be
implemented in the future.
Deployment of a VM with an SR-IOV vNIC will involve picking Physical Port(s),
VIOS(es) and a VM, and invoking the pypowervm library. Similarly, attachment
of the same to an existing VM will be implemented. RMC will be required.
Evacuate and migration of a VM will be supported with changes to the compute
virt driver and VIF driver via the pypowervm library.
Physical Port information will be derived from the port label attribute of
physical ports on SR-IOV adapters. The port label attribute of physical ports
will have to be updated with 'physical network' names during configuration of
the environment. During attachment of an SR-IOV backed vNIC to a VM, the
physical network attribute of the neutron network will be matched with the
port labels of physical ports to gather a list of physical ports.
**Failover/redundancy:** VIF plug during deploy (or attach of a network
interface to a VM) will pass more than one Physical Port and VIOS(es) (as
stated above in the deploy scenario) to the pypowervm library to create a vNIC
on the VIOS with redundancy. It should be noted that failover is handled
automatically by the platform when a vNIC is backed by multiple VFs. The
redundancy level will be controlled by an ``AGENT`` option
``vnic_required_vfs`` in the ML2 configuration file (see the
`blueprint for networking-powervm`_). It will have a default of 2.
**Quality of Service:** Each VF backing a vNIC can be configured with a capacity
value, dictating the minimum percentage of the physical port's total bandwidth
that will be available to that VF. The ML2 configuration file allows a
``vnic_vf_capacity`` option in the ``AGENT`` section to set the capacity for all
vNIC-backing VFs. If omitted, the platform defaults to the capacity granularity
for each physical port. See the `blueprint for networking-powervm`_ for
details of the configuration option; and see section 1.3.3 of the `IBM Power
Systems SR-IOV Technical Overview and Introduction
<https://www.redbooks.ibm.com/redpapers/pdfs/redp5065.pdf>`_ for details on VF
capacity.
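For illustration only, the corresponding ``AGENT`` section of the ML2
configuration file might resemble the following; the values are placeholders,
and the `blueprint for networking-powervm`_ remains the authoritative
reference for the option formats::

    [AGENT]
    vnic_required_vfs = 3
    vnic_vf_capacity = 0.04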
For the future implementation of attaching a VF directly to a VM, the request
will include the physical network name. PvmVFSriovVifDriver can look up the
devname(s) associated with it from the port label, get the physical port
information and create an SR-IOV logical port on the corresponding VM.
It may also include a configuration option to allow the user to dictate how
many ports to attach. Using the NIB technique, users can set up redundancy.
For VF - vNIC - VM attach of an SR-IOV port to a VM, the corresponding neutron
network object will include the physical network name. PvmVnicSriovVifDriver
can look up the devname(s) associated with it from the port label and get the
physical port information. Along with the adapter ID and physical port ID,
VIOS information will be added, and a vNIC dedicated port on the corresponding
VM will be created.
For the migration scenario, physical network names should match on the source
and destination compute nodes, and accordingly in the physical port labels. On
the destination, vNICs will be rebuilt based on the SR-IOV port configuration.
The platform decides how to reconstruct the vNIC on the destination in terms
of the number and distribution of backing VFs, etc.
Alternatives
------------
None
Security impact
---------------
None
Other end user impact
---------------------
None
Performance impact
------------------
Since the number of VMs deployed on the host will depend on the number of VFs
offered by the SR-IOV cards in the environment, scale tests will be limited in
VM density.
Deployer impact
---------------
1. SR-IOV cards must be configured in ``Sriov`` mode. This can be done via the
``pvmctl`` command, e.g.:
``pvmctl sriov update -i phys_loc=U78C7.001.RCH0004-P1-C1 -s mode=Sriov``
2. SR-IOV physical ports must be labeled with the name of the neutron physical
network to which they are cabled. This can be done via the ``pvmctl``
command, e.g.:
``pvmctl sriov update --port-loc U78C7.001.RCH0004-P1-C1-T1 -s label=prod_net``
3. The ``pci_passthrough_whitelist`` option in the nova configuration file must
include entries for each neutron physical network to be enabled for vNIC.
Only the ``physical_network`` key is required. For example:
``pci_passthrough_whitelist = [{"physical_network": "default"}, {"physical_network": "prod_net"}]``
Configuration is also required on the networking side - see the `blueprint for
networking-powervm`_ for details.
**To deploy a vNIC to a VM,** the neutron port(s) must be pre-created with vnic
type ``direct``, e.g.:
``neutron port-create --vnic-type direct``
Developer impact
----------------
None
Dependencies
------------
#. SR-IOV cards and SR-IOV-capable hardware
#. Updated levels of system firmware and the Virtual I/O Server operating system
#. An updated version of Novalink PowerVM feature
#. pypowervm library - https://github.com/powervm/pypowervm
Implementation
==============
Assignee(s)
-----------
- Eric Fried (efried)
- Sridhar Venkat (svenkat)
- Eric Larese (erlarese)
- Esha Seth (eshaseth)
- Drew Thorstensen (thorst)
Work Items
----------
nova-powervm changes:
- Updates to PowerVM compute driver to support attachment of SR-IOV VF via vNIC.
- VIF driver for SR-IOV VF connected to VM via vNIC.
- Migration of VM with SR-IOV VF connected to VM via vNIC. This involves live
migration, cold migration and evacuation.
- Failover/redundancy support for SR-IOV VF(s) connected to VM via vNIC(s).
VIF driver for SR-IOV VF connected to VM directly will be a future work item.
Testing
=======
1. Unit test
All developed code will be accompanied by structured unit tests. These
tests validate granular function logic.
2. Function test
Function testing will be performed using the CI infrastructure. Changes
implemented for this blueprint will be tested via the existing CI framework
used by the IBM team. The CI framework needs to be enhanced with SR-IOV
hardware. The tests can be executed in batch mode, probably as nightly jobs.
Documentation impact
====================
All use-cases need to be documented in developer docs that accompany
nova-powervm.
References
==========
1. This blog describes how to work with SR-IOV and vNIC (without redundancy/
failover) using HMC interface: http://chmod666.org/index.php/a-first-look-at-sriov-vnic-adapters/
2. These describe vNIC and its usage with SR-IOV.
- https://www.ibm.com/developerworks/community/wikis/home?lang=en_us#!/wiki/Power%20Systems/page/vNIC%20-%20Introducing%20a%20New%20PowerVM%20Virtual%20Networking%20Technology
- https://www.ibm.com/developerworks/community/wikis/home?lang=en_us#!/wiki/Power%20Systems/page/Introduction%20to%20SR-IOV%20FAQs
- https://www.ibm.com/developerworks/community/wikis/home?lang=en_us#!/wiki/Power%20Systems/page/Introduction%20to%20vNIC%20FAQs
- https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Power%20Systems/page/vNIC%20Frequently%20Asked%20Questions
3. These describe SR-IOV in OpenStack.
- https://wiki.openstack.org/wiki/Nova-neutron-sriov
- http://docs.openstack.org/mitaka/networking-guide/adv-config-sriov.html
4. This blueprint addresses SR-IOV attach/detach function in nova: https://review.openstack.org/#/c/139910/
5. networking-powervm blueprint for same work: https://review.openstack.org/#/c/322210/
6. This is a detailed description of SR-IOV implementation in PowerVM: https://www.redbooks.ibm.com/redpapers/pdfs/redp5065.pdf
7. This provides a overall view of SR-IOV support in nova: https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov
8. Attach/detach of SR-IOV ports to VM with respect to libvirt. Provided here
for comparison purposes: https://review.openstack.org/#/c/139910/
9. SR-IOV PCI passthrough reference: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
10. pypowervm: https://github.com/powervm/pypowervm
Glossary
========
:SR-IOV: Single Root I/O Virtualization, used for virtual environments where
VMs need direct access to a network interface without any hypervisor overhead.
:Physical Port: Represents a physical port on an SR-IOV adapter. This is not
the same as a Physical Function; a Physical Port can have many Physical
Functions associated with it. To clarify further, if a Physical Port supports
RoCE, then it will have two Physical Functions. In other words, there is one
Physical Function per protocol that the port supports.
:Virtual Function (VF): Represents a virtual port belonging to a Physical
Port. Either directly or indirectly (using vNIC), a Virtual Function (VF) is
connected to a VM. This is otherwise called an SR-IOV logical port.
:Dedicated SR-IOV: This is equivalent to any regular Ethernet card, and it
can be used with SEA. A logical port of a physical port can be assigned as a
backing device for SEA.
:Shared SR-IOV: Direct VF-to-VM attach is not supported in the Newton release,
but an SR-IOV card in Sriov mode is what will be used for vNIC as described in
this blueprint. Also, an SR-IOV card in Sriov mode can have a promiscuous VF
assigned to the VIOS and configured for SEA (said configuration to be done
outside of the auspices of OpenStack), which can then be used just like any
other SEA configuration, and is supported (as described in the next item
below).
:Shared Ethernet Adapter: An alternate technique to provide a network
interface to a VM. This involves attachment to a physical interface on the
PowerVM host and one or many virtual interfaces that are connected to VMs. A
VF of a PF in an SR-IOV based environment can be the physical interface of a
Shared Ethernet Adapter. Existing support for this configuration in
nova-powervm and networking-powervm will continue.
:vNIC: A vNIC is an intermediary between a VF and a VM. It resides on the VIOS
and connects to a VF on one end and a vNIC client adapter inside a VM on the
other. This is mainly to support migration of VMs across hosts.
:vNIC failover/redundancy: Multiple vNIC servers (connected to as many VFs,
which may belong to as many PFs, either on the same SR-IOV card or across
cards) connected to the same VM as one network interface. Failure of one
vNIC/VF/PF path will result in activation of another such path.
:VIOS: A partition in PowerVM systems dedicated to I/O operations. In the
context of this blueprint, the vNIC server will be created on the VIOS. For
redundancy management purposes, a specific PowerVM system may employ more than
one VIOS partition.
:VM migration types:
- **Live Migration:** migration of VM while both host and VM are alive.
- **Cold Migration:** migration of VM while host is alive and VM is down.
- **Evacuation:** migration of VM while the host is down (VM is down as well).
- **Rebuild:** recreation of a VM.
:pypowervm: A python library that runs on the PowerVM management VM and allows
virtualization control of the system. This is similar to the python library
for libvirt.
History
=======
============ ===========
Release Name Description
============ ===========
Newton Introduced
============ ===========
@ -1,179 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
========================================
Image Cache Support for localdisk driver
========================================
https://blueprints.launchpad.net/nova-powervm/+spec/image-cache-powervm
The image cache allows for a nova driver to pull an image from glance once,
then use a local copy of that image for future VM creation. This saves
bandwidth between the compute host and glance. It also improves VM
deployment speed and reduces the stress on the overall infrastructure.
Problem description
===================
Deploy times on PowerVM can be high when using the localdisk driver. This is
partially due to not having linked clones. The image cache offers a way to
reduce those deploy times by transferring the image to the host once, and then
subsequent deploys will reuse that image rather than streaming from glance.
There are complexities with this, of course. The cached images take up disk
space, but the image cache design from core Nova takes that into account. The
value of using the Nova image cache design is that it has hooks in the code to
help solve these problems.
Use Cases
---------
* As an end user, subsequent deploys of the same image should go faster
Proposed change
===============
Create a subclass of nova.virt.imagecache.ImageCacheManager in the
nova-powervm project. It should implement the necessary methods of the cache:
* _scan_base_images
* _age_and_verify_cached_images
* _get_base
* update
The nova-powervm driver will need to be updated to utilize the cache. This
includes:
* Implementing the manage_image_cache method
* Adding the has_imagecache capability
The localdisk driver within nova-powervm will be updated to have the
following logic. It will check the volume group backing the instance. If the
volume group has a disk with the name 'i_<partial uuid of image>', it will
simply copy that disk into a new disk named after the UUID of the instance.
Otherwise, it will create a disk with the name 'i_<partial uuid of image>'
that contains the image.
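The following is a minimal sketch of that copy-or-create flow. The helpers
``upload_image`` and ``copy_disk`` and the ``vg`` wrapper are hypothetical
illustrations, not the driver's actual API::

    def get_or_create_boot_disk(vg, image, instance):
        # Cached image disks are named 'i_<partial uuid of image>'.
        cache_name = 'i_%s' % image.uuid[:8]
        if cache_name not in [disk.name for disk in vg.disks]:
            # First deploy of this image on the host: stream it from
            # glance once and keep it as the cached copy.
            upload_image(vg, image, cache_name)
        # Every deploy then clones the cached disk into a new disk
        # named after the instance UUID.
        return copy_disk(vg, cache_name, new_name=instance.uuid)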
The image cache manager's purpose is simply to clean out old images that are
not needed by any instances anymore.
Further extension, not part of this blueprint, can be done to manage overall
disk space in the volume group to make sure that the image cache is not
overwhelming the backing disks.
Alternatives
------------
* Leave as is, all deploys potentially slow
* Implement support for linked clones. This is an eventual goal, but
the image cache is still needed in this case as it will also manage the
root disk image.
Security impact
---------------
None
End user impact
---------------
None
Performance Impact
------------------
Performance of subsequent deploys of the same image should be faster.
The deploys will have improved image copy times and reduced network
bandwidth requirements.
Performance of single deploys using different images will be slower.
Deployer impact
---------------
This change will take effect without any deployer impact immediately after
merging. The deployer will not need to take any specific upgrade actions to
make use of it; however, the deployer may need to tune the image cache to make
sure it is not using too much disk space.
A conf option may be added to force the image cache off if deemed necessary.
This will be based on operator feedback, in the event that we need a way to
reduce disk usage.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
tjakobs
Other contributors:
None
Work Items
----------
* Implement the image cache code for the PowerVM driver
* Include support for the image cache in the PowerVM driver. Tolerate it
for other disk drivers, such as SSP.
Dependencies
============
None
Testing
=======
* Unit tests for all code
* Deployment tests in local environments to verify speed increases
Documentation Impact
====================
The deployer docs will be updated to reflect this.
References
==========
None
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Newton
- Introduced
@ -1,7 +0,0 @@
Ocata Specifications
====================
.. toctree::
:glob:
*
@ -1,142 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================
File I/O Cinder Connector
=========================
https://blueprints.launchpad.net/nova-powervm/+spec/file-io-cinder-connector
There are several Cinder drivers that support having the file system mounted
locally and then connecting it into the VM as a volume (e.g. GPFS, NFS).
PowerVM is able to support this type of volume if the user has mounted the
file system on the NovaLink partition. This blueprint adds support for such
Cinder volumes to the PowerVM driver.
Problem description
===================
The PowerVM driver supports Fibre Channel and iSCSI based volumes. It does not
currently support volumes that are presented on a file system as files.
The recent release of PowerVM NovaLink has added support for this in the REST
API. This blueprint looks to take advantage of that support.
Use Cases
---------
* As a user, I want to attach a volume that is backed by a file based Cinder
volume (ex. NFS or GPFS).
* As a user, I want to detach a volume that is backed by a file based Cinder
volume (ex. NFS or GPFS).
Proposed change
===============
Add nova_powervm/virt/powervm/volume/fileio.py. This would extend the existing
volume drivers. It would store the LUN ID on the SCSI bus.
This does not support traditional VIOS. Like the iSCSI change, it would
require running through the NovaLink partition.
Alternatives
------------
None
Security impact
---------------
None.
One may consider the permissions of the file presented by Cinder. The Cinder
driver's BDM will provide a path to a file. The hypervisor will map that file
as the root user, so the file permissions of the volume should not be a
concern. This seems consistent with the other hypervisors utilizing these
types of Cinder drivers.
End user impact
---------------
None
Performance Impact
------------------
None
Deployer impact
---------------
Deployer must set up the backing Cinder driver and connect the file systems to
the NovaLink partition in their environment.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
thorst
Other contributors:
shyama
Work Items
----------
* Create a nova-powervm fileio cinder volume connector. Create associated UT.
* Validate with the GPFS cinder backend.
Dependencies
============
* pypowervm 1.0.0.4 or higher
Testing
=======
Unit testing is a given.
Manual testing will be driven by connecting to a GPFS back-end.
CI environments will be evaluated to determine if there is a way to add this
to the current CI infrastructure.
Documentation Impact
====================
The nova-powervm dev-ref will be updated to reflect that 'file I/O drivers'
are supported, but the support matrix doesn't go into the detail of which
Cinder drivers work with which Nova drivers.
References
==========
* pypowervm add storage element to scsi mapping: https://github.com/powervm/pypowervm/blob/release/1.0.0.4/pypowervm/tasks/scsi_mapper.py#L49
* pypowervm file storage element: https://github.com/powervm/pypowervm/blob/release/1.0.0.4/pypowervm/wrappers/storage.py#L689
@ -1,120 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============
File I/O Driver
===============
https://blueprints.launchpad.net/nova-powervm/+spec/file-io-driver
The PowerVM driver currently uses logical volumes for localdisk ephemeral
storage. This blueprint will add support for using file-backed disks as a
localdisk ephemeral storage option.
Problem description
===================
The PowerVM driver only supports logical volumes for localdisk ephemeral
storage. It does not currently support storage that is presented as a file.
Use Cases
---------
* As a user, I want to have the instance ephemeral storage backed by a file.
Proposed change
===============
Add nova_powervm/virt/powervm/disk/fileio.py. This would extend the existing
disk driver. The ``disk_driver`` powervm conf option will be used to select
file I/O. It will utilize the nova.conf option ``instances_path``.
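For illustration, the resulting configuration might look as follows (the
``instances_path`` value is only an example)::

    [powervm]
    disk_driver = fileio

    [DEFAULT]
    instances_path = /var/lib/nova/instances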
Alternatives
------------
None
Security impact
---------------
None
End user impact
---------------
None
Performance Impact
------------------
Performance may change as the backing storage methods of VMs will be different.
Deployer impact
---------------
The deployer must set the ``disk_driver`` conf option to fileio and ensure
that the ``instances_path`` conf option is set in order to utilize the changes
described in this blueprint.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
tjakobs
Other contributors:
None
Work Items
----------
* Create a nova-powervm fileio driver. Create associated UT.
Dependencies
============
Novalink 1.0.0.5
Testing
=======
* Unit tests for all code
* Manual test will be driven using a File I/O ephemeral disk.
Documentation Impact
====================
Will update the nova-powervm dev-ref to include File I/O as an additional
ephemeral disk option.
References
==========
None
@ -1,7 +0,0 @@
Pike Specifications
===================
.. toctree::
:glob:

@ -1,145 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==============================================
Allow dynamic enable/disable of SRR capability
==============================================
https://blueprints.launchpad.net/nova-powervm/+spec/srr-capability-dynamic-toggle
Currently, to enable or disable the SRR capability on a VM, we need to have
the VM in the shut-off state. We should be able to toggle this field
dynamically so that a shutdown of the VM is not needed.
Problem description
===================
The simplified remote restart (SRR) capability governs whether a VM can be
rebuilt (remote restarted) on a different host when the host on which the
VM resides is down. Currently this attribute can be changed only when the VM
is in the shut-off state. This blueprint addresses that by enabling the SRR
capability to be toggled dynamically (while the VM is still active).
Use Cases
---------
The end user would like to:
- Enable the SRR capability on a VM without shutting it down, so that any
workloads on the VM are unaffected.
- Disable the SRR capability for a VM which need not be rebuilt on another
host, while the VM is still up and running.
Proposed change
===============
The SRR capability is a VM-level attribute and can be changed using
the resize operation. In the case of a resize operation for an active VM:
- Check whether the hypervisor supports dynamic toggling of the SRR capability.
- If it is supported, proceed with updating the SRR capability if it has been
changed.
- Raise a warning if updating the SRR capability is not supported.
Alternatives
------------
None
Security impact
---------------
None
End user impact
---------------
None
Performance Impact
------------------
A change in SRR capability is not likely to happen very frequently, so this
should not have a major impact. When the change happens, the impact on the
performance of any other component (the VM, the compute service, the REST
service, etc.) should be negligible.
Deployer impact
---------------
The end user will be able to dynamically toggle the SRR capability for the
VM. The changes can be utilized immediately once they are deployed.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
manasmandlekar
Other contributors:
shyvenug
Work Items
----------
NA
Dependencies
============
We need to work with the PowerVM platform team to ensure that the SRR toggle
capability is exposed for the compute driver to consume.
Testing
=======
The testing of the change requires a full OpenStack environment with
compute resources configured.
- Ensure the SRR state of a VM can be toggled when it is up and running.
- Ensure the SRR state of a VM can be toggled when it is shut off.
- Perform rebuild operations to ensure that the capability is indeed
being utilized.
Documentation Impact
====================
None
References
==========
None
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Pike
- Introduced
@ -1,414 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================
Device Passthrough
==================
https://blueprints.launchpad.net/nova-powervm/+spec/device-passthrough
Provide a generic way to identify hardware devices such as GPUs and attach them
to VMs.
Problem description
===================
Deployers want to be able to attach accelerators and other adapters to their
VMs. Today in Nova this is possible only in very restricted circumstances. The
goal of this blueprint is to enable generic passthrough of devices for
consumers of the nova-powervm driver.
While these efforts may enable more, and should be extensible going forward,
the primary goal for the current release is to pass through entire physical
GPUs. That is, we are not attempting to pass through:
* Physical functions, virtual functions, regions, etc. I.e. granularity smaller
than "whole adapter". This requires device type-specific support at the
platform level to perform operations such as discovery/inventorying,
configuration, and attach/detach.
* Devices with "a wire out the back" - i.e. those which are physically
connected to anything (networks, storage, etc.) external to the host. These
will require the operator to understand and be able to specify/select
specific connection parameters for proper placement.
Use Cases
---------
As an admin, I wish to be able to configure my host and flavors to allow
passthrough of whole physical GPUs to VMs.
As a user, I wish to make use of appropriate flavors to create VMs with GPUs
attached.
Proposed change
===============
Device Identification and Whitelisting
--------------------------------------
The administrator can identify and allow (explicitly) or deny (by omission)
passthrough of devices by way of a YAML file per compute host.
.. note:: **Future:** We may someday figure out a way to support a config file
on the controller. This would allow e.g. cloud-wide whitelisting and
specification for particular device types by vendor/product ID, which
could then be overridden (or not) by the files on the compute nodes.
The path to the config will be hardcoded as ``/etc/nova/inventory.yaml``.
The file shall contain paragraphs, each of which will:
* Identify zero or more devices based on information available on the
``IOSlot`` NovaLink REST object. In pypowervm, given a ManagedSystem wrapper
``sys_w``, a list of ``IOSlot`` wrappers is available via
``sys_w.asio_config.io_slots``. See `identification`_. Any device not
identified by any paragraph in the file is denied for passthrough. But see
the `allow`_ section for future plans around supporting explicit denials.
* Name the resource class to associate with the resource provider inventory unit
by which the device will be exposed in the driver. If not specified,
``CUSTOM_IOSLOT`` is used. See `resource_class`_.
* List traits to include on the resource provider in addition to those generated
automatically. See `traits`_.
A `formal schema`_ is proposed for review.
.. _formal schema: https://review.openstack.org/#/c/579289/3/nova_powervm/virt/powervm/passthrough_schema.yaml
Here is a summary description of each section.
Name
~~~~
Each paragraph will be introduced by a key which is a human-readable name for
the paragraph. The name has no programmatic significance other than to separate
paragraphs. Each paragraph's name must be unique within the file.
identification
~~~~~~~~~~~~~~
Each paragraph will have an ``identification`` section, which is an object
containing one or more keys corresponding to ``IOSlot`` properties, as follows:
================ ==================== =====================================
YAML key IOSlot property Description
================ ==================== =====================================
vendor_id pci_vendor_id \X{4} (four uppercase hex digits)
device_id pci_dev_id \X{4} "
subsys_vendor_id pci_subsys_vendor_id \X{4} "
subsys_device_id pci_subsys_dev_id \X{4} "
class pci_class \X{4} "
revision_id pci_rev_id \X{2} (two uppercase hex digits)
drc_index drc_index \X{8} (eight uppercase hex digits)
drc_name drc_name String (physical location code)
================ ==================== =====================================
The values are expected to match those produced by ``pvmctl ioslot list -d
<property>`` for a given property.
The ``identification`` section is required, and must contain at least one of
the above keys.
When multiple keys are provided in a paragraph, they are matched with ``AND``
logic.
.. note:: It is a stretch goal of this blueprint to allow wildcards in (some
of) the values. E.g. ``drc_name: U78CB.001.WZS0JZB-P1-*`` would
allow everything on the ``P1`` planar of the ``U78CB.001.WZS0JZB``
enclosure. If we get that far, a spec amendment will be proposed with
the specifics (what syntax, which fields, etc.).
allow
~~~~~
.. note:: The ``allow`` section will not be supported initially, but is
documented here because we thought through what it should look like.
In the initial implementation, any device encompassed by a paragraph
is allowed for passthrough.
Each paragraph will support a boolean ``allow`` keyword.
If omitted, the default is ``true`` - i.e. devices identified by this
paragraph's ``identification`` section are permitted for passthrough. (Note,
however, that devices not encompassed by the union of all the
``identification`` paragraphs in the file are denied for passthrough.)
If ``allow`` is ``false``, the only other section allowed is
``identification``, since the rest don't make sense.
A given device can only be represented once across all ``allow=true``
paragraphs (implicit or explicit); an "allowed" device found more than once
will result in an error.
A given device can be represented zero or more times across all ``allow=false``
paragraphs.
We will first apply the ``allow=true`` paragraphs to construct a preliminary
list of devices; and then apply each ``allow=false`` paragraph and remove
explicitly denied devices from that list.
.. note:: Again, we're not going to support the ``allow`` section at all
initially. It will be a stretch goal to add it as part of this
release, or it may be added in a subsequent release.
resource_class
~~~~~~~~~~~~~~
If ``allow`` is omitted or ``true``, an optional ``resource_class`` key is
supported. Its string value allows the author to designate the resource class
to be used for the inventory unit representing the device on the resource
provider. If omitted, ``CUSTOM_IOSLOT`` will be used as the default.
.. note:: **Future:** We may be able to get smarter about dynamically
defaulting the resource class based on inspecting the device
metadata. For now, we have to rely on the author of the config file
to tell us what kind of device we're looking at.
traits
~~~~~~
If ``allow`` is omitted or ``true``, an optional ``traits`` subsection is
supported. Its value is an array of strings, each of which is the name of a
trait to be added to the resource providers of each device represented by this
paragraph. If the ``traits`` section is included, it must have at least one
value in the list. (If no additional traits are desired, omit the section.)
The values must be valid trait names (either standard from ``os-traits`` or
custom, matching ``CUSTOM_[A-Z0-9_]*``). These will be in addition to the
traits automatically added by the driver - see `Generated Traits`_ below.
Traits which conflict with automatically-generated traits will result in an
error: the driver must be the single source of truth for the traits it
generates.
Traits may be used to indicate any static attribute of a device - for example,
a capability (``CUSTOM_CAPABILITY_WHIZBANG``) not otherwise indicated by
`Generated Traits`_.
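To make the format concrete, a hypothetical paragraph allowing a particular
GPU model might look like the following; all values are illustrative
placeholders::

    my_gpus:
      identification:
        vendor_id: "10DE"
        class: "0302"
      resource_class: CUSTOM_GPU
      traits:
        - CUSTOM_CAPABILITY_WHIZBANG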
Resource Providers
------------------
The driver shall create nested resource providers, one per device (slot), as
children of the compute node provider generated by Nova.
.. TODO: Figure out how NVLink devices appear and how to handle them - ideally
by hiding them and automatically attaching them with their corresponding
device.
The provider name shall be generated as ``PowerVM IOSlot %(drc_index)08X`` e.g.
``PowerVM IOSlot 1C0FFEE1``. We shall let the placement service generate the
UUID. This naming scheme allows us to identify the full set of providers we
"own". This includes identifying providers we may have created on a previous
iteration (potentially in a different process) which now need to be purged
(e.g. because the slot no longer exists on the system). It also helps us
provide a clear migration path in the future, if, for example, Cyborg takes
over generating these providers. It also paves the way for providers
corresponding to things smaller than a slot; e.g. PFs might be namespaced
``PowerVM PF %(drc_index)08X``.
Inventory
~~~~~~~~~
Each device RP shall have an inventory of::
total: 1
reserved: 0
min_unit: 1
max_unit: 1
step_size: 1
allocation_ratio: 1.0
of the `resource_class`_ specified in the config file for the paragraph
matching this device (``CUSTOM_IOSLOT`` by default).
.. note:: **Future:** Some day we will provide SR-IOV VFs, vGPUs, FPGA
regions/functions, etc. At that point we will conceivably have
inventory of multiple units of multiple resource classes, etc.
Generated Traits
~~~~~~~~~~~~~~~~
The provider for a device shall be decorated with the following
automatically-generated traits:
* ``CUSTOM_POWERVM_IOSLOT_VENDOR_ID_%(vendor_id)04X``
* ``CUSTOM_POWERVM_IOSLOT_DEVICE_ID_%(device_id)04X``
* ``CUSTOM_POWERVM_IOSLOT_SUBSYS_VENDOR_ID_%(subsys_vendor_id)04X``
* ``CUSTOM_POWERVM_IOSLOT_SUBSYS_DEVICE_ID_%(subsys_device_id)04X``
* ``CUSTOM_POWERVM_IOSLOT_CLASS_%(class)04X``
* ``CUSTOM_POWERVM_IOSLOT_REVISION_ID_%(revision_id)02X``
* ``CUSTOM_POWERVM_IOSLOT_DRC_INDEX_%(drc_index)08X``
* ``CUSTOM_POWERVM_IOSLOT_DRC_NAME_%(drc_name)s`` where ``drc_name`` is
normalized via ``os_traits.normalize_name``.
In addition, the driver shall decorate the provider with any `traits`_
specified in the config file paragraph identifying this device. If that
paragraph specifies any of the above generated traits, an exception shall be
raised (we'll blow up the compute service).
update_provider_tree
~~~~~~~~~~~~~~~~~~~~
The above provider tree structure/data shall be provided to Nova by overriding
the ``ComputeDriver.update_provider_tree`` method. The algorithm shall be as
follows:
* Parse the config file.
* Discover devices (``GET /ManagedSystem``, pull out
``.asio_config.io_slots``).
* Merge the config data with the discovered devices to produce a list of
devices to pass through, along with inventory of the appropriate resource
class name, and traits (generated and specified).
* Ensure the tree contains entries according to this calculated passthrough
list, with appropriate inventory and traits.
* Set-subtract the names of the providers in the calculated passthrough list
from those in the provider tree whose names are prefixed with ``PowerVM
IOSlot`` and delete the resulting "orphans".
This is in addition to the standard ``update_provider_tree`` contract of
ensuring appropriate ``VCPU``, ``MEMORY_MB``, and ``DISK_GB`` resources on the
compute node provider.
.. note:: It is a stretch goal of this blueprint to implement caching and/or
other enhancements to the above algorithm to optimize performance by
minimizing the need to call PowerVM REST and/or process whitelist
files every time.
Flavor Support
--------------
Existing Nova support for generic resource specification via flavor extra specs
should "just work". For example, a flavor requesting two GPUs might look like::
resources:VCPU=1
resources:MEMORY_MB=2048
resources:DISK_GB=100
resources1:CUSTOM_GPU=1
traits1:CUSTOM_POWERVM_IOSLOT_VENDOR_ID_C0DE=required
traits1:CUSTOM_POWERVM_IOSLOT_DEVICE_ID_F00D=required
resources2:CUSTOM_GPU=1
traits2:CUSTOM_POWERVM_IOSLOT_DRC_INDEX_1C0FFEE1=required
PowerVMDriver
-------------
spawn
~~~~~
During ``spawn``, we will query placement to retrieve the resource provider
records listed in the ``allocations`` parameter. Any provider names which are
prefixed with ``PowerVM IOSlot`` will be parsed to extract the DRC index (the
last eight characters of the provider name). The corresponding slots will be
extracted from the ``ManagedSystem`` payload and added to the
``LogicalPartition`` payload for the instance as it is being created.
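A minimal sketch of that name parsing under the scheme above (the function
name is illustrative, not the driver's actual code)::

    PREFIX = 'PowerVM IOSlot '

    def drc_index_from_provider_name(name):
        # 'PowerVM IOSlot 1C0FFEE1' -> 0x1C0FFEE1; None for providers
        # that are not ours.
        if not name.startswith(PREFIX):
            return None
        return int(name[-8:], 16)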
destroy
~~~~~~~
IOSlots are detached automatically when we ``DELETE`` the ``LogicalPartition``,
so no changes should be required here.
Live Migration
~~~~~~~~~~~~~~
Since we can't migrate the state of an active GPU, we will block live migration
of a VM with an attached IOSlot.
.. _`Cold Migration`:
Cold Migration, Rebuild, Remote Restart
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We should get these for free, but need to make sure they're tested.
Hot plug/unplug
~~~~~~~~~~~~~~~
This is not in the scope of the current effort. For now, attaching/detaching
devices to/from existing VMs can only be accomplished via resize (`Cold
Migration`_).
Alternatives
------------
Use Nova's PCI passthrough subsystem. Consensus is that it is too inflexible
and is not the way forward.
Use oslo.config instead of a YAML file. Experience with
``[pci]passthrough_whitelist`` has led us to conclude that the config-option
format is too restrictive and awkward. The direction for Nova (as discussed at
the Queens PTG in Denver) will be toward some kind of YAML format; we're going
to be the pioneers on this front.
Security impact
---------------
It is the operator's responsibility to ensure that the passthrough YAML config
file has appropriate permissions, and lists only devices which do not
themselves pose a security risk if attached to a malicious VM.
End user impact
---------------
End users gain hardware acceleration for their workloads.
Performance Impact
------------------
Discovery
~~~~~~~~~
For the `update_provider_tree`_ flow, we're adding the step of loading and
parsing the passthrough YAML config file. This should be negligible compared to
e.g. retrieving the ``ManagedSystem`` object (which we're already doing, so no
impact there).
spawn/destroy
~~~~~~~~~~~~~
There is no performance impact on the common (non-PowerVM) code paths.
Creating or destroying a LogicalPartition with attached IOSlots may take
longer.
Deployer impact
---------------
None.
Developer impact
----------------
None.
Upgrade impact
--------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
efried
Other contributors:
edmondsw, mdrabe
Work Items
----------
See `Proposed change`_.
Dependencies
============
os-traits 0.9.0 to pick up the ``normalize_name`` method.
Testing
=======
Testing this in the CI will be challenging, given that we are unlikely to be
able to equip all of our CI nodes with GPUs.
We will likely need to rely on manual testing and PowerVC to cover the code
paths described under `PowerVMDriver`_ with a handful of various device
configurations.
Documentation Impact
====================
* Add a section to our support matrix for generic device passthrough.
* User documentation for:
* How to build the passthrough YAML file.
* How to construct flavors accordingly.
References
==========
None.
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Rocky
- Introduced

@ -1,7 +0,0 @@
Rocky Specifications
====================
.. toctree::
:glob:
*

@ -1,316 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/nova-powervm/+spec/example
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand. The title and this first paragraph
should be used as the subject line and body of the commit message
respectively.
Some notes about the nova-powervm spec and blueprint process:
* Not all blueprints need a spec. For more information see
https://docs.openstack.org/nova/latest/contributor/blueprints.html#specs
* The aim of this document is first to define the problem we need to solve,
and second, to agree on the overall approach to solve that problem.
* This is not intended to be extensive documentation for a new feature.
For example, there is no need to specify the exact configuration changes,
nor the exact details of any DB model changes. But you should still define
that such changes are required, and be clear on how that will affect
upgrades.
* You should aim to get your spec approved before writing your code.
While you are free to write prototypes and code before getting your spec
approved, it's possible that the outcome of the spec review process leads
you towards a fundamentally different solution than you first envisaged.
* But, API changes are held to a much higher level of scrutiny.
As soon as an API change merges, we must assume it could be in production
somewhere, and as such, we then need to support that API change forever.
To avoid getting that wrong, we do want lots of details about API changes
upfront.
Some notes about using this template:
* Your spec should be in reStructuredText, like this template.
* Please wrap text at 79 columns.
* The filename in the git repository should match the launchpad URL, for
example: https://blueprints.launchpad.net/nova-powervm/+spec/awesome-thing
should be named awesome-thing.rst
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox and see the generated
HTML file in doc/build/html/specs/<path_of_your_file>
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which cannot be viewed in Gerrit. It
will also allow inline feedback on the diagram itself.
* If your specification proposes any changes to the Nova REST API such
as changing parameters which can be returned or accepted, or even
the semantics of what happens when a client calls into the API, then
you should add the APIImpact flag to the commit message. Specifications with
the APIImpact flag can be found with the following query:
https://review.openstack.org/#/q/status:open+project:openstack/nova-powervm+message:apiimpact,n,z
Problem description
===================
A detailed description of the problem. What problem is this blueprint
addressing?
Use Cases
---------
What use cases does this address? What impact on actors does this change have?
Ensure you are clear about the actors in each use case: Developer, End User,
Deployer etc.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort, make it clear where this piece ends. In
other words, what's the scope of this effort?
At this point, if you would like to just get feedback on if the problem and
proposed change fit in nova-powervm, you can stop here and post this for review
to get preliminary feedback. If so please say:
Posting to get preliminary feedback on the scope of this spec.
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
End user impact
---------------
How would the end user be impacted by this change? The "End User" is defined
as the users of the deployed cloud.
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Deployer impact
---------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Are the default values ones which will
work well in real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features.
Developer impact
----------------
Discuss things that will affect other developers working on the driver or
OpenStack in general.
Upgrade impact
--------------
Describe any potential upgrade impact on the system, such as:
* If this change adds a new feature to the compute host that the controller
services rely on, the controller services may need to check the minimum
compute service version in the deployment before using the new feature. For
example, in Ocata, the FilterScheduler did not use the Placement API until
all compute services were upgraded to at least Ocata.
* Nova supports N-1 version *nova-compute* services for rolling upgrades. Does
the proposed change need to consider older code running that may impact how
the new change functions, for example, by changing or overwriting global
state in the database? This is generally most problematic when making changes
that involve multiple compute hosts, like move operations such as migrate,
resize, unshelve and evacuate.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in nova-powervm, or
in other projects, that this one either depends on or is related to. For
example, a dependency on pypowervm changes should be documented here.
* If this requires functionality of another project that is not currently used
by nova-powervm document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss the important scenarios needed to test here, as well as
specific edge cases we should be ensuring work correctly. For each
scenario please specify if this requires specialized hardware, a full
openstack environment, or can be simulated inside the nova-powervm tree.
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
Which audiences are affected most by this change, and which documentation
titles on nova-powervm.readthedocs.io should be updated because of this change?
Don't repeat details discussed above, but reference them here in the context of
documentation for multiple audiences. For example, the Operations Guide targets
cloud operators, and the End User Guide would need to be updated if the change
offers a new feature available through the CLI or dashboard. If a config option
changes or is deprecated, note here that the documentation needs to be updated
to reflect this specification's change.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to
History
=======
Optional section intended to be used each time the spec is updated, to
describe new design, API, or database schema changes. Useful to let the reader
understand what has happened over time.
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Rocky
- Introduced

@ -1,654 +0,0 @@
# Copyright (C) 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# For information about the format of this file, refer to the documentation
# for sphinx-feature-classification.
[driver.powervm]
title=PowerVM
[operation.attach-volume]
title=Attach block volume to instance
status=optional
notes=The attach volume operation provides a means to hotplug
additional block storage to a running instance. This allows
storage capabilities to be expanded without interruption of
service. In a cloud model it would be more typical to just
spin up a new instance with large storage, so the ability to
hotplug extra storage is for those cases where the instance
is considered to be more of a pet than cattle. Therefore
this operation is not considered to be mandatory to support.
cli=nova volume-attach <server> <volume>
driver.powervm=complete
[operation.attach-tagged-volume]
title=Attach tagged block device to instance
status=optional
notes=Attach a block device with a tag to an existing server instance. See
"Device tags" for more information.
cli=nova volume-attach <server> <volume> [--tag <tag>]
driver.powervm=missing
[operation.detach-volume]
title=Detach block volume from instance
status=optional
notes=See notes for attach volume operation.
cli=nova volume-detach <server> <volume>
driver.powervm=complete
[operation.extend-volume]
title=Extend block volume attached to instance
status=optional
notes=The extend volume operation provides a means to extend
the size of an attached volume. This allows volume size
to be expanded without interruption of service.
In a cloud model it would be more typical to just
spin up a new instance with large storage, so the ability to
extend the size of an attached volume is for those cases
where the instance is considered to be more of a pet than cattle.
Therefore this operation is not considered to be mandatory to support.
cli=cinder extend <volume> <new_size>
driver.powervm=partial
driver-notes.powervm=Not supported for rbd volumes.
[operation.attach-interface]
title=Attach virtual network interface to instance
status=optional
notes=The attach interface operation provides a means to hotplug
additional interfaces to a running instance. Hotplug support
varies between guest OSes and some guests require a reboot for
new interfaces to be detected. This operation allows interface
capabilities to be expanded without interruption of service.
In a cloud model it would be more typical to just spin up a
new instance with more interfaces.
cli=nova interface-attach <server>
driver.powervm=complete
[operation.attach-tagged-interface]
title=Attach tagged virtual network interface to instance
status=optional
notes=Attach a virtual network interface with a tag to an existing
server instance. See "Device tags" for more information.
cli=nova interface-attach <server> [--tag <tag>]
driver.powervm=missing
[operation.detach-interface]
title=Detach virtual network interface from instance
status=optional
notes=See notes for attach-interface operation.
cli=nova interface-detach <server> <port_id>
driver.powervm=complete
[operation.maintenance-mode]
title=Set the host in a maintenance mode
status=optional
notes=This operation allows a host to be placed into maintenance
mode, automatically triggering migration of any running
instances to an alternative host and preventing new
instances from being launched. This is not considered
to be a mandatory operation to support.
The driver methods to implement are "host_maintenance_mode" and
"set_host_enabled".
cli=nova host-update <host>
driver.powervm=complete
[operation.evacuate]
title=Evacuate instances from a host
status=optional
notes=A possible failure scenario in a cloud environment is the outage
of one of the compute nodes. In such a case the instances of the down
host can be evacuated to another host. It is assumed that the old host
is unlikely ever to be powered back on, otherwise the evacuation
attempt will be rejected. When the instances get moved to the new
host, their volumes get re-attached and the locally stored data is
dropped. That happens in the same way as a rebuild.
This is not considered to be a mandatory operation to support.
cli=nova evacuate <server>;nova host-evacuate <host>
driver.powervm=complete
[operation.rebuild]
title=Rebuild instance
status=optional
notes=A possible use case is when additional attributes need to be set
on the instance; nova will purge all existing data from the system
and remake the VM with the given information, such as 'metadata' and
'personalities'. This is not considered to be a mandatory
operation to support.
cli=nova rebuild <server> <image>
driver.powervm=complete
[operation.get-guest-info]
title=Guest instance status
status=mandatory
notes=Provides realtime information about the power state of the guest
instance. Since the power state is used by the compute manager for
tracking changes in guests, this operation is considered mandatory to
support.
cli=
driver.powervm=complete
[operation.get-host-uptime]
title=Guest host uptime
status=optional
notes=Returns the host uptime since power on; it is used
to report hypervisor status.
cli=
driver.powervm=complete
[operation.get-host-ip]
title=Guest host ip
status=optional
notes=Returns the IP address of this host; it is used when doing
resize and migration.
cli=
driver.powervm=complete
[operation.live-migrate]
title=Live migrate instance across hosts
status=optional
notes=Live migration provides a way to move an instance off one
compute host, to another compute host. Administrators may use
this to evacuate instances from a host that needs to undergo
maintenance tasks, though of course this may not help if the
host is already suffering a failure. In general instances are
considered cattle rather than pets, so it is expected that an
instance is liable to be killed if host maintenance is required.
It is technically challenging for some hypervisors to provide
support for the live migration operation, particularly those
built on the container based virtualization. Therefore this
operation is not considered mandatory to support.
cli=nova live-migration <server>;nova host-evacuate-live <host>
driver.powervm=complete
[operation.force-live-migration-to-complete]
title=Force live migration to complete
status=optional
notes=Live migration provides a way to move a running instance to another
compute host. But it can sometimes fail to complete if an instance has
a high rate of memory or disk page access.
This operation provides the user with an option to assist the progress
of the live migration. The mechanism used to complete the live
migration depends on the underlying virtualization subsystem
capabilities. If libvirt/qemu is used and the post-copy feature is
available and enabled then the force complete operation will cause
a switch to post-copy mode. Otherwise the instance will be suspended
until the migration is completed or aborted.
cli=nova live-migration-force-complete <server> <migration>
driver.powervm=missing
[operation.launch]
title=Launch instance
status=mandatory
notes=Importing pre-existing running virtual machines on a host is
considered out of scope of the cloud paradigm. Therefore this
operation is mandatory to support in drivers.
cli=
driver.powervm=complete
[operation.pause]
title=Stop instance CPUs (pause)
status=optional
notes=Stopping an instance's CPUs can be thought of as roughly
equivalent to suspend-to-RAM. The instance is still present
in memory, but execution has stopped. The problem, however,
is that there is no mechanism to inform the guest OS that
this takes place, so upon unpausing, its clocks will no
longer report correct time. For this reason hypervisor vendors
generally discourage use of this feature and some do not even
implement it. Therefore this operation is considered optional
to support in drivers.
cli=nova pause <server>
driver.powervm=missing
[operation.reboot]
title=Reboot instance
status=optional
notes=It is reasonable for a guest OS administrator to trigger a
graceful reboot from inside the instance. A host initiated
graceful reboot requires guest co-operation and a non-graceful
reboot can be achieved by a combination of stop+start. Therefore
this operation is considered optional.
cli=nova reboot <server>
driver.powervm=complete
[operation.rescue]
title=Rescue instance
status=optional
notes=The rescue operation starts an instance in a special
configuration whereby it is booted from a special root
disk image. The goal is to allow an administrator to
recover the state of a broken virtual machine. In general
the cloud model considers instances to be cattle, so if
an instance breaks the general expectation is that it be
thrown away and a new instance created. Therefore this
operation is considered optional to support in drivers.
cli=nova rescue <server>
driver.powervm=complete
[operation.resize]
title=Resize instance
status=optional
notes=The resize operation allows the user to change a running
instance to match the size of a different flavor from the one
it was initially launched with. There are many different
flavor attributes that potentially need to be updated. In
general it is technically challenging for a hypervisor to
support the alteration of all relevant config settings for a
running instance. Therefore this operation is considered
optional to support in drivers.
cli=nova resize <server> <flavor>
driver.powervm=complete
[operation.resume]
title=Restore instance
status=optional
notes=See notes for the suspend operation
cli=nova resume <server>
driver.powervm=missing
[operation.set-admin-password]
title=Set instance admin password
status=optional
notes=Provides a mechanism to (re)set the password of the administrator
account inside the instance operating system. This requires that the
hypervisor has a way to communicate with the running guest operating
system. Given the wide range of operating systems in existence it is
unreasonable to expect this to be practical in the general case. The
configdrive and metadata service both provide a mechanism for setting
the administrator password at initial boot time. In the case where this
operation were not available, the administrator would simply have to
login to the guest and change the password in the normal manner, so
this is just a convenient optimization. Therefore this operation is
not considered mandatory for drivers to support.
cli=nova set-password <server>
driver.powervm=missing
[operation.snapshot]
title=Save snapshot of instance disk
status=optional
notes=The snapshot operation allows the current state of the
instance root disk to be saved and uploaded back into the
glance image repository. The instance can later be booted
again using this saved image. This is in effect making
the ephemeral instance root disk into a semi-persistent
storage, in so much as it is preserved even though the guest
is no longer running. In general though, the expectation is
that the root disks are ephemeral so the ability to take a
snapshot cannot be assumed. Therefore this operation is not
considered mandatory to support.
cli=nova image-create <server> <name>
driver.powervm=complete
[operation.suspend]
title=Suspend instance
status=optional
notes=Suspending an instance can be thought of as roughly
equivalent to suspend-to-disk. The instance no longer
consumes any RAM or CPUs, with its live running state
having been preserved in a file on disk. It can later
be restored, at which point it should continue execution
where it left off. As with stopping instance CPUs, it suffers from the fact
that the guest OS will typically be left with a clock that
is no longer telling correct time. For container based
virtualization solutions, this operation is particularly
technically challenging to implement and is an area of
active research. This operation tends to make more sense
when thinking of instances as pets, rather than cattle,
since with cattle it would be simpler to just terminate
the instance instead of suspending. Therefore this operation
is considered optional to support.
cli=nova suspend <server>
driver.powervm=missing
[operation.swap-volume]
title=Swap block volumes
status=optional
notes=The swap volume operation is a mechanism for changing a running
instance so that its attached volume(s) are backed by different
storage in the host. An alternative to this would be to simply
terminate the existing instance and spawn a new instance with the
new storage. In other words this operation is primarily targeted towards
the pet use case rather than cattle, however, it is required for volume
migration to work in the volume service. This is considered optional to
support.
cli=nova volume-update <server> <attachment> <volume>
driver.powervm=missing
[operation.terminate]
title=Shutdown instance
status=mandatory
notes=The ability to terminate a virtual machine is required in
order for a cloud user to stop utilizing resources and thus
avoid indefinitely ongoing billing. Therefore this operation
is mandatory to support in drivers.
cli=nova delete <server>
driver.powervm=complete
[operation.trigger-crash-dump]
title=Trigger crash dump
status=optional
notes=The trigger crash dump operation is a mechanism for triggering
a crash dump in an instance. The feature is typically implemented by
injecting an NMI (Non-maskable Interrupt) into the instance. It provides
a means to dump the production memory image as a dump file which is useful
for users. Therefore this operation is considered optional to support.
cli=nova trigger-crash-dump <server>
driver.powervm=missing
[operation.unpause]
title=Resume instance CPUs (unpause)
status=optional
notes=See notes for the "Stop instance CPUs" operation
cli=nova unpause <server>
driver.powervm=missing
[guest.disk.autoconfig]
title=Auto configure disk
status=optional
notes=Partition and resize the filesystem to match the size specified by
flavors.root_gb. As this is a hypervisor-specific feature,
this operation is considered optional to support.
cli=
driver.powervm=missing
[guest.disk.rate-limit]
title=Instance disk I/O limits
status=optional
notes=The ability to set rate limits on virtual disks allows for
greater performance isolation between instances running on the
same host storage. It is valid to delegate scheduling of I/O
operations to the hypervisor with its default settings, instead
of doing fine grained tuning. Therefore this is not considered
to be a mandatory configuration to support.
cli=nova limits
driver.powervm=missing
[guest.setup.configdrive]
title=Config drive support
status=choice(guest.setup)
notes=The config drive provides an information channel into
the guest operating system, to enable configuration of the
administrator password, file injection, registration of
SSH keys, etc. Since cloud images typically ship with all
login methods locked, a mechanism to set the administrator
password or keys is required to get login access. Alternatives
include the metadata service and disk injection. At least one
of the guest setup mechanisms is required to be supported by
drivers, in order to enable login access.
cli=
driver.powervm=complete
[guest.setup.inject.file]
title=Inject files into disk image
status=optional
notes=This allows for the end user to provide data for multiple
files to be injected into the root filesystem before an instance
is booted. This requires that the compute node understand the
format of the filesystem and any partitioning scheme it might
use on the block device. This is a non-trivial problem considering
the vast number of filesystems in existence. The problem of injecting
files to a guest OS is better solved by obtaining via the metadata
service or config drive. Therefore this operation is considered
optional to support.
cli=
driver.powervm=missing
[guest.setup.inject.networking]
title=Inject guest networking config
status=optional
notes=This allows for static networking configuration (IP
address, netmask, gateway and routes) to be injected directly
into the root filesystem before an instance is booted. This
requires that the compute node understand how networking is
configured in the guest OS which is a non-trivial problem
considering the vast number of operating system types. The
problem of configuring networking is better solved by DHCP
or by obtaining static config via
config drive. Therefore this operation is considered optional
to support.
cli=
driver.powervm=missing
[console.rdp]
title=Remote desktop over RDP
status=choice(console)
notes=This allows the administrator to interact with the graphical
console of the guest OS via RDP. This provides a way to see boot
up messages and login to the instance when networking configuration
has failed, thus preventing a network based login. Some operating
systems may prefer to emit messages via the serial console for
easier consumption. Therefore support for this operation is not
mandatory, however, a driver is required to support at least one
of the listed console access operations.
cli=nova get-rdp-console <server> <console-type>
driver.powervm=missing
[console.serial.log]
title=View serial console logs
status=choice(console)
notes=This allows the administrator to query the logs of data
emitted by the guest OS on its virtualized serial port. For
UNIX guests this typically includes all boot up messages and
so is useful for diagnosing problems when an instance fails
to successfully boot. Not all guest operating systems will be
able to emit boot information on a serial console, others may
only support graphical consoles. Therefore support for this
operation is not mandatory, however, a driver is required to
support at least one of the listed console access operations.
cli=nova console-log <server>
driver.powervm=missing
[console.serial.interactive]
title=Remote interactive serial console
status=choice(console)
notes=This allows the administrator to interact with the serial
console of the guest OS. This provides a way to see boot
up messages and login to the instance when networking configuration
has failed, thus preventing a network based login. Not all guest
operating systems will be able to emit boot information on a serial
console, others may only support graphical consoles. Therefore support
for this operation is not mandatory, however, a driver is required to
support at least one of the listed console access operations.
This feature was introduced in the Juno release with blueprint
https://blueprints.launchpad.net/nova/+spec/serial-ports
cli=nova get-serial-console <server>
driver.powervm=missing
[console.spice]
title=Remote desktop over SPICE
status=choice(console)
notes=This allows the administrator to interact with the graphical
console of the guest OS via SPICE. This provides a way to see boot
up messages and login to the instance when networking configuration
has failed, thus preventing a network based login. Some operating
systems may prefer to emit messages via the serial console for
easier consumption. Therefore support for this operation is not
mandatory, however, a driver is required to support at least one
of the listed console access operations.
cli=nova get-spice-console <server> <console-type>
driver.powervm=missing
[console.vnc]
title=Remote desktop over VNC
status=choice(console)
notes=This allows the administrator to interact with the graphical
console of the guest OS via VNC. This provides a way to see boot
up messages and login to the instance when networking configuration
has failed, thus preventing a network based login. Some operating
systems may prefer to emit messages via the serial console for
easier consumption. Therefore support for this operation is not
mandatory, however, a driver is required to support at least one
of the listed console access operations.
cli=nova get-vnc-console <server> <console-type>
driver.powervm=complete
[storage.block]
title=Block storage support
status=optional
notes=Block storage provides instances with direct attached
virtual disks that can be used for persistent storage of data.
As an alternative to direct attached disks, an instance may
choose to use network based persistent storage. OpenStack provides
object storage via the Swift service, or a traditional filesystem
such as NFS may be used. Some types of instances may
not require persistent storage at all, being simple transaction
processing systems reading requests & sending results to and from
the network. Therefore support for this configuration is not
considered mandatory for drivers to support.
cli=
driver.powervm=complete
[storage.block.backend.fibrechannel]
title=Block storage over fibre channel
status=optional
notes=To maximise performance of the block storage, it may be desirable
to directly access fibre channel LUNs from the underlying storage
technology on the compute hosts. Since this is just a performance
optimization of the I/O path it is not considered mandatory to support.
cli=
driver.powervm=complete
[storage.block.backend.iscsi]
title=Block storage over iSCSI
status=condition(storage.block==complete)
notes=If the driver wishes to support block storage, it is common to
provide an iSCSI based backend to access the storage from cinder.
This isolates the compute layer from knowledge of the specific storage
technology used by Cinder, albeit at a potential performance cost due
to the longer I/O path involved. If the driver chooses to support
block storage, then this is considered mandatory to support, otherwise
it is considered optional.
cli=
driver.powervm=complete
[storage.block.backend.iscsi.auth.chap]
title=CHAP authentication for iSCSI
status=optional
notes=If accessing the cinder iSCSI service over an untrusted LAN it
is desirable to be able to enable authentication for the iSCSI
protocol. CHAP is the commonly used authentication protocol for
iSCSI. This is not considered mandatory to support.
cli=
driver.powervm=complete
[storage.image]
title=Image storage support
status=mandatory
notes=This refers to the ability to boot an instance from an image
stored in the glance image repository. Without this feature it
would not be possible to bootstrap from a clean environment, since
there would be no way to get block volumes populated and reliance
on external PXE servers is out of scope. Therefore this is considered
a mandatory storage feature to support.
cli=nova boot --image <image> <name>
driver.powervm=complete
[networking.firewallrules]
title=Network firewall rules
status=optional
notes=Unclear how this is different from security groups
cli=
driver.powervm=missing
[networking.routing]
title=Network routing
status=optional
notes=Unclear what this refers to
cli=
driver.powervm=complete
[networking.securitygroups]
title=Network security groups
status=optional
notes=The security groups feature provides a way to define rules
to isolate the network traffic of different instances running
on a compute host. This would prevent actions such as MAC and
IP address spoofing, or the ability to setup rogue DHCP servers.
In a private cloud environment this may be considered to be a
superfluous requirement. Therefore this is considered to be an
optional configuration to support.
cli=
driver.powervm=missing
[networking.topology.flat]
title=Flat networking
status=choice(networking.topology)
notes=Provide network connectivity to guests using a
flat topology across all compute nodes. At least one
of the networking configurations is mandatory to
support in the drivers.
cli=
driver.powervm=complete
[networking.topology.vlan]
title=VLAN networking
status=choice(networking.topology)
notes=Provide network connectivity to guests using VLANs to define the
topology when using nova-network. At least one of the networking
configurations is mandatory to support in the drivers.
cli=
driver.powervm=complete
[operation.uefi-boot]
title=uefi boot
status=optional
notes=This allows users to boot a guest with uefi firmware.
cli=
driver.powervm=missing
[operation.device-tags]
title=Device tags
status=optional
notes=This allows users to set tags on virtual devices when creating a
server instance. Device tags are used to identify virtual device
metadata, as exposed in the metadata API and on the config drive.
For example, a network interface tagged with "nic1" will appear in
the metadata along with its bus (ex: PCI), bus address
(ex: 0000:00:02.0), MAC address, and tag (nic1). If multiple networks
are defined, the order in which they appear in the guest operating
system will not necessarily reflect the order in which they are given
in the server boot request. Guests should therefore not depend on
device order to deduce any information about their network devices.
Instead, device role tags should be used. Device tags can be
applied to virtual network interfaces and block devices.
cli=nova boot
driver.powervm=missing
[operation.quiesce]
title=quiesce
status=optional
notes=Quiesce the specified instance to prepare for snapshots.
For libvirt, guest filesystems will be frozen through qemu
agent.
cli=
driver.powervm=missing
[operation.unquiesce]
title=unquiesce
status=optional
notes=See notes for the quiesce operation
cli=
driver.powervm=missing
[operation.multiattach-volume]
title=Attach block volume to multiple instances
status=optional
notes=The multiattach volume operation is an extension to
the attach volume operation. It allows a single volume
to be attached to multiple instances. This operation is
not considered to be mandatory to support.
Note that for the libvirt driver, this is only supported
if qemu<2.10 or libvirt>=3.10.
cli=nova volume-attach <server> <volume>
driver.powervm=missing

@ -1,41 +0,0 @@
Feature Support Matrix
======================
.. warning::
Please note: while this document is still being maintained, it is slowly
being updated to re-group and classify features.
When considering which capabilities should be marked as mandatory, the
following general guiding principles were applied:
* **Inclusivity** - people have shown the ability to make effective
use of a wide range of virtualization technologies with broadly
varying feature sets. Keeping the requirements as inclusive
as possible avoids second-guessing what a user may wish to use
the cloud compute service for.
* **Bootstrapping** - a practical use case test is to consider that the
starting point for the compute deployment is an empty data center
with new machines and network connectivity. Then look at the
minimum features required of a compute service in order
to get user instances running and processing work over the
network.
* **Competition** - an early leader in the cloud compute service space
was Amazon EC2. A sanity check for whether a feature should be
mandatory is to consider whether it was available in the first
public release of EC2. This had quite a narrow feature set, but
nonetheless found very high usage in many use cases. So it
serves to illustrate that many features need not be considered
mandatory in order to get useful work done.
* **Reality** - there are many virt drivers currently shipped with
Nova, each with their own supported feature set. Any feature which is
missing in at least one virt driver that is already in-tree must
by inference be considered optional until all in-tree drivers
support it. This does not rule out the possibility of a currently
optional feature becoming mandatory at a later date, based on other
principles above.
.. support_matrix:: support-matrix.ini

@ -1,186 +0,0 @@
alembic==0.9.8
amqp==2.2.2
appdirs==1.4.3
asn1crypto==0.24.0
attrs==17.4.0
automaton==1.14.0
Babel==2.3.4
bashate==0.5.1
bandit==1.1.0
bcrypt==3.1.4
cachetools==2.0.1
castellan==0.16.0
certifi==2018.1.18
cffi==1.11.5
chardet==3.0.4
cliff==2.11.0
cmd2==0.8.1
colorama==0.3.9
contextlib2==0.5.5
coverage==4.0
cryptography==2.1.4
cursive==0.2.1
ddt==1.0.1
debtcollector==1.19.0
decorator==3.4.0
deprecation==2.0
dogpile.cache==0.6.5
enum34==1.0.4
enum-compat==0.0.2
eventlet==0.20.0
extras==1.0.0
fasteners==0.14.1
fixtures==3.0.0
flake8==2.5.5
future==0.16.0
futurist==1.8.0
gabbi==1.35.0
gitdb2==2.0.3
GitPython==2.1.8
greenlet==0.4.10
hacking==0.12.0
idna==2.6
iso8601==0.1.11
Jinja2==2.10
jmespath==0.9.3
jsonpatch==1.21
jsonpath-rw==1.4.0
jsonpath-rw-ext==1.1.3
jsonpointer==2.0
jsonschema==2.6.0
keystoneauth1==3.9.0
keystonemiddleware==4.20.0
kombu==4.1.0
linecache2==1.0.0
lxml==3.4.1
Mako==1.0.7
MarkupSafe==1.0
mccabe==0.2.1
microversion-parse==0.2.1
mock==2.0.0
monotonic==1.4
mox3==0.20.0
msgpack==0.5.6
msgpack-python==0.5.6
munch==2.2.0
netaddr==0.7.18
netifaces==0.10.4
networkx==1.11
numpy==1.14.2
openstacksdk==0.12.0
os-brick==2.6.1
os-client-config==1.29.0
os-resource-classes==0.1.0
os-service-types==1.2.0
os-traits==0.12.0
os-vif==1.14.0
os-win==3.0.0
os-xenapi==0.3.3
osc-lib==1.10.0
oslo.cache==1.26.0
oslo.concurrency==3.26.0
oslo.config==6.1.0
oslo.context==2.19.2
oslo.db==4.44.0
oslo.i18n==3.15.3
oslo.log==3.36.0
oslo.messaging==7.0.0
oslo.middleware==3.31.0
oslo.policy==1.35.0
oslo.privsep==1.32.0
oslo.reports==1.18.0
oslo.rootwrap==5.8.0
oslo.serialization==2.21.1
oslo.service==1.34.0
oslo.upgradecheck==0.1.1
oslo.utils==3.37.0
oslo.versionedobjects==1.35.0
oslo.vmware==2.17.0
oslotest==3.2.0
osprofiler==1.4.0
ovs==2.10.0
ovsdbapp==0.15.0
packaging==17.1
paramiko==2.0.0
Paste==2.0.2
PasteDeploy==1.5.0
pbr==2.0.0
pep8==1.5.7
pika-pool==0.1.3
pika==0.10.0
pluggy==0.6.0
ply==3.11
prettytable==0.7.1
psutil==3.2.2
psycopg2==2.8.3
py==1.5.2
pyasn1==0.4.2
pyasn1-modules==0.2.1
pycadf==2.7.0
pycparser==2.18
pyflakes==0.8.1
pycodestyle==2.0.0
pyinotify==0.9.6
pyroute2==0.5.4
PyJWT==1.7.0
PyMySQL==0.7.6
PyNaCl==1.2.1
pyOpenSSL==17.5.0
pyparsing==2.2.0
pyperclip==1.6.0
pypowervm==1.1.23
pytest==3.4.2
python-barbicanclient==4.5.2
python-cinderclient==3.3.0
python-dateutil==2.5.3
python-editor==1.0.3
python-glanceclient==2.8.0
python-ironicclient==2.7.0
python-keystoneclient==3.15.0
python-mimeparse==1.6.0
python-neutronclient==6.7.0
python-subunit==1.2.0
python-swiftclient==3.2.0
pytz==2018.3
PyYAML==3.12
repoze.lru==0.7
requests==2.14.2
requests-mock==1.2.0
requestsexceptions==1.4.0
retrying==1.3.3
rfc3986==1.1.0
Routes==2.3.1
simplejson==3.13.2
six==1.10.0
smmap2==2.0.3
sortedcontainers==2.1.0
SQLAlchemy==1.0.10
Sphinx==1.6.2
sqlalchemy-migrate==0.11.0
sqlparse==0.2.4
statsd==3.2.2
stestr==1.0.0
stevedore==1.20.0
setuptools==21.0.0
suds-jurko==0.6
taskflow==2.16.0
Tempita==0.5.2
tenacity==4.9.0
testrepository==0.0.20
testresources==2.0.0
testscenarios==0.4
testtools==2.2.0
tooz==1.58.0
traceback2==1.4.0
unittest2==1.1.0
urllib3==1.22
vine==1.1.4
voluptuous==0.11.1
warlock==1.3.0
WebOb==1.8.2
websockify==0.8.0
wrapt==1.10.11
wsgi-intercept==1.7.0

@ -1,18 +0,0 @@
# Copyright 2016 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Required to play nicely with namespace composition (PEP420).
__import__('pkg_resources').declare_namespace(__name__)

@ -1,18 +0,0 @@
# Copyright 2016 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Required to play nicely with namespace composition (PEP420).
__import__('pkg_resources').declare_namespace(__name__)

@ -1,33 +0,0 @@
# Copyright 2016, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Shim layer for nova_powervm.virt.powervm.driver.PowerVMDriver.
Duplicate all public symbols. This is necessary for the constants as well as
the classes - because instances of the classes need to be able to resolve
references to the constants.
"""
import nova_powervm.virt.powervm.driver as real_drv
LOG = real_drv.LOG
CONF = real_drv.CONF
DISK_ADPT_NS = real_drv.DISK_ADPT_NS
DISK_ADPT_MAPPINGS = real_drv.DISK_ADPT_MAPPINGS
NVRAM_NS = real_drv.NVRAM_NS
NVRAM_APIS = real_drv.NVRAM_APIS
KEEP_NVRAM_STATES = real_drv.KEEP_NVRAM_STATES
FETCH_NVRAM_STATES = real_drv.FETCH_NVRAM_STATES
PowerVMDriver = real_drv.PowerVMDriver

@ -1,23 +0,0 @@
# Copyright 2016 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import nova.conf
from nova_powervm.conf import powervm
CONF = nova.conf.CONF
powervm.register_opts(CONF)

@ -1,260 +0,0 @@
# Copyright 2016, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
CONF = cfg.CONF
powervm_group = cfg.OptGroup(
'powervm',
title='PowerVM Options')
powervm_opts = [
cfg.IntOpt('uncapped_proc_weight',
default=64, min=1, max=255,
help='The processor weight to assign to newly created VMs. '
'Value should be between 1 and 255. Represents how '
'aggressively LPARs grab CPU when unused cycles are '
'available.'),
cfg.StrOpt('vopt_media_volume_group',
default='rootvg',
help='The volume group on the system that should be used '
'to store the config drive metadata that will be attached '
'to VMs. If not specified and no media repository '
'exists, rootvg will be used. This option is ignored if '
'a media repository already exists.'),
cfg.IntOpt('vopt_media_rep_size',
default=1, min=1,
help='The size of the media repository (in GB) for the '
'metadata for config drive. Only used if the media '
'repository needs to be created.'),
cfg.StrOpt('image_meta_local_path',
default='/tmp/cfgdrv/',
help='The location where the config drive ISO files should be '
'built.'),
cfg.StrOpt('pvm_vswitch_for_novalink_io',
default='NovaLinkVEABridge',
help="Name of the PowerVM virtual switch to be used when "
"mapping Linux based network ports to PowerVM virtual "
"Ethernet devices"),
cfg.BoolOpt('remove_vopt_media_on_boot',
default=False,
help="If enabled, tells the PowerVM driver to trigger the "
"removal of the media from the virtual optical device "
"used for initialization of VMs on spawn after "
"'remove_vopt_media_time' minutes."),
cfg.IntOpt('remove_vopt_media_time',
default=60, min=0,
help="The amount of time in minutes after a VM has been "
"created for the virtual optical media to be removed."),
cfg.BoolOpt('use_rmc_mgmt_vif',
default=True,
help="If enabled, tells the PowerVM Driver to create an RMC "
"network interface on the deploy of a VM. This is an "
"adapter that can only talk to the NovaLink partition "
"and enables DLPAR actions."),
cfg.BoolOpt('use_rmc_ipv6_scheme',
default=True,
help="Only used if use_rmc_mgmt_vif is True and config drive "
"is being used. If set, the system will configure the "
"RMC network interface with an IPv6 link local address. "
"This is generally set to True, but users may wish to "
"turn this off if their operating system has "
"compatibility issues."),
cfg.IntOpt('vios_active_wait_timeout',
default=300,
help="Default time in seconds to wait for Virtual I/O Server "
"to be up and running.")
]
ssp_opts = [
cfg.StrOpt('cluster_name',
default='',
help='Cluster hosting the Shared Storage Pool to use for '
'storage operations. If none specified, the host is '
'queried; if a single Cluster is found, it is used. '
'Not used unless disk_driver option is set to ssp.')
]
vol_adapter_opts = [
cfg.StrOpt('fc_attach_strategy',
choices=['vscsi', 'npiv'], ignore_case=True,
default='vscsi', mutable=True,
help='The Fibre Channel Volume Strategy defines how FC Cinder '
'volumes should be attached to the Virtual Machine. The '
'options are: npiv or vscsi. If npiv is selected then '
'the ports_per_fabric and fabrics option should be '
'specified and at least one fabric_X_port_wwpns option '
'(where X corresponds to the fabric name) must be '
'specified.'),
cfg.StrOpt('fc_npiv_adapter_api',
default='nova_powervm.virt.powervm.volume.npiv.'
'NPIVVolumeAdapter',
help='Volume Adapter API to connect FC volumes using NPIV '
'connection mechanism.'),
cfg.StrOpt('fc_vscsi_adapter_api',
default='nova_powervm.virt.powervm.volume.vscsi.'
'PVVscsiFCVolumeAdapter',
help='Volume Adapter API to connect FC volumes through Virtual '
'I/O Server using PowerVM vSCSI connection mechanism.'),
cfg.IntOpt('vscsi_vios_connections_required',
default=1, min=1,
help='Indicates a minimum number of Virtual I/O Servers that '
'are required to support a Cinder volume attach with the '
'vSCSI volume connector.'),
cfg.BoolOpt('volume_use_multipath',
default=False,
help="Use multipath connections when attaching iSCSI or FC"),
cfg.StrOpt('iscsi_iface',
default='default',
help="The iSCSI transport iface to use to connect to target in "
"case offload support is desired. Do not confuse the "
"iscsi_iface parameter to be provided here with the "
"actual transport name."),
cfg.StrOpt('rbd_user',
default='',
help="Refer to this user when connecting and authenticating "
"with the Ceph RBD server.")
]
# NPIV Options. Only applicable if the 'fc_attach_strategy' is set to 'npiv'.
# Otherwise this section can be ignored.
npiv_opts = [
cfg.IntOpt('ports_per_fabric',
default=1, min=1,
help='The number of physical ports that should be connected '
'directly to the Virtual Machine, per fabric. '
'Example: 2 fabrics and ports_per_fabric set to 2 will '
'result in 4 NPIV ports being created, two per fabric. '
'If multiple Virtual I/O Servers are available, will '
'attempt to span ports across I/O Servers.'),
cfg.StrOpt('fabrics', default='A',
help='Unique identifier for each physical FC fabric that is '
'available. This is a comma separated list. If there '
'are two fabrics for multi-pathing, then this could be '
'set to A,B. '
'The fabric identifiers are used for the '
'\'fabric_<identifier>_port_wwpns\' key.')
]
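# For illustration only (values are examples, not defaults): a two-fabric
# NPIV setup in nova.conf might look like:
#
#   [powervm]
#   fc_attach_strategy = npiv
#   fabrics = A,B
#   ports_per_fabric = 2
#   fabric_A_port_wwpns = 21000024FF649104,21000024FF649105
#   fabric_B_port_wwpns = 21000024FF649106,21000024FF649107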
remote_restart_opts = [
cfg.StrOpt('nvram_store',
choices=['none', 'swift'], ignore_case=True,
default='none',
help='The NVRAM store to use to hold the PowerVM NVRAM for '
'virtual machines.'),
]
swift_opts = [
cfg.StrOpt('swift_container', default='powervm_nvram',
help='The Swift container to store the PowerVM NVRAM in. This '
'must be configured to the same value for all compute hosts.'),
cfg.StrOpt('swift_username', default='powervm',
help='The Swift user name to use for operations that use '
'the Swift store.'),
cfg.StrOpt('swift_user_domain_name', default='powervm',
help='The Swift domain the user is a member of.'),
cfg.StrOpt('swift_password', secret=True,
help='The password for the Swift user.'),
cfg.StrOpt('swift_project_name', default='powervm',
help='The Swift project.'),
cfg.StrOpt('swift_project_domain_name', default='powervm',
help='The Swift project domain.'),
cfg.StrOpt('swift_auth_version', default='3', help='The Keystone API '
'version.'),
cfg.StrOpt('swift_auth_url', help='The Keystone authorization url. '
'Example: "http://keystone-hostname:5000/v3"'),
cfg.StrOpt('swift_cacert', required=False, help='Path to CA certificate '
'file. Example: /etc/swiftclient/myca.pem'),
cfg.StrOpt('swift_endpoint_type', help='The endpoint/interface type for '
'the Swift client to select from the Keystone Service Catalog '
'for the connection URL. Swift defaults to "publicURL".')
]
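# For illustration only (values are examples): enabling the Swift NVRAM
# store in nova.conf might look like:
#
#   [powervm]
#   nvram_store = swift
#   swift_container = powervm_nvram
#   swift_auth_url = http://keystone-hostname:5000/v3
#   swift_username = powervm
#   swift_password = <secret>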
vnc_opts = [
cfg.BoolOpt('vnc_use_x509_auth', default=False,
help='If enabled, uses X509 Authentication for the '
'VNC sessions started for each VM.'),
cfg.StrOpt('vnc_ca_certs', help='Path to CA certificate '
'to use for verifying VNC X509 Authentication.'),
cfg.StrOpt('vnc_server_cert', help='Path to Server certificate '
'to use for verifying VNC X509 Authentication.'),
cfg.StrOpt('vnc_server_key', help='Path to Server private key '
'to use for verifying VNC X509 Authentication.')
]
STATIC_OPTIONS = (powervm_opts + ssp_opts + vol_adapter_opts + npiv_opts
+ remote_restart_opts + swift_opts + vnc_opts)
# Dictionary where the key is the NPIV Fabric Name, and the value is a list of
# Physical WWPNs that match the key.
NPIV_FABRIC_WWPNS = {}
FABRIC_WWPN_HELP = ('A comma delimited list of all the physical FC port '
'WWPNs that support the specified fabric. Is tied to '
'the NPIV fabrics key.')
# This is only used to provide a sample for the list_opt() method
fabric_sample = [
cfg.StrOpt('fabric_A_port_wwpns', default='', help=FABRIC_WWPN_HELP),
cfg.StrOpt('fabric_B_port_wwpns', default='', help=FABRIC_WWPN_HELP),
]
def _register_fabrics(conf, fabric_mapping):
"""Registers the fabrics to WWPNs options and builds a mapping.
This method registers the 'fabric_X_port_wwpns' (where X is determined by
the 'fabrics' option values) and then builds a dictionary that maps the
fabrics to the WWPNs. This mapping can then be used later without having
to reparse the options.
"""
# At this point, the fabrics should be specified. Iterate over those to
# determine the port_wwpns per fabric.
if conf.powervm.fabrics is not None:
port_wwpn_keys = []
fabrics = conf.powervm.fabrics.split(',')
for fabric in fabrics:
opt = cfg.StrOpt('fabric_%s_port_wwpns' % fabric,
default='', help=FABRIC_WWPN_HELP)
port_wwpn_keys.append(opt)
conf.register_opts(port_wwpn_keys, group='powervm')
        # Now that we've registered the fabrics, populate the NPIV dictionary
for fabric in fabrics:
key = 'fabric_%s_port_wwpns' % fabric
wwpns = conf.powervm[key].split(',')
wwpns = [x.upper().strip(':') for x in wwpns]
fabric_mapping[fabric] = wwpns
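# With the illustrative nova.conf values shown earlier (fabrics = A,B), the
# mapping built above would end up roughly as follows; the WWPNs are assumed
# example values, not real hardware:
#
#   NPIV_FABRIC_WWPNS = {
#       'A': ['21000024FF649104', '21000024FF649105'],
#       'B': ['21000024FF649106', '21000024FF649107'],
#   }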
def register_opts(conf):
conf.register_group(powervm_group)
conf.register_opts(STATIC_OPTIONS, group=powervm_group)
_register_fabrics(conf, NPIV_FABRIC_WWPNS)
# To generate a sample config run:
# $ oslo-config-generator --namespace nova_powervm > nova_powervm_sample.conf
def list_opts():
# The nova conf tooling expects each module to return a dict of options.
# When powervm is pulled into nova proper the return value would be in
# this form:
# return {powervm_group.name: STATIC_OPTIONS + fabric_sample}
#
# The oslo-config-generator tooling expects a tuple:
return [(powervm_group.name, STATIC_OPTIONS + fabric_sample)]
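# A hedged usage sketch, not taken from this module: registering these
# options on a standalone oslo.config object (the config file name is a
# placeholder):
#
#   from oslo_config import cfg
#   conf = cfg.ConfigOpts()
#   register_opts(conf)
#   conf(['--config-file', 'nova.conf'])
#   print(conf.powervm.fabrics)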

View File

@ -1,21 +0,0 @@
# Copyright 2016 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.hacking import checks
def factory(register):
checks.factory(register)
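
# A brief sketch of how this factory is typically wired up so the checks run
# under flake8; the tox.ini section below follows the hacking library's
# convention and is an assumption, not taken from this repository:
#
#   [hacking]
#   local-check-factory = nova_powervm.hacking.checks.factory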

View File

@ -1,425 +0,0 @@
# German translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=n != 1;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "Erwartet wurde genau ein Host; gefunden: %d"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"Die Momentaufnahmeoperation wird in Verbindung mit der "
"CONF.powervm.disk_driver-Einstellung %s nicht unterstützt."
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "VIF einfügen fehlgeschlagen, da Instanz %s nicht gefunden wurde."
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "VIF einfügen wegen eines unerwarteten Fehlers fehlgeschlagen."
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "Plattengröße kann nicht verringert werden."
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "Lokale Festplatten können nicht migriert werden."
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"Das VNC-basierte Terminal für Instanz %(instance_name)s konnte nicht geöffnet werden: "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"Die Datenträgergruppe %(vol_grp)s, in der die virtuellen "
"optischen Medien gespeichert werden sollen, wurde nicht gefunden. Das Medienrepository konnte nicht erstellt werden."
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"Der SCSI-Bus %(bus)x auf der Managementpartition wurde durchsucht, die Platte mit "
"UDID %(udid)s erschien nach %(polls)d Abfragen über %(timeout)d "
"Sekunden nicht."
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"Erwartet wurde genau eine Platte auf der Managementpartition unter "
"%(path_pattern)s; gefunden wurden %(count)d."
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"Die Einheit %(devpath)s ist immer noch auf der Managementpartition vorhanden, nachdem "
"versucht wurde, sie zu löschen. Es wurde %(polls)d Mal in %(timeout)d "
"Sekunden abgefragt."
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"Fehler beim Zuordnen des Bootdatenträgers von Instanz %(instance_name)s zur Managementpartition "
"eines virtuellen E/A-Servers."
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"Die neu erstellte Zuordnung des Speicherelements %(stg_name)s vom"
" virtuellen E/A-Server %(vios_name)s zur Managementpartition konnte nicht gefunden werden."
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "Die Datenträgergruppe '%(vg_name)s' für diesen Vorgang konnte nicht lokalisiert werden."
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "Cluster '%(clust_name)s' für diesen Vorgang konnte nicht lokalisiert werden."
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "Es konnte kein Cluster für diesen Vorgang lokalisiert werden."
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"Gefunden wurden unerwartet %(clust_count)d Cluster zu dem Namen "
"'%(clust_name)s'."
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"Kein cluster_name angegeben. Verweigerung der Auswahl eines der %(clust_count)d "
" gefundenen Cluster."
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Der Speicher (ID: %(volume_id)s) konnte nicht zur virtuellen Maschine "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"Fehler beim Erweitern des Datenträgers (ID: %(volume_id)s) auf der virtuellen Maschine "
"%(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Der Datenträger (ID: %(volume_id)s) konnte nicht von der virtuellen Maschine "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"Schritte vor der Livemigration für Datenträger (ID: %(volume_id)s) "
"von der virtuellen Maschine %(instance_name)s konnten nicht durchgeführt werden."
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "PowerVM-API fehlgeschlagen für Instanz=%(inst_name)s.%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"Es ist kein virtueller E/A-Server (VIOS) verfügbar. Der Treiber hat"
" %(wait_time)d Sekunden darauf gewartet, dass ein VIOS aktiv wird. Der Compute-Agent "
"kann nicht gestartet werden, wenn keine virtuellen E/A-Server verfügbar sind. Überprüfen Sie "
"die RMC-Konnektivität zwischen den PowerVM-NovaLink- und den virtuellen E/A-"
"Servern und starten Sie dann den Nova Compute-Agenten erneut."
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "Es sind keine aktiven virtuellen E/A-Server verfügbar."
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "Die virtuelle Maschine kann auf einem neuen Host nicht neu erstellt werden. Fehler: %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"Die Option %(then_opt)s ist erforderlich, wenn %(if_opt)s angegeben wurde als "
"'%(if_value)s'."
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "Livemigration der Instanz '%(name)s' fehlgeschlagen. Grund: %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"Migration der Instanz %(name)s konnte nicht durchgeführt werden, da der Datenträger %(volume)s nicht "
"an den Zielhost %(host)s angehängt werden kann."
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"Migration der Instanz %(name)s konnte nicht durchgeführt werden, da der Host %(host)s nur %(allowed)s "
" gleichzeitige Migrationen zulässt und %(running)s Migrationen derzeit ausgeführt werden."
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"Migration der Instanz '%(name)s' konnte nicht durchgeführt werden, da die Speicherregionsgröße der "
"Quelle (%(source_mrs)d MB) nicht mit der Speicherregionsgröße des "
"Ziels (%(target_mrs)d MB) übereinstimmt."
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"Migration der Instanz %(name)s konnte nicht durchgeführt werden, da ihr Prozessorkompatibilitätsmodus %(mode)s"
" in der Liste der durch den Zielhost unterstützten Modi \"%(modes)s\" nicht enthalten ist."
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"Livemigration der Instanz '%(name)s' fehlgeschlagen. Grund: Migrationsstatus "
"lautet: %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"Livemigration der Instanz '%(name)s' fehlgeschlagen, da sie nicht bereit ist. "
"Grund: %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "Der Parameter vif_type muss für diese vif_driver-Implementierung vorhanden sein"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"Es kann kein geeigneter PowerVM-VIF-Driver für den VIF-Typ %(vif_type)s "
"auf der Instanz %(instance)s gefunden werden"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"Gefunden wurden keine zulässigen Ethernet-Anschlüsse auf dem physischen Netz "
"'%(physnet)s' für Instanz %(inst)s für den SRIOV-basierten VIF mit der MAC-Adresse "
"%(vif_mac)s."
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "Mehrere gemeinsam genutzte Verarbeitungspools mit dem Namen %(pool)s."
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "Gemeinsam genutzter Verarbeitungspool %(pool)s nicht gefunden"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"Versionsattribut %(attr)s muss True oder False sein. Der aktuelle Wert "
"%(val)s ist nicht zulässig."
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "Die konfigurierte Platte unterstützt keine Migration oder Größenänderung."
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "Das Ändern der Größe von dateigestützten Instanzen wird derzeit nicht unterstützt."
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"Der Host ist kein Element desselben SSP-Clusters. Quellenhost-"
"Cluster: %(source_clust_name)s. Quellenhost-SSP: %(source_ssp_name)s."
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Der nicht flüchtige Arbeitsspeicher konnte für Instanz %(instance)s nicht gespeichert werden. Grund: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Der nicht flüchtige Arbeitsspeicher konnte für Instanz %(instance)s nicht abgerufen werden. Grund: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Der nicht flüchtige Arbeitsspeicher konnte für Instanz %(instance)s nicht gelöscht werden. Grund: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "Die Konfigurationsoption '%(option)s' muss festgelegt werden."
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "NVRAM konnte nach %d Versuchen nicht gespeichert werden"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "Objekt ist in Swift nicht vorhanden."
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "Ungültiger Verbindungstyp von %s"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"Es konnte kein virtueller E/A-Server gefunden werden, der die NPIV-Port-Zuordnung für den Server hostet. "
""
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"Fehler beim Erkennen einer gültigen HDisk auf einem virtuellen E/A-Server für Datenträger "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"Es konnte keine HDisk in der erforderlichen Anzahl von virtuellen E/A-Servern erkannt werden. "
"Der Datenträger %(volume_id)s erfordert %(vios_req)d virtuelle E/A-Server,"
" aber der Datenträger wurde nur auf %(vios_act)d virtuellen E/A-Servern gefunden."
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1

View File

@ -1,425 +0,0 @@
# Spanish translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=n != 1;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "Se esperaba exactamente un solo host; se han encontrado %d"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"La operación de instantánea no recibe soporte junto con el valor "
"CONF.powervm.disk_driver de %s."
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "La conexión de vif ha fallado porque no se ha encontrado la instancia %s."
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "La conexión de vif ha fallado debido a un error inesperado."
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "No se puede reducir el tamaño de disco."
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "No se puede migrar los discos locales."
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"No se ha podido abrir el terminal basado en VNC para la instancia %(instance_name)s: "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"No se puede ubicar el grupo de volúmenes %(vol_grp)s en el que almacenar "
"el soporte óptico virtual. No se puede crear el repositorio de soportes."
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"Tras haber explorado el bus SCSI %(bus)x en la partición de gestión, el disco con "
"UDID %(udid)s no ha aparecido después de los sondeos %(polls)d en %(timeout)d "
"segundos."
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"Se esperaba encontrar un único disco en la partición de gestión en "
"%(path_pattern)s; se han encontrado %(count)d."
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"El dispositivo %(devpath)s todavía está presente en la partición de gestión después "
"de intentar suprimirlo. Se ha sondeado %(polls)d veces durante %(timeout)d "
"segundos."
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"No se ha podido correlacionar el disco de arranque de la instancia %(instance_name)s con la partición "
"de gestión desde ningún servidor de E/S virtual."
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"No se ha encontrado la correlación recién creada del elemento de almacenamiento %(stg_name)s del"
" servidor de E/S virtual %(vios_name)s con la partición de gestión."
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "No se puede ubicar el grupo de volúmenes '%(vg_name)s' para esta operación."
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "No se puede ubicar el clúster '%(clust_name)s' para esta operación."
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "No se puede ubicar ningún clúster para esta operación."
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"Inesperadamente, se han encontrado clústeres %(clust_count)d coincidentes con el nombre "
"'%(clust_name)s'."
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"No se ha especificado cluster_name. Se rechaza seleccionar uno de los clústeres %(clust_count)d "
" encontrados."
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"No se puede asociar el almacenamiento (id: %(volume_id)s) con la máquina virtual "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"No se puede ampliar el volumen (id: %(volume_id)s) en la máquina virtual "
"%(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"No se puede desconectar el volumen (id: %(volume_id)s) de la máquina virtual "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"No se pueden realizar los pasos previos a la migración en directo en el volumen (id: %(volume_id)s) "
"desde la máquina virtual %(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "La interfaz de programación de aplicaciones de PowerVM no se ha podido completar para la instancia=%(inst_name)s.%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"No hay servidores de E/S virtuales disponibles. El controlador ha intentado esperar a que un "
"VIOS pasara estar activo durante %(wait_time)d segundos. El agente de cálculo "
"no se puede iniciar si no hay ningún servidor de E/S virtual disponible. Compruebe "
"la conectividad RMC entre NovaLink de PowerVM y los servidores de E/S virtuales "
"y luego reinicie el agente de cálculo Nova. "
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "No hay servidores de E/S virtuales activos disponibles."
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "No se puede recrear la máquina virtual en el host nuevo. El error es %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"La opción %(then_opt)s es necesaria si %(if_opt)s se especifica como "
"'%(if_value)s'."
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "La migración en vivo de la instancia '%(name)s' ha fallado por la razón: %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"No se puede migrar %(name)s porque no se puede conectar el volumen %(volume)s "
"en el host de destino %(host)s."
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"No se puede migrar %(name)s porque el host %(host)s solo permite %(allowed)s"
" migraciones simultáneas y hay actualmente %(running)s migraciones en ejecución."
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"No se puede migrar la instancia '%(name)s' porque el tamaño de región de memoria del "
"origen (%(source_mrs)d MB) no coincide con el tamaño de región de memoria del "
"destino (%(target_mrs)d MB)."
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"No se puede migrar %(name)s porque su modalidad de compatibilidad del procesador %(mode)s"
" no está en la lista de modalidades \"%(modes)s\" soportadas por el host de destino."
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"La migración en vivo de la instancia '%(name)s' ha fallado porque el estado de migración "
"es: %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"La migración en vivo de la instancia '%(name)s' ha fallado porque no está lista. "
"Razón: %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "El parámetro vif_type debe estar presente para esta implementación de vif_driver."
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"No se ha podido encontrar el controlador de VIF de PowerVM para el tipo de VIF %(vif_type)s "
"en la instancia %(instance)s."
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"No se pueden encontrar puertos Ethernet aceptables en la red física "
"'%(physnet)s' para la instancia %(inst)s para el VIF basado en SRIOV con la dirección MAC "
"%(vif_mac)s."
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "Varias agrupaciones de proceso compartidas con el nombre %(pool)s."
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "No se puede encontrar la agrupación de proceso compartida %(pool)s"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"El atributo de flavor %(attr)s debe ser True o False. El valor actual "
"%(val)s no está permitido."
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "El controlador de disco configurado no admite la migración ni el redimensionamiento."
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "El redimensionamiento de instancias con archivos de copia de seguridad no está soportado actualmente."
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"El host no es miembro del mismo clúster de SSP. El clúster de host de "
"origen: %(source_clust_name)s. El SSP del host de origen: %(source_ssp_name)s."
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"La NVRAM no se ha podido almacenar para la instancia %(instance)s. Razón: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"La NVRAM no se ha podido captar para la instancia %(instance)s. Razón: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"La NVRAM no se ha podido suprimir para la instancia %(instance)s. Razón: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "La opción de configuración '%(option)s' debe establecerse."
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "No se puede almacenar NVRAM después de %d intentos"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "El objeto no existe en Swift."
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "Tipo de conexión no válido de %s"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"No se ha podido encontrar ningún servidor de E/S virtual que aloje la correlación de puerto de NPIV para el "
"servidor."
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"No se ha podido descubrir hdisk válido en ningún servidor de E/S virtual para el volumen "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"Se ha encontrado un error en el descubrimiento del hdisk en el número necesario de servidores de E/S "
"virtuales. El volumen %(volume_id)s necesita %(vios_req)d servidores de E/S virtuales, "
" pero el disco solo se ha encontrado en %(vios_act)d servidores de E/S virtuales."
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1

View File

@ -1,427 +0,0 @@
# French translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=n>1;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "Précisément un hôte attendu ; trouvé %d"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"Opération d'instantané non prise en charge en association avec "
"un paramètre CONF.powervm.disk_driver de %s."
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "Echec de connexion vif car l'instance %s est introuvable."
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "Echec de connexion vif en raison d'une erreur inattendue."
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "Impossible de réduire la taille du disque."
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "Impossible de migrer des disques locaux."
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"Echec d'ouverture du terminal basé VNC pour l'instance %(instance_name)s : "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"Impossible de localiser le groupe de volumes %(vol_grp)s dans lequel "
"est stocké le support optique virtuel. Impossible de créer "
"le référentiel de supports."
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"Après analyse du bus SCSI %(bus)x sur la partition de gestion, le disque "
"avec l'UDID %(udid)s n'est pas apparu après %(polls)d interrogations en "
"%(timeout)d secondes."
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"Précisément un disque attendu sur la partition de gestion à l'adresse "
"Attendu %(path_pattern)s ; trouvé %(count)d."
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"L'unité %(devpath)s est encore présente sur la partition de gestion après "
"la tentative de suppression. %(polls)d interrogations en %(timeout)d "
"secondes."
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"Echec du mappage du disque d'amorçage de l'instance %(instance_name)s "
"sur la partition de gestion depuis tout serveur Virtual I/O Server."
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"Echec de détection du mappage nouvellement créé de l'élément de stockage"
" %(stg_name)s du serveur VIOS %(vios_name)s vers la partition de gestion."
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "Impossible de localiser le groupe de volumes '%(vg_name)s' pour cette opération."
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "Impossible de localiser la grappe '%(clust_name)s' pour cette opération."
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "Impossible de localiser une grappe pour cette opération."
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"Détection inattendue de %(clust_count)d grappes avec un nom correspondant. "
"'%(clust_name)s'."
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"Aucun cluster_name spécifié. Refus de sélectionner une des %(clust_count)d"
" grappes détectées."
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Impossible de connecter le stockage (ID : %(volume_id)s) à la machine "
"virtuelle %(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"Impossible d'étendre le volume (ID : %(volume_id)s) de la machine "
"virtuelle %(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Impossible de déconnecter le volume (ID : %(volume_id)s) de la machine "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"Impossible d'effectuer la procédure de pré-migration sur le volume "
"(ID : %(volume_id)s) depuis la machine virtuelle %(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "Echec de l'API PowerVM pour l'instance=%(inst_name)s.%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"Aucun serveur VIOS disponible. Le pilote a tenté d'attendre qu'un VIOS"
" soit disponible pendant %(wait_time)d s. L'agent de calcul ne peut pas "
"démarrer si aucun serveur VIOS n'est disponible. Vérifiez la connectivité "
"RMC entre les serveurs PowerVM NovaLink et Virtual I/O Server, puis "
"redémarrez l'aget de calcul Nova. "
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "Aucun serveur Virtual I/O Server actif disponible."
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "Impossible de régénérer la machine virtuelle sur le nouvel hôte. Erreur : %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"L'option %(then_opt)s est obligatoire si %(if_opt)s est spécifié pour "
"'%(if_value)s'."
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "Echec de la migration active de l'instance '%(name)s' ; motif : %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"Impossible de migrer %(name)s car le volume %(volume)s ne peut pas être "
"connecté à l'hôte de destination %(host)s."
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"Impossible de migrer %(name)s car l'hôte %(host)s autorise smt %(allowed)s"
" %(allowed)s migrations simultanées et %(running)s sont déjà en cours."
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"Impossible de migrer l'instance '%(name)s' car la taille de région de "
"mémoire de la source (%(source_mrs)d Mo) ne correspond pas à celle de "
"la cible (%(target_mrs)d Mo)."
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"Impossible de migrer %(name)s car son mode de compatibilité processeur"
" %(mode)s n'est pas dans la liste de modes \"%(modes)s\" pris en charge "
"par l'hôte cible."
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"Echec de la migration active de l'instance '%(name)s' car l'état de "
"la migration est %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"Echec de la migration active de l'instance '%(name)s' car non prête. "
"Motif : %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "Le paramètre vif_type doit être présent pour cette implémentation de vif_driver."
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"Pilote VIF PowerVM approprié introuvable pour le type VIF %(vif_type)s "
"sur l'instance %(instance)s"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"Impossible de trouver des ports Ethernet acceptables sur le réseau "
"physique '%(physnet)s' pour l'instance %(inst)s pour SRIOV basé VIF "
"avec l'adresse MAC %(vif_mac)s."
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "Plusieurs pools de traitement partagé avec le nom %(pool)s."
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "Impossible de trouver le pool de traitement partagé %(pool)s"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"L'attribut de style %(attr)s doit être Vrai ou Faux. La valeur en cours "
"%(val)s n'est pas admise."
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "Le pilote de disque configuré ne prend pas en charge la migration ou le redimensionnement."
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "Le redimensionnement des instances à base de fichiers n'est pas pris en charge actuellement."
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"L'hôte n'est pas membre de la même grappe SSP. Grappe d'hôtes "
"source : %(source_clust_name)s. SSP hôte source : %(source_ssp_name)s."
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Impossible stocker NVRAM pour l'instance %(instance)s. Motif : "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Impossible extraire NVRAM pour l'instance %(instance)s. Motif : "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Impossible supprimer NVRAM pour l'instance %(instance)s. Motif : "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "L'option de configuration '%(option)s' doit être définie."
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "Impossible de stocker la mémoire rémanente après %d tentatives"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "L'objet n'existe pas dans Swift."
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "Type de connexion non valide : %s"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"Impossible de trouver un serveur VIOS hébergeant la mappe de port NPIV "
"pour le serveur."
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"Echec de reconnaissance de hdisk valide sur un serveur Virtual I/O Server "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"Echec de reconnaissance du hdisk sur le nombre requis de serveurs "
"Virtual I/O Server. Volume %(volume_id)s requérant %(vios_req)d serveurs"
" VIOS mais disque détecté seulement sur %(vios_act)d serveurs VIOS."
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1

View File

@ -1,425 +0,0 @@
# Italian translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=n != 1;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "Previsto un solo host; trovati %d"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"L'operazione di istantanea non è supportata in congiunzione con "
"un'impostazione CONF.powervm.disk_driver di %s."
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "Collegamento vif non riuscito perché l'istanza %s non è stata trovata."
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "Collegamento vif non riuscito a causa di un errore imprevisto."
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "Impossibile ridurre la dimensione del disco."
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "Impossibile migrare i dischi locali."
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"L'apertura del terminale basato su VNC per l'istanza %(instance_name)s non è riuscita: "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"Impossibile individuare il gruppo di volumi %(vol_grp)s per memorizzarvi i supporti ottici"
"virtuali. Impossibile creare il repository di supporti."
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"Avendo effettuato la scansione del bus SCSI %(bus)x sulla partizione di gestione, non è stato possibile rilevare "
"il disco con UDID %(udid)s dopo l'esecuzione di %(polls)d operazioni di polling nell'arco di %(timeout)d secondi."
"per l'istanza del servizio."
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"Era previsto trovare un solo disco sulla partizione di gestione in "
"%(path_pattern)s; trovati %(count)d."
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"Il dispositivo %(devpath)s è ancora presente nella partizione dopo "
"il tentativo di eliminarlo. Operazione di polling eseguita %(polls)d volte nell'arco di %(timeout)d"
"secondi."
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"Impossibile associare il disco di avvio dell'istanza %(instance_name)s "
"alla partizione di gestione da qualsiasi Virtual I/O Server."
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"Impossibile trovare l'associazione appena creata dell'elemento memoria %(stg_name)s"
" dal Virtual I/O Server %(vios_name)s alla partizione di gestione."
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "Impossibile individuare il gruppo di volumi '%(vg_name)s' per questa operazione."
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "Impossibile individuare il cluster '%(clust_name)s' per questa operazione."
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "Impossibile individuare un cluster per questa operazione."
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"Sono stati trovati inaspettatamente %(clust_count)d cluster che corrispondono al nome"
"'%(clust_name)s'."
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"Nessun cluster_name specificato. Rifiutata la selezione di uno dei %(clust_count)d"
" cluster trovati."
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Impossibile collegare la memoria (id: %(volume_id)s) alla macchina virtuale "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"Impossibile estendere il volume (id: %(volume_id)s) sulla macchina virtuale "
"%(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Impossibile scollegare il volume (id: %(volume_id)s) dalla macchina virtuale "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"Impossibile eseguire i passi preliminari della migrazione live sul volume (id: %(volume_id)s) "
"dalla macchina virtuale %(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "Impossibile completare l'API PowerVM per l'istanza=%(inst_name)s.%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"Non è disponibile alcun Virtual I/O Server. Il driver ha provato ad attendere che un"
" VIOS diventasse disponibile per %(wait_time)d secondi. L'agent di calcolo "
"non è in grado di avviarsi, se non sono disponibili VIOS (Virtual I/O Server). Controllare "
"la connettività RMC tra PowerVM NovaLink e i Virtual I/O "
"Server, quindi, riavviare l'agent di calcolo Nova."
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "Non sono disponibili Virtual I/O Server attivi."
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "Impossibile ricreare la macchina virtuale sul nuovo host. L'errore è %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"L'opzione %(then_opt)s è richiesta se %(if_opt)s è specificato come "
"'%(if_value)s'."
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "La migrazione live dell'istanza '%(name)s' non è riuscita per il motivo: %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"Impossibile migrare %(name)s, perché il volume %(volume)s non può essere collegato "
"all'host di destinazione %(host)s."
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"Impossibile migrare %(name)s perché l'host %(host)s consente solo %(allowed)s"
" migrazioni simultanee e attualmente sono in esecuzione %(running)s migrazioni."
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"Impossibile migrare l'istanza '%(name)s' perché la dimensione dell'area di memoria "
"dell'origine (%(source_mrs)d MB) non corrisponde alla dimensione dell'area di memoria della "
"destinazione (%(target_mrs)d MB)."
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"Impossibile migrare %(name)s, perché la sua modalità di compatibilità del processore %(mode)s "
" non è inclusa nell'elenco di modalità \"%(modes)s\" supportate dall'host di destinazione."
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"La migrazione live dell'istanza '%(name)s' non è riuscita perché lo stato della migrazione "
"è: %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"La migrazione live dell'istanza '%(name)s' non è riuscita perché non è pronta. "
"Motivo: %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "il parametro vif_type deve essere presente per questa implementazione di vif_driver"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"Impossibile trovare il driver PowerVM VIF appropriato per il tipo VIF %(vif_type)s "
"sull'istanza %(instance)s"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"Impossibile trovare porte Ethernet accettabili sulla rete fisica "
"'%(physnet)s' per l'istanza %(inst)s, per il VIF basato su SRIOV con indirizzo MAC "
"%(vif_mac)s."
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "Più pool di elaborazione condivisi con nome %(pool)s."
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "Impossibile trovare il pool di elaborazione condiviso %(pool)s"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"L'attributo versione %(attr)s deve essere True o False. Il valore corrente "
"%(val)s non è consentito."
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "Il driver disco configurato non supporta la migrazione o il ridimensionamento."
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "Il ridimensionamento delle istanze con backup su file non è attualmente supportato."
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"L'host non è un membro dello stesso cluster SSP. Il cluster dell'host "
"di origine: %(source_clust_name)s. SSP host di origine: %(source_ssp_name)s."
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Impossibile memorizzare NVRAM per l'istanza %(instance)s. Motivo:"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Impossibile recuperare NVRAM per l'istanza %(instance)s. Motivo:"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Impossibile eliminare NVRAM per l'istanza %(instance)s. Motivo:"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "È necessario impostare l'opzione di configurazione '%(option)s'."
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "Impossibile memorizzare NVRAM dopo %d tentativi"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "L'oggetto non esiste in Swift."
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "Tipo di connessione non valido di %s"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"Impossibile trovare un Virtual I/O Server che ospiti l'associazione porta NPIV per il "
"server."
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"Impossibile rilevare un disco valido su qualsiasi Virtual I/O Server per il volume "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"Impossibile rilevare l'hdisk sul numero richiesto di Virtual I/O "
"Server. Il volume %(volume_id)s richiedeva %(vios_req)d Virtual I/O Server, "
" ma il disco è stato trovato solo su %(vios_act)d Virtual I/O Server."
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1

View File

@ -1,423 +0,0 @@
# English translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "予期されたホストは 1 つのみです。検出されたのは %d 個です"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"このスナップショット操作は %s の CONF.powervm.disk_driver 設定と一緒では"
"サポートされません。"
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "インスタンス %s が見つからなかったため、Plug vif は失敗しました。"
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "予期しないエラーが発生したため、Plug vif は失敗しました。"
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "ディスク・サイズを削減できません。"
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "ローカル・ディスクをマイグレーションできません。"
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"インスタンス %(instance_name)s の VNC ベースの端末を開くことができませんでした: "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"仮想光メディアの保管場所となるボリューム・グループ %(vol_grp)s が "
"が見つかりません。 メディア・リポジトリーを作成できません。"
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"管理区画で SCSI バス %(bus)x がスキャンされました。%(timeout)d 秒にわたる "
"%(polls)d 回のポーリング後、UDID %(udid)s のディスクは検出されませんでした。"
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"%(path_pattern)s の管理区画で 1 つのディスクのみが見つかると予期されて"
"いましたが、%(count)d 個が見つかりました。"
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"デバイス %(devpath)s は、削除の試行後にも依然として管理区画上に存在します。"
" %(timeout)d 秒にわたって %(polls)d 回ポーリングが行われました。"
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"インスタンス %(instance_name)s のブート・ディスクを、どの "
"Virtual I/O Server からも管理区画にマップできませんでした。"
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"Virtual I/O Server %(vios_name)s から管理区画へのストレージ・エレメント "
"%(stg_name)s の新規作成されたマッピングが見つかりませんでした。"
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "この操作用のボリューム・グループ「%(vg_name)s」が見つかりません。"
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "この操作用のクラスター「%(clust_name)s」が見つかりません。"
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "この操作用のクラスターが見つかりません。"
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"名前「%(clust_name)s」に合致するクラスターが予期せず "
"%(clust_count)d 個見つかりました。"
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"cluster_name が指定されていません。 見つかった %(clust_count)d 個の"
"クラスターのうち 1 つを選択することを拒否しています。"
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"ストレージ (id: %(volume_id)s) を仮想マシン %(instance_name)s に"
"接続できません。%(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"仮想マシン %(instance_name)s 上でボリューム (id: %(volume_id)s) を"
"拡張できません。"
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"ボリューム (id: %(volume_id)s) を仮想マシン %(instance_name)s から"
"切り離すことができません。%(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"ボリューム (id: %(volume_id)s) で仮想マシン %(instance_name)s から"
"ライブ・マイグレーション前手順を実行できません。"
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "PowerVM API はインスタンス %(inst_name)s について完了しませんでした。%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"使用可能な Virtual I/O Server がありません。 ドライバーは、VIOS がアクティブに"
"なるまで %(wait_time)d 秒間待機しようとしました。 使用可能な Virtual I/O "
"Server がない場合、計算エージェントは開始できません。 PowerVM NovaLink と "
"Virtual I/O Server の間の RMC 接続を調べて、Nova 計算エージェントを再始動して"
"ください。"
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "使用可能なアクティブ Virtual I/O Server がありません。"
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "新規ホスト上で仮想マシンを再構築できません。 エラーは %(error)s です"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"%(if_opt)s が「%(if_value)s」と指定されている場合、%(then_opt)s オプションが必要です。"
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "インスタンス「%(name)s」のライブ・マイグレーションが次の理由で失敗しました: %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"宛先ホスト %(host)s 上でボリューム %(volume)s を接続できないため、"
"%(name)s をマイグレーションできません。"
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"ホスト %(host)s で許可されている並行マイグレーションは "
"%(allowed)s 個であり、現在実行されているマイグレーションは "
"%(running)s 個であるため、%(name)s をマイグレーションできません。"
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"ソースのメモリー領域サイズ (%(source_mrs)d MB) がターゲットのメモリー"
"領域サイズ (%(target_mrs)d MB) と一致しないため、インスタンス"
"「%(name)s」をマイグレーションできません。"
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"ターゲット・ホストでサポートされるモードのリスト「%(modes)s」に"
"プロセッサー互換モード %(mode)s がないため、%(name)s を"
"マイグレーションできません。"
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"マイグレーション状態が次の状態であったため、インスタンス「%(name)s」の"
"ライブ・マイグレーションが失敗しました: %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"準備ができていないため、インスタンス「%(name)s」のライブ・マイグレーションが失敗しました。 "
"理由: %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "この vif_driver 実装には vif_type パラメーターを指定する必要があります。"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"インスタンス %(instance)s 上で VIF タイプ %(vif_type)s に対して適切な "
"PowerVM VIF ドライバーが見つかりません"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"SRIOV ベースの VIF (MAC アドレス %(vif_mac)s) のインスタンス %(inst)s について"
"物理ネットワーク「%(physnet)s」上に受け入れ可能なイーサネット・ポートが見つかりません。"
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "%(pool)s という名前の共用処理プールが複数あります。"
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "共用処理プール %(pool)s が見つかりません"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"フレーバー属性 %(attr)s は True または False でなければなりません。 現行値 "
"%(val)s は許可されていません。"
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "この構成済みディスク・ドライバーはマイグレーションもサイズ変更もサポートしていません。"
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "ファイル支援のインスタンスのサイズ変更は、現在サポートされていません。"
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"このホストは同じ SSP クラスターのメンバーではありません。 ソース・ホスト・"
"クラスター: %(source_clust_name)s。 ソース・ホスト SSP: %(source_ssp_name)s。"
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"インスタンス %(instance)s について NVRAM を格納できませんでした。 理由:"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"インスタンス %(instance)s について NVRAM を取り出すことができませんでした。 理由:"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"インスタンス %(instance)s について NVRAM を削除できませんでした。 理由:"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "構成オプション「%(option)s」を設定する必要があります。"
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "%d 回試みましたが NVRAM を保管できません。"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "オブジェクトが Swift に存在しません。"
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "%s の接続タイプが無効です"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"Virtual I/O Server (このサーバー自体の NPIV ポート・マップをホストするもの) が"
"見つかりません。"
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"Virtual I/O Server 上でボリュームに対して有効な hdisk をディスカバーできませんでした "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"必要な数の Virtual I/O Server 上で hdisk を検出できませんでした。ボリューム "
"%(volume_id)s には %(vios_req)d 個の Virtual I/O Server が必要でしたが、"
"ディスクは %(vios_act)d 個の Virtual I/O Server 上でのみ検出されました。"
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1

View File

@ -1,425 +0,0 @@
# English translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "정확히 하나의 호스트를 예상했지만 %d개를 찾았습니다."
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"스냅샷 조작은 CONF.powervm.disk_driver 설정이 "
"%s인 경우에는 지원되지 않습니다. "
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "%s 인스턴스를 찾을 수 없으므로 vif 플러그에 실패했습니다."
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "예기치 않은 오류 때문에 vif 플러그에 실패했습니다."
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "디스크 크기를 줄일 수 없습니다."
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "로컬 디스크를 마이그레이션할 수 없습니다."
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"인스턴스 %(instance_name)s에 대한 VNC 기반 터미널을 열지 못함: "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"가상 광학 매체가 저장될 볼륨 그룹 %(vol_grp)s을(를) "
"찾을 수 없습니다. 매체 저장소를 작성할 수 없습니다."
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"관리 파티션에서 SCSI 버스 %(bus)x을(를) 스캔한 경우, "
"UDID %(udid)s의 디스크가 %(timeout)d초 동안 %(polls)d번 폴링한 이후 발견되지 "
"않습니다."
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"%(path_pattern)s에서 관리 파티션의 디스크를 정확히 하나를 "
"찾을 것으로 예상했지만, %(count)d개를 찾았습니다. "
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"삭제를 시도한 후에 장치 %(devpath)s이(가) 아직 관리 파티션에 "
"존재합니다. %(timeout)d초 동안 %(polls)d번 "
"폴링했습니다."
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"인스턴스 %(instance_name)s의 부트 디스크를 "
"Virtual I/O Server의 관리 파티션에 맵핑할 수 없습니다. "
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"Virtual I/O Server %(vios_name)s에서 관리 파티션으로 "
" 스토리지 요소 %(stg_name)s의 새로 작성된 맵핑을 찾을 수 없습니다. "
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "이 조작의 볼륨 그룹 '%(vg_name)s'을(를) 찾을 수 없습니다. "
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "이 조작의 클러스터 '%(clust_name)s'을(를) 찾을 수 없습니다. "
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "이 조작의 클러스터를 찾을 수 없습니다."
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"다음 이름과 일치하는 %(clust_count)d개의 클러스터를 예상치 않게 찾았습니다."
"'%(clust_name)s'."
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"cluster_name이 지정되지 않습니다. 발견된 %(clust_count)d개의 "
" 클러스터 중 하나를 선택할 것을 거부 중입니다. "
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"스토리지(id: %(volume_id)s)를 가상 머신에 연결할 수 없습니다. "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"가상 머신에서 볼륨(id: %(volume_id)s)을 확장할 수 없습니다. "
"%(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"가상 머신에서 볼륨(id: %(volume_id)s)의 연결을 끊을 수 없습니다. "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"볼륨(id: %(volume_id)s)의 이전 실시간 마이그레이션 단계를 "
"%(instance_name)s에서 수행할 수 없습니다. "
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "PowerVM API: instance=%(inst_name)s에 대해 완료에 실패했습니다. 이유: %(reason)s "
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"Virtual I/O Server를 사용할 수 없습니다. 드라이버가 VIOS의 활성화 시점까지 "
" %(wait_time)d초 동안 대기하려고 시도했습니다. Virtual I/O Server를 "
"사용할 수 없으면 계산 에이전트를 시작할 수 없습니다. PowerVM NovaLink 및 "
"Virtual I/O Server 간의 RMC 연결을 확인한 후"
"Nova 계산 에이전트를 다시 시작하십시오."
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "활성 Virtual I/O Server가 사용 가능하지 않습니다. "
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "새 호스트에서 가상 머신을 다시 빌드할 수 없습니다. 오류: %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"%(if_opt)s이(가) 지정된 경우 %(then_opt)s 옵션이 필요합니다."
"'%(if_value)s'."
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "인스턴스 '%(name)s'의 실시간 마이그레이션에 실패했습니다. 이유: %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"볼륨 %(volume)s을(를) 대상 호스트 %(host)s에 연결할 수 없으므로 %(name)s을(를) "
"마이그레이션할 수 없습니다."
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"호스트 %(host)s이(가) %(allowed)s 동시 마이그레이션만 허용하고 "
" %(running)s 마이그레이션이 현재 실행 중이므로 %(name)s을(를) 마이그레이션할 수 없습니다."
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"소스의 메모리 영역 크기(%(source_mrs)d MB)가 대상의 "
"메모리 영역 크기(%(target_mrs)d MB)와 일치하지 않으므로 "
"'%(name)s' 인스턴스를 마이그레이션할 수 없습니다."
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"해당 프로세서 호환성 모드 %(mode)s이(가) 대상 호스트에서 지원하는 모드 \"%(modes)s\"의 목록에 없으므로 "
" %(name)s을(를) 마이그레이션할 수 없습니다."
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"마이그레이션 상태가 %(state)s이므로 인스턴스 '%(name)s'의 실시간 마이그레이션에 "
"실패했습니다."
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"인스턴스 '%(name)s'의 실시간 마이그레이션이 준비되지 않았으므로 실패했습니다. "
"이유: %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "이 vif_driver 구현을 위해 vif_type 매개변수가 존재해야 함"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"VIF 유형 %(vif_type)s에 해당하는 PowerVM VIF 드라이버를 "
"%(instance)s 인스턴스에서 찾을 수 없음"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"MAC 주소가 사용된 SRIOV 기반 VIF의 인스턴스 %(inst)s에 대해 "
"물리적 네트워크 '%(physnet)s'에서 허용되는 이더넷 포트를 찾을 수 없음"
"%(vif_mac)s."
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "이름이 %(pool)s인 다중 공유 처리 풀"
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "공유 처리 풀 %(pool)s을(를) 찾을 수 없음"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"플레이버 속성 %(attr)s은(는) true 또는 false여야 합니다. 현재 값 "
"%(val)s은(는) 허용되지 않습니다."
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "구성된 디스크 드라이버에서 마이그레이션 또는 크기 조정을 지원하지 않습니다."
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "파일 지원 인스턴스 크기 조정이 현재 지원되지 않습니다."
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"호스트가 동일한 SSP 클러스터의 멤버가 아닙니다. 소스 호스트 "
"클러스터: %(source_clust_name)s. 소스 호스트 SSP: %(source_ssp_name)s."
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"인스턴스 %(instance)s에 대해 NVRAM을 저장할 수 없습니다. 이유: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"인스턴스 %(instance)s에 대해 NVRAM을 페치할 수 없습니다. 이유: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"인스턴스 %(instance)s에 대해 NVRAM을 삭제할 수 없습니다. 이유: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "구성 옵션 '%(option)s'을(를) 설정해야 합니다."
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "%d번의 시도 후에는 NVRAM을 저장할 수 없음"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "Swift에 오브젝트가 없습니다."
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "%s의 올바르지 않은 연결 유형"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"서버에 대한 NPIV 포트 맵을 호스트하는 Virtual I/O Server를 찾을 수 "
"없습니다."
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"볼륨 %(volume_id)s에 대한 Virtual I/O Server에서 올바른 hdisk를 검색하는 데 "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"필수 개수의 Virtual I/O Server에서 hdisk를 검색하지 못했습니다. "
"없었습니다. 볼륨 %(volume_id)s에서 %(vios_req)d Virtual I/O Server가 필요하지만, "
" 디스크는 %(vios_act)d개의 Virtual I/O Server에서만 검색되었습니다."
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1

View File

@ -1,348 +0,0 @@
# Translations template for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr ""
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr ""
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr ""
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr ""
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr ""
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr ""
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr ""
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr ""
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr ""
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr ""
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr ""
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr ""
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr ""
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr ""
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr ""
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr ""
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""

View File

@ -1,425 +0,0 @@
# English translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=n>1;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "Esperado exatamente um host; localizados %d"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"A operação de captura instantânea não é suportada em conjunto com uma "
"configuração CONF.powervm.disk_driver de %s."
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "Plugue vif falhou porque a instância %s não foi localizada."
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "Plugue vif falhou devido a erro inesperado."
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "Impossível reduzir o tamanho do disco."
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "Não é possível migrar discos locais."
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"O VNC baseado em terminal para a instância %(instance_name)s falhou ao abrir: "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"Não é possível localizar o grupo de volumes %(vol_grp)s no qual armazenar a mídia "
"virtual ótica. Impossível criar o repositório de mídia."
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"Tendo barramento SCSI digitalizado %(bus)x na partição de gerenciamento, disco com "
"UDID%(udid)s falhou em aparecer após as pesquisas %(polls)d em %(timeout)d "
"segundos."
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"Esperava localizar exatamente um disco na partição de gerenciamento no "
"%(path_pattern)s; localizado%(count)d."
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"O dispositivo %(devpath)s ainda está presente na partição de gerenciamento após "
"tentar excluí-lo. Pesquisado %(polls)d vezes em %(timeout)d "
"segundos."
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"Falha em mapear o disco de inicialização da instância %(instance_name)s para a partição de "
"gerenciamento de qualquer Virtual I/O Server."
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"Falha ao localizar o mapeamento recém-criado do elemento de armazenamento %(stg_name)s do"
" Virtual I/O Server %(vios_name)s para a partição de gerenciamento."
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "Não é possível localizar o grupo de volumes '%(vg_name)s' para esta operação."
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "Não é possível localizar o Cluster '%(clust_name)s' para esta operação."
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "Não é possível localizar um cluster para esta operação."
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"Clusters com nomes correspondentes %(clust_count)d localizados inesperadamente "
"'%(clust_name)s'."
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"Nenhum cluster_name especificado. Recusando-se selecionar um dos %(clust_count)d"
" localizados."
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Não é possível conectar o armazenamento (ID: %(volume_id)s) à máquina virtual "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"Não é possível estender o volume (id:%(volume_id)s) na máquina virtual "
"%(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Não é possível remover o volume (ID: %(volume_id)s) da máquina virtual "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"Não é possível executar as etapas de pré-migração em tempo real no volume (id:%(volume_id)s) "
"a partir da máquina virtual %(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "A API do PowerVM falhou em concluir instance=%(inst_name)s.%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"Nenhum Virtual I/O Server está disponível. O driver tentou aguardar um"
" VIOS (Virtual I/O Server) se tornar ativo por %(wait_time)d segundos. O agente de cálculo "
"não é capaz de iniciar se nenhum Virtual I/O Server está disponível. Verifique "
"a conectividade do RMC entre o PowerVM NovaLink e o Virtual I/O "
"Server e, em seguida, reinicie o Nova Compute Agent."
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "Não há nenhum Virtual I/O Server ativo disponível."
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "Não é possível reconstruir a máquina virtual no novo host. O erro é %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"A opção %(then_opt)s será necessária se %(if_opt)s for especificado como "
"'%(if_value)s'."
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "Migração em tempo real da instância %(name)s' falhou devido a: %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"Impossível migrar %(name)s porque o volume %(volume)s não pode ser anexado "
"no host de destino %(host)s."
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"Impossível migrar %(name)s porque o host %(host)s somente permite %(allowed)s"
" migrações simultâneas e %(running)s migrações está em execução no momento."
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"Impossível migrar instância '%(name)s' porque o tamanho da região de memória da "
"origem (%(source_mrs)d MB) não corresponde ao tamanho da região de memória do "
"destino (%(target_mrs)d MB)."
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"Impossível migrar %(name)s porque seu modo de capacidade de processador %(mode)s"
" não está na lista de modos \"%(modes)s\" suportados pelo host de destino."
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"Migração em tempo real da instância '%(name)s' falhou porque o estado da migração "
"é: %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"A migração em tempo real da instância '%(name)s' falhou porque ela não está pronta. "
"Motivo: %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "o parâmetro vif_type deve estar presente para esta implementação de vif_driver"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"Não é possível localizar o driver da VIF do PowerVM apropriado para o tipo de VIF %(vif_type)s "
"na instância %(instance)s"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"Não é possível localizar portas Ethernet aceitáveis na rede física "
"'%(physnet)s' para a instância %(inst)s para o VIF baseado em SRIOV com endereço MAC "
"%(vif_mac)s."
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "Vários conjuntos de processo compartilhados com o nome %(pool)s."
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "Impossível localizar o conjunto de processamento compartilhado %(pool)s"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"O atributo flavor %(attr)s deve ser True ou False. O valor atual "
"%(val)s não é permitido."
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "O driver do disco configurado não suporta migração ou redimensionamento."
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "O redimensionamento das instâncias suportadas por arquivo não é suportado atualmente."
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"O host não é um membro do mesmo cluster do SSP. A máquina do host de origem "
"cluster: %(source_clust_name)s. O SSP do host de origem: %(source_ssp_name)s."
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"O NVRAM não pôde ser armazenado para a instância %(instance)s. Razão: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"O NVRAM não pôde ser buscado para a instância %(instance)s. Razão: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"O NVRAM não pôde ser excluído para a instância %(instance)s. Razão: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "A opção de configuração '%(option)s' deve ser configurada."
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "Não é possível armazenar a NVRAM (memória de acesso aleatório não volátil) após %d tentativas"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "O objeto não existe no Swift."
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "Tipo de conexão inválida de %s"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"Não é possível localizar um Virtual I/O Server que hospede o mapa de porta NPIV para o "
"servidor rabbitmq."
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"Falha em descobrir hdisk válido em qualquer Virtual I/O Server para o volume "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"Falha ao descobrir o hdisk no número necessário de Virtual I/O "
"Server. O volume %(volume_id)s requeria %(vios_req)d Virtual I/O Servers,"
" mas o disco somente foi localizado em %(vios_act)d Virtual I/O Servers."
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1
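For reference, a catalog like the one above is consumed through the standard gettext toolchain: the .po source is compiled into a binary .mo file and looked up at runtime by message domain and locale. A minimal sketch, assuming an illustrative 'locale' directory and the nova_powervm domain (neither is taken from this repository's packaging):

# Compile first, e.g.: msgfmt nova_powervm.po -o locale/pt_BR/LC_MESSAGES/nova_powervm.mo
import gettext

# Load the pt_BR catalog; fall back to the untranslated msgids if absent.
trans = gettext.translation('nova_powervm', localedir='locale',
                            languages=['pt_BR'], fallback=True)
_ = trans.gettext
print(_("Unable to find Shared Processing Pool %(pool)s") % {'pool': 'pool1'})

With fallback=True the English msgid is returned when no catalog is installed, which is why missing translations degrade gracefully.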


@ -1,425 +0,0 @@
# Russian translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "Ожидался только один хост; обнаружено: %d"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"Операция моментальной копии не поддерживается, если для параметра "
"CONF.powervm.disk_driver указано значение %s."
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "Подключение vif не выполнено, поскольку экземпляр %s не найден."
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "Подключение vif не выполнено вследствие непредвиденной ошибки."
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "Невозможно уменьшить размер диска."
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "Невозможно выполнить миграцию локальных дисков."
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"Не удалось открыть терминал VNC для экземпляра %(instance_name)s: "
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"Не удалось найти группу томов %(vol_grp)s для размещения виртуального "
"оптического носителя. Не удалось создать хранилище носителей."
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"После сканирования шины SCSI %(bus)x в разделе управления "
"не удалось отобразить диск с UDID %(udid)s после %(polls)d опросов за %(timeout)d "
"секундах."
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"Ожидался ровно один диск в разделе управления "
"%(path_pattern)s; обнаружено %(count)d."
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"Устройство %(devpath)s по-прежнему присутствует в разделе управления после "
"попытки удалить его. Опрошено %(polls)d раз за %(timeout)d "
"сек. "
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"Не удалось подключить загрузочный диск экземпляра %(instance_name)s к разделу "
"управления ни через один сервер VIOS."
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"Не найдена только что созданная связь элемента системы хранения %(stg_name)s"
" из сервера VIOS %(vios_name)s с разделом управления."
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "Не найдена группа томов '%(vg_name)s', необходимая для этой операции."
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "Не найден кластер '%(clust_name)s', необходимый для этой операции."
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "Не найден ни один кластер для выполнения этой операции."
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"Неожиданно обнаружено %(clust_count)d кластеров с именем "
"'%(clust_name)s'."
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"Не задано значение cluster_name. Невозможно выбрать ни один из %(clust_count)d"
" найденных кластеров."
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Не удалось подключить устройство хранения (ИД: %(volume_id)s) к виртуальной машине "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"Не удалось расширить том (ИД: %(volume_id)s) в виртуальной машине "
"%(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"Не удалось отключить том (ИД: %(volume_id)s) от виртуальной машины "
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"Не удалось выполнить предварительные шаги оперативной миграции для тома (ИД: %(volume_id)s) "
"в виртуальной машине %(instance_name)s."
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "Сбой API PowerVM для экземпляра %(inst_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"Не доступен ни один сервер VIOS. Драйвер ждал, пока какой-либо"
" VIOS станет активным, в течение %(wait_time)d с. Вычислительный агент "
"нельзя запустить, если нет активных серверов VIOS. Проверьте "
"соединение между PowerVM NovaLink и VIOS, "
"затем перезапустите вычислительный агент Nova."
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "Нет ни одного активного сервера VIOS."
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "Невозможно заново скомпоновать виртуальную машину на новом хосте. Ошибка: %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"Должен быть указан параметр %(then_opt)s, если в параметре %(if_opt)s указано "
"'%(if_value)s'."
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "Сбой оперативной миграции экземпляра '%(name)s', причина: %(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"Невозможно выполнить миграцию %(name)s, поскольку том %(volume)s нельзя подключить "
"к целевому хосту %(host)s."
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"Невозможно выполнить миграцию %(name)s, так как хост %(host)s допускает не более %(allowed)s"
" параллельных операций миграции, а в данный момент выполняется %(running)s миграций."
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"Невозможно выполнить миграцию экземпляра '%(name)s', поскольку размер исходной области памяти "
"(%(source_mrs)d МБ) не совпадает с размером целевой области памяти "
"(%(target_mrs)d МБ)."
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"Невозможно выполнить миграцию %(name)s, поскольку режим совместимости процессора %(mode)s"
" отсутствует в списке поддерживаемых режимов \"%(modes)s\" целевого хоста."
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"Сбой оперативной миграции экземпляра '%(name)s', поскольку миграция находится в следующем "
"состоянии: %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"Сбой оперативной миграции экземпляра '%(name)s', поскольку подготовка не выполнена. "
"Причина: %(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "Параметр vif_type должен присутствовать для этой реализации vif_driver"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"Не найден соответствующий драйвер VIF PowerVM для типа VIF %(vif_type)s "
"в экземпляре %(instance)s"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"В физической сети '%(physnet)s' не найдены подходящие порты Ethernet "
"для экземпляра %(inst)s для VIF на основе SRIOV с MAC-адресом "
"%(vif_mac)s."
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "Несколько общих пулов процессоров с именем %(pool)s."
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "Не удалось найти общий пул процессоров %(pool)s"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"Атрибут Flavor %(attr)s должен иметь значение True или False. Текущее значение "
"%(val)s недопустимо."
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "Настроенный драйвер диска не поддерживает миграцию или изменение размера."
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "Изменение экземпляров на основе файлов пока не поддерживается."
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"Хост не является элементом того же кластера SSP. Кластер исходного хоста: "
"%(source_clust_name)s. SSP исходного хоста: %(source_ssp_name)s."
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Не удалось сохранить NVRAM для экземпляра %(instance)s. Причина: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Не удалось получить NVRAM для экземпляра %(instance)s. Причина: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"Невозможно удалить NVRAM для экземпляра %(instance)s. Причина: "
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "Должен быть задан параметр конфигурации '%(option)s'."
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "Не удалось сохранить NVRAM за %d попыток"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "Объект не существует в Swift."
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "Недопустимый тип соединения %s"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"Не найден VIOS с картой портов NPIV для "
"сервера."
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"Не удалось найти допустимый жесткий диск на серверах виртуального ввода-вывода для тома "
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"Не найден жесткий диск на требуемом числе VIOS. "
"VIOS. Для тома %(volume_id)s требуется %(vios_req)d VIOS, "
" однако диск найден только на %(vios_act)d VIOS."
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1
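The Russian catalog above declares three plural forms in its Plural-Forms header; gettext evaluates that expression against the count passed to ngettext to pick a msgstr index. A standalone sketch of the same selection rule (illustrative; not part of the deleted sources):

def ru_plural_index(n):
    # Mirrors "plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 &&
    # (n%100<10 || n%100>=20) ? 1 : 2;" from the catalog header above.
    if n % 10 == 1 and n % 100 != 11:
        return 0  # 1, 21, 31, ... take the first (singular) form
    if 2 <= n % 10 <= 4 and (n % 100 < 10 or n % 100 >= 20):
        return 1  # 2-4, 22-24, ... take the second (paucal) form
    return 2      # everything else, including 11-14, takes the third form

assert [ru_plural_index(n) for n in (1, 2, 5, 11, 21)] == [0, 1, 2, 2, 0]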


@ -1,425 +0,0 @@
# Chinese (Simplified) translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "期望刚好找到一个主机;但是找到 %d 个主机"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"当 CONF.powervm.disk_driver 设置为"
"%s 时,不支持快照操作。"
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "插入 VIF 失败,因为找不到实例 %s。"
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "插入 VIF 失败,因为发生了意外错误。"
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "无法减小磁盘大小。"
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "无法迁移本地磁盘。"
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"对于 %(instance_name)s 实例,未能打开基于 VNC 的终端:"
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"找不到用于存储虚拟光学介质的卷组 %(vol_grp)s。"
"无法创建介质存储库。"
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"已扫描管理分区上的 SCSI 总线 %(bus)x"
"在 %(timeout)d 秒内进行 %(polls)d 次轮询后UDID 为 %(udid)s 的磁盘未能"
"显示。"
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"期望以下位置的管理分区中找到正好一个磁盘:"
"%(path_pattern)s但发现 %(count)d 个磁盘。"
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"尝试删除设备 %(devpath)s 之后,"
"该设备仍然存在管理分区上。已轮询 %(polls)d 次,耗时 %(timeout)d"
"秒。"
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"无法将实例 %(instance_name)s 的引导磁盘映射至任何"
"Virtual I/O Server 中的管理分区。"
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"找不到存储元素 %(stg_name)s 的新建映射"
" (从 Virtual I/O Server %(vios_name)s 映射到管理分区)。"
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "找不到对应此操作的卷组“%(vg_name)s”。"
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "找不到对应此操作的集群“%(clust_name)s”。"
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "找不到对应此操作的任何集群。"
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"意外找到 %(clust_count)d 个与名称"
"“%(clust_name)s”匹配的集群。"
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"未指定 cluster_name。拒绝选择所发现的 %(clust_count)d 个"
" 集群中的一个。"
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"无法将存储器(标识:%(volume_id)s连接至虚拟机"
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"无法扩展虚拟机 %(instance_name)s 上的"
"卷(标识:%(volume_id)s。"
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"无法将卷(标识:%(volume_id)s从虚拟机"
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"无法从虚拟机 %(instance_name)s 对卷(标识:%(volume_id)s"
"执行预先实时迁移步骤。"
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "未能对实例 %(inst_name)s 完成 PowerVM API。原因%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"没有可用的 Virtual I/O Server。驱动程序已尝试等待"
" %(wait_time)d 秒以使 VIOS 变为活动状态。没有可用的"
"Virtual I/O Server 时,计算代理程序无法启动。请检查"
"PowerVM NovaLink 与 Virtual I/O Server 之间的 RMC 连接,"
"然后重新启动 Nova 计算代理程序。"
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "没有活动可用 Virtual I/O Server。"
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "无法在新主机上重建虚拟机。错误为 %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"%(then_opt)s 选项为必需(如果 %(if_opt)s 指定为"
"“%(if_value)s”。"
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "实时迁移实例“%(name)s”失败原因%(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"无法迁移 %(name)s因为在目标主机 %(host)s 上"
"无法连接卷 %(volume)s。"
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"无法迁移 %(name)s因为主机 %(host)s 只允许 %(allowed)s 个"
" 个并行迁移,但是有 %(running)s 个迁移当前正在运行。"
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"无法迁移实例“%(name)s”"
"因为源的内存区域大小 (%(source_mrs)d MB)"
"与目标的内存区域大小 (%(target_mrs)d MB) 不匹配。"
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"无法迁移 %(name)s因为它的处理器兼容性方式 %(mode)s"
" 不在目标主机所支持的方式列表“%(modes)s”中。"
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"实时迁移实例“%(name)s”失败"
"因为迁移状态为 %(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"实时迁移实例“%(name)s”失败因为它未就绪。"
"原因:%(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "对于此 vif_driver 实现,必须存在 vif_type 参数"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"在下列实例上,找不到 VIF 类型 %(vif_type)s 的相应 PowerVM VIF 驱动程序:"
"%(instance)s"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"对于具有 MAC 地址 %(vif_mac)s 的基于 SRIOV 的 VIF 的"
"实例 %(inst)s在物理网络“%(physnet)s”上找不到可接受的"
"以太网端口。"
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "存在多个名称为 %(pool)s 的共享处理池。"
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "找不到共享处理池 %(pool)s"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"flavor 属性 %(attr)s 必须为 True 或 False。"
"不允许使用当前值 %(val)s。"
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "所配置的磁盘驱动程序不支持迁移或调整大小。"
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "当前不支持调整文件备份实例的大小。"
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"该主机不是同一 SSP 集群的成员。源主机"
"集群:%(source_clust_name)s。源主机 SSP%(source_ssp_name)s。"
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"无法存储实例 %(instance)s 的 NVRAM。原因"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"无法访存实例 %(instance)s 的 NVRAM。原因"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"无法删除实例 %(instance)s 的 NVRAM。原因"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "必须设置配置选项“%(option)s”。"
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "尝试 %d 次之后仍然无法存储 NVRAM"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "Swift 中没有对象。"
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "%s 的连接类型无效"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"找不到用来管理服务器的 NPIV 端口映射的"
"Virtual I/O Server。"
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"未能在任何 Virtual I/O Server 上发现卷的有效 hdisk"
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"未能在所需数量的 Virtual I/O Server 上发现"
"hdisk。卷 %(volume_id)s 需要 %(vios_req)d 个 Virtual I/O Server"
" 但仅在 %(vios_act)d 个 Virtual I/O Server 上找到磁盘。"
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1


@ -1,425 +0,0 @@
# Chinese (Traditional) translations for nova_powervm.
# Copyright (C) 2018 ORGANIZATION
# This file is distributed under the same license as the nova_powervm
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2018.
#
msgid ""
msgstr ""
"Project-Id-Version: nova_powervm 6.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2018-03-19 18:06-0400\n"
"PO-Revision-Date: 2018-03-19 18:07-0400\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.5.3\n"
#: nova_powervm/virt/powervm/driver.py:216
#, python-format
msgid "Expected exactly one host; found %d"
msgstr "預期只有一個主機;但找到 %d 個"
#: nova_powervm/virt/powervm/driver.py:821
#, python-format
msgid ""
"The snapshot operation is not supported in conjunction with a "
"CONF.powervm.disk_driver setting of %s."
msgstr ""
"當 CONF.powervm.disk_driver 設定為"
"%s 時,不支援 Snapshot 作業。"
#: nova_powervm/virt/powervm/driver.py:1023
#, python-format
msgid "Plug vif failed because instance %s was not found."
msgstr "插入 VIF 失敗,因為找不到實例 %s。"
#: nova_powervm/virt/powervm/driver.py:1028
msgid "Plug vif failed because of an unexpected error."
msgstr "插入 VIF 失敗,因為發生了非預期的錯誤。"
#: nova_powervm/virt/powervm/driver.py:1118
msgid "Cannot reduce disk size."
msgstr "無法減少磁碟大小。"
#: nova_powervm/virt/powervm/driver.py:1132
#: nova_powervm/virt/powervm/driver.py:1240
msgid "Cannot migrate local disks."
msgstr "無法移轉本端磁碟。"
#: nova_powervm/virt/powervm/driver.py:1757
#, python-format
msgid ""
"VNC based terminal for instance %(instance_name)s failed to open: "
"%(exc_msg)s"
msgstr ""
"針對 %(instance_name)s 實例,未能開啟 VNC 型終端機:"
"%(exc_msg)s"
#: nova_powervm/virt/powervm/exception.py:38
#, python-format
msgid ""
"Unable to locate the volume group %(vol_grp)s to store the virtual "
"optical media within. Unable to create the media repository."
msgstr ""
"找不到在其中儲存虛擬光學媒體的磁區群組 %(vol_grp)s。"
"無法建立媒體儲存庫。"
#: nova_powervm/virt/powervm/exception.py:45
#, python-format
msgid ""
"Having scanned SCSI bus %(bus)x on the management partition, disk with "
"UDID %(udid)s failed to appear after %(polls)d polls over %(timeout)d "
"seconds."
msgstr ""
"在管理分割區上掃描 SCSI 匯流排 %(bus)x 時,"
"UDID 為 %(udid)s 的磁碟未在 %(timeout)d 秒內的 %(polls)d 次輪詢之後"
"出現。"
#: nova_powervm/virt/powervm/exception.py:52
#, python-format
msgid ""
"Expected to find exactly one disk on the management partition at "
"%(path_pattern)s; found %(count)d."
msgstr ""
"預期在 %(path_pattern)s 處的管理分割區上只找到"
"一個磁碟;但卻找到 %(count)d 個。"
#: nova_powervm/virt/powervm/exception.py:58
#, python-format
msgid ""
"Device %(devpath)s is still present on the management partition after "
"attempting to delete it. Polled %(polls)d times over %(timeout)d "
"seconds."
msgstr ""
"在嘗試刪除裝置 %(devpath)s 之後,該裝置仍"
"呈現在管理分割區上。已輪詢 %(polls)d 次,歷時 %(timeout)d"
"秒。"
#: nova_powervm/virt/powervm/exception.py:64
#, python-format
msgid ""
"Failed to map boot disk of instance %(instance_name)s to the management "
"partition from any Virtual I/O Server."
msgstr ""
"無法透過任何 Virtual I/O Server 將實例 %(instance_name)s 的開機磁碟"
"對映至管理分割區。"
#: nova_powervm/virt/powervm/exception.py:70
#, python-format
msgid ""
"Failed to find newly-created mapping of storage element %(stg_name)s from"
" Virtual I/O Server %(vios_name)s to the management partition."
msgstr ""
"找不到儲存體元素 %(stg_name)s 的新建對映"
" (從 Virtual I/O Server %(vios_name)s 對映至管理分割區)。"
#: nova_powervm/virt/powervm/exception.py:76
#, python-format
msgid "Unable to locate the volume group '%(vg_name)s' for this operation."
msgstr "找不到用於這項作業的磁區群組 '%(vg_name)s'。"
#: nova_powervm/virt/powervm/exception.py:81
#, python-format
msgid "Unable to locate the Cluster '%(clust_name)s' for this operation."
msgstr "找不到用於這項作業的叢集 '%(clust_name)s'。"
#: nova_powervm/virt/powervm/exception.py:86
msgid "Unable to locate any Cluster for this operation."
msgstr "找不到用於這項作業的任何叢集。"
#: nova_powervm/virt/powervm/exception.py:90
#, python-format
msgid ""
"Unexpectedly found %(clust_count)d Clusters matching name "
"'%(clust_name)s'."
msgstr ""
"非預期地找到 %(clust_count)d 個符合名稱"
"'%(clust_name)s' 的叢集。"
#: nova_powervm/virt/powervm/exception.py:95
#, python-format
msgid ""
"No cluster_name specified. Refusing to select one of the %(clust_count)d"
" Clusters found."
msgstr ""
"未指定 cluster_name。將拒絕選取所找到的 %(clust_count)d "
" 個叢集中的一個。"
#: nova_powervm/virt/powervm/exception.py:100
#, python-format
msgid ""
"Unable to attach storage (id: %(volume_id)s) to virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"無法將儲存體ID%(volume_id)s連接至虛擬機器"
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:105
#, python-format
msgid ""
"Unable to extend volume (id: %(volume_id)s) on virtual machine "
"%(instance_name)s."
msgstr ""
"無法延伸虛擬機器 %(instance_name)s 上的"
"磁區ID%(volume_id)s。"
#: nova_powervm/virt/powervm/exception.py:110
#, python-format
msgid ""
"Unable to detach volume (id: %(volume_id)s) from virtual machine "
"%(instance_name)s. %(reason)s"
msgstr ""
"無法將磁區ID%(volume_id)s從下列虛擬機器分離"
"%(instance_name)s. %(reason)s"
#: nova_powervm/virt/powervm/exception.py:115
#, python-format
msgid ""
"Unable to perform pre live migration steps on volume (id: %(volume_id)s) "
"from virtual machine %(instance_name)s."
msgstr ""
"從以下虛擬機器中無法對磁區ID%(volume_id)s執行前置即時移轉步驟"
"%(instance_name)s。"
#: nova_powervm/virt/powervm/exception.py:120
#, python-format
msgid "PowerVM API failed to complete for instance=%(inst_name)s.%(reason)s"
msgstr "未能對實例 %(inst_name)s 完成 PowerVM API。%(reason)s"
#: nova_powervm/virt/powervm/exception.py:125
#, python-format
msgid ""
"No Virtual I/O Servers are available. The driver attempted to wait for a"
" VIOS to become active for %(wait_time)d seconds. The compute agent is "
"not able to start if no Virtual I/O Servers are available. Please check "
"the RMC connectivity between the PowerVM NovaLink and the Virtual I/O "
"Servers and then restart the Nova Compute Agent."
msgstr ""
"沒有 Virtual I/O Server 可用。驅動程式已嘗試等待"
" VIOS 變為作用中狀態達 %(wait_time)d 秒。沒有可用的"
"Virtual I/O Server 時,計算代理程式無法啟動。請檢查"
"PowerVM NovaLink 與 Virtual I/O Server 之間的 RMC 連線功能,"
"然後重新啟動 Nova 計算代理程式。"
#: nova_powervm/virt/powervm/exception.py:134
msgid "There are no active Virtual I/O Servers available."
msgstr "沒有作用中的 Virtual I/O Server 可用。"
#: nova_powervm/virt/powervm/exception.py:138
#, python-format
msgid "Unable to rebuild virtual machine on new host. Error is %(error)s"
msgstr "無法在新主機上重建虛擬機器。錯誤為 %(error)s"
#: nova_powervm/virt/powervm/exception.py:143
#, python-format
msgid ""
"The %(then_opt)s option is required if %(if_opt)s is specified as "
"'%(if_value)s'."
msgstr ""
"%(then_opt)s 選項是需要的(如果 %(if_opt)s 指定為"
"'%(if_value)s'。"
#: nova_powervm/virt/powervm/live_migration.py:44
#, python-format
msgid "Live migration of instance '%(name)s' failed for reason: %(reason)s"
msgstr "實例 '%(name)s' 的即時移轉失敗,原因:%(reason)s"
#: nova_powervm/virt/powervm/live_migration.py:49
#, python-format
msgid ""
"Cannot migrate %(name)s because the volume %(volume)s cannot be attached "
"on the destination host %(host)s."
msgstr ""
"無法移轉 %(name)s因為磁區 %(volume)s 無法連接到"
"目的地主機 %(host)s。"
#: nova_powervm/virt/powervm/live_migration.py:59
#, python-format
msgid ""
"Cannot migrate %(name)s because the host %(host)s only allows %(allowed)s"
" concurrent migrations and %(running)s migrations are currently running."
msgstr ""
"無法移轉 %(name)s因為主機 %(host)s 只容許 %(allowed)s"
" 個並行移轉,但卻有 %(running)s 個移轉目前在執行中。"
#: nova_powervm/virt/powervm/live_migration.py:109
#, python-format
msgid ""
"Cannot migrate instance '%(name)s' because the memory region size of the "
"source (%(source_mrs)d MB) does not match the memory region size of the "
"target (%(target_mrs)d MB)."
msgstr ""
"無法移轉實例 '%(name)s',因為來源的記憶體範圍大小"
"(%(source_mrs)d MB) 與目標的記憶體範圍大小"
"(%(target_mrs)d MB) 不符。"
#: nova_powervm/virt/powervm/live_migration.py:279
#, python-format
msgid ""
"Cannot migrate %(name)s because its processor compatibility mode %(mode)s"
" is not in the list of modes \"%(modes)s\" supported by the target host."
msgstr ""
"無法移轉 %(name)s因為它的處理器相容模式 %(mode)s"
" 不在目標主機所支援的模式清單 \"%(modes)s\" 中。"
#: nova_powervm/virt/powervm/live_migration.py:294
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because the migration state "
"is: %(state)s"
msgstr ""
"實例 '%(name)s' 的即時移轉失敗,因為移轉狀態為:"
"%(state)s"
#: nova_powervm/virt/powervm/live_migration.py:455
#, python-format
msgid ""
"Live migration of instance '%(name)s' failed because it is not ready. "
"Reason: %(reason)s"
msgstr ""
"實例 '%(name)s' 的即時移轉失敗,因為該實例尚未備妥。"
"原因:%(reason)s"
#: nova_powervm/virt/powervm/vif.py:85
msgid "vif_type parameter must be present for this vif_driver implementation"
msgstr "此 vif_driver 實作的 vif_type 參數必須存在"
#: nova_powervm/virt/powervm/vif.py:95
#, python-format
msgid ""
"Unable to find appropriate PowerVM VIF Driver for VIF type %(vif_type)s "
"on instance %(instance)s"
msgstr ""
"在下列實例上,找不到 VIF 類型 %(vif_type)s 的適當 PowerVM VIF 驅動程式:"
"%(instance)s"
#: nova_powervm/virt/powervm/vif.py:540
#, python-format
msgid ""
"Unable to find acceptable Ethernet ports on physical network "
"'%(physnet)s' for instance %(inst)s for SRIOV based VIF with MAC address "
"%(vif_mac)s."
msgstr ""
"對於 MAC 位址為 %(vif_mac)s 的 SRIOV 型 VIF 的"
"實例 %(inst)s在實體網路 '%(physnet)s' 上找不到可接受的"
"乙太網路埠。"
#: nova_powervm/virt/powervm/vm.py:449
#, python-format
msgid "Multiple Shared Processing Pools with name %(pool)s."
msgstr "多個「共用處理程序儲存區」具有名稱 %(pool)s。"
#: nova_powervm/virt/powervm/vm.py:453
#, python-format
msgid "Unable to find Shared Processing Pool %(pool)s"
msgstr "找不到「共用處理程序儲存區」%(pool)s"
#: nova_powervm/virt/powervm/vm.py:475
#, python-format
msgid ""
"Flavor attribute %(attr)s must be either True or False. Current value "
"%(val)s is not allowed."
msgstr ""
"flavor 屬性 %(attr)s 必須為 True 或 False。不容許現行值"
"%(val)s。"
#: nova_powervm/virt/powervm/disk/driver.py:129
msgid "The configured disk driver does not support migration or resize."
msgstr "所配置的磁碟驅動程式不支援移轉或調整大小。"
#: nova_powervm/virt/powervm/disk/localdisk.py:300
msgid "Resizing file-backed instances is not currently supported."
msgstr "目前不支援重新調整檔案所支持實例的大小。"
#: nova_powervm/virt/powervm/disk/ssp.py:119
#, python-format
msgid ""
"The host is not a member of the same SSP cluster. The source host "
"cluster: %(source_clust_name)s. The source host SSP: %(source_ssp_name)s."
msgstr ""
"主機不是同一 SSP 叢集的成員。來源主機"
"叢集:%(source_clust_name)s。來源主機 SSP%(source_ssp_name)s。"
#: nova_powervm/virt/powervm/nvram/api.py:25
#, python-format
msgid ""
"The NVRAM could not be stored for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"無法儲存實例 %(instance)s 的 NVRAM。原因"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:30
#, python-format
msgid ""
"The NVRAM could not be fetched for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"無法提取實例 %(instance)s 的 NVRAM。原因"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:35
#, python-format
msgid ""
"The NVRAM could not be deleted for instance %(instance)s. Reason: "
"%(reason)s"
msgstr ""
"無法刪除實例 %(instance)s 的 NVRAM。原因"
"%(reason)s"
#: nova_powervm/virt/powervm/nvram/api.py:40
#, python-format
msgid "The configuration option '%(option)s' must be set."
msgstr "必須設定配置選項 '%(option)s'。"
#: nova_powervm/virt/powervm/nvram/swift.py:195
#, python-format
msgid "Unable to store NVRAM after %d attempts"
msgstr "嘗試 %d 次之後仍然無法儲存 NVRAM"
#: nova_powervm/virt/powervm/nvram/swift.py:272
msgid "Object does not exist in Swift."
msgstr "物件不存在於 Swift 中。"
#: nova_powervm/virt/powervm/volume/__init__.py:65
#, python-format
msgid "Invalid connection type of %s"
msgstr "連線類型 %s 無效"
#: nova_powervm/virt/powervm/volume/npiv.py:522
msgid ""
"Unable to find a Virtual I/O Server that hosts the NPIV port map for the "
"server."
msgstr ""
"找不到用來管理伺服器之 NPIV 埠對映的"
"Virtual I/O Server。"
#: nova_powervm/virt/powervm/volume/volume.py:117
#, python-format
msgid ""
"Failed to discover valid hdisk on any Virtual I/O Server for volume "
"%(volume_id)s."
msgstr ""
"針對下列磁區,無法在任何 Virtual I/O Server 上探索有效硬碟:"
"%(volume_id)s."
#: nova_powervm/virt/powervm/volume/volume.py:121
#, python-format
msgid ""
"Failed to discover the hdisk on the required number of Virtual I/O "
"Servers. Volume %(volume_id)s required %(vios_req)d Virtual I/O Servers,"
" but the disk was only found on %(vios_act)d Virtual I/O Servers."
msgstr ""
"無法在所需數量的 Virtual I/O Server 上探索到"
"硬碟。磁區 %(volume_id)s 需要 %(vios_req)d 個 Virtual I/O Server"
" 但卻只在 %(vios_act)d 個 Virtual I/O Server 上找到磁碟。"
# ENGL1SH_VERS10N 62006_10 DO NOT REMOVE OR CHANGE THIS LINE
# T9N_SRC_ID 28
# T9N_SH1P_STR1NG VC141AAP001 1
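All four catalogs in this change share the same English msgids, which are extracted (per the Babel headers above) from translation-marked strings in the source tree. A minimal sketch of how such a string is typically marked, assuming the usual oslo.i18n pattern; the exact module wiring in nova_powervm may differ:

import oslo_i18n

# Factory for the nova_powervm text domain; strings wrapped with _() become
# the msgids extracted into catalogs like the ones above.
_translators = oslo_i18n.TranslatorFactory(domain='nova_powervm')
_ = _translators.primary

message = _("Unable to find Shared Processing Pool %(pool)s") % {'pool': 'pool1'}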


@ -1,163 +0,0 @@
# Copyright 2016, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import test
# Import the submodules explicitly; a bare "import oslo_config" does not
# guarantee oslo_config.cfg and oslo_config.fixture are loaded.
import oslo_config.cfg
import oslo_config.fixture
from nova_powervm import conf as cfg
CONF = cfg.CONF
class TestConf(test.NoDBTestCase):
def setUp(self):
super(TestConf, self).setUp()
def test_conf(self):
"""Tests that powervm config values are configured."""
# Try an option from each grouping of static options
# Base set of options
self.assertEqual(0.1, CONF.powervm.proc_units_factor)
# Local disk
self.assertEqual('', CONF.powervm.volume_group_name)
# SSP disk
self.assertEqual('', CONF.powervm.cluster_name)
# Volume attach
self.assertEqual('vscsi', CONF.powervm.fc_attach_strategy)
# NPIV
self.assertEqual(1, CONF.powervm.ports_per_fabric)
class TestConfBounds(test.NoDBTestCase):
def setUp(self):
super(TestConfBounds, self).setUp()
def _bounds_test(self, should_pass, opts, **kwargs):
"""Test the bounds of an option."""
# Use the Oslo fixture to create a temporary conf object
with oslo_config.fixture.Config(oslo_config.cfg.ConfigOpts()) as fx:
# Load the raw values
fx.load_raw_values(group='powervm', **kwargs)
# Register the options
fx.register_opts(opts, group='powervm')
# For each kwarg option passed, validate it.
for kw in kwargs:
if not should_pass:
# Reference the option to cause a bounds exception
self.assertRaises(oslo_config.cfg.ConfigFileValueError,
lambda: fx.conf.powervm[kw])
else:
# It's expected to succeed
fx.conf.powervm[kw]
def test_bounds(self):
# Uncapped proc weight
self._bounds_test(False, cfg.powervm.powervm_opts,
uncapped_proc_weight=0)
self._bounds_test(False, cfg.powervm.powervm_opts,
uncapped_proc_weight=256)
self._bounds_test(True, cfg.powervm.powervm_opts,
uncapped_proc_weight=200)
# vopt media repo size
self._bounds_test(False, cfg.powervm.powervm_opts,
vopt_media_rep_size=0)
self._bounds_test(True, cfg.powervm.powervm_opts,
vopt_media_rep_size=10)
# vscsi connections
self._bounds_test(False, cfg.powervm.vol_adapter_opts,
vscsi_vios_connections_required=0)
self._bounds_test(True, cfg.powervm.vol_adapter_opts,
vscsi_vios_connections_required=2)
# ports per fabric
self._bounds_test(False, cfg.powervm.npiv_opts,
ports_per_fabric=0)
self._bounds_test(True, cfg.powervm.npiv_opts,
ports_per_fabric=2)
class TestConfChoices(test.NoDBTestCase):
def setUp(self):
super(TestConfChoices, self).setUp()
def _choice_test(self, invalid_choice, valid_choices, opts, option,
ignore_case=True):
"""Test the choices of an option."""
def _setup(fx, value):
# Load the raw values
fx.load_raw_values(group='powervm', **{option: value})
# Register the options
fx.register_opts(opts, group='powervm')
def _build_list():
for val in valid_choices:
yield val
yield val.lower()
yield val.upper()
if ignore_case:
# We expect to be able to ignore upper/lower case, so build a list
# of possibilities and ensure we do ignore them.
valid_choices = [x for x in _build_list()]
if invalid_choice:
# Use the Oslo fixture to create a temporary conf object
with oslo_config.fixture.Config(oslo_config.cfg.ConfigOpts()
) as fx:
_setup(fx, invalid_choice)
# Reference the option to cause an exception
self.assertRaises(oslo_config.cfg.ConfigFileValueError,
lambda: fx.conf.powervm[option])
for choice in valid_choices:
# Use the Oslo fixture to create a temporary conf object
with oslo_config.fixture.Config(oslo_config.cfg.ConfigOpts()
) as fx:
_setup(fx, choice)
# It's expected to succeed
fx.conf.powervm[option]
def test_choices(self):
# FC attachment
self._choice_test('bad_value', ['vscsi', 'npiv'],
cfg.powervm.vol_adapter_opts, 'fc_attach_strategy')
class TestConfDynamic(test.NoDBTestCase):
def setUp(self):
super(TestConfDynamic, self).setUp()
self.conf_fx = self.useFixture(
oslo_config.fixture.Config(oslo_config.cfg.ConfigOpts()))
# Set the raw values in the config
self.conf_fx.load_raw_values(group='powervm', fabrics='A,B',
fabric_A_port_wwpns='WWPN1',
fabric_B_port_wwpns='WWPN2')
# Now register the NPIV options with the values
self.conf_fx.register_opts(cfg.powervm.npiv_opts, group='powervm')
self.conf = self.conf_fx.conf
def test_npiv(self):
"""Tests that NPIV dynamic options are registered correctly."""
# Register the dynamic FC values
fabric_mapping = {}
cfg.powervm._register_fabrics(self.conf, fabric_mapping)
self.assertEqual('A,B', self.conf.powervm.fabrics)
self.assertEqual('WWPN1', self.conf.powervm.fabric_A_port_wwpns)
self.assertEqual('WWPN2', self.conf.powervm.fabric_B_port_wwpns)
# Ensure the NPIV data was setup correctly
self.assertEqual({'B': ['WWPN2'], 'A': ['WWPN1']}, fabric_mapping)
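The bounds tests above depend on oslo.config validating min/max when an option is first referenced rather than at load time. A sketch of the kind of declaration they exercise; the limits shown are inferred from the 0/256-fail and 200-pass expectations above, not copied from nova_powervm/conf/powervm.py:

from oslo_config import cfg

# Out-of-range values do not fail when the file is parsed; oslo.config
# raises ConfigFileValueError when the option is first accessed, which is
# exactly what _bounds_test provokes via fx.conf.powervm[kw].
powervm_opts = [
    cfg.IntOpt('uncapped_proc_weight', min=1, max=255, default=64,
               help='Relative weight of uncapped shared processors.'),
]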


@ -1,93 +0,0 @@
# Copyright 2014, 2016 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.compute import power_state
from nova.compute import task_states
from nova.compute import vm_states
from nova.objects import flavor
from nova.objects import image_meta
from nova.objects import instance
import os
import sys
TEST_FLAVOR = flavor.Flavor(memory_mb=2048,
swap=0,
vcpu_weight=None,
root_gb=10,
id=2,
name=u'm1.small',
ephemeral_gb=0,
rxtx_factor=1.0,
flavorid=u'1',
vcpus=1)
TEST_INSTANCE = {
'id': 1,
'uuid': '49629a5c-f4c4-4721-9511-9725786ff2e5',
'display_name': 'Fake Instance',
'root_gb': 10,
'ephemeral_gb': 0,
'instance_type_id': '5',
'system_metadata': {'image_os_distro': 'rhel'},
'host': 'host1',
'flavor': TEST_FLAVOR,
'task_state': None,
'vm_state': vm_states.ACTIVE,
'power_state': power_state.SHUTDOWN,
}
TEST_INST_SPAWNING = dict(TEST_INSTANCE, task_state=task_states.SPAWNING,
uuid='b3c04455-a435-499d-ac81-371d2a2d334f')
TEST_INST1 = instance.Instance(**TEST_INSTANCE)
TEST_INST2 = instance.Instance(**TEST_INST_SPAWNING)
TEST_MIGRATION = {
'id': 1,
'source_compute': 'host1',
'dest_compute': 'host2',
'migration_type': 'resize',
'old_instance_type_id': 1,
'new_instance_type_id': 2,
}
TEST_MIGRATION_SAME_HOST = dict(TEST_MIGRATION, dest_compute='host1')
IMAGE1 = {
'id': '3e865d14-8c1e-4615-b73f-f78eaecabfbd',
'name': 'image1',
'size': 300,
'container_format': 'bare',
'disk_format': 'raw',
'checksum': 'b518a8ba2b152b5607aceb5703fac072',
}
TEST_IMAGE1 = image_meta.ImageMeta.from_dict(IMAGE1)
EMPTY_IMAGE = image_meta.ImageMeta.from_dict({})
# NOTE(mikal): All of this is because if dnspython is present in your
# environment then eventlet monkeypatches socket.getaddrinfo() with an
# implementation which doesn't work for IPv6. What we're checking here is
# that the magic environment variable was set when the import happened.
if ('eventlet' in sys.modules):
if (os.environ.get('EVENTLET_NO_GREENDNS', '').lower() != 'yes'):
raise ImportError('eventlet imported before nova/cmd/__init__ '
'(env var set to %s)'
% os.environ.get('EVENTLET_NO_GREENDNS'))
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'
import eventlet
eventlet.monkey_patch(os=False)
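The constants above are shared fixtures that the unit tests import instead of rebuilding instance objects case by case. A minimal usage sketch; the import path and test name are assumptions based on the test-tree layout, not taken from the deleted sources:

from nova import test

# Import path assumed from the test-tree layout shown in this change.
from nova_powervm.tests.virt import powervm as test_pvm


class TestFixtureData(test.NoDBTestCase):
    def test_instance_fixture(self):
        # TEST_INST1 is built from the TEST_INSTANCE dict above.
        self.assertEqual('Fake Instance', test_pvm.TEST_INST1.display_name)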


@ -1,60 +0,0 @@
# Copyright IBM Corp. and contributors
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova_powervm.virt.powervm.disk import driver as disk_dvr
class FakeDiskAdapter(disk_dvr.DiskAdapter):
"""A fake subclass of DiskAdapter.
This is done so that the abstract methods/properties can be stubbed and the
class can be instantiated for testing.
"""
def vios_uuids(self):
pass
def _disk_match_func(self, disk_type, instance):
pass
def disconnect_disk_from_mgmt(self, vios_uuid, disk_name):
pass
def capacity(self):
pass
def capacity_used(self):
pass
def disconnect_disk(self, instance):
pass
def delete_disks(self, storage_elems):
pass
def create_disk_from_image(self, context, instance, image_meta):
pass
def connect_disk(self, instance, disk_info, stg_ftsk):
pass
def extend_disk(self, instance, disk_info, size):
pass
def check_instance_shared_storage_local(self, context, instance):
pass
def check_instance_shared_storage_remote(self, context, data):
pass


@ -1,88 +0,0 @@
# Copyright IBM Corp. and contributors
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import mock
from nova import test
from pypowervm import const as pvm_const
from nova_powervm.tests.virt.powervm.disk import fake_adapter
from nova_powervm.tests.virt.powervm import fixtures as fx
class TestDiskAdapter(test.NoDBTestCase):
"""Unit Tests for the generic storage driver."""
def setUp(self):
super(TestDiskAdapter, self).setUp()
self.useFixture(fx.ImageAPI())
# Return the mgmt uuid
self.mgmt_uuid = self.useFixture(fixtures.MockPatch(
'nova_powervm.virt.powervm.mgmt.mgmt_uuid')).mock
self.mgmt_uuid.return_value = 'mp_uuid'
# The values (adapter and host uuid) are not used in the base.
# Default them to None. We use the fake adapter here because we can't
# instantiate DiskAdapter which is an abstract base class.
self.st_adpt = fake_adapter.FakeDiskAdapter(None, None)
def test_get_info(self):
# Ensure the base method returns empty dict
self.assertEqual({}, self.st_adpt.get_info())
def test_validate(self):
# Ensure the base method returns error message
self.assertIsNotNone(self.st_adpt.validate(None))
@mock.patch("pypowervm.util.sanitize_file_name_for_api")
def test_get_disk_name(self, mock_san):
inst = mock.Mock()
inst.configure_mock(name='a_name_that_is_longer_than_eight',
uuid='01234567-abcd-abcd-abcd-123412341234')
# Long
self.assertEqual(mock_san.return_value,
self.st_adpt._get_disk_name('type', inst))
mock_san.assert_called_with(inst.name, prefix='type_',
max_len=pvm_const.MaxLen.FILENAME_DEFAULT)
mock_san.reset_mock()
# Short
self.assertEqual(mock_san.return_value,
self.st_adpt._get_disk_name('type', inst, short=True))
mock_san.assert_called_with('a_name_t_0123', prefix='t_',
max_len=pvm_const.MaxLen.VDISK_NAME)
@mock.patch("pypowervm.util.sanitize_file_name_for_api")
def test_get_name_by_uuid(self, mock_san):
uuid = '01234567-abcd-abcd-abcd-123412341234'
# Long
self.assertEqual(mock_san.return_value,
self.st_adpt.get_name_by_uuid('type', uuid))
mock_san.assert_called_with(uuid, prefix='type_',
max_len=pvm_const.MaxLen.FILENAME_DEFAULT)
mock_san.reset_mock()
# Short
self.assertEqual(mock_san.return_value,
self.st_adpt.get_name_by_uuid('type', uuid,
short=True))
mock_san.assert_called_with(uuid, prefix='t_',
max_len=pvm_const.MaxLen.VDISK_NAME)


@ -1,103 +0,0 @@
# Copyright 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import mock
from nova import test
from pypowervm.wrappers import storage as pvm_stor
from pypowervm.wrappers import virtual_io_server as pvm_vios
from nova_powervm.virt.powervm.disk import imagecache
class TestImageCache(test.NoDBTestCase):
"""Unit Tests for the LocalDisk storage driver."""
def setUp(self):
super(TestImageCache, self).setUp()
self.mock_vg = mock.MagicMock(virtual_disks=[])
# Initialize the ImageManager
self.adpt = mock.MagicMock()
self.vg_uuid = 'vg_uuid'
self.vios_uuid = 'vios_uuid'
self.img_cache = imagecache.ImageManager(self.vios_uuid, self.vg_uuid,
self.adpt)
# Setup virtual_disks to be used later
self.inst1 = pvm_stor.VDisk.bld(None, 'b_inst1', 10)
self.inst2 = pvm_stor.VDisk.bld(None, 'b_inst2', 10)
self.image = pvm_stor.VDisk.bld(None, 'i_bf8446e4_4f52', 10)
def test_get_base(self):
self.mock_vg_get = self.useFixture(fixtures.MockPatch(
'pypowervm.wrappers.storage.VG.get')).mock
self.mock_vg_get.return_value = self.mock_vg
vg_wrap = self.img_cache._get_base()
self.assertEqual(vg_wrap, self.mock_vg)
self.mock_vg_get.assert_called_once_with(
self.adpt, uuid=self.vg_uuid,
parent_type=pvm_vios.VIOS.schema_type, parent_uuid=self.vios_uuid)
def test_scan_base_image(self):
# No cached images
self.mock_vg.virtual_disks = [self.inst1, self.inst2]
base_images = self.img_cache._scan_base_image(self.mock_vg)
self.assertEqual([], base_images)
# One 'cached' image
self.mock_vg.virtual_disks.append(self.image)
base_images = self.img_cache._scan_base_image(self.mock_vg)
self.assertEqual([self.image], base_images)
@mock.patch('pypowervm.tasks.storage.rm_vg_storage')
@mock.patch('nova.virt.imagecache.ImageCacheManager.'
'_list_running_instances')
@mock.patch('nova_powervm.virt.powervm.disk.imagecache.ImageManager.'
'_scan_base_image')
def test_age_and_verify(self, mock_scan, mock_list, mock_rm):
mock_context = mock.MagicMock()
all_inst = mock.MagicMock()
mock_scan.return_value = [self.image]
# Two instances backed by image 'bf8446e4_4f52'
# Mock dict returned from _list_running_instances
used_images = {'': [self.inst1, self.inst2],
'bf8446e4_4f52': [self.inst1, self.inst2]}
mock_list.return_value = {'used_images': used_images}
self.mock_vg.virtual_disks = [self.inst1, self.inst2, self.image]
self.img_cache._age_and_verify_cached_images(mock_context, all_inst,
self.mock_vg)
mock_rm.assert_not_called()
mock_scan.assert_called_once_with(self.mock_vg)
mock_rm.reset_mock()
# No instances
mock_list.return_value = {'used_images': {}}
self.img_cache._age_and_verify_cached_images(mock_context, all_inst,
self.mock_vg)
mock_rm.assert_called_once_with(self.mock_vg, vdisks=[self.image])
@mock.patch('nova_powervm.virt.powervm.disk.imagecache.ImageManager.'
'_get_base')
@mock.patch('nova_powervm.virt.powervm.disk.imagecache.ImageManager.'
'_age_and_verify_cached_images')
def test_update(self, mock_age, mock_base):
mock_base.return_value = self.mock_vg
mock_context = mock.MagicMock()
mock_all_inst = mock.MagicMock()
self.img_cache.update(mock_context, mock_all_inst)
mock_base.assert_called_once_with()
mock_age.assert_called_once_with(mock_context, mock_all_inst,
self.mock_vg)
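For orientation, the scan the first two tests pin down amounts to filtering the volume group's virtual disks by the image prefix; a minimal sketch, assuming the 'i_' prefix inferred from the fixture names above ('b_' disks back instances, 'i_' disks are cached base images):
def scan_base_image(vg):
    # Keep only the cached base images, not the instance-backed disks.
    return [vdisk for vdisk in vg.virtual_disks
            if vdisk.name.startswith('i_')]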

View File

@ -1,447 +0,0 @@
# Copyright IBM Corp. and contributors
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import mock
from nova import exception as nova_exc
from nova import test
from oslo_utils.fixture import uuidsentinel as uuids
from pypowervm import const as pvm_const
from pypowervm.tasks import storage as tsk_stor
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.wrappers import storage as pvm_stor
from pypowervm.wrappers import virtual_io_server as pvm_vios
from nova_powervm.tests.virt import powervm
from nova_powervm.tests.virt.powervm import fixtures as fx
from nova_powervm.virt.powervm.disk import driver as disk_dvr
from nova_powervm.virt.powervm.disk import localdisk as ld
from nova_powervm.virt.powervm import exception as npvmex
from nova_powervm.virt.powervm import vm
class TestLocalDisk(test.NoDBTestCase):
"""Unit Tests for the LocalDisk storage driver."""
def setUp(self):
super(TestLocalDisk, self).setUp()
self.apt = self.useFixture(pvm_fx.AdapterFx()).adpt
# The mock VIOS needs to have scsi_mappings as a list. Internals are
# set by individual test cases as needed.
smaps = [mock.Mock()]
self.vio_to_vg = mock.Mock(spec=pvm_vios.VIOS, scsi_mappings=smaps,
uuid='vios-uuid')
# Set up mock for internal VIOS.get()s
self.mock_vios_get = self.useFixture(fixtures.MockPatch(
'pypowervm.wrappers.virtual_io_server.VIOS',
autospec=True)).mock.get
# For our tests, we want find_maps to return the mocked list of scsi
# mappings in our mocked VIOS.
self.mock_find_maps = self.useFixture(fixtures.MockPatch(
'pypowervm.tasks.scsi_mapper.find_maps', autospec=True)).mock
self.mock_find_maps.return_value = smaps
# Set up for the mocks for get_ls
self.mock_find_vg = self.useFixture(fixtures.MockPatch(
'pypowervm.tasks.storage.find_vg', autospec=True)).mock
self.vg_uuid = uuids.vg_uuid
self.vg = mock.Mock(spec=pvm_stor.VG, uuid=self.vg_uuid)
self.mock_find_vg.return_value = (self.vio_to_vg, self.vg)
# Return the mgmt uuid
self.mgmt_uuid = self.useFixture(fixtures.MockPatch(
'nova_powervm.virt.powervm.mgmt.mgmt_uuid')).mock
self.mgmt_uuid.return_value = 'mp_uuid'
self.flags(volume_group_name='fakevg', group='powervm')
@staticmethod
def get_ls(adpt):
return ld.LocalStorage(adpt, 'host_uuid')
def test_init(self):
local = self.get_ls(self.apt)
self.mock_find_vg.assert_called_once_with(self.apt, 'fakevg')
self.assertEqual('vios-uuid', local._vios_uuid)
self.assertEqual(self.vg_uuid, local.vg_uuid)
self.assertEqual(self.apt, local.adapter)
self.assertEqual('host_uuid', local.host_uuid)
@mock.patch('pypowervm.tasks.storage.crt_copy_vdisk', autospec=True)
@mock.patch('nova_powervm.virt.powervm.disk.localdisk.LocalStorage.'
'_get_or_upload_image')
def test_create_disk_from_image(self, mock_get_image, mock_copy):
mock_copy.return_value = 'vdisk'
inst = mock.Mock()
inst.configure_mock(name='Inst Name',
uuid='d5065c2c-ac43-3fa6-af32-ea84a3960291',
flavor=mock.Mock(root_gb=20))
mock_image = mock.MagicMock()
mock_image.name = 'cached_image'
mock_get_image.return_value = mock_image
vdisk = self.get_ls(self.apt).create_disk_from_image(
None, inst, powervm.TEST_IMAGE1)
self.assertEqual('vdisk', vdisk)
mock_get_image.reset_mock()
exception = Exception
mock_get_image.side_effect = exception
with mock.patch('time.sleep', autospec=True) as mock_sleep:
self.assertRaises(exception,
self.get_ls(self.apt).create_disk_from_image,
None, inst, powervm.TEST_IMAGE1)
self.assertEqual(mock_get_image.call_count, 4)
self.assertEqual(3, mock_sleep.call_count)
@mock.patch('pypowervm.tasks.storage.upload_new_vdisk', autospec=True)
@mock.patch('nova.image.api.API.download')
@mock.patch('nova_powervm.virt.powervm.disk.driver.IterableToFileAdapter')
@mock.patch('nova_powervm.virt.powervm.disk.localdisk.LocalStorage.'
'_get_vg_wrap')
def test_get_or_upload_image(self, mock_get_vg, mock_it2f, mock_img_api,
mock_upload_vdisk):
mock_wrapper = mock.Mock()
mock_wrapper.configure_mock(name='vg_name', virtual_disks=[])
mock_get_vg.return_value = mock_wrapper
local = self.get_ls(self.apt)
self.assertEqual(
mock_upload_vdisk.return_value[0].udid,
local._get_or_upload_image('ctx', powervm.TEST_IMAGE1))
# Make sure the upload was invoked properly
mock_upload_vdisk.assert_called_once_with(
self.apt, 'vios-uuid', self.vg_uuid, mock_it2f.return_value,
'i_3e865d14_8c1e', powervm.TEST_IMAGE1.size,
d_size=powervm.TEST_IMAGE1.size,
upload_type=tsk_stor.UploadType.IO_STREAM,
file_format=powervm.TEST_IMAGE1.disk_format)
mock_it2f.assert_called_once_with(mock_img_api.return_value)
mock_img_api.assert_called_once_with('ctx', powervm.TEST_IMAGE1.id)
mock_img_api.reset_mock()
mock_upload_vdisk.reset_mock()
# Now ensure upload_new_vdisk isn't called if the vdisk already exists.
mock_image = mock.MagicMock()
mock_image.configure_mock(name='i_3e865d14_8c1e', udid='udid')
mock_instance = mock.MagicMock()
mock_instance.configure_mock(name='b_Inst_Nam_d506')
mock_wrapper.virtual_disks = [mock_instance, mock_image]
mock_get_vg.return_value = mock_wrapper
self.assertEqual(
mock_image.udid,
local._get_or_upload_image('ctx', powervm.TEST_IMAGE1))
mock_img_api.assert_not_called()
self.assertEqual(0, mock_upload_vdisk.call_count)
@mock.patch('nova_powervm.virt.powervm.disk.localdisk.LocalStorage.'
'_get_vg_wrap')
def test_capacity(self, mock_vg):
"""Tests the capacity methods."""
local = self.get_ls(self.apt)
mock_vg.return_value = mock.Mock(
capacity='5120', available_size='2048')
self.assertEqual(5120.0, local.capacity)
self.assertEqual(3072.0, local.capacity_used)
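The assertions encode simple arithmetic read off the volume group wrapper: used space is total capacity minus available size. A sketch, assuming string-valued sizes as in the mock:
def capacity_used(vg_wrap):
    # 5120 total - 2048 available = 3072 used, as asserted above.
    return float(vg_wrap.capacity) - float(vg_wrap.available_size)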
@mock.patch('pypowervm.tasks.scsi_mapper.remove_maps', autospec=True)
@mock.patch('pypowervm.tasks.partition.get_active_vioses', autospec=True)
def test_disconnect_disk(self, mock_active_vioses, mock_rm_maps):
# vio_to_vg is a single-entry response. Wrap it and put it in a list
# to act as the feed for FeedTaskFx and FeedTask.
feed = [self.vio_to_vg]
mock_active_vioses.return_value = feed
# The mock return values
mock_rm_maps.return_value = True
# Create the feed task
local = self.get_ls(self.apt)
inst = mock.Mock(uuid=fx.FAKE_INST_UUID)
# As initialized above, remove_maps returns True to trigger update.
local.disconnect_disk(inst, stg_ftsk=None,
disk_type=[disk_dvr.DiskType.BOOT])
self.assertEqual(1, mock_rm_maps.call_count)
self.assertEqual(1, self.vio_to_vg.update.call_count)
mock_rm_maps.assert_called_once_with(feed[0], fx.FAKE_INST_UUID_PVM,
match_func=mock.ANY)
@mock.patch('pypowervm.tasks.scsi_mapper.remove_maps', autospec=True)
@mock.patch('pypowervm.tasks.partition.get_active_vioses', autospec=True)
def test_disconnect_disk_no_update(self, mock_active_vioses, mock_rm_maps):
# vio_to_vg is a single-entry response. Wrap it and put it in a list
# to act as the feed for FeedTaskFx and FeedTask.
feed = [self.vio_to_vg]
mock_active_vioses.return_value = feed
# The mock return values
mock_rm_maps.return_value = False
# Create the feed task
local = self.get_ls(self.apt)
inst = mock.Mock(uuid=fx.FAKE_INST_UUID)
# As initialized above, remove_maps returns False, so no update is needed.
local.disconnect_disk(inst, stg_ftsk=None,
disk_type=[disk_dvr.DiskType.BOOT])
self.assertEqual(1, mock_rm_maps.call_count)
self.vio_to_vg.update.assert_not_called()
mock_rm_maps.assert_called_once_with(feed[0], fx.FAKE_INST_UUID_PVM,
match_func=mock.ANY)
@mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
def test_disconnect_disk_disktype(self, mock_match_func):
"""Ensures that the match function passes in the right prefix."""
# Set up the mock data.
inst = mock.Mock(uuid=fx.FAKE_INST_UUID)
mock_match_func.return_value = 'test'
# Invoke
local = self.get_ls(self.apt)
local.disconnect_disk(inst, stg_ftsk=mock.MagicMock(),
disk_type=[disk_dvr.DiskType.BOOT])
# Make sure the find maps is invoked once.
self.mock_find_maps.assert_called_once_with(
mock.ANY, client_lpar_id=fx.FAKE_INST_UUID_PVM, match_func='test')
# Make sure the matching function is generated with the right disk type
mock_match_func.assert_called_once_with(
pvm_stor.VDisk, prefixes=[disk_dvr.DiskType.BOOT])
@mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping',
autospec=True)
@mock.patch('pypowervm.tasks.scsi_mapper.add_map', autospec=True)
@mock.patch('pypowervm.tasks.partition.get_active_vioses', autospec=True)
def test_connect_disk(self, mock_active_vioses, mock_add_map,
mock_build_map):
# vio_to_vg is a single-entry response. Wrap it and put it in a list
# to act as the feed for FeedTask.
feed = [self.vio_to_vg]
mock_active_vioses.return_value = feed
# The mock return values
mock_add_map.return_value = True
mock_build_map.return_value = 'fake_map'
# Need the driver to return the actual UUID of the VIOS in the feed,
# to match the FeedTask.
local = self.get_ls(self.apt)
inst = mock.Mock(uuid=fx.FAKE_INST_UUID)
lpar_uuid = vm.get_pvm_uuid(inst)
mock_disk = mock.Mock()
# As initialized above, add_map returns True to trigger update.
local.connect_disk(inst, mock_disk, stg_ftsk=None)
self.assertEqual(1, mock_add_map.call_count)
mock_build_map.assert_called_once_with(
'host_uuid', self.vio_to_vg, lpar_uuid, mock_disk)
mock_add_map.assert_called_once_with(feed[0], 'fake_map')
self.assertEqual(1, self.vio_to_vg.update.call_count)
@mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping',
autospec=True)
@mock.patch('pypowervm.tasks.scsi_mapper.add_map', autospec=True)
@mock.patch('pypowervm.tasks.partition.get_active_vioses', autospec=True)
def test_connect_disk_no_update(self, mock_active_vioses, mock_add_map,
mock_build_map):
# vio_to_vg is a single-entry response. Wrap it and put it in a list
# to act as the feed for FeedTask.
feed = [self.vio_to_vg]
mock_active_vioses.return_value = feed
# The mock return values
mock_add_map.return_value = False
mock_build_map.return_value = 'fake_map'
# Need the driver to return the actual UUID of the VIOS in the feed,
# to match the FeedTask.
local = self.get_ls(self.apt)
inst = mock.Mock(uuid=fx.FAKE_INST_UUID)
# As initialized above, add_map returns False, so no update occurs.
local.connect_disk(inst, mock.Mock(), stg_ftsk=None)
self.assertEqual(1, mock_add_map.call_count)
mock_add_map.assert_called_once_with(feed[0], 'fake_map')
self.vio_to_vg.update.assert_not_called()
@mock.patch('pypowervm.wrappers.storage.VG.update', new=mock.Mock())
@mock.patch('nova_powervm.virt.powervm.disk.localdisk.LocalStorage.'
'_get_vg_wrap')
def test_delete_disks(self, mock_vg):
# Mocks
self.apt.side_effect = [mock.Mock()]
mock_remove = mock.MagicMock()
mock_remove.name = 'disk'
mock_wrapper = mock.MagicMock()
mock_wrapper.virtual_disks = [mock_remove]
mock_vg.return_value = mock_wrapper
# Invoke the call
local = self.get_ls(self.apt)
local.delete_disks([mock_remove])
# Validate the call
self.assertEqual(1, mock_wrapper.update.call_count)
self.assertEqual(0, len(mock_wrapper.virtual_disks))
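What these assertions pin down is that deletion edits the VG wrapper in place and pushes exactly one update; a hedged sketch of that shape (helper name hypothetical):
def delete_disks_sketch(vg_wrap, disks_to_remove):
    names = {disk.name for disk in disks_to_remove}
    # Drop the matching vdisks from the wrapper, then persist once.
    vg_wrap.virtual_disks = [disk for disk in vg_wrap.virtual_disks
                             if disk.name not in names]
    vg_wrap.update()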
@mock.patch('pypowervm.wrappers.storage.VG', autospec=True)
def test_extend_disk_not_found(self, mock_vg):
local = self.get_ls(self.apt)
inst = mock.Mock()
inst.name = 'Name Of Instance'
inst.uuid = 'd5065c2c-ac43-3fa6-af32-ea84a3960291'
vdisk = mock.Mock(name='vdisk')
vdisk.name = 'NO_MATCH'
resp = mock.Mock(name='response')
resp.virtual_disks = [vdisk]
mock_vg.get.return_value = resp
self.assertRaises(nova_exc.DiskNotFound, local.extend_disk,
inst, dict(type='boot'), 10)
vdisk.name = 'b_Name_Of__d506'
local.extend_disk(inst, dict(type='boot'), 1000)
# Validate the call
self.assertEqual(1, resp.update.call_count)
self.assertEqual(vdisk.capacity, 1000)
@mock.patch('pypowervm.wrappers.storage.VG', autospec=True)
def test_extend_disk_file_format(self, mock_vg):
local = self.get_ls(self.apt)
inst = mock.Mock()
inst.name = 'Name Of Instance'
inst.uuid = 'd5065c2c-ac43-3fa6-af32-ea84a3960291'
vdisk = mock.Mock(name='vdisk')
vdisk.configure_mock(name='/path/to/b_Name_Of__d506',
backstore_type=pvm_stor.BackStoreType.USER_QCOW,
file_format=pvm_stor.FileFormatType.QCOW2)
resp = mock.Mock(name='response')
resp.virtual_disks = [vdisk]
mock_vg.get.return_value = resp
self.assertRaises(nova_exc.ResizeError, local.extend_disk,
inst, dict(type='boot'), 10)
vdisk.file_format = pvm_stor.FileFormatType.RAW
self.assertRaises(nova_exc.ResizeError, local.extend_disk,
inst, dict(type='boot'), 10)
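Both cases assert ResizeError for a file-backed (USER_QCOW backstore) vdisk, whether its file format is qcow2 or raw, so the guard appears to key on the backstore type rather than the format. A hedged sketch of such a check (ValueError substituted to keep the illustration free of nova imports):
def check_extendable(vdisk, user_qcow_type='USER_QCOW'):
    # The driver raises nova's ResizeError here; ValueError is a
    # dependency-free stand-in for this sketch.
    if vdisk.backstore_type == user_qcow_type:
        raise ValueError('file-backed vdisk %s cannot be extended'
                         % vdisk.name)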
def _bld_mocks_for_instance_disk(self):
inst = mock.Mock()
inst.name = 'Name Of Instance'
inst.uuid = uuids.inst_uuid
lpar_wrap = mock.Mock()
lpar_wrap.id = 2
vios1 = self.vio_to_vg
back_stor_name = 'b_Name_Of__' + inst.uuid[:4]
vios1.scsi_mappings[0].backing_storage.name = back_stor_name
return inst, lpar_wrap, vios1
def test_get_bootdisk_path(self):
local = self.get_ls(self.apt)
inst = mock.Mock()
inst.name = 'Name Of Instance'
inst.uuid = 'f921620A-EE30-440E-8C2D-9F7BA123F298'
vios1 = self.vio_to_vg
vios1.scsi_mappings[0].server_adapter.backing_dev_name = 'boot_7f81628'
vios1.scsi_mappings[0].backing_storage.name = 'b_Name_Of__f921'
self.mock_vios_get.return_value = vios1
dev_name = local.get_bootdisk_path(inst, vios1.uuid)
self.assertEqual('boot_7f81628', dev_name)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
@mock.patch('pypowervm.wrappers.storage.VG.get', new=mock.Mock())
def test_get_bootdisk_iter(self, mock_lpar_wrap):
local = self.get_ls(self.apt)
inst, lpar_wrap, vios1 = self._bld_mocks_for_instance_disk()
mock_lpar_wrap.return_value = lpar_wrap
# Good path
self.mock_vios_get.return_value = vios1
for vdisk, vios in local._get_bootdisk_iter(inst):
self.assertEqual(vios1.scsi_mappings[0].backing_storage, vdisk)
self.assertEqual(vios1.uuid, vios.uuid)
self.mock_vios_get.assert_called_once_with(
self.apt, uuid='vios-uuid', xag=[pvm_const.XAG.VIO_SMAP])
# Not found because no storage of that name
self.mock_vios_get.reset_mock()
self.mock_find_maps.return_value = []
for vdisk, vios in local._get_bootdisk_iter(inst):
self.fail('Should not have found any storage elements.')
self.mock_vios_get.assert_called_once_with(
self.apt, uuid='vios-uuid', xag=[pvm_const.XAG.VIO_SMAP])
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper')
@mock.patch('pypowervm.tasks.scsi_mapper.add_vscsi_mapping', autospec=True)
def test_connect_instance_disk_to_mgmt_partition(self, mock_add, mock_lw):
local = self.get_ls(self.apt)
inst, lpar_wrap, vios1 = self._bld_mocks_for_instance_disk()
mock_lw.return_value = lpar_wrap
# Good path
self.mock_vios_get.return_value = vios1
vdisk, vios = local.connect_instance_disk_to_mgmt(inst)
self.assertEqual(vios1.scsi_mappings[0].backing_storage, vdisk)
self.assertIs(vios1, vios)
self.assertEqual(1, mock_add.call_count)
mock_add.assert_called_with('host_uuid', vios, 'mp_uuid', vdisk)
# add_vscsi_mapping raises. Show-stopper since only one VIOS.
mock_add.reset_mock()
mock_add.side_effect = Exception("mapping failed")
self.assertRaises(npvmex.InstanceDiskMappingFailed,
local.connect_instance_disk_to_mgmt, inst)
self.assertEqual(1, mock_add.call_count)
# Not found
mock_add.reset_mock()
self.mock_find_maps.return_value = []
self.assertRaises(npvmex.InstanceDiskMappingFailed,
local.connect_instance_disk_to_mgmt, inst)
self.assertFalse(mock_add.called)
@mock.patch('pypowervm.tasks.scsi_mapper.remove_vdisk_mapping',
autospec=True)
def test_disconnect_disk_from_mgmt_partition(self, mock_rm_vdisk_map):
local = self.get_ls(self.apt)
local.disconnect_disk_from_mgmt('vios-uuid', 'disk_name')
mock_rm_vdisk_map.assert_called_with(
local.adapter, 'vios-uuid', 'mp_uuid', disk_names=['disk_name'])
def test_capabilities_non_mgmt_vios(self):
local = self.get_ls(self.apt)
self.assertFalse(local.capabilities.get('shared_storage'))
self.assertTrue(local.capabilities.get('has_imagecache'))
# With the default setup, the management partition isn't the VIOS.
self.assertFalse(local.capabilities.get('snapshot'))
def test_capabilities_mgmt_vios(self):
# Make the management partition the VIOS.
self.vio_to_vg.uuid = self.mgmt_uuid.return_value
local = self.get_ls(self.apt)
self.assertFalse(local.capabilities.get('shared_storage'))
self.assertTrue(local.capabilities.get('has_imagecache'))
self.assertTrue(local.capabilities.get('snapshot'))
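Together, the last two tests document the local-disk capability matrix; only the snapshot flag depends on topology. Sketched as data:
def localdisk_capabilities(mgmt_is_vios):
    return {
        'shared_storage': False,
        'has_imagecache': True,
        # Snapshot support requires the management partition to be the
        # VIOS that owns the volume group.
        'snapshot': mgmt_is_vios,
    }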

View File

@ -1,625 +0,0 @@
# Copyright IBM Corp. and contributors
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import mock
from nova.objects import image_meta
from nova import test
from pypowervm import const
from pypowervm.tasks import storage as tsk_stg
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.wrappers import cluster as pvm_clust
from pypowervm.wrappers import storage as pvm_stg
from pypowervm.wrappers import virtual_io_server as pvm_vios
from nova_powervm.tests.virt.powervm import fixtures as fx
from nova_powervm.virt.powervm.disk import driver as disk_dvr
from nova_powervm.virt.powervm.disk import ssp as ssp_dvr
from nova_powervm.virt.powervm import exception as npvmex
class SSPFixture(fixtures.Fixture):
"""Patch out PyPowerVM SSP and Cluster EntryWrapper methods."""
def __init__(self):
pass
def mockpatch(self, methstr):
return self.useFixture(fixtures.MockPatch(methstr)).mock
def setUp(self):
super(SSPFixture, self).setUp()
self.mock_clust_get = self.mockpatch(
'pypowervm.wrappers.cluster.Cluster.get')
self.mock_clust_search = self.mockpatch(
'pypowervm.wrappers.cluster.Cluster.search')
self.mock_ssp_gbhref = self.mockpatch(
'pypowervm.wrappers.storage.SSP.get_by_href')
self.mock_ssp_update = self.mockpatch(
'pypowervm.wrappers.storage.SSP.update')
self.mock_get_tier = self.mockpatch(
'pypowervm.tasks.storage.default_tier_for_ssp')
class TestSSPDiskAdapter(test.NoDBTestCase):
"""Unit Tests for the LocalDisk storage driver."""
def setUp(self):
super(TestSSPDiskAdapter, self).setUp()
class Instance(object):
uuid = fx.FAKE_INST_UUID
name = 'instance-name'
self.instance = Instance()
self.apt = mock.Mock()
self.host_uuid = 'host_uuid'
self.sspfx = self.useFixture(SSPFixture())
self.ssp_wrap = mock.Mock(spec=pvm_stg.SSP)
self.ssp_wrap.refresh.return_value = self.ssp_wrap
self.node1 = mock.Mock()
self.node2 = mock.Mock()
self.clust_wrap = mock.Mock(spec=pvm_clust.Cluster,
nodes=[self.node1, self.node2])
self.clust_wrap.refresh.return_value = self.clust_wrap
self.vio_wrap = mock.Mock(spec=pvm_vios.VIOS, uuid='uuid')
# For _fetch_cluster() with no name
self.mock_clust_get = self.sspfx.mock_clust_get
self.mock_clust_get.return_value = [self.clust_wrap]
# For _fetch_cluster() with configured name
self.mock_clust_search = self.sspfx.mock_clust_search
# EntryWrapper.search always returns a list of wrappers.
self.mock_clust_search.return_value = [self.clust_wrap]
# For _fetch_ssp() fresh
self.mock_ssp_gbhref = self.sspfx.mock_ssp_gbhref
self.mock_ssp_gbhref.return_value = self.ssp_wrap
# For _tier
self.mock_get_tier = self.sspfx.mock_get_tier
# By default, assume the config supplied a Cluster name
self.flags(cluster_name='clust1', group='powervm')
# Return the mgmt uuid
self.mgmt_uuid = self.useFixture(fixtures.MockPatch(
'nova_powervm.virt.powervm.mgmt.mgmt_uuid')).mock
self.mgmt_uuid.return_value = 'mp_uuid'
def _get_ssp_stor(self):
return ssp_dvr.SSPDiskAdapter(self.apt, self.host_uuid)
def test_tier_cache(self):
# default_tier_for_ssp not yet invoked
self.mock_get_tier.assert_not_called()
ssp = self._get_ssp_stor()
# default_tier_for_ssp invoked by constructor
self.mock_get_tier.assert_called_once_with(ssp._ssp_wrap)
self.assertEqual(self.mock_get_tier.return_value, ssp._tier)
# default_tier_for_ssp not called again.
self.assertEqual(1, self.mock_get_tier.call_count)
def test_capabilities(self):
ssp_stor = self._get_ssp_stor()
self.assertTrue(ssp_stor.capabilities.get('shared_storage'))
self.assertFalse(ssp_stor.capabilities.get('has_imagecache'))
self.assertTrue(ssp_stor.capabilities.get('snapshot'))
def test_get_info(self):
ssp_stor = self._get_ssp_stor()
expected = {'cluster_name': self.clust_wrap.name,
'ssp_name': self.ssp_wrap.name,
'ssp_uuid': self.ssp_wrap.uuid}
# Ensure get_info reports the cluster and SSP identifiers
self.assertEqual(expected, ssp_stor.get_info())
def test_validate(self):
ssp_stor = self._get_ssp_stor()
fake_data = {}
# Ensure returns error message when no data
self.assertIsNotNone(ssp_stor.validate(fake_data))
# Get our own data and it should always match!
fake_data = ssp_stor.get_info()
# Ensure returns no error on good data
self.assertIsNone(ssp_stor.validate(fake_data))
def test_init_green_with_config(self):
"""Bootstrap SSPStorage, testing call to _fetch_cluster.
Driver init should search for cluster by name.
"""
# Invoke __init__ => _fetch_cluster()
self._get_ssp_stor()
# _fetch_cluster() WITH configured name does a search, but not a get.
# Refresh shouldn't be invoked.
self.mock_clust_search.assert_called_once_with(self.apt, name='clust1')
self.mock_clust_get.assert_not_called()
self.clust_wrap.refresh.assert_not_called()
def test_init_green_no_config(self):
"""No cluster name specified in config; one cluster on host - ok."""
self.flags(cluster_name='', group='powervm')
self._get_ssp_stor()
# _fetch_cluster() WITHOUT configured name does feed GET, not a search.
# Refresh shouldn't be invoked.
self.mock_clust_search.assert_not_called()
self.mock_clust_get.assert_called_once_with(self.apt)
self.clust_wrap.refresh.assert_not_called()
def test_init_ClusterNotFoundByName(self):
"""Empty feed comes back from search - no cluster by that name."""
self.mock_clust_search.return_value = []
self.assertRaises(npvmex.ClusterNotFoundByName, self._get_ssp_stor)
def test_init_TooManyClustersFound(self):
"""Search-by-name returns more than one result."""
self.mock_clust_search.return_value = ['newclust1', 'newclust2']
self.assertRaises(npvmex.TooManyClustersFound, self._get_ssp_stor)
def test_init_NoConfigNoClusterFound(self):
"""No cluster name specified in config, no clusters on host."""
self.flags(cluster_name='', group='powervm')
self.mock_clust_get.return_value = []
self.assertRaises(npvmex.NoConfigNoClusterFound, self._get_ssp_stor)
def test_init_NoConfigTooManyClusters(self):
"""No SSP name specified in config, more than one SSP on host."""
self.flags(cluster_name='', group='powervm')
self.mock_clust_get.return_value = ['newclust1', 'newclust2']
self.assertRaises(npvmex.NoConfigTooManyClusters, self._get_ssp_stor)
def test_refresh_cluster(self):
"""_refresh_cluster with cached wrapper."""
# Save original cluster wrapper for later comparison
orig_clust_wrap = self.clust_wrap
# Prime _clust_wrap
ssp_stor = self._get_ssp_stor()
# Verify baseline call counts
self.mock_clust_search.assert_called_once_with(self.apt, name='clust1')
self.clust_wrap.refresh.assert_not_called()
clust_wrap = ssp_stor._refresh_cluster()
# This should call refresh
self.mock_clust_search.assert_called_once_with(self.apt, name='clust1')
self.mock_clust_get.assert_not_called()
self.clust_wrap.refresh.assert_called_once_with()
self.assertEqual(clust_wrap.name, orig_clust_wrap.name)
def test_fetch_ssp(self):
# For later comparison
orig_ssp_wrap = self.ssp_wrap
# Verify baseline call counts
self.mock_ssp_gbhref.assert_not_called()
self.ssp_wrap.refresh.assert_not_called()
# This should prime self._ssp_wrap: calls read_by_href but not refresh.
ssp_stor = self._get_ssp_stor()
self.mock_ssp_gbhref.assert_called_once_with(self.apt,
self.clust_wrap.ssp_uri)
self.ssp_wrap.refresh.assert_not_called()
# Accessing the @property will trigger refresh
ssp_wrap = ssp_stor._ssp
self.mock_ssp_gbhref.assert_called_once_with(self.apt,
self.clust_wrap.ssp_uri)
self.ssp_wrap.refresh.assert_called_once_with()
self.assertEqual(ssp_wrap.name, orig_ssp_wrap.name)
@mock.patch('pypowervm.util.get_req_path_uuid')
def test_vios_uuids(self, mock_rpu):
mock_rpu.return_value = self.host_uuid
ssp_stor = self._get_ssp_stor()
vios_uuids = ssp_stor.vios_uuids
self.assertEqual({self.node1.vios_uuid, self.node2.vios_uuid},
set(vios_uuids))
mock_rpu.assert_has_calls(
[mock.call(node.vios_uri, preserve_case=True, root=True)
for node in [self.node1, self.node2]])
s = set()
for i in range(1000):
u = ssp_stor._any_vios_uuid()
# Make sure we got a good value
self.assertIn(u, vios_uuids)
s.add(u)
# Make sure we hit all the values over 1000 iterations. This isn't
# guaranteed to work, but the odds of failure should be infinitesimal.
self.assertEqual(set(vios_uuids), s)
mock_rpu.reset_mock()
# Test VIOSes on other nodes, which won't have uuid or url
node1 = mock.Mock(vios_uuid=None, vios_uri='uri1')
node2 = mock.Mock(vios_uuid='2', vios_uri=None)
# This mock is good and should be returned
node3 = mock.Mock(vios_uuid='3', vios_uri='uri3')
self.clust_wrap.nodes = [node1, node2, node3]
self.assertEqual(['3'], ssp_stor.vios_uuids)
# get_req_path_uuid was only called on the good one
mock_rpu.assert_called_once_with('uri3', preserve_case=True, root=True)
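The second half of the test fixes the filtering rule: a cluster node contributes a UUID only if it exposes both vios_uuid and vios_uri, and _any_vios_uuid draws uniformly from the survivors. A minimal sketch under those assumptions:
import random

def vios_uuids(nodes):
    # Keep only nodes that carry both a UUID and a URI.
    return [node.vios_uuid for node in nodes
            if node.vios_uuid and node.vios_uri]

def any_vios_uuid(nodes):
    return random.choice(vios_uuids(nodes))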
def test_capacity(self):
ssp_stor = self._get_ssp_stor()
self.mock_get_tier.return_value.refresh.return_value.capacity = 10
self.assertAlmostEqual(10.0, ssp_stor.capacity)
def test_capacity_used(self):
ssp_stor = self._get_ssp_stor()
self.ssp_wrap.capacity = 4.56
self.ssp_wrap.free_space = 1.23
self.assertAlmostEqual((4.56 - 1.23), ssp_stor.capacity_used)
@mock.patch('pypowervm.tasks.cluster_ssp.get_or_upload_image_lu')
@mock.patch('nova_powervm.virt.powervm.disk.driver.DiskAdapter.'
'_get_image_name')
@mock.patch('nova_powervm.virt.powervm.disk.ssp.SSPDiskAdapter.'
'_any_vios_uuid')
@mock.patch('nova_powervm.virt.powervm.disk.driver.DiskAdapter.'
'_get_disk_name')
@mock.patch('pypowervm.tasks.storage.crt_lu')
@mock.patch('nova.image.api.API.download')
@mock.patch('nova_powervm.virt.powervm.disk.driver.IterableToFileAdapter')
def test_create_disk_from_image(self, mock_it2f, mock_dl, mock_crt_lu,
mock_gdn, mock_vuuid, mock_gin, mock_goru):
instance = mock.Mock()
img_meta = mock.Mock()
ssp = self._get_ssp_stor()
mock_crt_lu.return_value = ssp._ssp_wrap, 'lu'
mock_gin.return_value = 'img_name'
mock_vuuid.return_value = 'vios_uuid'
# Default image_type
self.assertEqual('lu', ssp.create_disk_from_image(
'ctx', instance, img_meta))
mock_goru.assert_called_once_with(
self.mock_get_tier.return_value, mock_gin.return_value,
mock_vuuid.return_value, mock_it2f.return_value, img_meta.size,
upload_type=tsk_stg.UploadType.IO_STREAM)
mock_dl.assert_called_once_with('ctx', img_meta.id)
mock_it2f.assert_called_once_with(mock_dl.return_value)
mock_gdn.assert_called_once_with(disk_dvr.DiskType.BOOT, instance)
mock_crt_lu.assert_called_once_with(
self.mock_get_tier.return_value, mock_gdn.return_value,
instance.flavor.root_gb, typ=pvm_stg.LUType.DISK,
clone=mock_goru.return_value)
# Reset
mock_goru.reset_mock()
mock_gdn.reset_mock()
mock_crt_lu.reset_mock()
mock_dl.reset_mock()
mock_it2f.reset_mock()
# Specified image_type
self.assertEqual('lu', ssp.create_disk_from_image(
'ctx', instance, img_meta, image_type='imgtyp'))
mock_goru.assert_called_once_with(
self.mock_get_tier.return_value, mock_gin.return_value,
mock_vuuid.return_value, mock_it2f.return_value, img_meta.size,
upload_type=tsk_stg.UploadType.IO_STREAM)
mock_dl.assert_called_once_with('ctx', img_meta.id)
mock_it2f.assert_called_once_with(mock_dl.return_value)
mock_gdn.assert_called_once_with('imgtyp', instance)
mock_crt_lu.assert_called_once_with(
self.mock_get_tier.return_value, mock_gdn.return_value,
instance.flavor.root_gb, typ=pvm_stg.LUType.DISK,
clone=mock_goru.return_value)
def test_get_image_name(self):
"""Generate image name from ImageMeta."""
ssp = self._get_ssp_stor()
def verify_image_name(name, checksum, expected):
img_meta = image_meta.ImageMeta(name=name, checksum=checksum)
self.assertEqual(expected, ssp._get_image_name(img_meta))
self.assertTrue(len(expected) <= const.MaxLen.FILENAME_DEFAULT)
verify_image_name('foo', 'bar', 'image_foo_bar')
# Ensure a really long name gets truncated properly. Note also '-'
# chars are sanitized.
verify_image_name(
'Template_zw82enbix_PowerVM-CI-18y2385y9123785192364',
'b518a8ba2b152b5607aceb5703fac072',
'image_Template_zw82enbix_PowerVM_CI_18y2385y91'
'_b518a8ba2b152b5607aceb5703fac072')
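The expected strings spell out the convention: 'image_' + sanitized name + '_' + checksum, with '-' mapped to '_' and the name truncated so the whole result fits the 79-character limit the assertion checks against (consistent with MaxLen.FILENAME_DEFAULT). A sketch that reproduces both expected values:
def image_name(name, checksum, max_len=79):  # 79 ~ MaxLen.FILENAME_DEFAULT
    name = name.replace('-', '_')
    suffix = '_' + checksum if checksum else ''
    # Truncate only the name portion, never the checksum.
    room = max_len - len('image_') - len(suffix)
    return 'image_' + name[:room] + suffix

print(image_name('foo', 'bar'))  # image_foo_bar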
@mock.patch('pypowervm.wrappers.storage.LUEnt.search')
@mock.patch('nova_powervm.virt.powervm.disk.driver.DiskAdapter.'
'_get_disk_name')
def test_get_disk_ref(self, mock_dsk_nm, mock_srch):
ssp = self._get_ssp_stor()
self.assertEqual(mock_srch.return_value, ssp.get_disk_ref(
self.instance, disk_dvr.DiskType.BOOT))
mock_dsk_nm.assert_called_with(disk_dvr.DiskType.BOOT, self.instance)
mock_srch.assert_called_with(
ssp.adapter, parent=self.mock_get_tier.return_value,
name=mock_dsk_nm.return_value, lu_type=pvm_stg.LUType.DISK,
one_result=True)
# Assert handles not finding it.
mock_srch.return_value = None
self.assertIsNone(
ssp.get_disk_ref(self.instance, disk_dvr.DiskType.BOOT))
@mock.patch('nova_powervm.virt.powervm.disk.ssp.SSPDiskAdapter.'
'vios_uuids', new_callable=mock.PropertyMock)
@mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping')
@mock.patch('pypowervm.tasks.scsi_mapper.add_map')
@mock.patch('pypowervm.tasks.partition.get_active_vioses')
def test_connect_disk(self, mock_active_vioses, mock_add_map,
mock_build_map, mock_vio_uuids):
# vio is a single-entry response. Wrap it and put it in a list
# to act as the feed for FeedTaskFx and FeedTask.
feed = [self.vio_wrap]
mock_active_vioses.return_value = feed
ft_fx = pvm_fx.FeedTaskFx(feed)
self.useFixture(ft_fx)
# The mock return values
mock_add_map.return_value = True
mock_build_map.return_value = 'fake_map'
# Need the driver to return the actual UUID of the VIOS in the feed,
# to match the FeedTask.
ssp = self._get_ssp_stor()
mock_vio_uuids.return_value = [self.vio_wrap.uuid]
inst = mock.Mock(uuid=fx.FAKE_INST_UUID)
# As initialized above, add_map returns True to trigger update.
ssp.connect_disk(inst, mock.Mock(), stg_ftsk=None)
mock_add_map.assert_called_once_with(self.vio_wrap, 'fake_map')
self.vio_wrap.update.assert_called_once_with(timeout=mock.ANY)
@mock.patch('nova_powervm.virt.powervm.disk.ssp.SSPDiskAdapter.'
'vios_uuids', new_callable=mock.PropertyMock)
@mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping')
@mock.patch('pypowervm.tasks.scsi_mapper.add_map')
@mock.patch('pypowervm.tasks.partition.get_active_vioses')
def test_connect_disk_no_update(self, mock_active_vioses, mock_add_map,
mock_build_map, mock_vio_uuids):
# vio is a single-entry response. Wrap it and put it in a list
# to act as the feed for FeedTaskFx and FeedTask.
feed = [self.vio_wrap]
mock_active_vioses.return_value = feed
ft_fx = pvm_fx.FeedTaskFx(feed)
self.useFixture(ft_fx)
# The mock return values
mock_add_map.return_value = None
mock_build_map.return_value = 'fake_map'
# Need the driver to return the actual UUID of the VIOS in the feed,
# to match the FeedTask.
ssp = self._get_ssp_stor()
mock_vio_uuids.return_value = [self.vio_wrap.uuid]
inst = mock.Mock(uuid=fx.FAKE_INST_UUID)
# As initialized above, add_map returns None to skip the update.
ssp.connect_disk(inst, mock.Mock(), stg_ftsk=None)
mock_add_map.assert_called_once_with(self.vio_wrap, 'fake_map')
self.vio_wrap.update.assert_not_called()
@mock.patch('pypowervm.tasks.storage.rm_tier_storage')
def test_delete_disks(self, mock_rm_tstor):
sspdrv = self._get_ssp_stor()
sspdrv.delete_disks(['disk1', 'disk2'])
mock_rm_tstor.assert_called_once_with(['disk1', 'disk2'],
tier=sspdrv._tier)
@mock.patch('nova_powervm.virt.powervm.disk.ssp.SSPDiskAdapter.'
'vios_uuids', new_callable=mock.PropertyMock)
@mock.patch('pypowervm.tasks.scsi_mapper.find_maps')
@mock.patch('pypowervm.tasks.scsi_mapper.remove_maps')
@mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping')
@mock.patch('pypowervm.tasks.partition.get_active_vioses')
def test_disconnect_disk(self, mock_active_vioses, mock_build_map,
mock_remove_maps, mock_find_maps, mock_vio_uuids):
# vio is a single-entry response. Wrap it and put it in a list
# to act as the feed for FeedTaskFx and FeedTask.
feed = [self.vio_wrap]
ft_fx = pvm_fx.FeedTaskFx(feed)
mock_active_vioses.return_value = feed
self.useFixture(ft_fx)
# The mock return values
mock_build_map.return_value = 'fake_map'
# Need the driver to return the actual UUID of the VIOS in the feed,
# to match the FeedTask.
ssp = self._get_ssp_stor()
mock_vio_uuids.return_value = [self.vio_wrap.uuid]
# Make the LU's to remove
def mklu(udid):
lu = pvm_stg.LU.bld(None, 'lu_%s' % udid, 1)
lu._udid('27%s' % udid)
return lu
lu1 = mklu('abc')
lu2 = mklu('def')
def remove_resp(vios_w, client_lpar_id, match_func=None,
include_orphans=False):
return [mock.Mock(backing_storage=lu1),
mock.Mock(backing_storage=lu2)]
mock_remove_maps.side_effect = remove_resp
mock_find_maps.side_effect = remove_resp
# As initialized above, remove_maps returns the removed mappings, triggering the update.
lu_list = ssp.disconnect_disk(self.instance, stg_ftsk=None)
self.assertEqual({lu1, lu2}, set(lu_list))
mock_remove_maps.assert_called_once_with(
self.vio_wrap, fx.FAKE_INST_UUID_PVM, match_func=mock.ANY)
self.vio_wrap.update.assert_called_once_with(timeout=mock.ANY)
def test_shared_stg_calls(self):
# Check the good paths
ssp_stor = self._get_ssp_stor()
data = ssp_stor.check_instance_shared_storage_local('context', 'inst')
self.assertTrue(
ssp_stor.check_instance_shared_storage_remote('context', data))
ssp_stor.check_instance_shared_storage_cleanup('context', data)
# Check bad paths...
# No data
self.assertFalse(
ssp_stor.check_instance_shared_storage_remote('context', None))
# Unexpected data format
self.assertFalse(
ssp_stor.check_instance_shared_storage_remote('context', 'bad'))
# Good data, but not the same SSP uuid
not_same = {'ssp_uuid': 'uuid value not the same'}
self.assertFalse(
ssp_stor.check_instance_shared_storage_remote('context', not_same))
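The remote check reduces to comparing SSP UUIDs, with defensive handling of absent or malformed payloads; a minimal sketch mirroring the cases above:
def check_shared_storage_remote(my_ssp_uuid, data):
    try:
        return data['ssp_uuid'] == my_ssp_uuid
    except (TypeError, KeyError):
        # None, a plain string, or a dict missing the key: not shared.
        return False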
def _bld_mocks_for_instance_disk(self):
inst = mock.Mock()
inst.name = 'my-instance-name'
lpar_wrap = mock.Mock()
lpar_wrap.id = 4
lu_wrap = mock.Mock(spec=pvm_stg.LU)
lu_wrap.configure_mock(name='boot_my_instance_name', udid='lu_udid')
smap = mock.Mock(backing_storage=lu_wrap,
server_adapter=mock.Mock(lpar_id=4))
# Build mock VIOS Wrappers as the returns from VIOS.wrap.
# vios1 and vios2 will both have the mapping for client ID 4 and LU
# named boot_my_instance_name.
smaps = [mock.Mock(), mock.Mock(), mock.Mock(), smap]
vios1 = mock.Mock(spec=pvm_vios.VIOS)
vios1.configure_mock(name='vios1', uuid='uuid1', scsi_mappings=smaps)
vios2 = mock.Mock(spec=pvm_vios.VIOS)
vios2.configure_mock(name='vios2', uuid='uuid2', scsi_mappings=smaps)
# vios3 will not have the mapping
vios3 = mock.Mock(spec=pvm_vios.VIOS)
vios3.configure_mock(name='vios3', uuid='uuid3',
scsi_mappings=[mock.Mock(), mock.Mock()])
return inst, lpar_wrap, vios1, vios2, vios3
@mock.patch('nova_powervm.virt.powervm.disk.ssp.SSPDiskAdapter.'
'vios_uuids', new_callable=mock.PropertyMock)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper')
@mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
def test_get_bootdisk_iter(self, mock_vio_get, mock_lw, mock_vio_uuids):
inst, lpar_wrap, vio1, vio2, vio3 = self._bld_mocks_for_instance_disk()
mock_lw.return_value = lpar_wrap
mock_vio_uuids.return_value = [1, 2]
ssp_stor = self._get_ssp_stor()
# Test with two VIOSes, both of which contain the mapping. Force the
# method to get the lpar_wrap.
mock_vio_get.side_effect = [vio1, vio2]
idi = ssp_stor._get_bootdisk_iter(inst)
lu, vios = next(idi)
self.assertEqual('lu_udid', lu.udid)
self.assertEqual('vios1', vios.name)
mock_vio_get.assert_called_once_with(self.apt, uuid=1,
xag=[const.XAG.VIO_SMAP])
lu, vios = next(idi)
self.assertEqual('lu_udid', lu.udid)
self.assertEqual('vios2', vios.name)
mock_vio_get.assert_called_with(self.apt, uuid=2,
xag=[const.XAG.VIO_SMAP])
self.assertRaises(StopIteration, next, idi)
self.assertEqual(2, mock_vio_get.call_count)
mock_lw.assert_called_once_with(self.apt, inst)
# Same, but prove that breaking out of the loop early avoids the second
# get call.
mock_vio_get.reset_mock()
mock_lw.reset_mock()
mock_vio_get.side_effect = [vio1, vio2]
for lu, vios in ssp_stor._get_bootdisk_iter(inst):
self.assertEqual('lu_udid', lu.udid)
self.assertEqual('vios1', vios.name)
break
mock_vio_get.assert_called_once_with(self.apt, uuid=1,
xag=[const.XAG.VIO_SMAP])
# Now the first VIOS doesn't have the mapping, but the second does
mock_vio_get.reset_mock()
mock_vio_get.side_effect = [vio3, vio2]
idi = ssp_stor._get_bootdisk_iter(inst)
lu, vios = next(idi)
self.assertEqual('lu_udid', lu.udid)
self.assertEqual('vios2', vios.name)
mock_vio_get.assert_has_calls(
[mock.call(self.apt, uuid=uuid, xag=[const.XAG.VIO_SMAP])
for uuid in (1, 2)])
self.assertRaises(StopIteration, next, idi)
self.assertEqual(2, mock_vio_get.call_count)
# No hits
mock_vio_get.reset_mock()
mock_vio_get.side_effect = [vio3, vio3]
self.assertEqual([], list(ssp_stor._get_bootdisk_iter(inst)))
self.assertEqual(2, mock_vio_get.call_count)
@mock.patch('nova_powervm.virt.powervm.disk.ssp.SSPDiskAdapter.'
'vios_uuids', new_callable=mock.PropertyMock)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper')
@mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
@mock.patch('pypowervm.tasks.scsi_mapper.add_vscsi_mapping')
def test_connect_instance_disk_to_mgmt(self, mock_add, mock_vio_get,
mock_lw, mock_vio_uuids):
inst, lpar_wrap, vio1, vio2, vio3 = self._bld_mocks_for_instance_disk()
mock_lw.return_value = lpar_wrap
mock_vio_uuids.return_value = [1, 2]
ssp_stor = self._get_ssp_stor()
# Test with two VIOSes, both of which contain the mapping
mock_vio_get.side_effect = [vio1, vio2]
lu, vios = ssp_stor.connect_instance_disk_to_mgmt(inst)
self.assertEqual('lu_udid', lu.udid)
# Should hit on the first VIOS
self.assertIs(vio1, vios)
mock_add.assert_called_once_with(self.host_uuid, vio1, 'mp_uuid', lu)
# Now the first VIOS doesn't have the mapping, but the second does
mock_add.reset_mock()
mock_vio_get.side_effect = [vio3, vio2]
lu, vios = ssp_stor.connect_instance_disk_to_mgmt(inst)
self.assertEqual('lu_udid', lu.udid)
# Should hit on the second VIOS
self.assertIs(vio2, vios)
self.assertEqual(1, mock_add.call_count)
mock_add.assert_called_once_with(self.host_uuid, vio2, 'mp_uuid', lu)
# No hits
mock_add.reset_mock()
mock_vio_get.side_effect = [vio3, vio3]
self.assertRaises(npvmex.InstanceDiskMappingFailed,
ssp_stor.connect_instance_disk_to_mgmt, inst)
mock_add.assert_not_called()
# First add_vscsi_mapping call raises
mock_vio_get.side_effect = [vio1, vio2]
mock_add.side_effect = [Exception("mapping failed"), None]
lu, vios = ssp_stor.connect_instance_disk_to_mgmt(inst)
# Should hit on the second VIOS
self.assertIs(vio2, vios)
@mock.patch('pypowervm.tasks.scsi_mapper.remove_lu_mapping')
def test_disconnect_disk_from_mgmt(self, mock_rm_lu_map):
ssp_stor = self._get_ssp_stor()
ssp_stor.disconnect_disk_from_mgmt('vios_uuid', 'disk_name')
mock_rm_lu_map.assert_called_with(ssp_stor.adapter, 'vios_uuid',
'mp_uuid', disk_names=['disk_name'])

View File

@ -1,196 +0,0 @@
# Copyright 2015, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import absolute_import
import fixtures
import mock
from nova.virt.powervm_ext import driver
from nova.virt import fake
from pypowervm.tests import test_fixtures as pvm_fx
FAKE_INST_UUID = 'b6513403-fd7f-4ad0-ab27-f73bacbd3929'
FAKE_INST_UUID_PVM = '36513403-FD7F-4AD0-AB27-F73BACBD3929'
class ImageAPI(fixtures.Fixture):
"""Mock out the Glance API."""
def setUp(self):
super(ImageAPI, self).setUp()
self.img_api_fx = self.useFixture(fixtures.MockPatch('nova.image.API'))
class DiskAdapter(fixtures.Fixture):
"""Mock out the DiskAdapter."""
def setUp(self):
super(DiskAdapter, self).setUp()
self.std_disk_adpt_fx = self.useFixture(
fixtures.MockPatch('nova_powervm.virt.powervm.disk.localdisk.'
'LocalStorage'))
self.std_disk_adpt = self.std_disk_adpt_fx.mock
class HostCPUMetricCache(fixtures.Fixture):
"""Mock out the HostCPUMetricCache."""
def setUp(self):
super(HostCPUMetricCache, self).setUp()
self.host_cpu_stats = self.useFixture(
fixtures.MockPatch('pypowervm.tasks.monitor.host_cpu.'
'HostCPUMetricCache'))
class ComprehensiveScrub(fixtures.Fixture):
"""Mock out the ComprehensiveScrub."""
def setUp(self):
super(ComprehensiveScrub, self).setUp()
self.mock_comp_scrub = self.useFixture(
fixtures.MockPatch('pypowervm.tasks.storage.ComprehensiveScrub'))
class VolumeAdapter(fixtures.Fixture):
"""Mock out the VolumeAdapter."""
def __init__(self, patch_class):
self.patch_class = patch_class
def setUp(self):
super(VolumeAdapter, self).setUp()
self.std_vol_adpt_fx = self.useFixture(
fixtures.MockPatch(self.patch_class, __name__='MockVolumeAdapter'))
self.std_vol_adpt = self.std_vol_adpt_fx.mock
# Mock out connection_info so it returns a new mock on every call.
# The volume id is used in task names, which must be unique; there is
# only one mock volume driver for simplicity, but it must yield
# distinct names.
self.std_vol_adpt.return_value.connection_info.__getitem__\
.side_effect = mock.MagicMock
self.drv = self.std_vol_adpt.return_value
class PowerVMComputeDriver(fixtures.Fixture):
"""Construct a fake compute driver."""
@mock.patch('nova_powervm.virt.powervm.disk.localdisk.LocalStorage')
@mock.patch('nova_powervm.virt.powervm.driver.PowerVMDriver._get_adapter')
@mock.patch('pypowervm.tasks.partition.get_this_partition')
@mock.patch('pypowervm.tasks.cna.find_orphaned_trunks')
def _init_host(self, *args):
self.mock_sys = self.useFixture(fixtures.MockPatch(
'pypowervm.wrappers.managed_system.System.get')).mock
self.mock_sys.return_value = [mock.Mock(
uuid='host_uuid',
system_name='Server-8247-21L-SN9999999',
proc_compat_modes=('default', 'POWER7', 'POWER8'),
migration_data={'active_migrations_supported': 16,
'active_migrations_in_progress': 0})]
# Mock active vios
self.get_active_vios = self.useFixture(fixtures.MockPatch(
'pypowervm.tasks.partition.get_active_vioses')).mock
self.get_active_vios.return_value = ['mock_vios']
self.useFixture(fixtures.MockPatch(
'pypowervm.tasks.partition.validate_vios_ready'))
self.drv.session = self.drv.adapter.session
self.drv.init_host('FakeHost')
def setUp(self):
super(PowerVMComputeDriver, self).setUp()
# Set up the mock CPU stats (init_host uses it)
self.useFixture(HostCPUMetricCache())
self.scrubber = ComprehensiveScrub()
self.useFixture(self.scrubber)
self.drv = driver.PowerVMDriver(fake.FakeVirtAPI())
self.drv.adapter = self.useFixture(pvm_fx.AdapterFx()).adpt
self._init_host()
self.drv.image_api = mock.Mock()
disk_adpt_fx = self.useFixture(DiskAdapter())
self.drv.disk_dvr = disk_adpt_fx.std_disk_adpt
def cleanUp(self):
self.scrubber.mock_comp_scrub.mock.assert_called_once()
super(PowerVMComputeDriver, self).cleanUp()
class TaskFlow(fixtures.Fixture):
"""Construct a fake TaskFlow.
This fixture makes it easy to check if tasks were added to a task flow
without having to mock each task.
"""
def __init__(self, linear_flow='taskflow.patterns.linear_flow',
engines='taskflow.engines'):
"""Create the fixture.
:param linear_flow: The import path to patch for the linear flow.
:param engines: The import path to patch for the engines.
"""
super(TaskFlow, self).__init__()
self.linear_flow_import = linear_flow
self.engines_import = engines
def setUp(self):
super(TaskFlow, self).setUp()
self.tasks_added = []
self.lf_fix = self.useFixture(
fixtures.MockPatch(self.linear_flow_import))
self.lf_fix.mock.Flow.return_value.add.side_effect = self._record_tasks
self.engine_fx = self.useFixture(
fixtures.MockPatch(self.engines_import))
def _record_tasks(self, *args, **kwargs):
self.tasks_added.append(args[0])
def assert_tasks_added(self, testcase, expected_tasks):
# Ensure the lists are the same size.
testcase.assertEqual(len(expected_tasks), len(self.tasks_added),
'Expected tasks not added: %s, %s' %
(expected_tasks,
[t.name for t in self.tasks_added]))
def compare_tasks(expected, observed):
if expected.endswith('*'):
cmplen = len(expected[:-1])
testcase.assertEqual(expected[:cmplen], observed.name[:cmplen])
else:
testcase.assertEqual(expected, observed.name)
# Compare the expected names against the added tasks. Iterate
# explicitly; a bare map() is lazy under Python 3 and would never run.
for expected, observed in zip(expected_tasks, self.tasks_added):
compare_tasks(expected, observed)
class DriverTaskFlow(TaskFlow):
"""Specific TaskFlow fixture for the main compute driver."""
def __init__(self):
super(DriverTaskFlow, self).__init__(
linear_flow='nova_powervm.virt.powervm.driver.tf_lf',
engines='nova_powervm.virt.powervm.tasks.base.tf_eng')
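For context, the wildcard semantics of assert_tasks_added can be illustrated standalone; the task names below are hypothetical, chosen only to show that a trailing '*' compares prefixes while plain names must match exactly:
class FakeTask(object):
    def __init__(self, name):
        self.name = name

expected = ['crt_disk_from_img', 'plug_vifs*']
added = [FakeTask('crt_disk_from_img'), FakeTask('plug_vifs_instance-1')]
for exp, obs in zip(expected, added):
    if exp.endswith('*'):
        assert obs.name.startswith(exp[:-1])
    else:
        assert exp == obs.name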

View File

@ -1,67 +0,0 @@
# Copyright 2016, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova_powervm.virt.powervm.nvram import api
class NoopNvramStore(api.NvramStore):
def store(self, instance, data, force=True):
"""Store the NVRAM into the storage service.
:param instance: The nova instance object OR instance UUID.
:param data: the NVRAM data base64 encoded string
:param force: boolean whether an update should always be saved,
otherwise, check to see if it's changed.
"""
pass
def fetch(self, instance):
"""Fetch the NVRAM from the storage service.
:param instance: The nova instance object OR instance UUID.
:returns: the NVRAM data base64 encoded string
"""
return None
def delete(self, instance):
"""Delete the NVRAM from the storage service.
:param instance: The nova instance object OR instance UUID.
"""
pass
class ExpNvramStore(NoopNvramStore):
def fetch(self, instance):
"""Fetch the NVRAM from the storage service.
:param instance: The nova instance object OR instance UUID.
:returns: the NVRAM data base64 encoded string
"""
# Raise an exception so callers can verify that a fetch failure is
# surfaced rather than swallowed.
raise Exception('Error')
def delete(self, instance):
"""Delete the NVRAM from the storage service.
:param instance: The nova instance object OR instance UUID.
"""
# Raise an exception so callers can verify that delete failures are
# logged but not propagated.
raise Exception('Error')
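These no-op and exploding stores exist to drive the manager tests below; a real backend only needs the same three methods. A hedged sketch of a hypothetical in-memory variant honoring the documented instance-or-UUID and force semantics:
class DictNvramStore(api.NvramStore):
    """Hypothetical in-memory store, for illustration only."""
    def __init__(self):
        self._data = {}

    @staticmethod
    def _key(instance):
        # Accept either a nova instance object or a bare UUID string.
        return getattr(instance, 'uuid', instance)

    def store(self, instance, data, force=True):
        if force or self._data.get(self._key(instance)) != data:
            self._data[self._key(instance)] = data

    def fetch(self, instance):
        return self._data.get(self._key(instance))

    def delete(self, instance):
        self._data.pop(self._key(instance), None)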

View File

@ -1,97 +0,0 @@
# Copyright 2016, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import mock
from nova import test
from pypowervm import exceptions as pvm_exc
import time
from nova_powervm.tests.virt import powervm
from nova_powervm.tests.virt.powervm.nvram import fake_api
from nova_powervm.virt.powervm.nvram import api
from nova_powervm.virt.powervm.nvram import manager
from nova_powervm.virt.powervm import vm
class TestNvramManager(test.NoDBTestCase):
def setUp(self):
super(TestNvramManager, self).setUp()
self.fake_store = fake_api.NoopNvramStore()
self.fake_exp_store = fake_api.ExpNvramStore()
self.mock_store = self.useFixture(
fixtures.MockPatchObject(self.fake_store, 'store')).mock
self.mock_fetch = self.useFixture(
fixtures.MockPatchObject(self.fake_store, 'fetch')).mock
self.mock_remove = self.useFixture(
fixtures.MockPatchObject(self.fake_store, 'delete')).mock
self.mock_exp_remove = self.useFixture(
fixtures.MockPatchObject(self.fake_exp_store, 'delete')).mock
@mock.patch('nova_powervm.virt.powervm.nvram.manager.LOG.exception',
autospec=True)
@mock.patch.object(vm, 'get_instance_wrapper', autospec=True)
def test_store_with_exception(self, mock_get_inst, mock_log):
mock_get_inst.side_effect = pvm_exc.HttpError(mock.Mock())
mgr = manager.NvramManager(self.fake_store, mock.Mock(), mock.Mock())
mgr.store(powervm.TEST_INST1.uuid)
self.assertEqual(1, mock_log.call_count)
@mock.patch('nova_powervm.virt.powervm.nvram.manager.LOG.warning',
autospec=True)
@mock.patch.object(vm, 'get_instance_wrapper', autospec=True)
def test_store_with_not_found_exc(self, mock_get_inst, mock_log):
mock_get_inst.side_effect = pvm_exc.HttpNotFound(mock.Mock())
mgr = manager.NvramManager(self.fake_store, mock.Mock(), mock.Mock())
mgr.store(powervm.TEST_INST1.uuid)
self.assertEqual(0, mock_log.call_count)
@mock.patch.object(vm, 'get_instance_wrapper', autospec=True)
def test_manager(self, mock_get_inst):
mgr = manager.NvramManager(self.fake_store, mock.Mock(), mock.Mock())
mgr.store(powervm.TEST_INST1.uuid)
mgr.store(powervm.TEST_INST2)
mgr.fetch(powervm.TEST_INST2)
mgr.fetch(powervm.TEST_INST2.uuid)
mgr.remove(powervm.TEST_INST2)
# Simulate quick, repeated stores of the same LPAR by poking the queue.
mgr._queue.put(powervm.TEST_INST1)
mgr._queue.put(powervm.TEST_INST1)
mgr._queue.put(powervm.TEST_INST2)
time.sleep(0)
mgr.shutdown()
self.mock_store.assert_has_calls(
[mock.call(powervm.TEST_INST1.uuid, mock.ANY),
mock.call(powervm.TEST_INST2.uuid, mock.ANY)])
self.mock_fetch.assert_has_calls(
[mock.call(powervm.TEST_INST2.uuid)] * 2)
self.mock_remove.assert_called_once_with(powervm.TEST_INST2.uuid)
self.mock_remove.reset_mock()
# Test when fetch returns an exception
mgr_exp = manager.NvramManager(self.fake_exp_store,
mock.Mock(), mock.Mock())
self.assertRaises(api.NVRAMDownloadException,
mgr_exp.fetch, powervm.TEST_INST2)
# Test exception being logged but not raised during remove
mgr_exp.remove(powervm.TEST_INST2.uuid)
self.mock_exp_remove.assert_called_once_with(powervm.TEST_INST2.uuid)

View File

@ -1,319 +0,0 @@
# Copyright 2016, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import test
from requests.exceptions import RequestException
from swiftclient import exceptions as swft_exc
from swiftclient import service as swft_srv
from nova_powervm.tests.virt import powervm
from nova_powervm.virt.powervm.nvram import api
from nova_powervm.virt.powervm.nvram import swift
class TestSwiftStore(test.NoDBTestCase):
def setUp(self):
super(TestSwiftStore, self).setUp()
self.flags(swift_password='secret', swift_auth_url='url',
group='powervm')
self.swift_store = swift.SwiftNvramStore()
def test_run_operation(self):
fake_result = [{'key1': 'value1'}, {'2key1': '2value1'}]
fake_result2 = fake_result[0]
def fake_generator(alist):
for item in alist:
yield item
# Mock out the 'list' method that is expected to be called.
list_op = mock.Mock()
self.swift_store.swift_service = mock.Mock(list=list_op)
# Setup expected results
list_op.return_value = fake_generator(fake_result)
results = self.swift_store._run_operation('list', 1, x=2)
list_op.assert_called_once_with(1, x=2)
# Returns a copy of the results
self.assertEqual(results, fake_result)
self.assertNotEqual(id(results), id(fake_result))
# Try a single result - Setup expected results
list_op.reset_mock()
list_op.return_value = fake_result2
results = self.swift_store._run_operation('list', 3, x=4)
list_op.assert_called_once_with(3, x=4)
# Returns the actual result
self.assertEqual(results, fake_result2)
self.assertEqual(id(results), id(fake_result2))
# Should raise any swift errors encountered
list_op.side_effect = swft_srv.SwiftError('Error message.')
self.assertRaises(swft_srv.SwiftError, self.swift_store._run_operation,
'list', 3, x=4)
def _build_results(self, names):
listing = [{'name': name} for name in names]
return [{'success': True, 'listing': listing}]
def test_get_name_from_listing(self):
names = self.swift_store._get_name_from_listing(
self._build_results(['snoopy']))
self.assertEqual(['snoopy'], names)
def test_get_container_names(self):
with mock.patch.object(self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['container'])
names = self.swift_store._get_container_names()
self.assertEqual(['container'], names)
mock_run.assert_called_once_with('list',
options={'long': True})
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_get_container_names', autospec=True)
def test_get_object_names(self, mock_container_names):
with mock.patch.object(self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['obj', 'obj2'])
# First run, no containers.
mock_container_names.return_value = []
names = self.swift_store._get_object_names('powervm_nvram')
self.assertEqual([], names)
self.assertEqual(1, mock_container_names.call_count)
# Test without a prefix
mock_container_names.return_value = ['powervm_nvram']
names = self.swift_store._get_object_names('powervm_nvram')
self.assertEqual(['obj', 'obj2'], names)
mock_run.assert_called_once_with(
'list', container='powervm_nvram',
options={'long': True, 'prefix': None})
self.assertEqual(mock_container_names.call_count, 2)
# Test with a prefix
names = self.swift_store._get_object_names('powervm_nvram',
prefix='obj')
self.assertEqual(['obj', 'obj2'], names)
mock_run.assert_called_with(
'list', container='powervm_nvram',
options={'long': True, 'prefix': 'obj'})
# Second run should not increment the call count here
self.assertEqual(mock_container_names.call_count, 2)
@mock.patch('swiftclient.service.SwiftUploadObject', autospec=True)
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_exists', autospec=True)
def test_underscore_store(self, mock_exists, mock_swiftuploadobj):
mock_exists.return_value = True
with mock.patch.object(self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['obj'])
self.swift_store._store(powervm.TEST_INST1.uuid, 'data')
mock_run.assert_called_once_with('upload', 'powervm_nvram',
mock.ANY, options=None)
# Test unsuccessful upload
mock_result = [{'success': False,
'error': RequestException('Error Message.')}]
mock_run.return_value = mock_result
self.assertRaises(api.NVRAMUploadException,
self.swift_store._store, powervm.TEST_INST1.uuid,
'data')
# Test retry upload
mock_run.reset_mock()
mock_swiftuploadobj.reset_mock()
mock_res_obj = {'success': False,
'error': swft_exc.
ClientException('Error Message.'),
'object': '6ecb1386-53ab-43da-9e04-54e986ad4a9d'}
mock_run.side_effect = [mock_res_obj,
self._build_results(['obj'])]
self.swift_store._store(powervm.TEST_INST1.uuid, 'data')
mock_run.assert_called_with('upload', 'powervm_nvram',
mock.ANY, options=None)
self.assertEqual(mock_run.call_count, 2)
self.assertEqual(mock_swiftuploadobj.call_count, 2)
@mock.patch('swiftclient.service.SwiftUploadObject', autospec=True)
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_exists', autospec=True)
def test_underscore_store_not_exists(self, mock_exists,
mock_swiftuploadobj):
mock_exists.return_value = False
with mock.patch.object(self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['obj'])
self.swift_store._store(powervm.TEST_INST1.uuid, 'data')
mock_run.assert_called_once_with(
'upload', 'powervm_nvram', mock.ANY,
options={'leave_segments': True})
# Test retry upload
mock_run.reset_mock()
mock_swiftuploadobj.reset_mock()
mock_res_obj = {'success': False,
'error': swft_exc.
ClientException('Error Message.'),
'object': '6ecb1386-53ab-43da-9e04-54e986ad4a9d'}
mock_run.side_effect = [mock_res_obj,
self._build_results(['obj'])]
self.swift_store._store(powervm.TEST_INST1.uuid, 'data')
mock_run.assert_called_with('upload', 'powervm_nvram', mock.ANY,
options={'leave_segments': True})
self.assertEqual(mock_run.call_count, 2)
self.assertEqual(mock_swiftuploadobj.call_count, 2)
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_exists', autospec=True)
def test_store(self, mock_exists):
# Test forcing an update
with mock.patch.object(self.swift_store, '_store') as mock_store:
mock_exists.return_value = False
self.swift_store.store(powervm.TEST_INST1.uuid, 'data', force=True)
mock_store.assert_called_once_with(powervm.TEST_INST1.uuid,
'data', exists=False)
with mock.patch.object(
self.swift_store, '_store') as mock_store, mock.patch.object(
self.swift_store, '_run_operation') as mock_run:
mock_exists.return_value = True
data_md5_hash = '8d777f385d3dfec8815d20f7496026dc'
results = self._build_results(['obj'])
results[0]['headers'] = {'etag': data_md5_hash}
mock_run.return_value = results
self.swift_store.store(powervm.TEST_INST1.uuid, 'data',
force=False)
self.assertFalse(mock_store.called)
mock_run.assert_called_once_with(
'stat', options={'long': True},
container='powervm_nvram', objects=[powervm.TEST_INST1.uuid])
def test_store_slot_map(self):
# Test forcing an update
with mock.patch.object(self.swift_store, '_store') as mock_store:
self.swift_store.store_slot_map("test_slot", 'data')
mock_store.assert_called_once_with(
'test_slot', 'data')
@mock.patch('os.remove', autospec=True)
@mock.patch('tempfile.NamedTemporaryFile', autospec=True)
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_exists', autospec=True)
def test_fetch(self, mock_exists, mock_tmpf, mock_rmv):
mock_exists.return_value = True
with mock.patch('nova_powervm.virt.powervm.nvram.swift.open',
mock.mock_open(read_data='data to read')
) as m_open, mock.patch.object(
self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['obj'])
mock_tmpf.return_value.__enter__.return_value.name = 'fname'
data = self.swift_store.fetch(powervm.TEST_INST1)
self.assertEqual('data to read', data)
mock_rmv.assert_called_once_with(m_open.return_value.name)
# Bad result from the download
mock_run.return_value[0]['success'] = False
self.assertRaises(api.NVRAMDownloadException,
self.swift_store.fetch, powervm.TEST_INST1)
@mock.patch('os.remove', autospec=True)
@mock.patch('tempfile.NamedTemporaryFile', autospec=True)
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_exists', autospec=True)
def test_fetch_slot_map(self, mock_exists, mock_tmpf, mock_rmv):
mock_exists.return_value = True
with mock.patch('nova_powervm.virt.powervm.nvram.swift.open',
mock.mock_open(read_data='data to read')
) as m_open, mock.patch.object(
self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['obj'])
mock_tmpf.return_value.__enter__.return_value.name = 'fname'
data = self.swift_store.fetch_slot_map("test_slot")
self.assertEqual('data to read', data)
mock_rmv.assert_called_once_with(m_open.return_value.name)
@mock.patch('os.remove', autospec=True)
@mock.patch('tempfile.NamedTemporaryFile', autospec=True)
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_exists', autospec=True)
def test_fetch_slot_map_no_exist(self, mock_exists, mock_tmpf, mock_rmv):
mock_exists.return_value = False
data = self.swift_store.fetch_slot_map("test_slot")
self.assertIsNone(data)
# Make sure the remove (part of the finally block) is never called.
# Should not get that far.
self.assertFalse(mock_rmv.called)
def test_delete(self):
with mock.patch.object(self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['obj'])
self.swift_store.delete(powervm.TEST_INST1)
mock_run.assert_called_once_with('delete',
container='powervm_nvram',
objects=[powervm.TEST_INST1.uuid])
# Bad result from the operation
mock_run.return_value[0]['success'] = False
self.assertRaises(api.NVRAMDeleteException,
self.swift_store.delete, powervm.TEST_INST1)
def test_delete_slot_map(self):
with mock.patch.object(self.swift_store, '_run_operation') as mock_run:
mock_run.return_value = self._build_results(['obj'])
self.swift_store.delete_slot_map('test_slot')
mock_run.assert_called_once_with(
'delete', container='powervm_nvram', objects=['test_slot'])
# Bad result from the operation
mock_run.return_value[0]['success'] = False
self.assertRaises(
api.NVRAMDeleteException, self.swift_store.delete_slot_map,
'test_slot')
@mock.patch('nova_powervm.virt.powervm.nvram.swift.SwiftNvramStore.'
'_get_object_names', autospec=True)
def test_exists(self, mock_get_obj_names):
# Test where there are elements in here
mock_get_obj_names.return_value = ['obj', 'obj1', 'obj2']
self.assertTrue(self.swift_store._exists('obj'))
# Test where there are objects that start with the prefix, but aren't
# actually there themselves
mock_get_obj_names.return_value = ['obj1', 'obj2']
self.assertFalse(self.swift_store._exists('obj'))
def test_optional_options(self):
"""Test optional config values."""
# Not in the sparse one from setUp()
self.assertIsNone(self.swift_store.options['os_cacert'])
self.assertIsNone(self.swift_store.options['os_endpoint_type'])
# Create a new one with the optional values set
self.flags(swift_cacert='/path/to/ca.pem', group='powervm')
self.flags(swift_endpoint_type='internalURL', group='powervm')
swift_store = swift.SwiftNvramStore()
self.assertEqual('/path/to/ca.pem', swift_store.options['os_cacert'])
self.assertEqual('internalURL',
swift_store.options['os_endpoint_type'])
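Taken together, test_run_operation fixes three behaviors for _run_operation: generator results are drained into a fresh list, single results are returned by identity, and SwiftError is left to propagate. A minimal sketch under those assumptions (written as a standalone function; the removed code was a method on SwiftNvramStore):

import types


def run_operation(swift_service, op_name, *args, **kwargs):
    """Dispatch an operation on a swiftclient SwiftService by name."""
    operation = getattr(swift_service, op_name)
    # Any SwiftError raised here is deliberately allowed to propagate.
    result = operation(*args, **kwargs)
    if isinstance(result, types.GeneratorType):
        # Drain the generator so callers get a reusable copy of the results.
        return list(result)
    # Non-generator results are handed back unchanged (same object identity).
    return result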

View File

@ -1,67 +0,0 @@
# Copyright IBM Corp. and contributors
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import test
from nova_powervm.virt.powervm.tasks import image as tsk_img
class TestImage(test.NoDBTestCase):
def test_update_task_state(self):
def func(task_state, expected_state='delirious'):
self.assertEqual('task_state', task_state)
self.assertEqual('delirious', expected_state)
tf = tsk_img.UpdateTaskState(func, 'task_state')
self.assertEqual('update_task_state_task_state', tf.name)
tf.execute()
def func2(task_state, expected_state=None):
self.assertEqual('task_state', task_state)
self.assertEqual('expected_state', expected_state)
tf = tsk_img.UpdateTaskState(func2, 'task_state',
expected_state='expected_state')
tf.execute()
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tsk_img.UpdateTaskState(func, 'task_state')
tf.assert_called_once_with(name='update_task_state_task_state')
@mock.patch('nova_powervm.virt.powervm.image.stream_blockdev_to_glance',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.image.generate_snapshot_metadata',
autospec=True)
def test_stream_to_glance(self, mock_metadata, mock_stream):
mock_metadata.return_value = 'metadata'
mock_inst = mock.Mock()
mock_inst.name = 'instance_name'
tf = tsk_img.StreamToGlance('context', 'image_api', 'image_id',
mock_inst)
self.assertEqual('stream_to_glance', tf.name)
tf.execute('disk_path')
mock_metadata.assert_called_with('context', 'image_api', 'image_id',
mock_inst)
mock_stream.assert_called_with('context', 'image_api', 'image_id',
'metadata', 'disk_path')
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tsk_img.StreamToGlance('context', 'image_api', 'image_id',
mock_inst)
tf.assert_called_once_with(name='stream_to_glance',
requires='disk_path')
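The expectations above fully determine UpdateTaskState's shape: the task name is derived from the state, and expected_state is only forwarded when the caller supplied one, so the callback's own default still applies otherwise. A minimal sketch under those assumptions:

from taskflow import task


class UpdateTaskState(task.Task):
    """Invoke the supplied update_task_state callback when executed."""

    def __init__(self, update_task_state, task_state, expected_state=None):
        super(UpdateTaskState, self).__init__(
            name='update_task_state_' + task_state)
        self.update_task_state = update_task_state
        self.task_state = task_state
        # Only pass expected_state through when the caller set one, so the
        # callback's own default ('delirious' in the test) survives.
        self.kwargs = {}
        if expected_state is not None:
            self.kwargs['expected_state'] = expected_state

    def execute(self):
        self.update_task_state(self.task_state, **self.kwargs)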

View File

@ -1,416 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import eventlet
import mock
from nova import exception
from nova import objects
from nova import test
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.wrappers import iocard as pvm_card
from pypowervm.wrappers import network as pvm_net
from nova_powervm.tests.virt import powervm
from nova_powervm.virt.powervm.tasks import network as tf_net
def cna(mac):
"""Builds a mock Client Network Adapter (or VNIC) for unit tests."""
nic = mock.MagicMock()
nic.mac = mac
nic.vswitch_uri = 'fake_href'
return nic
class TestNetwork(test.NoDBTestCase):
def setUp(self):
super(TestNetwork, self).setUp()
self.flags(host='host1')
self.apt = self.useFixture(pvm_fx.AdapterFx()).adpt
self.mock_lpar_wrap = mock.MagicMock()
self.mock_lpar_wrap.can_modify_io.return_value = True, None
@mock.patch('nova_powervm.virt.powervm.vif.unplug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_unplug_vifs(self, mock_vm_get, mock_unplug):
"""Tests that a delete of the vif can be done."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Mock up the CNA responses.
cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11'), cna('AABBCCDDEE22')]
mock_vm_get.return_value = cnas
# Mock up the network info. This also validates that they will be
# sanitized to upper case.
net_info = [
{'address': 'aa:bb:cc:dd:ee:ff'}, {'address': 'aa:bb:cc:dd:ee:22'},
{'address': 'aa:bb:cc:dd:ee:33'}
]
# Mock out the vif driver
def validate_unplug(adapter, host_uuid, instance, vif,
slot_mgr, cna_w_list=None):
self.assertEqual(adapter, self.apt)
self.assertEqual('host_uuid', host_uuid)
self.assertEqual(instance, inst)
self.assertIn(vif, net_info)
self.assertEqual('slot_mgr', slot_mgr)
self.assertEqual(cna_w_list, cnas)
mock_unplug.side_effect = validate_unplug
# Run method
p_vifs = tf_net.UnplugVifs(self.apt, inst, net_info, 'host_uuid',
'slot_mgr')
p_vifs.execute(self.mock_lpar_wrap)
# Make sure the unplug was invoked, so that we know that the validation
# code was called
self.assertEqual(3, mock_unplug.call_count)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_net.UnplugVifs(self.apt, inst, net_info, 'host_uuid',
'slot_mgr')
tf.assert_called_once_with(name='unplug_vifs', requires=['lpar_wrap'])
def test_unplug_vifs_invalid_state(self):
"""Tests that the delete raises an exception if bad VM state."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Mock that the state is incorrect
self.mock_lpar_wrap.can_modify_io.return_value = False, 'bad'
# Run method
p_vifs = tf_net.UnplugVifs(self.apt, inst, mock.Mock(), 'host_uuid',
'slot_mgr')
self.assertRaises(exception.VirtualInterfaceUnplugException,
p_vifs.execute, self.mock_lpar_wrap)
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_vnics', autospec=True)
def test_plug_vifs_rmc(self, mock_vnic_get, mock_cna_get, mock_plug):
"""Tests that a crt vif can be done with secure RMC."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Mock up the CNA response. One should already exist, the other
# should not.
pre_cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11')]
mock_cna_get.return_value = copy.deepcopy(pre_cnas)
# Ditto VNIC response.
mock_vnic_get.return_value = [cna('AABBCCDDEE33'), cna('AABBCCDDEE44')]
# Mock up the network info. This also validates that they will be
# sanitized to upper case.
net_info = [
{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'},
{'address': 'aa:bb:cc:dd:ee:22', 'vnic_type': 'normal'},
{'address': 'aa:bb:cc:dd:ee:33', 'vnic_type': 'direct'},
{'address': 'aa:bb:cc:dd:ee:55', 'vnic_type': 'direct'}
]
# Both updates run first (one CNA, one VNIC); then the CNA create, then
# the VNIC create.
mock_new_cna = mock.Mock(spec=pvm_net.CNA)
mock_new_vnic = mock.Mock(spec=pvm_card.VNIC)
mock_plug.side_effect = ['upd_cna', 'upd_vnic',
mock_new_cna, mock_new_vnic]
# Run method
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
all_cnas = p_vifs.execute(self.mock_lpar_wrap)
# A new VIF should be created twice (one CNA, one VNIC).
mock_plug.assert_any_call(self.apt, 'host_uuid', inst, net_info[0],
'slot_mgr', new_vif=False)
mock_plug.assert_any_call(self.apt, 'host_uuid', inst, net_info[1],
'slot_mgr', new_vif=True)
mock_plug.assert_any_call(self.apt, 'host_uuid', inst, net_info[2],
'slot_mgr', new_vif=False)
mock_plug.assert_any_call(self.apt, 'host_uuid', inst, net_info[3],
'slot_mgr', new_vif=True)
# The Task provides the list of original CNAs plus only CNAs that were
# created.
self.assertEqual(pre_cnas + [mock_new_cna], all_cnas)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
tf.assert_called_once_with(name='plug_vifs', provides='vm_cnas',
requires=['lpar_wrap'])
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_plug_vifs_rmc_no_create(self, mock_vm_get, mock_plug):
"""Verifies if no creates are needed, none are done."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Mock up the CNA response. Both should already exist.
mock_vm_get.return_value = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11')]
# Mock up the network info. This also validates that they will be
# sanitized to upper case. This also validates that we don't call
# get_vnics if no nets have vnic_type 'direct'.
net_info = [
{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'},
{'address': 'aa:bb:cc:dd:ee:11', 'vnic_type': 'normal'}
]
# Run method
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
p_vifs.execute(self.mock_lpar_wrap)
# The create should have been called with new_vif as False.
mock_plug.assert_called_with(
self.apt, 'host_uuid', inst, net_info[1],
'slot_mgr', new_vif=False)
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_plug_vifs_invalid_state(self, mock_vm_get, mock_plug):
"""Tests that a crt_vif fails when the LPAR state is bad."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Mock up the CNA response. Only doing one for simplicity
mock_vm_get.return_value = []
net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}]
# Mock that the state is incorrect
self.mock_lpar_wrap.can_modify_io.return_value = False, 'bad'
# Run method
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
self.assertRaises(exception.VirtualInterfaceCreateException,
p_vifs.execute, self.mock_lpar_wrap)
# The create should not have been invoked
self.assertEqual(0, mock_plug.call_count)
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_plug_vifs_timeout(self, mock_vm_get, mock_plug):
"""Tests that crt vif failure via loss of neutron callback."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Mock up the CNA response. Only doing one for simplicity
mock_vm_get.return_value = [cna('AABBCCDDEE11')]
# Mock up the network info.
net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}]
# Ensure that an exception is raised by a timeout.
mock_plug.side_effect = eventlet.timeout.Timeout()
# Run method
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
self.assertRaises(exception.VirtualInterfaceCreateException,
p_vifs.execute, self.mock_lpar_wrap)
# The create should have only been called once.
self.assertEqual(1, mock_plug.call_count)
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_plug_vifs_diff_host(self, mock_vm_get, mock_plug):
"""Tests that crt vif handles bad inst.host value."""
inst = powervm.TEST_INST1
# Set this up as a different host from the inst.host
self.flags(host='host2')
# Mock up the CNA response. Only doing one for simplicity
mock_vm_get.return_value = [cna('AABBCCDDEE11')]
# Mock up the network info.
net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}]
# Run method
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
with mock.patch.object(inst, 'save') as mock_inst_save:
p_vifs.execute(self.mock_lpar_wrap)
# The create should have only been called once.
self.assertEqual(1, mock_plug.call_count)
# save() should be called twice: for the new host, then after restoring it
self.assertEqual(2, mock_inst_save.call_count)
self.assertEqual('host1', inst.host)
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_plug_vifs_diff_host_except(self, mock_vm_get, mock_plug):
"""Tests that crt vif handles bad inst.host value.
This test ensures that if we get a timeout exception we still reset
the inst.host value back to the original value
"""
inst = powervm.TEST_INST1
# Set this up as a different host from the inst.host
self.flags(host='host2')
# Mock up the CNA response. Only doing one for simplicity
mock_vm_get.return_value = [cna('AABBCCDDEE11')]
# Mock up the network info.
net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}]
# Ensure that an exception is raised by a timeout.
mock_plug.side_effect = eventlet.timeout.Timeout()
# Run method
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
with mock.patch.object(inst, 'save') as mock_inst_save:
self.assertRaises(exception.VirtualInterfaceCreateException,
p_vifs.execute, self.mock_lpar_wrap)
# The create should have only been called once.
self.assertEqual(1, mock_plug.call_count)
# save() should be called twice: for the new host, then after restoring it
self.assertEqual(2, mock_inst_save.call_count)
self.assertEqual('host1', inst.host)
@mock.patch('nova_powervm.virt.powervm.vif.unplug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_plug_vifs_revert(self, mock_vm_get, mock_plug, mock_unplug):
"""Tests that the revert flow works properly."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Fake CNA list. The one pre-existing VIF should *not* get reverted.
cna_list = [cna('AABBCCDDEEFF'), cna('FFEEDDCCBBAA')]
mock_vm_get.return_value = cna_list
# Mock up the network info. Three roll backs.
net_info = [
{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'},
{'address': 'aa:bb:cc:dd:ee:22', 'vnic_type': 'normal'},
{'address': 'aa:bb:cc:dd:ee:33', 'vnic_type': 'normal'}
]
# Make sure we test raising an exception
mock_unplug.side_effect = [exception.NovaException(), None]
# Run method
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
p_vifs.execute(self.mock_lpar_wrap)
p_vifs.revert(self.mock_lpar_wrap, mock.Mock(), mock.Mock())
# The unplug should be called twice. The exception shouldn't stop the
# second call.
self.assertEqual(2, mock_unplug.call_count)
# Make sure each call is invoked correctly. The first plug was not a
# new vif, so it should not be reverted.
c2 = mock.call(self.apt, 'host_uuid', inst, net_info[1],
'slot_mgr', cna_w_list=cna_list)
c3 = mock.call(self.apt, 'host_uuid', inst, net_info[2],
'slot_mgr', cna_w_list=cna_list)
mock_unplug.assert_has_calls([c2, c3])
@mock.patch('nova_powervm.virt.powervm.vif.plug_secure_rmc_vif',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif.get_secure_rmc_vswitch',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif.plug', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_plug_mgmt_vif(self, mock_vm_get, mock_plug,
mock_get_rmc_vswitch, mock_plug_rmc_vif):
"""Tests that a mgmt vif can be created."""
inst = objects.Instance(**powervm.TEST_INSTANCE)
# Mock up the rmc vswitch
vswitch_w = mock.MagicMock()
vswitch_w.href = 'fake_mgmt_uri'
mock_get_rmc_vswitch.return_value = vswitch_w
# Run method such that it triggers a fresh CNA search
p_vifs = tf_net.PlugMgmtVif(self.apt, inst, 'host_uuid', 'slot_mgr')
p_vifs.execute(None)
# With the default get_cnas mock (which returns a Mock()), we think we
# found an existing management CNA.
self.assertEqual(0, mock_plug_rmc_vif.call_count)
mock_vm_get.assert_called_once_with(
self.apt, inst, vswitch_uri='fake_mgmt_uri')
# Now mock get_cnas to return no hits
mock_vm_get.reset_mock()
mock_vm_get.return_value = []
p_vifs.execute(None)
# Get was called; and since the mgmt CNA was missing, plug was called too.
self.assertEqual(1, mock_plug_rmc_vif.call_count)
mock_vm_get.assert_called_once_with(
self.apt, inst, vswitch_uri='fake_mgmt_uri')
# Now pass CNAs, but not the mgmt vif, "from PlugVifs"
cnas = [mock.Mock(vswitch_uri='uri1'), mock.Mock(vswitch_uri='uri2')]
mock_plug_rmc_vif.reset_mock()
mock_vm_get.reset_mock()
p_vifs.execute(cnas)
# Get wasn't called, since the CNAs were passed "from PlugVifs"; but
# since the mgmt vif wasn't included, plug was called.
self.assertEqual(0, mock_vm_get.call_count)
self.assertEqual(1, mock_plug_rmc_vif.call_count)
# Finally, pass CNAs including the mgmt.
cnas.append(mock.Mock(vswitch_uri='fake_mgmt_uri'))
mock_plug_rmc_vif.reset_mock()
p_vifs.execute(cnas)
# Neither get nor plug was called.
self.assertEqual(0, mock_vm_get.call_count)
self.assertEqual(0, mock_plug_rmc_vif.call_count)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_net.PlugMgmtVif(self.apt, inst, 'host_uuid', 'slot_mgr')
tf.assert_called_once_with(name='plug_mgmt_vif', provides='mgmt_cna',
requires=['vm_cnas'])
def test_get_vif_events(self):
# Set up common mocks.
inst = objects.Instance(**powervm.TEST_INSTANCE)
net_info = [mock.MagicMock(), mock.MagicMock()]
net_info[0]['id'] = 'a'
net_info[0].get.return_value = False
net_info[1]['id'] = 'b'
net_info[1].get.return_value = True
# Set up the runner.
p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info,
'host_uuid', 'slot_mgr')
p_vifs.crt_network_infos = net_info
resp = p_vifs._get_vif_events()
# Only one should be returned since only one was active.
self.assertEqual(1, len(resp))
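Several tests above rely on net_info MACs like 'aa:bb:cc:dd:ee:ff' matching CNA MACs stored as 'AABBCCDDEEFF'. A minimal sketch of the normalization that comparison assumes (the helper name is hypothetical, not the removed module's):

def norm_mac(mac):
    """Normalize a MAC so 'aa:bb:cc:dd:ee:ff' equals 'AABBCCDDEEFF'."""
    return mac.replace(':', '').upper()


assert norm_mac('aa:bb:cc:dd:ee:ff') == 'AABBCCDDEEFF'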

View File

@ -1,55 +0,0 @@
# Copyright 2016, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import test
from nova_powervm.virt.powervm.tasks import slot
class TestSaveSlotStore(test.NoDBTestCase):
def setUp(self):
super(TestSaveSlotStore, self).setUp()
def test_execute(self):
slot_mgr = mock.Mock()
save = slot.SaveSlotStore(mock.MagicMock(), slot_mgr)
save.execute()
slot_mgr.save.assert_called_once_with()
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
slot.SaveSlotStore(mock.MagicMock(), slot_mgr)
tf.assert_called_once_with(name='save_slot_store')
class TestDeleteSlotStore(test.NoDBTestCase):
def setUp(self):
super(TestDeleteSlotStore, self).setUp()
def test_execute(self):
slot_mgr = mock.Mock()
delete = slot.DeleteSlotStore(mock.MagicMock(), slot_mgr)
delete.execute()
slot_mgr.delete.assert_called_once_with()
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
slot.DeleteSlotStore(mock.MagicMock(), slot_mgr)
tf.assert_called_once_with(name='delete_slot_store')

View File

@ -1,407 +0,0 @@
# Copyright IBM Corp. and contributors
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import mock
from nova import test
from nova_powervm.virt.powervm import exception as npvmex
from nova_powervm.virt.powervm.tasks import storage as tf_stg
class TestStorage(test.NoDBTestCase):
def setUp(self):
super(TestStorage, self).setUp()
self.adapter = mock.Mock()
self.disk_dvr = mock.MagicMock()
self.mock_cfg_drv = self.useFixture(fixtures.MockPatch(
'nova_powervm.virt.powervm.media.ConfigDrivePowerVM')).mock
self.mock_mb = self.mock_cfg_drv.return_value
self.instance = mock.MagicMock()
self.context = 'context'
def test_create_and_connect_cfg_drive(self):
lpar_w = mock.Mock()
# Test with no FeedTask
task = tf_stg.CreateAndConnectCfgDrive(
self.adapter, self.instance, 'injected_files',
'network_info', 'admin_pass')
task.execute(lpar_w, 'mgmt_cna')
self.mock_cfg_drv.assert_called_once_with(self.adapter)
self.mock_mb.create_cfg_drv_vopt.assert_called_once_with(
self.instance, 'injected_files', 'network_info', lpar_w.uuid,
admin_pass='admin_pass', mgmt_cna='mgmt_cna', stg_ftsk=None)
self.mock_cfg_drv.reset_mock()
self.mock_mb.reset_mock()
# Normal revert
task.revert(lpar_w, 'mgmt_cna', 'result', 'flow_failures')
self.mock_mb.dlt_vopt.assert_called_once_with(lpar_w.uuid)
self.mock_mb.reset_mock()
# Revert when dlt_vopt fails
self.mock_mb.dlt_vopt.side_effect = Exception('fake-exc')
task.revert(lpar_w, 'mgmt_cna', 'result', 'flow_failures')
self.mock_mb.dlt_vopt.assert_called_once_with(lpar_w.uuid)
self.mock_mb.reset_mock()
# With a specified FeedTask
task = tf_stg.CreateAndConnectCfgDrive(
self.adapter, self.instance, 'injected_files',
'network_info', 'admin_pass', stg_ftsk='stg_ftsk')
task.execute(lpar_w, 'mgmt_cna')
self.mock_cfg_drv.assert_called_once_with(self.adapter)
self.mock_mb.create_cfg_drv_vopt.assert_called_once_with(
self.instance, 'injected_files', 'network_info', lpar_w.uuid,
admin_pass='admin_pass', mgmt_cna='mgmt_cna', stg_ftsk='stg_ftsk')
# Revert when media builder not created
task.mb = None
task.revert(lpar_w, 'mgmt_cna', 'result', 'flow_failures')
self.mock_mb.assert_not_called()
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.CreateAndConnectCfgDrive(
self.adapter, self.instance, 'injected_files', 'network_info',
'admin_pass')
tf.assert_called_once_with(name='cfg_drive', requires=['lpar_wrap',
'mgmt_cna'])
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_delete_vopt(self, mock_pvm_uuid):
# Test with no FeedTask
mock_pvm_uuid.return_value = 'pvm_uuid'
task = tf_stg.DeleteVOpt(self.adapter, self.instance)
task.execute()
self.mock_cfg_drv.assert_called_once_with(self.adapter)
self.mock_mb.dlt_vopt.assert_called_once_with(
'pvm_uuid', stg_ftsk=None)
self.mock_cfg_drv.reset_mock()
self.mock_mb.reset_mock()
# With a specified FeedTask
task = tf_stg.DeleteVOpt(self.adapter, self.instance,
stg_ftsk='ftsk')
task.execute()
self.mock_cfg_drv.assert_called_once_with(self.adapter)
self.mock_mb.dlt_vopt.assert_called_once_with(
'pvm_uuid', stg_ftsk='ftsk')
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.DeleteVOpt(self.adapter, self.instance)
tf.assert_called_once_with(name='vopt_delete')
def test_delete_disk(self):
stor_adpt_mappings = mock.Mock()
task = tf_stg.DeleteDisk(self.disk_dvr, self.instance)
task.execute(stor_adpt_mappings)
self.disk_dvr.delete_disks.assert_called_once_with(stor_adpt_mappings)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.DeleteDisk(self.disk_dvr, self.instance)
tf.assert_called_once_with(
name='dlt_storage', requires=['stor_adpt_mappings'])
def test_detach_disk(self):
disk_type = 'disk_type'
stg_ftsk = mock.Mock()
task = tf_stg.DetachDisk(
self.disk_dvr, self.instance, stg_ftsk=stg_ftsk,
disk_type=disk_type)
task.execute()
self.disk_dvr.disconnect_disk.assert_called_once_with(
self.instance, stg_ftsk=stg_ftsk, disk_type=disk_type)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.DetachDisk(self.disk_dvr, self.instance)
tf.assert_called_once_with(
name='detach_storage', provides='stor_adpt_mappings')
def test_connect_disk(self):
stg_ftsk = mock.Mock()
disk_dev_info = mock.Mock()
task = tf_stg.ConnectDisk(
self.disk_dvr, self.instance, stg_ftsk=stg_ftsk)
task.execute(disk_dev_info)
self.disk_dvr.connect_disk.assert_called_once_with(
self.instance, disk_dev_info, stg_ftsk=stg_ftsk)
task.revert(disk_dev_info, 'result', 'flow failures')
self.disk_dvr.disconnect_disk.assert_called_once_with(self.instance)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.ConnectDisk(self.disk_dvr, self.instance)
tf.assert_called_once_with(
name='connect_disk', requires=['disk_dev_info'])
def test_create_disk_for_img(self):
image_meta = mock.Mock()
image_type = mock.Mock()
task = tf_stg.CreateDiskForImg(
self.disk_dvr, self.context, self.instance, image_meta,
image_type=image_type)
task.execute()
self.disk_dvr.create_disk_from_image.assert_called_once_with(
self.context, self.instance, image_meta, image_type=image_type)
task.revert('result', 'flow failures')
self.disk_dvr.delete_disks.assert_called_once_with(['result'])
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.CreateDiskForImg(
self.disk_dvr, self.context, self.instance, image_meta)
tf.assert_called_once_with(
name='crt_disk_from_img', provides='disk_dev_info')
@mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
@mock.patch('nova_powervm.virt.powervm.mgmt.discover_vscsi_disk',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.mgmt.remove_block_dev',
autospec=True)
def test_instance_disk_to_mgmt(self, mock_rm, mock_discover, mock_find):
mock_discover.return_value = '/dev/disk'
mock_instance = mock.Mock()
mock_instance.name = 'instance_name'
mock_stg = mock.Mock()
mock_stg.name = 'stg_name'
mock_vwrap = mock.Mock()
mock_vwrap.name = 'vios_name'
mock_vwrap.uuid = 'vios_uuid'
mock_vwrap.scsi_mappings = ['mapping1']
disk_dvr = mock.MagicMock()
disk_dvr.mp_uuid = 'mp_uuid'
disk_dvr.connect_instance_disk_to_mgmt.return_value = (mock_stg,
mock_vwrap)
def reset_mocks():
mock_find.reset_mock()
mock_discover.reset_mock()
mock_rm.reset_mock()
disk_dvr.reset_mock()
# Good path - find_maps returns one result
mock_find.return_value = ['one_mapping']
tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
self.assertEqual('instance_disk_to_mgmt', tf.name)
self.assertEqual((mock_stg, mock_vwrap, '/dev/disk'), tf.execute())
disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
mock_instance)
mock_find.assert_called_with(['mapping1'], client_lpar_id='mp_uuid',
stg_elem=mock_stg)
mock_discover.assert_called_with('one_mapping')
tf.revert('result', 'failures')
disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
'stg_name')
mock_rm.assert_called_with('/dev/disk')
# Good path - find_maps returns >1 result
reset_mocks()
mock_find.return_value = ['first_mapping', 'second_mapping']
tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
self.assertEqual((mock_stg, mock_vwrap, '/dev/disk'), tf.execute())
disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
mock_instance)
mock_find.assert_called_with(['mapping1'], client_lpar_id='mp_uuid',
stg_elem=mock_stg)
mock_discover.assert_called_with('first_mapping')
tf.revert('result', 'failures')
disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
'stg_name')
mock_rm.assert_called_with('/dev/disk')
# Management Partition is VIOS and NovaLink hosted storage
reset_mocks()
disk_dvr.vios_uuids = ['mp_uuid']
dev_name = '/dev/vg/fake_name'
disk_dvr.get_bootdisk_path.return_value = dev_name
tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
self.assertEqual((None, None, dev_name), tf.execute())
# Management Partition is VIOS and not NovaLink hosted storage
reset_mocks()
disk_dvr.vios_uuids = ['mp_uuid']
disk_dvr.get_bootdisk_path.return_value = None
tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
tf.execute()
disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
mock_instance)
# Bad path - find_maps returns no results
reset_mocks()
mock_find.return_value = []
tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
self.assertRaises(npvmex.NewMgmtMappingNotFoundException, tf.execute)
disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
mock_instance)
# find_maps was still called
mock_find.assert_called_with(['mapping1'], client_lpar_id='mp_uuid',
stg_elem=mock_stg)
# discover_vscsi_disk didn't get called
self.assertEqual(0, mock_discover.call_count)
tf.revert('result', 'failures')
# disconnect_disk_from_mgmt got called
disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
'stg_name')
# ...but remove_block_dev did not.
self.assertEqual(0, mock_rm.call_count)
# Bad path - connect raises
reset_mocks()
disk_dvr.connect_instance_disk_to_mgmt.side_effect = (
npvmex.InstanceDiskMappingFailed(instance_name='inst_name'))
tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
self.assertRaises(npvmex.InstanceDiskMappingFailed, tf.execute)
disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
mock_instance)
self.assertEqual(0, mock_find.call_count)
self.assertEqual(0, mock_discover.call_count)
# revert shouldn't call disconnect or remove
tf.revert('result', 'failures')
self.assertEqual(0, disk_dvr.disconnect_disk_from_mgmt.call_count)
self.assertEqual(0, mock_rm.call_count)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
tf.assert_called_once_with(
name='instance_disk_to_mgmt',
provides=['stg_elem', 'vios_wrap', 'disk_path'])
@mock.patch('nova_powervm.virt.powervm.mgmt.remove_block_dev',
autospec=True)
def test_remove_instance_disk_from_mgmt(self, mock_rm):
disk_dvr = mock.MagicMock()
mock_instance = mock.Mock()
mock_instance.name = 'instance_name'
mock_stg = mock.Mock()
mock_stg.name = 'stg_name'
mock_vwrap = mock.Mock()
mock_vwrap.name = 'vios_name'
mock_vwrap.uuid = 'vios_uuid'
tf = tf_stg.RemoveInstanceDiskFromMgmt(disk_dvr, mock_instance)
self.assertEqual('remove_inst_disk_from_mgmt', tf.name)
tf.execute(mock_stg, mock_vwrap, '/dev/disk')
disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
'stg_name')
mock_rm.assert_called_with('/dev/disk')
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.RemoveInstanceDiskFromMgmt(disk_dvr, mock_instance)
tf.assert_called_once_with(
name='remove_inst_disk_from_mgmt',
requires=['stg_elem', 'vios_wrap', 'disk_path'])
def test_finddisk(self):
disk_dvr = mock.Mock()
disk_dvr.get_disk_ref.return_value = 'disk_ref'
instance = mock.Mock()
context = 'context'
disk_type = 'disk_type'
task = tf_stg.FindDisk(disk_dvr, context, instance, disk_type)
ret_disk = task.execute()
disk_dvr.get_disk_ref.assert_called_once_with(instance, disk_type)
self.assertEqual('disk_ref', ret_disk)
# Bad path for no disk found
disk_dvr.reset_mock()
disk_dvr.get_disk_ref.return_value = None
ret_disk = task.execute()
disk_dvr.get_disk_ref.assert_called_once_with(instance, disk_type)
self.assertIsNone(ret_disk)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.FindDisk(disk_dvr, context, instance, disk_type)
tf.assert_called_once_with(name='find_disk', provides='disk_dev_info')
def test_save_bdm(self):
mock_bdm = mock.Mock(volume_id=1)
save_bdm = tf_stg.SaveBDM(mock_bdm, 'instance')
save_bdm.execute()
mock_bdm.save.assert_called_once_with()
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.SaveBDM(mock_bdm, 'instance')
tf.assert_called_once_with(name='save_bdm_1')
def test_extend_disk(self):
disk_dvr = mock.Mock()
instance = mock.Mock()
disk_info = {'type': 'disk_type'}
task = tf_stg.ExtendDisk(disk_dvr, instance, disk_info, 1024)
task.execute()
disk_dvr.extend_disk.assert_called_once_with(instance, disk_info, 1024)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.ExtendDisk(disk_dvr, instance, disk_info, 1024)
tf.assert_called_once_with(name='extend_disk_disk_type')
def test_connect_volume(self):
vol_dvr = mock.Mock(connection_info={'data': {'volume_id': '1'}})
task = tf_stg.ConnectVolume(vol_dvr, 'slot map')
task.execute()
vol_dvr.connect_volume.assert_called_once_with('slot map')
task.revert('result', 'flow failures')
vol_dvr.reset_stg_ftsk.assert_called_once_with()
vol_dvr.disconnect_volume.assert_called_once_with('slot map')
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.ConnectVolume(vol_dvr, 'slot map')
tf.assert_called_once_with(name='connect_vol_1')
def test_disconnect_volume(self):
vol_dvr = mock.Mock(connection_info={'data': {'volume_id': '1'}})
task = tf_stg.DisconnectVolume(vol_dvr, 'slot map')
task.execute()
vol_dvr.disconnect_volume.assert_called_once_with('slot map')
task.revert('result', 'flow failures')
vol_dvr.reset_stg_ftsk.assert_called_once_with()
vol_dvr.connect_volume.assert_called_once_with('slot map')
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_stg.DisconnectVolume(vol_dvr, 'slot map')
tf.assert_called_once_with(name='disconnect_vol_1')
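test_connect_volume and test_disconnect_volume pin down a symmetric revert contract: the task name embeds the volume id, and revert resets the storage FeedTask before invoking the opposite operation. A minimal sketch of the connect side under those assumptions:

from taskflow import task


class ConnectVolume(task.Task):
    """Connect a volume; on flow failure, disconnect it again."""

    def __init__(self, vol_dvr, slot_mgr):
        vol_id = vol_dvr.connection_info['data']['volume_id']
        super(ConnectVolume, self).__init__(name='connect_vol_%s' % vol_id)
        self.vol_dvr = vol_dvr
        self.slot_mgr = slot_mgr

    def execute(self):
        self.vol_dvr.connect_volume(self.slot_mgr)

    def revert(self, result, flow_failures):
        # The original FeedTask may be partially executed; start clean
        # before undoing the connect.
        self.vol_dvr.reset_stg_ftsk()
        self.vol_dvr.disconnect_volume(self.slot_mgr)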

View File

@ -1,267 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.compute import task_states
from nova import exception
from nova import test
from nova_powervm.virt.powervm.tasks import vm as tf_vm
from pypowervm import const as pvmc
from taskflow import engines as tf_eng
from taskflow.patterns import linear_flow as tf_lf
from taskflow import task as tf_tsk
class TestVMTasks(test.NoDBTestCase):
def setUp(self):
super(TestVMTasks, self).setUp()
self.apt = mock.Mock()
self.instance = mock.Mock(uuid='fake-uuid')
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
def test_get(self, mock_inst_wrap):
get = tf_vm.Get(self.apt, 'host_uuid', self.instance)
get.execute()
mock_inst_wrap.assert_called_once_with(self.apt, self.instance)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.Get(self.apt, 'host_uuid', self.instance)
tf.assert_called_once_with(name='get_vm', provides='lpar_wrap')
@mock.patch('pypowervm.utils.transaction.FeedTask', autospec=True)
@mock.patch('pypowervm.tasks.partition.build_active_vio_feed_task',
autospec=True)
@mock.patch('pypowervm.tasks.storage.add_lpar_storage_scrub_tasks',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.create_lpar', autospec=True)
def test_create(self, mock_vm_crt, mock_stg, mock_bld, mock_ftsk):
nvram_mgr = mock.Mock()
nvram_mgr.fetch.return_value = 'data'
mock_ftsk.name = 'vio_feed_task'
lpar_entry = mock.Mock()
# Test create with normal (non-recreate) ftsk
crt = tf_vm.Create(self.apt, 'host_wrapper', self.instance,
stg_ftsk=mock_ftsk, nvram_mgr=nvram_mgr,
slot_mgr='slot_mgr')
mock_vm_crt.return_value = lpar_entry
crt_entry = crt.execute()
mock_ftsk.execute.assert_not_called()
mock_vm_crt.assert_called_once_with(
self.apt, 'host_wrapper', self.instance, nvram='data',
slot_mgr='slot_mgr')
self.assertEqual(lpar_entry, crt_entry)
nvram_mgr.fetch.assert_called_once_with(self.instance)
mock_ftsk.name = 'create_scrubber'
mock_bld.return_value = mock_ftsk
# Test create with recreate ftsk
rcrt = tf_vm.Create(self.apt, 'host_wrapper', self.instance,
stg_ftsk=None, nvram_mgr=nvram_mgr,
slot_mgr='slot_mgr')
mock_bld.assert_called_once_with(
self.apt, name='create_scrubber',
xag={pvmc.XAG.VIO_SMAP, pvmc.XAG.VIO_FMAP})
rcrt.execute()
mock_ftsk.execute.assert_called_once_with()
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.Create(self.apt, 'host_wrapper', self.instance)
tf.assert_called_once_with(name='crt_vm', provides='lpar_wrap')
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('nova_powervm.virt.powervm.tasks.vm.Create.execute',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.delete_lpar', autospec=True)
def test_create_revert(self, mock_vm_dlt, mock_crt_exc,
mock_get_pvm_uuid):
mock_crt_exc.side_effect = exception.NovaException()
crt = tf_vm.Create(self.apt, 'host_wrapper', self.instance, 'stg_ftsk',
None)
# Assert that a failure while building does not revert
crt.instance.task_state = task_states.SPAWNING
flow_test = tf_lf.Flow("test_revert")
flow_test.add(crt)
self.assertRaises(exception.NovaException, tf_eng.run, flow_test)
self.assertEqual(0, mock_vm_dlt.call_count)
# Assert that a failure when rebuild results in revert
crt.instance.task_state = task_states.REBUILD_SPAWNING
flow_test = tf_lf.Flow("test_revert")
flow_test.add(crt)
self.assertRaises(exception.NovaException, tf_eng.run, flow_test)
self.assertEqual(1, mock_vm_dlt.call_count)
@mock.patch('nova_powervm.virt.powervm.vm.power_on', autospec=True)
def test_power_on(self, mock_pwron):
pwron = tf_vm.PowerOn(self.apt, self.instance, pwr_opts='opt')
pwron.execute()
mock_pwron.assert_called_once_with(self.apt, self.instance, opts='opt')
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.PowerOn(self.apt, self.instance)
tf.assert_called_once_with(name='pwr_vm')
@mock.patch('nova_powervm.virt.powervm.vm.power_on', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.power_off', autospec=True)
def test_power_on_revert(self, mock_pwroff, mock_pwron):
flow = tf_lf.Flow('revert_power_on')
pwron = tf_vm.PowerOn(self.apt, self.instance, pwr_opts='opt')
flow.add(pwron)
# Dummy Task that fails, triggering flow revert
def failure(*a, **k):
raise ValueError()
flow.add(tf_tsk.FunctorTask(failure))
# When PowerOn.execute doesn't fail, revert calls power_off
self.assertRaises(ValueError, tf_eng.run, flow)
mock_pwron.assert_called_once_with(self.apt, self.instance, opts='opt')
mock_pwroff.assert_called_once_with(self.apt, self.instance,
force_immediate=True)
mock_pwron.reset_mock()
mock_pwroff.reset_mock()
# When PowerOn.execute fails, revert doesn't call power_off
mock_pwron.side_effect = exception.NovaException()
self.assertRaises(exception.NovaException, tf_eng.run, flow)
mock_pwron.assert_called_once_with(self.apt, self.instance, opts='opt')
self.assertEqual(0, mock_pwroff.call_count)
@mock.patch('nova_powervm.virt.powervm.vm.power_off', autospec=True)
def test_power_off(self, mock_pwroff):
# Default force_immediate
pwroff = tf_vm.PowerOff(self.apt, self.instance)
pwroff.execute()
mock_pwroff.assert_called_once_with(self.apt, self.instance,
force_immediate=False)
mock_pwroff.reset_mock()
# Explicit force_immediate
pwroff = tf_vm.PowerOff(self.apt, self.instance, force_immediate=True)
pwroff.execute()
mock_pwroff.assert_called_once_with(self.apt, self.instance,
force_immediate=True)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.PowerOff(self.apt, self.instance)
tf.assert_called_once_with(name='pwr_off_vm')
@mock.patch('nova_powervm.virt.powervm.vm.delete_lpar', autospec=True)
def test_delete(self, mock_dlt):
delete = tf_vm.Delete(self.apt, self.instance)
delete.execute()
mock_dlt.assert_called_once_with(self.apt, self.instance)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.Delete(self.apt, self.instance)
tf.assert_called_once_with(name='dlt_vm')
@mock.patch('nova_powervm.virt.powervm.vm.update', autospec=True)
def test_resize(self, mock_vm_update):
resize = tf_vm.Resize(self.apt, 'host_wrapper', self.instance,
name='new_name')
mock_vm_update.return_value = 'resized_entry'
resized_entry = resize.execute()
mock_vm_update.assert_called_once_with(
self.apt, 'host_wrapper', self.instance, entry=None,
name='new_name')
self.assertEqual('resized_entry', resized_entry)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.Resize(self.apt, 'host_wrapper', self.instance)
tf.assert_called_once_with(name='resize_vm', provides='lpar_wrap')
@mock.patch('nova_powervm.virt.powervm.vm.rename', autospec=True)
def test_rename(self, mock_vm_rename):
mock_vm_rename.return_value = 'new_entry'
rename = tf_vm.Rename(self.apt, self.instance, 'new_name')
new_entry = rename.execute()
mock_vm_rename.assert_called_once_with(
self.apt, self.instance, 'new_name')
self.assertEqual('new_entry', new_entry)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.Rename(self.apt, self.instance, 'new_name')
tf.assert_called_once_with(
name='rename_vm_new_name', provides='lpar_wrap')
def test_store_nvram(self):
nvram_mgr = mock.Mock()
store_nvram = tf_vm.StoreNvram(nvram_mgr, self.instance,
immediate=True)
store_nvram.execute()
nvram_mgr.store.assert_called_once_with(self.instance,
immediate=True)
# No exception is raised if the NVRAM could not be stored.
nvram_mgr.reset_mock()
nvram_mgr.store.side_effect = ValueError('Not Available')
store_nvram.execute()
nvram_mgr.store.assert_called_once_with(self.instance,
immediate=True)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.StoreNvram(nvram_mgr, self.instance)
tf.assert_called_once_with(name='store_nvram')
def test_delete_nvram(self):
nvram_mgr = mock.Mock()
delete_nvram = tf_vm.DeleteNvram(nvram_mgr, self.instance)
delete_nvram.execute()
nvram_mgr.remove.assert_called_once_with(self.instance)
# No exception is raised if the NVRAM could not be removed.
nvram_mgr.reset_mock()
nvram_mgr.remove.side_effect = ValueError('Not Available')
delete_nvram.execute()
nvram_mgr.remove.assert_called_once_with(self.instance)
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.DeleteNvram(nvram_mgr, self.instance)
tf.assert_called_once_with(name='delete_nvram')
@mock.patch('nova_powervm.virt.powervm.vm.update_ibmi_settings',
autospec=True)
def test_update_ibmi_settings(self, mock_update):
update = tf_vm.UpdateIBMiSettings(self.apt, self.instance, 'boot_type')
update.execute()
mock_update.assert_called_once_with(self.apt, self.instance,
'boot_type')
# Validate args on taskflow.task.Task instantiation
with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.UpdateIBMiSettings(self.apt, self.instance, 'boot_type')
tf.assert_called_once_with(name='update_ibmi_settings')
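test_store_nvram (and its delete twin) encode one rule: NVRAM bookkeeping must never fail the surrounding flow, so manager errors are logged and swallowed. A minimal sketch under that assumption:

import logging

from taskflow import task

LOG = logging.getLogger(__name__)


class StoreNvram(task.Task):
    """Store an instance's NVRAM without ever failing the flow."""

    def __init__(self, nvram_mgr, instance, immediate=False):
        super(StoreNvram, self).__init__(name='store_nvram')
        self.nvram_mgr = nvram_mgr
        self.instance = instance
        self.immediate = immediate

    def execute(self):
        try:
            self.nvram_mgr.store(self.instance, immediate=self.immediate)
        except Exception:
            # Per the test above: log, do not re-raise.
            LOG.exception('Unable to store NVRAM for instance %s.',
                          self.instance)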

File diff suppressed because it is too large

View File

@ -1,393 +0,0 @@
# Copyright 2014, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import mock
from nova.compute import power_state
from nova import exception
from nova import test
from pypowervm.wrappers import event as pvm_evt
from nova_powervm.virt.powervm import event
class TestGetInstance(test.NoDBTestCase):
@mock.patch('nova.context.get_admin_context', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance', autospec=True)
def test_get_instance(self, mock_get_inst, mock_get_context):
# If instance provided, vm.get_instance not called
self.assertEqual('inst', event._get_instance('inst', 'uuid'))
self.assertEqual(0, mock_get_inst.call_count)
# Note that we can only guarantee get_admin_context wasn't called
# because _get_instance is mocked everywhere else in this suite.
# Otherwise it could run from another test case executing in parallel.
self.assertEqual(0, mock_get_context.call_count)
# If instance not provided, vm.get_instance is called
mock_get_inst.return_value = 'inst2'
for _ in range(2):
# Doing it the second time doesn't call get_admin_context() again.
self.assertEqual('inst2', event._get_instance(None, 'uuid'))
mock_get_context.assert_called_once_with()
mock_get_inst.assert_called_once_with(
mock_get_context.return_value, 'uuid')
mock_get_inst.reset_mock()
# Don't reset mock_get_context
class TestPowerVMNovaEventHandler(test.NoDBTestCase):
def setUp(self):
super(TestPowerVMNovaEventHandler, self).setUp()
lceh_process_p = mock.patch(
'nova_powervm.virt.powervm.event.PowerVMLifecycleEventHandler.'
'process')
self.addCleanup(lceh_process_p.stop)
self.mock_lceh_process = lceh_process_p.start()
self.mock_driver = mock.Mock()
self.handler = event.PowerVMNovaEventHandler(self.mock_driver)
@mock.patch('nova_powervm.virt.powervm.event._get_instance', autospec=True)
def test_get_inst_uuid(self, mock_get_instance):
fake_inst1 = mock.Mock(uuid='uuid1')
fake_inst2 = mock.Mock(uuid='uuid2')
mock_get_instance.side_effect = lambda i, u: {
'fake_pvm_uuid1': fake_inst1,
'fake_pvm_uuid2': fake_inst2}.get(u)
self.assertEqual(
(fake_inst1, 'uuid1'),
self.handler._get_inst_uuid(fake_inst1, 'fake_pvm_uuid1'))
self.assertEqual(
(fake_inst2, 'uuid2'),
self.handler._get_inst_uuid(fake_inst2, 'fake_pvm_uuid2'))
self.assertEqual(
(None, 'uuid1'),
self.handler._get_inst_uuid(None, 'fake_pvm_uuid1'))
self.assertEqual(
(fake_inst2, 'uuid2'),
self.handler._get_inst_uuid(fake_inst2, 'fake_pvm_uuid2'))
self.assertEqual(
(fake_inst1, 'uuid1'),
self.handler._get_inst_uuid(fake_inst1, 'fake_pvm_uuid1'))
mock_get_instance.assert_has_calls(
[mock.call(fake_inst1, 'fake_pvm_uuid1'),
mock.call(fake_inst2, 'fake_pvm_uuid2')])
@mock.patch('nova_powervm.virt.powervm.event._get_instance', autospec=True)
def test_handle_inst_event(self, mock_get_instance):
# If no event we care about, or NVRAM but no nvram_mgr, nothing happens
self.mock_driver.nvram_mgr = None
for dets in ([], ['foo', 'bar', 'baz'], ['NVRAM']):
self.assertEqual('inst', self.handler._handle_inst_event(
'inst', 'uuid', dets))
self.assertEqual(0, mock_get_instance.call_count)
self.mock_lceh_process.assert_not_called()
self.mock_driver.nvram_mgr = mock.Mock()
# PartitionState only: no NVRAM handling, and inst is passed through.
self.assertEqual('inst', self.handler._handle_inst_event(
'inst', 'uuid', ['foo', 'PartitionState', 'bar']))
self.assertEqual(0, mock_get_instance.call_count)
self.mock_driver.nvram_mgr.store.assert_not_called()
self.mock_lceh_process.assert_called_once_with('inst', 'uuid')
self.mock_lceh_process.reset_mock()
# No instance; nothing happens (we skip PartitionState handling too)
mock_get_instance.return_value = None
self.assertIsNone(self.handler._handle_inst_event(
'inst', 'uuid', ['NVRAM', 'PartitionState']))
mock_get_instance.assert_called_once_with('inst', 'uuid')
self.mock_driver.nvram_mgr.store.assert_not_called()
self.mock_lceh_process.assert_not_called()
mock_get_instance.reset_mock()
fake_inst = mock.Mock(uuid='fake-uuid')
mock_get_instance.return_value = fake_inst
# NVRAM only - no PartitionState handling, instance is returned
self.assertEqual(fake_inst, self.handler._handle_inst_event(
None, 'uuid', ['NVRAM', 'baz']))
mock_get_instance.assert_called_once_with(None, 'uuid')
self.mock_driver.nvram_mgr.store.assert_called_once_with('fake-uuid')
self.mock_lceh_process.assert_not_called()
mock_get_instance.reset_mock()
self.mock_driver.nvram_mgr.store.reset_mock()
self.handler._uuid_cache.clear()
# Both event types
self.assertEqual(fake_inst, self.handler._handle_inst_event(
None, 'uuid', ['PartitionState', 'NVRAM']))
mock_get_instance.assert_called_once_with(None, 'uuid')
self.mock_driver.nvram_mgr.store.assert_called_once_with('fake-uuid')
self.mock_lceh_process.assert_called_once_with(fake_inst, 'uuid')
mock_get_instance.reset_mock()
self.mock_driver.nvram_mgr.store.reset_mock()
self.handler._uuid_cache.clear()
# Handle multiple NVRAM and PartitionState events
self.assertEqual(fake_inst, self.handler._handle_inst_event(
None, 'uuid', ['NVRAM']))
        self.assertIsNone(self.handler._handle_inst_event(
            None, 'uuid', ['NVRAM']))
        self.assertIsNone(self.handler._handle_inst_event(
            None, 'uuid', ['PartitionState']))
self.assertEqual(fake_inst, self.handler._handle_inst_event(
fake_inst, 'uuid', ['NVRAM']))
self.assertEqual(fake_inst, self.handler._handle_inst_event(
fake_inst, 'uuid', ['NVRAM', 'PartitionState']))
mock_get_instance.assert_called_once_with(None, 'uuid')
self.mock_driver.nvram_mgr.store.assert_has_calls(
[mock.call('fake-uuid')] * 4)
self.mock_lceh_process.assert_has_calls(
[mock.call(None, 'uuid'),
mock.call(fake_inst, 'uuid')])
@mock.patch('nova_powervm.virt.powervm.event.PowerVMNovaEventHandler.'
'_handle_inst_event')
@mock.patch('pypowervm.util.get_req_path_uuid', autospec=True)
def test_process(self, mock_get_rpu, mock_handle):
# NEW_CLIENT/CACHE_CLEARED events are ignored
events = [mock.Mock(etype=pvm_evt.EventType.NEW_CLIENT),
mock.Mock(etype=pvm_evt.EventType.CACHE_CLEARED)]
self.handler.process(events)
self.assertEqual(0, mock_get_rpu.call_count)
mock_handle.assert_not_called()
moduri = pvm_evt.EventType.MODIFY_URI
        # If get_req_path_uuid doesn't find a UUID, or the URI is not for a
        # LogicalPartition, or the detail is empty or contains no actions we
        # care about, no action is taken.
mock_get_rpu.side_effect = [None, 'uuid1', 'uuid2', 'uuid3']
events = [
            mock.Mock(etype=moduri, data='foo/LogicalPartition/None',
                      detail='NVRAM,PartitionState'),
            mock.Mock(etype=moduri, data='bar/VirtualIOServer/uuid1',
                      detail='NVRAM,PartitionState'),
mock.Mock(etype=moduri, data='baz/LogicalPartition/uuid2',
detail=''),
mock.Mock(etype=moduri, data='blah/LogicalPartition/uuid3',
detail='do,not,care')]
self.handler.process(events)
mock_get_rpu.assert_has_calls(
[mock.call(uri, preserve_case=True)
for uri in ('bar/VirtualIOServer/uuid1',
'baz/LogicalPartition/uuid2',
'blah/LogicalPartition/uuid3')])
mock_handle.assert_not_called()
mock_get_rpu.reset_mock()
# The stars align, and we handle some events.
uuid_det = (('uuid1', 'NVRAM'),
('uuid2', 'this,one,ignored'),
('uuid3', 'PartitionState,baz,NVRAM'),
# Repeat uuid1 to test the cache
('uuid1', 'blah,PartitionState'),
('uuid5', 'also,ignored'))
mock_get_rpu.side_effect = [ud[0] for ud in uuid_det]
events = [
mock.Mock(etype=moduri, data='LogicalPartition/' + uuid,
detail=detail) for uuid, detail in uuid_det]
# Set up _handle_inst_event to test the cache and the exception path
mock_handle.side_effect = ['inst1', None, ValueError]
# Run it!
self.handler.process(events)
mock_get_rpu.assert_has_calls(
[mock.call(uri, preserve_case=True) for uri in
('LogicalPartition/' + ud[0] for ud in uuid_det)])
mock_handle.assert_has_calls(
[mock.call(None, 'uuid1', ['NVRAM']),
mock.call(None, 'uuid3', ['PartitionState', 'baz', 'NVRAM']),
# inst1 pulled from the cache based on uuid1
mock.call('inst1', 'uuid1', ['blah', 'PartitionState'])])
@mock.patch('nova_powervm.virt.powervm.event._get_instance', autospec=True)
@mock.patch('pypowervm.util.get_req_path_uuid', autospec=True)
def test_uuid_cache(self, mock_get_rpu, mock_get_instance):
deluri = pvm_evt.EventType.DELETE_URI
moduri = pvm_evt.EventType.MODIFY_URI
fake_inst1 = mock.Mock(uuid='uuid1')
fake_inst2 = mock.Mock(uuid='uuid2')
fake_inst4 = mock.Mock(uuid='uuid4')
mock_get_instance.side_effect = lambda i, u: {
'fake_pvm_uuid1': fake_inst1,
'fake_pvm_uuid2': fake_inst2,
'fake_pvm_uuid4': fake_inst4}.get(u)
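        # Stub get_req_path_uuid to return the path segment after
        # 'LogicalPartition/', mimicking UUID extraction from the event URI.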
mock_get_rpu.side_effect = lambda d, **k: d.split('/')[1]
uuid_det = (('fake_pvm_uuid1', 'NVRAM', moduri),
('fake_pvm_uuid2', 'NVRAM', moduri),
('fake_pvm_uuid4', 'NVRAM', moduri),
('fake_pvm_uuid1', 'NVRAM', moduri),
('fake_pvm_uuid2', '', deluri),
('fake_pvm_uuid2', 'NVRAM', moduri),
('fake_pvm_uuid1', '', deluri),
('fake_pvm_uuid3', '', deluri))
events = [
mock.Mock(etype=etype, data='LogicalPartition/' + uuid,
detail=detail) for uuid, detail, etype in uuid_det]
self.handler.process(events[0:4])
mock_get_instance.assert_has_calls([
mock.call(None, 'fake_pvm_uuid1'),
mock.call(None, 'fake_pvm_uuid2'),
mock.call(None, 'fake_pvm_uuid4')])
self.assertEqual({
'fake_pvm_uuid1': 'uuid1',
'fake_pvm_uuid2': 'uuid2',
'fake_pvm_uuid4': 'uuid4'}, self.handler._uuid_cache)
mock_get_instance.reset_mock()
# Test the cache with a second process call
self.handler.process(events[4:7])
mock_get_instance.assert_has_calls([
mock.call(None, 'fake_pvm_uuid2')])
self.assertEqual({
'fake_pvm_uuid2': 'uuid2',
'fake_pvm_uuid4': 'uuid4'}, self.handler._uuid_cache)
mock_get_instance.reset_mock()
# Make sure a delete to a non-cached UUID doesn't blow up
self.handler.process([events[7]])
self.assertEqual(0, mock_get_instance.call_count)
mock_get_rpu.reset_mock()
mock_get_instance.reset_mock()
clear_events = [mock.Mock(etype=pvm_evt.EventType.NEW_CLIENT),
mock.Mock(etype=pvm_evt.EventType.CACHE_CLEARED)]
# This should clear the cache
self.handler.process(clear_events)
self.assertEqual(dict(), self.handler._uuid_cache)
self.assertEqual(0, mock_get_rpu.call_count)
self.assertEqual(0, mock_get_instance.call_count)
class TestPowerVMLifecycleEventHandler(test.NoDBTestCase):
def setUp(self):
super(TestPowerVMLifecycleEventHandler, self).setUp()
self.mock_driver = mock.MagicMock()
self.handler = event.PowerVMLifecycleEventHandler(self.mock_driver)
@mock.patch('nova_powervm.virt.powervm.vm.get_vm_qp', autospec=True)
@mock.patch('nova_powervm.virt.powervm.event._get_instance', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.translate_event', autospec=True)
@mock.patch('nova.virt.event.LifecycleEvent', autospec=True)
def test_emit_event(self, mock_lce, mock_tx_evt, mock_get_inst, mock_qp):
def assert_qp():
mock_qp.assert_called_once_with(
self.mock_driver.adapter, 'uuid', 'PartitionState')
mock_qp.reset_mock()
def assert_get_inst():
mock_get_inst.assert_called_once_with('inst', 'uuid')
mock_get_inst.reset_mock()
# Ignore if LPAR is gone
mock_qp.side_effect = exception.InstanceNotFound(instance_id='uuid')
self.handler._emit_event('uuid', None)
assert_qp()
self.assertEqual(0, mock_get_inst.call_count)
self.assertEqual(0, mock_tx_evt.call_count)
self.assertEqual(0, mock_lce.call_count)
self.mock_driver.emit_event.assert_not_called()
# Let get_vm_qp return its usual mock from now on
mock_qp.side_effect = None
# Ignore if instance is gone
mock_get_inst.return_value = None
self.handler._emit_event('uuid', 'inst')
assert_qp()
assert_get_inst()
self.assertEqual(0, mock_tx_evt.call_count)
self.assertEqual(0, mock_lce.call_count)
self.mock_driver.emit_event.assert_not_called()
# Ignore if task_state isn't one we care about
for task_state in event._NO_EVENT_TASK_STATES:
mock_get_inst.return_value = mock.Mock(task_state=task_state)
self.handler._emit_event('uuid', 'inst')
assert_qp()
assert_get_inst()
self.assertEqual(0, mock_tx_evt.call_count)
self.assertEqual(0, mock_lce.call_count)
self.mock_driver.emit_event.assert_not_called()
# Task state we care about from now on
inst = mock.Mock(task_state='scheduling',
power_state=power_state.RUNNING)
mock_get_inst.return_value = inst
# Ignore if not a transition we care about
mock_tx_evt.return_value = None
self.handler._emit_event('uuid', 'inst')
assert_qp()
assert_get_inst()
mock_tx_evt.assert_called_once_with(
mock_qp.return_value, power_state.RUNNING)
mock_lce.assert_not_called()
self.mock_driver.emit_event.assert_not_called()
mock_tx_evt.reset_mock()
# Good path
mock_tx_evt.return_value = 'transition'
self.handler._delayed_event_threads = {'uuid': 'thread1',
'uuid2': 'thread2'}
self.handler._emit_event('uuid', 'inst')
assert_qp()
assert_get_inst()
mock_tx_evt.assert_called_once_with(
mock_qp.return_value, power_state.RUNNING)
mock_lce.assert_called_once_with(inst.uuid, 'transition')
self.mock_driver.emit_event.assert_called_once_with(
mock_lce.return_value)
# The thread was removed
self.assertEqual({'uuid2': 'thread2'},
self.handler._delayed_event_threads)
@mock.patch('eventlet.greenthread.spawn_after', autospec=True)
def test_process(self, mock_spawn):
thread1 = mock.Mock()
thread2 = mock.Mock()
mock_spawn.side_effect = [thread1, thread2]
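        # The handler emits via spawn_after so rapid back-to-back events for
        # the same LPAR collapse into a single delayed lifecycle event.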
# First call populates the delay queue
self.assertEqual({}, self.handler._delayed_event_threads)
self.handler.process(None, 'uuid')
mock_spawn.assert_called_once_with(15, self.handler._emit_event,
'uuid', None)
self.assertEqual({'uuid': thread1},
self.handler._delayed_event_threads)
thread1.cancel.assert_not_called()
thread2.cancel.assert_not_called()
mock_spawn.reset_mock()
# Second call cancels the first thread and replaces it in delay queue
self.handler.process('inst', 'uuid')
mock_spawn.assert_called_once_with(15, self.handler._emit_event,
'uuid', 'inst')
self.assertEqual({'uuid': thread2},
self.handler._delayed_event_threads)
thread1.cancel.assert_called_once_with()
thread2.cancel.assert_not_called()


@ -1,103 +0,0 @@
# Copyright 2014, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import logging
import mock
from nova import test
from oslo_serialization import jsonutils
from pypowervm.wrappers import iocard as pvm_card
from pypowervm.wrappers import managed_system as pvm_ms
from nova_powervm.virt.powervm import host as pvm_host
LOG = logging.getLogger(__name__)
logging.basicConfig()
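# Helpers to build autospec'd SR-IOV adapter and physical port wrappers with
# just the attributes the host resource code reads.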
def mock_sriov(adap_id, pports):
sriov = mock.create_autospec(pvm_card.SRIOVAdapter, spec_set=True)
sriov.configure_mock(sriov_adap_id=adap_id, phys_ports=pports)
return sriov
def mock_pport(port_id, label, maxlps):
port = mock.create_autospec(pvm_card.SRIOVEthPPort, spec_set=True)
port.configure_mock(port_id=port_id, label=label, supp_max_lps=maxlps)
return port
class TestPowerVMHost(test.NoDBTestCase):
def test_host_resources(self):
# Create objects to test with
sriov_adaps = [
mock_sriov(1, [mock_pport(2, 'foo', 1), mock_pport(3, '', 2)]),
mock_sriov(4, [mock_pport(5, 'bar', 3)])]
ms_wrapper = mock.create_autospec(pvm_ms.System, spec_set=True)
asio = mock.create_autospec(pvm_ms.ASIOConfig, spec_set=True)
asio.configure_mock(sriov_adapters=sriov_adaps)
ms_wrapper.configure_mock(
proc_units_configurable=500,
proc_units_avail=500,
memory_configurable=5242880,
memory_free=5242752,
memory_region_size='big',
asio_config=asio)
self.flags(host='the_hostname')
# Run the actual test
stats = pvm_host.build_host_resource_from_ms(ms_wrapper)
self.assertIsNotNone(stats)
# Check for the presence of fields
fields = (('vcpus', 500), ('vcpus_used', 0),
('memory_mb', 5242880), ('memory_mb_used', 128),
'hypervisor_type', 'hypervisor_version',
('hypervisor_hostname', 'the_hostname'), 'cpu_info',
'supported_instances', 'stats', 'pci_passthrough_devices')
for fld in fields:
if isinstance(fld, tuple):
value = stats.get(fld[0], None)
self.assertEqual(value, fld[1])
else:
value = stats.get(fld, None)
self.assertIsNotNone(value)
# Check for individual stats
hstats = (('proc_units', '500.00'), ('proc_units_used', '0.00'))
for stat in hstats:
if isinstance(stat, tuple):
value = stats['stats'].get(stat[0], None)
self.assertEqual(value, stat[1])
else:
value = stats['stats'].get(stat, None)
self.assertIsNotNone(value)
# pci_passthrough_devices. Parse json - entries can be in any order.
ppdstr = stats['pci_passthrough_devices']
ppdlist = jsonutils.loads(ppdstr)
self.assertEqual({'foo', 'bar', 'default'}, {ppd['physical_network']
for ppd in ppdlist})
self.assertEqual({'foo', 'bar', 'default'}, {ppd['label']
for ppd in ppdlist})
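        # Each physical port contributes one address per supported logical
        # port (supp_max_lps of 1, 2 and 3 above), formatted as
        # *:<sriov_adap_id>:<port_id>.<vf_index>.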
self.assertEqual({'*:1:2.0', '*:1:3.0', '*:1:3.1', '*:4:5.0',
'*:4:5.1', '*:4:5.2'},
{ppd['address'] for ppd in ppdlist})
for ppd in ppdlist:
self.assertEqual('type-VF', ppd['dev_type'])
self.assertEqual('*:*:*.*', ppd['parent_addr'])
self.assertEqual('*', ppd['vendor_id'])
self.assertEqual('*', ppd['product_id'])
self.assertEqual(1, ppd['numa_node'])


@ -1,62 +0,0 @@
# Copyright IBM Corp. and contributors
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import six
from nova import test
from nova_powervm.virt.powervm import image
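# The module providing the open() builtin differs between Python 2 and 3, so
# pick the correct mock.patch target.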
if six.PY2:
_BUILTIN = '__builtin__'
else:
_BUILTIN = 'builtins'
class TestImage(test.NoDBTestCase):
@mock.patch('nova.utils.temporary_chown', autospec=True)
@mock.patch(_BUILTIN + '.open', autospec=True)
@mock.patch('nova.image.api.API', autospec=True)
def test_stream_blockdev_to_glance(self, mock_api, mock_open, mock_chown):
mock_open.return_value.__enter__.return_value = 'mock_stream'
image.stream_blockdev_to_glance('context', mock_api, 'image_id',
'metadata', '/dev/disk')
mock_chown.assert_called_with('/dev/disk')
mock_open.assert_called_with('/dev/disk', 'rb')
mock_api.update.assert_called_with('context', 'image_id', 'metadata',
'mock_stream')
@mock.patch('nova.image.api.API', autospec=True)
def test_generate_snapshot_metadata(self, mock_api):
mock_api.get.return_value = {'name': 'image_name'}
mock_instance = mock.Mock()
mock_instance.project_id = 'project_id'
ret = image.generate_snapshot_metadata('context', mock_api, 'image_id',
mock_instance)
mock_api.get.assert_called_with('context', 'image_id')
self.assertEqual({
'name': 'image_name',
'status': 'active',
'disk_format': 'raw',
'container_format': 'bare',
'properties': {
'image_location': 'snapshot',
'image_state': 'available',
'owner_id': 'project_id',
}
}, ret)


@ -1,335 +0,0 @@
# Copyright 2015, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import absolute_import
import fixtures
import mock
from nova import exception
from nova import objects
from nova.objects import migrate_data as mig_obj
from nova import test
from nova.tests.unit import fake_network
from nova_powervm.tests.virt import powervm
from nova_powervm.tests.virt.powervm import fixtures as fx
from nova_powervm.virt.powervm import live_migration as lpm
class TestLPM(test.NoDBTestCase):
def setUp(self):
super(TestLPM, self).setUp()
self.flags(disk_driver='localdisk', group='powervm')
self.drv_fix = self.useFixture(fx.PowerVMComputeDriver())
self.drv = self.drv_fix.drv
self.apt = self.drv.adapter
self.inst = objects.Instance(**powervm.TEST_INSTANCE)
self.network_infos = fake_network.fake_get_instance_nw_info(self, 1)
self.inst.info_cache = objects.InstanceInfoCache(
network_info=self.network_infos)
self.mig_data = mig_obj.PowerVMLiveMigrateData()
self.mig_data.host_mig_data = {}
self.mig_data.dest_ip = '1'
self.mig_data.dest_user_id = 'neo'
self.mig_data.dest_sys_name = 'a'
self.mig_data.public_key = 'PublicKey'
self.mig_data.dest_proc_compat = 'a,b,c'
self.mig_data.vol_data = {}
self.mig_data.vea_vlan_mappings = {}
self.lpmsrc = lpm.LiveMigrationSrc(self.drv, self.inst, self.mig_data)
self.lpmdst = lpm.LiveMigrationDest(self.drv, self.inst)
self.add_key = self.useFixture(fixtures.MockPatch(
'pypowervm.tasks.management_console.add_authorized_key')).mock
self.get_key = self.useFixture(fixtures.MockPatch(
'pypowervm.tasks.management_console.get_public_key')).mock
self.get_key.return_value = 'PublicKey'
# Short path to the host's migration_data
self.host_mig_data = self.drv.host_wrapper.migration_data
@mock.patch('pypowervm.tasks.storage.ScrubOrphanStorageForLpar',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.media.ConfigDrivePowerVM',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
@mock.patch('pypowervm.tasks.vterm.close_vterm', autospec=True)
def test_lpm_source(self, mock_vterm_close, mock_get_wrap,
mock_cd, mock_scrub):
self.host_mig_data['active_migrations_supported'] = 4
self.host_mig_data['active_migrations_in_progress'] = 2
with mock.patch.object(
self.lpmsrc, '_check_migration_ready', return_value=None):
            # Test the bad path first, then patch in values to make it
            # succeed
mock_wrap = mock.Mock(id=123)
mock_get_wrap.return_value = mock_wrap
self.assertRaises(exception.MigrationPreCheckError,
self.lpmsrc.check_source, 'context',
'block_device_info', [])
# Patch the proc compat fields, to get further
pm = mock.PropertyMock(return_value='b')
type(mock_wrap).proc_compat_mode = pm
self.assertRaises(exception.MigrationPreCheckError,
self.lpmsrc.check_source, 'context',
'block_device_info', [])
pm = mock.PropertyMock(return_value='Not_Migrating')
type(mock_wrap).migration_state = pm
# Get a volume driver.
mock_vol_drv = mock.MagicMock()
# Finally, good path.
self.lpmsrc.check_source('context', 'block_device_info',
[mock_vol_drv])
# Ensure we built a scrubber.
mock_scrub.assert_called_with(mock.ANY, 123)
# Ensure we added the subtasks to remove the vopts.
mock_cd.return_value.dlt_vopt.assert_called_once_with(
mock.ANY, stg_ftsk=mock_scrub.return_value,
remove_mappings=False)
# And ensure the scrubber was executed
mock_scrub.return_value.execute.assert_called_once_with()
mock_vol_drv.pre_live_migration_on_source.assert_called_once_with(
{})
# Ensure migration counts are validated
self.host_mig_data['active_migrations_in_progress'] = 4
self.assertRaises(exception.MigrationPreCheckError,
self.lpmsrc.check_source, 'context',
'block_device_info', [])
# Ensure the vterm was closed
mock_vterm_close.assert_called_once_with(
self.apt, mock_wrap.uuid)
def test_lpm_dest(self):
src_compute_info = {'stats': {'memory_region_size': 1}}
dst_compute_info = {'stats': {'memory_region_size': 1}}
self.host_mig_data['active_migrations_supported'] = 4
self.host_mig_data['active_migrations_in_progress'] = 2
with mock.patch.object(self.drv.host_wrapper, 'refresh') as mock_rfh:
self.lpmdst.check_destination(
'context', src_compute_info, dst_compute_info)
mock_rfh.assert_called_once_with()
# Ensure migration counts are validated
self.host_mig_data['active_migrations_in_progress'] = 4
self.assertRaises(exception.MigrationPreCheckError,
self.lpmdst.check_destination, 'context',
src_compute_info, dst_compute_info)
# Repair the stat
self.host_mig_data['active_migrations_in_progress'] = 2
# Ensure diff memory sizes raises an exception
dst_compute_info['stats']['memory_region_size'] = 2
self.assertRaises(exception.MigrationPreCheckError,
self.lpmdst.check_destination, 'context',
src_compute_info, dst_compute_info)
@mock.patch('pypowervm.tasks.storage.ComprehensiveScrub', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif.'
'pre_live_migrate_at_destination', autospec=True)
def test_pre_live_mig(self, mock_vif_pre, mock_scrub):
vol_drv = mock.MagicMock()
network_infos = [{'type': 'pvm_sea'}]
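        # Simulate the VIF driver populating vea_vlan_mappings in place.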
def update_vea_mapping(adapter, host_uuid, instance, network_info,
vea_vlan_mappings):
# Make sure what comes in is None, but that we change it.
self.assertEqual(vea_vlan_mappings, {})
vea_vlan_mappings['test'] = 'resp'
mock_vif_pre.side_effect = update_vea_mapping
resp = self.lpmdst.pre_live_migration(
'context', 'block_device_info', network_infos, 'disk_info',
self.mig_data, [vol_drv])
# Make sure the pre_live_migrate_at_destination was invoked for the vif
mock_vif_pre.assert_called_once_with(
self.drv.adapter, self.drv.host_uuid, self.inst, network_infos[0],
mock.ANY)
self.assertEqual({'test': 'resp'}, self.mig_data.vea_vlan_mappings)
# Make sure we get something back, and that the volume driver was
# invoked.
self.assertIsNotNone(resp)
vol_drv.pre_live_migration_on_destination.assert_called_once_with(
self.mig_data.vol_data)
self.assertEqual(1, mock_scrub.call_count)
self.add_key.assert_called_once_with(self.apt, 'PublicKey')
vol_drv.reset_mock()
raising_vol_drv = mock.Mock()
raising_vol_drv.pre_live_migration_on_destination.side_effect = (
Exception('foo'))
self.assertRaises(
exception.MigrationPreCheckError, self.lpmdst.pre_live_migration,
'context', 'block_device_info', network_infos, 'disk_info',
self.mig_data, [vol_drv, raising_vol_drv])
vol_drv.pre_live_migration_on_destination.assert_called_once_with({})
(raising_vol_drv.pre_live_migration_on_destination.
assert_called_once_with({}))
def test_src_cleanup(self):
vol_drv = mock.Mock()
self.lpmdst.cleanup_volume(vol_drv)
# Ensure the volume driver is not called
self.assertEqual(0, vol_drv.cleanup_volume_at_destination.call_count)
def test_src_cleanup_valid(self):
vol_drv = mock.Mock()
self.lpmdst.pre_live_vol_data = {'vscsi-vol-id': 'fake_udid'}
self.lpmdst.cleanup_volume(vol_drv)
# Ensure the volume driver was called to clean up the volume.
vol_drv.cleanup_volume_at_destination.assert_called_once()
@mock.patch('pypowervm.tasks.migration.migrate_lpar', autospec=True)
@mock.patch('nova_powervm.virt.powervm.live_migration.LiveMigrationSrc.'
'_convert_nl_io_mappings', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif.pre_live_migrate_at_source',
autospec=True)
def test_live_migration(self, mock_vif_pre_lpm, mock_convert_mappings,
mock_migr):
mock_trunk = mock.MagicMock()
mock_vif_pre_lpm.return_value = [mock_trunk]
mock_convert_mappings.return_value = ['AABBCCDDEEFF/5']
self.lpmsrc.lpar_w = mock.Mock()
self.lpmsrc.live_migration('context', self.mig_data)
mock_migr.assert_called_once_with(
self.lpmsrc.lpar_w, 'a', sdn_override=True, tgt_mgmt_svr='1',
tgt_mgmt_usr='neo', validate_only=False,
virtual_fc_mappings=None, virtual_scsi_mappings=None,
vlan_check_override=True, vlan_mappings=['AABBCCDDEEFF/5'])
# Network assertions
mock_vif_pre_lpm.assert_called_once_with(
self.drv.adapter, self.drv.host_uuid, self.inst, mock.ANY)
mock_trunk.delete.assert_called_once()
# Test that we raise errors received during migration
mock_migr.side_effect = ValueError()
self.assertRaises(ValueError, self.lpmsrc.live_migration, 'context',
self.mig_data)
mock_migr.assert_called_with(
self.lpmsrc.lpar_w, 'a', sdn_override=True, tgt_mgmt_svr='1',
tgt_mgmt_usr='neo', validate_only=False,
virtual_fc_mappings=None, virtual_scsi_mappings=None,
vlan_mappings=['AABBCCDDEEFF/5'], vlan_check_override=True)
def test_convert_nl_io_mappings(self):
# Test simple None case
self.assertIsNone(self.lpmsrc._convert_nl_io_mappings(None))
# Do some mappings
test_mappings = {'aa:bb:cc:dd:ee:ff': 5, 'aa:bb:cc:dd:ee:ee': 126}
expected = ['AABBCCDDEEFF/5', 'AABBCCDDEEEE/126']
self.assertEqual(
set(expected),
set(self.lpmsrc._convert_nl_io_mappings(test_mappings)))
@mock.patch('pypowervm.tasks.migration.migrate_recover', autospec=True)
def test_rollback(self, mock_migr):
self.lpmsrc.lpar_w = mock.Mock()
# Test no need to rollback
self.lpmsrc.lpar_w.migration_state = 'Not_Migrating'
self.lpmsrc.rollback_live_migration('context')
self.assertTrue(self.lpmsrc.lpar_w.refresh.called)
self.assertFalse(mock_migr.called)
# Test calling the rollback
self.lpmsrc.lpar_w.reset_mock()
self.lpmsrc.lpar_w.migration_state = 'Pretend its Migrating'
self.lpmsrc.rollback_live_migration('context')
self.assertTrue(self.lpmsrc.lpar_w.refresh.called)
mock_migr.assert_called_once_with(self.lpmsrc.lpar_w, force=True)
# Test exception from rollback
mock_migr.reset_mock()
self.lpmsrc.lpar_w.reset_mock()
mock_migr.side_effect = ValueError()
self.lpmsrc.rollback_live_migration('context')
self.assertTrue(self.lpmsrc.lpar_w.refresh.called)
mock_migr.assert_called_once_with(self.lpmsrc.lpar_w, force=True)
def test_check_migration_ready(self):
lpar_w, host_w = mock.Mock(), mock.Mock()
lpar_w.can_lpm.return_value = (True, None)
self.lpmsrc._check_migration_ready(lpar_w, host_w)
lpar_w.can_lpm.assert_called_once_with(host_w, migr_data={})
lpar_w.can_lpm.return_value = (False, 'This is the reason message.')
self.assertRaises(exception.MigrationPreCheckError,
self.lpmsrc._check_migration_ready, lpar_w, host_w)
@mock.patch('pypowervm.tasks.migration.migrate_abort', autospec=True)
def test_migration_abort(self, mock_mig_abort):
self.lpmsrc.lpar_w = mock.Mock()
self.lpmsrc.migration_abort()
mock_mig_abort.assert_called_once_with(self.lpmsrc.lpar_w)
@mock.patch('pypowervm.tasks.migration.migrate_recover', autospec=True)
def test_migration_recover(self, mock_mig_recover):
self.lpmsrc.lpar_w = mock.Mock()
self.lpmsrc.migration_recover()
mock_mig_recover.assert_called_once_with(
self.lpmsrc.lpar_w, force=True)
@mock.patch('nova_powervm.virt.powervm.vif.post_live_migrate_at_source',
autospec=True)
def test_post_live_migration_at_source(self, mock_vif_post_lpm_at_source):
network_infos = [{'devname': 'tap-dev1', 'address': 'mac-addr1',
'network': {'bridge': 'br-int'}, 'id': 'vif_id_1'},
{'devname': 'tap-dev2', 'address': 'mac-addr2',
'network': {'bridge': 'br-int'}, 'id': 'vif_id_2'}]
self.lpmsrc.post_live_migration_at_source(network_infos)
# Assertions
for network_info in network_infos:
mock_vif_post_lpm_at_source.assert_any_call(mock.ANY, mock.ANY,
mock.ANY, network_info)
@mock.patch('nova_powervm.virt.powervm.tasks.storage.SaveBDM.execute',
autospec=True)
def test_post_live_migration_at_dest(self, mock_save_bdm):
        # Four distinct mocks; multiplying a one-element list would alias a
        # single mock four times.
        bdm1, bdm2, vol_drv1, vol_drv2 = [mock.Mock() for _ in range(4)]
vals = [(bdm1, vol_drv1), (bdm2, vol_drv2)]
self.lpmdst.pre_live_vol_data = {'vscsi-vol-id': 'fake_udid',
'vscsi-vol-id2': 'fake_udid2'}
self.lpmdst.post_live_migration_at_destination('network_infos', vals)
# Assertions
for bdm, vol_drv in vals:
vol_drv.post_live_migration_at_destination.assert_called_with(
mock.ANY)
self.assertEqual(len(vals), mock_save_bdm.call_count)


@ -1,248 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
import fixtures
import mock
from nova import test
from oslo_utils.fixture import uuidsentinel
from pypowervm import const as pvm_const
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.wrappers import storage as pvm_stg
from pypowervm.wrappers import virtual_io_server as pvm_vios
from nova_powervm.virt.powervm import media as m
class TestConfigDrivePowerVM(test.NoDBTestCase):
"""Unit Tests for the ConfigDrivePowerVM class."""
def setUp(self):
super(TestConfigDrivePowerVM, self).setUp()
self.apt = self.useFixture(pvm_fx.AdapterFx()).adpt
self.validate_vopt = self.useFixture(fixtures.MockPatch(
'pypowervm.tasks.vopt.validate_vopt_repo_exists')).mock
self.validate_vopt.return_value = None, None
@mock.patch('nova.api.metadata.base.InstanceMetadata', autospec=True)
@mock.patch('nova.virt.configdrive.ConfigDriveBuilder.make_drive',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_crt_cfg_dr_iso(self, mock_pvm_uuid, mock_mkdrv, mock_meta):
"""Validates that the image creation method works."""
cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
self.assertTrue(self.validate_vopt.called)
mock_instance = mock.MagicMock()
mock_instance.uuid = '1e46bbfd-73b6-3c2a-aeab-a1d3f065e92f'
mock_files = mock.MagicMock()
mock_net = mock.MagicMock()
iso_path = '/tmp/cfgdrv.iso'
cfg_dr_builder._create_cfg_dr_iso(mock_instance, mock_files, mock_net,
iso_path)
self.assertTrue(mock_pvm_uuid.called)
self.assertEqual(mock_mkdrv.call_count, 1)
# Test retry iso create
mock_mkdrv.reset_mock()
mock_mkdrv.side_effect = [OSError, mock_mkdrv]
cfg_dr_builder._create_cfg_dr_iso(mock_instance, mock_files, mock_net,
iso_path)
self.assertEqual(mock_mkdrv.call_count, 2)
def test_get_cfg_drv_name(self):
cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
mock_instance = mock.MagicMock()
mock_instance.uuid = uuidsentinel.inst_id
# calculate expected file name
expected_file_name = 'cfg_' + mock_instance.uuid.replace('-', '')
allowed_len = pvm_const.MaxLen.VOPT_NAME - 4 # '.iso' is 4 chars
expected_file_name = expected_file_name[:allowed_len] + '.iso'
name = cfg_dr_builder.get_cfg_drv_name(mock_instance)
self.assertEqual(name, expected_file_name)
@mock.patch('nova_powervm.virt.powervm.media.ConfigDrivePowerVM.'
'get_cfg_drv_name')
@mock.patch('tempfile.NamedTemporaryFile', autospec=True)
@mock.patch('nova_powervm.virt.powervm.media.ConfigDrivePowerVM.'
'_attach_vopt')
@mock.patch('os.path.getsize', autospec=True)
@mock.patch('pypowervm.tasks.storage.upload_vopt', autospec=True)
@mock.patch('nova_powervm.virt.powervm.media.ConfigDrivePowerVM.'
'_create_cfg_dr_iso', autospec=True)
def test_crt_cfg_drv_vopt(self, mock_ccdi, mock_upl, mock_getsize,
mock_attach, mock_ntf, mock_name):
# Mock Returns
cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
cfg_dr_builder.vios_uuid = 'vios_uuid'
mock_instance = mock.MagicMock()
mock_instance.uuid = uuidsentinel.inst_id
mock_upl.return_value = 'vopt', 'f_uuid'
fh = mock_ntf.return_value.__enter__.return_value
fh.name = 'iso_path'
mock_name.return_value = 'fake-name'
# Run
cfg_dr_builder.create_cfg_drv_vopt(mock_instance, 'files', 'netinfo',
'fake_lpar', admin_pass='pass')
mock_ntf.assert_called_once_with(mode='rb')
mock_ccdi.assert_called_once_with(mock_instance,
'files', 'netinfo', 'iso_path',
admin_pass='pass')
mock_getsize.assert_called_once_with('iso_path')
mock_upl.assert_called_once_with(self.apt, 'vios_uuid', fh,
'fake-name',
mock_getsize.return_value)
mock_attach.assert_called_once_with(mock_instance, 'fake_lpar',
'vopt', None)
@mock.patch('pypowervm.tasks.scsi_mapper.add_map', autospec=True)
@mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping',
autospec=True)
@mock.patch('pypowervm.utils.transaction.WrapperTask', autospec=True)
def test_attach_vopt(self, mock_class_wrapper_task, mock_build_map,
mock_add_map):
# Create objects to test with
mock_instance = mock.MagicMock(name='fake-instance')
cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
vopt = mock.Mock()
mock_vios = mock.Mock(spec=pvm_vios.VIOS)
mock_vios.configure_mock(name='vios name')
# Mock methods not currently under test
mock_wrapper_task = mock.MagicMock()
mock_class_wrapper_task.return_value = mock_wrapper_task
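        # Run any functor subtask immediately against the mock VIOS so the
        # mapping build/add logic under test executes inline.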
def call_param(param):
param(mock_vios)
mock_wrapper_task.add_functor_subtask.side_effect = call_param
def validate_build(host_uuid, vios_w, lpar_uuid, vopt_elem):
            self.assertIsNone(host_uuid)
self.assertIsInstance(vios_w, pvm_vios.VIOS)
self.assertEqual('lpar_uuid', lpar_uuid)
self.assertEqual(vopt, vopt_elem)
return 'map'
mock_build_map.side_effect = validate_build
def validate_add(vios_w, mapping):
self.assertIsInstance(vios_w, pvm_vios.VIOS)
self.assertEqual(mapping, 'map')
return 'added'
mock_add_map.side_effect = validate_add
# Run the actual test
cfg_dr_builder._attach_vopt(mock_instance, 'lpar_uuid', vopt)
# Make sure they were called and validated
self.assertTrue(mock_wrapper_task.execute.called)
self.assertEqual(1, mock_build_map.call_count)
self.assertEqual(1, mock_add_map.call_count)
self.assertTrue(self.validate_vopt.called)
def test_sanitize_network_info(self):
network_info = [{'type': 'lbr'}, {'type': 'pvm_sea'},
{'type': 'ovs'}]
cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
resp = cfg_dr_builder._sanitize_network_info(network_info)
expected_ret = [{'type': 'vif'}, {'type': 'vif'},
{'type': 'ovs'}]
self.assertEqual(resp, expected_ret)
def test_mgmt_cna_to_vif(self):
mock_cna = mock.MagicMock()
mock_cna.mac = "FAD4433ED120"
# Run
cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
vif = cfg_dr_builder._mgmt_cna_to_vif(mock_cna)
# Validate
self.assertEqual(vif.get('address'), "fa:d4:43:3e:d1:20")
self.assertEqual(vif.get('id'), 'mgmt_vif')
self.assertIsNotNone(vif.get('network'))
self.assertEqual(1, len(vif.get('network').get('subnets')))
subnet = vif.get('network').get('subnets')[0]
self.assertEqual(6, subnet.get('version'))
self.assertEqual('fe80::/64', subnet.get('cidr'))
ip = subnet.get('ips')[0]
self.assertEqual('fe80::f8d4:43ff:fe3e:d120', ip.get('address'))
def test_mac_to_link_local(self):
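        # Link-locals follow EUI-64: flip the universal/local bit of the
        # first octet, insert ff:fe in the middle, and prefix fe80::.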
mac = 'fa:d4:43:3e:d1:20'
self.assertEqual('fe80::f8d4:43ff:fe3e:d120',
m.ConfigDrivePowerVM._mac_to_link_local(mac))
mac = '00:00:00:00:00:00'
self.assertEqual('fe80::0200:00ff:fe00:0000',
m.ConfigDrivePowerVM._mac_to_link_local(mac))
mac = 'ff:ff:ff:ff:ff:ff'
self.assertEqual('fe80::fdff:ffff:feff:ffff',
m.ConfigDrivePowerVM._mac_to_link_local(mac))
@mock.patch('nova_powervm.virt.powervm.media.ConfigDrivePowerVM.'
'add_dlt_vopt_tasks')
@mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.wrap',
new=mock.MagicMock())
@mock.patch('pypowervm.tasks.scsi_mapper.find_maps')
@mock.patch('pypowervm.utils.transaction.FeedTask')
@mock.patch('pypowervm.utils.transaction.FeedTask.execute')
def test_dlt_vopt_no_map(self, mock_execute, mock_class_feed_task,
mock_add_dlt_vopt_tasks, mock_find_maps):
# Init objects to test with
mock_feed_task = mock.MagicMock()
mock_class_feed_task.return_value = mock_feed_task
mock_find_maps.return_value = []
# Invoke the operation
cfg_dr = m.ConfigDrivePowerVM(self.apt)
cfg_dr.dlt_vopt('2', remove_mappings=False)
# Verify expected methods were called
mock_add_dlt_vopt_tasks.assert_not_called()
self.assertTrue(mock_feed_task.execute.called)
@mock.patch('nova_powervm.virt.powervm.vm.get_vm_id', autospec=True)
@mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
@mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
def test_add_dlt_vopt_tasks(self, mock_find_maps, mock_gen_match_func,
mock_vm_id):
# Init objects to test with
cfg_dr = m.ConfigDrivePowerVM(self.apt)
stg_ftsk = mock.MagicMock()
cfg_dr.vios_uuid = 'vios_uuid'
lpar_uuid = 'lpar_uuid'
mock_find_maps.return_value = [mock.Mock(backing_storage='stor')]
mock_vm_id.return_value = '2'
# Run
cfg_dr.add_dlt_vopt_tasks(lpar_uuid, stg_ftsk)
# Validate
mock_gen_match_func.assert_called_with(pvm_stg.VOptMedia)
mock_find_maps.assert_called_with(
stg_ftsk.get_wrapper().scsi_mappings, client_lpar_id='2',
match_func=mock_gen_match_func.return_value)
self.assertTrue(stg_ftsk.add_post_execute.called)
self.assertTrue(
stg_ftsk.wrapper_tasks['vios_uuid'].add_functor_subtask.called)


@ -1,192 +0,0 @@
# Copyright 2015, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import retrying
from nova import exception
from nova import test
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.tests.test_utils import pvmhttp
from nova_powervm.virt.powervm import exception as npvmex
from nova_powervm.virt.powervm import mgmt
LPAR_HTTPRESP_FILE = "lpar.txt"
class TestMgmt(test.NoDBTestCase):
def setUp(self):
super(TestMgmt, self).setUp()
self.apt = self.useFixture(pvm_fx.AdapterFx()).adpt
lpar_http = pvmhttp.load_pvm_resp(LPAR_HTTPRESP_FILE, adapter=self.apt)
        self.assertIsNotNone(lpar_http,
                             "Could not load %s" % LPAR_HTTPRESP_FILE)
self.resp = lpar_http.response
@mock.patch('pypowervm.tasks.partition.get_this_partition', autospec=True)
def test_mgmt_uuid(self, mock_get_partition):
mock_get_partition.return_value = mock.Mock(uuid='mock_mgmt')
adpt = mock.Mock()
# First run should call the partition only once
self.assertEqual('mock_mgmt', mgmt.mgmt_uuid(adpt))
mock_get_partition.assert_called_once_with(adpt)
# But a subsequent call should effectively no-op
mock_get_partition.reset_mock()
self.assertEqual('mock_mgmt', mgmt.mgmt_uuid(adpt))
self.assertEqual(0, mock_get_partition.call_count)
@mock.patch('glob.glob', autospec=True)
@mock.patch('nova.privsep.path.writefile', autospec=True)
@mock.patch('os.path.realpath', autospec=True)
def test_discover_vscsi_disk(self, mock_realpath, mock_dacw, mock_glob):
scanpath = '/sys/bus/vio/devices/30000005/host*/scsi_host/host*/scan'
udid = ('275b5d5f88fa5611e48be9000098be9400'
'13fb2aa55a2d7b8d150cb1b7b6bc04d6')
devlink = ('/dev/disk/by-id/scsi-SIBM_3303_NVDISK' + udid)
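        # Discovery writes to the scan path, then globs /dev/disk/by-id for
        # a link containing the trailing 32 characters of the UDID.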
mapping = mock.Mock()
mapping.client_adapter.lpar_slot_num = 5
mapping.backing_storage.udid = udid
# Realistically, first glob would return e.g. .../host0/.../host0/...
# but it doesn't matter for test purposes.
mock_glob.side_effect = [[scanpath], [devlink]]
mgmt.discover_vscsi_disk(mapping)
mock_glob.assert_has_calls(
[mock.call(scanpath), mock.call('/dev/disk/by-id/*' + udid[-32:])])
mock_dacw.assert_called_with(scanpath, 'a', '- - -')
mock_realpath.assert_called_with(devlink)
@mock.patch('retrying.retry', autospec=True)
@mock.patch('glob.glob', autospec=True)
@mock.patch('nova.privsep.path.writefile', autospec=True)
def test_discover_vscsi_disk_not_one_result(self, mock_write, mock_glob,
mock_retry):
"""Zero or more than one disk is found by discover_vscsi_disk."""
def validate_retry(kwargs):
self.assertIn('retry_on_result', kwargs)
self.assertEqual(250, kwargs['wait_fixed'])
self.assertEqual(300000, kwargs['stop_max_delay'])
def raiser(unused):
raise retrying.RetryError(mock.Mock(attempt_number=123))
def retry_passthrough(**kwargs):
validate_retry(kwargs)
def wrapped(_poll_for_dev):
return _poll_for_dev
return wrapped
def retry_timeout(**kwargs):
validate_retry(kwargs)
def wrapped(_poll_for_dev):
return raiser
return wrapped
udid = ('275b5d5f88fa5611e48be9000098be9400'
'13fb2aa55a2d7b8d150cb1b7b6bc04d6')
mapping = mock.Mock()
mapping.client_adapter.lpar_slot_num = 5
mapping.backing_storage.udid = udid
# No disks found
mock_retry.side_effect = retry_timeout
mock_glob.side_effect = lambda path: []
self.assertRaises(npvmex.NoDiskDiscoveryException,
mgmt.discover_vscsi_disk, mapping)
# Multiple disks found
mock_retry.side_effect = retry_passthrough
mock_glob.side_effect = [['path'], ['/dev/sde', '/dev/sdf']]
self.assertRaises(npvmex.UniqueDiskDiscoveryException,
mgmt.discover_vscsi_disk, mapping)
@mock.patch('time.sleep', autospec=True)
@mock.patch('os.path.realpath', autospec=True)
@mock.patch('os.stat', autospec=True)
@mock.patch('nova.privsep.path.writefile', autospec=True)
def test_remove_block_dev(self, mock_dacw, mock_stat, mock_realpath,
mock_sleep):
link = '/dev/link/foo'
realpath = '/dev/sde'
delpath = '/sys/block/sde/device/delete'
mock_realpath.return_value = realpath
# Good path
mock_stat.side_effect = (None, None, OSError())
mgmt.remove_block_dev(link)
mock_realpath.assert_called_with(link)
mock_stat.assert_has_calls([mock.call(realpath), mock.call(delpath),
mock.call(realpath)])
mock_dacw.assert_called_with(delpath, 'a', '1')
self.assertEqual(0, mock_sleep.call_count)
# Device param not found
mock_dacw.reset_mock()
mock_stat.reset_mock()
mock_stat.side_effect = (OSError(), None, None)
self.assertRaises(exception.InvalidDevicePath, mgmt.remove_block_dev,
link)
# stat was called once; privsep write was not called
self.assertEqual(1, mock_stat.call_count)
mock_dacw.assert_not_called()
# Delete special file not found
mock_stat.reset_mock()
mock_stat.side_effect = (None, OSError(), None)
self.assertRaises(exception.InvalidDevicePath, mgmt.remove_block_dev,
link)
# stat was called twice; privsep write was not called
self.assertEqual(2, mock_stat.call_count)
mock_dacw.assert_not_called()
@mock.patch('retrying.retry')
@mock.patch('os.path.realpath')
@mock.patch('os.stat')
@mock.patch('nova.privsep.path.writefile')
def test_remove_block_dev_timeout(self, mock_dacw, mock_stat,
mock_realpath, mock_retry):
def validate_retry(kwargs):
self.assertIn('retry_on_result', kwargs)
self.assertEqual(250, kwargs['wait_fixed'])
self.assertEqual(10000, kwargs['stop_max_delay'])
def raiser(unused):
raise retrying.RetryError(mock.Mock(attempt_number=123))
def retry_timeout(**kwargs):
validate_retry(kwargs)
def wrapped(_poll_for_del):
return raiser
return wrapped
# Deletion was attempted, but device is still there
link = '/dev/link/foo'
delpath = '/sys/block/sde/device/delete'
realpath = '/dev/sde'
mock_realpath.return_value = realpath
mock_stat.side_effect = lambda path: 1
mock_retry.side_effect = retry_timeout
self.assertRaises(
npvmex.DeviceDeletionException, mgmt.remove_block_dev, link)
mock_realpath.assert_called_once_with(link)
mock_dacw.assert_called_with(delpath, 'a', '1')


@ -1,171 +0,0 @@
# Copyright 2016, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import mock
from nova import test
from nova_powervm.virt.powervm import exception as p_exc
from nova_powervm.virt.powervm import slot
from pypowervm import exceptions as pvm_exc
class TestNovaSlotManager(test.NoDBTestCase):
def setUp(self):
super(TestNovaSlotManager, self).setUp()
self.store_api = mock.MagicMock()
self.inst = mock.MagicMock(uuid='uuid1')
def test_build_slot_mgr(self):
# Test when NVRAM store exists
# The Swift-backed implementation of PowerVM SlotMapStore is returned
self.store_api.fetch_slot_map = mock.MagicMock(return_value=None)
slot_mgr = slot.build_slot_mgr(self.inst, self.store_api, adapter=None,
vol_drv_iter=None)
self.assertIsInstance(slot_mgr, slot.SwiftSlotManager)
self.assertFalse(slot_mgr.is_rebuild)
# Test when no NVRAM store is set up
# The no-op implementation of PowerVM SlotMapStore is returned
self.assertIsInstance(
slot.build_slot_mgr(self.inst, None, adapter=None,
vol_drv_iter=None),
slot.NoopSlotManager)
# Test that the rebuild flag is set when it is flagged as a rebuild
slot_mgr = slot.build_slot_mgr(
self.inst, self.store_api, adapter='adpt', vol_drv_iter='test')
self.assertTrue(slot_mgr.is_rebuild)
class TestSwiftSlotManager(test.NoDBTestCase):
def setUp(self):
super(TestSwiftSlotManager, self).setUp()
self.store_api = mock.MagicMock()
self.store_api.fetch_slot_map = mock.MagicMock(return_value=None)
self.inst = mock.MagicMock(uuid='a2e71b38-160f-4650-bbdc-2a10cd507e2b')
self.slot_mgr = slot.SwiftSlotManager(self.store_api,
instance=self.inst)
def test_load(self):
# load() should have been called internally by __init__
self.store_api.fetch_slot_map.assert_called_with(
self.inst.uuid + '_slot_map')
def test_save(self):
# Mock the call
self.store_api.store_slot_map = mock.MagicMock()
# Run save
self.slot_mgr.save()
# Not called because nothing changed
self.store_api.store_slot_map.assert_not_called()
# Change something
mock_vfcmap = mock.Mock(server_adapter=mock.Mock(lpar_slot_num=123))
self.slot_mgr.register_vfc_mapping(mock_vfcmap, 'fabric')
# Run save
self.slot_mgr.save()
# Validate the call
self.store_api.store_slot_map.assert_called_once_with(
self.inst.uuid + '_slot_map', mock.ANY)
def test_delete(self):
# Mock the call
self.store_api.delete_slot_map = mock.MagicMock()
# Run delete
self.slot_mgr.delete()
# Validate the call
self.store_api.delete_slot_map.assert_called_once_with(
self.inst.uuid + '_slot_map')
@mock.patch('pypowervm.tasks.slot_map.RebuildSlotMap', autospec=True)
@mock.patch('pypowervm.tasks.storage.ComprehensiveScrub', autospec=True)
def test_init_recreate_map(self, mock_ftsk, mock_rebuild_slot):
vios1, vios2 = mock.Mock(uuid='uuid1'), mock.Mock(uuid='uuid2')
mock_ftsk.return_value.feed = [vios1, vios2]
self.slot_mgr.init_recreate_map(mock.Mock(), self._vol_drv_iter())
self.assertEqual(1, mock_ftsk.call_count)
mock_rebuild_slot.assert_called_once_with(
self.slot_mgr, mock.ANY, {'udid': ['uuid2'], 'iscsi': ['uuid1']},
['a', 'b'])
@mock.patch('pypowervm.tasks.slot_map.RebuildSlotMap', autospec=True)
@mock.patch('pypowervm.tasks.storage.ComprehensiveScrub', autospec=True)
def test_init_recreate_map_fails(self, mock_ftsk, mock_rebuild_slot):
vios1, vios2 = mock.Mock(uuid='uuid1'), mock.Mock(uuid='uuid2')
mock_ftsk.return_value.feed = [vios1, vios2]
mock_rebuild_slot.side_effect = (
pvm_exc.InvalidHostForRebuildNotEnoughVIOS(udid='udid56'))
self.assertRaises(
p_exc.InvalidRebuild, self.slot_mgr.init_recreate_map, mock.Mock(),
self._vol_drv_iter())
@mock.patch('pypowervm.tasks.slot_map.RebuildSlotMap', autospec=True)
@mock.patch('pypowervm.tasks.storage.ComprehensiveScrub', autospec=True)
def test_init_recreate_map_fileio(self, mock_ftsk, mock_rebuild_slot):
vios1, vios2 = mock.Mock(uuid='uuid1'), mock.Mock(uuid='uuid2')
mock_ftsk.return_value.feed = [vios1, vios2]
expected_vio_wrap = [vios1, vios2]
self.slot_mgr.init_recreate_map(mock.Mock(), self._vol_drv_iter_2())
self.assertEqual(1, mock_ftsk.call_count)
mock_rebuild_slot.assert_called_once_with(
self.slot_mgr, expected_vio_wrap,
{'udidvscsi': ['uuid1'], 'udid': ['uuid1']}, [])
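    # Each mock volume driver's is_volume_on_vios yields one (found, udid)
    # tuple per VIOS in the feed, in order.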
def _vol_drv_iter_2(self):
mock_fileio = mock.Mock()
mock_fileio.vol_type.return_value = 'fileio'
mock_fileio.is_volume_on_vios.side_effect = ((True, 'udid'),
(False, None))
mock_scsi = mock.Mock()
mock_scsi.vol_type.return_value = 'vscsi'
mock_scsi.is_volume_on_vios.side_effect = ((True, 'udidvscsi'),
(False, None))
        vol_drvs = [mock_fileio, mock_scsi]
        for drv in vol_drvs:
            yield mock.Mock(), drv
def _vol_drv_iter(self):
mock_scsi = mock.Mock()
mock_scsi.vol_type.return_value = 'vscsi'
mock_scsi.is_volume_on_vios.side_effect = ((False, None),
(True, 'udid'))
mock_iscsi = mock.Mock()
mock_iscsi.vol_type.return_value = 'iscsi'
mock_iscsi.is_volume_on_vios.side_effect = ((True, 'iscsi'),
(False, None))
mock_npiv1 = mock.Mock()
mock_npiv1.vol_type.return_value = 'npiv'
mock_npiv1._fabric_names.return_value = ['a', 'b']
mock_npiv2 = mock.Mock()
mock_npiv2.vol_type.return_value = 'npiv'
mock_npiv2._fabric_names.return_value = ['a', 'b', 'c']
        vol_drvs = [mock_scsi, mock_npiv1, mock_npiv2, mock_iscsi]
        for drv in vol_drvs:
            yield mock.Mock(), drv


@ -1,968 +0,0 @@
# Copyright 2016, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import exception
from nova.network import model
from nova.network.neutronv2 import api as netapi
from nova import test
from oslo_config import cfg
from pypowervm import exceptions as pvm_ex
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.wrappers import logical_partition as pvm_lpar
from pypowervm.wrappers import managed_system as pvm_ms
from pypowervm.wrappers import network as pvm_net
from nova_powervm.virt.powervm import vif
CONF = cfg.CONF
def cna(mac):
"""Builds a mock Client Network Adapter for unit tests."""
nic = mock.MagicMock()
nic.mac = mac
nic.vswitch_uri = 'fake_href'
return nic
class FakeNetworkAPI(object):
def __init__(self, physnet):
self.physical_network = physnet
def get(self, context, netid):
physnet = mock.MagicMock()
physnet.physical_network = self.physical_network
return physnet
class TestVifFunctions(test.NoDBTestCase):
def setUp(self):
super(TestVifFunctions, self).setUp()
self.adpt = self.useFixture(pvm_fx.AdapterFx(
traits=pvm_fx.LocalPVMTraits)).adpt
self.slot_mgr = mock.Mock()
@mock.patch('oslo_serialization.jsonutils.dumps', autospec=True)
@mock.patch('pypowervm.wrappers.event.Event', autospec=True)
def test_push_vif_event(self, mock_event, mock_dumps):
mock_vif = mock.Mock(mac='MAC', href='HREF')
vif._push_vif_event(self.adpt, 'action', mock_vif, mock.Mock(),
'pvm_sea')
mock_dumps.assert_called_once_with(
{'provider': 'NOVA_PVM_VIF', 'action': 'action', 'mac': 'MAC',
'type': 'pvm_sea'})
mock_event.bld.assert_called_once_with(self.adpt, 'HREF',
mock_dumps.return_value)
mock_event.bld.return_value.create.assert_called_once_with()
mock_dumps.reset_mock()
mock_event.bld.reset_mock()
mock_event.bld.return_value.create.reset_mock()
        # Exceptions from event creation are re-raised
mock_event.bld.return_value.create.side_effect = IndexError
self.assertRaises(IndexError, vif._push_vif_event, self.adpt, 'action',
mock_vif, mock.Mock(), 'pvm_sea')
mock_dumps.assert_called_once_with(
{'provider': 'NOVA_PVM_VIF', 'action': 'action', 'mac': 'MAC',
'type': 'pvm_sea'})
mock_event.bld.assert_called_once_with(self.adpt, 'HREF',
mock_dumps.return_value)
mock_event.bld.return_value.create.assert_called_once_with()
@mock.patch('nova_powervm.virt.powervm.vif._build_vif_driver',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif._push_vif_event', autospec=True)
def test_plug(self, mock_event, mock_bld_drv):
"""Test the top-level plug method."""
mock_vif = {'address': 'MAC', 'type': 'pvm_sea'}
slot_mgr = mock.Mock()
# 1) With slot registration
slot_mgr.build_map.get_vnet_slot.return_value = None
vnet = vif.plug(self.adpt, 'host_uuid', 'instance', mock_vif, slot_mgr)
mock_bld_drv.assert_called_once_with(self.adpt, 'host_uuid',
'instance', mock_vif)
slot_mgr.build_map.get_vnet_slot.assert_called_once_with('MAC')
mock_bld_drv.return_value.plug.assert_called_once_with(mock_vif, None,
new_vif=True)
slot_mgr.register_vnet.assert_called_once_with(
mock_bld_drv.return_value.plug.return_value)
mock_event.assert_called_once_with(self.adpt, 'plug', vnet, mock.ANY,
'pvm_sea')
self.assertEqual(mock_bld_drv.return_value.plug.return_value, vnet)
# Clean up
mock_bld_drv.reset_mock()
slot_mgr.build_map.get_vnet_slot.reset_mock()
mock_bld_drv.return_value.plug.reset_mock()
slot_mgr.register_vnet.reset_mock()
mock_event.reset_mock()
# 2) Without slot registration; and plug returns None (which it should
# IRL whenever new_vif=False).
slot_mgr.build_map.get_vnet_slot.return_value = 123
mock_bld_drv.return_value.plug.return_value = None
vnet = vif.plug(self.adpt, 'host_uuid', 'instance', mock_vif, slot_mgr,
new_vif=False)
mock_bld_drv.assert_called_once_with(self.adpt, 'host_uuid',
'instance', mock_vif)
slot_mgr.build_map.get_vnet_slot.assert_called_once_with('MAC')
mock_bld_drv.return_value.plug.assert_called_once_with(mock_vif, 123,
new_vif=False)
slot_mgr.register_vnet.assert_not_called()
self.assertEqual(0, mock_event.call_count)
self.assertIsNone(vnet)
@mock.patch('nova_powervm.virt.powervm.vif._build_vif_driver',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif._push_vif_event', autospec=True)
def test_unplug(self, mock_event, mock_bld_drv):
"""Test the top-level unplug method."""
mock_vif = {'address': 'MAC', 'type': 'pvm_sea'}
slot_mgr = mock.Mock()
# 1) With slot deregistration, default cna_w_list
mock_bld_drv.return_value.unplug.return_value = 'vnet_w'
vif.unplug(self.adpt, 'host_uuid', 'instance', mock_vif, slot_mgr)
mock_bld_drv.assert_called_once_with(self.adpt, 'host_uuid',
'instance', mock_vif)
mock_bld_drv.return_value.unplug.assert_called_once_with(
mock_vif, cna_w_list=None)
slot_mgr.drop_vnet.assert_called_once_with('vnet_w')
mock_event.assert_called_once_with(self.adpt, 'unplug', 'vnet_w',
mock.ANY, 'pvm_sea')
# Clean up
mock_bld_drv.reset_mock()
mock_bld_drv.return_value.unplug.reset_mock()
slot_mgr.drop_vnet.reset_mock()
mock_event.reset_mock()
# 2) Without slot deregistration, specified cna_w_list
mock_bld_drv.return_value.unplug.return_value = None
vif.unplug(self.adpt, 'host_uuid', 'instance', mock_vif, slot_mgr,
cna_w_list='cnalist')
mock_bld_drv.assert_called_once_with(self.adpt, 'host_uuid',
'instance', mock_vif)
mock_bld_drv.return_value.unplug.assert_called_once_with(
mock_vif, cna_w_list='cnalist')
slot_mgr.drop_vnet.assert_not_called()
# When unplug doesn't find a vif, we don't push an event
self.assertEqual(0, mock_event.call_count)
@mock.patch('nova_powervm.virt.powervm.vif._build_vif_driver',
autospec=True)
def test_plug_raises(self, mock_vif_drv):
"""HttpError is converted to VirtualInterfacePlugException."""
vif_drv = mock.Mock(plug=mock.Mock(side_effect=pvm_ex.HttpError(
resp=mock.Mock(status='status', reqmethod='method', reqpath='path',
reason='reason'))))
mock_vif_drv.return_value = vif_drv
mock_slot_mgr = mock.Mock()
mock_vif = {'address': 'vifaddr'}
self.assertRaises(exception.VirtualInterfacePlugException,
vif.plug, 'adap', 'huuid', 'inst', mock_vif,
mock_slot_mgr, new_vif='new_vif')
mock_vif_drv.assert_called_once_with('adap', 'huuid', 'inst', mock_vif)
vif_drv.plug.assert_called_once_with(
mock_vif, mock_slot_mgr.build_map.get_vnet_slot.return_value,
new_vif='new_vif')
mock_slot_mgr.build_map.get_vnet_slot.assert_called_once_with(
'vifaddr')
@mock.patch('pypowervm.wrappers.network.VSwitch.search')
def test_get_secure_rmc_vswitch(self, mock_search):
        # Test that finding no vswitches returns None
mock_search.return_value = []
resp = vif.get_secure_rmc_vswitch(self.adpt, 'host_uuid')
self.assertIsNone(resp)
        # Mock the search returning a vswitch and verify that the one named
        # MGMTSWITCH is matched
mock_vs = mock.MagicMock()
mock_vs.name = 'MGMTSWITCH'
mock_search.return_value = [mock_vs]
self.assertEqual(mock_vs,
vif.get_secure_rmc_vswitch(self.adpt, 'host_uuid'))
mock_search.assert_called_with(
self.adpt, parent_type=pvm_ms.System.schema_type,
parent_uuid='host_uuid', name=vif.SECURE_RMC_VSWITCH)
@mock.patch('pypowervm.tasks.cna.crt_cna', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_plug_secure_rmc_vif(self, mock_pvm_uuid, mock_crt):
# Mock up the data
mock_pvm_uuid.return_value = 'lpar_uuid'
mock_crt.return_value = mock.Mock()
self.slot_mgr.build_map.get_mgmt_vea_slot = mock.Mock(
return_value=(None, None))
mock_instance = mock.MagicMock(system_metadata={})
# Run the method
vif.plug_secure_rmc_vif(self.adpt, mock_instance, 'host_uuid',
self.slot_mgr)
# Validate responses
mock_crt.assert_called_once_with(
self.adpt, 'host_uuid', 'lpar_uuid', 4094, vswitch='MGMTSWITCH',
crt_vswitch=True, slot_num=None, mac_addr=None)
self.slot_mgr.register_cna.assert_called_once_with(
mock_crt.return_value)
@mock.patch('pypowervm.tasks.cna.crt_cna', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_plug_secure_rmc_vif_with_slot(self, mock_pvm_uuid, mock_crt):
# Mock up the data
mock_pvm_uuid.return_value = 'lpar_uuid'
mock_crt.return_value = mock.Mock()
self.slot_mgr.build_map.get_mgmt_vea_slot = mock.Mock(
return_value=('mac_addr', 5))
mock_instance = mock.MagicMock(system_metadata={})
# Run the method
vif.plug_secure_rmc_vif(self.adpt, mock_instance, 'host_uuid',
self.slot_mgr)
# Validate responses
mock_crt.assert_called_once_with(
self.adpt, 'host_uuid', 'lpar_uuid', 4094, vswitch='MGMTSWITCH',
crt_vswitch=True, slot_num=5, mac_addr='mac_addr')
self.assertFalse(self.slot_mgr.called)
@mock.patch('pypowervm.tasks.cna.crt_cna', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_plug_secure_rmc_vif_for_rebuild(self, mock_pvm_uuid, mock_crt):
# Mock up the data
mock_pvm_uuid.return_value = 'lpar_uuid'
mock_crt.return_value = mock.Mock()
self.slot_mgr.build_map.get_mgmt_vea_slot = mock.Mock(
return_value=(None, None))
mock_instance = mock.MagicMock(
system_metadata={'mgmt_interface_mac': 'old_mac'})
# Run the method
vif.plug_secure_rmc_vif(self.adpt, mock_instance, 'host_uuid',
self.slot_mgr)
# Validate responses
# Validate that as part of rebuild, pvm_cna.crt_cna is called with
# 'old_mac' stored in the instance's system_metadata. Also, the slot
# number is not passed. This is because during a rebuild the instance
# is destroyed and spawned again. When the instance is destroyed, the
# slot data is removed. When the instance is spawned, the required
# volume and network info is recovered from the BDM and network info
# dicts. The only missing piece is the mgmt interface MAC address.
mock_crt.assert_called_once_with(
self.adpt, 'host_uuid', 'lpar_uuid', 4094, vswitch='MGMTSWITCH',
crt_vswitch=True, slot_num=None, mac_addr='old_mac')
# Validate that register_cna is called.
self.slot_mgr.register_cna.assert_called_once_with(
mock_crt.return_value)
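# A sketch of the plug_secure_rmc_vif flow the three tests above verify
# (hypothetical helper name; the signature follows the calls above):
def _sketch_plug_secure_rmc_vif(adpt, instance, host_uuid, slot_mgr):
    from nova_powervm.virt.powervm import vm
    from pypowervm.tasks import cna as pvm_cna
    lpar_uuid = vm.get_pvm_uuid(instance)
    mac, slot = slot_mgr.build_map.get_mgmt_vea_slot()
    if not mac:
        # On rebuild the slot map is gone, but the management interface
        # MAC survives in the instance's system_metadata.
        mac = instance.system_metadata.get('mgmt_interface_mac')
    cna_w = pvm_cna.crt_cna(
        adpt, host_uuid, lpar_uuid, 4094, vswitch='MGMTSWITCH',
        crt_vswitch=True, slot_num=slot, mac_addr=mac)
    if slot is None:
        # Only a freshly assigned slot needs to be recorded.
        slot_mgr.register_cna(cna_w)
    return cna_w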
def test_build_vif_driver(self):
# Test the Shared Ethernet Adapter type VIF
mock_inst = mock.MagicMock()
mock_inst.name = 'instance'
self.assertIsInstance(
vif._build_vif_driver(self.adpt, 'host_uuid', mock_inst,
{'type': 'pvm_sea'}),
vif.PvmSeaVifDriver)
self.assertIsInstance(
vif._build_vif_driver(self.adpt, 'host_uuid', mock_inst,
{'type': 'pvm_sriov'}),
vif.PvmVnicSriovVifDriver)
# Test raises exception for no type
self.assertRaises(exception.VirtualInterfacePlugException,
vif._build_vif_driver, self.adpt, 'host_uuid',
mock_inst, {})
# Test an invalid vif type
self.assertRaises(exception.VirtualInterfacePlugException,
vif._build_vif_driver, self.adpt, 'host_uuid',
mock_inst, {'type': 'bad'})
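# A sketch of the dispatch verified above (hypothetical name; only the
# two driver types asserted on are mapped):
def _sketch_build_vif_driver(adpt, host_uuid, instance, vif):
    from nova import exception
    from nova_powervm.virt.powervm import vif as pvm_vif
    drivers = {'pvm_sea': pvm_vif.PvmSeaVifDriver,
               'pvm_sriov': pvm_vif.PvmVnicSriovVifDriver}
    vif_type = vif.get('type')
    if vif_type not in drivers:
        # A missing type and an unknown type both fail the plug.
        raise exception.VirtualInterfacePlugException(
            'Unable to build a vif driver for type %s' % vif_type)
    return drivers[vif_type](adpt, host_uuid, instance)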
@mock.patch('nova_powervm.virt.powervm.vif._build_vif_driver',
autospec=True)
def test_pre_live_migrate_at_source(self, mock_build_vif_drv):
mock_drv = mock.MagicMock()
mock_build_vif_drv.return_value = mock_drv
mock_vif = mock.MagicMock()
vif.pre_live_migrate_at_source(self.adpt, 'host_uuid', mock.Mock(),
mock_vif)
mock_drv.pre_live_migrate_at_source.assert_called_once_with(mock_vif)
@mock.patch('nova_powervm.virt.powervm.vif._build_vif_driver',
autospec=True)
def test_rollback_live_migration_at_destination(self, mock_build_vif_drv):
mock_build_vif_drv.return_value = mock_drv = mock.MagicMock()
mock_vif, mappings = mock.MagicMock(), {}
vif.rollback_live_migration_at_destination(
self.adpt, 'host_uuid', mock.Mock(), mock_vif,
mappings)
rb = mock_drv.rollback_live_migration_at_destination
rb.assert_called_once_with(mock_vif, mappings)
@mock.patch('nova_powervm.virt.powervm.vif._build_vif_driver',
autospec=True)
def test_pre_live_migrate_at_destination(self, mock_build_vif_drv):
mock_drv = mock.MagicMock()
mock_build_vif_drv.return_value = mock_drv
mock_vif = mock.MagicMock()
vif.pre_live_migrate_at_destination(self.adpt, 'host_uuid',
mock.Mock(), mock_vif, {})
mock_drv.pre_live_migrate_at_destination.assert_called_once_with(
mock_vif, {})
@mock.patch('nova_powervm.virt.powervm.vif._build_vif_driver',
autospec=True)
def test_post_live_migrate_at_source(self, mock_build_vif_drv):
mock_drv = mock.MagicMock()
mock_build_vif_drv.return_value = mock_drv
mock_vif = mock.MagicMock()
vif.post_live_migrate_at_source(self.adpt, 'host_uuid', mock.Mock(),
mock_vif)
mock_drv.post_live_migrate_at_source.assert_called_once_with(mock_vif)
def test_get_trunk_dev_name(self):
mock_vif = {'devname': 'tap_test', 'id': '1234567890123456'}
# Test when the dev name is available
self.assertEqual('tap_test', vif._get_trunk_dev_name(mock_vif))
# And when it isn't, the name is derived from the id, truncated to fit
del mock_vif['devname']
self.assertEqual('nic12345678901',
vif._get_trunk_dev_name(mock_vif))
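# A one-function sketch matching the two assertions above (hypothetical
# name): prefer the explicit devname, else derive a 14-character name
# from the vif id.
def _sketch_get_trunk_dev_name(vif):
    if vif.get('devname'):
        return vif['devname']
    return ('nic' + vif['id'])[:14]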
class TestVifSriovDriver(test.NoDBTestCase):
def setUp(self):
super(TestVifSriovDriver, self).setUp()
self.adpt = self.useFixture(pvm_fx.AdapterFx()).adpt
self.inst = mock.MagicMock()
self.drv = vif.PvmVnicSriovVifDriver(self.adpt, 'host_uuid', self.inst)
@mock.patch('pypowervm.wrappers.managed_system.System.get')
def test_plug_no_pports(self, mock_sysget):
"""Raise when plug is called with a network with no physical ports."""
sriov_adaps = [
mock.Mock(phys_ports=[
mock.Mock(loc_code='loc1', label='foo'),
mock.Mock(loc_code='loc2', label='')]),
mock.Mock(phys_ports=[
mock.Mock(loc_code='loc3', label='bar'),
mock.Mock(loc_code='loc4', label='foo')])]
sys = mock.Mock(asio_config=mock.Mock(sriov_adapters=sriov_adaps))
mock_sysget.return_value = [sys]
self.assertRaises(exception.VirtualInterfacePlugException,
self.drv.plug, FakeDirectVif('net2'), 1)
@mock.patch('pypowervm.wrappers.iocard.VNIC.bld')
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('pypowervm.tasks.sriov.set_vnic_back_devs', autospec=True)
@mock.patch('pypowervm.wrappers.managed_system.System.get')
def test_plug_no_physnet(self, mock_sysget, mock_back_devs, mock_pvm_uuid,
mock_vnic_bld):
slot = 10
sriov_adaps = [
mock.Mock(phys_ports=[
mock.Mock(loc_code='port11', label='default'),
mock.Mock(loc_code='port3', label='data1')]),
mock.Mock(phys_ports=[
mock.Mock(loc_code='port4', label='data2'),
mock.Mock(loc_code='port22', label='default')])]
sys = mock.Mock(asio_config=mock.Mock(sriov_adapters=sriov_adaps))
mock_sysget.return_value = [sys]
netapi.API = mock.Mock(return_value=FakeNetworkAPI('default'))
self.drv.plug(FakeDirectVif(''), slot)
# Ensure back devs are created with pports from sriov_adaps and
# not with the pports passed into the plug method
mock_back_devs.assert_called_once_with(
mock_vnic_bld.return_value, ['port11', 'port22'], redundancy=3,
capacity=None, max_capacity=None, check_port_status=True,
sys_w=sys)
@mock.patch('pypowervm.wrappers.iocard.VNIC.bld')
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('pypowervm.tasks.sriov.set_vnic_back_devs', autospec=True)
@mock.patch('pypowervm.wrappers.managed_system.System.get')
def test_plug_no_matching_pports(self, mock_sysget, mock_back_devs,
mock_pvm_uuid, mock_vnic_bld):
slot = 10
sriov_adaps = [
mock.Mock(phys_ports=[
mock.Mock(loc_code='port1', label='data1'),
mock.Mock(loc_code='port3', label='data1')]),
mock.Mock(phys_ports=[
mock.Mock(loc_code='port4', label='data2'),
mock.Mock(loc_code='port2', label='data2')])]
sys = mock.Mock(asio_config=mock.Mock(sriov_adapters=sriov_adaps))
mock_sysget.return_value = [sys]
netapi.API = mock.Mock(return_value=FakeNetworkAPI('default'))
# Ensure a plug exception is raised when there are no pports matching
# the physical network of the corresponding neutron network
self.assertRaises(exception.VirtualInterfacePlugException,
self.drv.plug,
FakeDirectVif('default'), slot)
@mock.patch('pypowervm.wrappers.iocard.VNIC.bld')
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('pypowervm.tasks.sriov.set_vnic_back_devs', autospec=True)
@mock.patch('pypowervm.wrappers.managed_system.System.get')
def test_plug_bad_pports(self, mock_sysget, mock_back_devs, mock_pvm_uuid,
mock_vnic_bld):
slot = 10
sriov_adaps = [
mock.Mock(phys_ports=[
mock.Mock(loc_code='port1', label='default'),
mock.Mock(loc_code='port3', label='data1')]),
mock.Mock(phys_ports=[
mock.Mock(loc_code='port4', label='data2'),
mock.Mock(loc_code='port2', label='default')])]
sys = mock.Mock(asio_config=mock.Mock(sriov_adapters=sriov_adaps))
mock_sysget.return_value = [sys]
netapi.API = mock.Mock(return_value=FakeNetworkAPI('default'))
self.drv.plug(FakeDirectVif(''), slot)
# Ensure back devs are created with the correct pports belonging to
# the same physical network corresponding to the neutron network
mock_back_devs.assert_called_once_with(
mock_vnic_bld.return_value, ['port1', 'port2'], redundancy=3,
capacity=None, max_capacity=None, check_port_status=True,
sys_w=sys)
@mock.patch('pypowervm.wrappers.managed_system.System.get')
@mock.patch('pypowervm.util.sanitize_mac_for_api', autospec=True)
@mock.patch('pypowervm.wrappers.iocard.VNIC.bld')
@mock.patch('pypowervm.tasks.sriov.set_vnic_back_devs', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_plug(self, mock_pvm_uuid, mock_back_devs, mock_vnic_bld,
mock_san_mac, mock_sysget):
slot = 10
sriov_adaps = [
mock.Mock(phys_ports=[
mock.Mock(loc_code='port1', label='default'),
mock.Mock(loc_code='port3', label='data1')]),
mock.Mock(phys_ports=[
mock.Mock(loc_code='port4', label='data2'),
mock.Mock(loc_code='port2', label='default')])]
sys = mock.Mock(asio_config=mock.Mock(sriov_adapters=sriov_adaps))
mock_sysget.return_value = [sys]
self.drv.plug(FakeDirectVif('default'),
slot)
mock_san_mac.assert_called_once_with('ab:ab:ab:ab:ab:ab')
mock_vnic_bld.assert_called_once_with(
self.drv.adapter, 79, slot_num=slot,
mac_addr=mock_san_mac.return_value, allowed_vlans='NONE',
allowed_macs='NONE')
mock_back_devs.assert_called_once_with(
mock_vnic_bld.return_value, ['port1', 'port2'], redundancy=3,
capacity=None, max_capacity=None, check_port_status=True,
sys_w=sys)
mock_pvm_uuid.assert_called_once_with(self.drv.instance)
mock_vnic_bld.return_value.create.assert_called_once_with(
parent_type=pvm_lpar.LPAR, parent_uuid=mock_pvm_uuid.return_value)
# Now with redundancy/capacity values from binding:profile
mock_san_mac.reset_mock()
mock_vnic_bld.reset_mock()
mock_back_devs.reset_mock()
mock_pvm_uuid.reset_mock()
self.drv.plug(FakeDirectVif('default', cap=0.08),
slot)
mock_san_mac.assert_called_once_with('ab:ab:ab:ab:ab:ab')
mock_vnic_bld.assert_called_once_with(
self.drv.adapter, 79, slot_num=slot,
mac_addr=mock_san_mac.return_value, allowed_vlans='NONE',
allowed_macs='NONE')
mock_back_devs.assert_called_once_with(
mock_vnic_bld.return_value, ['port1', 'port2'],
redundancy=3, capacity=0.08, check_port_status=True,
sys_w=sys, max_capacity=None)
mock_pvm_uuid.assert_called_once_with(self.drv.instance)
mock_vnic_bld.return_value.create.assert_called_once_with(
parent_type=pvm_lpar.LPAR, parent_uuid=mock_pvm_uuid.return_value)
# No-op with new_vif=False
mock_san_mac.reset_mock()
mock_vnic_bld.reset_mock()
mock_back_devs.reset_mock()
mock_pvm_uuid.reset_mock()
self.assertIsNone(self.drv.plug(
FakeDirectVif('default'), slot, new_vif=False))
self.assertEqual(0, mock_san_mac.call_count)
self.assertEqual(0, mock_vnic_bld.call_count)
self.assertEqual(0, mock_back_devs.call_count)
self.assertEqual(0, mock_pvm_uuid.call_count)
@mock.patch('pypowervm.wrappers.iocard.VNIC.bld')
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid')
@mock.patch('pypowervm.tasks.sriov.set_vnic_back_devs')
@mock.patch('pypowervm.wrappers.managed_system.System.get')
def test_plug_max_capacity(self, mock_sysget, mock_back_devs,
mock_pvm_uuid, mock_vnic_bld):
slot = 10
sriov_adaps = [
mock.Mock(phys_ports=[
mock.Mock(loc_code='port1', label='default'),
mock.Mock(loc_code='port3', label='data1')]),
mock.Mock(phys_ports=[
mock.Mock(loc_code='port4', label='data2'),
mock.Mock(loc_code='port2', label='default')])]
sys = mock.Mock(asio_config=mock.Mock(sriov_adapters=sriov_adaps))
mock_sysget.return_value = [sys]
netapi.API = mock.Mock(return_value=FakeNetworkAPI('default'))
self.drv.plug(FakeDirectVifWithMaxCapacity('default',
cap=0.03, maxcap=0.75),
slot)
mock_back_devs.assert_called_once_with(
mock_vnic_bld.return_value, ['port1', 'port2'], redundancy=3,
capacity=0.03, max_capacity=0.75, check_port_status=True,
sys_w=sys)
# Test that without max capacity specified, it defaults to None
self.drv.plug(FakeDirectVifWithMaxCapacity('data1',
cap=0.5), slot)
mock_back_devs.assert_called_with(
mock_vnic_bld.return_value, ['port3'], redundancy=3,
capacity=0.5, max_capacity=None, check_port_status=True,
sys_w=sys)
@mock.patch('pypowervm.wrappers.iocard.VNIC.bld')
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid')
@mock.patch('pypowervm.wrappers.managed_system.System.get')
def test_plug_max_capacity_error(self, mock_sysget, mock_pvm_uuid,
mock_vnic_bld):
sriov_adaps = [
mock.Mock(phys_ports=[
mock.Mock(loc_code='port1', label='default'),
mock.Mock(loc_code='port3', label='data1')]),
mock.Mock(phys_ports=[
mock.Mock(loc_code='port4', label='data2'),
mock.Mock(loc_code='port2', label='default')])]
sys = mock.Mock(asio_config=mock.Mock(sriov_adapters=sriov_adaps))
mock_sysget.return_value = [sys]
netapi.API = mock.Mock(return_value=FakeNetworkAPI('default'))
# Ensure VirtualInterfacePlugException is raised if maximum capacity
# is greater than 100 percent
self.assertRaises(exception.VirtualInterfacePlugException,
self.drv.plug,
FakeDirectVifWithMaxCapacity('data1',
cap=0.5, maxcap=1.4), 1)
# Ensure VirtualInterfacePlugException is raised if maximum capacity
# is less than capacity
self.assertRaises(exception.VirtualInterfacePlugException,
self.drv.plug,
FakeDirectVifWithMaxCapacity('data1',
cap=0.5, maxcap=0.4), 1)
@mock.patch('pypowervm.wrappers.iocard.VNIC.search')
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('pypowervm.util.sanitize_mac_for_api', autospec=True)
def test_unplug(self, mock_san_mac, mock_pvm_uuid, mock_find):
fvif = FakeDirectVif('default')
self.assertEqual(mock_find.return_value, self.drv.unplug(fvif))
mock_find.assert_called_once_with(
self.drv.adapter, parent_type=pvm_lpar.LPAR,
parent_uuid=mock_pvm_uuid.return_value,
mac=mock_san_mac.return_value, one_result=True)
mock_pvm_uuid.assert_called_once_with(self.inst)
mock_san_mac.assert_called_once_with(fvif['address'])
mock_find.return_value.delete.assert_called_once_with()
# Not found
mock_find.reset_mock()
mock_pvm_uuid.reset_mock()
mock_san_mac.reset_mock()
mock_find.return_value = None
self.assertIsNone(self.drv.unplug(fvif))
mock_find.assert_called_once_with(
self.drv.adapter, parent_type=pvm_lpar.LPAR,
parent_uuid=mock_pvm_uuid.return_value,
mac=mock_san_mac.return_value, one_result=True)
mock_pvm_uuid.assert_called_once_with(self.inst)
mock_san_mac.assert_called_once_with(fvif['address'])
class FakeDirectVif(dict):
def __init__(self, physnet, pports=None, cap=None):
self._physnet = physnet
super(FakeDirectVif, self).__init__(
network={'id': 'net_id'},
address='ab:ab:ab:ab:ab:ab',
details={
'vlan': '79',
'physical_ports': [],
'redundancy': 3,
'capacity': cap},
profile={})
if pports is not None:
self['details']['physical_ports'] = pports
def get_physical_network(self):
return self._physnet
class FakeDirectVifWithMaxCapacity(FakeDirectVif):
def __init__(self, physnet, pports=None, cap=None, maxcap=None):
super(FakeDirectVifWithMaxCapacity, self).__init__(physnet,
pports=pports,
cap=cap)
self.get('details')['maxcapacity'] = maxcap
class TestVifSeaDriver(test.NoDBTestCase):
def setUp(self):
super(TestVifSeaDriver, self).setUp()
self.adpt = self.useFixture(pvm_fx.AdapterFx(
traits=pvm_fx.LocalPVMTraits)).adpt
self.inst = mock.MagicMock()
self.drv = vif.PvmSeaVifDriver(self.adpt, 'host_uuid', self.inst)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('pypowervm.tasks.cna.crt_cna', autospec=True)
def test_plug(self, mock_crt_cna, mock_pvm_uuid):
"""Tests that a VIF can be created."""
# Set up the mocks
fake_vif = {'network': {'meta': {'vlan': 5}},
'address': 'aabbccddeeff'}
fake_slot_num = 5
def validate_crt(adpt, host_uuid, lpar_uuid, vlan, mac_addr=None,
slot_num=None):
self.assertEqual('host_uuid', host_uuid)
self.assertEqual(5, vlan)
self.assertEqual('aabbccddeeff', mac_addr)
self.assertEqual(5, slot_num)
return pvm_net.CNA.bld(self.adpt, 5, host_uuid, slot_num=slot_num,
mac_addr=mac_addr)
mock_crt_cna.side_effect = validate_crt
# Invoke
resp = self.drv.plug(fake_vif, fake_slot_num)
# Validate (along with validate method above)
self.assertEqual(1, mock_crt_cna.call_count)
self.assertIsNotNone(resp)
self.assertIsInstance(resp, pvm_net.CNA)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('pypowervm.tasks.cna.crt_cna', autospec=True)
def test_plug_from_neutron(self, mock_crt_cna, mock_pvm_uuid):
"""Tests that a VIF can be created. Mocks Neutron net"""
# Set up the mocks to look like Neutron
fake_vif = {'details': {'vlan': 5}, 'network': {'meta': {}},
'address': 'aabbccddeeff'}
fake_slot_num = 5
def validate_crt(adpt, host_uuid, lpar_uuid, vlan, mac_addr=None,
slot_num=None):
self.assertEqual('host_uuid', host_uuid)
self.assertEqual(5, vlan)
self.assertEqual('aabbccddeeff', mac_addr)
self.assertEqual(5, slot_num)
return pvm_net.CNA.bld(self.adpt, 5, host_uuid, slot_num=slot_num,
mac_addr=mac_addr)
mock_crt_cna.side_effect = validate_crt
# Invoke
resp = self.drv.plug(fake_vif, fake_slot_num)
# Validate (along with validate method above)
self.assertEqual(1, mock_crt_cna.call_count)
self.assertIsNotNone(resp)
self.assertIsInstance(resp, pvm_net.CNA)
def test_plug_existing_vif(self):
"""Tests that a VIF need not be created."""
# Set up the mocks
fake_vif = {'network': {'meta': {'vlan': 5}},
'address': 'aabbccddeeff'}
fake_slot_num = 5
# Invoke
resp = self.drv.plug(fake_vif, fake_slot_num, new_vif=False)
self.assertIsNone(resp)
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas', autospec=True)
def test_unplug_vifs(self, mock_vm_get):
"""Tests that a delete of the vif can be done."""
# Mock up the CNA response. Two of the addresses unplugged below
# exist on the VM; the third does not.
cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11'), cna('AABBCCDDEE22')]
mock_vm_get.return_value = cnas
# Run the method. AABBCCDDEE11 won't be unplugged (no unplug is
# invoked for it below) and the last unplug is a no-op because that
# address is not on the VM.
self.drv.unplug({'address': 'aa:bb:cc:dd:ee:ff'})
self.drv.unplug({'address': 'aa:bb:cc:dd:ee:22'})
self.drv.unplug({'address': 'aa:bb:cc:dd:ee:33'})
# Delete should have been called exactly once per matching CNA. The
# second CNA didn't have a matching mac, so it was skipped.
self.assertEqual(1, cnas[0].delete.call_count)
self.assertEqual(0, cnas[1].delete.call_count)
self.assertEqual(1, cnas[2].delete.call_count)
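# A sketch of the matching logic this test relies on (hypothetical name;
# norm_mac and get_cnas are the helpers tested later in this module):
def _sketch_sea_unplug(drv, vif):
    from nova_powervm.virt.powervm import vm
    mac = vm.norm_mac(vif['address'])
    for cna_w in vm.get_cnas(drv.adapter, drv.instance):
        # Delete only the adapter whose normalized MAC matches.
        if vm.norm_mac(cna_w.mac) == mac:
            cna_w.delete()
            return cna_w
    # No match is a quiet no-op, like the third unplug above.
    return None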
class TestVifOvsDriver(test.NoDBTestCase):
def setUp(self):
super(TestVifOvsDriver, self).setUp()
self.adpt = self.useFixture(pvm_fx.AdapterFx(
traits=pvm_fx.LocalPVMTraits)).adpt
self.inst = mock.MagicMock(uuid='inst_uuid')
self.drv = vif.PvmOvsVifDriver(self.adpt, 'host_uuid', self.inst)
@mock.patch('nova_powervm.virt.powervm.vif._get_trunk_dev_name',
autospec=True)
@mock.patch('pypowervm.tasks.cna.crt_p2p_cna', autospec=True)
@mock.patch('pypowervm.tasks.partition.get_mgmt_partition', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_plug(self, mock_pvm_uuid, mock_mgmt_lpar, mock_p2p_cna,
mock_trunk_dev_name):
# Mock the data
mock_pvm_uuid.return_value = 'lpar_uuid'
mock_mgmt_lpar.return_value = mock.Mock(uuid='mgmt_uuid')
mock_trunk_dev_name.return_value = 'device'
cna_w, trunk_wraps = mock.MagicMock(), [mock.MagicMock()]
mock_p2p_cna.return_value = cna_w, trunk_wraps
# Run the plug
net_model = model.Model({'bridge': 'br-int', 'meta': {'mtu': 1450}})
vif = model.VIF(address='aa:bb:cc:dd:ee:ff', id='vif_id',
devname='tap-dev', network=net_model)
slot_num = 5
self.drv.plug(vif, slot_num)
# Validate the calls
ovs_ext_ids = ('iface-id=vif_id,iface-status=active,'
'attached-mac=aa:bb:cc:dd:ee:ff,vm-uuid=inst_uuid')
mock_p2p_cna.assert_called_once_with(
self.adpt, 'host_uuid', 'lpar_uuid', ['mgmt_uuid'],
'NovaLinkVEABridge', crt_vswitch=True,
mac_addr='aa:bb:cc:dd:ee:ff', slot_num=slot_num, dev_name='device',
ovs_bridge='br-int', ovs_ext_ids=ovs_ext_ids, configured_mtu=1450)
@mock.patch('nova_powervm.virt.powervm.vif._get_trunk_dev_name')
@mock.patch('pypowervm.tasks.partition.get_mgmt_partition', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid')
@mock.patch('nova_powervm.virt.powervm.vif.PvmOvsVifDriver.'
'_find_cna_for_vif')
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas')
@mock.patch('pypowervm.tasks.cna.find_trunks', autospec=True)
def test_plug_existing_vif(self, mock_find_trunks, mock_get_cnas,
mock_find_cna, mock_pvm_uuid, mock_mgmt_lpar,
mock_trunk_dev_name):
# Mock the data
t1, t2 = mock.MagicMock(), mock.MagicMock()
mock_find_trunks.return_value = [t1, t2]
mock_cna = mock.Mock()
mock_get_cnas.return_value = [mock_cna, mock.Mock()]
mock_find_cna.return_value = mock_cna
mock_pvm_uuid.return_value = 'lpar_uuid'
mock_mgmt_lpar.return_value = mock.Mock(uuid='mgmt_uuid')
mock_trunk_dev_name.return_value = 'device'
self.inst = mock.MagicMock(uuid='c2e7ff9f-b9b6-46fa-8716-93bbb795b8b4')
self.drv = vif.PvmOvsVifDriver(self.adpt, 'host_uuid', self.inst)
# Run the plug
network_model = model.Model({'bridge': 'br0', 'meta': {'mtu': 1500}})
mock_vif = model.VIF(address='aa:bb:cc:dd:ee:ff', id='vif_id',
network=network_model)
slot_num = 5
resp = self.drv.plug(mock_vif, slot_num, new_vif=False)
self.assertIsNone(resp)
# Validate that trunk.update was invoked for all trunks of the vif's CNA
self.assertTrue(t1.update.called)
self.assertTrue(t2.update.called)
@mock.patch('pypowervm.tasks.cna.find_trunks', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vif._get_trunk_dev_name')
@mock.patch('nova_powervm.virt.powervm.vif.PvmOvsVifDriver.'
'_find_cna_for_vif')
@mock.patch('nova_powervm.virt.powervm.vm.get_cnas')
def test_unplug(self, mock_get_cnas, mock_find_cna, mock_trunk_dev_name,
mock_find_trunks):
# Set up the mocks
mock_cna = mock.Mock()
mock_get_cnas.return_value = [mock_cna, mock.Mock()]
mock_find_cna.return_value = mock_cna
t1, t2 = mock.MagicMock(), mock.MagicMock()
mock_find_trunks.return_value = [t1, t2]
mock_trunk_dev_name.return_value = 'fake_dev'
# Call the unplug
mock_vif = {'address': 'aa:bb:cc:dd:ee:ff',
'network': {'bridge': 'br-int'}}
self.drv.unplug(mock_vif)
# The trunks and the cna should have been deleted
self.assertTrue(t1.delete.called)
self.assertTrue(t2.delete.called)
self.assertTrue(mock_cna.delete.called)
@mock.patch('pypowervm.tasks.cna.find_trunks', autospec=True)
@mock.patch('pypowervm.wrappers.network.CNA', autospec=True)
@mock.patch('pypowervm.util.sanitize_mac_for_api', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
def test_pre_live_migrate_at_source(self, mock_pvm_uuid, mock_sanitize,
mock_cna, mock_trunk_find):
# Set up the mocks
vif = {'address': 'aa:bb:cc:dd:ee:ff'}
mock_sanitize.return_value = 'AABBCCDDEEFF'
mock_trunk_find.return_value = 'trunk'
mock_pvm_uuid.return_value = 'pvm_uuid'
resp = self.drv.pre_live_migrate_at_source(vif)
self.assertEqual(resp, 'trunk')
# Make sure the APIs were called correctly
mock_sanitize.assert_called_once_with(vif['address'])
mock_cna.search.assert_called_once_with(
self.adpt, parent_type=pvm_lpar.LPAR.schema_type,
parent_uuid='pvm_uuid', one_result=True, mac='AABBCCDDEEFF')
mock_trunk_find.assert_called_once_with(self.adpt, mock.ANY)
@mock.patch('pypowervm.tasks.cna.crt_trunk_with_free_vlan', autospec=True)
@mock.patch('pypowervm.tasks.cna.find_orphaned_trunks', autospec=True)
@mock.patch('pypowervm.tasks.partition.get_mgmt_partition', autospec=True)
def test_pre_live_migrate_at_destination(
self, mock_part_get, mock_find_trunks, mock_trunk_crt):
# Mock the vif
net_model = model.Model({'bridge': 'br-int', 'meta': {'mtu': 1450}})
vif = model.VIF(address='aa:bb:cc:dd:ee:ff', id='vif_id',
devname='tap-dev', network=net_model)
# Mock out the management partition
mock_mgmt_wrap = mock.MagicMock()
mock_mgmt_wrap.uuid = 'mgmt_uuid'
mock_part_get.return_value = mock_mgmt_wrap
mock_trunk_crt.return_value = [mock.Mock(pvid=2)]
mock_orphan_wrap = mock.MagicMock(mac='aabbccddeeff')
mock_find_trunks.return_value = [mock_orphan_wrap]
# Invoke and test the basic response
vea_vlan_mappings = {}
self.drv.pre_live_migrate_at_destination(vif, vea_vlan_mappings)
self.assertEqual(vea_vlan_mappings['aa:bb:cc:dd:ee:ff'], 2)
# Now validate it called the things it needed to
ovs_ext_ids = ('iface-id=vif_id,iface-status=active,'
'attached-mac=aa:bb:cc:dd:ee:ff,vm-uuid=inst_uuid')
mock_trunk_crt.assert_called_once_with(
self.adpt, 'host_uuid', ['mgmt_uuid'],
CONF.powervm.pvm_vswitch_for_novalink_io, dev_name='tap-dev',
ovs_bridge='br-int', ovs_ext_ids=ovs_ext_ids,
configured_mtu=1450)
mock_find_trunks.assert_called_once_with(
self.adpt, CONF.powervm.pvm_vswitch_for_novalink_io)
mock_orphan_wrap.delete.assert_called_once_with()
@mock.patch('pypowervm.wrappers.network.CNA', autospec=True)
@mock.patch('pypowervm.tasks.partition.get_mgmt_partition', autospec=True)
@mock.patch('pypowervm.wrappers.network.VSwitch', autospec=True)
def test_rollback_live_migration_at_destination(
self, mock_vs, mock_get_part, mock_cna):
# All the fun mocking
mock_vs.search.return_value = mock.MagicMock(switch_id=5)
# Since this gets passed through the conductor, the VLANs switch to
# string format.
vea_vlan_mappings = {'aa:bb:cc:dd:ee:ff': '3',
'aa:bb:cc:dd:ee:ee': '4'}
vif = {'devname': 'tap-dev', 'address': 'aa:bb:cc:dd:ee:ee',
'network': {'bridge': 'br-int'}, 'id': 'vif_id'}
mock_vio = mock.MagicMock(schema_type='VIO', uuid='uuid')
mock_get_part.return_value = mock_vio
trunk1 = mock.Mock(pvid=2, vswitch_id=3, trunk_pri=1)
trunk2 = mock.Mock(pvid=3, vswitch_id=5, trunk_pri=1)
trunk3 = mock.Mock(pvid=4, vswitch_id=5, trunk_pri=None)
trunk4 = mock.Mock(pvid=4, vswitch_id=5, trunk_pri=1)
mock_cna.get.return_value = [trunk1, trunk2, trunk3, trunk4]
# Invoke
self.drv.rollback_live_migration_at_destination(vif, vea_vlan_mappings)
# Make sure the trunk was deleted
trunk4.delete.assert_called_once()
# Now make sure the calls were done correctly to actually produce a
# trunk adapter
mock_vs.search.assert_called_once_with(
self.drv.adapter, parent_type=pvm_ms.System, one_result=True,
name=CONF.powervm.pvm_vswitch_for_novalink_io)
mock_get_part.assert_called_once_with(self.drv.adapter)
mock_cna.get.assert_called_once_with(
self.drv.adapter, parent=mock_vio)
@mock.patch('nova_powervm.virt.powervm.vif.PvmOvsVifDriver.'
'_cleanup_orphan_adapters')
def test_post_live_migrate_at_source(self, mock_orphan_cleanup):
# Mock the vif
vif = {'devname': 'tap-dev', 'address': 'aa:bb:cc:dd:ee:ff',
'network': {'bridge': 'br-int'}, 'id': 'vif_id'}
# Invoke and test
self.drv.post_live_migrate_at_source(vif)
# Validate that the cleanup is called
mock_orphan_cleanup.assert_called_once_with(
vif, CONF.powervm.pvm_vswitch_for_novalink_io)

View File

@ -1,913 +0,0 @@
# Copyright 2014, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
import fixtures
import logging
import mock
from nova.compute import power_state
from nova.compute import task_states
from nova import exception
from nova import objects
from nova import test
from nova.virt import event
from pypowervm import exceptions as pvm_exc
from pypowervm.helpers import log_helper as pvm_log
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.tests.test_utils import pvmhttp
from pypowervm.utils import lpar_builder as lpar_bld
from pypowervm.wrappers import base_partition as pvm_bp
from pypowervm.wrappers import logical_partition as pvm_lpar
from nova_powervm.tests.virt import powervm
from nova_powervm.virt.powervm import exception as nvex
from nova_powervm.virt.powervm import vm
LPAR_HTTPRESP_FILE = "lpar.txt"
LPAR_MAPPING = (
{
'z3-9-5-126-127-00000001': '089ffb20-5d19-4a8c-bb80-13650627d985',
'z3-9-5-126-208-000001f0': '668b0882-c24a-4ae9-91c8-297e95e3fe29'
})
LOG = logging.getLogger(__name__)
logging.basicConfig()
class FakeAdapterResponse(object):
def __init__(self, status):
self.status = status
class TestVMBuilder(test.NoDBTestCase):
def setUp(self):
super(TestVMBuilder, self).setUp()
self.adpt = mock.MagicMock()
self.host_w = mock.MagicMock()
self.lpar_b = vm.VMBuilder(self.host_w, self.adpt)
self.san_lpar_name = self.useFixture(fixtures.MockPatch(
'pypowervm.util.sanitize_partition_name_for_api')).mock
self.san_lpar_name.side_effect = lambda name: name
def test_resize_attributes_maintained(self):
lpar_w = mock.MagicMock()
lpar_w.io_config.max_virtual_slots = 200
lpar_w.proc_config.shared_proc_cfg.pool_id = 56
lpar_w.avail_priority = 129
lpar_w.srr_enabled = False
lpar_w.proc_compat_mode = 'POWER7'
lpar_w.allow_perf_data_collection = True
vm_bldr = vm.VMBuilder(self.host_w, self.adpt, cur_lpar_w=lpar_w)
self.assertEqual(200, vm_bldr.stdz.max_slots)
self.assertEqual(56, vm_bldr.stdz.spp)
self.assertEqual(129, vm_bldr.stdz.avail_priority)
self.assertFalse(vm_bldr.stdz.srr)
self.assertEqual('POWER7', vm_bldr.stdz.proc_compat)
self.assertTrue(vm_bldr.stdz.enable_lpar_metric)
def test_max_vslots_is_the_greater(self):
lpar_w = mock.MagicMock()
lpar_w.io_config.max_virtual_slots = 64
lpar_w.proc_config.shared_proc_cfg.pool_id = 56
lpar_w.avail_priority = 129
lpar_w.srr_enabled = False
lpar_w.proc_compat_mode = 'POWER7'
lpar_w.allow_perf_data_collection = True
slot_mgr = mock.MagicMock()
slot_mgr.build_map.get_max_vslots.return_value = 128
vm_bldr = vm.VMBuilder(
self.host_w, self.adpt, slot_mgr=slot_mgr, cur_lpar_w=lpar_w)
self.assertEqual(128, vm_bldr.stdz.max_slots)
def test_conf_values(self):
# Test driver CONF values are passed to the standardizer
self.flags(uncapped_proc_weight=75, proc_units_factor=.25,
group='powervm')
lpar_bldr = vm.VMBuilder(self.host_w, self.adpt)
self.assertEqual(75, lpar_bldr.stdz.uncapped_weight)
self.assertEqual(.25, lpar_bldr.stdz.proc_units_factor)
def test_format_flavor(self):
"""Perform tests against _format_flavor."""
instance = objects.Instance(**powervm.TEST_INSTANCE)
flavor = instance.get_flavor()
# LP 1561128, simplified remote restart is enabled by default
lpar_attrs = {'memory': 2048,
'name': 'instance-00000001',
'uuid': '49629a5c-f4c4-4721-9511-9725786ff2e5',
'vcpu': 1, 'srr_capability': True}
# Test dedicated procs
flavor.extra_specs = {'powervm:dedicated_proc': 'true'}
test_attrs = dict(lpar_attrs, dedicated_proc='true')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test dedicated procs, min/max vcpu and sharing mode
flavor.extra_specs = {'powervm:dedicated_proc': 'true',
'powervm:dedicated_sharing_mode':
'share_idle_procs_active',
'powervm:min_vcpu': '1',
'powervm:max_vcpu': '3'}
test_attrs = dict(lpar_attrs,
dedicated_proc='true',
sharing_mode='sre idle procs active',
min_vcpu='1', max_vcpu='3')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test shared proc sharing mode
flavor.extra_specs = {'powervm:uncapped': 'true'}
test_attrs = dict(lpar_attrs, sharing_mode='uncapped')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test availability priority
flavor.extra_specs = {'powervm:availability_priority': '150'}
test_attrs = dict(lpar_attrs, avail_priority='150')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test the Enable LPAR Metrics for true value
flavor.extra_specs = {'powervm:enable_lpar_metric': 'true'}
test_attrs = dict(lpar_attrs, enable_lpar_metric=True)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test the Enable LPAR Metrics for false value
flavor.extra_specs = {'powervm:enable_lpar_metric': 'false'}
test_attrs = dict(lpar_attrs, enable_lpar_metric=False)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test processor compatibility
flavor.extra_specs = {'powervm:processor_compatibility': 'POWER8'}
test_attrs = dict(lpar_attrs, processor_compatibility='POWER8')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
flavor.extra_specs = {'powervm:processor_compatibility': 'POWER6+'}
test_attrs = dict(
lpar_attrs,
processor_compatibility=pvm_bp.LPARCompat.POWER6_PLUS)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
flavor.extra_specs = {'powervm:processor_compatibility':
'POWER6+_Enhanced'}
test_attrs = dict(
lpar_attrs,
processor_compatibility=pvm_bp.LPARCompat.POWER6_PLUS_ENHANCED)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test min, max proc units
flavor.extra_specs = {'powervm:min_proc_units': '0.5',
'powervm:max_proc_units': '2.0'}
test_attrs = dict(lpar_attrs, min_proc_units='0.5',
max_proc_units='2.0')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test min, max mem
flavor.extra_specs = {'powervm:min_mem': '1024',
'powervm:max_mem': '4096'}
test_attrs = dict(lpar_attrs, min_mem='1024', max_mem='4096')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.san_lpar_name.assert_called_with(instance.name)
self.san_lpar_name.reset_mock()
# Test remote restart set to false
flavor.extra_specs = {'powervm:srr_capability': 'false'}
test_attrs = dict(lpar_attrs, srr_capability=False)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
# Test PPT set
flavor.extra_specs = {'powervm:ppt_ratio': '1:64'}
test_attrs = dict(lpar_attrs, ppt_ratio='1:64')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
# Test enforce affinity check set to true
flavor.extra_specs = {'powervm:enforce_affinity_check': 'true'}
test_attrs = dict(lpar_attrs, enforce_affinity_check=True)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
# Test enforce affinity check set to false
flavor.extra_specs = {'powervm:enforce_affinity_check': 'false'}
test_attrs = dict(lpar_attrs, enforce_affinity_check=False)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
# Test enforce affinity check set to invalid value
flavor.extra_specs = {'powervm:enforce_affinity_check': 'invalid'}
self.assertRaises(exception.ValidationError,
self.lpar_b._format_flavor, instance)
# Test secure boot set
flavor.extra_specs = {'powervm:secure_boot': '2'}
test_attrs = dict(lpar_attrs, secure_boot='2')
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
# Prep for unsupported host tests
self.lpar_b.host_w.get_capability.return_value = False
# Test PPT ratio not set when rebuilding to non-supported host
flavor.extra_specs = {'powervm:ppt_ratio': '1:4096'}
instance.task_state = task_states.REBUILD_SPAWNING
test_attrs = dict(lpar_attrs)
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.lpar_b.host_w.get_capability.assert_called_once_with(
'physical_page_table_ratio_capable')
# Test affinity check not set when rebuilding to non-supported host
self.lpar_b.host_w.get_capability.reset_mock()
flavor.extra_specs = {'powervm:enforce_affinity_check': 'true'}
self.assertEqual(self.lpar_b._format_flavor(instance), test_attrs)
self.lpar_b.host_w.get_capability.assert_called_once_with(
'affinity_check_capable')
@mock.patch('pypowervm.wrappers.shared_proc_pool.SharedProcPool.search')
def test_spp_pool_id(self, mock_search):
# The default pool is always zero. Validate the path.
self.assertEqual(0, self.lpar_b._spp_pool_id('DefaultPool'))
self.assertEqual(0, self.lpar_b._spp_pool_id(None))
# Further invocations require calls to the adapter. Build a minimal
# mocked SPP wrapper
spp = mock.MagicMock()
spp.id = 1
# Three invocations. First has too many elems. Second has none.
# Third is just right. :-)
mock_search.side_effect = [[spp, spp], [], [spp]]
self.assertRaises(exception.ValidationError, self.lpar_b._spp_pool_id,
'fake_name')
self.assertRaises(exception.ValidationError, self.lpar_b._spp_pool_id,
'fake_name')
self.assertEqual(1, self.lpar_b._spp_pool_id('fake_name'))
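# A sketch of the lookup verified above (hypothetical name; the search
# arguments are assumptions, since the test does not assert them):
def _sketch_spp_pool_id(adpt, host_uuid, pool_name):
    from nova import exception
    from pypowervm.wrappers import shared_proc_pool as pvm_spp
    if pool_name in (None, 'DefaultPool'):
        # The default shared processor pool is always ID 0.
        return 0
    pools = pvm_spp.SharedProcPool.search(adpt, name=pool_name,
                                          parent_uuid=host_uuid)
    if len(pools) != 1:
        # Zero matches and multiple matches are both invalid.
        raise exception.ValidationError(
            detail='Expected exactly one pool named %s' % pool_name)
    return pools[0].id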
def test_flavor_bool(self):
true_iterations = ['true', 't', 'yes', 'y', 'TrUe', 'YeS', 'Y', 'T']
for t in true_iterations:
self.assertTrue(self.lpar_b._flavor_bool(t, 'key'))
false_iterations = ['false', 'f', 'no', 'n', 'FaLSe', 'nO', 'F', 'N']
for f in false_iterations:
self.assertFalse(self.lpar_b._flavor_bool(f, 'key'))
raise_iterations = ['NotGood', '', 'invalid']
for r in raise_iterations:
self.assertRaises(exception.ValidationError,
self.lpar_b._flavor_bool, r, 'key')
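# A sketch matching the accepted and rejected spellings above
# (hypothetical name):
def _sketch_flavor_bool(value, key):
    from nova import exception
    lowered = value.lower()
    if lowered in ('true', 't', 'yes', 'y'):
        return True
    if lowered in ('false', 'f', 'no', 'n'):
        return False
    # Anything else is a bad flavor extra spec value.
    raise exception.ValidationError(
        detail='%s must be a boolean value' % key)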
class TestVM(test.NoDBTestCase):
def setUp(self):
super(TestVM, self).setUp()
self.apt = self.useFixture(pvm_fx.AdapterFx(
traits=pvm_fx.LocalPVMTraits)).adpt
self.apt.helpers = [pvm_log.log_helper]
self.san_lpar_name = self.useFixture(fixtures.MockPatch(
'pypowervm.util.sanitize_partition_name_for_api')).mock
self.san_lpar_name.side_effect = lambda name: name
lpar_http = pvmhttp.load_pvm_resp(LPAR_HTTPRESP_FILE, adapter=self.apt)
self.assertNotEqual(lpar_http, None,
"Could not load %s " %
LPAR_HTTPRESP_FILE)
self.resp = lpar_http.response
def test_translate_event(self):
# (expected event, pvm state, power_state)
tests = [
(event.EVENT_LIFECYCLE_STARTED, "running", power_state.SHUTDOWN),
(None, "running", power_state.RUNNING)
]
for t in tests:
self.assertEqual(t[0], vm.translate_event(t[1], t[2]))
@mock.patch.object(objects.Instance, 'get_by_uuid')
def test_get_instance(self, mock_get_uuid):
mock_get_uuid.return_value = '1111'
self.assertEqual('1111', vm.get_instance('ctx', 'ABC'))
mock_get_uuid.side_effect = [
exception.InstanceNotFound({'instance_id': 'fake_instance'}),
'222'
]
self.assertEqual('222', vm.get_instance('ctx', 'ABC'))
def test_uuid_set_high_bit(self):
self.assertEqual(
vm._uuid_set_high_bit('65e7a5f0-ceb2-427d-a6d1-e47f0eb38708'),
'e5e7a5f0-ceb2-427d-a6d1-e47f0eb38708')
self.assertEqual(
vm._uuid_set_high_bit('f6f79d3f-eef1-4009-bfd4-172ab7e6fff4'),
'f6f79d3f-eef1-4009-bfd4-172ab7e6fff4')
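# A sketch matching the two pairs above (hypothetical name): force the
# most significant bit of the first hex digit on; '6' becomes 'e', while
# 'f' already has the bit set and passes through unchanged.
def _sketch_uuid_set_high_bit(uuid_str):
    return '%x%s' % (int(uuid_str[0], 16) | 8, uuid_str[1:])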
def test_translate_vm_state(self):
self.assertEqual(power_state.RUNNING,
vm._translate_vm_state('running'))
self.assertEqual(power_state.RUNNING,
vm._translate_vm_state('migrating running'))
self.assertEqual(power_state.RUNNING,
vm._translate_vm_state('starting'))
self.assertEqual(power_state.RUNNING,
vm._translate_vm_state('open firmware'))
self.assertEqual(power_state.RUNNING,
vm._translate_vm_state('shutting down'))
self.assertEqual(power_state.RUNNING,
vm._translate_vm_state('suspending'))
self.assertEqual(power_state.SHUTDOWN,
vm._translate_vm_state('migrating not active'))
self.assertEqual(power_state.SHUTDOWN,
vm._translate_vm_state('not activated'))
self.assertEqual(power_state.NOSTATE,
vm._translate_vm_state('unknown'))
self.assertEqual(power_state.NOSTATE,
vm._translate_vm_state('hardware discovery'))
self.assertEqual(power_state.NOSTATE,
vm._translate_vm_state('not available'))
self.assertEqual(power_state.SUSPENDED,
vm._translate_vm_state('resuming'))
self.assertEqual(power_state.SUSPENDED,
vm._translate_vm_state('suspended'))
self.assertEqual(power_state.CRASHED,
vm._translate_vm_state('error'))
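# The state mapping exercised above, written out as data (a sketch; the
# real function may normalize its input or use pypowervm constants):
_SKETCH_PVM_TO_POWER_STATE = {
    'running': power_state.RUNNING,
    'migrating running': power_state.RUNNING,
    'starting': power_state.RUNNING,
    'open firmware': power_state.RUNNING,
    'shutting down': power_state.RUNNING,
    'suspending': power_state.RUNNING,
    'migrating not active': power_state.SHUTDOWN,
    'not activated': power_state.SHUTDOWN,
    'unknown': power_state.NOSTATE,
    'hardware discovery': power_state.NOSTATE,
    'not available': power_state.NOSTATE,
    'resuming': power_state.SUSPENDED,
    'suspended': power_state.SUSPENDED,
    'error': power_state.CRASHED,
}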
def test_get_lpars(self):
self.apt.read.return_value = self.resp
lpars = vm.get_lpars(self.apt)
# One of the LPARs is a management partition, so one less than the
# total length should be returned.
self.assertEqual(len(self.resp.feed.entries) - 1, len(lpars))
exc = pvm_exc.Error('Not found', response=FakeAdapterResponse(404))
self.apt.read.side_effect = exc
self.assertRaises(pvm_exc.Error, vm.get_lpars, self.apt)
def test_get_lpar_names(self):
self.apt.read.return_value = self.resp
lpar_list = vm.get_lpar_names(self.apt)
# Check the first one in the feed and the length of the feed
self.assertEqual(lpar_list[0], 'z3-9-5-126-208-000001f0')
self.assertEqual(len(lpar_list), 20)
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('pypowervm.tasks.vterm.close_vterm', autospec=True)
def test_dlt_lpar(self, mock_vterm, mock_pvm_uuid):
"""Performs a delete LPAR test."""
mock_pvm_uuid.return_value = 'pvm_uuid'
vm.delete_lpar(self.apt, 'inst')
mock_pvm_uuid.assert_called_once_with('inst')
mock_vterm.assert_called_once_with(self.apt, 'pvm_uuid')
self.apt.delete.assert_called_once_with('LogicalPartition',
root_id='pvm_uuid')
# Test Failure Path
# build a mock response body with the expected HSCL msg
resp = mock.Mock(body='error msg: HSCL151B more text')
self.apt.delete.side_effect = pvm_exc.Error(
'Mock Error Message', response=resp)
# Reset counters
mock_pvm_uuid.reset_mock()
self.apt.reset_mock()
mock_vterm.reset_mock()
self.assertRaises(pvm_exc.Error,
vm.delete_lpar, self.apt, 'inst')
mock_pvm_uuid.assert_called_once_with('inst')
mock_vterm.assert_called_once_with(self.apt, 'pvm_uuid')
self.apt.delete.assert_called_once_with('LogicalPartition',
root_id='pvm_uuid')
# Test HttpNotFound - exception not raised
mock_pvm_uuid.reset_mock()
self.apt.reset_mock()
mock_vterm.reset_mock()
resp.status = 404
self.apt.delete.side_effect = pvm_exc.HttpNotFound(resp=resp)
vm.delete_lpar(self.apt, 'inst')
mock_pvm_uuid.assert_called_once_with('inst')
mock_vterm.assert_called_once_with(self.apt, 'pvm_uuid')
self.apt.delete.assert_called_once_with('LogicalPartition',
root_id='pvm_uuid')
# Test Other HttpError
mock_pvm_uuid.reset_mock()
self.apt.reset_mock()
mock_vterm.reset_mock()
resp.status = 111
self.apt.delete.side_effect = pvm_exc.HttpError(resp=resp)
self.assertRaises(pvm_exc.HttpError, vm.delete_lpar, self.apt, 'inst')
mock_pvm_uuid.assert_called_once_with('inst')
mock_vterm.assert_called_once_with(self.apt, 'pvm_uuid')
self.apt.delete.assert_called_once_with('LogicalPartition',
root_id='pvm_uuid')
# Test HttpNotFound closing vterm
mock_pvm_uuid.reset_mock()
self.apt.reset_mock()
mock_vterm.reset_mock()
resp.status = 404
mock_vterm.side_effect = pvm_exc.HttpNotFound(resp=resp)
vm.delete_lpar(self.apt, 'inst')
mock_pvm_uuid.assert_called_once_with('inst')
mock_vterm.assert_called_once_with(self.apt, 'pvm_uuid')
self.apt.delete.assert_not_called()
# Test Other HttpError closing vterm
mock_pvm_uuid.reset_mock()
self.apt.reset_mock()
mock_vterm.reset_mock()
resp.status = 111
mock_vterm.side_effect = pvm_exc.HttpError(resp=resp)
self.assertRaises(pvm_exc.HttpError, vm.delete_lpar, self.apt, 'inst')
mock_pvm_uuid.assert_called_once_with('inst')
mock_vterm.assert_called_once_with(self.apt, 'pvm_uuid')
self.apt.delete.assert_not_called()
@mock.patch('nova_powervm.virt.powervm.vm.VMBuilder._add_IBMi_attrs',
autospec=True)
@mock.patch('pypowervm.utils.lpar_builder.DefaultStandardize',
autospec=True)
@mock.patch('pypowervm.utils.lpar_builder.LPARBuilder.build',
autospec=True)
@mock.patch('pypowervm.utils.validation.LPARWrapperValidator.validate_all',
autospec=True)
def test_crt_lpar(self, mock_vld_all, mock_bld, mock_stdz, mock_ibmi):
instance = objects.Instance(**powervm.TEST_INSTANCE)
flavor = instance.get_flavor()
flavor.extra_specs = {'powervm:dedicated_proc': 'true'}
host_wrapper = mock.Mock()
lparw = pvm_lpar.LPAR.wrap(self.resp.feed.entries[0])
mock_bld.return_value = lparw
self.apt.create.return_value = lparw.entry
vm.create_lpar(self.apt, host_wrapper, instance, nvram='data')
self.apt.create.assert_called_once_with(
lparw, host_wrapper.schema_type, child_type='LogicalPartition',
root_id=host_wrapper.uuid, service='uom', timeout=-1)
mock_stdz.assert_called_once_with(host_wrapper, uncapped_weight=64,
proc_units_factor=0.1)
self.assertEqual(lparw.nvram, 'data')
self.assertTrue(mock_vld_all.called)
# Test srr and slot_mgr
self.apt.reset_mock()
mock_vld_all.reset_mock()
mock_stdz.reset_mock()
flavor.extra_specs = {'powervm:srr_capability': 'true'}
self.apt.create.return_value = lparw.entry
mock_slot_mgr = mock.Mock(build_map=mock.Mock(
get_max_vslots=mock.Mock(return_value=123)))
vm.create_lpar(self.apt, host_wrapper, instance,
slot_mgr=mock_slot_mgr)
self.assertTrue(self.apt.create.called)
self.assertTrue(mock_vld_all.called)
self.assertTrue(lparw.srr_enabled)
mock_stdz.assert_called_once_with(host_wrapper, uncapped_weight=64,
proc_units_factor=0.1, max_slots=123)
# The save is called with the LPAR's actual value, which in this mock
# setup comes from lparw
mock_slot_mgr.register_max_vslots.assert_called_with(
lparw.io_config.max_virtual_slots)
# Verify LPAR creation with an invalid name specification
mock_bld.side_effect = lpar_bld.LPARBuilderException("Invalid Name")
host_wrapper = mock.Mock()
self.assertRaises(exception.BuildAbortException, vm.create_lpar,
self.apt, host_wrapper, instance)
resp = mock.Mock(status=202, method='fake', path='/dev/',
reason='Failure')
mock_bld.side_effect = pvm_exc.HttpError(resp)
try:
vm.create_lpar(self.apt, host_wrapper, instance)
except nvex.PowerVMAPIFailed as e:
self.assertEqual(e.kwargs['inst_name'], instance.name)
self.assertEqual(e.kwargs['reason'], mock_bld.side_effect)
flavor.extra_specs = {'powervm:BADATTR': 'true'}
host_wrapper = mock.Mock()
self.assertRaises(exception.InvalidAttribute, vm.create_lpar,
self.apt, host_wrapper, instance)
@mock.patch('pypowervm.wrappers.logical_partition.LPAR.get')
def test_get_instance_wrapper(self, mock_get):
mock_get.side_effect = pvm_exc.HttpNotFound(resp=mock.Mock(status=404))
instance = objects.Instance(**powervm.TEST_INSTANCE)
self.assertRaises(exception.InstanceNotFound, vm.get_instance_wrapper,
self.apt, instance, 'lpar_uuid')
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.VMBuilder', autospec=True)
def test_update(self, mock_vmb, mock_get_inst):
instance = objects.Instance(**powervm.TEST_INSTANCE)
entry = mock.Mock()
name = "new_name"
entry.update.return_value = 'NewEntry'
bldr = mock_vmb.return_value
lpar_bldr = bldr.lpar_builder.return_value
new_entry = vm.update(self.apt, 'mock_host_wrap', instance,
entry=entry, name=name)
# Ensure the lpar was rebuilt
lpar_bldr.rebuild.assert_called_once_with(entry)
entry.update.assert_called_once_with()
self.assertEqual(name, entry.name)
self.assertEqual('NewEntry', new_entry)
self.san_lpar_name.assert_called_with(name)
@mock.patch('pypowervm.utils.transaction.entry_transaction', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
def test_rename(self, mock_get_inst, mock_entry_transaction):
instance = objects.Instance(**powervm.TEST_INSTANCE)
mock_entry_transaction.side_effect = lambda x: x
entry = mock.Mock()
entry.update.return_value = 'NewEntry'
new_entry = vm.rename(self.apt, instance, 'new_name', entry=entry)
self.assertEqual('new_name', entry.name)
entry.update.assert_called_once_with()
mock_entry_transaction.assert_called_once_with(mock.ANY)
self.assertEqual('NewEntry', new_entry)
self.san_lpar_name.assert_called_with('new_name')
self.san_lpar_name.reset_mock()
# Test optional entry parameter
entry.reset_mock()
mock_get_inst.return_value = entry
new_entry = vm.rename(self.apt, instance, 'new_name')
mock_get_inst.assert_called_once_with(self.apt, instance)
self.assertEqual('new_name', entry.name)
entry.update.assert_called_once_with()
self.assertEqual('NewEntry', new_entry)
self.san_lpar_name.assert_called_with('new_name')
def test_add_IBMi_attrs(self):
inst = mock.Mock()
# Non-ibmi distro
attrs = {}
inst.system_metadata = {'image_os_distro': 'rhel'}
bldr = vm.VMBuilder(mock.Mock(), mock.Mock())
bldr._add_IBMi_attrs(inst, attrs)
self.assertDictEqual(attrs, {})
inst.system_metadata = {}
bldr._add_IBMi_attrs(inst, attrs)
self.assertDictEqual(attrs, {})
# ibmi distro
inst.system_metadata = {'image_os_distro': 'ibmi'}
bldr._add_IBMi_attrs(inst, attrs)
self.assertDictEqual(attrs, {'env': 'OS400'})
@mock.patch('pypowervm.tasks.power.power_on', autospec=True)
@mock.patch('oslo_concurrency.lockutils.lock', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
def test_power_on(self, mock_wrap, mock_lock, mock_power_on):
instance = objects.Instance(**powervm.TEST_INSTANCE)
entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED)
mock_wrap.return_value = entry
self.assertTrue(vm.power_on(None, instance, opts='opts'))
mock_power_on.assert_called_once_with(entry, None, add_parms='opts')
mock_lock.assert_called_once_with('power_%s' % instance.uuid)
mock_power_on.reset_mock()
mock_lock.reset_mock()
stop_states = [
pvm_bp.LPARState.RUNNING, pvm_bp.LPARState.STARTING,
pvm_bp.LPARState.OPEN_FIRMWARE, pvm_bp.LPARState.SHUTTING_DOWN,
pvm_bp.LPARState.ERROR, pvm_bp.LPARState.RESUMING,
pvm_bp.LPARState.SUSPENDING]
for stop_state in stop_states:
entry.state = stop_state
self.assertFalse(vm.power_on(None, instance))
self.assertEqual(0, mock_power_on.call_count)
mock_lock.assert_called_once_with('power_%s' % instance.uuid)
mock_lock.reset_mock()
@mock.patch('pypowervm.tasks.power.PowerOp', autospec=True)
@mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True)
@mock.patch('oslo_concurrency.lockutils.lock', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
def test_power_off(self, mock_wrap, mock_lock, mock_power_off, mock_pop):
instance = objects.Instance(**powervm.TEST_INSTANCE)
entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED)
mock_wrap.return_value = entry
self.assertFalse(vm.power_off(None, instance))
self.assertEqual(0, mock_power_off.call_count)
self.assertEqual(0, mock_pop.stop.call_count)
mock_lock.assert_called_once_with('power_%s' % instance.uuid)
stop_states = [
pvm_bp.LPARState.RUNNING, pvm_bp.LPARState.STARTING,
pvm_bp.LPARState.OPEN_FIRMWARE, pvm_bp.LPARState.SHUTTING_DOWN,
pvm_bp.LPARState.ERROR, pvm_bp.LPARState.RESUMING,
pvm_bp.LPARState.SUSPENDING]
for stop_state in stop_states:
entry.state = stop_state
mock_power_off.reset_mock()
mock_pop.stop.reset_mock()
mock_lock.reset_mock()
self.assertTrue(vm.power_off(None, instance))
mock_power_off.assert_called_once_with(entry)
self.assertEqual(0, mock_pop.stop.call_count)
mock_lock.assert_called_once_with('power_%s' % instance.uuid)
mock_power_off.reset_mock()
mock_lock.reset_mock()
self.assertTrue(vm.power_off(
None, instance, force_immediate=True, timeout=5))
self.assertEqual(0, mock_power_off.call_count)
mock_pop.stop.assert_called_once_with(
entry, opts=mock.ANY, timeout=5)
self.assertEqual('PowerOff(immediate=true, operation=shutdown)',
str(mock_pop.stop.call_args[1]['opts']))
mock_lock.assert_called_once_with('power_%s' % instance.uuid)
@mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
def test_power_off_negative(self, mock_wrap, mock_power_off):
"""Negative tests."""
instance = objects.Instance(**powervm.TEST_INSTANCE)
mock_wrap.return_value = mock.Mock(state=pvm_bp.LPARState.RUNNING)
# Raise the expected pypowervm exception
mock_power_off.side_effect = pvm_exc.VMPowerOffFailure(
reason='Something bad.', lpar_nm='TheLPAR')
# We should get a valid Nova exception that the compute manager expects
self.assertRaises(exception.InstancePowerOffFailure,
vm.power_off, None, instance)
@mock.patch('oslo_concurrency.lockutils.lock', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
@mock.patch('pypowervm.tasks.power.power_on', autospec=True)
@mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True)
@mock.patch('pypowervm.tasks.power.PowerOp', autospec=True)
def test_reboot(self, mock_pop, mock_pwroff, mock_pwron, mock_giw,
mock_lock):
entry = mock.Mock()
inst = mock.Mock(uuid='uuid')
mock_giw.return_value = entry
# VM is in 'not activated' state
entry.state = pvm_bp.LPARState.NOT_ACTIVATED
vm.reboot('adapter', inst, True)
mock_pwron.assert_called_once_with(entry, None)
self.assertEqual(0, mock_pwroff.call_count)
self.assertEqual(0, mock_pop.stop.call_count)
mock_lock.assert_called_once_with('power_uuid')
mock_pwron.reset_mock()
mock_lock.reset_mock()
# VM is in an active state
entry.state = pvm_bp.LPARState.RUNNING
vm.reboot('adapter', inst, True)
self.assertEqual(0, mock_pwron.call_count)
self.assertEqual(0, mock_pwroff.call_count)
mock_pop.stop.assert_called_once_with(entry, opts=mock.ANY)
self.assertEqual(
'PowerOff(immediate=true, operation=shutdown, restart=true)',
str(mock_pop.stop.call_args[1]['opts']))
mock_lock.assert_called_once_with('power_uuid')
mock_pop.stop.reset_mock()
mock_lock.reset_mock()
# Same, but soft
vm.reboot('adapter', inst, False)
self.assertEqual(0, mock_pwron.call_count)
mock_pwroff.assert_called_once_with(entry, restart=True)
self.assertEqual(0, mock_pop.stop.call_count)
mock_lock.assert_called_once_with('power_uuid')
mock_pwroff.reset_mock()
mock_lock.reset_mock()
# Exception path
mock_pwroff.side_effect = Exception()
self.assertRaises(exception.InstanceRebootFailure, vm.reboot,
'adapter', inst, False)
self.assertEqual(0, mock_pwron.call_count)
mock_pwroff.assert_called_once_with(entry, restart=True)
self.assertEqual(0, mock_pop.stop.call_count)
mock_lock.assert_called_once_with('power_uuid')
def test_get_pvm_uuid(self):
nova_uuid = "dbbb48f1-2406-4019-98af-1c16d3df0204"
# Test with uuid string
self.assertEqual('5BBB48F1-2406-4019-98AF-1C16D3DF0204',
vm.get_pvm_uuid(nova_uuid))
mock_inst = mock.Mock(uuid=nova_uuid)
# Test with instance object
self.assertEqual('5BBB48F1-2406-4019-98AF-1C16D3DF0204',
vm.get_pvm_uuid(mock_inst))
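# A sketch matching the expected values above (hypothetical name): the
# PowerVM partition UUID is the Nova UUID upper-cased with the high bit
# of the first hex digit cleared, so 'd' becomes '5'.
def _sketch_get_pvm_uuid(instance_or_uuid):
    uuid_str = getattr(instance_or_uuid, 'uuid', instance_or_uuid)
    return ('%x%s' % (int(uuid_str[0], 16) & 7, uuid_str[1:])).upper()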
@mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_vm_qp', autospec=True)
def test_instance_exists(self, mock_getvmqp, mock_getuuid):
# Try the good case where it exists
mock_getvmqp.side_effect = 'fake_state'
mock_parms = (mock.Mock(), mock.Mock())
self.assertTrue(vm.instance_exists(*mock_parms))
# Test the scenario where it does not exist.
mock_getvmqp.side_effect = exception.InstanceNotFound(instance_id=123)
self.assertFalse(vm.instance_exists(*mock_parms))
def test_get_vm_qp(self):
def adapter_read(root_type, root_id=None, suffix_type=None,
suffix_parm=None, helpers=None):
json_str = (u'{"IsVirtualServiceAttentionLEDOn":"false","Migration'
u'State":"Not_Migrating","CurrentProcessingUnits":0.1,'
u'"ProgressState":null,"PartitionType":"AIX/Linux","Pa'
u'rtitionID":1,"AllocatedVirtualProcessors":1,"Partiti'
u'onState":"not activated","RemoteRestartState":"Inval'
u'id","OperatingSystemVersion":"Unknown","AssociatedMa'
u'nagedSystem":"https://9.1.2.3:12443/rest/api/uom/Man'
u'agedSystem/98498bed-c78a-3a4f-b90a-4b715418fcb6","RM'
u'CState":"inactive","PowerManagementMode":null,"Parti'
u'tionName":"lpar-1-06674231-lpar","HasDedicatedProces'
u'sors":"false","ResourceMonitoringIPAddress":null,"Re'
u'ferenceCode":"00000000","CurrentProcessors":null,"Cu'
u'rrentMemory":512,"SharingMode":"uncapped"}')
self.assertEqual('LogicalPartition', root_type)
self.assertEqual('lpar_uuid', root_id)
self.assertEqual('quick', suffix_type)
resp = mock.MagicMock()
if suffix_parm is None:
resp.body = json_str
elif suffix_parm == 'PartitionID':
resp.body = '1'
elif suffix_parm == 'CurrentProcessingUnits':
resp.body = '0.1'
elif suffix_parm == 'AssociatedManagedSystem':
# The double quotes are important
resp.body = ('"https://9.1.2.3:12443/rest/api/uom/ManagedSyste'
'm/98498bed-c78a-3a4f-b90a-4b715418fcb6"')
else:
self.fail('Unhandled quick property key %s' % suffix_parm)
return resp
def adpt_read_no_log(*args, **kwds):
helpers = kwds['helpers']
try:
helpers.index(pvm_log.log_helper)
except ValueError:
# Successful path since the logger shouldn't be there
return adapter_read(*args, **kwds)
self.fail('Log helper was found when it should not be')
ms_href = ('https://9.1.2.3:12443/rest/api/uom/ManagedSystem/98498bed-'
'c78a-3a4f-b90a-4b715418fcb6')
self.apt.read.side_effect = adapter_read
self.assertEqual(1, vm.get_vm_id(self.apt, 'lpar_uuid'))
self.assertEqual(ms_href, vm.get_vm_qp(self.apt, 'lpar_uuid',
'AssociatedManagedSystem'))
self.apt.read.side_effect = adpt_read_no_log
self.assertEqual(0.1, vm.get_vm_qp(self.apt, 'lpar_uuid',
'CurrentProcessingUnits',
log_errors=False))
qp_dict = vm.get_vm_qp(self.apt, 'lpar_uuid', log_errors=False)
self.assertEqual(ms_href, qp_dict['AssociatedManagedSystem'])
self.assertEqual(1, qp_dict['PartitionID'])
self.assertEqual(0.1, qp_dict['CurrentProcessingUnits'])
resp = mock.MagicMock()
resp.status = 404
self.apt.read.side_effect = pvm_exc.HttpNotFound(resp)
self.assertRaises(exception.InstanceNotFound, vm.get_vm_qp, self.apt,
'lpar_uuid', log_errors=False)
self.apt.read.side_effect = pvm_exc.Error("message", response=None)
self.assertRaises(pvm_exc.Error, vm.get_vm_qp, self.apt,
'lpar_uuid', log_errors=False)
resp.status = 500
self.apt.read.side_effect = pvm_exc.Error("message", response=resp)
self.assertRaises(pvm_exc.Error, vm.get_vm_qp, self.apt,
'lpar_uuid', log_errors=False)
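
    # get_vm_qp drives the PowerVM "quick properties" REST API: reading
    # LogicalPartition/{uuid}/quick returns the whole property dict as
    # JSON, while .../quick/{property} returns just that property's JSON
    # value (hence the significant double quotes above).  A minimal decode
    # sketch (assumed response handling, not the real implementation):
    @staticmethod
    def _decode_qp_sketch(resp_body):
        import json
        # Both forms are plain JSON: a dict for the full read, a bare
        # scalar or quoted string for a single property.
        return json.loads(resp_body)
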
def test_norm_mac(self):
EXPECTED = "12:34:56:78:90:ab"
self.assertEqual(EXPECTED, vm.norm_mac("12:34:56:78:90:ab"))
self.assertEqual(EXPECTED, vm.norm_mac("1234567890ab"))
self.assertEqual(EXPECTED, vm.norm_mac("12:34:56:78:90:AB"))
self.assertEqual(EXPECTED, vm.norm_mac("1234567890AB"))
@mock.patch('pypowervm.tasks.ibmi.update_ibmi_settings', autospec=True)
@mock.patch('nova_powervm.virt.powervm.vm.get_instance_wrapper',
autospec=True)
def test_update_ibmi_settings(self, mock_lparw, mock_ibmi):
instance = mock.MagicMock()
# Test update load source with vscsi boot
boot_type = 'vscsi'
vm.update_ibmi_settings(self.apt, instance, boot_type)
mock_ibmi.assert_called_once_with(self.apt, mock.ANY, 'vscsi')
mock_ibmi.reset_mock()
# Test update load source with npiv boot
boot_type = 'npiv'
vm.update_ibmi_settings(self.apt, instance, boot_type)
mock_ibmi.assert_called_once_with(self.apt, mock.ANY, 'npiv')

    @mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid')
@mock.patch('pypowervm.wrappers.network.CNA.search')
@mock.patch('pypowervm.wrappers.network.CNA.get')
def test_get_cnas(self, mock_get, mock_search, mock_uuid):
# No kwargs: get
self.assertEqual(mock_get.return_value, vm.get_cnas(self.apt, 'inst'))
mock_uuid.assert_called_once_with('inst')
mock_get.assert_called_once_with(self.apt, parent_type=pvm_lpar.LPAR,
parent_uuid=mock_uuid.return_value)
mock_search.assert_not_called()
# With kwargs: search
mock_get.reset_mock()
mock_uuid.reset_mock()
self.assertEqual(mock_search.return_value, vm.get_cnas(
self.apt, 'inst', one=2, three=4))
mock_uuid.assert_called_once_with('inst')
mock_search.assert_called_once_with(
self.apt, parent_type=pvm_lpar.LPAR,
parent_uuid=mock_uuid.return_value, one=2, three=4)
mock_get.assert_not_called()

    @mock.patch('nova_powervm.virt.powervm.vm.get_pvm_uuid')
@mock.patch('pypowervm.wrappers.iocard.VNIC.search')
@mock.patch('pypowervm.wrappers.iocard.VNIC.get')
def test_get_vnics(self, mock_get, mock_search, mock_uuid):
# No kwargs: get
self.assertEqual(mock_get.return_value, vm.get_vnics(self.apt, 'inst'))
mock_uuid.assert_called_once_with('inst')
mock_get.assert_called_once_with(self.apt, parent_type=pvm_lpar.LPAR,
parent_uuid=mock_uuid.return_value)
mock_search.assert_not_called()
# With kwargs: search
mock_get.reset_mock()
mock_uuid.reset_mock()
self.assertEqual(mock_search.return_value, vm.get_vnics(
self.apt, 'inst', one=2, three=4))
mock_uuid.assert_called_once_with('inst')
mock_search.assert_called_once_with(
self.apt, parent_type=pvm_lpar.LPAR,
parent_uuid=mock_uuid.return_value, one=2, three=4)
mock_get.assert_not_called()
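
    # get_cnas and get_vnics share one dispatch pattern, as the two tests
    # above show: with no extra kwargs they do a plain wrapper .get() under
    # the LPAR parent, and with kwargs they delegate to .search() so the
    # filtering can happen server-side.  A sketch of that pattern
    # (hypothetical helper, for illustration only):
    @staticmethod
    def _child_fetch_sketch(wrapper_cls, adapter, lpar_uuid, **search):
        if search:
            return wrapper_cls.search(adapter, parent_type=pvm_lpar.LPAR,
                                      parent_uuid=lpar_uuid, **search)
        return wrapper_cls.get(adapter, parent_type=pvm_lpar.LPAR,
                               parent_uuid=lpar_uuid)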

View File

@ -1,109 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import six

from nova import test

from nova_powervm.virt.powervm import volume
from nova_powervm.virt.powervm.volume import gpfs
from nova_powervm.virt.powervm.volume import iscsi
from nova_powervm.virt.powervm.volume import local
from nova_powervm.virt.powervm.volume import nfs
from nova_powervm.virt.powervm.volume import npiv
from nova_powervm.virt.powervm.volume import vscsi


class TestVolumeAdapter(test.NoDBTestCase):
def setUp(self):
super(TestVolumeAdapter, self).setUp()
# Enable passing through the can attach/detach checks
self.mock_get_inst_wrap_p = mock.patch('nova_powervm.virt.powervm.vm.'
'get_instance_wrapper')
self.mock_get_inst_wrap = self.mock_get_inst_wrap_p.start()
self.addCleanup(self.mock_get_inst_wrap_p.stop)
self.mock_inst_wrap = mock.MagicMock()
self.mock_inst_wrap.can_modify_io.return_value = (True, None)
self.mock_get_inst_wrap.return_value = self.mock_inst_wrap
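
    # can_modify_io() reports a (can_modify, reason) pair; the volume
    # adapters are expected to refuse attach/detach when the flag is False.
    # A minimal sketch of that guard (assumed shape, for illustration only
    # -- the real check and exception live in the adapters):
    def _io_guard_sketch(self):
        can_modify, reason = self.mock_inst_wrap.can_modify_io()
        if not can_modify:
            # Stand-in for the driver-specific attach/detach exception.
            raise RuntimeError(reason)
        return True

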
class TestInitMethods(test.NoDBTestCase):
    # Map of volume driver types to their adapter classes
volume_drivers = {
'iscsi': iscsi.IscsiVolumeAdapter,
'local': local.LocalVolumeAdapter,
'nfs': nfs.NFSVolumeAdapter,
'gpfs': gpfs.GPFSVolumeAdapter,
}

    def test_get_volume_class(self):
for vol_type, class_type in six.iteritems(self.volume_drivers):
self.assertEqual(class_type, volume.get_volume_class(vol_type))
# Try the fibre as vscsi
self.flags(fc_attach_strategy='vscsi', group='powervm')
self.assertEqual(vscsi.PVVscsiFCVolumeAdapter,
volume.get_volume_class('fibre_channel'))
# Try the fibre as npiv
self.flags(fc_attach_strategy='npiv', group='powervm')
self.assertEqual(npiv.NPIVVolumeAdapter,
volume.get_volume_class('fibre_channel'))
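
    # get_volume_class resolves most connection types straight from the
    # type-to-class map; only 'fibre_channel' is further switched on the
    # [powervm]/fc_attach_strategy option (vscsi vs. npiv).  A sketch of
    # that dispatch (hypothetical shape, for illustration only):
    def _get_volume_class_sketch(self, drv_type, fc_strategy):
        fc_map = {'vscsi': vscsi.PVVscsiFCVolumeAdapter,
                  'npiv': npiv.NPIVVolumeAdapter}
        if drv_type == 'fibre_channel':
            return fc_map[fc_strategy]
        return self.volume_drivers[drv_type]
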
def test_build_volume_driver(self):
for vol_type, class_type in six.iteritems(self.volume_drivers):
vdrv = volume.build_volume_driver(
mock.Mock(), "abc123", mock.Mock(uuid='abc1'),
{'driver_volume_type': vol_type})
self.assertIsInstance(vdrv, class_type)
# Try the fibre as vscsi
self.flags(fc_attach_strategy='vscsi', group='powervm')
vdrv = volume.build_volume_driver(
mock.Mock(), "abc123", mock.Mock(uuid='abc1'),
{'driver_volume_type': 'fibre_channel'})
self.assertIsInstance(vdrv, vscsi.PVVscsiFCVolumeAdapter)
# Try the fibre as npiv
self.flags(fc_attach_strategy='npiv', group='powervm')
vdrv = volume.build_volume_driver(
mock.Mock(), "abc123", mock.Mock(uuid='abc1'),
{'driver_volume_type': 'fibre_channel'})
self.assertIsInstance(vdrv, npiv.NPIVVolumeAdapter)

    def test_hostname_for_volume(self):
self.flags(host='test_host')
mock_instance = mock.Mock()
mock_instance.name = 'instance'
# Try the fibre as vscsi
self.flags(fc_attach_strategy='vscsi', group='powervm')
self.assertEqual("test_host",
volume.get_hostname_for_volume(mock_instance))
# Try the fibre as npiv
self.flags(fc_attach_strategy='npiv', group='powervm')
self.assertEqual("test_host_instance",
volume.get_hostname_for_volume(mock_instance))
# NPIV with long host name
self.flags(host='really_long_host_name_too_long')
self.assertEqual("really_long_host_nam_instance",
volume.get_hostname_for_volume(mock_instance))
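
    # For npiv the volume-facing host name is '<host>_<instance name>',
    # and the last assertion implies the host portion is capped at 20
    # characters ('really_long_host_name_too_long' truncates to
    # 'really_long_host_nam').  A sketch inferred from the test
    # (hypothetical helper, not the real implementation):
    @staticmethod
    def _npiv_hostname_sketch(host, inst_name):
        return '%s_%s' % (host[:20], inst_name)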

Some files were not shown because too many files have changed in this diff.