ironic-inspector 4.1.0 release

meta:version: 4.1.0
 meta:series: newton
 meta:release-type: release
 meta:announce: openstack-announce@lists.openstack.org
 meta:pypi: no
 meta:first: no
 meta:release:Author: Jim Rollenhagen <jim@jimrollenhagen.com>
 meta:release:Commit: Davanum Srinivas <davanum@gmail.com>
 meta:release:Change-Id: Ie2325b30ecb727e5279b5c072892e84b24bfbf3f
 -----BEGIN PGP SIGNATURE-----
 
 iEYEABECAAYFAlesgIsACgkQgNg6eWEDv1nfwQCePVoLS31dS3vnbAsVHnuxO7NO
 5fsAoNpZ8tl00N2yXttZGdnPkVpd34AH
 =Dj9N
 -----END PGP SIGNATURE-----

Merge tag '4.1.0' into debian/newton

ironic-inspector 4.1.0 release

  * New upstream release.
  * Fixed (build-)depends for this release.
  * Using OpenStack's Gerrit as VCS URLs.
  * Fix path of the file patched in fix-path-to-rootwrap.patch.

Change-Id: Ifb92de51cf5305e41e80f2dafe54152872235a57
Thomas Goirand 2016-09-20 20:01:42 +02:00
commit 0f9a244f03
100 changed files with 4523 additions and 1963 deletions


@ -58,6 +58,10 @@ interpreter of one of supported versions (currently 2.7 and 3.4), use
a db named 'openstack_citest' with user 'openstack_citest' and password
'openstack_citest' on localhost.
.. note::
Users of Fedora <= 23 will need to run "sudo dnf --releasever=24 update
python-virtualenv" to run unit tests.
To run the functional tests, use::
tox -e func
@ -82,39 +86,8 @@ components. There is a plugin for installing **ironic-inspector** on DevStack.
Example local.conf
------------------
Using IPA
~~~~~~~~~
.. literalinclude:: ../../devstack/example.local.conf
::
[[local|localrc]]
enable_service ironic ir-api ir-cond
disable_service n-net n-novnc
enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta
enable_service s-proxy s-object s-container s-account
disable_service heat h-api h-api-cfn h-api-cw h-eng
disable_service cinder c-sch c-api c-vol
enable_plugin ironic https://github.com/openstack/ironic
enable_plugin ironic-inspector https://github.com/openstack/ironic-inspector
IRONIC_BAREMETAL_BASIC_OPS=True
IRONIC_VM_COUNT=2
IRONIC_VM_SPECS_RAM=1024
IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA=True
IRONIC_BUILD_DEPLOY_RAMDISK=False
IRONIC_INSPECTOR_RAMDISK_ELEMENT=ironic-agent
IRONIC_INSPECTOR_BUILD_RAMDISK=False
VIRT_DRIVER=ironic
LOGDAYS=1
LOGFILE=~/logs/stack.sh.log
SCREEN_LOGDIR=~/logs/screen
DEFAULT_INSTANCE_TYPE=baremetal
TEMPEST_ALLOW_TENANT_ISOLATION=False
Notes
-----
@ -132,52 +105,6 @@ Notes
* This configuration disables Heat and Cinder, adjust it if you need these
services.
Using simple ramdisk
~~~~~~~~~~~~~~~~~~~~
.. note::
This ramdisk is deprecated and should not be used.
::
[[local|localrc]]
enable_service ironic ir-api ir-cond
disable_service n-net n-novnc
enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta
enable_service s-proxy s-object s-container s-account
disable_service heat h-api h-api-cfn h-api-cw h-eng
disable_service cinder c-sch c-api c-vol
enable_plugin ironic https://github.com/openstack/ironic
enable_plugin ironic-inspector https://github.com/openstack/ironic-inspector
IRONIC_BAREMETAL_BASIC_OPS=True
IRONIC_VM_COUNT=2
IRONIC_VM_SPECS_RAM=1024
IRONIC_DEPLOY_FLAVOR="fedora deploy-ironic"
IRONIC_INSPECTOR_RAMDISK_FLAVOR="fedora ironic-discoverd-ramdisk"
VIRT_DRIVER=ironic
LOGDAYS=1
LOGFILE=~/logs/stack.sh.log
SCREEN_LOGDIR=~/logs/screen
DEFAULT_INSTANCE_TYPE=baremetal
TEMPEST_ALLOW_TENANT_ISOLATION=False
Notes
-----
* Replace "fedora" with whatever you have
* You need at least 1 GiB of RAM for VMs; the default value of 512 MiB won't work
* Before restarting stack.sh::
rm -rf /opt/stack/ironic-inspector
Test
----


@ -3,6 +3,7 @@ output_file = example.conf
namespace = ironic_inspector
namespace = ironic_inspector.common.ironic
namespace = ironic_inspector.common.swift
namespace = ironic_inspector.plugins.capabilities
namespace = ironic_inspector.plugins.discovery
namespace = keystonemiddleware.auth_token
namespace = oslo.db
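This file drives *oslo-config-generator* to produce ``example.conf``; a
sketch of regenerating the sample config (the path to this generator config
is assumed, it is not shown in the diff)::

    oslo-config-generator --config-file tools/config-generator.conf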

debian/changelog

@ -1,3 +1,12 @@
ironic-inspector (4.1.0-1) experimental; urgency=medium
* New upstream release.
* Fixed (build-)depends for this release.
* Using OpenStack's Gerrit as VCS URLs.
* Fix path of the file patched in fix-path-to-rootwrap.patch.
-- Thomas Goirand <zigo@debian.org> Tue, 20 Sep 2016 20:01:51 +0200
ironic-inspector (3.2.0-2) unstable; urgency=medium
* Uploading to unstable.

debian/control

@ -10,67 +10,67 @@ Build-Depends: debhelper (>= 9),
python-pbr (>= 1.8),
python-setuptools,
python-sphinx,
Build-Depends-Indep: alembic (>= 0.8.0),
python-babel,
Build-Depends-Indep: alembic (>= 0.8.4),
python-babel (>= 2.3.4),
python-coverage,
python-eventlet (>= 0.18.4),
python-fixtures (>= 1.3.1),
python-fixtures (>= 3.0.0),
python-flask,
python-futurist (>= 0.11.0),
python-hacking (>= 0.10.0),
python-ironicclient (>= 1.1.0),
python-ironicclient (>= 1.6.0),
python-jsonpath-rw (>= 1.2.0),
python-jsonschema,
python-keystoneclient (>= 1:1.6.0),
python-keystoneauth1 (>= 2.10.0),
python-keystonemiddleware (>= 4.0.0),
python-mock (>= 1.3),
python-mock (>= 2.0),
python-netaddr (>= 0.7.12),
python-oslo.concurrency (>= 3.5.0),
python-oslo.config (>= 1:3.7.0),
python-oslo.concurrency (>= 3.8.0),
python-oslo.config (>= 1:3.14.0),
python-oslo.db (>= 4.1.0),
python-oslo.i18n (>= 2.1.0),
python-oslo.log (>= 1.14.0),
python-oslo.middleware (>= 3.0.0),
python-oslo.rootwrap (>= 2.0.0),
python-oslo.utils (>= 3.5.0),
python-oslo.rootwrap (>= 5.0.0),
python-oslo.utils (>= 3.16.0),
python-oslosphinx (>= 2.5.0),
python-oslotest (>= 1.10.0),
python-six (>= 1.9.0),
python-sqlalchemy (>= 1.0.10),
python-stevedore (>= 1.5.0),
python-stevedore (>= 1.16.0),
python-swiftclient (>= 1:2.2.0),
python-testresources,
python-testscenarios,
Standards-Version: 3.9.7
Vcs-Git: https://anonscm.debian.org/git/openstack/ironic-inspector.git
Vcs-Browser: https://anonscm.debian.org/cgit/openstack/ironic-inspector.git/
Vcs-Browser: https://git.openstack.org/cgit/openstack/deb-ironic-inspector
Vcs-Git: https://git.openstack.org/openstack/deb-ironic-inspector
Homepage: https://github.com/openstack/ironic-inspector
Package: python-ironic-inspector
Architecture: all
Depends: alembic (>= 0.8.0),
python-babel,
Depends: alembic (>= 0.8.4),
python-babel (>= 2.3.4),
python-eventlet (>= 0.18.4),
python-flask,
python-futurist (>= 0.11.0),
python-ironicclient (>= 1.1.0),
python-ironicclient (>= 1.6.0),
python-jsonpath-rw (>= 1.2.0),
python-jsonschema,
python-keystoneclient (>= 1:1.6.0),
python-keystoneauth1 (>= 2.10.0),
python-keystonemiddleware (>= 4.0.0),
python-netaddr (>= 0.7.12),
python-oslo.concurrency (>= 3.5.0),
python-oslo.config (>= 1:3.7.0),
python-oslo.concurrency (>= 3.8.0),
python-oslo.config (>= 1:3.14.0),
python-oslo.db (>= 4.1.0),
python-oslo.i18n (>= 2.1.0),
python-oslo.log (>= 1.14.0),
python-oslo.middleware (>= 3.0.0),
python-oslo.rootwrap (>= 2.0.0),
python-oslo.utils (>= 3.5.0),
python-oslo.rootwrap (>= 5.0.0),
python-oslo.utils (>= 3.16.0),
python-pbr (>= 1.8),
python-six (>= 1.9.0),
python-sqlalchemy (>= 1.0.10),
python-stevedore (>= 1.5.0),
python-stevedore (>= 1.16.0),
python-swiftclient (>= 1:2.2.0),
${misc:Depends},
${python:Depends},


@ -3,9 +3,11 @@ Author: Thomas Goirand <zigo@debian.org>
Forwarded: no
Last-Update: 2015-10-30
--- ironic-inspector-2.2.1.orig/ironic_inspector/firewall.py
+++ ironic-inspector-2.2.1/ironic_inspector/firewall.py
@@ -61,7 +61,7 @@ def init():
Index: deb-ironic-inspector/ironic_inspector/firewall.py
===================================================================
--- deb-ironic-inspector.orig/ironic_inspector/firewall.py
+++ deb-ironic-inspector/ironic_inspector/firewall.py
@@ -66,7 +66,7 @@ def init():
INTERFACE = CONF.firewall.dnsmasq_interface
CHAIN = CONF.firewall.firewall_chain
NEW_CHAIN = CHAIN + '_temp'
@ -14,8 +16,10 @@ Last-Update: 2015-10-30
CONF.rootwrap_config, 'iptables',)
# -w flag makes iptables wait for xtables lock, but it's not supported
--- ironic-inspector-2.2.1.orig/ironic_inspector/test/test_firewall.py
+++ ironic-inspector-2.2.1/ironic_inspector/test/test_firewall.py
Index: deb-ironic-inspector/ironic_inspector/test/unit/test_firewall.py
===================================================================
--- deb-ironic-inspector.orig/ironic_inspector/test/unit/test_firewall.py
+++ deb-ironic-inspector/ironic_inspector/test/unit/test_firewall.py
@@ -54,7 +54,7 @@ class TestFirewall(test_base.NodeTest):
for (args, call) in zip(init_expected_args, call_args_list):
self.assertEqual(args, call[0])


@ -0,0 +1,25 @@
[[local|localrc]]
disable_service n-net n-novnc
enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta
enable_service s-proxy s-object s-container s-account
disable_service heat h-api h-api-cfn h-api-cw h-eng
disable_service cinder c-sch c-api c-vol
enable_plugin ironic https://github.com/openstack/ironic
enable_plugin ironic-inspector https://github.com/openstack/ironic-inspector
IRONIC_BAREMETAL_BASIC_OPS=True
IRONIC_VM_COUNT=2
IRONIC_VM_SPECS_RAM=1024
IRONIC_BUILD_DEPLOY_RAMDISK=False
IRONIC_INSPECTOR_BUILD_RAMDISK=False
VIRT_DRIVER=ironic
LOGDAYS=1
LOGFILE=~/logs/stack.sh.log
SCREEN_LOGDIR=~/logs/screen
DEFAULT_INSTANCE_TYPE=baremetal
TEMPEST_ALLOW_TENANT_ISOLATION=False


@ -1,6 +1,15 @@
#!/bin/bash
set -eux
set -ex
# NOTE(vsaienko) this script is launched with sudo.
# Only exported variables are passed here.
# Source to make sure all vars are available.
STACK_ROOT="$(dirname "$0")/../../"
source "$STACK_ROOT/devstack/stackrc"
source "$STACK_ROOT/ironic/devstack/lib/ironic"
set -u
INTROSPECTION_SLEEP=${INTROSPECTION_SLEEP:-30}
export IRONIC_API_VERSION=${IRONIC_API_VERSION:-latest}
@ -44,9 +53,7 @@ disk_size=$(openstack flavor show baremetal -f value -c disk)
ephemeral_size=$(openstack flavor show baremetal -f value -c "OS-FLV-EXT-DATA:ephemeral")
expected_local_gb=$(($disk_size + $ephemeral_size))
# FIXME(dtantsur): switch to OSC as soon as `openstack endpoint list` actually
# works on devstack
ironic_url=$(keystone endpoint-get --service baremetal | tail -n +4 | head -n -1 | tr '|' ' ' | awk '{ print $2; }')
ironic_url=$(openstack endpoint show baremetal -f value -c publicurl)
if [ -z "$ironic_url" ]; then
echo "Cannot find Ironic URL"
exit 1
@ -66,7 +73,7 @@ function curl_ins {
curl -f -H "X-Auth-Token: $token" -X $1 $args "http://127.0.0.1:5050/$2"
}
nodes=$(ironic node-list | tail -n +4 | head -n -1 | tr '|' ' ' | awk '{ print $1; }')
nodes=$(openstack baremetal node list -f value -c UUID)
if [ -z "$nodes" ]; then
echo "No nodes found in Ironic"
exit 1
@ -74,10 +81,10 @@ fi
for uuid in $nodes; do
for p in cpus cpu_arch memory_mb local_gb; do
ironic node-update $uuid remove properties/$p > /dev/null || true
openstack baremetal node unset --property $p $uuid > /dev/null || true
done
if ! ironic node-show $uuid | grep provision_state | grep -iq manageable; then
ironic node-set-provision-state $uuid manage
if [[ "$(openstack baremetal node show $uuid -f value -c provision_state)" != "manageable" ]]; then
openstack baremetal node manage $uuid
fi
done
@ -85,7 +92,7 @@ openstack baremetal introspection rule purge
openstack baremetal introspection rule import "$rules_file"
for uuid in $nodes; do
ironic node-set-provision-state $uuid inspect
openstack baremetal node inspect $uuid
done
current_nodes=$nodes
@ -132,12 +139,12 @@ function wait_for_provision_state {
local max_attempts=${3:-6}
for attempt in $(seq 1 $max_attempts); do
local current=$(ironic node-show $uuid | grep ' provision_state ' | awk '{ print $4; }')
local current=$(openstack baremetal node show $uuid -f value -c provision_state)
if [ "$current" != "$expected" ]; then
if [ "$attempt" -eq "$max_attempts" ]; then
echo "Expected provision_state $expected, got $current:"
ironic node-show $uuid
openstack baremetal node show $uuid
exit 1
fi
else
@ -179,7 +186,7 @@ for uuid in $nodes; do
openstack service list | grep swift && test_swift
wait_for_provision_state $uuid manageable
ironic node-set-provision-state $uuid provide
openstack baremetal node provide $uuid
done
# Cleaning kicks in here, we have to wait until it finishes (~ 2 minutes)
@ -190,11 +197,11 @@ done
echo "Wait until nova becomes aware of bare metal instances"
for attempt in {1..24}; do
if [ $(nova hypervisor-stats | grep ' vcpus ' | head -n1 | awk '{ print $4; }') -ge $expected_cpus ]; then
if [ $(openstack hypervisor stats show -f value -c vcpus) -ge $expected_cpus ]; then
break
elif [ "$attempt" -eq 24 ]; then
echo "Timeout while waiting for nova hypervisor-stats, current:"
nova hypervisor-stats
openstack hypervisor stats show
exit 1
fi
sleep 5
@ -203,7 +210,8 @@ done
echo "Try nova boot for one instance"
image=$(openstack image list --property disk_format=ami -f value -c ID | head -n1)
net_id=$(neutron net-list | egrep "$PRIVATE_NETWORK_NAME"'[^-]' | awk '{ print $2 }')
net_id=$(openstack network show "$PRIVATE_NETWORK_NAME" -f value -c id)
# TODO(vsaienko) replace by openstack create with --wait flag
uuid=$(nova boot --flavor baremetal --nic net-id=$net_id --image $image testing | grep " id " | awk '{ print $4 }')
for attempt in {1..30}; do
@ -211,8 +219,8 @@ for attempt in {1..30}; do
if [ "$status" = "ERROR" ]; then
echo "Instance failed to boot"
# Some debug output
nova show $uuid
nova hypervisor-stats
openstack server show $uuid
openstack hypervisor stats show
exit 1
elif [ "$status" != "ACTIVE" ]; then
if [ "$attempt" -eq 30 ]; then
@ -225,6 +233,6 @@ for attempt in {1..30}; do
sleep 30
done
nova delete $uuid
openstack server delete $uuid
echo "Validation passed"


@ -18,8 +18,6 @@ IRONIC_INSPECTOR_URI="http://$IRONIC_INSPECTOR_HOST:$IRONIC_INSPECTOR_PORT"
IRONIC_INSPECTOR_BUILD_RAMDISK=$(trueorfalse False IRONIC_INSPECTOR_BUILD_RAMDISK)
IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe.vmlinuz}
IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz}
IRONIC_INSPECTOR_RAMDISK_ELEMENT=${IRONIC_INSPECTOR_RAMDISK_ELEMENT:-ironic-discoverd-ramdisk}
IRONIC_INSPECTOR_RAMDISK_FLAVOR=${IRONIC_INSPECTOR_RAMDISK_FLAVOR:-fedora $IRONIC_INSPECTOR_RAMDISK_ELEMENT}
IRONIC_INSPECTOR_COLLECTORS=${IRONIC_INSPECTOR_COLLECTORS:-default,logs}
IRONIC_INSPECTOR_RAMDISK_LOGDIR=${IRONIC_INSPECTOR_RAMDISK_LOGDIR:-$IRONIC_INSPECTOR_DATA_DIR/ramdisk-logs}
IRONIC_INSPECTOR_ALWAYS_STORE_RAMDISK_LOGS=${IRONIC_INSPECTOR_ALWAYS_STORE_RAMDISK_LOGS:-True}
@ -68,32 +66,25 @@ function install_inspector_client {
git_clone_by_name python-ironic-inspector-client
setup_dev_lib python-ironic-inspector-client
else
# TODO(dtantsur): switch to pip_install_gr
pip_install python-ironic-inspector-client
pip_install_gr python-ironic-inspector-client
fi
}
function start_inspector {
screen_it ironic-inspector \
"cd $IRONIC_INSPECTOR_DIR && $IRONIC_INSPECTOR_CMD"
run_process ironic-inspector "$IRONIC_INSPECTOR_CMD"
}
function start_inspector_dhcp {
screen_it ironic-inspector-dhcp \
run_process ironic-inspector-dhcp \
"sudo dnsmasq --conf-file=$IRONIC_INSPECTOR_DHCP_CONF_FILE"
}
function stop_inspector {
screen -S $SCREEN_NAME -p ironic-inspector -X kill
stop_process ironic-inspector
}
function stop_inspector_dhcp {
screen -S $SCREEN_NAME -p ironic-inspector-dhcp -X kill
}
function inspector_uses_ipa {
[[ $IRONIC_INSPECTOR_RAMDISK_ELEMENT = "ironic-agent" ]] || [[ $IRONIC_INSPECTOR_RAMDISK_FLAVOR =~ (ironic-agent$|^ironic-agent) ]] && return 0
return 1
stop_process ironic-inspector-dhcp
}
### Configuration
@ -104,35 +95,24 @@ function prepare_tftp {
IRONIC_INSPECTOR_INITRAMFS_PATH="$IRONIC_INSPECTOR_IMAGE_PATH.initramfs"
IRONIC_INSPECTOR_CALLBACK_URI="$IRONIC_INSPECTOR_INTERNAL_URI/v1/continue"
if inspector_uses_ipa; then
IRONIC_INSPECTOR_KERNEL_CMDLINE="ipa-inspection-callback-url=$IRONIC_INSPECTOR_CALLBACK_URI systemd.journald.forward_to_console=yes"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE vga=normal console=tty0 console=ttyS0"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-inspection-collectors=$IRONIC_INSPECTOR_COLLECTORS"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-debug=1"
if [[ "$IRONIC_INSPECTOR_BUILD_RAMDISK" == "True" ]]; then
if [ ! -e "$IRONIC_INSPECTOR_KERNEL_PATH" -o ! -e "$IRONIC_INSPECTOR_INITRAMFS_PATH" ]; then
build_ipa_coreos_ramdisk "$IRONIC_INSPECTOR_KERNEL_PATH" "$IRONIC_INSPECTOR_INITRAMFS_PATH"
fi
else
# download the agent image tarball
if [ ! -e "$IRONIC_INSPECTOR_KERNEL_PATH" -o ! -e "$IRONIC_INSPECTOR_INITRAMFS_PATH" ]; then
if [ -e "$IRONIC_DEPLOY_KERNEL_PATH" -a -e "$IRONIC_DEPLOY_RAMDISK_PATH" ]; then
cp $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_INSPECTOR_KERNEL_PATH
cp $IRONIC_DEPLOY_RAMDISK_PATH $IRONIC_INSPECTOR_INITRAMFS_PATH
else
wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_INSPECTOR_KERNEL_PATH
wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_INSPECTOR_INITRAMFS_PATH
fi
fi
IRONIC_INSPECTOR_KERNEL_CMDLINE="ipa-inspection-callback-url=$IRONIC_INSPECTOR_CALLBACK_URI systemd.journald.forward_to_console=yes"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE vga=normal console=tty0 console=ttyS0"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-inspection-collectors=$IRONIC_INSPECTOR_COLLECTORS"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-debug=1"
if [[ "$IRONIC_INSPECTOR_BUILD_RAMDISK" == "True" ]]; then
if [ ! -e "$IRONIC_INSPECTOR_KERNEL_PATH" -o ! -e "$IRONIC_INSPECTOR_INITRAMFS_PATH" ]; then
build_ipa_ramdisk "$IRONIC_INSPECTOR_KERNEL_PATH" "$IRONIC_INSPECTOR_INITRAMFS_PATH"
fi
else
IRONIC_INSPECTOR_KERNEL_CMDLINE="discoverd_callback_url=$IRONIC_INSPECTOR_CALLBACK_URI inspector_callback_url=$IRONIC_INSPECTOR_CALLBACK_URI"
# download the agent image tarball
if [ ! -e "$IRONIC_INSPECTOR_KERNEL_PATH" -o ! -e "$IRONIC_INSPECTOR_INITRAMFS_PATH" ]; then
if [[ $(type -P ramdisk-image-create) == "" ]]; then
pip_install diskimage_builder
if [ -e "$IRONIC_DEPLOY_KERNEL_PATH" -a -e "$IRONIC_DEPLOY_RAMDISK_PATH" ]; then
cp $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_INSPECTOR_KERNEL_PATH
cp $IRONIC_DEPLOY_RAMDISK_PATH $IRONIC_INSPECTOR_INITRAMFS_PATH
else
wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_INSPECTOR_KERNEL_PATH
wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_INSPECTOR_INITRAMFS_PATH
fi
ramdisk-image-create $IRONIC_INSPECTOR_RAMDISK_FLAVOR \
-o $IRONIC_INSPECTOR_IMAGE_PATH
fi
fi
@ -166,6 +146,18 @@ EOF
fi
}
function inspector_configure_auth_for {
inspector_iniset $1 auth_type password
inspector_iniset $1 auth_url "$KEYSTONE_SERVICE_URI"
inspector_iniset $1 username $IRONIC_INSPECTOR_ADMIN_USER
inspector_iniset $1 password $SERVICE_PASSWORD
inspector_iniset $1 project_name $SERVICE_PROJECT_NAME
inspector_iniset $1 user_domain_id default
inspector_iniset $1 project_domain_id default
inspector_iniset $1 cafile $SSL_BUNDLE_FILE
inspector_iniset $1 os_region $REGION_NAME
}
function configure_inspector {
mkdir_chown_stack "$IRONIC_INSPECTOR_CONF_DIR"
mkdir_chown_stack "$IRONIC_INSPECTOR_DATA_DIR"
@ -174,11 +166,7 @@ function configure_inspector {
cp "$IRONIC_INSPECTOR_DIR/example.conf" "$IRONIC_INSPECTOR_CONF_FILE"
inspector_iniset DEFAULT debug $IRONIC_INSPECTOR_DEBUG
inspector_iniset ironic os_auth_url "$KEYSTONE_SERVICE_URI"
inspector_iniset ironic os_username $IRONIC_INSPECTOR_ADMIN_USER
inspector_iniset ironic os_password $SERVICE_PASSWORD
inspector_iniset ironic os_tenant_name $SERVICE_PROJECT_NAME
inspector_configure_auth_for ironic
configure_auth_token_middleware $IRONIC_INSPECTOR_CONF_FILE $IRONIC_INSPECTOR_ADMIN_USER $IRONIC_INSPECTOR_AUTH_CACHE_DIR/api
inspector_iniset DEFAULT listen_port $IRONIC_INSPECTOR_PORT
@ -227,11 +215,7 @@ function configure_inspector {
}
function configure_inspector_swift {
inspector_iniset swift os_auth_url "$KEYSTONE_SERVICE_URI/v2.0"
inspector_iniset swift username $IRONIC_INSPECTOR_ADMIN_USER
inspector_iniset swift password $SERVICE_PASSWORD
inspector_iniset swift tenant_name $SERVICE_PROJECT_NAME
inspector_configure_auth_for swift
inspector_iniset processing store_data swift
}
@ -289,10 +273,11 @@ function create_ironic_inspector_cache_dir {
}
function cleanup_inspector {
rm -f $IRONIC_TFTPBOOT_DIR/pxelinux.cfg/default
rm -f $IRONIC_TFTPBOOT_DIR/ironic-inspector.*
if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
rm -f $IRONIC_HTTP_DIR/ironic-inspector.*
else
rm -f $IRONIC_TFTPBOOT_DIR/pxelinux.cfg/default
rm -f $IRONIC_TFTPBOOT_DIR/ironic-inspector.*
fi
sudo rm -f /etc/sudoers.d/ironic-inspector-rootwrap
sudo rm -rf $IRONIC_INSPECTOR_AUTH_CACHE_DIR

devstack/upgrade/resources.sh (new executable file)

@ -0,0 +1,77 @@
#!/bin/bash
#
# Copyright 2015 Hewlett-Packard Development Company, L.P.
# Copyright 2016 Intel Corporation
# Copyright 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
## based on Ironic/devstack/upgrade/resources.sh
set -o errexit
source $GRENADE_DIR/grenaderc
source $GRENADE_DIR/functions
source $TOP_DIR/openrc admin admin
# Inspector relies on a couple of Ironic variables
source $TARGET_RELEASE_DIR/ironic/devstack/lib/ironic
INSPECTOR_DEVSTACK_DIR=$(cd $(dirname "$0")/.. && pwd)
source $INSPECTOR_DEVSTACK_DIR/plugin.sh
set -o xtrace
function early_create {
    :
}

function create {
    :
}

function verify {
    :
}

function verify_noapi {
    :
}

function destroy {
    :
}

# Dispatcher
case $1 in
    "early_create")
        early_create
        ;;
    "create")
        create
        ;;
    "verify_noapi")
        verify_noapi
        ;;
    "verify")
        verify
        ;;
    "destroy")
        destroy
        ;;
    "force_destroy")
        set +o errexit
        destroy
        ;;
esac

devstack/upgrade/settings (new file)

@ -0,0 +1,14 @@
# Enable our tests; also enable ironic tempest plugin as we depend on it.
export TEMPEST_PLUGINS="/opt/stack/new/ironic /opt/stack/new/ironic-inspector"
# Enabling Inspector grenade plug-in
# Based on Ironic/devstack/grenade/settings
register_project_for_upgrade ironic-inspector
register_db_to_save ironic_inspector
# Inspector plugin and service registration
devstack_localrc base enable_plugin ironic-inspector https://github.com/openstack/ironic-inspector
devstack_localrc base enable_service ironic-inspector ironic-inspector-dhcp
devstack_localrc target enable_plugin ironic-inspector https://github.com/openstack/ironic-inspector
devstack_localrc target enable_service ironic-inspector ironic-inspector-dhcp

devstack/upgrade/shutdown.sh (new executable file)

@ -0,0 +1,29 @@
#!/bin/bash
#
# based on Ironic/devstack/upgrade/shutdown.sh
set -o errexit
source $GRENADE_DIR/grenaderc
source $GRENADE_DIR/functions
# We need base DevStack functions for this
source $BASE_DEVSTACK_DIR/functions
source $BASE_DEVSTACK_DIR/stackrc # needed for status directory
source $BASE_DEVSTACK_DIR/lib/tls
source $BASE_DEVSTACK_DIR/lib/apache
# Inspector relies on a couple of Ironic variables
source $TARGET_RELEASE_DIR/ironic/devstack/lib/ironic
# Keep track of the DevStack directory
INSPECTOR_DEVSTACK_DIR=$(cd $(dirname "$0")/.. && pwd)
source $INSPECTOR_DEVSTACK_DIR/plugin.sh
set -o xtrace
stop_inspector

if [[ "$IRONIC_INSPECTOR_MANAGE_FIREWALL" == "True" ]]; then
    stop_inspector_dhcp
fi

devstack/upgrade/upgrade.sh (new executable file)

@ -0,0 +1,136 @@
#!/usr/bin/env bash
## based on Ironic/devstack/upgrade/upgrade.sh
# ``upgrade-inspector``
echo "*********************************************************************"
echo "Begin $0"
echo "*********************************************************************"
# Clean up any resources that may be in use
cleanup() {
    set +o errexit

    echo "*********************************************************************"
    echo "ERROR: Abort $0"
    echo "*********************************************************************"

    # Kill ourselves to signal any calling process
    trap 2; kill -2 $$
}
trap cleanup SIGHUP SIGINT SIGTERM
# Keep track of the grenade directory
RUN_DIR=$(cd $(dirname "$0") && pwd)
# Source params
source $GRENADE_DIR/grenaderc
# Import common functions
source $GRENADE_DIR/functions
# This script exits on an error so that errors don't compound and you see
# only the first error that occurred.
set -o errexit
# Upgrade Inspector
# =================
# Duplicate some setup bits from target DevStack
source $TARGET_DEVSTACK_DIR/stackrc
source $TARGET_DEVSTACK_DIR/lib/tls
source $TARGET_DEVSTACK_DIR/lib/nova
source $TARGET_DEVSTACK_DIR/lib/neutron-legacy
source $TARGET_DEVSTACK_DIR/lib/apache
source $TARGET_DEVSTACK_DIR/lib/keystone
source $TARGET_DEVSTACK_DIR/lib/database
# Inspector relies on a couple of Ironic variables
source $TARGET_RELEASE_DIR/ironic/devstack/lib/ironic
# Keep track of the DevStack directory
INSPECTOR_DEVSTACK_DIR=$(cd $(dirname "$0")/.. && pwd)
INSPECTOR_PLUGIN=$INSPECTOR_DEVSTACK_DIR/plugin.sh
source $INSPECTOR_PLUGIN
# Print the commands being run so that we can see the command that triggers
# an error. It is also useful for following along as the install occurs.
set -o xtrace
initialize_database_backends
function is_nova_migration {
    # Determine whether we're "upgrading" from another compute driver
    _ironic_old_driver=$(source $BASE_DEVSTACK_DIR/functions; source $BASE_DEVSTACK_DIR/localrc; echo $VIRT_DRIVER)
    [ "$_ironic_old_driver" != "ironic" ]
}

# Duplicate all required devstack setup that is needed before starting
# Inspector during a sideways upgrade, where we are migrating from a
# devstack environment without Inspector.
function init_inspector {
    # We need to source credentials here but doing so in the gate will unset
    # HOST_IP.
    local tmp_host_ip=$HOST_IP
    source $TARGET_DEVSTACK_DIR/openrc admin admin
    HOST_IP=$tmp_host_ip

    IRONIC_BAREMETAL_BASIC_OPS="True"

    $TARGET_DEVSTACK_DIR/tools/install_prereqs.sh
    recreate_database ironic_inspector utf8
    $INSPECTOR_PLUGIN stack install
    $INSPECTOR_PLUGIN stack post-config
    $INSPECTOR_PLUGIN stack extra
}
function wait_for_keystone {
    if ! wait_for_service $SERVICE_TIMEOUT ${KEYSTONE_AUTH_URI}/v$IDENTITY_API_VERSION/; then
        die $LINENO "keystone did not start"
    fi
}

# Save current config files for posterity
if [[ -d $IRONIC_INSPECTOR_CONF_DIR ]] && [[ ! -d $SAVE_DIR/etc.inspector ]] ; then
    cp -pr $IRONIC_INSPECTOR_CONF_DIR $SAVE_DIR/etc.inspector
fi

stack_install_service ironic-inspector

if [[ "$IRONIC_INSPECTOR_MANAGE_FIREWALL" == "True" ]]; then
    stack_install_service ironic-inspector-dhcp
fi
# FIXME(milan): using Ironic's detection; not sure whether it's needed
# If we are sideways upgrading and migrating from a base deployed with
# VIRT_DRIVER=fake, we need to run Inspector install, config and init
# code from devstack.
if is_nova_migration ; then
    init_inspector
fi

sync_inspector_database

# calls upgrade inspector for specific release
upgrade_project ironic-inspector $RUN_DIR $BASE_DEVSTACK_BRANCH $TARGET_DEVSTACK_BRANCH

start_inspector

if [[ "$IRONIC_INSPECTOR_MANAGE_FIREWALL" == "True" ]]; then
    start_inspector_dhcp
fi

# Don't succeed unless the services come up
ensure_services_started ironic-inspector
ensure_logs_exist ironic-inspector

if [[ "$IRONIC_INSPECTOR_MANAGE_FIREWALL" == "True" ]]; then
    ensure_services_started dnsmasq
    ensure_logs_exist ironic-inspector-dhcp
fi
set +o xtrace
echo "*********************************************************************"
echo "SUCCESS: End $0"
echo "*********************************************************************"


@ -9,9 +9,9 @@ can be changed in configuration. Protocol is JSON over HTTP.
Start Introspection
~~~~~~~~~~~~~~~~~~~
``POST /v1/introspection/<UUID>`` initiate hardware introspection for node
``<UUID>``. All power management configuration for this node needs to be done
prior to calling the endpoint (except when :ref:`setting-ipmi-creds`).
``POST /v1/introspection/<Node ID>`` initiate hardware introspection for node
``<Node ID>``. All power management configuration for this node needs to be
done prior to calling the endpoint (except when :ref:`setting-ipmi-creds`).
Requires X-Auth-Token header with Keystone token for authentication.
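For example, against a local installation listening on the default port 5050
(a sketch; the token and node ID are placeholders)::

    curl -X POST -H "X-Auth-Token: $TOKEN" \
        http://127.0.0.1:5050/v1/introspection/$NODE_UUID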
@ -36,7 +36,7 @@ Response:
Get Introspection Status
~~~~~~~~~~~~~~~~~~~~~~~~
``GET /v1/introspection/<UUID>`` get hardware introspection status.
``GET /v1/introspection/<Node ID>`` get hardware introspection status.
Requires X-Auth-Token header with Keystone token for authentication.
@ -58,7 +58,7 @@ Response body: JSON dictionary with keys:
Abort Running Introspection
~~~~~~~~~~~~~~~~~~~~~~~~~~~
``POST /v1/introspection/<UUID>/abort`` abort running introspection.
``POST /v1/introspection/<Node ID>/abort`` abort running introspection.
Requires X-Auth-Token header with Keystone token for authentication.
@ -74,7 +74,7 @@ Response:
Get Introspection Data
~~~~~~~~~~~~~~~~~~~~~~
``GET /v1/introspection/<UUID>/data`` get stored data from successful
``GET /v1/introspection/<Node ID>/data`` get stored data from successful
introspection.
Requires X-Auth-Token header with Keystone token for authentication.
@ -93,6 +93,25 @@ Response body: JSON dictionary with introspection data
format and contents of the stored data. Notably, it depends on the ramdisk
used and plugins enabled both in the ramdisk and in inspector itself.
Reapply introspection on stored data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``POST /v1/introspection/<Node ID>/data/unprocessed`` to trigger
introspection on stored unprocessed data. No data is allowed to be
sent along with the request.
Requires X-Auth-Token header with Keystone token for authentication.
Requires enabling the Swift store in the ``processing`` section of the
configuration file.
Response:
* 202 - accepted
* 400 - bad request or store not configured
* 401, 403 - missing or invalid authentication
* 404 - node not found for Node ID
* 409 - inspector locked node for processing
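For illustration, a minimal request against a local installation (a sketch;
the token and node ID are placeholders, and no body may be sent)::

    curl -X POST -H "X-Auth-Token: $TOKEN" \
        http://127.0.0.1:5050/v1/introspection/$NODE_UUID/data/unprocessed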
Introspection Rules
~~~~~~~~~~~~~~~~~~~
@ -112,7 +131,8 @@ authentication.
Response
* 200 - OK
* 200 - OK for API version < 1.6
* 201 - OK for API version 1.6 and higher
* 400 - bad request
Response body: JSON dictionary with introspection rule representation (the
@ -198,22 +218,6 @@ Optionally the following keys might be provided:
* ``logs`` base64-encoded logs from the ramdisk.
The following keys are supported for backward compatibility with the old
bash-based ramdisk, when ``inventory`` is not provided:
* ``cpus`` number of CPU
* ``cpu_arch`` architecture of the CPU
* ``memory_mb`` RAM in MiB
* ``local_gb`` hard drive size in GiB
* ``ipmi_address`` IP address of BMC, may be missing on VM
* ``block_devices`` block devices information for the ``raid_device`` plugin,
dictionary with one key: ``serials`` list of serial numbers of block devices.
.. note::
This list highly depends on enabled plugins, provided above are
expected keys for the default set of plugins. See :ref:`plugins`
@ -279,6 +283,10 @@ major version and is always ``1`` for now, ``Y`` is a minor version.
``X-OpenStack-Ironic-Inspector-API-Maximum-Version`` headers with minimum
and maximum API versions supported by the server.
.. note::
By default (when no version header is provided) the server uses the maximum
supported API version.
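A client can pin a version by sending the corresponding request header; a
sketch against a local installation (the header name mirrors the response
headers above)::

    curl -H "X-Auth-Token: $TOKEN" \
        -H "X-OpenStack-Ironic-Inspector-API-Version: 1.6" \
        http://127.0.0.1:5050/v1/introspection/$NODE_UUID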
API Discovery
~~~~~~~~~~~~~
@ -323,3 +331,6 @@ Version History
* **1.1** adds endpoint to retrieve stored introspection data.
* **1.2** endpoints for manipulating introspection rules.
* **1.3** endpoint for canceling running introspection
* **1.4** endpoint for reapplying the introspection over stored data.
* **1.5** support for Ironic node names.
* **1.6** endpoint for rules creating returns 201 instead of 200 on success.


@ -15,6 +15,13 @@ status.
Finally, some distributions (e.g. Fedora) provide **ironic-inspector**
packaged, some of them under its old name *ironic-discoverd*.
There are several projects you can use to set up **ironic-inspector** in
production. `puppet-ironic
<http://git.openstack.org/cgit/openstack/puppet-ironic/>`_ provides Puppet
manifests, while `bifrost <http://docs.openstack.org/developer/bifrost/>`_
provides an Ansible-based standalone installer. Refer to Configuration_
if you plan on installing **ironic-inspector** manually.
.. _PyPI: https://pypi.python.org/pypi/ironic-inspector
Note for Ubuntu users
@ -40,6 +47,7 @@ Ironic Version Standalone Inspection Interface
Juno 1.0 N/A
Kilo 1.0 - 2.2 1.0 - 1.1
Liberty 1.1 - 2.X 2.0 - 2.X
Mitaka+ 2.0 - 2.X 2.0 - 2.X
============== ========== ====================
.. note::
@ -53,11 +61,10 @@ Copy ``example.conf`` to some permanent place
(e.g. ``/etc/ironic-inspector/inspector.conf``).
Fill in at least these configuration values:
* ``os_username``, ``os_password``, ``os_tenant_name`` - Keystone credentials
to use when accessing other services and check client authentication tokens;
* The ``keystone_authtoken`` section - credentials to use when checking user
authentication.
* ``os_auth_url``, ``identity_uri`` - Keystone endpoints for validating
authentication tokens and checking user roles;
* The ``ironic`` section - credentials to use when accessing the Ironic API.
* ``connection`` in the ``database`` section - SQLAlchemy connection string
for the database;
@ -75,6 +82,49 @@ for the other possible configuration options.
Configuration file contains a password and thus should be owned by ``root``
and should have access rights like ``0600``.
Here is an example *inspector.conf* (adapted from a gate run)::
[DEFAULT]
debug = false
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
[database]
connection = mysql+pymysql://root:<PASSWORD>@127.0.0.1/ironic_inspector?charset=utf8
[firewall]
dnsmasq_interface = br-ctlplane
[ironic]
os_region = RegionOne
project_name = service
password = <PASSWORD>
username = ironic-inspector
auth_url = http://127.0.0.1/identity
auth_type = password
[keystone_authtoken]
auth_uri = http://127.0.0.1/identity
project_name = service
password = <PASSWORD>
username = ironic-inspector
auth_url = http://127.0.0.1/identity_v2_admin
auth_type = password
[processing]
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
store_data = swift
[swift]
os_region = RegionOne
project_name = service
password = <PASSWORD>
username = ironic-inspector
auth_url = http://127.0.0.1/identity
auth_type = password
.. note::
Set ``debug = true`` if you want to see complete logs.
**ironic-inspector** requires root rights for managing iptables. It gets them
by running the ``ironic-inspector-rootwrap`` utility with ``sudo``.
To allow it, copy file ``rootwrap.conf`` and directory ``rootwrap.d`` to the
@ -103,6 +153,41 @@ configuration directory (e.g. ``/etc/ironic-inspector/``) and create file
Replace ``stack`` with whatever user you'll be using to run
**ironic-inspector**.
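A minimal sketch of the copy step described above (assuming the files come
from the source tree and the configuration lives in
``/etc/ironic-inspector``)::

    sudo cp rootwrap.conf /etc/ironic-inspector/rootwrap.conf
    sudo cp -r rootwrap.d /etc/ironic-inspector/rootwrap.d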
Configuring IPA
^^^^^^^^^^^^^^^
ironic-python-agent_ is a ramdisk developed for Ironic. During the Liberty
cycle support for **ironic-inspector** was added. This is the default ramdisk
starting with the Mitaka release.
.. note::
You need at least 1.5 GiB of RAM on the machines to use IPA built with
diskimage-builder_ and at least 384 MiB to use the *TinyIPA*.
To build an ironic-python-agent ramdisk, do the following:
* Get a new enough version of diskimage-builder_::
sudo pip install -U "diskimage-builder>=1.1.2"
* Build the ramdisk::
disk-image-create ironic-agent fedora -o ironic-agent
.. note::
Replace "fedora" with your distribution of choice.
* Use the resulting files ``ironic-agent.kernel`` and
``ironic-agent.initramfs`` in the following instructions to set PXE or iPXE.
Alternatively, you can download a `prebuilt TinyIPA image
<http://tarballs.openstack.org/ironic-python-agent/tinyipa/files/>`_ or use
the `other builders
<http://docs.openstack.org/developer/ironic-python-agent/#image-builders>`_.
.. _diskimage-builder: https://github.com/openstack/diskimage-builder
.. _ironic-python-agent: https://github.com/openstack/ironic-python-agent
Configuring PXE
^^^^^^^^^^^^^^^
@ -111,10 +196,42 @@ As for PXE boot environment, you'll need:
* TFTP server running and accessible (see below for using *dnsmasq*).
Ensure ``pxelinux.0`` is present in the TFTP root.
Copy ``ironic-agent.kernel`` and ``ironic-agent.initramfs`` to the TFTP
root as well.
* Next, set up ``$TFTPROOT/pxelinux.cfg/default`` as follows::
default introspect
label introspect
kernel ironic-agent.kernel
append initrd=ironic-agent.initramfs ipa-inspection-callback-url=http://{IP}:5050/v1/continue systemd.journald.forward_to_console=yes
ipappend 3
Replace ``{IP}`` with the IP of the machine (do not use the loopback
interface, as it will be accessed by the ramdisk on a booting machine).
.. note::
While ``systemd.journald.forward_to_console=yes`` is not actually
required, it will substantially simplify debugging if something
goes wrong. You can also enable IPA debug logging by appending
``ipa-debug=1``.
IPA is pluggable: you can insert introspection plugins called
*collectors* into it. For example, to enable a very handy ``logs`` collector
(sending ramdisk logs to **ironic-inspector**), modify the ``append`` line in
``$TFTPROOT/pxelinux.cfg/default``::
append initrd=ironic-agent.initramfs ipa-inspection-callback-url=http://{IP}:5050/v1/continue ipa-inspection-collectors=default,logs systemd.journald.forward_to_console=yes
.. note::
You probably want to always keep the ``default`` collector, as it provides
the basic information required for introspection.
* You need a PXE boot server (e.g. *dnsmasq*) running on **the same** machine as
**ironic-inspector**. Don't do any firewall configuration:
**ironic-inspector** will handle it for you. In **ironic-inspector**
**ironic-inspector** will handle it for you. In the **ironic-inspector**
configuration file set ``dnsmasq_interface`` to the interface your
PXE boot server listens on. Here is an example *dnsmasq.conf*::
@ -132,116 +249,65 @@ As for PXE boot environment, you'll need:
simultaneously cause conflicts - the same IP address is suggested to
several nodes.
* You have to install and configure one of 2 available ramdisks: simple
bash-based (see `Using simple ramdisk`_) or more complex based on
ironic-python-agent_ (See `Using IPA`_).
Configuring iPXE
^^^^^^^^^^^^^^^^
Here is *inspector.conf* you may end up with::
iPXE allows better scaling as it primarily uses the HTTP protocol instead of
slow and unreliable TFTP. You still need a TFTP server as a fall back for
nodes not supporting iPXE. To use iPXE you'll need:
[DEFAULT]
debug = false
[ironic]
identity_uri = http://127.0.0.1:35357
os_auth_url = http://127.0.0.1:5000/v2.0
os_username = admin
os_password = password
os_tenant_name = admin
[firewall]
dnsmasq_interface = br-ctlplane
* TFTP server running and accessible (see above for using *dnsmasq*).
Ensure ``undionly.kpxe`` is present in the TFTP root. If any of your nodes
boot with UEFI, you'll also need ``ipxe.efi`` there.
.. note::
Set ``debug = true`` if you want to see complete logs.
* You also need an HTTP server capable of serving static files.
Copy ``ironic-agent.kernel`` and ``ironic-agent.initramfs`` there.
Using IPA
^^^^^^^^^
* Create a file called ``inspector.ipxe`` in the HTTP root (you can name and
place it differently, just don't forget to adjust the *dnsmasq.conf* example
below)::
ironic-python-agent_ is a ramdisk developed for Ironic. During the Liberty
cycle support for **ironic-inspector** was added. This is the default ramdisk
starting with the Mitaka release.
#!ipxe
.. note::
You need at least 1.5 GiB of RAM on the machines to use this ramdisk,
2 GiB is recommended.
:retry_dhcp
dhcp || goto retry_dhcp
To build an ironic-python-agent ramdisk, do the following:
* Get the new enough version of diskimage-builder_::
sudo pip install -U "diskimage-builder>=1.1.2"
* Build the ramdisk::
disk-image-create ironic-agent fedora -o ironic-agent
:retry_boot
imgfree
kernel --timeout 30000 http://{IP}:8088/ironic-agent.kernel ipa-inspection-callback-url=http://{IP}:5050/v1/continue systemd.journald.forward_to_console=yes BOOTIF=${mac} initrd=agent.ramdisk || goto retry_boot
initrd --timeout 30000 http://{IP}:8088/ironic-agent.ramdisk || goto retry_boot
boot
.. note::
Replace "fedora" with your distribution of choice.
Older versions of the iPXE ROM tend to misbehave on unreliable network
connections, thus we use the timeout option with retries.
* Copy resulting files ``ironic-agent.vmlinuz`` and ``ironic-agent.initramfs``
to the TFTP root directory.
Just like with PXE you can customize the list of collectors by appending
the ``ipa-inspection-collectors`` kernel option, for example::
Alternatively, you can download a `prebuilt IPA image
<http://tarballs.openstack.org/ironic-python-agent/coreos/files/>`_ or use
the `CoreOS-based IPA builder
<http://docs.openstack.org/developer/ironic-python-agent/#coreos>`_.
ipa-inspection-collectors=default,logs,extra_hardware
Next, set up ``$TFTPROOT/pxelinux.cfg/default`` as follows::
* Just as with PXE you'll need a PXE boot server. The configuration, however,
will be different. Here is an example *dnsmasq.conf*::
default introspect
port=0
interface={INTERFACE}
bind-interfaces
dhcp-range={DHCP IP RANGE, e.g. 192.168.0.50,192.168.0.150}
enable-tftp
tftp-root={TFTP ROOT, e.g. /tftpboot}
dhcp-sequential-ip
dhcp-match=ipxe,175
dhcp-match=set:efi,option:client-arch,7
dhcp-boot=tag:ipxe,http://{IP}:8088/inspector.ipxe
dhcp-boot=tag:efi,ipxe.efi
dhcp-boot=undionly.kpxe,localhost.localdomain,{IP}
label introspect
kernel ironic-agent.vmlinuz
append initrd=ironic-agent.initramfs ipa-inspection-callback-url=http://{IP}:5050/v1/continue systemd.journald.forward_to_console=yes
ipappend 3
Replace ``{IP}`` with the IP of the machine (do not use the loopback
interface, as it will be accessed by the ramdisk on a booting machine).
.. note::
While ``systemd.journald.forward_to_console=yes`` is not actually
required, it will substantially simplify debugging if something goes wrong.
This ramdisk is pluggable: you can insert introspection plugins called
*collectors* into it. For example, to enable a very handy ``logs`` collector
(sending ramdisk logs to **ironic-inspector**), modify the ``append`` line in
``$TFTPROOT/pxelinux.cfg/default``::
append initrd=ironic-agent.initramfs ipa-inspection-callback-url=http://{IP}:5050/v1/continue ipa-inspection-collectors=default,logs systemd.journald.forward_to_console=yes
.. note::
You probably want to always keep ``default`` collector, as it provides the
basic information required for introspection.
.. _diskimage-builder: https://github.com/openstack/diskimage-builder
.. _ironic-python-agent: https://github.com/openstack/ironic-python-agent
Using simple ramdisk
^^^^^^^^^^^^^^^^^^^^
This ramdisk is deprecated, its use is not recommended.
* Build and put into your TFTP the kernel and ramdisk created using the
diskimage-builder_ `ironic-discoverd-ramdisk element`_::
ramdisk-image-create -o discovery fedora ironic-discoverd-ramdisk
You need diskimage-builder_ 0.1.38 or newer to do it (using the latest one
is always advised).
* Configure your ``$TFTPROOT/pxelinux.cfg/default`` with something like::
default introspect
label introspect
kernel discovery.kernel
append initrd=discovery.initramfs discoverd_callback_url=http://{IP}:5050/v1/continue
ipappend 3
Replace ``{IP}`` with IP of the machine (do not use loopback interface, it
will be accessed by ramdisk on a booting machine).
.. _ironic-discoverd-ramdisk element: https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk
First, we configure the same common parameters as with PXE. Then we define
``ipxe`` and ``efi`` tags. Nodes already supporting iPXE are ordered to
download and execute ``inspector.ipxe``. Nodes without iPXE booted with UEFI
will get ``ipxe.efi`` firmware to execute, while the remaining nodes will get
``undionly.kpxe``.
Managing the **ironic-inspector** database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@ -79,8 +79,8 @@ Starting with the Mitaka release, you can also apply conditions to ironic node
field. Prefix field with schema (``data://`` or ``node://``) to distinguish
between values from introspection data and node. Both schemes use JSON path::
{'field': 'node://property.path', 'op': 'eq', 'value': 'val'}
{'field': 'data://introspection.path', 'op': 'eq', 'value': 'val'}
{"field": "node://property.path", "op": "eq", "value": "val"}
{"field": "data://introspection.path", "op": "eq", "value": "val"}
if scheme (node or data) is missing, condition compares data with
introspection data.
@ -127,8 +127,8 @@ Starting from Mitaka release, ``value`` field in actions supports fetching data
from introspection, it's using `python string formatting notation
<https://docs.python.org/2/library/string.html#formatspec>`_ ::
{'action': 'set-attribute', 'path': '/driver_info/ipmi_address',
'value': '{data[inventory][bmc_address]}'}
{"action": "set-attribute", "path": "/driver_info/ipmi_address",
"value": "{data[inventory][bmc_address]}"}
.. _setting-ipmi-creds:
@ -184,20 +184,27 @@ introspection data. Note that order does matter in this option.
These are plugins that are enabled by default and should not be disabled,
unless you understand what you're doing:
``ramdisk_error``
reports error, if ``error`` field is set by the ramdisk, also optionally
stores logs from ``logs`` field, see :ref:`api` for details.
``scheduler``
validates and updates basic hardware scheduling properties: CPU number and
architecture, memory and disk size.
``validate_interfaces``
validates network interfaces information.
The following plugins are enabled by default, but can be disabled if not
needed:
``ramdisk_error``
reports error, if ``error`` field is set by the ramdisk, also optionally
stores logs from ``logs`` field, see :ref:`api` for details.
``capabilities``
detect node capabilities: CPU, boot mode, etc. See `Capabilities
Detection`_ for more details.
Here are some plugins that can be additionally enabled:
``example``
example plugin logging its input and output.
``raid_device`` (deprecated name ``root_device_hint``)
``raid_device``
gathers block devices from ramdisk and exposes root device in multiple
runs.
``extra_hardware``
@ -207,6 +214,12 @@ Here are some plugins that can be additionally enabled:
then the new format will be stored in the 'extra' key. The 'data' key is
then deleted from the introspection data, as unless converted it's assumed
unusable by introspection rules.
``local_link_connection``
Processes LLDP data returned from inspection, specifically looking for the
port ID and chassis ID. If found, it configures the local link connection
information on the node's Ironic ports with that data. To enable LLDP in the
inventory from IPA, pass ``ipa-collect-lldp=1`` as a kernel parameter to the
IPA ramdisk.
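For example, extending the PXE ``append`` line shown earlier (a sketch;
``{IP}`` as before)::

    append initrd=ironic-agent.initramfs ipa-inspection-callback-url=http://{IP}:5050/v1/continue ipa-collect-lldp=1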
Refer to :ref:`contributing_link` for information on how to write your
own plugin.
@ -241,40 +254,131 @@ see :ref:`rules`.
A rule to set a node's Ironic driver to the ``agent_ipmitool`` driver and
populate the required driver_info for that driver would look like::
"description": "Set IPMI driver_info if no credentials",
"actions": [
{'action': 'set-attribute', 'path': 'driver', 'value': 'agent_ipmitool'},
{'action': 'set-attribute', 'path': 'driver_info/ipmi_username',
'value': 'username'},
{'action': 'set-attribute', 'path': 'driver_info/ipmi_password',
'value': 'password'}
]
"conditions": [
{'op': 'is-empty', 'field': 'node://driver_info.ipmi_password'},
{'op': 'is-empty', 'field': 'node://driver_info.ipmi_username'}
]
"description": "Set deploy info if not already set on node",
"actions": [
{'action': 'set-attribute', 'path': 'driver_info/deploy_kernel',
'value': '<glance uuid>'},
{'action': 'set-attribute', 'path': 'driver_info/deploy_ramdisk',
'value': '<glance uuid>'},
]
"conditions": [
{'op': 'is-empty', 'field': 'node://driver_info.deploy_ramdisk'},
{'op': 'is-empty', 'field': 'node://driver_info.deploy_kernel'}
]
[{
"description": "Set IPMI driver_info if no credentials",
"actions": [
{"action": "set-attribute", "path": "driver", "value": "agent_ipmitool"},
{"action": "set-attribute", "path": "driver_info/ipmi_username",
"value": "username"},
{"action": "set-attribute", "path": "driver_info/ipmi_password",
"value": "password"}
],
"conditions": [
{"op": "is-empty", "field": "node://driver_info.ipmi_password"},
{"op": "is-empty", "field": "node://driver_info.ipmi_username"}
]
},{
"description": "Set deploy info if not already set on node",
"actions": [
{"action": "set-attribute", "path": "driver_info/deploy_kernel",
"value": "<glance uuid>"},
{"action": "set-attribute", "path": "driver_info/deploy_ramdisk",
"value": "<glance uuid>"}
],
"conditions": [
{"op": "is-empty", "field": "node://driver_info.deploy_ramdisk"},
{"op": "is-empty", "field": "node://driver_info.deploy_kernel"}
]
}]
All nodes discovered and enrolled via the ``enroll`` hook will contain an
``auto_discovered`` flag in the introspection data. This flag makes it
possible to distinguish between manually enrolled nodes and auto-discovered
nodes in the introspection rules using the rule condition ``eq``::
"description": "Enroll auto-discovered nodes with fake driver",
"actions": [
{'action': 'set-attribute', 'path': 'driver', 'value': 'fake'}
]
"conditions": [
{'op': 'eq', 'field': 'data://auto_discovered', 'value': True}
]
{
"description": "Enroll auto-discovered nodes with fake driver",
"actions": [
{"action": "set-attribute", "path": "driver", "value": "fake"}
],
"conditions": [
{"op": "eq", "field": "data://auto_discovered", "value": true}
]
}
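A rule set like the above is imported from a JSON file with the same command
this change uses in its devstack exercise::

    openstack baremetal introspection rule import /path/to/rules.json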
Reapplying introspection on stored data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To allow correcting mistakes in introspection rules the API provides
an entry point that triggers the introspection over stored data. The
data to use for processing is kept in Swift separately from the data
already processed. Reapplying introspection overwrites processed data
in the store. Updating the introspection data through the endpoint
isn't supported yet. The following preconditions are checked before
reapplying introspection:
* no data is being sent along with the request
* Swift store is configured and enabled
* introspection data is stored in Swift for the node UUID
* node record is kept in database for the UUID
* introspection is not ongoing for the node UUID
Should the preconditions fail, an immediate response is given to the
user:
* ``400`` if the request contained data or in case Swift store is not
enabled in configuration
* ``404`` in case Ironic doesn't keep track of the node UUID
* ``409`` if an introspection is already ongoing for the node
If the preconditions are met, a background task is executed to carry
out the processing and a ``202 Accepted`` response is returned to the
endpoint user. As requested, these steps are performed in the
background task:
* preprocessing hooks
* post processing hooks, storing result in Swift
* introspection rules
These steps are avoided, based on the feature requirements:
* ``node_not_found_hook`` is skipped
* power operations
* roll-back actions done by hooks
Limitations:
* IPMI credentials are not updated, as the ramdisk is not running
* there is currently no way to update the unprocessed data
* the unprocessed data is never cleaned from the store
* check for stored data presence is performed in background;
missing data situation still results in a ``202`` response
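With a recent python-ironic-inspector-client this endpoint is also reachable
from the command line (a sketch; check that your client version supports the
command)::

    openstack baremetal introspection reapply <node UUID>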
Capabilities Detection
~~~~~~~~~~~~~~~~~~~~~~
Starting with the Newton release, **Ironic Inspector** can optionally discover
several node capabilities. A recent (Newton or newer) IPA image is required
for it to work.
Boot mode
^^^^^^^^^
The current boot mode (BIOS or UEFI) can be detected and recorded as
``boot_mode`` capability in Ironic. It will make some drivers change their
behaviour to account for this capability. Set the ``[capabilities]boot_mode``
configuration option to ``True`` to enable.
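A sketch of enabling it in *inspector.conf* (the section and option names
match the sample config below)::

    [capabilities]
    boot_mode = True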
CPU capabilities
^^^^^^^^^^^^^^^^
Several CPU flags are detected by default and recorded as the following
capabilities:
* ``cpu_aes`` AES instructions.
* ``cpu_vt`` virtualization support.
* ``cpu_txt`` TXT support.
* ``cpu_hugepages`` huge pages (2 MiB) support.
* ``cpu_hugepages_1g`` huge pages (1 GiB) support.
It is possible to define your own rules for detecting CPU capabilities.
Set the ``[capabilities]cpu_flags`` configuration option to a mapping between
a CPU flag and a capability, for example::
cpu_flags = aes:cpu_aes,svm:cpu_vt,vmx:cpu_vt
See the default value of this option for a more detailed example.


@ -8,7 +8,9 @@
# Deprecated group/name - [discoverd]/listen_address
#listen_address = 0.0.0.0
# Port to listen on. (integer value)
# Port to listen on. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [discoverd]/listen_port
#listen_port = 5050
@ -74,10 +76,11 @@
# If set to true, the logging level will be set to DEBUG instead of
# the default INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# If set to false, the logging level will be set to WARNING instead of
# the default INFO level. (boolean value)
# DEPRECATED: If set to false, the logging level will be set to
# WARNING instead of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
@ -89,6 +92,7 @@
# configuration is set in the configuration file and other logging
# configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
@ -166,6 +170,20 @@
#fatal_deprecations = false
[capabilities]
#
# From ironic_inspector.plugins.capabilities
#
# Whether to store the boot mode (BIOS or UEFI). (boolean value)
#boot_mode = false
# Mapping between a CPU flag and a capability to set if this flag is
# present. (dict value)
#cpu_flags = aes:cpu_aes,pdpe1gb:cpu_hugepages_1g,pse:cpu_hugepages,smx:cpu_txt,svm:cpu_vt,vmx:cpu_vt
[cors]
#
@ -173,7 +191,9 @@
#
# Indicate whether this resource may be shared with the domain
# received in the requests "origin" header. (list value)
# received in the requests "origin" header. Format:
# "<protocol>://<host>[:<port>]", no trailing slash. Example:
# https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials
@ -182,7 +202,7 @@
# Indicate which headers are safe to expose to the API. Defaults to
# HTTP Simple Headers. (list value)
#expose_headers = Content-Type,Cache-Control,Content-Language,Expires,Last-Modified,Pragma
#expose_headers =
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
@ -203,7 +223,9 @@
#
# Indicate whether this resource may be shared with the domain
# received in the requests "origin" header. (list value)
# received in the requests "origin" header. Format:
# "<protocol>://<host>[:<port>]", no trailing slash. Example:
# https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials
@ -212,7 +234,7 @@
# Indicate which headers are safe to expose to the API. Defaults to
# HTTP Simple Headers. (list value)
#expose_headers = Content-Type,Cache-Control,Content-Language,Expires,Last-Modified,Pragma
#expose_headers =
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
@ -232,8 +254,12 @@
# From oslo.db
#
# The file name to use with SQLite. (string value)
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to
# connect the database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
@ -338,8 +364,8 @@
# From ironic_inspector
#
# SQLite3 database to store nodes under introspection, required. Do
# not use :memory: here, it won't work. DEPRECATED: use
# DEPRECATED: SQLite3 database to store nodes under introspection,
# required. Do not use :memory: here, it won't work. DEPRECATED: use
# [database]/connection. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
@ -387,59 +413,153 @@
# From ironic_inspector.common.ironic
#
# Keystone authentication endpoint for accessing Ironic API. Use
# [keystone_authtoken]/auth_uri for keystone authentication. (string
# Authentication URL (string value)
#auth_url = <None>
# Method to use for authentication: noauth or keystone. (string value)
# Allowed values: keystone, noauth
#auth_strategy = keystone
# Authentication type to load (string value)
# Deprecated group/name - [ironic]/auth_plugin
#auth_type = <None>
# PEM encoded Certificate Authority to use when verifying HTTPs
# connections. (string value)
#cafile = <None>
# PEM encoded client certificate cert file (string value)
#certfile = <None>
# Optional domain ID to use with v3 and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_id = <None>
# Optional domain name to use with v3 API and v2 parameters. It will
# be used for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_name = <None>
# Domain ID to scope to (string value)
#domain_id = <None>
# Domain name to scope to (string value)
#domain_name = <None>
# DEPRECATED: Keystone admin endpoint. DEPRECATED: Use
# [keystone_authtoken] section for keystone token validation. (string
# value)
# Deprecated group/name - [discoverd]/os_auth_url
#os_auth_url =
# User name for accessing Ironic API. Use
# [keystone_authtoken]/admin_user for keystone authentication. (string
# value)
# Deprecated group/name - [discoverd]/os_username
#os_username =
# Password for accessing Ironic API. Use
# [keystone_authtoken]/admin_password for keystone authentication.
# (string value)
# Deprecated group/name - [discoverd]/os_password
#os_password =
# Tenant name for accessing Ironic API. Use
# [keystone_authtoken]/admin_tenant_name for keystone authentication.
# (string value)
# Deprecated group/name - [discoverd]/os_tenant_name
#os_tenant_name =
# Keystone admin endpoint. DEPRECATED: use
# [keystone_authtoken]/identity_uri. (string value)
# Deprecated group/name - [discoverd]/identity_uri
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#identity_uri =
# Method to use for authentication: noauth or keystone. (string value)
# Allowed values: keystone, noauth
#auth_strategy = keystone
# Verify HTTPS connections. (boolean value)
#insecure = false
# Ironic API URL, used to set Ironic API URL when auth_strategy option
# is noauth to work with standalone Ironic without keystone. (string
# value)
#ironic_url = http://localhost:6385/
# Ironic service type. (string value)
#os_service_type = baremetal
# PEM encoded client certificate key file (string value)
#keyfile = <None>
# Maximum number of retries in case of conflict error (HTTP 409).
# (integer value)
#max_retries = 30
# DEPRECATED: Keystone authentication endpoint for accessing Ironic
# API. Use [keystone_authtoken] section for keystone token validation.
# (string value)
# Deprecated group/name - [discoverd]/os_auth_url
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use options presented by configured keystone auth plugin.
#os_auth_url =
# Ironic endpoint type. (string value)
#os_endpoint_type = internalURL
# DEPRECATED: Password for accessing Ironic API. Use
# [keystone_authtoken] section for keystone token validation. (string
# value)
# Deprecated group/name - [discoverd]/os_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use options presented by configured keystone auth plugin.
#os_password =
# Keystone region used to get Ironic endpoints. (string value)
#os_region = <None>
# Ironic service type. (string value)
#os_service_type = baremetal
# DEPRECATED: Tenant name for accessing Ironic API. Use
# [keystone_authtoken] section for keystone token validation. (string
# value)
# Deprecated group/name - [discoverd]/os_tenant_name
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use options presented by configured keystone auth plugin.
#os_tenant_name =
# DEPRECATED: User name for accessing Ironic API. Use
# [keystone_authtoken] section for keystone token validation. (string
# value)
# Deprecated group/name - [discoverd]/os_username
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use options presented by configured keystone auth plugin.
#os_username =
# User's password (string value)
#password = <None>
# Domain ID containing project (string value)
#project_domain_id = <None>
# Domain name containing project (string value)
#project_domain_name = <None>
# Project ID to scope to (string value)
# Deprecated group/name - [ironic]/tenant-id
#project_id = <None>
# Project name to scope to (string value)
# Deprecated group/name - [ironic]/tenant-name
#project_name = <None>
# Interval between retries in case of conflict error (HTTP 409).
# (integer value)
#retry_interval = 2
# Maximum number of retries in case of conflict error (HTTP 409).
# (integer value)
#max_retries = 30
# Tenant ID (string value)
#tenant_id = <None>
# Tenant Name (string value)
#tenant_name = <None>
# Timeout value for http requests (integer value)
#timeout = <None>
# Trust ID (string value)
#trust_id = <None>
# User's domain id (string value)
#user_domain_id = <None>
# User's domain name (string value)
#user_domain_name = <None>
# User id (string value)
#user_id = <None>
# Username (string value)
# Deprecated group/name - [ironic]/user-name
#username = <None>
[keystone_authtoken]
@ -448,7 +568,14 @@
# From keystonemiddleware.auth_token
#
# Complete public Identity API endpoint. (string value)
# Complete "public" Identity API endpoint. This endpoint should not be
# an "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to
# authenticate. Although this endpoint should ideally be unversioned,
# client support in the wild varies. If you're using a versioned v2
# endpoint here, then this should *not* be the same endpoint the
# service user utilizes for validating tokens, because normal end
# users may not be able to reach that endpoint. (string value)
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
@ -494,7 +621,7 @@
# Optionally specify a list of memcached server(s) to use for caching.
# If left undefined, tokens will instead be cached in-process. (list
# value)
# Deprecated group/name - [DEFAULT]/memcache_servers
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the
@ -506,7 +633,8 @@
# Determines the frequency at which the list of revoked tokens is
# retrieved from the Identity service (in seconds). A high number of
# revocation events combined with a low cache duration may
# significantly reduce performance. (integer value)
# significantly reduce performance. Only valid for PKI tokens.
# (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be
@ -577,11 +705,11 @@
# value)
#hash_algorithms = md5
# Authentication type to load (unknown value)
# Deprecated group/name - [DEFAULT]/auth_plugin
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (unknown
# Config Section from which to load plugin specific options (string
# value)
#auth_section = <None>
@ -626,7 +754,7 @@
# the Nova scheduler. Hook 'validate_interfaces' ensures that valid
# NIC data was provided by the ramdisk. Do not exclude these two unless
# you really know what you're doing. (string value)
#default_processing_hooks = ramdisk_error,root_disk_selection,scheduler,validate_interfaces
#default_processing_hooks = ramdisk_error,root_disk_selection,scheduler,validate_interfaces,capabilities
# Comma-separated list of enabled hooks for processing pipeline. The
# default for this is $default_processing_hooks, hooks can be added
@ -669,6 +797,15 @@
# processing. (boolean value)
#log_bmc_address = true
# File name template for storing ramdisk logs. The following
# replacements can be used: {uuid} - node UUID or "unknown", {bmc} -
# node BMC address or "unknown", {dt} - current UTC date and time,
# {mac} - PXE booting MAC or "unknown". (string value)
#ramdisk_logs_filename_format = {uuid}_{dt:%Y%m%d-%H%M%S.%f}.tar.gz
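# For illustration only (values hypothetical): with the default template, a
# node introspected at 2016-09-20 14:59:59.123456 UTC has its log archive
# stored as "<node-uuid>_20160920-145959.123456.tar.gz".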
# Whether to power off a node after introspection. (boolean value)
#power_off = true
[swift]
@ -676,34 +813,112 @@
# From ironic_inspector.common.swift
#
# Maximum number of times to retry a Swift request, before failing.
# (integer value)
#max_retries = 2
# Authentication URL (string value)
#auth_url = <None>
# Authentication type to load (string value)
# Deprecated group/name - [swift]/auth_plugin
#auth_type = <None>
# PEM encoded Certificate Authority to use when verifying HTTPS
# connections. (string value)
#cafile = <None>
# PEM encoded client certificate cert file (string value)
#certfile = <None>
# Default Swift container to use when creating objects. (string value)
#container = ironic-inspector
# Optional domain ID to use with v3 and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_id = <None>
# Optional domain name to use with v3 API and v2 parameters. It will
# be used for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_name = <None>
# Number of seconds that the Swift object will last before being
# deleted. (set to 0 to never delete the object). (integer value)
#delete_after = 0
# Default Swift container to use when creating objects. (string value)
#container = ironic-inspector
# Domain ID to scope to (string value)
#domain_id = <None>
# User name for accessing Swift API. (string value)
#username =
# Domain name to scope to (string value)
#domain_name = <None>
# Password for accessing Swift API. (string value)
#password =
# Verify HTTPS connections. (boolean value)
#insecure = false
# Tenant name for accessing Swift API. (string value)
#tenant_name =
# PEM encoded client certificate key file (string value)
#keyfile = <None>
# Keystone authentication API version (string value)
# Maximum number of times to retry a Swift request, before failing.
# (integer value)
#max_retries = 2
# DEPRECATED: Keystone authentication URL (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use options presented by configured keystone auth plugin.
#os_auth_url =
# DEPRECATED: Keystone authentication API version (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use options presented by configured keystone auth plugin.
#os_auth_version = 2
# Keystone authentication URL (string value)
#os_auth_url =
# Swift endpoint type. (string value)
#os_endpoint_type = internalURL
# Keystone region to get endpoint for. (string value)
#os_region = <None>
# Swift service type. (string value)
#os_service_type = object-store
# Swift endpoint type. (string value)
#os_endpoint_type = internalURL
# User's password (string value)
#password = <None>
# Domain ID containing project (string value)
#project_domain_id = <None>
# Domain name containing project (string value)
#project_domain_name = <None>
# Project ID to scope to (string value)
# Deprecated group/name - [swift]/tenant-id
#project_id = <None>
# Project name to scope to (string value)
# Deprecated group/name - [swift]/tenant-name
#project_name = <None>
# Tenant ID (string value)
#tenant_id = <None>
# Tenant Name (string value)
#tenant_name = <None>
# Timeout value for http requests (integer value)
#timeout = <None>
# Trust ID (string value)
#trust_id = <None>
# User's domain id (string value)
#user_domain_id = <None>
# User's domain name (string value)
#user_domain_name = <None>
# User id (string value)
#user_id = <None>
# Username (string value)
# Deprecated group/name - [swift]/user-name
#username = <None>

View File

@ -14,10 +14,11 @@
import socket
from ironicclient import client
from keystoneclient import client as keystone_client
from ironicclient import exceptions as ironic_exc
from oslo_config import cfg
from ironic_inspector.common.i18n import _
from ironic_inspector.common import keystone
from ironic_inspector import utils
CONF = cfg.CONF
@ -32,35 +33,50 @@ DEFAULT_IRONIC_API_VERSION = '1.11'
IRONIC_GROUP = 'ironic'
IRONIC_OPTS = [
cfg.StrOpt('os_region',
help='Keystone region used to get Ironic endpoints.'),
cfg.StrOpt('os_auth_url',
default='',
help='Keystone authentication endpoint for accessing Ironic '
'API. Use [keystone_authtoken]/auth_uri for keystone '
'authentication.',
deprecated_group='discoverd'),
'API. Use [keystone_authtoken] section for keystone '
'token validation.',
deprecated_group='discoverd',
deprecated_for_removal=True,
deprecated_reason='Use options presented by configured '
'keystone auth plugin.'),
cfg.StrOpt('os_username',
default='',
help='User name for accessing Ironic API. '
'Use [keystone_authtoken]/admin_user for keystone '
'authentication.',
deprecated_group='discoverd'),
'Use [keystone_authtoken] section for keystone '
'token validation.',
deprecated_group='discoverd',
deprecated_for_removal=True,
deprecated_reason='Use options presented by configured '
'keystone auth plugin.'),
cfg.StrOpt('os_password',
default='',
help='Password for accessing Ironic API. '
'Use [keystone_authtoken]/admin_password for keystone '
'authentication.',
'Use [keystone_authtoken] section for keystone '
'token validation.',
secret=True,
deprecated_group='discoverd'),
deprecated_group='discoverd',
deprecated_for_removal=True,
deprecated_reason='Use options presented by configured '
'keystone auth plugin.'),
cfg.StrOpt('os_tenant_name',
default='',
help='Tenant name for accessing Ironic API. '
'Use [keystone_authtoken]/admin_tenant_name for keystone '
'authentication.',
deprecated_group='discoverd'),
'Use [keystone_authtoken] section for keystone '
'token validation.',
deprecated_group='discoverd',
deprecated_for_removal=True,
deprecated_reason='Use options presented by configured '
'keystone auth plugin.'),
cfg.StrOpt('identity_uri',
default='',
help='Keystone admin endpoint. '
'DEPRECATED: use [keystone_authtoken]/identity_uri.',
'DEPRECATED: Use [keystone_authtoken] section for '
'keystone token validation.',
deprecated_group='discoverd',
deprecated_for_removal=True),
cfg.StrOpt('auth_strategy',
@ -90,6 +106,32 @@ IRONIC_OPTS = [
CONF.register_opts(IRONIC_OPTS, group=IRONIC_GROUP)
keystone.register_auth_opts(IRONIC_GROUP)
IRONIC_SESSION = None
LEGACY_MAP = {
'auth_url': 'os_auth_url',
'username': 'os_username',
'password': 'os_password',
'tenant_name': 'os_tenant_name'
}
class NotFound(utils.Error):
"""Node not found in Ironic."""
def __init__(self, node_ident, code=404, *args, **kwargs):
msg = _('Node %s was not found in Ironic') % node_ident
super(NotFound, self).__init__(msg, code, *args, **kwargs)
def reset_ironic_session():
"""Reset the global session variable.
Mostly useful for unit tests.
"""
global IRONIC_SESSION
IRONIC_SESSION = None
def get_ipmi_address(node):
@ -114,33 +156,28 @@ def get_client(token=None,
"""Get Ironic client instance."""
# NOTE: To support standalone ironic without keystone
if CONF.ironic.auth_strategy == 'noauth':
args = {'os_auth_token': 'noauth',
'ironic_url': CONF.ironic.ironic_url}
elif token is None:
args = {'os_password': CONF.ironic.os_password,
'os_username': CONF.ironic.os_username,
'os_auth_url': CONF.ironic.os_auth_url,
'os_tenant_name': CONF.ironic.os_tenant_name,
'os_service_type': CONF.ironic.os_service_type,
'os_endpoint_type': CONF.ironic.os_endpoint_type}
args = {'token': 'noauth',
'endpoint': CONF.ironic.ironic_url}
else:
keystone_creds = {'password': CONF.ironic.os_password,
'username': CONF.ironic.os_username,
'auth_url': CONF.ironic.os_auth_url,
'tenant_name': CONF.ironic.os_tenant_name}
keystone = keystone_client.Client(**keystone_creds)
# FIXME(sambetts): Work around for Bug 1539839 as client.authenticate
# is not called.
keystone.authenticate()
ironic_url = keystone.service_catalog.url_for(
service_type=CONF.ironic.os_service_type,
endpoint_type=CONF.ironic.os_endpoint_type)
args = {'os_auth_token': token,
'ironic_url': ironic_url}
global IRONIC_SESSION
if not IRONIC_SESSION:
IRONIC_SESSION = keystone.get_session(
IRONIC_GROUP, legacy_mapping=LEGACY_MAP)
if token is None:
args = {'session': IRONIC_SESSION,
'region_name': CONF.ironic.os_region}
else:
ironic_url = IRONIC_SESSION.get_endpoint(
service_type=CONF.ironic.os_service_type,
endpoint_type=CONF.ironic.os_endpoint_type,
region_name=CONF.ironic.os_region
)
args = {'token': token,
'endpoint': ironic_url}
args['os_ironic_api_version'] = api_version
args['max_retries'] = CONF.ironic.max_retries
args['retry_interval'] = CONF.ironic.retry_interval
return client.get_client(1, **args)
return client.Client(1, **args)
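# Illustrative call paths for get_client() above (token value hypothetical):
#   get_client()             -> authenticates via the shared keystone session
#                               built from [ironic] (or the legacy os_* opts)
#   get_client(token='abc')  -> resolves the endpoint via the session, then
#                               reuses the caller's token
#   auth_strategy = noauth   -> only [ironic]/ironic_url is consulted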
def check_provision_state(node, with_credentials=False):
@ -172,5 +209,24 @@ def dict_to_capabilities(caps_dict):
if value is not None])
def get_node(node_id, ironic=None, **kwargs):
"""Get a node from Ironic.
:param node_id: node UUID or name.
:param ironic: ironic client instance.
:param kwargs: arguments to pass to Ironic client.
:raises: Error on failure
"""
ironic = ironic if ironic is not None else get_client()
try:
return ironic.node.get(node_id, **kwargs)
except ironic_exc.NotFound:
raise NotFound(node_id)
except ironic_exc.HttpError as exc:
raise utils.Error(_("Cannot get node %(node)s: %(exc)s") %
{'node': node_id, 'exc': exc})
def list_opts():
return [(IRONIC_GROUP, IRONIC_OPTS)]
return keystone.add_auth_options(IRONIC_OPTS, IRONIC_GROUP)

View File

@ -0,0 +1,129 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keystoneauth1 import exceptions
from keystoneauth1 import loading
from oslo_config import cfg
from oslo_log import log
from six.moves.urllib import parse # for legacy options loading only
from ironic_inspector.common.i18n import _LW
CONF = cfg.CONF
LOG = log.getLogger(__name__)
def register_auth_opts(group):
loading.register_session_conf_options(CONF, group)
loading.register_auth_conf_options(CONF, group)
CONF.set_default('auth_type', default='password', group=group)
def get_session(group, legacy_mapping=None, legacy_auth_opts=None):
auth = _get_auth(group, legacy_mapping, legacy_auth_opts)
session = loading.load_session_from_conf_options(
CONF, group, auth=auth)
return session
def _get_auth(group, legacy_mapping=None, legacy_opts=None):
try:
auth = loading.load_auth_from_conf_options(CONF, group)
except exceptions.MissingRequiredOptions:
auth = _get_legacy_auth(group, legacy_mapping, legacy_opts)
else:
if auth is None:
auth = _get_legacy_auth(group, legacy_mapping, legacy_opts)
return auth
def _get_legacy_auth(group, legacy_mapping, legacy_opts):
"""Load auth plugin from legacy options.
If legacy_opts is not empty, these options will be registered first.
legacy_mapping is a dict that maps the following keys to legacy option
names:
auth_url
username
password
tenant_name
"""
LOG.warning(_LW("Group [%s]: Using legacy auth loader is deprecated. "
"Consider specifying appropriate keystone auth plugin as "
"'auth_type' and corresponding plugin options."), group)
if legacy_opts:
for opt in legacy_opts:
try:
CONF.register_opt(opt, group=group)
except cfg.DuplicateOptError:
pass
conf = getattr(CONF, group)
auth_params = {a: getattr(conf, legacy_mapping[a])
for a in legacy_mapping}
legacy_loader = loading.get_plugin_loader('password')
# NOTE(pas-ha) only Swift had this option, take it into account
try:
auth_version = conf.get('os_auth_version')
except cfg.NoSuchOptError:
auth_version = None
# NOTE(pas-ha) mimic defaults of keystoneclient
if _is_apiv3(auth_params['auth_url'], auth_version):
auth_params.update({
'project_domain_id': 'default',
'user_domain_id': 'default'})
return legacy_loader.load_from_options(**auth_params)
# NOTE(pas-ha): for backward compat with legacy options loading only
def _is_apiv3(auth_url, auth_version):
"""Check if V3 version of API is being used or not.
This method inspects auth_url and auth_version, and checks whether V3
version of the API is being used or not.
When no auth_version is specified and auth_url is not a versioned
endpoint, v2.0 is assumed.
:param auth_url: a http or https url to be inspected (like
'http://127.0.0.1:9898/').
:param auth_version: a string containing the version (like 'v2', 'v3.0')
or None
:returns: True if V3 of the API is being used.
"""
return (auth_version in ('v3.0', '3') or
'/v3' in parse.urlparse(auth_url).path)
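# Spot checks of the heuristic above (endpoint URLs are hypothetical):
assert _is_apiv3('http://127.0.0.1:5000/v3', None)      # versioned URL
assert _is_apiv3('http://127.0.0.1:5000/', 'v3.0')      # explicit version
assert not _is_apiv3('http://127.0.0.1:5000/', None)    # assumes v2.0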
def add_auth_options(options, group):
def add_options(opts, opts_to_add):
for new_opt in opts_to_add:
for opt in opts:
if opt.name == new_opt.name:
break
else:
opts.append(new_opt)
opts = copy.deepcopy(options)
opts.insert(0, loading.get_auth_common_conf_options()[0])
# NOTE(dims): There are a lot of auth plugins, we just generate
# the config options for a few common ones
plugins = ['password', 'v2password', 'v3password']
for name in plugins:
plugin = loading.get_plugin_loader(name)
add_options(opts, loading.get_auth_plugin_conf_options(plugin))
add_options(opts, loading.get_session_conf_options())
opts.sort(key=lambda x: x.name)
return [(group, opts)]

View File

@ -16,19 +16,18 @@
import json
from oslo_config import cfg
from oslo_log import log
import six
from swiftclient import client as swift_client
from swiftclient import exceptions as swift_exceptions
from ironic_inspector.common.i18n import _
from ironic_inspector.common import keystone
from ironic_inspector import utils
CONF = cfg.CONF
LOG = log.getLogger('ironic_inspector.common.swift')
SWIFT_GROUP = 'swift'
SWIFT_OPTS = [
cfg.IntOpt('max_retries',
default=2,
@ -41,6 +40,32 @@ SWIFT_OPTS = [
cfg.StrOpt('container',
default='ironic-inspector',
help='Default Swift container to use when creating objects.'),
cfg.StrOpt('os_auth_version',
default='2',
help='Keystone authentication API version',
deprecated_for_removal=True,
deprecated_reason='Use options presented by configured '
'keystone auth plugin.'),
cfg.StrOpt('os_auth_url',
default='',
help='Keystone authentication URL',
deprecated_for_removal=True,
deprecated_reason='Use options presented by configured '
'keystone auth plugin.'),
cfg.StrOpt('os_service_type',
default='object-store',
help='Swift service type.'),
cfg.StrOpt('os_endpoint_type',
default='internalURL',
help='Swift endpoint type.'),
cfg.StrOpt('os_region',
help='Keystone region to get endpoint for.'),
]
# NOTE(pas-ha) these old options conflict with options exported by
# most used keystone auth plugins. Need to register them manually
# for the backward-compat case.
LEGACY_OPTS = [
cfg.StrOpt('username',
default='',
help='User name for accessing Swift API.'),
@ -51,59 +76,67 @@ SWIFT_OPTS = [
cfg.StrOpt('tenant_name',
default='',
help='Tenant name for accessing Swift API.'),
cfg.StrOpt('os_auth_version',
default='2',
help='Keystone authentication API version'),
cfg.StrOpt('os_auth_url',
default='',
help='Keystone authentication URL'),
cfg.StrOpt('os_service_type',
default='object-store',
help='Swift service type.'),
cfg.StrOpt('os_endpoint_type',
default='internalURL',
help='Swift endpoint type.'),
]
def list_opts():
return [
('swift', SWIFT_OPTS)
]
CONF.register_opts(SWIFT_OPTS, group='swift')
CONF.register_opts(SWIFT_OPTS, group=SWIFT_GROUP)
keystone.register_auth_opts(SWIFT_GROUP)
OBJECT_NAME_PREFIX = 'inspector_data'
SWIFT_SESSION = None
LEGACY_MAP = {
'auth_url': 'os_auth_url',
'username': 'username',
'password': 'password',
'tenant_name': 'tenant_name',
}
def reset_swift_session():
"""Reset the global session variable.
Mostly useful for unit tests.
"""
global SWIFT_SESSION
SWIFT_SESSION = None
class SwiftAPI(object):
"""API for communicating with Swift."""
def __init__(self, user=None, tenant_name=None, key=None,
auth_url=None, auth_version=None,
service_type=None, endpoint_type=None):
def __init__(self):
"""Constructor for creating a SwiftAPI object.
:param user: the name of the user for Swift account
:param tenant_name: the name of the tenant for Swift account
:param key: the 'password' or key to authenticate with
:param auth_url: the url for authentication
:param auth_version: the version of api to use for authentication
:param service_type: service type in the service catalog
:param endpoint_type: service endpoint type
Authentication is loaded from the config file.
"""
self.connection = swift_client.Connection(
retries=CONF.swift.max_retries,
user=user or CONF.swift.username,
tenant_name=tenant_name or CONF.swift.tenant_name,
key=key or CONF.swift.password,
authurl=auth_url or CONF.swift.os_auth_url,
auth_version=auth_version or CONF.swift.os_auth_version,
os_options={
'service_type': service_type or CONF.swift.os_service_type,
'endpoint_type': endpoint_type or CONF.swift.os_endpoint_type
}
global SWIFT_SESSION
if not SWIFT_SESSION:
SWIFT_SESSION = keystone.get_session(
SWIFT_GROUP, legacy_mapping=LEGACY_MAP,
legacy_auth_opts=LEGACY_OPTS)
# TODO(pas-ha): swiftclient does not support keystone sessions ATM.
# Must be reworked when LP bug #1518938 is fixed.
swift_url = SWIFT_SESSION.get_endpoint(
service_type=CONF.swift.os_service_type,
endpoint_type=CONF.swift.os_endpoint_type,
region_name=CONF.swift.os_region
)
token = SWIFT_SESSION.get_token()
params = dict(retries=CONF.swift.max_retries,
preauthurl=swift_url,
preauthtoken=token)
# NOTE(pas-ha): session.verify is for HTTPS URLs and can be
# - False (do not verify)
# - True (verify but try to locate system CA certificates)
# - Path (verify using specific CA certificate)
# This is normally handled inside the Session instance,
# but swiftclient still does not support sessions,
# so we need to reconstruct these options from Session here.
verify = SWIFT_SESSION.verify
params['insecure'] = not verify
if verify and isinstance(verify, six.string_types):
params['cacert'] = verify
self.connection = swift_client.Connection(**params)
def create_object(self, object, data, container=CONF.swift.container,
headers=None):
@ -160,25 +193,37 @@ class SwiftAPI(object):
return obj
def store_introspection_data(data, uuid):
def store_introspection_data(data, uuid, suffix=None):
"""Uploads introspection data to Swift.
:param data: data to store in Swift
:param uuid: UUID of the Ironic node that the data came from
:param suffix: optional suffix to add to the underlying swift
object name
:returns: name of the Swift object that the data is stored in
"""
swift_api = SwiftAPI()
swift_object_name = '%s-%s' % (OBJECT_NAME_PREFIX, uuid)
if suffix is not None:
swift_object_name = '%s-%s' % (swift_object_name, suffix)
swift_api.create_object(swift_object_name, json.dumps(data))
return swift_object_name
def get_introspection_data(uuid):
def get_introspection_data(uuid, suffix=None):
"""Downloads introspection data from Swift.
:param uuid: UUID of the Ironic node that the data came from
:param suffix: optional suffix to add to the underlying swift
object name
:returns: Swift object with the introspection data
"""
swift_api = SwiftAPI()
swift_object_name = '%s-%s' % (OBJECT_NAME_PREFIX, uuid)
if suffix is not None:
swift_object_name = '%s-%s' % (swift_object_name, suffix)
return swift_api.get_object(swift_object_name)
def list_opts():
return keystone.add_auth_options(SWIFT_OPTS, SWIFT_GROUP)
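# Object-name sketch for the helpers above (UUID and suffix hypothetical):
#   store_introspection_data(data, '1234')                -> inspector_data-1234
#   store_introspection_data(data, '1234', 'UNPROCESSED') -> inspector_data-1234-UNPROCESSED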

View File

@ -79,7 +79,7 @@ PROCESSING_OPTS = [
deprecated_group='discoverd'),
cfg.StrOpt('default_processing_hooks',
default='ramdisk_error,root_disk_selection,scheduler,'
'validate_interfaces',
'validate_interfaces,capabilities',
help='Comma-separated list of default hooks for processing '
'pipeline. Hook \'scheduler\' updates the node with the '
'minimum properties required by the Nova scheduler. '
@ -129,6 +129,17 @@ PROCESSING_OPTS = [
default=True,
help='Whether to log node BMC address with every message '
'during processing.'),
cfg.StrOpt('ramdisk_logs_filename_format',
default='{uuid}_{dt:%Y%m%d-%H%M%S.%f}.tar.gz',
help='File name template for storing ramdisk logs. The '
'following replacements can be used: '
'{uuid} - node UUID or "unknown", '
'{bmc} - node BMC address or "unknown", '
'{dt} - current UTC date and time, '
'{mac} - PXE booting MAC or "unknown".'),
cfg.BoolOpt('power_off',
default=True,
help='Whether to power off a node after introspection.'),
]
@ -146,10 +157,10 @@ SERVICE_OPTS = [
default='0.0.0.0',
help='IP to listen on.',
deprecated_group='discoverd'),
cfg.IntOpt('listen_port',
default=5050,
help='Port to listen on.',
deprecated_group='discoverd'),
cfg.PortOpt('listen_port',
default=5050,
help='Port to listen on.',
deprecated_group='discoverd'),
cfg.StrOpt('auth_strategy',
default='keystone',
choices=('keystone', 'noauth'),

View File

@ -39,7 +39,7 @@ def add_command_parsers(subparsers):
parser = add_alembic_command(subparsers, name)
parser.set_defaults(func=do_alembic_command)
for name in ['downgrade', 'stamp', 'show', 'edit']:
for name in ['stamp', 'show', 'edit']:
parser = add_alembic_command(subparsers, name)
parser.set_defaults(func=with_revision)
parser.add_argument('--revision', nargs='?', required=True)

View File

@ -135,7 +135,7 @@ def _temporary_chain(chain, main_chain):
def _disable_dhcp():
"""Disable DHCP completely."""
global ENABLED
global ENABLED, BLACKLIST_CACHE
if not ENABLED:
LOG.debug('DHCP is already disabled, not updating')
@ -143,6 +143,7 @@ def _disable_dhcp():
LOG.debug('No nodes on introspection and node_not_found_hook is '
'not set - disabling DHCP')
BLACKLIST_CACHE = None
with _temporary_chain(NEW_CHAIN, CHAIN):
# Blacklist everything
_iptables('-A', NEW_CHAIN, '-j', 'REJECT')
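# Standalone sketch of why BLACKLIST_CACHE joined the global statement above:
# without it, the assignment would only rebind a function-local name.
CACHE = ['stale']

def reset_wrong():
    CACHE = None    # creates a local; module-level CACHE keeps its stale value

def reset_right():
    global CACHE
    CACHE = None    # rebinds the module-level name, as _disable_dhcp requires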

View File

@ -18,7 +18,6 @@ import string
import time
from eventlet import semaphore
from ironicclient import exceptions
from oslo_config import cfg
from ironic_inspector.common.i18n import _, _LI, _LW
@ -64,23 +63,16 @@ def _validate_ipmi_credentials(node, new_ipmi_credentials):
return new_username, new_password
def introspect(uuid, new_ipmi_credentials=None, token=None):
def introspect(node_id, new_ipmi_credentials=None, token=None):
"""Initiate hardware properties introspection for a given node.
:param uuid: node uuid
:param node_id: node UUID or name
:param new_ipmi_credentials: tuple (new username, new password) or None
:param token: authentication token
:raises: Error
"""
ironic = ir_utils.get_client(token)
try:
node = ironic.node.get(uuid)
except exceptions.NotFound:
raise utils.Error(_("Cannot find node %s") % uuid, code=404)
except exceptions.HttpError as exc:
raise utils.Error(_("Cannot get node %(node)s: %(exc)s") %
{'node': uuid, 'exc': exc})
node = ir_utils.get_node(node_id, ironic=ironic)
ir_utils.check_provision_state(node, with_credentials=new_ipmi_credentials)
@ -179,16 +171,16 @@ def _background_introspect_locked(ironic, node_info):
node_info=node_info)
def abort(uuid, token=None):
def abort(node_id, token=None):
"""Abort running introspection.
:param uuid: node uuid
:param node_id: node UUID or name
:param token: authentication token
:raises: Error
"""
LOG.debug('Aborting introspection for node %s', uuid)
LOG.debug('Aborting introspection for node %s', node_id)
ironic = ir_utils.get_client(token)
node_info = node_cache.get_node(uuid, ironic=ironic, locked=False)
node_info = node_cache.get_node(node_id, ironic=ironic, locked=False)
# check pending operations
locked = node_info.acquire_lock(blocking=False)
@ -209,15 +201,6 @@ def _abort(node_info, ironic):
node_info.release_lock()
return
# block this node from PXE Booting the introspection image
try:
firewall.update_filters(ironic)
except Exception as exc:
# Note(mkovacik): this will be retried in firewall update
# periodic task; we continue aborting
LOG.warning(_LW('Failed to update firewall filters: %s'), exc,
node_info=node_info)
# finish the introspection
LOG.debug('Forcing power-off', node_info=node_info)
try:
@ -227,4 +210,13 @@ def _abort(node_info, ironic):
node_info=node_info)
node_info.finished(error=_('Canceled by operator'))
# block this node from PXE Booting the introspection image
try:
firewall.update_filters(ironic)
except Exception as exc:
# Note(mkovacik): this will be retried in firewall update
# periodic task; we continue aborting
LOG.warning(_LW('Failed to update firewall filters: %s'), exc,
node_info=node_info)
LOG.info(_LI('Introspection aborted'), node_info=node_info)

View File

@ -47,15 +47,26 @@ app = flask.Flask(__name__)
LOG = utils.getProcessingLogger(__name__)
MINIMUM_API_VERSION = (1, 0)
CURRENT_API_VERSION = (1, 3)
CURRENT_API_VERSION = (1, 6)
_LOGGING_EXCLUDED_KEYS = ('logs',)
def _get_version():
ver = flask.request.headers.get(conf.VERSION_HEADER,
_DEFAULT_API_VERSION)
try:
requested = tuple(int(x) for x in ver.split('.'))
except (ValueError, TypeError):
return error_response(_('Malformed API version: expected string '
'in form of X.Y'), code=400)
return requested
def _format_version(ver):
return '%d.%d' % ver
_DEFAULT_API_VERSION = _format_version(MINIMUM_API_VERSION)
_DEFAULT_API_VERSION = _format_version(CURRENT_API_VERSION)
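# Version math sketch: header strings parse into int tuples, so plain tuple
# comparison implements the range check performed in check_api_version():
assert tuple(int(x) for x in '1.6'.split('.')) == (1, 6)
assert MINIMUM_API_VERSION <= (1, 3) <= CURRENT_API_VERSION
assert _format_version((1, 6)) == '1.6'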
def error_response(exc, code=500):
@ -86,13 +97,7 @@ def convert_exceptions(func):
@app.before_request
def check_api_version():
requested = flask.request.headers.get(conf.VERSION_HEADER,
_DEFAULT_API_VERSION)
try:
requested = tuple(int(x) for x in requested.split('.'))
except (ValueError, TypeError):
return error_response(_('Malformed API version: expected string '
'in form of X.Y'), code=400)
requested = _get_version()
if requested < MINIMUM_API_VERSION or requested > CURRENT_API_VERSION:
return error_response(_('Unsupported API version %(requested)s, '
@ -178,14 +183,11 @@ def api_continue():
# TODO(sambetts) Add API discovery for this endpoint
@app.route('/v1/introspection/<uuid>', methods=['GET', 'POST'])
@app.route('/v1/introspection/<node_id>', methods=['GET', 'POST'])
@convert_exceptions
def api_introspection(uuid):
def api_introspection(node_id):
utils.check_auth(flask.request)
if not uuidutils.is_uuid_like(uuid):
raise utils.Error(_('Invalid UUID value'), code=400)
if flask.request.method == 'POST':
new_ipmi_password = flask.request.args.get('new_ipmi_password',
type=str,
@ -198,34 +200,34 @@ def api_introspection(uuid):
else:
new_ipmi_credentials = None
introspect.introspect(uuid,
introspect.introspect(node_id,
new_ipmi_credentials=new_ipmi_credentials,
token=flask.request.headers.get('X-Auth-Token'))
return '', 202
else:
node_info = node_cache.get_node(uuid)
node_info = node_cache.get_node(node_id)
return flask.json.jsonify(finished=bool(node_info.finished_at),
error=node_info.error or None)
@app.route('/v1/introspection/<uuid>/abort', methods=['POST'])
@app.route('/v1/introspection/<node_id>/abort', methods=['POST'])
@convert_exceptions
def api_introspection_abort(uuid):
def api_introspection_abort(node_id):
utils.check_auth(flask.request)
if not uuidutils.is_uuid_like(uuid):
raise utils.Error(_('Invalid UUID value'), code=400)
introspect.abort(uuid, token=flask.request.headers.get('X-Auth-Token'))
introspect.abort(node_id, token=flask.request.headers.get('X-Auth-Token'))
return '', 202
@app.route('/v1/introspection/<uuid>/data', methods=['GET'])
@app.route('/v1/introspection/<node_id>/data', methods=['GET'])
@convert_exceptions
def api_introspection_data(uuid):
def api_introspection_data(node_id):
utils.check_auth(flask.request)
if CONF.processing.store_data == 'swift':
res = swift.get_introspection_data(uuid)
if not uuidutils.is_uuid_like(node_id):
node = ir_utils.get_node(node_id, fields=['uuid'])
node_id = node.uuid
res = swift.get_introspection_data(node_id)
return res, 200, {'Content-Type': 'application/json'}
else:
return error_response(_('Inspector is not configured to store data. '
@ -234,6 +236,25 @@ def api_introspection_data(uuid):
code=404)
@app.route('/v1/introspection/<node_id>/data/unprocessed', methods=['POST'])
@convert_exceptions
def api_introspection_reapply(node_id):
utils.check_auth(flask.request)
if flask.request.content_length:
return error_response(_('User data processing is not '
'supported yet'), code=400)
if CONF.processing.store_data == 'swift':
process.reapply(node_id)
return '', 202
else:
return error_response(_('Inspector is not configured to store'
' data. Set the [processing] '
'store_data configuration option to '
'change this.'), code=400)
def rule_repr(rule, short):
result = rule.as_dict(short=short)
result['links'] = [{
@ -263,7 +284,10 @@ def api_rules():
actions_json=body.get('actions', []),
uuid=body.get('uuid'),
description=body.get('description'))
return flask.jsonify(rule_repr(rule, short=False))
response_code = (200 if _get_version() < (1, 6) else 201)
return flask.make_response(
flask.jsonify(rule_repr(rule, short=False)), response_code)
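# Behaviour sketch (assuming the version header defined by conf.VERSION_HEADER,
# X-OpenStack-Ironic-Inspector-API-Version; status codes per the logic above):
#   POST /v1/rules with API version >= 1.6 -> 201 Created
#   POST /v1/rules with API version <  1.6 -> 200 OK (previous behaviour)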
@app.route('/v1/rules/<uuid>', methods=['GET', 'DELETE'])
@ -351,7 +375,6 @@ class Service(object):
log.set_defaults(default_log_levels=[
'sqlalchemy=WARNING',
'keystoneclient=INFO',
'iso8601=WARNING',
'requests=WARNING',
'urllib3.connectionpool=WARNING',
@ -388,8 +411,9 @@ class Service(object):
hooks = [ext.name for ext in
plugins_base.processing_hooks_manager()]
except KeyError as exc:
# stevedore raises KeyError on missing hook
LOG.critical(_LC('Hook %s failed to load or was not found'),
# callback function raises MissingHookError derived from KeyError
# on missing hook
LOG.critical(_LC('Hook(s) %s failed to load or were not found'),
str(exc))
sys.exit(1)

View File

@ -30,7 +30,3 @@ ${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

View File

@ -61,9 +61,3 @@ def upgrade():
mysql_ENGINE='InnoDB',
mysql_DEFAULT_CHARSET='UTF8'
)
def downgrade():
op.drop_table('nodes')
op.drop_table('attributes')
op.drop_table('options')

View File

@ -62,9 +62,3 @@ def upgrade():
mysql_ENGINE='InnoDB',
mysql_DEFAULT_CHARSET='UTF8'
)
def downgrade():
op.drop_table('rules')
op.drop_table('rule_conditions')
op.drop_table('rule_actions')

View File

@ -31,7 +31,3 @@ import sqlalchemy as sa
def upgrade():
op.add_column('rule_conditions', sa.Column('invert', sa.Boolean(),
nullable=True, default=False))
def downgrade():
op.drop_column('rule_conditions', 'invert')

View File

@ -22,6 +22,7 @@ from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_db import exception as db_exc
from oslo_utils import excutils
from oslo_utils import uuidutils
from sqlalchemy import text
from ironic_inspector import db
@ -201,11 +202,11 @@ class NodeInfo(object):
self._attributes = None
@classmethod
def from_row(cls, row, ironic=None, lock=None):
def from_row(cls, row, ironic=None, lock=None, node=None):
"""Construct NodeInfo from a database row."""
fields = {key: row[key]
for key in ('uuid', 'started_at', 'finished_at', 'error')}
return cls(ironic=ironic, lock=lock, **fields)
return cls(ironic=ironic, lock=lock, node=node, **fields)
def invalidate_cache(self):
"""Clear all cached info, so that it's reloaded next time."""
@ -215,25 +216,29 @@ class NodeInfo(object):
self._attributes = None
self._ironic = None
def node(self):
def node(self, ironic=None):
"""Get Ironic node object associated with the cached node record."""
if self._node is None:
self._node = self.ironic.node.get(self.uuid)
ironic = ironic or self.ironic
self._node = ir_utils.get_node(self.uuid, ironic=ironic)
return self._node
def create_ports(self, macs):
def create_ports(self, macs, ironic=None):
"""Create one or several ports for this node.
A warning is issued if port already exists on a node.
"""
existing_macs = []
for mac in macs:
if mac not in self.ports():
self._create_port(mac)
self._create_port(mac, ironic)
else:
LOG.warning(_LW('Port %s already exists, skipping'),
mac, node_info=self)
existing_macs.append(mac)
if existing_macs:
LOG.warning(_LW('Did not create ports %s as they already exist'),
existing_macs, node_info=self)
def ports(self):
def ports(self, ironic=None):
"""Get Ironic port objects associated with the cached node record.
This value is cached as well, use invalidate_cache() to clean.
@ -241,13 +246,15 @@ class NodeInfo(object):
:return: dict MAC -> port object
"""
if self._ports is None:
ironic = ironic or self.ironic
self._ports = {p.address: p for p in
self.ironic.node.list_ports(self.uuid, limit=0)}
ironic.node.list_ports(self.uuid, limit=0)}
return self._ports
def _create_port(self, mac):
def _create_port(self, mac, ironic=None):
ironic = ironic or self.ironic
try:
port = self.ironic.port.create(node_uuid=self.uuid, address=mac)
port = ironic.port.create(node_uuid=self.uuid, address=mac)
except exceptions.Conflict:
LOG.warning(_LW('Port %s already exists, skipping'),
mac, node_info=self)
@ -257,14 +264,16 @@ class NodeInfo(object):
else:
self._ports[mac] = port
def patch(self, patches):
def patch(self, patches, ironic=None):
"""Apply JSON patches to a node.
Refreshes cached node instance.
:param patches: JSON patches to apply
:param ironic: Ironic client to use instead of self.ironic
:raises: ironicclient exceptions
"""
ironic = ironic or self.ironic
# NOTE(aarefiev): support path w/o ahead forward slash
# as Ironic cli does
for patch in patches:
@ -272,14 +281,16 @@ class NodeInfo(object):
patch['path'] = '/' + patch['path']
LOG.debug('Updating node with patches %s', patches, node_info=self)
self._node = self.ironic.node.update(self.uuid, patches)
self._node = ironic.node.update(self.uuid, patches)
def patch_port(self, port, patches):
def patch_port(self, port, patches, ironic=None):
"""Apply JSON patches to a port.
:param port: port object or its MAC
:param patches: JSON patches to apply
:param ironic: Ironic client to use instead of self.ironic
"""
ironic = ironic or self.ironic
ports = self.ports()
if isinstance(port, str):
port = ports[port]
@ -287,39 +298,45 @@ class NodeInfo(object):
LOG.debug('Updating port %(mac)s with patches %(patches)s',
{'mac': port.address, 'patches': patches},
node_info=self)
new_port = self.ironic.port.update(port.uuid, patches)
new_port = ironic.port.update(port.uuid, patches)
ports[port.address] = new_port
def update_properties(self, **props):
def update_properties(self, ironic=None, **props):
"""Update properties on a node.
:param props: properties to update
:param ironic: Ironic client to use instead of self.ironic
"""
ironic = ironic or self.ironic
patches = [{'op': 'add', 'path': '/properties/%s' % k, 'value': v}
for k, v in props.items()]
self.patch(patches)
self.patch(patches, ironic)
def update_capabilities(self, **caps):
def update_capabilities(self, ironic=None, **caps):
"""Update capabilities on a node.
:param props: capabilities to update
:param caps: capabilities to update
:param ironic: Ironic client to use instead of self.ironic
"""
existing = ir_utils.capabilities_to_dict(
self.node().properties.get('capabilities'))
existing.update(caps)
self.update_properties(
ironic=ironic,
capabilities=ir_utils.dict_to_capabilities(existing))
def delete_port(self, port):
def delete_port(self, port, ironic=None):
"""Delete port.
:param port: port object or its MAC
:param ironic: Ironic client to use instead of self.ironic
"""
ironic = ironic or self.ironic
ports = self.ports()
if isinstance(port, str):
port = ports[port]
self.ironic.port.delete(port.uuid)
ironic.port.delete(port.uuid)
del ports[port.address]
def get_by_path(self, path):
@ -349,6 +366,7 @@ class NodeInfo(object):
:raises: KeyError if value is not found and default is not set
:raises: everything that patch() may raise
"""
ironic = kwargs.pop("ironic", None) or self.ironic
try:
value = self.get_by_path(path)
op = 'replace'
@ -362,7 +380,7 @@ class NodeInfo(object):
ref_value = copy.deepcopy(value)
value = func(value)
if value != ref_value:
self.patch([{'op': op, 'path': path, 'value': value}])
self.patch([{'op': op, 'path': path, 'value': value}], ironic)
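# Pattern sketch for the new ironic= overrides (hook and values hypothetical):
# callers can thread one pre-built client through several cache helpers.
def _example_hook(node_info):
    cli = ir_utils.get_client(api_version='1.19')  # hypothetical API version
    node_info.patch([{'op': 'add', 'path': '/extra/foo', 'value': 'bar'}], cli)
    node_info.update_capabilities(ironic=cli, cpu_vt='true')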
def add_node(uuid, **attributes):
@ -438,14 +456,21 @@ def _list_node_uuids():
return {x.uuid for x in db.model_query(db.Node.uuid)}
def get_node(uuid, ironic=None, locked=False):
"""Get node from cache by it's UUID.
def get_node(node_id, ironic=None, locked=False):
"""Get node from cache.
:param uuid: node UUID.
:param node_id: node UUID or name.
:param ironic: optional ironic client instance
:param locked: if True, get a lock on node before fetching its data
:returns: structure NodeInfo.
"""
if uuidutils.is_uuid_like(node_id):
node = None
uuid = node_id
else:
node = ir_utils.get_node(node_id, ironic=ironic)
uuid = node.uuid
if locked:
lock = _get_lock(uuid)
lock.acquire()
@ -457,7 +482,7 @@ def get_node(uuid, ironic=None, locked=False):
if row is None:
raise utils.Error(_('Could not find node %s in cache') % uuid,
code=404)
return NodeInfo.from_row(row, ironic=ironic, lock=lock)
return NodeInfo.from_row(row, ironic=ironic, lock=lock, node=node)
except Exception:
with excutils.save_and_reraise_exception():
if lock is not None:
@ -578,7 +603,7 @@ def clean_up():
return uuids
def create_node(driver, ironic=None, **attributes):
"""Create ironic node and cache it.
* Create new node in ironic.

View File

@ -149,6 +149,12 @@ _CONDITIONS_MGR = None
_ACTIONS_MGR = None
def missing_entrypoints_callback(names):
"""Raise MissingHookError with comma-separated list of missing hooks"""
missing_names = ', '.join(names)
raise MissingHookError(missing_names)
def processing_hooks_manager(*args):
"""Create a Stevedore extension manager for processing hooks.
@ -164,6 +170,7 @@ def processing_hooks_manager(*args):
names=names,
invoke_on_load=True,
invoke_args=args,
on_missing_entrypoints_callback=missing_entrypoints_callback,
name_order=True)
return _HOOKS_MGR
@ -204,3 +211,7 @@ def rule_actions_manager():
'actions is deprecated (action "%s")'),
act.name)
return _ACTIONS_MGR
class MissingHookError(KeyError):
"""Exception when hook is not found when processing it."""

View File

@ -0,0 +1,101 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Gather capabilities from inventory."""
from oslo_config import cfg
from ironic_inspector.common.i18n import _LI, _LW
from ironic_inspector.plugins import base
from ironic_inspector import utils
DEFAULT_CPU_FLAGS_MAPPING = {
'vmx': 'cpu_vt',
'svm': 'cpu_vt',
'aes': 'cpu_aes',
'pse': 'cpu_hugepages',
'pdpe1gb': 'cpu_hugepages_1g',
'smx': 'cpu_txt',
}
CAPABILITIES_OPTS = [
cfg.BoolOpt('boot_mode',
default=False,
help='Whether to store the boot mode (BIOS or UEFI).'),
cfg.DictOpt('cpu_flags',
default=DEFAULT_CPU_FLAGS_MAPPING,
help='Mapping between a CPU flag and a capability to set '
'if this flag is present.'),
]
def list_opts():
return [
('capabilities', CAPABILITIES_OPTS)
]
CONF = cfg.CONF
CONF.register_opts(CAPABILITIES_OPTS, group='capabilities')
LOG = utils.getProcessingLogger(__name__)
class CapabilitiesHook(base.ProcessingHook):
"""Processing hook for detecting capabilities."""
def _detect_boot_mode(self, inventory, node_info, data=None):
boot_mode = inventory.get('boot', {}).get('current_boot_mode')
if boot_mode is not None:
LOG.info(_LI('Boot mode was %s'), boot_mode,
data=data, node_info=node_info)
return {'boot_mode': boot_mode}
else:
LOG.warning(_LW('No boot mode information available'),
data=data, node_info=node_info)
return {}
def _detect_cpu_flags(self, inventory, node_info, data=None):
flags = inventory['cpu'].get('flags')
if not flags:
LOG.warning(_LW('No CPU flags available, please update your '
'introspection ramdisk'),
data=data, node_info=node_info)
return {}
flags = set(flags)
caps = {}
for flag, name in CONF.capabilities.cpu_flags.items():
if flag in flags:
caps[name] = 'true'
LOG.info(_LI('CPU capabilities: %s'), list(caps),
data=data, node_info=node_info)
return caps
def before_update(self, introspection_data, node_info, **kwargs):
inventory = utils.get_inventory(introspection_data)
caps = {}
if CONF.capabilities.boot_mode:
caps.update(self._detect_boot_mode(inventory, node_info,
introspection_data))
caps.update(self._detect_cpu_flags(inventory, node_info,
introspection_data))
if caps:
LOG.debug('New capabilities: %s', caps, node_info=node_info,
data=introspection_data)
node_info.update_capabilities(**caps)
else:
LOG.debug('No new capabilities detected', node_info=node_info,
data=introspection_data)
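# Worked example for before_update() (flags are hypothetical ramdisk data):
flags = {'vmx', 'aes', 'pse'}
caps = {name: 'true'
        for flag, name in DEFAULT_CPU_FLAGS_MAPPING.items()
        if flag in flags}
# caps == {'cpu_vt': 'true', 'cpu_aes': 'true', 'cpu_hugepages': 'true'}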

View File

@ -52,7 +52,7 @@ def _extract_node_driver_info(introspection_data):
def _check_existing_nodes(introspection_data, node_driver_info, ironic):
macs = introspection_data.get('macs')
macs = utils.get_valid_macs(introspection_data)
if macs:
# verify existing ports
for mac in macs:

View File

@ -96,7 +96,7 @@ class ExtraHardwareHook(base.ProcessingHook):
try:
item[3] = int(item[3])
except ValueError:
except (ValueError, TypeError):
pass
converted_1[item[2]] = item[3]

View File

@ -0,0 +1,122 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generic LLDP Processing Hook"""
import binascii
from ironicclient import exc as client_exc
import netaddr
from oslo_config import cfg
from ironic_inspector.common.i18n import _LW, _LE
from ironic_inspector.common import ironic
from ironic_inspector.plugins import base
from ironic_inspector import utils
LOG = utils.getProcessingLogger(__name__)
# NOTE(sambetts) Constants defined according to IEEE standard for LLDP
# http://standards.ieee.org/getieee802/download/802.1AB-2009.pdf
LLDP_TLV_TYPE_CHASSIS_ID = 1
LLDP_TLV_TYPE_PORT_ID = 2
PORT_ID_SUBTYPE_MAC = 3
PORT_ID_SUBTYPE_IFNAME = 5
PORT_ID_SUBTYPE_LOCAL = 7
STRING_PORT_SUBTYPES = [PORT_ID_SUBTYPE_IFNAME, PORT_ID_SUBTYPE_LOCAL]
CHASSIS_ID_SUBTYPE_MAC = 4
CONF = cfg.CONF
REQUIRED_IRONIC_VERSION = '1.19'
class GenericLocalLinkConnectionHook(base.ProcessingHook):
"""Process mandatory LLDP packet fields
Non-vendor-specific LLDP packet fields (port ID and chassis ID) are
processed for each NIC found on a baremetal node. If found and valid,
these fields are saved into the port_id and switch_id entries of the
local link connection info on the Ironic port representing that NIC.
"""
def _get_local_link_patch(self, tlv_type, tlv_value, port):
try:
data = bytearray(binascii.unhexlify(tlv_value))
except TypeError:
LOG.warning(_LW("TLV value for TLV type %d not in correct"
"format, ensure TLV value is in "
"hexidecimal format when sent to "
"inspector"), tlv_type)
return
item = value = None
if tlv_type == LLDP_TLV_TYPE_PORT_ID:
# Check to ensure the port id is an allowed type
item = "port_id"
if data[0] in STRING_PORT_SUBTYPES:
value = data[1:].decode()
if data[0] == PORT_ID_SUBTYPE_MAC:
value = str(netaddr.EUI(
binascii.hexlify(data[1:]).decode()))
elif tlv_type == LLDP_TLV_TYPE_CHASSIS_ID:
# Check to ensure the chassis id is the allowed type
if data[0] == CHASSIS_ID_SUBTYPE_MAC:
item = "switch_id"
value = str(netaddr.EUI(
binascii.hexlify(data[1:]).decode()))
if item and value:
if (not CONF.processing.overwrite_existing and
item in port.local_link_connection):
return
return {'op': 'add',
'path': '/local_link_connection/%s' % item,
'value': value}
def before_update(self, introspection_data, node_info, **kwargs):
"""Process LLDP data and patch Ironic port local link connection"""
inventory = utils.get_inventory(introspection_data)
ironic_ports = node_info.ports()
for iface in inventory['interfaces']:
if iface['name'] not in introspection_data['all_interfaces']:
continue
port = ironic_ports[iface['mac_address']]
lldp_data = iface.get('lldp')
if lldp_data is None:
LOG.warning(_LW("No LLDP Data found for interface %s"), iface)
continue
patches = []
for tlv_type, tlv_value in lldp_data:
patch = self._get_local_link_patch(tlv_type, tlv_value, port)
if patch is not None:
patches.append(patch)
try:
# NOTE(sambetts) We need a newer version of Ironic API for this
# transaction, so create a new ironic client and explicitly
# pass it into the function.
cli = ironic.get_client(api_version=REQUIRED_IRONIC_VERSION)
node_info.patch_port(iface['mac_address'], patches, ironic=cli)
except client_exc.NotAcceptable:
LOG.error(_LE("Unable to set Ironic port local link "
"connection information because Ironic does not "
"support the required version"))
# NOTE(sambetts) May as well break out out of the loop here
# because Ironic version is not going to change for the other
# interfaces.
break
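# Decoding sketch for _get_local_link_patch() (TLV payload hypothetical):
# a chassis-ID TLV value of '04aabbccddeeff' is subtype 4 (MAC) + six bytes.
payload = bytearray(binascii.unhexlify('04aabbccddeeff'))
assert payload[0] == CHASSIS_ID_SUBTYPE_MAC
switch_id = str(netaddr.EUI(binascii.hexlify(payload[1:]).decode()))
# switch_id == 'AA-BB-CC-DD-EE-FF'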

View File

@ -102,10 +102,3 @@ class RaidDeviceDetection(base.ProcessingHook):
node_info.patch([{'op': 'add',
'path': '/extra/block_devices',
'value': {'serials': current_devices}}])
class RootDeviceHintHook(RaidDeviceDetection):
def __init__(self):
LOG.warning(_LW('Using the root_device_hint alias for the '
'raid_device plugin is deprecated'))
super(RaidDeviceDetection, self).__init__()

View File

@ -23,9 +23,6 @@ from ironic_inspector.plugins import base
from ironic_inspector import utils
LOG = utils.getProcessingLogger(__name__)
def coerce(value, expected):
if isinstance(expected, float):
return float(value)
@ -69,6 +66,7 @@ class NeCondition(SimpleCondition):
class EmptyCondition(base.RuleConditionPlugin):
REQUIRED_PARAMS = set()
ALLOW_NONE = True
def check(self, node_info, field, params, **kwargs):
return field in ('', None, [], {})

View File

@ -13,9 +13,6 @@
"""Standard set of plugins."""
import base64
import datetime
import os
import sys
import netaddr
@ -49,19 +46,18 @@ class RootDiskSelectionHook(base.ProcessingHook):
node_info=node_info, data=introspection_data)
return
inventory = introspection_data.get('inventory')
if not inventory:
raise utils.Error(
_('Root device selection requires ironic-python-agent '
'as an inspection ramdisk'),
node_info=node_info, data=introspection_data)
if 'size' in hints:
# Special case to match IPA behaviour
try:
hints['size'] = int(hints['size'])
except (TypeError, ValueError):
raise utils.Error(_('Invalid root device size hint, expected '
'an integer, got %s') % hints['size'],
node_info=node_info, data=introspection_data)
disks = inventory.get('disks', [])
if not disks:
raise utils.Error(_('No disks found'),
node_info=node_info, data=introspection_data)
for disk in disks:
inventory = utils.get_inventory(introspection_data,
node_info=node_info)
for disk in inventory['disks']:
properties = disk.copy()
# Root device hints are in GiB, data from IPA is in bytes
properties['size'] //= units.Gi
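# Unit sketch: hints are compared in GiB while IPA reports bytes; a nominal
# "1 TB" disk (hypothetical size in bytes) matches a size hint of 931:
assert 1000204886016 // units.Gi == 931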
@ -94,7 +90,8 @@ class SchedulerHook(base.ProcessingHook):
def before_update(self, introspection_data, node_info, **kwargs):
"""Update node with scheduler properties."""
inventory = introspection_data.get('inventory')
inventory = utils.get_inventory(introspection_data,
node_info=node_info)
errors = []
root_disk = introspection_data.get('root_disk')
@ -102,40 +99,25 @@ class SchedulerHook(base.ProcessingHook):
introspection_data['local_gb'] = root_disk['size'] // units.Gi
if CONF.processing.disk_partitioning_spacing:
introspection_data['local_gb'] -= 1
elif inventory:
else:
errors.append(_('root disk is not supplied by the ramdisk and '
'root_disk_selection hook is not enabled'))
if inventory:
try:
introspection_data['cpus'] = int(inventory['cpu']['count'])
introspection_data['cpu_arch'] = six.text_type(
inventory['cpu']['architecture'])
except (KeyError, ValueError, TypeError):
errors.append(_('malformed or missing CPU information: %s') %
inventory.get('cpu'))
try:
introspection_data['cpus'] = int(inventory['cpu']['count'])
introspection_data['cpu_arch'] = six.text_type(
inventory['cpu']['architecture'])
except (KeyError, ValueError, TypeError):
errors.append(_('malformed or missing CPU information: %s') %
inventory.get('cpu'))
try:
introspection_data['memory_mb'] = int(
inventory['memory']['physical_mb'])
except (KeyError, ValueError, TypeError):
errors.append(_('malformed or missing memory information: %s; '
'introspection requires physical memory size '
'from dmidecode') %
inventory.get('memory'))
else:
LOG.warning(_LW('No inventory provided: using old bash ramdisk '
'is deprecated, please switch to '
'ironic-python-agent'),
node_info=node_info, data=introspection_data)
missing = [key for key in self.KEYS
if not introspection_data.get(key)]
if missing:
raise utils.Error(
_('The following required parameters are missing: %s') %
missing,
node_info=node_info, data=introspection_data)
try:
introspection_data['memory_mb'] = int(
inventory['memory']['physical_mb'])
except (KeyError, ValueError, TypeError):
errors.append(_('malformed or missing memory information: %s; '
'introspection requires physical memory size '
'from dmidecode') % inventory.get('memory'))
if errors:
raise utils.Error(_('The following problems were encountered: %s') %
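For orientation, a sketch of the properties this hook derives from a typical
IPA inventory; the numbers are illustrative and match the fixtures used in
the tests further down::

    from oslo_utils import units

    inventory = {'cpu': {'count': 4, 'architecture': 'x86_64'},
                 'memory': {'physical_mb': 12288}}
    root_disk = {'size': 1000 * units.Gi}

    derived = {
        'cpus': int(inventory['cpu']['count']),
        'cpu_arch': inventory['cpu']['architecture'],
        'memory_mb': int(inventory['memory']['physical_mb']),
        # one GiB is subtracted when disk_partitioning_spacing is enabled
        'local_gb': root_disk['size'] // units.Gi - 1,
    }
    # {'cpus': 4, 'cpu_arch': 'x86_64', 'memory_mb': 12288, 'local_gb': 999}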
@ -178,28 +160,36 @@ class ValidateInterfacesHook(base.ProcessingHook):
:return: dict interface name -> dict with keys 'mac' and 'ip'
"""
result = {}
inventory = data.get('inventory', {})
inventory = utils.get_inventory(data)
if inventory:
for iface in inventory.get('interfaces', ()):
name = iface.get('name')
mac = iface.get('mac_address')
ip = iface.get('ipv4_address')
for iface in inventory['interfaces']:
name = iface.get('name')
mac = iface.get('mac_address')
ip = iface.get('ipv4_address')
if not name:
LOG.error(_LE('Malformed interface record: %s'),
iface, data=data)
continue
if not name:
LOG.error(_LE('Malformed interface record: %s'),
iface, data=data)
continue
LOG.debug('Found interface %(name)s with MAC "%(mac)s" and '
'IP address "%(ip)s"',
{'name': name, 'mac': mac, 'ip': ip}, data=data)
result[name] = {'ip': ip, 'mac': mac}
else:
LOG.warning(_LW('No inventory provided: using old bash ramdisk '
'is deprecated, please switch to '
'ironic-python-agent'), data=data)
result = data.get('interfaces')
if not mac:
LOG.debug('Skipping interface %s without link information',
name, data=data)
continue
if not utils.is_valid_mac(mac):
LOG.warning(_LW('MAC %(mac)s for interface %(name)s is '
'not valid, skipping'),
{'mac': mac, 'name': name},
data=data)
continue
mac = mac.lower()
LOG.debug('Found interface %(name)s with MAC "%(mac)s" and '
'IP address "%(ip)s"',
{'name': name, 'mac': mac, 'ip': ip}, data=data)
result[name] = {'ip': ip, 'mac': mac}
return result
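The per-interface checks above boil down to: a name is mandatory, a MAC is
mandatory and must be well-formed, and MACs are normalized to lower case. A
rough stand-in for ``utils.is_valid_mac`` (an assumption about its shape, not
the actual helper)::

    import re

    def is_valid_mac(mac):
        # assumed: six colon-separated hex octets, as in ironic
        return bool(re.match(r'^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$', mac))

    print(is_valid_mac('11:22:33:44:55:66'))  # True
    print(is_valid_mac('foobar'))             # False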
@ -223,20 +213,6 @@ class ValidateInterfacesHook(base.ProcessingHook):
mac = iface.get('mac')
ip = iface.get('ip')
if not mac:
LOG.debug('Skipping interface %s without link information',
name, data=data)
continue
if not utils.is_valid_mac(mac):
LOG.warning(_LW('MAC %(mac)s for interface %(name)s is not '
'valid, skipping'),
{'mac': mac, 'name': name},
data=data)
continue
mac = mac.lower()
if name == 'lo' or (ip and netaddr.IPAddress(ip).is_loopback()):
LOG.debug('Skipping local interface %s', name, data=data)
continue
@ -311,40 +287,8 @@ class ValidateInterfacesHook(base.ProcessingHook):
class RamdiskErrorHook(base.ProcessingHook):
"""Hook to process error send from the ramdisk."""
DATETIME_FORMAT = '%Y.%m.%d_%H.%M.%S_%f'
def before_processing(self, introspection_data, **kwargs):
error = introspection_data.get('error')
logs = introspection_data.get('logs')
if error or CONF.processing.always_store_ramdisk_logs:
if logs:
self._store_logs(logs, introspection_data)
else:
LOG.debug('No logs received from the ramdisk',
data=introspection_data)
if error:
raise utils.Error(_('Ramdisk reported error: %s') % error,
data=introspection_data)
def _store_logs(self, logs, introspection_data):
if not CONF.processing.ramdisk_logs_dir:
LOG.warning(
_LW('Failed to store logs received from the ramdisk '
'because ramdisk_logs_dir configuration option '
'is not set'),
data=introspection_data)
return
if not os.path.exists(CONF.processing.ramdisk_logs_dir):
os.makedirs(CONF.processing.ramdisk_logs_dir)
time_fmt = datetime.datetime.utcnow().strftime(self.DATETIME_FORMAT)
bmc_address = introspection_data.get('ipmi_address', 'unknown')
file_name = 'bmc_%s_%s' % (bmc_address, time_fmt)
with open(os.path.join(CONF.processing.ramdisk_logs_dir, file_name),
'wb') as fp:
fp.write(base64.b64decode(logs))
LOG.info(_LI('Ramdisk logs stored in file %s'), file_name,
data=introspection_data)

View File

@ -13,11 +13,18 @@
"""Handling introspection data from the ramdisk."""
import eventlet
from ironicclient import exceptions
from oslo_config import cfg
import base64
import copy
import datetime
import os
from ironic_inspector.common.i18n import _, _LE, _LI
import eventlet
import json
from oslo_config import cfg
from oslo_utils import excutils
from ironic_inspector.common.i18n import _, _LE, _LI, _LW
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector.common import swift
from ironic_inspector import firewall
@ -33,13 +40,53 @@ LOG = utils.getProcessingLogger(__name__)
_CREDENTIALS_WAIT_RETRIES = 10
_CREDENTIALS_WAIT_PERIOD = 3
_STORAGE_EXCLUDED_KEYS = {'logs'}
_UNPROCESSED_DATA_STORE_SUFFIX = 'UNPROCESSED'
def _store_logs(introspection_data, node_info):
logs = introspection_data.get('logs')
if not logs:
LOG.warning(_LW('No logs were passed by the ramdisk'),
data=introspection_data, node_info=node_info)
return
if not CONF.processing.ramdisk_logs_dir:
LOG.warning(_LW('Failed to store logs received from the ramdisk '
'because ramdisk_logs_dir configuration option '
'is not set'),
data=introspection_data, node_info=node_info)
return
fmt_args = {
'uuid': node_info.uuid if node_info is not None else 'unknown',
'mac': (utils.get_pxe_mac(introspection_data) or
'unknown').replace(':', ''),
'dt': datetime.datetime.utcnow(),
'bmc': (utils.get_ipmi_address_from_data(introspection_data) or
'unknown')
}
file_name = CONF.processing.ramdisk_logs_filename_format.format(**fmt_args)
try:
if not os.path.exists(CONF.processing.ramdisk_logs_dir):
os.makedirs(CONF.processing.ramdisk_logs_dir)
with open(os.path.join(CONF.processing.ramdisk_logs_dir, file_name),
'wb') as fp:
fp.write(base64.b64decode(logs))
except EnvironmentError:
LOG.exception(_LE('Could not store the ramdisk logs'),
data=introspection_data, node_info=node_info)
else:
LOG.info(_LI('Ramdisk logs were stored in file %s'), file_name,
data=introspection_data, node_info=node_info)
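Note the move from the old hard-coded ``bmc_<address>_<timestamp>`` file name
to the configurable ``ramdisk_logs_filename_format`` option. A minimal
illustration of how the ``fmt_args`` mapping above expands a format string
(the format shown is an example, not necessarily the shipped default)::

    import datetime

    fmt = 'bmc_{bmc}_{dt:%Y%m%d-%H%M%S}_{uuid}'  # illustrative format only
    fmt_args = {'uuid': 'unknown', 'mac': '112233445566',
                'dt': datetime.datetime.utcnow(), 'bmc': '1.2.3.4'}
    print(fmt.format(**fmt_args))
    # e.g. bmc_1.2.3.4_20160920-200142_unknown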
def _find_node_info(introspection_data, failures):
try:
return node_cache.find_node(
bmc_address=introspection_data.get('ipmi_address'),
mac=introspection_data.get('macs'))
mac=utils.get_valid_macs(introspection_data))
except utils.NotFoundInCacheError as exc:
not_found_hook = plugins_base.node_not_found_hook_manager()
if not_found_hook is None:
@ -60,13 +107,8 @@ def _find_node_info(introspection_data, failures):
failures.append(_('Look up error: %s') % exc)
def process(introspection_data):
"""Process data from the ramdisk.
This function heavily relies on the hooks to do the actual data processing.
"""
def _run_pre_hooks(introspection_data, failures):
hooks = plugins_base.processing_hooks_manager()
failures = []
for hook_ext in hooks:
# NOTE(dtantsur): catch exceptions, so that we have a chance to update
# node introspection status after lookup
@ -90,6 +132,64 @@ def process(introspection_data):
'exc_class': exc.__class__.__name__,
'error': exc})
def _filter_data_excluded_keys(data):
return {k: v for k, v in data.items()
if k not in _STORAGE_EXCLUDED_KEYS}
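A quick worked example of the key filtering above: the potentially large
``logs`` blob is dropped before anything is written to Swift::

    _STORAGE_EXCLUDED_KEYS = {'logs'}
    data = {'logs': 'bG9nIGNvbnRlbnRz', 'inventory': {'cpu': {'count': 4}}}
    print({k: v for k, v in data.items() if k not in _STORAGE_EXCLUDED_KEYS})
    # -> {'inventory': {'cpu': {'count': 4}}}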
def _store_data(node_info, data, suffix=None):
if CONF.processing.store_data != 'swift':
LOG.debug("Swift support is disabled, introspection data "
"won't be stored", node_info=node_info)
return
swift_object_name = swift.store_introspection_data(
_filter_data_excluded_keys(data),
node_info.uuid,
suffix=suffix
)
LOG.info(_LI('Introspection data was stored in Swift in object '
'%s'), swift_object_name, node_info=node_info)
if CONF.processing.store_data_location:
node_info.patch([{'op': 'add', 'path': '/extra/%s' %
CONF.processing.store_data_location,
'value': swift_object_name}])
def _store_unprocessed_data(node_info, data):
# runs in background
try:
_store_data(node_info, data,
suffix=_UNPROCESSED_DATA_STORE_SUFFIX)
except Exception:
LOG.exception(_LE('Encountered exception saving unprocessed '
'introspection data'), node_info=node_info,
data=data)
def _get_unprocessed_data(uuid):
if CONF.processing.store_data == 'swift':
LOG.debug('Fetching unprocessed introspection data from '
'Swift for %s', uuid)
return json.loads(
swift.get_introspection_data(
uuid,
suffix=_UNPROCESSED_DATA_STORE_SUFFIX
)
)
else:
raise utils.Error(_('Swift support is disabled'), code=400)
def process(introspection_data):
"""Process data from the ramdisk.
This function heavily relies on the hooks to do the actual data processing.
"""
unprocessed_data = copy.deepcopy(introspection_data)
failures = []
_run_pre_hooks(introspection_data, failures)
node_info = _find_node_info(introspection_data, failures)
if node_info:
# Locking is already done in find_node() but may not be done in a
@ -101,6 +201,7 @@ def process(introspection_data):
'pre-processing hooks:\n%s') % '\n'.join(failures)
if node_info is not None:
node_info.finished(error='\n'.join(failures))
_store_logs(introspection_data, node_info)
raise utils.Error(msg, node_info=node_info, data=introspection_data)
LOG.info(_LI('Matching node is %s'), node_info.uuid,
@ -112,26 +213,38 @@ def process(introspection_data):
'error: %s') % node_info.error,
node_info=node_info, code=400)
try:
node = node_info.node()
except exceptions.NotFound:
msg = _('Node was found in cache, but is not found in Ironic')
node_info.finished(error=msg)
raise utils.Error(msg, code=404, node_info=node_info,
data=introspection_data)
# Note(mkovacik): store the data now that we're sure a background
# thread won't race with other process() or introspect.abort()
# calls
utils.executor().submit(_store_unprocessed_data, node_info,
unprocessed_data)
try:
return _process_node(node, introspection_data, node_info)
node = node_info.node()
except ir_utils.NotFound as exc:
with excutils.save_and_reraise_exception():
node_info.finished(error=str(exc))
_store_logs(introspection_data, node_info)
try:
result = _process_node(node, introspection_data, node_info)
except utils.Error as exc:
node_info.finished(error=str(exc))
raise
with excutils.save_and_reraise_exception():
_store_logs(introspection_data, node_info)
except Exception as exc:
LOG.exception(_LE('Unexpected exception during processing'))
msg = _('Unexpected exception %(exc_class)s during processing: '
'%(error)s') % {'exc_class': exc.__class__.__name__,
'error': exc}
node_info.finished(error=msg)
raise utils.Error(msg, node_info=node_info, data=introspection_data)
_store_logs(introspection_data, node_info)
raise utils.Error(msg, node_info=node_info, data=introspection_data,
code=500)
if CONF.processing.always_store_ramdisk_logs:
_store_logs(introspection_data, node_info)
return result
def _run_post_hooks(node_info, introspection_data):
@ -148,23 +261,7 @@ def _process_node(node, introspection_data, node_info):
node_info.create_ports(introspection_data.get('macs') or ())
_run_post_hooks(node_info, introspection_data)
if CONF.processing.store_data == 'swift':
stored_data = {k: v for k, v in introspection_data.items()
if k not in _STORAGE_EXCLUDED_KEYS}
swift_object_name = swift.store_introspection_data(stored_data,
node_info.uuid)
LOG.info(_LI('Introspection data was stored in Swift in object %s'),
swift_object_name,
node_info=node_info, data=introspection_data)
if CONF.processing.store_data_location:
node_info.patch([{'op': 'add', 'path': '/extra/%s' %
CONF.processing.store_data_location,
'value': swift_object_name}])
else:
LOG.debug('Swift support is disabled, introspection data '
'won\'t be stored',
node_info=node_info, data=introspection_data)
_store_data(node_info, introspection_data)
ironic = ir_utils.get_client()
firewall.update_filters(ironic)
@ -184,7 +281,8 @@ def _process_node(node, introspection_data, node_info):
resp['ipmi_username'] = new_username
resp['ipmi_password'] = new_password
else:
utils.executor().submit(_finish, ironic, node_info, introspection_data)
utils.executor().submit(_finish, ironic, node_info, introspection_data,
power_off=CONF.processing.power_off)
return resp
@ -195,10 +293,10 @@ def _finish_set_ipmi_credentials(ironic, node, node_info, introspection_data,
'value': new_username},
{'op': 'add', 'path': '/driver_info/ipmi_password',
'value': new_password}]
if (not ir_utils.get_ipmi_address(node) and
introspection_data.get('ipmi_address')):
new_ipmi_address = utils.get_ipmi_address_from_data(introspection_data)
if not ir_utils.get_ipmi_address(node) and new_ipmi_address:
patch.append({'op': 'add', 'path': '/driver_info/ipmi_address',
'value': introspection_data['ipmi_address']})
'value': new_ipmi_address})
node_info.patch(patch)
for attempt in range(_CREDENTIALS_WAIT_RETRIES):
@ -222,23 +320,93 @@ def _finish_set_ipmi_credentials(ironic, node, node_info, introspection_data,
raise utils.Error(msg, node_info=node_info, data=introspection_data)
def _finish(ironic, node_info, introspection_data):
LOG.debug('Forcing power off of node %s', node_info.uuid)
try:
ironic.node.set_power_state(node_info.uuid, 'off')
except Exception as exc:
if node_info.node().provision_state == 'enroll':
LOG.info(_LI("Failed to power off the node in 'enroll' state, "
"ignoring; error was %s") % exc,
node_info=node_info, data=introspection_data)
else:
msg = (_('Failed to power off node %(node)s, check it\'s '
'power management configuration: %(exc)s') %
{'node': node_info.uuid, 'exc': exc})
node_info.finished(error=msg)
raise utils.Error(msg, node_info=node_info,
data=introspection_data)
def _finish(ironic, node_info, introspection_data, power_off=True):
if power_off:
LOG.debug('Forcing power off of node %s', node_info.uuid)
try:
ironic.node.set_power_state(node_info.uuid, 'off')
except Exception as exc:
if node_info.node().provision_state == 'enroll':
LOG.info(_LI("Failed to power off the node in"
"'enroll' state, ignoring; error was "
"%s") % exc, node_info=node_info,
data=introspection_data)
else:
msg = (_('Failed to power off node %(node)s, check '
'its power management configuration: '
'%(exc)s') % {'node': node_info.uuid, 'exc':
exc})
node_info.finished(error=msg)
raise utils.Error(msg, node_info=node_info,
data=introspection_data)
LOG.info(_LI('Node powered-off'), node_info=node_info,
data=introspection_data)
node_info.finished()
LOG.info(_LI('Introspection finished successfully'),
node_info=node_info, data=introspection_data)
def reapply(node_ident):
"""Re-apply introspection steps.
Re-apply preprocessing, postprocessing and introspection rules on
stored data.
:param node_ident: node UUID or name
:raises: utils.Error
"""
LOG.debug('Processing re-apply introspection request for node '
'UUID: %s', node_ident)
node_info = node_cache.get_node(node_ident, locked=False)
if not node_info.acquire_lock(blocking=False):
# Note (mkovacik): it should be sufficient to check data
# presence & locking: if introspection hasn't started yet, is
# still waiting, or hasn't finished, either the data won't be
# available or acquiring the lock will fail
raise utils.Error(_('Node locked, please, try again later'),
node_info=node_info, code=409)
utils.executor().submit(_reapply, node_info)
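The HTTP surface for this is ``POST /v1/introspection/<node>/data/unprocessed``
(see the functional test further down, which expects a 202 with an empty
body). A minimal client-side sketch, assuming an inspector listening on its
default port 5050::

    import requests

    uuid = '<node UUID or name>'
    resp = requests.post(
        'http://127.0.0.1:5050/v1/introspection/%s/data/unprocessed' % uuid)
    assert resp.status_code == 202  # accepted; _reapply runs in background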
def _reapply(node_info):
# runs in background
try:
introspection_data = _get_unprocessed_data(node_info.uuid)
except Exception:
LOG.exception(_LE('Encountered exception while fetching '
'stored introspection data'),
node_info=node_info)
node_info.release_lock()
return
failures = []
_run_pre_hooks(introspection_data, failures)
if failures:
LOG.error(_LE('Pre-processing failures detected reapplying '
'introspection on stored data:\n%s'),
'\n'.join(failures), node_info=node_info)
node_info.finished(error='\n'.join(failures))
return
try:
ironic = ir_utils.get_client()
node_info.create_ports(introspection_data.get('macs') or ())
_run_post_hooks(node_info, introspection_data)
_store_data(node_info, introspection_data)
node_info.invalidate_cache()
rules.apply(node_info, introspection_data)
_finish(ironic, node_info, introspection_data,
power_off=False)
except Exception as exc:
LOG.exception(_LE('Encountered exception reapplying '
'introspection on stored data'),
node_info=node_info,
data=introspection_data)
node_info.finished(error=str(exc))
else:
LOG.info(_LI('Successfully reapplied introspection on stored '
'data'), node_info=node_info, data=introspection_data)

View File

@ -19,6 +19,7 @@ import jsonschema
from oslo_db import exception as db_exc
from oslo_utils import timeutils
from oslo_utils import uuidutils
import six
from sqlalchemy import orm
from ironic_inspector.common.i18n import _, _LE, _LI
@ -202,7 +203,7 @@ class IntrospectionRule(object):
ext = ext_mgr[act.action].obj
for formatted_param in ext.FORMATTED_PARAMS:
value = act.params.get(formatted_param)
if not value:
if not value or not isinstance(value, six.string_types):
continue
# NOTE(aarefiev): verify provided value with introspection
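In other words, only non-empty string parameters are candidates for
formatting; other types are now skipped, presumably to guard the subsequent
``.format()`` call. A tiny illustration::

    import six

    for value in ('{memory_mb}', 42, None):
        if not value or not isinstance(value, six.string_types):
            continue  # 42 and None are skipped
        print(value.format(memory_mb=12288))  # -> 12288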

View File

@ -19,6 +19,7 @@ from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslo_log import log
from oslo_utils import units
from oslo_utils import uuidutils
from ironic_inspector.common import i18n
@ -70,20 +71,76 @@ class BaseTest(fixtures.TestWithFixtures):
def assertCalledWithPatch(self, expected, mock_call):
def _get_patch_param(call):
try:
return call[0][1]
if isinstance(call[0][1], list):
return call[0][1]
except IndexError:
return call[0][0]
pass
return call[0][0]
actual = sum(map(_get_patch_param, mock_call.call_args_list), [])
self.assertPatchEqual(actual, expected)
class NodeTest(BaseTest):
class InventoryTest(BaseTest):
def setUp(self):
super(InventoryTest, self).setUp()
# Prepare some realistic inventory
# https://github.com/openstack/ironic-inspector/blob/master/HTTP-API.rst # noqa
self.bmc_address = '1.2.3.4'
self.macs = ['11:22:33:44:55:66', '66:55:44:33:22:11']
self.ips = ['1.2.1.2', '1.2.1.1']
self.inactive_mac = '12:12:21:12:21:12'
self.pxe_mac = self.macs[0]
self.all_macs = self.macs + [self.inactive_mac]
self.pxe_iface_name = 'eth1'
self.data = {
'boot_interface': '01-' + self.pxe_mac.replace(':', '-'),
'inventory': {
'interfaces': [
{'name': 'eth1', 'mac_address': self.macs[0],
'ipv4_address': self.ips[0]},
{'name': 'eth2', 'mac_address': self.inactive_mac},
{'name': 'eth3', 'mac_address': self.macs[1],
'ipv4_address': self.ips[1]},
],
'disks': [
{'name': '/dev/sda', 'model': 'Big Data Disk',
'size': 1000 * units.Gi},
{'name': '/dev/sdb', 'model': 'Small OS Disk',
'size': 20 * units.Gi},
],
'cpu': {
'count': 4,
'architecture': 'x86_64'
},
'memory': {
'physical_mb': 12288
},
'bmc_address': self.bmc_address
},
'root_disk': {'name': '/dev/sda', 'model': 'Big Data Disk',
'size': 1000 * units.Gi,
'wwn': None},
}
self.inventory = self.data['inventory']
self.all_interfaces = {
'eth1': {'mac': self.macs[0], 'ip': self.ips[0]},
'eth2': {'mac': self.inactive_mac, 'ip': None},
'eth3': {'mac': self.macs[1], 'ip': self.ips[1]}
}
self.active_interfaces = {
'eth1': {'mac': self.macs[0], 'ip': self.ips[0]},
'eth3': {'mac': self.macs[1], 'ip': self.ips[1]}
}
self.pxe_interfaces = {
self.pxe_iface_name: self.all_interfaces[self.pxe_iface_name]
}
class NodeTest(InventoryTest):
def setUp(self):
super(NodeTest, self).setUp()
self.uuid = uuidutils.generate_uuid()
self.bmc_address = '1.2.3.4'
self.macs = ['11:22:33:44:55:66', '66:55:44:33:22:11']
fake_node = {
'driver': 'pxe_ipmitool',
'driver_info': {'ipmi_address': self.bmc_address},

View File

@ -15,6 +15,7 @@ import eventlet
eventlet.monkey_patch()
import contextlib
import copy
import json
import os
import shutil
@ -23,10 +24,11 @@ import unittest
import mock
from oslo_config import cfg
from oslo_utils import units
from oslo_config import fixture as config_fixture
import requests
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector.common import swift
from ironic_inspector import dbsync
from ironic_inspector import main
from ironic_inspector import rules
@ -52,6 +54,23 @@ connection = sqlite:///%(db_file)s
DEFAULT_SLEEP = 2
TEST_CONF_FILE = None
def get_test_conf_file():
global TEST_CONF_FILE
if not TEST_CONF_FILE:
d = tempfile.mkdtemp()
TEST_CONF_FILE = os.path.join(d, 'test.conf')
db_file = os.path.join(d, 'test.db')
with open(TEST_CONF_FILE, 'wb') as fp:
content = CONF % {'db_file': db_file}
fp.write(content.encode('utf-8'))
return TEST_CONF_FILE
def get_error(response):
return response.json()['error']['message']
class Base(base.NodeTest):
@ -68,62 +87,12 @@ class Base(base.NodeTest):
self.cli.node.update.return_value = self.node
self.cli.node.list.return_value = [self.node]
# https://github.com/openstack/ironic-inspector/blob/master/HTTP-API.rst # noqa
self.data = {
'boot_interface': '01-' + self.macs[0].replace(':', '-'),
'inventory': {
'interfaces': [
{'name': 'eth1', 'mac_address': self.macs[0],
'ipv4_address': '1.2.1.2'},
{'name': 'eth2', 'mac_address': '12:12:21:12:21:12'},
{'name': 'eth3', 'mac_address': self.macs[1],
'ipv4_address': '1.2.1.1'},
],
'disks': [
{'name': '/dev/sda', 'model': 'Big Data Disk',
'size': 1000 * units.Gi},
{'name': '/dev/sdb', 'model': 'Small OS Disk',
'size': 20 * units.Gi},
],
'cpu': {
'count': 4,
'architecture': 'x86_64'
},
'memory': {
'physical_mb': 12288
},
'bmc_address': self.bmc_address
},
'root_disk': {'name': '/dev/sda', 'model': 'Big Data Disk',
'size': 1000 * units.Gi,
'wwn': None},
}
self.data_old_ramdisk = {
'cpus': 4,
'cpu_arch': 'x86_64',
'memory_mb': 12288,
'local_gb': 464,
'interfaces': {
'eth1': {'mac': self.macs[0], 'ip': '1.2.1.2'},
'eth2': {'mac': '12:12:21:12:21:12'},
'eth3': {'mac': self.macs[1], 'ip': '1.2.1.1'},
},
'boot_interface': '01-' + self.macs[0].replace(':', '-'),
'ipmi_address': self.bmc_address,
}
self.patch = [
{'op': 'add', 'path': '/properties/cpus', 'value': '4'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'op': 'add', 'path': '/properties/memory_mb', 'value': '12288'},
{'path': '/properties/local_gb', 'value': '999', 'op': 'add'}
]
self.patch_old_ramdisk = [
{'op': 'add', 'path': '/properties/cpus', 'value': '4'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'op': 'add', 'path': '/properties/memory_mb', 'value': '12288'},
{'path': '/properties/local_gb', 'value': '464', 'op': 'add'}
]
self.patch_root_hints = [
{'op': 'add', 'path': '/properties/cpus', 'value': '4'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
@ -133,6 +102,10 @@ class Base(base.NodeTest):
self.node.power_state = 'power off'
self.cfg = self.useFixture(config_fixture.Config())
conf_file = get_test_conf_file()
self.cfg.set_config_files([conf_file])
def call(self, method, endpoint, data=None, expect_error=None,
api_version=None):
if data is not None:
@ -146,7 +119,11 @@ class Base(base.NodeTest):
if expect_error:
self.assertEqual(expect_error, res.status_code)
else:
res.raise_for_status()
if res.status_code >= 400:
msg = ('%(meth)s %(url)s failed with code %(code)s: %(msg)s' %
{'meth': method.upper(), 'url': endpoint,
'code': res.status_code, 'msg': get_error(res)})
raise AssertionError(msg)
return res
def call_introspect(self, uuid, new_ipmi_username=None,
@ -164,6 +141,10 @@ class Base(base.NodeTest):
def call_abort_introspect(self, uuid):
return self.call('post', '/v1/introspection/%s/abort' % uuid)
def call_reapply(self, uuid):
return self.call('post', '/v1/introspection/%s/data/unprocessed' %
uuid)
def call_continue(self, data):
return self.call('post', '/v1/continue', data=data).json()
@ -205,27 +186,6 @@ class Test(Base):
status = self.call_get_status(self.uuid)
self.assertEqual({'finished': True, 'error': None}, status)
def test_old_ramdisk(self):
self.call_introspect(self.uuid)
eventlet.greenthread.sleep(DEFAULT_SLEEP)
self.cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
status = self.call_get_status(self.uuid)
self.assertEqual({'finished': False, 'error': None}, status)
res = self.call_continue(self.data_old_ramdisk)
self.assertEqual({'uuid': self.uuid}, res)
eventlet.greenthread.sleep(DEFAULT_SLEEP)
self.assertCalledWithPatch(self.patch_old_ramdisk,
self.cli.node.update)
self.cli.port.create.assert_called_once_with(
node_uuid=self.uuid, address='11:22:33:44:55:66')
status = self.call_get_status(self.uuid)
self.assertEqual({'finished': True, 'error': None}, status)
def test_setup_ipmi(self):
patch_credentials = [
{'op': 'add', 'path': '/driver_info/ipmi_username',
@ -314,6 +274,7 @@ class Test(Base):
{'field': 'inventory.interfaces[*].ipv4_address',
'op': 'contains', 'value': r'127\.0\.0\.1',
'invert': True, 'multiple': 'all'},
{'field': 'i.do.not.exist', 'op': 'is-empty'},
],
'actions': [
{'action': 'set-attribute', 'path': '/extra/foo',
@ -432,17 +393,72 @@ class Test(Base):
# after releasing the node lock
self.call('post', '/v1/continue', self.data, expect_error=400)
@mock.patch.object(swift, 'store_introspection_data', autospec=True)
@mock.patch.object(swift, 'get_introspection_data', autospec=True)
def test_stored_data_processing(self, get_mock, store_mock):
cfg.CONF.set_override('store_data', 'swift', 'processing')
# keep a copy of the ramdisk data: it is modified during processing
ramdisk_data = json.dumps(copy.deepcopy(self.data))
get_mock.return_value = ramdisk_data
self.call_introspect(self.uuid)
eventlet.greenthread.sleep(DEFAULT_SLEEP)
self.cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
res = self.call_continue(self.data)
self.assertEqual({'uuid': self.uuid}, res)
eventlet.greenthread.sleep(DEFAULT_SLEEP)
status = self.call_get_status(self.uuid)
self.assertEqual({'finished': True, 'error': None}, status)
res = self.call_reapply(self.uuid)
self.assertEqual(202, res.status_code)
self.assertEqual('', res.text)
eventlet.greenthread.sleep(DEFAULT_SLEEP)
# reapply request data
get_mock.assert_called_once_with(self.uuid,
suffix='UNPROCESSED')
# store ramdisk data, store processing result data, store
# reapply processing result data; the ordering isn't
# guaranteed, as storing the ramdisk data runs in a background
# thread; however, the last call always has to be the reapply
# processing result data
store_ramdisk_call = mock.call(mock.ANY, self.uuid,
suffix='UNPROCESSED')
store_processing_call = mock.call(mock.ANY, self.uuid,
suffix=None)
self.assertEqual(3, len(store_mock.call_args_list))
self.assertIn(store_ramdisk_call,
store_mock.call_args_list[0:2])
self.assertIn(store_processing_call,
store_mock.call_args_list[0:2])
self.assertEqual(store_processing_call,
store_mock.call_args_list[2])
# second reapply call
get_mock.return_value = ramdisk_data
res = self.call_reapply(self.uuid)
self.assertEqual(202, res.status_code)
self.assertEqual('', res.text)
eventlet.greenthread.sleep(DEFAULT_SLEEP)
# reapply saves the result
self.assertEqual(4, len(store_mock.call_args_list))
self.assertEqual(store_processing_call,
store_mock.call_args_list[-1])
@contextlib.contextmanager
def mocked_server():
d = tempfile.mkdtemp()
try:
conf_file = os.path.join(d, 'test.conf')
db_file = os.path.join(d, 'test.db')
with open(conf_file, 'wb') as fp:
content = CONF % {'db_file': db_file}
fp.write(content.encode('utf-8'))
conf_file = get_test_conf_file()
with mock.patch.object(ir_utils, 'get_client'):
dbsync.main(args=['--config-file', conf_file, 'upgrade'])

View File

@ -0,0 +1,18 @@
=======================================
Tempest Integration of ironic-inspector
=======================================
This directory contains Tempest tests to cover the ironic-inspector project.
It uses the tempest plugin mechanism to automatically load these tests into
tempest. More information about tempest plugins can be found here:
`Plugin <http://docs.openstack.org/developer/tempest/plugin.html>`_
The legacy method of running Tempest is to treat the Tempest source code
as a python unittest:
`Run tests <http://docs.openstack.org/developer/tempest/overview.html#legacy-run-method>`_
There is also a tox configuration for tempest; use the following regex to run
the introspection tests::
$ tox -e all-plugin -- inspector_tempest_plugin

View File

@ -0,0 +1,52 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from tempest import config # noqa
baremetal_introspection_group = cfg.OptGroup(
name="baremetal_introspection",
title="Baremetal introspection service options",
help="When enabling baremetal introspection tests,"
"Ironic must be configured.")
BaremetalIntrospectionGroup = [
cfg.StrOpt('catalog_type',
default='baremetal-introspection',
help="Catalog type of the baremetal provisioning service"),
cfg.StrOpt('endpoint_type',
default='publicURL',
choices=['public', 'admin', 'internal',
'publicURL', 'adminURL', 'internalURL'],
help="The endpoint type to use for the baremetal introspection"
" service"),
cfg.IntOpt('introspection_sleep',
default=30,
help="Introspection sleep before check status"),
cfg.IntOpt('introspection_timeout',
default=600,
help="Introspection time out"),
cfg.IntOpt('hypervisor_update_sleep',
default=60,
help="Time to wait until nova becomes aware of "
"bare metal instances"),
cfg.IntOpt('hypervisor_update_timeout',
default=300,
help="Time out for wait until nova becomes aware of "
"bare metal instances"),
cfg.IntOpt('ironic_sync_timeout',
default=60,
help="Time it might take for Ironic--Inspector "
"sync to happen"),
]
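Once the plugin registers this group (see the plugin class below), the
options are reachable through tempest's global ``CONF`` object; a sketch,
assuming the plugin is installed::

    from tempest import config

    CONF = config.CONF
    # defaults per the definitions above: 30s poll interval, 600s timeout
    print(CONF.baremetal_introspection.introspection_sleep)    # 30
    print(CONF.baremetal_introspection.introspection_timeout)  # 600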

View File

@ -0,0 +1,25 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.lib import exceptions
class IntrospectionFailed(exceptions.TempestException):
message = "Introspection failed"
class IntrospectionTimeout(exceptions.TempestException):
message = "Introspection timed out"
class HypervisorUpdateTimeout(exceptions.TempestException):
message = "Hypervisor stats update timed out"

View File

@ -0,0 +1,37 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from tempest import config as tempest_config
from tempest.test_discover import plugins
from ironic_inspector.test.inspector_tempest_plugin import config
class InspectorTempestPlugin(plugins.TempestPlugin):
def load_tests(self):
base_path = os.path.split(os.path.dirname(
os.path.abspath(__file__)))[0]
test_dir = "inspector_tempest_plugin/tests"
full_test_dir = os.path.join(base_path, test_dir)
return full_test_dir, base_path
def register_opts(self, conf):
tempest_config.register_opt_group(
conf, config.baremetal_introspection_group,
config.BaremetalIntrospectionGroup)
def get_opt_lists(self):
return [(config.baremetal_introspection_group.name,
config.BaremetalIntrospectionGroup)]

View File

@ -0,0 +1,25 @@
[
{
"description": "Successful Rule",
"conditions": [
{"op": "ge", "field": "memory_mb", "value": 256},
{"op": "ge", "field": "local_gb", "value": 1}
],
"actions": [
{"action": "set-attribute", "path": "/extra/rule_success",
"value": "yes"}
]
},
{
"description": "Failing Rule",
"conditions": [
{"op": "lt", "field": "memory_mb", "value": 42},
{"op": "eq", "field": "local_gb", "value": 0}
],
"actions": [
{"action": "set-attribute", "path": "/extra/rule_success",
"value": "no"},
{"action": "fail", "message": "This rule should not have run"}
]
}
]

View File

@ -0,0 +1,70 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from ironic_tempest_plugin.services.baremetal import base
from tempest import clients
from tempest.common import credentials_factory as common_creds
from tempest import config
CONF = config.CONF
ADMIN_CREDS = common_creds.get_configured_admin_credentials()
class Manager(clients.Manager):
def __init__(self,
credentials=ADMIN_CREDS,
service=None,
api_microversions=None):
super(Manager, self).__init__(credentials, service)
self.introspection_client = BaremetalIntrospectionClient(
self.auth_provider,
CONF.baremetal_introspection.catalog_type,
CONF.identity.region,
endpoint_type=CONF.baremetal_introspection.endpoint_type,
**self.default_params_with_timeout_values)
class BaremetalIntrospectionClient(base.BaremetalClient):
"""Base Tempest REST client for Ironic Inspector API v1."""
version = '1'
uri_prefix = 'v1'
@base.handle_errors
def purge_rules(self):
"""Purge all existing rules."""
return self._delete_request('rules', uuid=None)
@base.handle_errors
def import_rule(self, rule_path):
"""Import introspection rules from a json file."""
with open(rule_path, 'r') as fp:
rules = json.load(fp)
if not isinstance(rules, list):
rules = [rules]
for rule in rules:
self._create_request('rules', rule)
@base.handle_errors
def get_status(self, uuid):
"""Get introspection status for a node."""
return self._show_request('introspection', uuid=uuid)
@base.handle_errors
def get_data(self, uuid):
"""Get introspection data for a node."""
return self._show_request('introspection', uuid=uuid,
uri='/%s/introspection/%s/data' %
(self.uri_prefix, uuid))
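A sketch of how a test would drive this client, mirroring the scenario
manager below (the rule path is illustrative)::

    mgr = Manager()
    client = mgr.introspection_client

    client.purge_rules()                             # start from a clean slate
    client.import_rule('rules/basic_ops_rule.json')  # path illustrative
    resp, status = client.get_status('<node uuid>')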

View File

@ -0,0 +1,192 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import time
import tempest
from tempest import config
from tempest.lib.common.api_version_utils import LATEST_MICROVERSION
from ironic_inspector.test.inspector_tempest_plugin import exceptions
from ironic_inspector.test.inspector_tempest_plugin.services import \
introspection_client
from ironic_tempest_plugin.tests.api.admin.api_microversion_fixture import \
APIMicroversionFixture as IronicMicroversionFixture
from ironic_tempest_plugin.tests.scenario.baremetal_manager import \
BaremetalProvisionStates
from ironic_tempest_plugin.tests.scenario.baremetal_manager import \
BaremetalScenarioTest
CONF = config.CONF
class InspectorScenarioTest(BaremetalScenarioTest):
"""Provide harness to do Inspector scenario tests."""
wait_provisioning_state_interval = 15
credentials = ['primary', 'admin']
ironic_api_version = LATEST_MICROVERSION
@classmethod
def setup_clients(cls):
super(InspectorScenarioTest, cls).setup_clients()
inspector_manager = introspection_client.Manager()
cls.introspection_client = inspector_manager.introspection_client
def setUp(self):
super(InspectorScenarioTest, self).setUp()
# we rely on the 'available' provision_state; using latest
# microversion
self.useFixture(IronicMicroversionFixture(self.ironic_api_version))
self.flavor = self.baremetal_flavor()
self.node_ids = {node['uuid'] for node in
self.node_filter(filter=lambda node:
node['provision_state'] ==
BaremetalProvisionStates.AVAILABLE)}
self.rule_purge()
def item_filter(self, list_method, show_method,
filter=lambda item: True, items=None):
if items is None:
items = [show_method(item['uuid']) for item in
list_method()]
return [item for item in items if filter(item)]
def node_list(self):
return self.baremetal_client.list_nodes()[1]['nodes']
def node_update(self, uuid, patch):
return self.baremetal_client.update_node(uuid, **patch)
def node_show(self, uuid):
return self.baremetal_client.show_node(uuid)[1]
def node_filter(self, filter=lambda node: True, nodes=None):
return self.item_filter(self.node_list, self.node_show,
filter=filter, items=nodes)
def hypervisor_stats(self):
return (self.admin_manager.hypervisor_client.
show_hypervisor_statistics())
def server_show(self, uuid):
self.servers_client.show_server(uuid)
def rule_purge(self):
self.introspection_client.purge_rules()
def rule_import(self, rule_path):
self.introspection_client.import_rule(rule_path)
def introspection_status(self, uuid):
return self.introspection_client.get_status(uuid)[1]
def introspection_data(self, uuid):
return self.introspection_client.get_data(uuid)[1]
def baremetal_flavor(self):
flavor_id = CONF.compute.flavor_ref
flavor = self.flavors_client.show_flavor(flavor_id)['flavor']
flavor['properties'] = self.flavors_client.list_flavor_extra_specs(
flavor_id)['extra_specs']
return flavor
def get_rule_path(self, rule_file):
base_path = os.path.split(
os.path.dirname(os.path.abspath(__file__)))[0]
base_path = os.path.split(base_path)[0]
return os.path.join(base_path, "inspector_tempest_plugin",
"rules", rule_file)
def boot_instance(self):
return super(InspectorScenarioTest, self).boot_instance()
def terminate_instance(self, instance):
return super(InspectorScenarioTest, self).terminate_instance(instance)
# TODO(aarefiev): switch to call_until_true
def wait_for_introspection_finished(self, node_ids):
"""Waits for introspection of baremetal nodes to finish.
"""
start = int(time.time())
not_introspected = {node_id for node_id in node_ids}
while not_introspected:
time.sleep(CONF.baremetal_introspection.introspection_sleep)
for node_id in node_ids:
status = self.introspection_status(node_id)
if status['finished']:
if status['error']:
message = ('Node %(node_id)s introspection failed '
'with %(error)s.' %
{'node_id': node_id,
'error': status['error']})
raise exceptions.IntrospectionFailed(message)
not_introspected = not_introspected - {node_id}
if (int(time.time()) - start >=
CONF.baremetal_introspection.introspection_timeout):
message = ('Introspection timed out for nodes: %s' %
not_introspected)
raise exceptions.IntrospectionTimeout(message)
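The TODO above refers to tempest's generic poller. A hedged sketch of that
rewrite, assuming ``test_utils.call_until_true(func, duration, sleep_for)``
from ``tempest.lib.common.utils`` and reusing this module's ``CONF`` and
``exceptions`` imports::

    from tempest.lib.common.utils import test_utils

    def wait_with_call_until_true(scenario, node_ids):
        # predicate: True once every node reports finished; raises early
        # if any node reports an error
        def _all_finished():
            statuses = [scenario.introspection_status(n) for n in node_ids]
            for status in statuses:
                if status['finished'] and status['error']:
                    raise exceptions.IntrospectionFailed(
                        'Introspection failed: %s' % status['error'])
            return all(s['finished'] for s in statuses)

        if not test_utils.call_until_true(
                _all_finished,
                CONF.baremetal_introspection.introspection_timeout,
                CONF.baremetal_introspection.introspection_sleep):
            raise exceptions.IntrospectionTimeout(
                'Introspection timed out for nodes: %s' % node_ids)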
def wait_for_nova_aware_of_bvms(self):
start = int(time.time())
while True:
time.sleep(CONF.baremetal_introspection.hypervisor_update_sleep)
stats = self.hypervisor_stats()
expected_cpus = self.baremetal_flavor()['vcpus']
if int(stats['hypervisor_statistics']['vcpus']) >= expected_cpus:
break
timeout = CONF.baremetal_introspection.hypervisor_update_timeout
if (int(time.time()) - start >= timeout):
message = (
'Timeout while waiting for nova hypervisor-stats: '
'%(stats)s required time (%(timeout)s s).' %
{'stats': stats,
'timeout': timeout})
raise exceptions.HypervisorUpdateTimeout(message)
def node_cleanup(self, node_id):
if (self.node_show(node_id)['provision_state'] ==
BaremetalProvisionStates.AVAILABLE):
return
try:
self.baremetal_client.set_node_provision_state(node_id, 'provide')
except tempest.lib.exceptions.RestClientException:
# the node may already be cleaning or available
pass
self.wait_provisioning_state(
node_id, [BaremetalProvisionStates.AVAILABLE,
BaremetalProvisionStates.NOSTATE],
timeout=CONF.baremetal.unprovision_timeout,
interval=self.wait_provisioning_state_interval)
def introspect_node(self, node_id):
# remove any properties left over from previous runs
patch = {('properties/%s' % key): None for key in
self.node_show(node_id)['properties']}
# reset any previous rule result
patch['extra/rule_success'] = None
self.node_update(node_id, patch)
self.baremetal_client.set_node_provision_state(node_id, 'manage')
self.baremetal_client.set_node_provision_state(node_id, 'inspect')
self.addCleanup(self.node_cleanup, node_id)

View File

@ -0,0 +1,132 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.config import CONF
from tempest import test # noqa
from ironic_inspector.test.inspector_tempest_plugin.tests import manager
from ironic_tempest_plugin.tests.scenario.baremetal_manager import \
BaremetalProvisionStates
class InspectorBasicTest(manager.InspectorScenarioTest):
def verify_node_introspection_data(self, node):
self.assertEqual('yes', node['extra']['rule_success'])
data = self.introspection_data(node['uuid'])
self.assertEqual(data['cpu_arch'],
self.flavor['properties']['cpu_arch'])
self.assertEqual(int(data['memory_mb']),
int(self.flavor['ram']))
self.assertEqual(int(data['cpus']), int(self.flavor['vcpus']))
def verify_node_flavor(self, node):
expected_cpus = self.flavor['vcpus']
expected_memory_mb = self.flavor['ram']
expected_cpu_arch = self.flavor['properties']['cpu_arch']
disk_size = self.flavor['disk']
ephemeral_size = self.flavor['OS-FLV-EXT-DATA:ephemeral']
expected_local_gb = disk_size + ephemeral_size
self.assertEqual(expected_cpus,
int(node['properties']['cpus']))
self.assertEqual(expected_memory_mb,
int(node['properties']['memory_mb']))
self.assertEqual(expected_local_gb,
int(node['properties']['local_gb']))
self.assertEqual(expected_cpu_arch,
node['properties']['cpu_arch'])
@test.idempotent_id('03bf7990-bee0-4dd7-bf74-b97ad7b52a4b')
@test.services('baremetal', 'compute', 'image',
'network', 'object_storage')
def test_baremetal_introspection(self):
"""This smoke test case follows this set of operations:
* Fetches expected properties from baremetal flavor
* Removes all properties from nodes
* Sets nodes to manageable state
* Imports introspection rule basic_ops_rule.json
* Inspects nodes
* Verifies all properties are inspected
* Verifies introspection data
* Sets node to available state
* Creates a keypair
* Boots an instance using the keypair
* Deletes the instance
"""
# prepare introspection rule
rule_path = self.get_rule_path("basic_ops_rule.json")
self.rule_import(rule_path)
self.addCleanup(self.rule_purge)
for node_id in self.node_ids:
self.introspect_node(node_id)
# wait for introspection to settle
self.wait_for_introspection_finished(self.node_ids)
for node_id in self.node_ids:
self.wait_provisioning_state(
node_id, 'manageable',
timeout=CONF.baremetal_introspection.ironic_sync_timeout,
interval=self.wait_provisioning_state_interval)
for node_id in self.node_ids:
node = self.node_show(node_id)
self.verify_node_introspection_data(node)
self.verify_node_flavor(node)
for node_id in self.node_ids:
self.baremetal_client.set_node_provision_state(node_id, 'provide')
for node_id in self.node_ids:
self.wait_provisioning_state(
node_id, BaremetalProvisionStates.AVAILABLE,
timeout=CONF.baremetal.active_timeout,
interval=self.wait_provisioning_state_interval)
self.wait_for_nova_aware_of_bvms()
self.add_keypair()
ins, _node = self.boot_instance()
self.terminate_instance(ins)
class InspectorSmokeTest(manager.InspectorScenarioTest):
@test.idempotent_id('a702d1f1-88e4-42ce-88ef-cba2d9e3312e')
@test.attr(type='smoke')
@test.services('baremetal', 'compute', 'image',
'network', 'object_storage')
def test_baremetal_introspection(self):
"""This smoke test case follows this very basic set of operations:
* Fetches expected properties from baremetal flavor
* Removes all properties from one node
* Sets the node to manageable state
* Inspects the node
* Sets the node to available state
"""
# NOTE(dtantsur): we can't silently skip this test because it runs in
# grenade with several other tests, and we won't have any indication
# that it was not run.
assert self.node_ids, "No available nodes"
node_id = next(iter(self.node_ids))
self.introspect_node(node_id)
# wait for introspection to settle
self.wait_for_introspection_finished([node_id])
self.wait_provisioning_state(
node_id, 'manageable',
timeout=CONF.baremetal_introspection.ironic_sync_timeout,
interval=self.wait_provisioning_state_interval)

View File

@ -1,480 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import os
import shutil
import tempfile
import mock
from oslo_config import cfg
from oslo_utils import units
from ironic_inspector import node_cache
from ironic_inspector.plugins import base
from ironic_inspector.plugins import standard as std_plugins
from ironic_inspector import process
from ironic_inspector.test import base as test_base
from ironic_inspector import utils
CONF = cfg.CONF
class TestSchedulerHook(test_base.NodeTest):
def setUp(self):
super(TestSchedulerHook, self).setUp()
self.hook = std_plugins.SchedulerHook()
self.data = {
'inventory': {
'cpu': {'count': 2, 'architecture': 'x86_64'},
'memory': {'physical_mb': 1024},
},
'root_disk': {
'name': '/dev/sda',
'size': 21 * units.Gi
}
}
self.old_data = {
'local_gb': 20,
'memory_mb': 1024,
'cpus': 2,
'cpu_arch': 'x86_64'
}
self.node_info = node_cache.NodeInfo(uuid=self.uuid, started_at=0,
node=self.node)
def test_hook_loadable_by_name(self):
CONF.set_override('processing_hooks', 'scheduler', 'processing')
ext = base.processing_hooks_manager()['scheduler']
self.assertIsInstance(ext.obj, std_plugins.SchedulerHook)
def test_compat_missing(self):
for key in self.old_data:
new_data = self.old_data.copy()
del new_data[key]
self.assertRaisesRegexp(utils.Error, key,
self.hook.before_update, new_data,
self.node_info)
def test_no_root_disk(self):
self.assertRaisesRegexp(utils.Error, 'root disk is not supplied',
self.hook.before_update,
{'inventory': {'disks': []}}, self.node_info)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_ok(self, mock_patch):
patch = [
{'path': '/properties/cpus', 'value': '2', 'op': 'add'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'path': '/properties/memory_mb', 'value': '1024', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '20', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_compat_ok(self, mock_patch):
patch = [
{'path': '/properties/cpus', 'value': '2', 'op': 'add'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'path': '/properties/memory_mb', 'value': '1024', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '20', 'op': 'add'}
]
self.hook.before_update(self.old_data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_no_overwrite(self, mock_patch):
CONF.set_override('overwrite_existing', False, 'processing')
self.node.properties = {
'memory_mb': '4096',
'cpu_arch': 'i686'
}
patch = [
{'path': '/properties/cpus', 'value': '2', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '20', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_compat_root_disk(self, mock_patch):
self.old_data['root_disk'] = {'name': '/dev/sda',
'size': 42 * units.Gi}
patch = [
{'path': '/properties/cpus', 'value': '2', 'op': 'add'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'path': '/properties/memory_mb', 'value': '1024', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '41', 'op': 'add'}
]
self.hook.before_update(self.old_data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_root_disk_no_spacing(self, mock_patch):
CONF.set_override('disk_partitioning_spacing', False, 'processing')
self.data['root_disk'] = {'name': '/dev/sda', 'size': 42 * units.Gi}
patch = [
{'path': '/properties/cpus', 'value': '2', 'op': 'add'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'path': '/properties/memory_mb', 'value': '1024', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '42', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
class TestValidateInterfacesHook(test_base.NodeTest):
def setUp(self):
super(TestValidateInterfacesHook, self).setUp()
self.hook = std_plugins.ValidateInterfacesHook()
self.data = {
'inventory': {
'interfaces': [
{'name': 'em1', 'mac_address': '11:11:11:11:11:11',
'ipv4_address': '1.1.1.1'},
{'name': 'em2', 'mac_address': '22:22:22:22:22:22',
'ipv4_address': '2.2.2.2'},
{'name': 'em3', 'mac_address': '33:33:33:33:33:33',
'ipv4_address': None},
],
},
'boot_interface': '01-22-22-22-22-22-22'
}
self.old_data = {
'interfaces': {
'em1': {'mac': '11:11:11:11:11:11', 'ip': '1.1.1.1'},
'em2': {'mac': '22:22:22:22:22:22', 'ip': '2.2.2.2'},
'em3': {'mac': '33:33:33:33:33:33'}
},
'boot_interface': '01-22-22-22-22-22-22',
}
self.orig_interfaces = self.old_data['interfaces'].copy()
self.orig_interfaces['em3']['ip'] = None
self.pxe_interface = self.old_data['interfaces']['em2']
self.active_interfaces = {
'em1': {'mac': '11:11:11:11:11:11', 'ip': '1.1.1.1'},
'em2': {'mac': '22:22:22:22:22:22', 'ip': '2.2.2.2'},
}
self.existing_ports = [mock.Mock(spec=['address', 'uuid'],
address=a)
for a in ('11:11:11:11:11:11',
'44:44:44:44:44:44')]
self.node_info = node_cache.NodeInfo(uuid=self.uuid, started_at=0,
node=self.node,
ports=self.existing_ports)
def test_hook_loadable_by_name(self):
CONF.set_override('processing_hooks', 'validate_interfaces',
'processing')
ext = base.processing_hooks_manager()['validate_interfaces']
self.assertIsInstance(ext.obj, std_plugins.ValidateInterfacesHook)
def test_wrong_add_ports(self):
CONF.set_override('add_ports', 'foobar', 'processing')
self.assertRaises(SystemExit, std_plugins.ValidateInterfacesHook)
def test_wrong_keep_ports(self):
CONF.set_override('keep_ports', 'foobar', 'processing')
self.assertRaises(SystemExit, std_plugins.ValidateInterfacesHook)
def test_no_interfaces(self):
self.assertRaisesRegexp(utils.Error, 'No interfaces',
self.hook.before_processing, {})
self.assertRaisesRegexp(utils.Error, 'No interfaces',
self.hook.before_processing, {'inventory': {}})
self.assertRaisesRegexp(utils.Error, 'No interfaces',
self.hook.before_processing, {'inventory': {
'interfaces': []
}})
def test_only_pxe(self):
self.hook.before_processing(self.data)
self.assertEqual({'em2': self.pxe_interface}, self.data['interfaces'])
self.assertEqual([self.pxe_interface['mac']], self.data['macs'])
self.assertEqual(self.orig_interfaces, self.data['all_interfaces'])
def test_only_pxe_mac_format(self):
self.data['boot_interface'] = '22:22:22:22:22:22'
self.hook.before_processing(self.data)
self.assertEqual({'em2': self.pxe_interface}, self.data['interfaces'])
self.assertEqual([self.pxe_interface['mac']], self.data['macs'])
self.assertEqual(self.orig_interfaces, self.data['all_interfaces'])
def test_only_pxe_not_found(self):
self.data['boot_interface'] = 'aa:bb:cc:dd:ee:ff'
self.assertRaisesRegexp(utils.Error, 'No suitable interfaces',
self.hook.before_processing, self.data)
def test_only_pxe_no_boot_interface(self):
del self.data['boot_interface']
self.hook.before_processing(self.data)
self.assertEqual(self.active_interfaces, self.data['interfaces'])
self.assertEqual(sorted(i['mac'] for i in
self.active_interfaces.values()),
sorted(self.data['macs']))
self.assertEqual(self.orig_interfaces, self.data['all_interfaces'])
def test_only_active(self):
CONF.set_override('add_ports', 'active', 'processing')
self.hook.before_processing(self.data)
self.assertEqual(self.active_interfaces, self.data['interfaces'])
self.assertEqual(sorted(i['mac'] for i in
self.active_interfaces.values()),
sorted(self.data['macs']))
self.assertEqual(self.orig_interfaces, self.data['all_interfaces'])
def test_all(self):
CONF.set_override('add_ports', 'all', 'processing')
self.hook.before_processing(self.data)
self.assertEqual(self.orig_interfaces, self.data['interfaces'])
self.assertEqual(sorted(i['mac'] for i in
self.orig_interfaces.values()),
sorted(self.data['macs']))
self.assertEqual(self.orig_interfaces, self.data['all_interfaces'])
def test_malformed_interfaces(self):
self.data = {
'inventory': {
'interfaces': [
# no name
{'mac_address': '11:11:11:11:11:11',
'ipv4_address': '1.1.1.1'},
# empty
{},
],
},
}
self.assertRaisesRegexp(utils.Error, 'No interfaces supplied',
self.hook.before_processing, self.data)
def test_skipped_interfaces(self):
CONF.set_override('add_ports', 'all', 'processing')
self.data = {
'inventory': {
'interfaces': [
# local interface (by name)
{'name': 'lo', 'mac_address': '11:11:11:11:11:11',
'ipv4_address': '1.1.1.1'},
# local interface (by IP address)
{'name': 'em1', 'mac_address': '22:22:22:22:22:22',
'ipv4_address': '127.0.0.1'},
# no MAC provided
{'name': 'em3', 'ipv4_address': '2.2.2.2'},
# malformed MAC provided
{'name': 'em4', 'mac_address': 'foobar',
'ipv4_address': '2.2.2.2'},
],
},
}
self.assertRaisesRegexp(utils.Error, 'No suitable interfaces found',
self.hook.before_processing, self.data)
@mock.patch.object(node_cache.NodeInfo, 'delete_port', autospec=True)
def test_keep_all(self, mock_delete_port):
self.hook.before_update(self.data, self.node_info)
self.assertFalse(mock_delete_port.called)
@mock.patch.object(node_cache.NodeInfo, 'delete_port')
def test_keep_present(self, mock_delete_port):
CONF.set_override('keep_ports', 'present', 'processing')
self.data['all_interfaces'] = self.orig_interfaces
self.hook.before_update(self.data, self.node_info)
mock_delete_port.assert_called_once_with(self.existing_ports[1])
@mock.patch.object(node_cache.NodeInfo, 'delete_port')
def test_keep_added(self, mock_delete_port):
CONF.set_override('keep_ports', 'added', 'processing')
self.data['macs'] = [self.pxe_interface['mac']]
self.hook.before_update(self.data, self.node_info)
mock_delete_port.assert_any_call(self.existing_ports[0])
mock_delete_port.assert_any_call(self.existing_ports[1])
class TestRootDiskSelection(test_base.NodeTest):
def setUp(self):
super(TestRootDiskSelection, self).setUp()
self.hook = std_plugins.RootDiskSelectionHook()
self.data = {
'inventory': {
'disks': [
{'model': 'Model 1', 'size': 20 * units.Gi,
'name': '/dev/sdb'},
{'model': 'Model 2', 'size': 5 * units.Gi,
'name': '/dev/sda'},
{'model': 'Model 3', 'size': 10 * units.Gi,
'name': '/dev/sdc'},
{'model': 'Model 4', 'size': 4 * units.Gi,
'name': '/dev/sdd'},
{'model': 'Too Small', 'size': 1 * units.Gi,
'name': '/dev/sde'},
]
}
}
self.matched = self.data['inventory']['disks'][2].copy()
self.node_info = mock.Mock(spec=node_cache.NodeInfo,
uuid=self.uuid,
**{'node.return_value': self.node})
def test_no_hints(self):
self.hook.before_update(self.data, self.node_info)
self.assertNotIn('local_gb', self.data)
self.assertNotIn('root_disk', self.data)
def test_no_inventory(self):
self.node.properties['root_device'] = {'model': 'foo'}
del self.data['inventory']
self.assertRaisesRegexp(utils.Error,
'requires ironic-python-agent',
self.hook.before_update,
self.data, self.node_info)
self.assertNotIn('local_gb', self.data)
self.assertNotIn('root_disk', self.data)
def test_no_disks(self):
self.node.properties['root_device'] = {'size': 10}
self.data['inventory']['disks'] = []
self.assertRaisesRegexp(utils.Error,
'No disks found',
self.hook.before_update,
self.data, self.node_info)
def test_one_matches(self):
self.node.properties['root_device'] = {'size': 10}
self.hook.before_update(self.data, self.node_info)
self.assertEqual(self.matched, self.data['root_disk'])
def test_all_match(self):
self.node.properties['root_device'] = {'size': 10,
'model': 'Model 3'}
self.hook.before_update(self.data, self.node_info)
self.assertEqual(self.matched, self.data['root_disk'])
def test_one_fails(self):
self.node.properties['root_device'] = {'size': 10,
'model': 'Model 42'}
self.assertRaisesRegexp(utils.Error,
'No disks satisfied root device hints',
self.hook.before_update,
self.data, self.node_info)
self.assertNotIn('local_gb', self.data)
self.assertNotIn('root_disk', self.data)
class TestRamdiskError(test_base.BaseTest):
def setUp(self):
super(TestRamdiskError, self).setUp()
self.msg = 'BOOM'
self.bmc_address = '1.2.3.4'
self.data = {
'error': self.msg,
'ipmi_address': self.bmc_address,
}
self.tempdir = tempfile.mkdtemp()
self.addCleanup(lambda: shutil.rmtree(self.tempdir))
CONF.set_override('ramdisk_logs_dir', self.tempdir, 'processing')
def test_no_logs(self):
self.assertRaisesRegexp(utils.Error,
self.msg,
process.process, self.data)
self.assertEqual([], os.listdir(self.tempdir))
def test_logs_disabled(self):
self.data['logs'] = 'some log'
CONF.set_override('ramdisk_logs_dir', None, 'processing')
self.assertRaisesRegexp(utils.Error,
self.msg,
process.process, self.data)
self.assertEqual([], os.listdir(self.tempdir))
def test_logs(self):
log = b'log contents'
self.data['logs'] = base64.b64encode(log)
self.assertRaisesRegexp(utils.Error,
self.msg,
process.process, self.data)
files = os.listdir(self.tempdir)
self.assertEqual(1, len(files))
filename = files[0]
self.assertTrue(filename.startswith('bmc_%s_' % self.bmc_address),
'%s does not start with bmc_%s'
% (filename, self.bmc_address))
with open(os.path.join(self.tempdir, filename), 'rb') as fp:
self.assertEqual(log, fp.read())
def test_logs_create_dir(self):
shutil.rmtree(self.tempdir)
self.data['logs'] = base64.b64encode(b'log')
self.assertRaisesRegexp(utils.Error,
self.msg,
process.process, self.data)
files = os.listdir(self.tempdir)
self.assertEqual(1, len(files))
def test_logs_without_error(self):
log = b'log contents'
del self.data['error']
self.data['logs'] = base64.b64encode(log)
std_plugins.RamdiskErrorHook().before_processing(self.data)
files = os.listdir(self.tempdir)
self.assertFalse(files)
def test_always_store_logs(self):
CONF.set_override('always_store_ramdisk_logs', True, 'processing')
log = b'log contents'
del self.data['error']
self.data['logs'] = base64.b64encode(log)
std_plugins.RamdiskErrorHook().before_processing(self.data)
files = os.listdir(self.tempdir)
self.assertEqual(1, len(files))
filename = files[0]
self.assertTrue(filename.startswith('bmc_%s_' % self.bmc_address),
'%s does not start with bmc_%s'
% (filename, self.bmc_address))
with open(os.path.join(self.tempdir, filename), 'rb') as fp:
self.assertEqual(log, fp.read())

View File

@ -1,460 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
import json
import time
import eventlet
from ironicclient import exceptions
import mock
from oslo_config import cfg
from oslo_utils import uuidutils
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector import firewall
from ironic_inspector import node_cache
from ironic_inspector.plugins import base as plugins_base
from ironic_inspector.plugins import example as example_plugin
from ironic_inspector import process
from ironic_inspector.test import base as test_base
from ironic_inspector import utils
CONF = cfg.CONF
class BaseTest(test_base.NodeTest):
def setUp(self):
super(BaseTest, self).setUp()
self.started_at = time.time()
self.pxe_mac = self.macs[1]
self.data = {
'ipmi_address': self.bmc_address,
'cpus': 2,
'cpu_arch': 'x86_64',
'memory_mb': 1024,
'local_gb': 20,
'interfaces': {
'em1': {'mac': self.macs[0], 'ip': '1.2.0.1'},
'em2': {'mac': self.macs[1], 'ip': '1.2.0.2'},
'em3': {'mac': 'DE:AD:BE:EF:DE:AD'},
},
'boot_interface': '01-' + self.pxe_mac.replace(':', '-'),
}
self.all_ports = [mock.Mock(uuid=uuidutils.generate_uuid(),
address=mac) for mac in self.macs]
self.ports = [self.all_ports[1]]
@mock.patch.object(process, '_process_node', autospec=True)
@mock.patch.object(node_cache, 'find_node', autospec=True)
@mock.patch.object(ir_utils, 'get_client', autospec=True)
class TestProcess(BaseTest):
def setUp(self):
super(TestProcess, self).setUp()
self.fake_result_json = 'node json'
def prepare_mocks(func):
@functools.wraps(func)
def wrapper(self, client_mock, pop_mock, process_mock, *args, **kw):
cli = client_mock.return_value
pop_mock.return_value = node_cache.NodeInfo(
uuid=self.node.uuid,
started_at=self.started_at)
pop_mock.return_value.finished = mock.Mock()
cli.node.get.return_value = self.node
process_mock.return_value = self.fake_result_json
return func(self, cli, pop_mock, process_mock, *args, **kw)
return wrapper
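# prepare_mocks is a helper decorator defined inside the class body: it
# rewires the three class-level patches (get_client, find_node,
# _process_node) to consistent return values, then hands each test the
# configured client mock instead of the raw patcher arguments.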
@prepare_mocks
def test_ok(self, cli, pop_mock, process_mock):
res = process.process(self.data)
self.assertEqual(self.fake_result_json, res)
# Only boot interface is added by default
self.assertEqual(['em2'], sorted(self.data['interfaces']))
self.assertEqual([self.pxe_mac], self.data['macs'])
pop_mock.assert_called_once_with(bmc_address=self.bmc_address,
mac=self.data['macs'])
cli.node.get.assert_called_once_with(self.uuid)
process_mock.assert_called_once_with(cli.node.get.return_value,
self.data, pop_mock.return_value)
@prepare_mocks
def test_no_ipmi(self, cli, pop_mock, process_mock):
del self.data['ipmi_address']
process.process(self.data)
pop_mock.assert_called_once_with(bmc_address=None,
mac=self.data['macs'])
cli.node.get.assert_called_once_with(self.uuid)
process_mock.assert_called_once_with(cli.node.get.return_value,
self.data, pop_mock.return_value)
@prepare_mocks
def test_not_found_in_cache(self, cli, pop_mock, process_mock):
pop_mock.side_effect = iter([utils.Error('not found')])
self.assertRaisesRegexp(utils.Error,
'not found',
process.process, self.data)
self.assertFalse(cli.node.get.called)
self.assertFalse(process_mock.called)
@prepare_mocks
def test_not_found_in_ironic(self, cli, pop_mock, process_mock):
cli.node.get.side_effect = exceptions.NotFound()
self.assertRaisesRegexp(utils.Error,
'not found',
process.process, self.data)
cli.node.get.assert_called_once_with(self.uuid)
self.assertFalse(process_mock.called)
pop_mock.return_value.finished.assert_called_once_with(error=mock.ANY)
@prepare_mocks
def test_already_finished(self, cli, pop_mock, process_mock):
old_finished_at = pop_mock.return_value.finished_at
pop_mock.return_value.finished_at = time.time()
try:
self.assertRaisesRegexp(utils.Error, 'already finished',
process.process, self.data)
self.assertFalse(process_mock.called)
self.assertFalse(pop_mock.return_value.finished.called)
finally:
pop_mock.return_value.finished_at = old_finished_at
@prepare_mocks
def test_expected_exception(self, cli, pop_mock, process_mock):
process_mock.side_effect = iter([utils.Error('boom')])
self.assertRaisesRegexp(utils.Error, 'boom',
process.process, self.data)
pop_mock.return_value.finished.assert_called_once_with(error='boom')
@prepare_mocks
def test_unexpected_exception(self, cli, pop_mock, process_mock):
process_mock.side_effect = iter([RuntimeError('boom')])
self.assertRaisesRegexp(utils.Error, 'Unexpected exception',
process.process, self.data)
pop_mock.return_value.finished.assert_called_once_with(
error='Unexpected exception RuntimeError during processing: boom')
@prepare_mocks
def test_hook_unexpected_exceptions(self, cli, pop_mock, process_mock):
for ext in plugins_base.processing_hooks_manager():
patcher = mock.patch.object(ext.obj, 'before_processing',
side_effect=RuntimeError('boom'))
patcher.start()
self.addCleanup(lambda p=patcher: p.stop())
self.assertRaisesRegexp(utils.Error, 'Unexpected exception',
process.process, self.data)
pop_mock.return_value.finished.assert_called_once_with(
error=mock.ANY)
error_message = pop_mock.return_value.finished.call_args[1]['error']
self.assertIn('RuntimeError', error_message)
self.assertIn('boom', error_message)
@prepare_mocks
def test_hook_unexpected_exceptions_no_node(self, cli, pop_mock,
process_mock):
# Check that error from hooks is raised, not "not found"
pop_mock.side_effect = iter([utils.Error('not found')])
for ext in plugins_base.processing_hooks_manager():
patcher = mock.patch.object(ext.obj, 'before_processing',
side_effect=RuntimeError('boom'))
patcher.start()
self.addCleanup(lambda p=patcher: p.stop())
self.assertRaisesRegexp(utils.Error, 'Unexpected exception',
process.process, self.data)
self.assertFalse(pop_mock.return_value.finished.called)
@prepare_mocks
def test_error_if_node_not_found_hook(self, cli, pop_mock, process_mock):
plugins_base._NOT_FOUND_HOOK_MGR = None
pop_mock.side_effect = iter([utils.NotFoundInCacheError('BOOM')])
self.assertRaisesRegexp(utils.Error,
'Look up error: BOOM',
process.process, self.data)
@prepare_mocks
def test_node_not_found_hook_run_ok(self, cli, pop_mock, process_mock):
CONF.set_override('node_not_found_hook', 'example', 'processing')
plugins_base._NOT_FOUND_HOOK_MGR = None
pop_mock.side_effect = iter([utils.NotFoundInCacheError('BOOM')])
with mock.patch.object(example_plugin,
'example_not_found_hook') as hook_mock:
hook_mock.return_value = node_cache.NodeInfo(
uuid=self.node.uuid,
started_at=self.started_at)
res = process.process(self.data)
self.assertEqual(self.fake_result_json, res)
hook_mock.assert_called_once_with(self.data)
@prepare_mocks
def test_node_not_found_hook_run_none(self, cli, pop_mock, process_mock):
CONF.set_override('node_not_found_hook', 'example', 'processing')
plugins_base._NOT_FOUND_HOOK_MGR = None
pop_mock.side_effect = iter([utils.NotFoundInCacheError('BOOM')])
with mock.patch.object(example_plugin,
'example_not_found_hook') as hook_mock:
hook_mock.return_value = None
self.assertRaisesRegexp(utils.Error,
'Node not found hook returned nothing',
process.process, self.data)
hook_mock.assert_called_once_with(self.data)
@prepare_mocks
def test_node_not_found_hook_exception(self, cli, pop_mock, process_mock):
CONF.set_override('node_not_found_hook', 'example', 'processing')
plugins_base._NOT_FOUND_HOOK_MGR = None
pop_mock.side_effect = iter([utils.NotFoundInCacheError('BOOM')])
with mock.patch.object(example_plugin,
'example_not_found_hook') as hook_mock:
hook_mock.side_effect = Exception('Hook Error')
self.assertRaisesRegexp(utils.Error,
'Node not found hook failed: Hook Error',
process.process, self.data)
hook_mock.assert_called_once_with(self.data)
@mock.patch.object(eventlet.greenthread, 'sleep', lambda _: None)
@mock.patch.object(example_plugin.ExampleProcessingHook, 'before_update')
@mock.patch.object(firewall, 'update_filters', autospec=True)
class TestProcessNode(BaseTest):
def setUp(self):
super(TestProcessNode, self).setUp()
CONF.set_override('processing_hooks',
'$processing.default_processing_hooks,example',
'processing')
self.validate_attempts = 5
self.data['macs'] = self.macs # validate_interfaces hook
self.data['all_interfaces'] = self.data['interfaces']
self.ports = self.all_ports
self.node_info = node_cache.NodeInfo(uuid=self.uuid,
started_at=self.started_at,
node=self.node)
self.patch_props = [
{'path': '/properties/cpus', 'value': '2', 'op': 'add'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'path': '/properties/memory_mb', 'value': '1024', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '20', 'op': 'add'}
] # scheduler hook
self.new_creds = ('user', 'password')
self.patch_credentials = [
{'op': 'add', 'path': '/driver_info/ipmi_username',
'value': self.new_creds[0]},
{'op': 'add', 'path': '/driver_info/ipmi_password',
'value': self.new_creds[1]},
]
self.cli = mock.Mock()
self.cli.node.get_boot_device.side_effect = (
[RuntimeError()] * self.validate_attempts + [None])
self.cli.port.create.side_effect = self.ports
self.cli.node.update.return_value = self.node
self.cli.node.list_ports.return_value = []
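# get_boot_device is stubbed to fail validate_attempts times before
# succeeding: the credential-update path polls it to verify the new IPMI
# credentials, and the tests below count these retries.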
@mock.patch.object(ir_utils, 'get_client')
def call(self, mock_cli):
mock_cli.return_value = self.cli
return process._process_node(self.node, self.data, self.node_info)
def test_return_includes_uuid(self, filters_mock, post_hook_mock):
ret_val = self.call()
self.assertEqual(self.uuid, ret_val.get('uuid'))
def test_return_includes_uuid_with_ipmi_creds(self, filters_mock,
post_hook_mock):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
ret_val = self.call()
self.assertEqual(self.uuid, ret_val.get('uuid'))
self.assertTrue(ret_val.get('ipmi_setup_credentials'))
def test_wrong_provision_state(self, filters_mock, post_hook_mock):
self.node.provision_state = 'active'
self.assertRaises(utils.Error, self.call)
self.assertFalse(post_hook_mock.called)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_ok(self, finished_mock, filters_mock, post_hook_mock):
self.call()
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[0])
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[1])
self.assertCalledWithPatch(self.patch_props, self.cli.node.update)
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.assertFalse(self.cli.node.validate.called)
post_hook_mock.assert_called_once_with(self.data, self.node_info)
finished_mock.assert_called_once_with(mock.ANY)
def test_overwrite_disabled(self, filters_mock, post_hook_mock):
CONF.set_override('overwrite_existing', False, 'processing')
patch = [
{'op': 'add', 'path': '/properties/cpus', 'value': '2'},
{'op': 'add', 'path': '/properties/memory_mb', 'value': '1024'},
]
self.call()
self.assertCalledWithPatch(patch, self.cli.node.update)
def test_port_failed(self, filters_mock, post_hook_mock):
self.cli.port.create.side_effect = (
[exceptions.Conflict()] + self.ports[1:])
self.call()
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[0])
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[1])
self.assertCalledWithPatch(self.patch_props, self.cli.node.update)
def test_set_ipmi_credentials(self, filters_mock, post_hook_mock):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
self.call()
self.cli.node.update.assert_any_call(self.uuid, self.patch_credentials)
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.cli.node.get_boot_device.assert_called_with(self.uuid)
self.assertEqual(self.validate_attempts + 1,
self.cli.node.get_boot_device.call_count)
def test_set_ipmi_credentials_no_address(self, filters_mock,
post_hook_mock):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
del self.node.driver_info['ipmi_address']
self.patch_credentials.append({'op': 'add',
'path': '/driver_info/ipmi_address',
'value': self.bmc_address})
self.call()
self.cli.node.update.assert_any_call(self.uuid, self.patch_credentials)
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.cli.node.get_boot_device.assert_called_with(self.uuid)
self.assertEqual(self.validate_attempts + 1,
self.cli.node.get_boot_device.call_count)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_set_ipmi_credentials_timeout(self, finished_mock,
filters_mock, post_hook_mock):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
self.cli.node.get_boot_device.side_effect = RuntimeError('boom')
self.call()
self.cli.node.update.assert_any_call(self.uuid, self.patch_credentials)
self.assertEqual(2, self.cli.node.update.call_count)
self.assertEqual(process._CREDENTIALS_WAIT_RETRIES,
self.cli.node.get_boot_device.call_count)
self.assertFalse(self.cli.node.set_power_state.called)
finished_mock.assert_called_once_with(
mock.ANY,
error='Failed to validate updated IPMI credentials for node %s, '
'node might require maintenance' % self.uuid)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_power_off_failed(self, finished_mock, filters_mock,
post_hook_mock):
self.cli.node.set_power_state.side_effect = RuntimeError('boom')
self.call()
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.assertCalledWithPatch(self.patch_props, self.cli.node.update)
finished_mock.assert_called_once_with(
mock.ANY,
error='Failed to power off node %s, check it\'s power management'
' configuration: boom' % self.uuid)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_power_off_enroll_state(self, finished_mock, filters_mock,
post_hook_mock):
self.node.provision_state = 'enroll'
self.node_info.node = mock.Mock(return_value=self.node)
self.call()
self.assertTrue(post_hook_mock.called)
self.assertTrue(self.cli.node.set_power_state.called)
finished_mock.assert_called_once_with(self.node_info)
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
def test_store_data(self, swift_mock, filters_mock, post_hook_mock):
CONF.set_override('store_data', 'swift', 'processing')
swift_conn = swift_mock.return_value
name = 'inspector_data-%s' % self.uuid
expected = self.data
self.call()
swift_conn.create_object.assert_called_once_with(name, mock.ANY)
self.assertEqual(expected,
json.loads(swift_conn.create_object.call_args[0][1]))
self.assertCalledWithPatch(self.patch_props, self.cli.node.update)
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
def test_store_data_no_logs(self, swift_mock, filters_mock,
post_hook_mock):
CONF.set_override('store_data', 'swift', 'processing')
swift_conn = swift_mock.return_value
name = 'inspector_data-%s' % self.uuid
expected = self.data.copy()
self.data['logs'] = 'something'
self.call()
swift_conn.create_object.assert_called_once_with(name, mock.ANY)
self.assertEqual(expected,
json.loads(swift_conn.create_object.call_args[0][1]))
self.assertCalledWithPatch(self.patch_props, self.cli.node.update)
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
def test_store_data_location(self, swift_mock, filters_mock,
post_hook_mock):
CONF.set_override('store_data', 'swift', 'processing')
CONF.set_override('store_data_location', 'inspector_data_object',
'processing')
swift_conn = swift_mock.return_value
name = 'inspector_data-%s' % self.uuid
self.patch_props.append(
{'path': '/extra/inspector_data_object',
'value': name,
'op': 'add'}
)
expected = self.data
self.call()
swift_conn.create_object.assert_called_once_with(name, mock.ANY)
self.assertEqual(expected,
json.loads(swift_conn.create_object.call_args[0][1]))
self.assertCalledWithPatch(self.patch_props, self.cli.node.update)


View File

@ -16,10 +16,10 @@ import socket
import unittest
from ironicclient import client
from keystoneclient import client as keystone_client
from oslo_config import cfg
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector.common import keystone
from ironic_inspector.test import base
from ironic_inspector import utils
@ -27,37 +27,44 @@ from ironic_inspector import utils
CONF = cfg.CONF
@mock.patch.object(keystone, 'register_auth_opts')
@mock.patch.object(keystone, 'get_session')
@mock.patch.object(client, 'Client')
class TestGetClient(base.BaseTest):
def setUp(self):
super(TestGetClient, self).setUp()
CONF.set_override('auth_strategy', 'keystone')
ir_utils.reset_ironic_session()
self.cfg.config(auth_strategy='keystone')
self.cfg.config(os_region='somewhere', group='ironic')
self.addCleanup(ir_utils.reset_ironic_session)
@mock.patch.object(client, 'get_client')
@mock.patch.object(keystone_client, 'Client')
def test_get_client_with_auth_token(self, mock_keystone_client,
mock_client):
def test_get_client_with_auth_token(self, mock_client, mock_load,
mock_opts):
fake_token = 'token'
fake_ironic_url = 'http://127.0.0.1:6385'
mock_keystone_client().service_catalog.url_for.return_value = (
fake_ironic_url)
mock_sess = mock.Mock()
mock_sess.get_endpoint.return_value = fake_ironic_url
mock_load.return_value = mock_sess
ir_utils.get_client(fake_token)
args = {'os_auth_token': fake_token,
'ironic_url': fake_ironic_url,
'os_ironic_api_version': '1.11',
mock_sess.get_endpoint.assert_called_once_with(
endpoint_type=CONF.ironic.os_endpoint_type,
service_type=CONF.ironic.os_service_type,
region_name=CONF.ironic.os_region)
args = {'token': fake_token,
'endpoint': fake_ironic_url,
'os_ironic_api_version': ir_utils.DEFAULT_IRONIC_API_VERSION,
'max_retries': CONF.ironic.max_retries,
'retry_interval': CONF.ironic.retry_interval}
mock_client.assert_called_once_with(1, **args)
@mock.patch.object(client, 'get_client')
def test_get_client_without_auth_token(self, mock_client):
def test_get_client_without_auth_token(self, mock_client, mock_load,
mock_opts):
mock_sess = mock.Mock()
mock_load.return_value = mock_sess
ir_utils.get_client(None)
args = {'os_password': CONF.ironic.os_password,
'os_username': CONF.ironic.os_username,
'os_auth_url': CONF.ironic.os_auth_url,
'os_tenant_name': CONF.ironic.os_tenant_name,
'os_endpoint_type': CONF.ironic.os_endpoint_type,
'os_service_type': CONF.ironic.os_service_type,
'os_ironic_api_version': '1.11',
args = {'session': mock_sess,
'region_name': 'somewhere',
'os_ironic_api_version': ir_utils.DEFAULT_IRONIC_API_VERSION,
'max_retries': CONF.ironic.max_retries,
'retry_interval': CONF.ironic.retry_interval}
mock_client.assert_called_once_with(1, **args)
@ -92,7 +99,7 @@ class TestGetIpmiAddress(base.BaseTest):
driver_info={'foo': '192.168.1.1'})
self.assertIsNone(ir_utils.get_ipmi_address(node))
CONF.set_override('ipmi_address_fields', ['foo', 'bar', 'baz'])
self.cfg.config(ipmi_address_fields=['foo', 'bar', 'baz'])
ip = ir_utils.get_ipmi_address(node)
self.assertEqual(ip, '192.168.1.1')

View File

@ -288,6 +288,9 @@ class TestFirewall(test_base.NodeTest):
mock_get_client,
mock_iptables):
firewall.init()
firewall.BLACKLIST_CACHE = ['foo']
mock_get_client.return_value.port.list.return_value = [
mock.Mock(address='foobar')]
update_filters_expected_args = [
('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
@ -317,6 +320,8 @@ class TestFirewall(test_base.NodeTest):
call_args_list):
self.assertEqual(args, call[0])
self.assertIsNone(firewall.BLACKLIST_CACHE)
# Check caching enabled flag
mock_iptables.reset_mock()
@ -330,3 +335,4 @@ class TestFirewall(test_base.NodeTest):
firewall.update_filters()
mock_iptables.assert_any_call('-A', firewall.NEW_CHAIN, '-j', 'ACCEPT')
self.assertEqual({'foobar'}, firewall.BLACKLIST_CACHE)

View File

@ -189,12 +189,12 @@ class TestIntrospect(BaseTest):
cli = client_mock.return_value
cli.node.get.side_effect = exceptions.NotFound()
self.assertRaisesRegexp(utils.Error,
'Cannot find node',
'Node %s was not found' % self.uuid,
introspect.introspect, self.uuid)
cli.node.get.side_effect = exceptions.BadRequest()
self.assertRaisesRegexp(utils.Error,
'Cannot get node',
'%s: Bad Request' % self.uuid,
introspect.introspect, self.uuid)
self.assertEqual(0, self.node_info.ports.call_count)
@ -444,7 +444,7 @@ class TestAbort(BaseTest):
def test_node_not_found(self, client_mock, get_mock, filters_mock):
cli = self._prepare(client_mock)
exc = utils.Error('Not found.', code=404)
get_mock.side_effect = iter([exc])
get_mock.side_effect = exc
self.assertRaisesRegexp(utils.Error, str(exc),
introspect.abort, self.uuid)
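# Note the pattern in this hunk: bare exception instances replace the
# iter([...]) wrappers, since mock raises an exception assigned to
# side_effect directly and, unlike a one-shot iterator, keeps raising it
# on every call.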
@ -487,7 +487,7 @@ class TestAbort(BaseTest):
self.node_info.acquire_lock.return_value = True
self.node_info.started_at = time.time()
self.node_info.finished_at = None
filters_mock.side_effect = iter([Exception('Boom')])
filters_mock.side_effect = Exception('Boom')
introspect.abort(self.uuid)
@ -506,7 +506,7 @@ class TestAbort(BaseTest):
self.node_info.acquire_lock.return_value = True
self.node_info.started_at = time.time()
self.node_info.finished_at = None
cli.node.set_power_state.side_effect = iter([Exception('BadaBoom')])
cli.node.set_power_state.side_effect = Exception('BadaBoom')
introspect.abort(self.uuid)

View File

@ -0,0 +1,115 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from keystoneauth1 import exceptions as kaexc
from keystoneauth1 import loading as kaloading
from oslo_config import cfg
from ironic_inspector.common import keystone
from ironic_inspector.test import base
CONF = cfg.CONF
TESTGROUP = 'keystone_test'
class KeystoneTest(base.BaseTest):
def setUp(self):
super(KeystoneTest, self).setUp()
self.cfg.conf.register_group(cfg.OptGroup(TESTGROUP))
def test_register_auth_opts(self):
keystone.register_auth_opts(TESTGROUP)
auth_opts = ['auth_type', 'auth_section']
sess_opts = ['certfile', 'keyfile', 'insecure', 'timeout', 'cafile']
for o in auth_opts + sess_opts:
self.assertIn(o, self.cfg.conf[TESTGROUP])
self.assertEqual('password', self.cfg.conf[TESTGROUP]['auth_type'])
@mock.patch.object(keystone, '_get_auth')
def test_get_session(self, auth_mock):
keystone.register_auth_opts(TESTGROUP)
self.cfg.config(group=TESTGROUP,
cafile='/path/to/ca/file')
auth1 = mock.Mock()
auth_mock.return_value = auth1
sess = keystone.get_session(TESTGROUP)
self.assertEqual('/path/to/ca/file', sess.verify)
self.assertEqual(auth1, sess.auth)
@mock.patch('keystoneauth1.loading.load_auth_from_conf_options')
@mock.patch.object(keystone, '_get_legacy_auth')
def test__get_auth(self, legacy_mock, load_mock):
auth1 = mock.Mock()
load_mock.side_effect = [
auth1,
None,
kaexc.MissingRequiredOptions([kaloading.Opt('spam')])]
auth2 = mock.Mock()
legacy_mock.return_value = auth2
self.assertEqual(auth1, keystone._get_auth(TESTGROUP))
self.assertEqual(auth2, keystone._get_auth(TESTGROUP))
self.assertEqual(auth2, keystone._get_auth(TESTGROUP))
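# _get_auth is expected to try the new-style keystoneauth options first
# and fall back to the legacy loader when that yields None or raises
# MissingRequiredOptions, which is exactly the three-call sequence staged
# above.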
@mock.patch('keystoneauth1.loading._plugins.identity.generic.Password.'
'load_from_options')
def test__get_legacy_auth(self, load_mock):
self.cfg.register_opts(
[cfg.StrOpt('identity_url'),
cfg.StrOpt('old_user'),
cfg.StrOpt('old_password')],
group=TESTGROUP)
self.cfg.config(group=TESTGROUP,
identity_url='http://fake:5000/v3',
old_password='ham',
old_user='spam')
options = [cfg.StrOpt('old_tenant_name', default='fake'),
cfg.StrOpt('old_user')]
mapping = {'username': 'old_user',
'password': 'old_password',
'auth_url': 'identity_url',
'tenant_name': 'old_tenant_name'}
keystone._get_legacy_auth(TESTGROUP, mapping, options)
load_mock.assert_called_once_with(username='spam',
password='ham',
tenant_name='fake',
user_domain_id='default',
project_domain_id='default',
auth_url='http://fake:5000/v3')
def test__is_api_v3(self):
cases = ((False, 'http://fake:5000', None),
(False, 'http://fake:5000/v2.0', None),
(True, 'http://fake:5000/v3', None),
(True, 'http://fake:5000', '3'),
(True, 'http://fake:5000', 'v3.0'))
for case in cases:
result, url, version = case
self.assertEqual(result, keystone._is_apiv3(url, version))
def test_add_auth_options(self):
group, opts = keystone.add_auth_options([], TESTGROUP)[0]
self.assertEqual(TESTGROUP, group)
# check that there are no duplicates
names = {o.dest for o in opts}
self.assertEqual(len(names), len(opts))
# NOTE(pas-ha) checking for most standard auth and session ones only
expected = {'timeout', 'insecure', 'cafile', 'certfile', 'keyfile',
'auth_type', 'auth_url', 'username', 'password',
'tenant_name', 'project_name', 'trust_id',
'domain_id', 'user_domain_id', 'project_domain_id'}
self.assertTrue(expected.issubset(names))

View File

@ -82,7 +82,7 @@ class TestApiIntrospect(BaseAPITest):
@mock.patch.object(introspect, 'introspect', autospec=True)
def test_introspect_failed(self, introspect_mock):
introspect_mock.side_effect = iter([utils.Error("boom")])
introspect_mock.side_effect = utils.Error("boom")
res = self.app.post('/v1/introspection/%s' % self.uuid)
self.assertEqual(400, res.status_code)
self.assertEqual(
@ -98,18 +98,12 @@ class TestApiIntrospect(BaseAPITest):
def test_introspect_failed_authentication(self, introspect_mock,
auth_mock):
CONF.set_override('auth_strategy', 'keystone')
auth_mock.side_effect = iter([utils.Error('Boom', code=403)])
auth_mock.side_effect = utils.Error('Boom', code=403)
res = self.app.post('/v1/introspection/%s' % self.uuid,
headers={'X-Auth-Token': 'token'})
self.assertEqual(403, res.status_code)
self.assertFalse(introspect_mock.called)
@mock.patch.object(introspect, 'introspect', autospec=True)
def test_introspect_invalid_uuid(self, introspect_mock):
uuid_dummy = 'invalid-uuid'
res = self.app.post('/v1/introspection/%s' % uuid_dummy)
self.assertEqual(400, res.status_code)
@mock.patch.object(process, 'process', autospec=True)
class TestApiContinue(BaseAPITest):
@ -123,7 +117,7 @@ class TestApiContinue(BaseAPITest):
self.assertEqual({"result": 42}, json.loads(res.data.decode()))
def test_continue_failed(self, process_mock):
process_mock.side_effect = iter([utils.Error("boom")])
process_mock.side_effect = utils.Error("boom")
res = self.app.post('/v1/continue', data='{"foo": "bar"}')
self.assertEqual(400, res.status_code)
process_mock.assert_called_once_with({"foo": "bar"})
@ -160,7 +154,7 @@ class TestApiAbort(BaseAPITest):
def test_node_not_found(self, abort_mock):
exc = utils.Error("Not Found.", code=404)
abort_mock.side_effect = iter([exc])
abort_mock.side_effect = exc
res = self.app.post('/v1/introspection/%s/abort' % self.uuid)
@ -171,7 +165,7 @@ class TestApiAbort(BaseAPITest):
def test_abort_failed(self, abort_mock):
exc = utils.Error("Locked.", code=409)
abort_mock.side_effect = iter([exc])
abort_mock.side_effect = exc
res = self.app.post('/v1/introspection/%s/abort' % self.uuid)
@ -233,6 +227,102 @@ class TestApiGetData(BaseAPITest):
self.assertFalse(swift_conn.get_object.called)
self.assertEqual(404, res.status_code)
@mock.patch.object(ir_utils, 'get_node', autospec=True)
@mock.patch.object(main.swift, 'SwiftAPI', autospec=True)
def test_with_name(self, swift_mock, get_mock):
get_mock.return_value = mock.Mock(uuid=self.uuid)
CONF.set_override('store_data', 'swift', 'processing')
data = {
'ipmi_address': '1.2.3.4',
'cpus': 2,
'cpu_arch': 'x86_64',
'memory_mb': 1024,
'local_gb': 20,
'interfaces': {
'em1': {'mac': '11:22:33:44:55:66', 'ip': '1.2.0.1'},
}
}
swift_conn = swift_mock.return_value
swift_conn.get_object.return_value = json.dumps(data)
res = self.app.get('/v1/introspection/name1/data')
name = 'inspector_data-%s' % self.uuid
swift_conn.get_object.assert_called_once_with(name)
self.assertEqual(200, res.status_code)
self.assertEqual(data, json.loads(res.data.decode('utf-8')))
get_mock.assert_called_once_with('name1', fields=['uuid'])
@mock.patch.object(process, 'reapply', autospec=True)
class TestApiReapply(BaseAPITest):
def setUp(self):
super(TestApiReapply, self).setUp()
CONF.set_override('store_data', 'swift', 'processing')
def test_ok(self, reapply_mock):
self.app.post('/v1/introspection/%s/data/unprocessed' %
self.uuid)
reapply_mock.assert_called_once_with(self.uuid)
def test_user_data(self, reapply_mock):
res = self.app.post('/v1/introspection/%s/data/unprocessed' %
self.uuid, data='some data')
self.assertEqual(400, res.status_code)
message = json.loads(res.data.decode())['error']['message']
self.assertEqual('User data processing is not supported yet',
message)
self.assertFalse(reapply_mock.called)
def test_swift_disabled(self, reapply_mock):
CONF.set_override('store_data', 'none', 'processing')
res = self.app.post('/v1/introspection/%s/data/unprocessed' %
self.uuid)
self.assertEqual(400, res.status_code)
message = json.loads(res.data.decode())['error']['message']
self.assertEqual('Inspector is not configured to store '
'data. Set the [processing] store_data '
'configuration option to change this.',
message)
self.assertFalse(reapply_mock.called)
def test_node_locked(self, reapply_mock):
exc = utils.Error('Locked.', code=409)
reapply_mock.side_effect = exc
res = self.app.post('/v1/introspection/%s/data/unprocessed' %
self.uuid)
self.assertEqual(409, res.status_code)
message = json.loads(res.data.decode())['error']['message']
self.assertEqual(str(exc), message)
reapply_mock.assert_called_once_with(self.uuid)
def test_node_not_found(self, reapply_mock):
exc = utils.Error('Not found.', code=404)
reapply_mock.side_effect = exc
res = self.app.post('/v1/introspection/%s/data/unprocessed' %
self.uuid)
self.assertEqual(404, res.status_code)
message = json.loads(res.data.decode())['error']['message']
self.assertEqual(str(exc), message)
reapply_mock.assert_called_once_with(self.uuid)
def test_generic_error(self, reapply_mock):
exc = utils.Error('Oops', code=400)
reapply_mock.side_effect = exc
res = self.app.post('/v1/introspection/%s/data/unprocessed' %
self.uuid)
self.assertEqual(400, res.status_code)
message = json.loads(res.data.decode())['error']['message']
self.assertEqual(str(exc), message)
reapply_mock.assert_called_once_with(self.uuid)
class TestApiRules(BaseAPITest):
@mock.patch.object(rules, 'get_all')
@ -279,6 +369,28 @@ class TestApiRules(BaseAPITest):
**{'as_dict.return_value': exp})
res = self.app.post('/v1/rules', data=json.dumps(data))
self.assertEqual(201, res.status_code)
create_mock.assert_called_once_with(conditions_json='cond',
actions_json='act',
uuid=self.uuid,
description=None)
self.assertEqual(exp, json.loads(res.data.decode('utf-8')))
@mock.patch.object(rules, 'create', autospec=True)
def test_create_api_less_1_6(self, create_mock):
data = {'uuid': self.uuid,
'conditions': 'cond',
'actions': 'act'}
exp = data.copy()
exp['description'] = None
create_mock.return_value = mock.Mock(spec=rules.IntrospectionRule,
**{'as_dict.return_value': exp})
headers = {conf.VERSION_HEADER:
main._format_version((1, 5))}
res = self.app.post('/v1/rules', data=json.dumps(data),
headers=headers)
self.assertEqual(200, res.status_code)
create_mock.assert_called_once_with(conditions_json='cond',
actions_json='act',
@ -321,7 +433,7 @@ class TestApiRules(BaseAPITest):
class TestApiMisc(BaseAPITest):
@mock.patch.object(node_cache, 'get_node', autospec=True)
def test_404_expected(self, get_mock):
get_mock.side_effect = iter([utils.Error('boom', code=404)])
get_mock.side_effect = utils.Error('boom', code=404)
res = self.app.get('/v1/introspection/%s' % self.uuid)
self.assertEqual(404, res.status_code)
self.assertEqual('boom', _get_error(res))
@ -334,7 +446,7 @@ class TestApiMisc(BaseAPITest):
@mock.patch.object(node_cache, 'get_node', autospec=True)
def test_500_with_debug(self, get_mock):
CONF.set_override('debug', True)
get_mock.side_effect = iter([RuntimeError('boom')])
get_mock.side_effect = RuntimeError('boom')
res = self.app.get('/v1/introspection/%s' % self.uuid)
self.assertEqual(500, res.status_code)
self.assertEqual('Internal server error (RuntimeError): boom',
@ -343,7 +455,7 @@ class TestApiMisc(BaseAPITest):
@mock.patch.object(node_cache, 'get_node', autospec=True)
def test_500_without_debug(self, get_mock):
CONF.set_override('debug', False)
get_mock.side_effect = iter([RuntimeError('boom')])
get_mock.side_effect = RuntimeError('boom')
res = self.app.get('/v1/introspection/%s' % self.uuid)
self.assertEqual(500, res.status_code)
self.assertEqual('Internal server error',

View File

@ -336,7 +336,25 @@ class TestNodeCacheGetNode(test_base.NodeTest):
self.assertTrue(info._locked)
def test_not_found(self):
self.assertRaises(utils.Error, node_cache.get_node, 'foo')
self.assertRaises(utils.Error, node_cache.get_node,
uuidutils.generate_uuid())
def test_with_name(self):
started_at = time.time() - 42
session = db.get_session()
with session.begin():
db.Node(uuid=self.uuid, started_at=started_at).save(session)
ironic = mock.Mock()
ironic.node.get.return_value = self.node
info = node_cache.get_node('name', ironic=ironic)
self.assertEqual(self.uuid, info.uuid)
self.assertEqual(started_at, info.started_at)
self.assertIsNone(info.finished_at)
self.assertIsNone(info.error)
self.assertFalse(info._locked)
ironic.node.get.assert_called_once_with('name')
@mock.patch.object(time, 'time', lambda: 42.0)
@ -381,16 +399,6 @@ class TestNodeInfoFinished(test_base.NodeTest):
self.assertFalse(self.node_info._locked)
class TestInit(unittest.TestCase):
def setUp(self):
super(TestInit, self).setUp()
def test_ok(self):
db.init()
session = db.get_session()
db.model_query(db.Node, session=session)
class TestNodeInfoOptions(test_base.NodeTest):
def setUp(self):
super(TestNodeInfoOptions, self).setUp()

View File

@ -0,0 +1,77 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg
from ironic_inspector import node_cache
from ironic_inspector.plugins import base
from ironic_inspector.plugins import capabilities
from ironic_inspector.test import base as test_base
CONF = cfg.CONF
@mock.patch.object(node_cache.NodeInfo, 'update_capabilities', autospec=True)
class TestCapabilitiesHook(test_base.NodeTest):
hook = capabilities.CapabilitiesHook()
def test_loadable_by_name(self, mock_caps):
base.CONF.set_override('processing_hooks', 'capabilities',
'processing')
ext = base.processing_hooks_manager()['capabilities']
self.assertIsInstance(ext.obj, capabilities.CapabilitiesHook)
def test_no_data(self, mock_caps):
self.hook.before_update(self.data, self.node_info)
self.assertFalse(mock_caps.called)
def test_boot_mode(self, mock_caps):
CONF.set_override('boot_mode', True, 'capabilities')
self.inventory['boot'] = {'current_boot_mode': 'uefi'}
self.hook.before_update(self.data, self.node_info)
mock_caps.assert_called_once_with(self.node_info, boot_mode='uefi')
def test_boot_mode_disabled(self, mock_caps):
self.inventory['boot'] = {'current_boot_mode': 'uefi'}
self.hook.before_update(self.data, self.node_info)
self.assertFalse(mock_caps.called)
def test_cpu_flags(self, mock_caps):
self.inventory['cpu']['flags'] = ['fpu', 'vmx', 'aes', 'pse', 'smx']
self.hook.before_update(self.data, self.node_info)
mock_caps.assert_called_once_with(self.node_info,
cpu_vt='true',
cpu_hugepages='true',
cpu_txt='true',
cpu_aes='true')
def test_cpu_no_known_flags(self, mock_caps):
self.inventory['cpu']['flags'] = ['fpu']
self.hook.before_update(self.data, self.node_info)
self.assertFalse(mock_caps.called)
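# Judging by the assertions above, the default flag-to-capability mapping
# is roughly vmx -> cpu_vt, pse -> cpu_hugepages, smx -> cpu_txt and
# aes -> cpu_aes, while flags like fpu have no mapping.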
def test_cpu_flags_custom(self, mock_caps):
CONF.set_override('cpu_flags', {'fpu': 'new_cap'},
'capabilities')
self.inventory['cpu']['flags'] = ['fpu', 'vmx', 'aes', 'pse']
self.hook.before_update(self.data, self.node_info)
mock_caps.assert_called_once_with(self.node_info,
new_cap='true')

View File

@ -102,7 +102,10 @@ class TestEnrollNodeNotFoundHook(test_base.NodeTest):
def test__check_existing_nodes_existing_mac(self):
self.ironic.port.list.return_value = [mock.MagicMock(
address=self.macs[0], uuid='fake_port')]
introspection_data = {'macs': self.macs}
introspection_data = {
'all_interfaces': {'eth%d' % i: {'mac': m}
for i, m in enumerate(self.macs)}
}
node_driver_info = {}
self.assertRaises(utils.Error,

View File

@ -84,3 +84,14 @@ class TestExtraHardware(test_base.NodeTest):
self.hook.before_update(introspection_data, self.node_info)
self.assertFalse(patch_mock.called)
self.assertFalse(swift_conn.create_object.called)
def test__convert_edeploy_data(self, patch_mock, swift_mock):
introspection_data = [['Sheldon', 'J.', 'Plankton', '123'],
['Larry', 'the', 'Lobster', None],
['Eugene', 'H.', 'Krabs', 'The cashier']]
data = self.hook._convert_edeploy_data(introspection_data)
expected_data = {'Sheldon': {'J.': {'Plankton': 123}},
'Larry': {'the': {'Lobster': None}},
'Eugene': {'H.': {'Krabs': 'The cashier'}}}
self.assertEqual(expected_data, data)
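# A minimal sketch of the conversion exercised above (a hypothetical
# stand-in, not necessarily the plugin's real implementation): each
# four-item row [a, b, c, value] becomes nested dicts {a: {b: {c: value}}},
# with numeric strings cast to int and other values kept as-is.
def _convert_edeploy_rows(rows):
    result = {}
    for key1, key2, key3, value in rows:
        try:
            value = int(value)
        except (TypeError, ValueError):
            pass
        result.setdefault(key1, {}).setdefault(key2, {})[key3] = value
    return result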

View File

@ -0,0 +1,138 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg
from ironic_inspector import node_cache
from ironic_inspector.plugins import local_link_connection
from ironic_inspector.test import base as test_base
from ironic_inspector import utils
class TestGenericLocalLinkConnectionHook(test_base.NodeTest):
hook = local_link_connection.GenericLocalLinkConnectionHook()
def setUp(self):
super(TestGenericLocalLinkConnectionHook, self).setUp()
self.data = {
'inventory': {
'interfaces': [{
'name': 'em1', 'mac_address': '11:11:11:11:11:11',
'ipv4_address': '1.1.1.1',
'lldp': [
(0, ''),
(1, '04885a92ec5459'),
(2, '0545746865726e6574312f3138'),
(3, '0078')]
}],
'cpu': 1,
'disks': 1,
'memory': 1
},
'all_interfaces': {
'em1': {},
}
}
llc = {
'port_id': '56'
}
ports = [mock.Mock(spec=['address', 'uuid', 'local_link_connection'],
address=a, local_link_connection=llc)
for a in ('11:11:11:11:11:11',)]
self.node_info = node_cache.NodeInfo(uuid=self.uuid, started_at=0,
node=self.node, ports=ports)
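# For reference, a minimal sketch of how the TLV payloads above decode,
# assuming the standard LLDP layout (first octet = subtype, remainder =
# value); this is illustrative, not the hook's actual parsing code.
import binascii

def _decode_lldp_tlv_value(hex_payload):
    raw = bytearray(binascii.unhexlify(hex_payload))
    return raw[0], raw[1:]

# _decode_lldp_tlv_value('04885a92ec5459')
#     -> (4, MAC bytes 88:5a:92:ec:54:59), i.e. a MAC-address chassis ID
# _decode_lldp_tlv_value('0545746865726e6574312f3138')
#     -> (5, b'Ethernet1/18'), i.e. an interface-name port ID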
@mock.patch.object(node_cache.NodeInfo, 'patch_port')
def test_expected_data(self, mock_patch):
patches = [
{'path': '/local_link_connection/port_id',
'value': 'Ethernet1/18', 'op': 'add'},
{'path': '/local_link_connection/switch_id',
'value': '88-5A-92-EC-54-59', 'op': 'add'},
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patches, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch_port')
def test_invalid_chassis_id_subtype(self, mock_patch):
# The first byte of the TLV value encodes the chassis ID subtype.
# Subtype 5 ('05...') isn't supported by this plugin, so we expect
# it to skip this TLV.
self.data['inventory']['interfaces'][0]['lldp'][1] = (
1, '05885a92ec5459')
patches = [
{'path': '/local_link_connection/port_id',
'value': 'Ethernet1/18', 'op': 'add'},
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patches, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch_port')
def test_invalid_port_id_subtype(self, mock_patch):
# The first byte of the TLV value encodes the port ID subtype.
# Subtype 6 ('06...') isn't supported by this plugin, so we expect
# it to skip this TLV.
self.data['inventory']['interfaces'][0]['lldp'][2] = (
2, '0645746865726e6574312f3138')
patches = [
{'path': '/local_link_connection/switch_id',
'value': '88-5A-92-EC-54-59', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patches, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch_port')
def test_port_id_subtype_mac(self, mock_patch):
self.data['inventory']['interfaces'][0]['lldp'][2] = (
2, '03885a92ec5458')
patches = [
{'path': '/local_link_connection/port_id',
'value': '88-5A-92-EC-54-58', 'op': 'add'},
{'path': '/local_link_connection/switch_id',
'value': '88-5A-92-EC-54-59', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patches, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch_port')
def test_lldp_none(self, mock_patch):
self.data['inventory']['interfaces'][0]['lldp'] = None
patches = []
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patches, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch_port')
def test_interface_not_in_all_interfaces(self, mock_patch):
self.data['all_interfaces'] = {}
patches = []
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patches, mock_patch)
def test_no_inventory(self):
del self.data['inventory']
self.assertRaises(utils.Error, self.hook.before_update,
self.data, self.node_info)
@mock.patch.object(node_cache.NodeInfo, 'patch_port')
def test_no_overwrite(self, mock_patch):
cfg.CONF.set_override('overwrite_existing', False, group='processing')
patches = [
{'path': '/local_link_connection/switch_id',
'value': '88-5A-92-EC-54-59', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patches, mock_patch)

View File

@ -23,12 +23,9 @@ class TestRaidDeviceDetection(test_base.NodeTest):
hook = raid_device.RaidDeviceDetection()
def test_loadable_by_name(self):
names = ('raid_device', 'root_device_hint')
base.CONF.set_override('processing_hooks', ','.join(names),
'processing')
for name in names:
ext = base.processing_hooks_manager()[name]
self.assertIsInstance(ext.obj, raid_device.RaidDeviceDetection)
base.CONF.set_override('processing_hooks', 'raid_device', 'processing')
ext = base.processing_hooks_manager()['raid_device']
self.assertIsInstance(ext.obj, raid_device.RaidDeviceDetection)
def test_missing_local_gb(self):
introspection_data = {}

View File

@ -179,7 +179,7 @@ class TestSetCapabilityAction(test_base.NodeTest):
self.act.apply(self.node_info, self.params)
mock_patch.assert_called_once_with(
[{'op': 'add', 'path': '/properties/capabilities',
'value': 'cap1:val'}])
'value': 'cap1:val'}], mock.ANY)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_apply_with_existing(self, mock_patch):
@ -203,7 +203,7 @@ class TestExtendAttributeAction(test_base.NodeTest):
def test_apply(self, mock_patch):
self.act.apply(self.node_info, self.params)
mock_patch.assert_called_once_with(
[{'op': 'add', 'path': '/extra/value', 'value': [42]}])
[{'op': 'add', 'path': '/extra/value', 'value': [42]}], mock.ANY)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_apply_non_empty(self, mock_patch):
@ -211,7 +211,8 @@ class TestExtendAttributeAction(test_base.NodeTest):
self.act.apply(self.node_info, self.params)
mock_patch.assert_called_once_with(
[{'op': 'replace', 'path': '/extra/value', 'value': [0, 42]}])
[{'op': 'replace', 'path': '/extra/value', 'value': [0, 42]}],
mock.ANY)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_apply_unique_with_existing(self, mock_patch):

View File

@ -0,0 +1,324 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg
from oslo_utils import units
from ironic_inspector import node_cache
from ironic_inspector.plugins import base
from ironic_inspector.plugins import standard as std_plugins
from ironic_inspector import process
from ironic_inspector.test import base as test_base
from ironic_inspector import utils
CONF = cfg.CONF
class TestSchedulerHook(test_base.NodeTest):
def setUp(self):
super(TestSchedulerHook, self).setUp()
self.hook = std_plugins.SchedulerHook()
self.node_info = node_cache.NodeInfo(uuid=self.uuid, started_at=0,
node=self.node)
def test_hook_loadable_by_name(self):
CONF.set_override('processing_hooks', 'scheduler', 'processing')
ext = base.processing_hooks_manager()['scheduler']
self.assertIsInstance(ext.obj, std_plugins.SchedulerHook)
def test_no_root_disk(self):
del self.inventory['disks']
self.assertRaisesRegexp(utils.Error, 'disks key is missing or empty',
self.hook.before_update, self.data,
self.node_info)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_ok(self, mock_patch):
patch = [
{'path': '/properties/cpus', 'value': '4', 'op': 'add'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'path': '/properties/memory_mb', 'value': '12288', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '999', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_no_overwrite(self, mock_patch):
CONF.set_override('overwrite_existing', False, 'processing')
self.node.properties = {
'memory_mb': '4096',
'cpu_arch': 'i686'
}
patch = [
{'path': '/properties/cpus', 'value': '4', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '999', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
@mock.patch.object(node_cache.NodeInfo, 'patch')
def test_root_disk_no_spacing(self, mock_patch):
CONF.set_override('disk_partitioning_spacing', False, 'processing')
patch = [
{'path': '/properties/cpus', 'value': '4', 'op': 'add'},
{'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
{'path': '/properties/memory_mb', 'value': '12288', 'op': 'add'},
{'path': '/properties/local_gb', 'value': '1000', 'op': 'add'}
]
self.hook.before_update(self.data, self.node_info)
self.assertCalledWithPatch(patch, mock_patch)
class TestValidateInterfacesHook(test_base.NodeTest):
def setUp(self):
super(TestValidateInterfacesHook, self).setUp()
self.hook = std_plugins.ValidateInterfacesHook()
self.existing_ports = [mock.Mock(spec=['address', 'uuid'],
address=a)
for a in (self.macs[1],
'44:44:44:44:44:44')]
self.node_info = node_cache.NodeInfo(uuid=self.uuid, started_at=0,
node=self.node,
ports=self.existing_ports)
def test_hook_loadable_by_name(self):
CONF.set_override('processing_hooks', 'validate_interfaces',
'processing')
ext = base.processing_hooks_manager()['validate_interfaces']
self.assertIsInstance(ext.obj, std_plugins.ValidateInterfacesHook)
def test_wrong_add_ports(self):
CONF.set_override('add_ports', 'foobar', 'processing')
self.assertRaises(SystemExit, std_plugins.ValidateInterfacesHook)
def test_wrong_keep_ports(self):
CONF.set_override('keep_ports', 'foobar', 'processing')
self.assertRaises(SystemExit, std_plugins.ValidateInterfacesHook)
def test_no_interfaces(self):
self.assertRaisesRegexp(utils.Error,
'Hardware inventory is empty or missing',
self.hook.before_processing, {})
self.assertRaisesRegexp(utils.Error,
'Hardware inventory is empty or missing',
self.hook.before_processing, {'inventory': {}})
del self.inventory['interfaces']
self.assertRaisesRegexp(utils.Error,
'interfaces key is missing or empty',
self.hook.before_processing, self.data)
def test_only_pxe(self):
self.hook.before_processing(self.data)
self.assertEqual(self.pxe_interfaces, self.data['interfaces'])
self.assertEqual([self.pxe_mac], self.data['macs'])
self.assertEqual(self.all_interfaces, self.data['all_interfaces'])
def test_only_pxe_mac_format(self):
self.data['boot_interface'] = self.pxe_mac
self.hook.before_processing(self.data)
self.assertEqual(self.pxe_interfaces, self.data['interfaces'])
self.assertEqual([self.pxe_mac], self.data['macs'])
self.assertEqual(self.all_interfaces, self.data['all_interfaces'])
def test_only_pxe_not_found(self):
self.data['boot_interface'] = 'aa:bb:cc:dd:ee:ff'
self.assertRaisesRegexp(utils.Error, 'No suitable interfaces',
self.hook.before_processing, self.data)
def test_only_pxe_no_boot_interface(self):
del self.data['boot_interface']
self.hook.before_processing(self.data)
self.assertEqual(self.active_interfaces, self.data['interfaces'])
self.assertEqual(sorted(i['mac'] for i in
self.active_interfaces.values()),
sorted(self.data['macs']))
self.assertEqual(self.all_interfaces, self.data['all_interfaces'])
def test_only_active(self):
CONF.set_override('add_ports', 'active', 'processing')
self.hook.before_processing(self.data)
self.assertEqual(self.active_interfaces, self.data['interfaces'])
self.assertEqual(sorted(i['mac'] for i in
self.active_interfaces.values()),
sorted(self.data['macs']))
self.assertEqual(self.all_interfaces, self.data['all_interfaces'])
def test_all(self):
CONF.set_override('add_ports', 'all', 'processing')
self.hook.before_processing(self.data)
self.assertEqual(self.all_interfaces, self.data['interfaces'])
self.assertEqual(sorted(i['mac'] for i in
self.all_interfaces.values()),
sorted(self.data['macs']))
self.assertEqual(self.all_interfaces, self.data['all_interfaces'])
def test_malformed_interfaces(self):
self.inventory['interfaces'] = [
# no name
{'mac_address': '11:11:11:11:11:11', 'ipv4_address': '1.1.1.1'},
# empty
{},
]
self.assertRaisesRegexp(utils.Error, 'No interfaces supplied',
self.hook.before_processing, self.data)
def test_skipped_interfaces(self):
CONF.set_override('add_ports', 'all', 'processing')
self.inventory['interfaces'] = [
# local interface (by name)
{'name': 'lo', 'mac_address': '11:11:11:11:11:11',
'ipv4_address': '1.1.1.1'},
# local interface (by IP address)
{'name': 'em1', 'mac_address': '22:22:22:22:22:22',
'ipv4_address': '127.0.0.1'},
# no MAC provided
{'name': 'em3', 'ipv4_address': '2.2.2.2'},
# malformed MAC provided
{'name': 'em4', 'mac_address': 'foobar',
'ipv4_address': '2.2.2.2'},
]
self.assertRaisesRegexp(utils.Error, 'No suitable interfaces found',
self.hook.before_processing, self.data)
@mock.patch.object(node_cache.NodeInfo, 'delete_port', autospec=True)
def test_keep_all(self, mock_delete_port):
self.hook.before_update(self.data, self.node_info)
self.assertFalse(mock_delete_port.called)
@mock.patch.object(node_cache.NodeInfo, 'delete_port')
def test_keep_present(self, mock_delete_port):
CONF.set_override('keep_ports', 'present', 'processing')
self.data['all_interfaces'] = self.all_interfaces
self.hook.before_update(self.data, self.node_info)
mock_delete_port.assert_called_once_with(self.existing_ports[1])
@mock.patch.object(node_cache.NodeInfo, 'delete_port')
def test_keep_added(self, mock_delete_port):
CONF.set_override('keep_ports', 'added', 'processing')
self.data['macs'] = [self.pxe_mac]
self.hook.before_update(self.data, self.node_info)
mock_delete_port.assert_any_call(self.existing_ports[0])
mock_delete_port.assert_any_call(self.existing_ports[1])
class TestRootDiskSelection(test_base.NodeTest):
def setUp(self):
super(TestRootDiskSelection, self).setUp()
self.hook = std_plugins.RootDiskSelectionHook()
self.inventory['disks'] = [
{'model': 'Model 1', 'size': 20 * units.Gi, 'name': '/dev/sdb'},
{'model': 'Model 2', 'size': 5 * units.Gi, 'name': '/dev/sda'},
{'model': 'Model 3', 'size': 10 * units.Gi, 'name': '/dev/sdc'},
{'model': 'Model 4', 'size': 4 * units.Gi, 'name': '/dev/sdd'},
{'model': 'Too Small', 'size': 1 * units.Gi, 'name': '/dev/sde'},
]
self.matched = self.inventory['disks'][2].copy()
self.node_info = mock.Mock(spec=node_cache.NodeInfo,
uuid=self.uuid,
**{'node.return_value': self.node})
def test_no_hints(self):
del self.data['root_disk']
self.hook.before_update(self.data, self.node_info)
self.assertNotIn('local_gb', self.data)
self.assertNotIn('root_disk', self.data)
def test_no_inventory(self):
self.node.properties['root_device'] = {'model': 'foo'}
del self.data['inventory']
del self.data['root_disk']
self.assertRaisesRegexp(utils.Error,
'Hardware inventory is empty or missing',
self.hook.before_update,
self.data, self.node_info)
self.assertNotIn('local_gb', self.data)
self.assertNotIn('root_disk', self.data)
def test_no_disks(self):
self.node.properties['root_device'] = {'size': 10}
self.inventory['disks'] = []
self.assertRaisesRegexp(utils.Error,
'disks key is missing or empty',
self.hook.before_update,
self.data, self.node_info)
def test_one_matches(self):
self.node.properties['root_device'] = {'size': 10}
self.hook.before_update(self.data, self.node_info)
self.assertEqual(self.matched, self.data['root_disk'])
def test_all_match(self):
self.node.properties['root_device'] = {'size': 10,
'model': 'Model 3'}
self.hook.before_update(self.data, self.node_info)
self.assertEqual(self.matched, self.data['root_disk'])
def test_one_fails(self):
self.node.properties['root_device'] = {'size': 10,
'model': 'Model 42'}
del self.data['root_disk']
self.assertRaisesRegexp(utils.Error,
'No disks satisfied root device hints',
self.hook.before_update,
self.data, self.node_info)
self.assertNotIn('local_gb', self.data)
self.assertNotIn('root_disk', self.data)
def test_size_string(self):
self.node.properties['root_device'] = {'size': '10'}
self.hook.before_update(self.data, self.node_info)
self.assertEqual(self.matched, self.data['root_disk'])
def test_size_invalid(self):
for bad_size in ('foo', None, {}):
self.node.properties['root_device'] = {'size': bad_size}
self.assertRaisesRegexp(utils.Error,
'Invalid root device size hint',
self.hook.before_update,
self.data, self.node_info)
class TestRamdiskError(test_base.InventoryTest):
def setUp(self):
super(TestRamdiskError, self).setUp()
self.msg = 'BOOM'
self.bmc_address = '1.2.3.4'
self.data['error'] = self.msg
def test_no_logs(self):
self.assertRaisesRegexp(utils.Error,
self.msg,
process.process, self.data)


@@ -0,0 +1,725 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import copy
import functools
import json
import os
import shutil
import tempfile
import time
import eventlet
import fixtures
from ironicclient import exceptions
import mock
from oslo_config import cfg
from oslo_utils import uuidutils
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector import firewall
from ironic_inspector import node_cache
from ironic_inspector.plugins import base as plugins_base
from ironic_inspector.plugins import example as example_plugin
from ironic_inspector import process
from ironic_inspector.test import base as test_base
from ironic_inspector import utils
CONF = cfg.CONF
class BaseTest(test_base.NodeTest):
def setUp(self):
super(BaseTest, self).setUp()
self.started_at = time.time()
self.all_ports = [mock.Mock(uuid=uuidutils.generate_uuid(),
address=mac) for mac in self.macs]
self.ports = [self.all_ports[1]]
self.fake_result_json = 'node json'
self.cli_fixture = self.useFixture(
fixtures.MockPatchObject(ir_utils, 'get_client', autospec=True))
self.cli = self.cli_fixture.mock.return_value
class BaseProcessTest(BaseTest):
def setUp(self):
super(BaseProcessTest, self).setUp()
self.cache_fixture = self.useFixture(
fixtures.MockPatchObject(node_cache, 'find_node', autospec=True))
self.process_fixture = self.useFixture(
fixtures.MockPatchObject(process, '_process_node', autospec=True))
self.find_mock = self.cache_fixture.mock
self.node_info = node_cache.NodeInfo(
uuid=self.node.uuid,
started_at=self.started_at)
self.node_info.finished = mock.Mock()
self.find_mock.return_value = self.node_info
self.cli.node.get.return_value = self.node
self.process_mock = self.process_fixture.mock
self.process_mock.return_value = self.fake_result_json
class TestProcess(BaseProcessTest):
def test_ok(self):
res = process.process(self.data)
self.assertEqual(self.fake_result_json, res)
self.find_mock.assert_called_once_with(bmc_address=self.bmc_address,
mac=mock.ANY)
actual_macs = self.find_mock.call_args[1]['mac']
self.assertEqual(sorted(self.all_macs), sorted(actual_macs))
self.cli.node.get.assert_called_once_with(self.uuid)
self.process_mock.assert_called_once_with(
self.node, self.data, self.node_info)
def test_no_ipmi(self):
del self.inventory['bmc_address']
process.process(self.data)
self.find_mock.assert_called_once_with(bmc_address=None, mac=mock.ANY)
actual_macs = self.find_mock.call_args[1]['mac']
self.assertEqual(sorted(self.all_macs), sorted(actual_macs))
self.cli.node.get.assert_called_once_with(self.uuid)
self.process_mock.assert_called_once_with(self.node, self.data,
self.node_info)
def test_not_found_in_cache(self):
self.find_mock.side_effect = utils.Error('not found')
self.assertRaisesRegexp(utils.Error,
'not found',
process.process, self.data)
self.assertFalse(self.cli.node.get.called)
self.assertFalse(self.process_mock.called)
def test_not_found_in_ironic(self):
self.cli.node.get.side_effect = exceptions.NotFound()
self.assertRaisesRegexp(utils.Error,
'Node %s was not found' % self.uuid,
process.process, self.data)
self.cli.node.get.assert_called_once_with(self.uuid)
self.assertFalse(self.process_mock.called)
self.node_info.finished.assert_called_once_with(error=mock.ANY)
def test_already_finished(self):
self.node_info.finished_at = time.time()
self.assertRaisesRegexp(utils.Error, 'already finished',
process.process, self.data)
self.assertFalse(self.process_mock.called)
self.assertFalse(self.find_mock.return_value.finished.called)
def test_expected_exception(self):
self.process_mock.side_effect = utils.Error('boom')
self.assertRaisesRegexp(utils.Error, 'boom',
process.process, self.data)
self.node_info.finished.assert_called_once_with(error='boom')
def test_unexpected_exception(self):
self.process_mock.side_effect = RuntimeError('boom')
with self.assertRaisesRegexp(utils.Error,
'Unexpected exception') as ctx:
process.process(self.data)
self.assertEqual(500, ctx.exception.http_code)
self.node_info.finished.assert_called_once_with(
error='Unexpected exception RuntimeError during processing: boom')
def test_hook_unexpected_exceptions(self):
for ext in plugins_base.processing_hooks_manager():
patcher = mock.patch.object(ext.obj, 'before_processing',
side_effect=RuntimeError('boom'))
patcher.start()
self.addCleanup(lambda p=patcher: p.stop())
self.assertRaisesRegexp(utils.Error, 'Unexpected exception',
process.process, self.data)
self.node_info.finished.assert_called_once_with(
error=mock.ANY)
error_message = self.node_info.finished.call_args[1]['error']
self.assertIn('RuntimeError', error_message)
self.assertIn('boom', error_message)
def test_hook_unexpected_exceptions_no_node(self):
# Check that error from hooks is raised, not "not found"
self.find_mock.side_effect = utils.Error('not found')
for ext in plugins_base.processing_hooks_manager():
patcher = mock.patch.object(ext.obj, 'before_processing',
side_effect=RuntimeError('boom'))
patcher.start()
self.addCleanup(lambda p=patcher: p.stop())
self.assertRaisesRegexp(utils.Error, 'Unexpected exception',
process.process, self.data)
self.assertFalse(self.node_info.finished.called)
def test_error_if_node_not_found_hook(self):
plugins_base._NOT_FOUND_HOOK_MGR = None
self.find_mock.side_effect = utils.NotFoundInCacheError('BOOM')
self.assertRaisesRegexp(utils.Error,
'Look up error: BOOM',
process.process, self.data)
@mock.patch.object(example_plugin, 'example_not_found_hook',
autospec=True)
class TestNodeNotFoundHook(BaseProcessTest):
def test_node_not_found_hook_run_ok(self, hook_mock):
CONF.set_override('node_not_found_hook', 'example', 'processing')
plugins_base._NOT_FOUND_HOOK_MGR = None
self.find_mock.side_effect = utils.NotFoundInCacheError('BOOM')
hook_mock.return_value = node_cache.NodeInfo(
uuid=self.node.uuid,
started_at=self.started_at)
res = process.process(self.data)
self.assertEqual(self.fake_result_json, res)
hook_mock.assert_called_once_with(self.data)
def test_node_not_found_hook_run_none(self, hook_mock):
CONF.set_override('node_not_found_hook', 'example', 'processing')
plugins_base._NOT_FOUND_HOOK_MGR = None
self.find_mock.side_effect = utils.NotFoundInCacheError('BOOM')
hook_mock.return_value = None
self.assertRaisesRegexp(utils.Error,
'Node not found hook returned nothing',
process.process, self.data)
hook_mock.assert_called_once_with(self.data)
def test_node_not_found_hook_exception(self, hook_mock):
CONF.set_override('node_not_found_hook', 'example', 'processing')
plugins_base._NOT_FOUND_HOOK_MGR = None
self.find_mock.side_effect = utils.NotFoundInCacheError('BOOM')
hook_mock.side_effect = Exception('Hook Error')
self.assertRaisesRegexp(utils.Error,
'Node not found hook failed: Hook Error',
process.process, self.data)
hook_mock.assert_called_once_with(self.data)
class TestUnprocessedData(BaseProcessTest):
@mock.patch.object(process, '_store_unprocessed_data', autospec=True)
def test_save_unprocessed_data(self, store_mock):
CONF.set_override('store_data', 'swift', 'processing')
expected = copy.deepcopy(self.data)
process.process(self.data)
store_mock.assert_called_once_with(mock.ANY, expected)
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
def test_save_unprocessed_data_failure(self, swift_mock):
CONF.set_override('store_data', 'swift', 'processing')
name = 'inspector_data-%s-%s' % (
self.uuid,
process._UNPROCESSED_DATA_STORE_SUFFIX
)
swift_conn = swift_mock.return_value
swift_conn.create_object.side_effect = utils.Error('Oops')
res = process.process(self.data)
# assert store failure doesn't break processing
self.assertEqual(self.fake_result_json, res)
swift_conn.create_object.assert_called_once_with(name, mock.ANY)
@mock.patch.object(example_plugin.ExampleProcessingHook, 'before_processing',
autospec=True)
class TestStoreLogs(BaseProcessTest):
def setUp(self):
super(TestStoreLogs, self).setUp()
CONF.set_override('processing_hooks', 'ramdisk_error,example',
'processing')
self.tempdir = tempfile.mkdtemp()
self.addCleanup(lambda: shutil.rmtree(self.tempdir))
CONF.set_override('ramdisk_logs_dir', self.tempdir, 'processing')
self.logs = b'test logs'
self.data['logs'] = base64.b64encode(self.logs)
def _check_contents(self, name=None):
files = os.listdir(self.tempdir)
self.assertEqual(1, len(files))
filename = files[0]
if name is None:
self.assertTrue(filename.startswith(self.uuid),
'%s does not start with uuid' % filename)
else:
self.assertEqual(name, filename)
with open(os.path.join(self.tempdir, filename), 'rb') as fp:
self.assertEqual(self.logs, fp.read())
def test_store_on_preprocess_failure(self, hook_mock):
hook_mock.side_effect = Exception('Hook Error')
self.assertRaises(utils.Error, process.process, self.data)
self._check_contents()
def test_store_on_process_failure(self, hook_mock):
self.process_mock.side_effect = utils.Error('boom')
self.assertRaises(utils.Error, process.process, self.data)
self._check_contents()
def test_store_on_unexpected_process_failure(self, hook_mock):
self.process_mock.side_effect = RuntimeError('boom')
self.assertRaises(utils.Error, process.process, self.data)
self._check_contents()
def test_store_on_ramdisk_error(self, hook_mock):
self.data['error'] = 'boom'
self.assertRaises(utils.Error, process.process, self.data)
self._check_contents()
def test_store_find_node_error(self, hook_mock):
self.cli.node.get.side_effect = exceptions.NotFound('boom')
self.assertRaises(utils.Error, process.process, self.data)
self._check_contents()
def test_no_error_no_logs(self, hook_mock):
process.process(self.data)
self.assertEqual([], os.listdir(self.tempdir))
def test_logs_disabled(self, hook_mock):
CONF.set_override('ramdisk_logs_dir', None, 'processing')
hook_mock.side_effect = Exception('Hook Error')
self.assertRaises(utils.Error, process.process, self.data)
self.assertEqual([], os.listdir(self.tempdir))
def test_always_store_logs(self, hook_mock):
CONF.set_override('always_store_ramdisk_logs', True, 'processing')
process.process(self.data)
self._check_contents()
@mock.patch.object(process.LOG, 'exception', autospec=True)
def test_failure_to_write(self, log_mock, hook_mock):
CONF.set_override('always_store_ramdisk_logs', True, 'processing')
CONF.set_override('ramdisk_logs_dir', '/I/cannot/write/here',
'processing')
process.process(self.data)
self.assertEqual([], os.listdir(self.tempdir))
self.assertTrue(log_mock.called)
def test_directory_is_created(self, hook_mock):
shutil.rmtree(self.tempdir)
self.data['error'] = 'boom'
self.assertRaises(utils.Error, process.process, self.data)
self._check_contents()
def test_store_custom_name(self, hook_mock):
CONF.set_override('ramdisk_logs_filename_format',
'{uuid}-{bmc}-{mac}',
'processing')
self.process_mock.side_effect = utils.Error('boom')
self.assertRaises(utils.Error, process.process, self.data)
self._check_contents(name='%s-%s-%s' % (self.uuid,
self.bmc_address,
self.pxe_mac.replace(':', '')))
class TestProcessNode(BaseTest):
def setUp(self):
super(TestProcessNode, self).setUp()
CONF.set_override('processing_hooks',
'$processing.default_processing_hooks,example',
'processing')
self.validate_attempts = 5
self.data['macs'] = self.macs # validate_interfaces hook
self.ports = self.all_ports
self.new_creds = ('user', 'password')
self.patch_credentials = [
{'op': 'add', 'path': '/driver_info/ipmi_username',
'value': self.new_creds[0]},
{'op': 'add', 'path': '/driver_info/ipmi_password',
'value': self.new_creds[1]},
]
self.cli.node.get_boot_device.side_effect = (
[RuntimeError()] * self.validate_attempts + [None])
self.cli.port.create.side_effect = self.ports
self.cli.node.update.return_value = self.node
self.cli.node.list_ports.return_value = []
self.useFixture(fixtures.MockPatchObject(
firewall, 'update_filters', autospec=True))
self.useFixture(fixtures.MockPatchObject(
eventlet.greenthread, 'sleep', autospec=True))
def test_return_includes_uuid(self):
ret_val = process._process_node(self.node, self.data, self.node_info)
self.assertEqual(self.uuid, ret_val.get('uuid'))
def test_return_includes_uuid_with_ipmi_creds(self):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
ret_val = process._process_node(self.node, self.data, self.node_info)
self.assertEqual(self.uuid, ret_val.get('uuid'))
self.assertTrue(ret_val.get('ipmi_setup_credentials'))
@mock.patch.object(example_plugin.ExampleProcessingHook, 'before_update')
def test_wrong_provision_state(self, post_hook_mock):
self.node.provision_state = 'active'
self.assertRaises(utils.Error, process._process_node,
self.node, self.data, self.node_info)
self.assertFalse(post_hook_mock.called)
@mock.patch.object(example_plugin.ExampleProcessingHook, 'before_update')
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_ok(self, finished_mock, post_hook_mock):
process._process_node(self.node, self.data, self.node_info)
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[0])
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[1])
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.assertFalse(self.cli.node.validate.called)
post_hook_mock.assert_called_once_with(self.data, self.node_info)
finished_mock.assert_called_once_with(mock.ANY)
def test_port_failed(self):
self.cli.port.create.side_effect = (
[exceptions.Conflict()] + self.ports[1:])
process._process_node(self.node, self.data, self.node_info)
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[0])
self.cli.port.create.assert_any_call(node_uuid=self.uuid,
address=self.macs[1])
def test_set_ipmi_credentials(self):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
process._process_node(self.node, self.data, self.node_info)
self.cli.node.update.assert_any_call(self.uuid, self.patch_credentials)
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.cli.node.get_boot_device.assert_called_with(self.uuid)
self.assertEqual(self.validate_attempts + 1,
self.cli.node.get_boot_device.call_count)
def test_set_ipmi_credentials_no_address(self):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
del self.node.driver_info['ipmi_address']
self.patch_credentials.append({'op': 'add',
'path': '/driver_info/ipmi_address',
'value': self.bmc_address})
process._process_node(self.node, self.data, self.node_info)
self.cli.node.update.assert_any_call(self.uuid, self.patch_credentials)
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.cli.node.get_boot_device.assert_called_with(self.uuid)
self.assertEqual(self.validate_attempts + 1,
self.cli.node.get_boot_device.call_count)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_set_ipmi_credentials_timeout(self, finished_mock):
self.node_info.set_option('new_ipmi_credentials', self.new_creds)
self.cli.node.get_boot_device.side_effect = RuntimeError('boom')
process._process_node(self.node, self.data, self.node_info)
self.cli.node.update.assert_any_call(self.uuid, self.patch_credentials)
self.assertEqual(2, self.cli.node.update.call_count)
self.assertEqual(process._CREDENTIALS_WAIT_RETRIES,
self.cli.node.get_boot_device.call_count)
self.assertFalse(self.cli.node.set_power_state.called)
finished_mock.assert_called_once_with(
mock.ANY,
error='Failed to validate updated IPMI credentials for node %s, '
'node might require maintenance' % self.uuid)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_power_off_failed(self, finished_mock):
self.cli.node.set_power_state.side_effect = RuntimeError('boom')
process._process_node(self.node, self.data, self.node_info)
self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
finished_mock.assert_called_once_with(
mock.ANY,
error='Failed to power off node %s, check its power '
'management configuration: boom' % self.uuid
)
@mock.patch.object(example_plugin.ExampleProcessingHook, 'before_update')
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_power_off_enroll_state(self, finished_mock, post_hook_mock):
self.node.provision_state = 'enroll'
self.node_info.node = mock.Mock(return_value=self.node)
process._process_node(self.node, self.data, self.node_info)
self.assertTrue(post_hook_mock.called)
self.assertTrue(self.cli.node.set_power_state.called)
finished_mock.assert_called_once_with(self.node_info)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
def test_no_power_off(self, finished_mock):
CONF.set_override('power_off', False, 'processing')
process._process_node(self.node, self.data, self.node_info)
self.assertFalse(self.cli.node.set_power_state.called)
finished_mock.assert_called_once_with(self.node_info)
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
def test_store_data(self, swift_mock):
CONF.set_override('store_data', 'swift', 'processing')
swift_conn = swift_mock.return_value
name = 'inspector_data-%s' % self.uuid
expected = self.data
process._process_node(self.node, self.data, self.node_info)
swift_conn.create_object.assert_called_once_with(name, mock.ANY)
self.assertEqual(expected,
json.loads(swift_conn.create_object.call_args[0][1]))
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
def test_store_data_no_logs(self, swift_mock):
CONF.set_override('store_data', 'swift', 'processing')
swift_conn = swift_mock.return_value
name = 'inspector_data-%s' % self.uuid
self.data['logs'] = 'something'
process._process_node(self.node, self.data, self.node_info)
swift_conn.create_object.assert_called_once_with(name, mock.ANY)
self.assertNotIn('logs',
json.loads(swift_conn.create_object.call_args[0][1]))
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
def test_store_data_location(self, swift_mock):
CONF.set_override('store_data', 'swift', 'processing')
CONF.set_override('store_data_location', 'inspector_data_object',
'processing')
swift_conn = swift_mock.return_value
name = 'inspector_data-%s' % self.uuid
patch = [{'path': '/extra/inspector_data_object',
'value': name, 'op': 'add'}]
expected = self.data
process._process_node(self.node, self.data, self.node_info)
swift_conn.create_object.assert_called_once_with(name, mock.ANY)
self.assertEqual(expected,
json.loads(swift_conn.create_object.call_args[0][1]))
self.cli.node.update.assert_any_call(self.uuid, patch)
@mock.patch.object(process, '_reapply', autospec=True)
@mock.patch.object(node_cache, 'get_node', autospec=True)
class TestReapply(BaseTest):
def prepare_mocks(func):
@functools.wraps(func)
def wrapper(self, pop_mock, *args, **kw):
pop_mock.return_value = node_cache.NodeInfo(
uuid=self.node.uuid,
started_at=self.started_at)
pop_mock.return_value.finished = mock.Mock()
pop_mock.return_value.acquire_lock = mock.Mock()
return func(self, pop_mock, *args, **kw)
return wrapper
def setUp(self):
super(TestReapply, self).setUp()
CONF.set_override('store_data', 'swift', 'processing')
@prepare_mocks
def test_ok(self, pop_mock, reapply_mock):
process.reapply(self.uuid)
pop_mock.assert_called_once_with(self.uuid, locked=False)
pop_mock.return_value.acquire_lock.assert_called_once_with(
blocking=False
)
reapply_mock.assert_called_once_with(pop_mock.return_value)
@prepare_mocks
def test_locking_failed(self, pop_mock, reapply_mock):
pop_mock.return_value.acquire_lock.return_value = False
exc = utils.Error('Node locked, please, try again later')
with self.assertRaises(type(exc)) as cm:
process.reapply(self.uuid)
self.assertEqual(str(exc), str(cm.exception))
pop_mock.assert_called_once_with(self.uuid, locked=False)
pop_mock.return_value.acquire_lock.assert_called_once_with(
blocking=False
)
@mock.patch.object(example_plugin.ExampleProcessingHook, 'before_update')
@mock.patch.object(process.rules, 'apply', autospec=True)
@mock.patch.object(process.swift, 'SwiftAPI', autospec=True)
@mock.patch.object(node_cache.NodeInfo, 'finished', autospec=True)
@mock.patch.object(node_cache.NodeInfo, 'release_lock', autospec=True)
class TestReapplyNode(BaseTest):
def setUp(self):
super(TestReapplyNode, self).setUp()
CONF.set_override('processing_hooks',
'$processing.default_processing_hooks,example',
'processing')
CONF.set_override('store_data', 'swift', 'processing')
self.data['macs'] = self.macs
self.ports = self.all_ports
self.node_info = node_cache.NodeInfo(uuid=self.uuid,
started_at=self.started_at,
node=self.node)
self.node_info.invalidate_cache = mock.Mock()
self.new_creds = ('user', 'password')
self.cli.port.create.side_effect = self.ports
self.cli.node.update.return_value = self.node
self.cli.node.list_ports.return_value = []
def call(self):
process._reapply(self.node_info)
# make sure node_info lock is released after a call
self.node_info.release_lock.assert_called_once_with(self.node_info)
def prepare_mocks(fn):
@functools.wraps(fn)
def wrapper(self, release_mock, finished_mock, swift_mock,
*args, **kw):
finished_mock.side_effect = lambda *a, **kw: \
release_mock(self.node_info)
swift_client_mock = swift_mock.return_value
fn(self, finished_mock, swift_client_mock, *args, **kw)
return wrapper
@prepare_mocks
def test_ok(self, finished_mock, swift_mock, apply_mock,
post_hook_mock):
swift_name = 'inspector_data-%s' % self.uuid
swift_mock.get_object.return_value = json.dumps(self.data)
with mock.patch.object(process.LOG, 'error',
autospec=True) as log_mock:
self.call()
# no failures logged
self.assertFalse(log_mock.called)
post_hook_mock.assert_called_once_with(mock.ANY, self.node_info)
swift_mock.create_object.assert_called_once_with(swift_name,
mock.ANY)
swifted_data = json.loads(swift_mock.create_object.call_args[0][1])
self.node_info.invalidate_cache.assert_called_once_with()
apply_mock.assert_called_once_with(self.node_info, swifted_data)
# assert no power operations were performed
self.assertFalse(self.cli.node.set_power_state.called)
finished_mock.assert_called_once_with(self.node_info)
# asserting validate_interfaces was called
self.assertEqual(self.pxe_interfaces, swifted_data['interfaces'])
self.assertEqual([self.pxe_mac], swifted_data['macs'])
# assert ports were created with whatever was left
# behind by validate_interfaces
self.cli.port.create.assert_called_once_with(
node_uuid=self.uuid,
address=swifted_data['macs'][0]
)
@prepare_mocks
def test_get_incoming_data_exception(self, finished_mock,
swift_mock, apply_mock,
post_hook_mock):
exc = Exception('Oops')
swift_mock.get_object.side_effect = exc
with mock.patch.object(process.LOG, 'exception',
autospec=True) as log_mock:
self.call()
log_mock.assert_called_once_with('Encountered exception '
'while fetching stored '
'introspection data',
node_info=self.node_info)
self.assertFalse(swift_mock.create_object.called)
self.assertFalse(apply_mock.called)
self.assertFalse(post_hook_mock.called)
self.assertFalse(finished_mock.called)
@prepare_mocks
def test_prehook_failure(self, finished_mock, swift_mock,
apply_mock, post_hook_mock):
CONF.set_override('processing_hooks', 'example',
'processing')
plugins_base._HOOKS_MGR = None
exc = Exception('Failed.')
swift_mock.get_object.return_value = json.dumps(self.data)
with mock.patch.object(example_plugin.ExampleProcessingHook,
'before_processing') as before_processing_mock:
before_processing_mock.side_effect = exc
with mock.patch.object(process.LOG, 'error',
autospec=True) as log_mock:
self.call()
exc_failure = ('Unexpected exception %(exc_class)s during '
'preprocessing in hook example: %(error)s' %
{'exc_class': type(exc).__name__, 'error':
exc})
log_mock.assert_called_once_with('Pre-processing failures '
'detected reapplying '
'introspection on stored '
'data:\n%s', exc_failure,
node_info=self.node_info)
finished_mock.assert_called_once_with(self.node_info,
error=exc_failure)
# assert _reapply ended having detected the failure
self.assertFalse(swift_mock.create_object.called)
self.assertFalse(apply_mock.called)
self.assertFalse(post_hook_mock.called)
@prepare_mocks
def test_generic_exception_creating_ports(self, finished_mock,
swift_mock, apply_mock,
post_hook_mock):
swift_mock.get_object.return_value = json.dumps(self.data)
exc = Exception('Oops')
self.cli.port.create.side_effect = exc
with mock.patch.object(process.LOG, 'exception') as log_mock:
self.call()
log_mock.assert_called_once_with('Encountered exception reapplying'
' introspection on stored data',
node_info=self.node_info,
data=mock.ANY)
finished_mock.assert_called_once_with(self.node_info, error=str(exc))
self.assertFalse(swift_mock.create_object.called)
self.assertFalse(apply_mock.called)
self.assertFalse(post_hook_mock.called)


@@ -419,6 +419,20 @@ class TestApplyActions(BaseTest):
self.assertRaises(utils.Error, self.rule.apply_actions,
self.node_info, data=self.data)
def test_apply_data_non_format_value(self, mock_ext_mgr):
self.rule = rules.create(actions_json=[
{'action': 'set-attribute',
'path': '/driver_info/ipmi_address',
'value': 1}],
conditions_json=self.conditions_json
)
mock_ext_mgr.return_value.__getitem__.return_value = self.ext_mock
self.rule.apply_actions(self.node_info, data=self.data)
self.assertEqual(1, self.act_mock.apply.call_count)
self.assertFalse(self.act_mock.rollback.called)
def test_rollback(self, mock_ext_mgr):
mock_ext_mgr.return_value.__getitem__.return_value = self.ext_mock


@@ -14,23 +14,18 @@
# Mostly copied from ironic/tests/test_swift.py
import sys
try:
from unittest import mock
except ImportError:
import mock
from oslo_config import cfg
from six.moves import reload_module
from swiftclient import client as swift_client
from swiftclient import exceptions as swift_exception
from ironic_inspector.common import keystone
from ironic_inspector.common import swift
from ironic_inspector.test import base as test_base
from ironic_inspector import utils
CONF = cfg.CONF
class BaseTest(test_base.NodeTest):
def setUp(self):
@@ -52,61 +47,43 @@ class BaseTest(test_base.NodeTest):
}
@mock.patch.object(keystone, 'register_auth_opts')
@mock.patch.object(keystone, 'get_session')
@mock.patch.object(swift_client, 'Connection', autospec=True)
class SwiftTestCase(BaseTest):
def setUp(self):
super(SwiftTestCase, self).setUp()
swift.reset_swift_session()
self.swift_exception = swift_exception.ClientException('', '')
self.cfg.config(group='swift',
os_service_type='object-store',
os_endpoint_type='internalURL',
os_region='somewhere',
max_retries=2)
self.addCleanup(swift.reset_swift_session)
CONF.set_override('username', 'swift', 'swift')
CONF.set_override('tenant_name', 'tenant', 'swift')
CONF.set_override('password', 'password', 'swift')
CONF.set_override('os_auth_url', 'http://authurl/v2.0', 'swift')
CONF.set_override('os_auth_version', '2', 'swift')
CONF.set_override('max_retries', 2, 'swift')
CONF.set_override('os_service_type', 'object-store', 'swift')
CONF.set_override('os_endpoint_type', 'internalURL', 'swift')
# The constructor of SwiftAPI accepts arguments whose
# default values are values of some config options above. So reload
# the module to make sure the required values are set.
reload_module(sys.modules['ironic_inspector.common.swift'])
def test___init__(self, connection_mock):
swift.SwiftAPI(user=CONF.swift.username,
tenant_name=CONF.swift.tenant_name,
key=CONF.swift.password,
auth_url=CONF.swift.os_auth_url,
auth_version=CONF.swift.os_auth_version)
params = {'retries': 2,
'user': 'swift',
'tenant_name': 'tenant',
'key': 'password',
'authurl': 'http://authurl/v2.0',
'auth_version': '2',
'os_options': {'service_type': 'object-store',
'endpoint_type': 'internalURL'}}
connection_mock.assert_called_once_with(**params)
def test___init__defaults(self, connection_mock):
def test___init__(self, connection_mock, load_mock, opts_mock):
swift_url = 'http://swiftapi'
token = 'secret_token'
mock_sess = mock.Mock()
mock_sess.get_token.return_value = token
mock_sess.get_endpoint.return_value = swift_url
mock_sess.verify = False
load_mock.return_value = mock_sess
swift.SwiftAPI()
params = {'retries': 2,
'user': 'swift',
'tenant_name': 'tenant',
'key': 'password',
'authurl': 'http://authurl/v2.0',
'auth_version': '2',
'os_options': {'service_type': 'object-store',
'endpoint_type': 'internalURL'}}
'preauthurl': swift_url,
'preauthtoken': token,
'insecure': True}
connection_mock.assert_called_once_with(**params)
mock_sess.get_endpoint.assert_called_once_with(
service_type='object-store',
endpoint_type='internalURL',
region_name='somewhere')
def test_create_object(self, connection_mock):
swiftapi = swift.SwiftAPI(user=CONF.swift.username,
tenant_name=CONF.swift.tenant_name,
key=CONF.swift.password,
auth_url=CONF.swift.os_auth_url,
auth_version=CONF.swift.os_auth_version)
def test_create_object(self, connection_mock, load_mock, opts_mock):
swiftapi = swift.SwiftAPI()
connection_obj_mock = connection_mock.return_value
connection_obj_mock.put_object.return_value = 'object-uuid'
@@ -119,12 +96,9 @@ class SwiftTestCase(BaseTest):
'ironic-inspector', 'object', 'some-string-data', headers=None)
self.assertEqual('object-uuid', object_uuid)
def test_create_object_create_container_fails(self, connection_mock):
swiftapi = swift.SwiftAPI(user=CONF.swift.username,
tenant_name=CONF.swift.tenant_name,
key=CONF.swift.password,
auth_url=CONF.swift.os_auth_url,
auth_version=CONF.swift.os_auth_version)
def test_create_object_create_container_fails(self, connection_mock,
load_mock, opts_mock):
swiftapi = swift.SwiftAPI()
connection_obj_mock = connection_mock.return_value
connection_obj_mock.put_container.side_effect = self.swift_exception
self.assertRaises(utils.Error, swiftapi.create_object, 'object',
@@ -133,12 +107,9 @@ class SwiftTestCase(BaseTest):
'inspector')
self.assertFalse(connection_obj_mock.put_object.called)
def test_create_object_put_object_fails(self, connection_mock):
swiftapi = swift.SwiftAPI(user=CONF.swift.username,
tenant_name=CONF.swift.tenant_name,
key=CONF.swift.password,
auth_url=CONF.swift.os_auth_url,
auth_version=CONF.swift.os_auth_version)
def test_create_object_put_object_fails(self, connection_mock, load_mock,
opts_mock):
swiftapi = swift.SwiftAPI()
connection_obj_mock = connection_mock.return_value
connection_obj_mock.put_object.side_effect = self.swift_exception
self.assertRaises(utils.Error, swiftapi.create_object, 'object',
@@ -148,12 +119,8 @@ class SwiftTestCase(BaseTest):
connection_obj_mock.put_object.assert_called_once_with(
'ironic-inspector', 'object', 'some-string-data', headers=None)
def test_get_object(self, connection_mock):
swiftapi = swift.SwiftAPI(user=CONF.swift.username,
tenant_name=CONF.swift.tenant_name,
key=CONF.swift.password,
auth_url=CONF.swift.os_auth_url,
auth_version=CONF.swift.os_auth_version)
def test_get_object(self, connection_mock, load_mock, opts_mock):
swiftapi = swift.SwiftAPI()
connection_obj_mock = connection_mock.return_value
expected_obj = self.data
@@ -165,12 +132,8 @@ class SwiftTestCase(BaseTest):
'ironic-inspector', 'object')
self.assertEqual(expected_obj, swift_obj)
def test_get_object_fails(self, connection_mock):
swiftapi = swift.SwiftAPI(user=CONF.swift.username,
tenant_name=CONF.swift.tenant_name,
key=CONF.swift.password,
auth_url=CONF.swift.os_auth_url,
auth_version=CONF.swift.os_auth_version)
def test_get_object_fails(self, connection_mock, load_mock, opts_mock):
swiftapi = swift.SwiftAPI()
connection_obj_mock = connection_mock.return_value
connection_obj_mock.get_object.side_effect = self.swift_exception
self.assertRaises(utils.Error, swiftapi.get_object,


@@ -198,3 +198,29 @@ def get_auth_strategy():
if CONF.authenticate is not None:
return 'keystone' if CONF.authenticate else 'noauth'
return CONF.auth_strategy
def get_valid_macs(data):
"""Get a list of valid MACs from the introspection data."""
return [m['mac']
for m in data.get('all_interfaces', {}).values()
if m.get('mac')]
_INVENTORY_MANDATORY_KEYS = ('disks', 'memory', 'cpu', 'interfaces')
def get_inventory(data, node_info=None):
"""Get and validate the hardware inventory from introspection data."""
inventory = data.get('inventory')
# TODO(dtantsur): validate inventory using JSON schema
if not inventory:
raise Error(_('Hardware inventory is empty or missing'),
data=data, node_info=node_info)
for key in _INVENTORY_MANDATORY_KEYS:
if not inventory.get(key):
raise Error(_('Invalid hardware inventory: %s key is missing '
'or empty') % key, data=data, node_info=node_info)
return inventory
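For illustration, a minimal sketch of calling these helpers; the payload below is hypothetical and only mirrors the shape the ramdisk reports::

    from ironic_inspector import utils

    data = {
        'all_interfaces': {
            'eth0': {'mac': '11:22:33:44:55:66', 'ip': '1.2.1.2'},
            'eth1': {'mac': None},  # no MAC, filtered out
        },
        'inventory': {
            'disks': [{'name': '/dev/sda', 'size': 10 * 2 ** 30}],
            'memory': {'physical_mb': 1024},
            'cpu': {'count': 2},
            'interfaces': [{'mac_address': '11:22:33:44:55:66'}],
        },
    }

    utils.get_valid_macs(data)  # ['11:22:33:44:55:66']
    utils.get_inventory(data)   # returns the inventory dict; raises
                                # utils.Error on a missing or empty key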


@@ -0,0 +1,5 @@
---
features:
- Added the GenericLocalLinkConnectionHook processing plugin to process
LLDP data returned during inspection and set the port ID and switch ID
in an Ironic node's port local link connection information.
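A hypothetical sketch of the port update this hook produces; the exact patch paths and values are assumptions based on the note above::

    # Values derived from LLDP TLVs reported by the ramdisk (examples):
    patch = [
        {'op': 'add', 'path': '/local_link_connection/port_id',
         'value': 'Ethernet1/18'},
        {'op': 'add', 'path': '/local_link_connection/switch_id',
         'value': '88:5a:92:ec:54:59'},
    ]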


@@ -0,0 +1,4 @@
---
features:
- Added the configuration option `processing.power_off`, defaulting to True;
setting it to False leaves nodes powered on after introspection.
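A minimal sketch of flipping this option with oslo.config, mirroring how the unit tests earlier in this diff exercise it::

    from oslo_config import cfg

    CONF = cfg.CONF
    # False leaves nodes powered on; the default True powers them off.
    CONF.set_override('power_off', False, 'processing')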


@@ -0,0 +1,4 @@
---
features:
- Added a new "capabilities" processing hook detecting the CPU and boot mode
capabilities (the latter disabled by default).
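A hedged sketch of enabling boot mode detection; the option name and group are assumptions::

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.set_override('boot_mode', True, 'capabilities')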


@@ -0,0 +1,5 @@
---
fixes:
- Fixed setting a non-string 'value' field for rule actions: since a
non-string value is not a format string, a check was added to avoid
an AttributeError exception.


@@ -0,0 +1,4 @@
---
fixes:
- Fixed "/v1/continue" to return HTTP 500 on unexpected exceptions, not
HTTP 400.


@@ -0,0 +1,8 @@
---
features:
- The file name for stored ramdisk logs can now be customized via the
"ramdisk_logs_filename_format" option.
upgrade:
- The default file name for stored ramdisk logs was changed to contain only
the node UUID (if known) and the current date and time. A proper ".tar.gz"
extension is now appended.
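A minimal sketch of customizing the name, using the "{uuid}-{bmc}-{mac}" format string exercised by the TestStoreLogs tests earlier in this diff::

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.set_override('ramdisk_logs_filename_format',
                      '{uuid}-{bmc}-{mac}', 'processing')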


@@ -0,0 +1,5 @@
---
fixes:
- Fixed a problem which caused an unhandled TypeError exception to
bubble up when inspector attempted to convert some eDeploy data
to an integer.


@@ -0,0 +1,4 @@
---
fixes:
- Fixed a regression in the firewall code that caused re-running
introspection for an already inspected node to fail.


@@ -0,0 +1,10 @@
---
upgrade:
- The API "POST /v1/rules" now returns the 201 response code instead of
200 on successful creation. The API version was bumped to 1.6;
API versions below 1.6 continue to return 200.
- The default API version was changed from the minimum to the maximum
version Inspector can support.
fixes:
- Fixed the response code of the rule creation endpoint: it now
returns 201 instead of 200 on success.
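A hypothetical client-side sketch; the URL and the rule body are assumptions, with the action mirroring the set-attribute example in the rules tests earlier in this diff::

    import requests

    rule = {'conditions': [],
            'actions': [{'action': 'set-attribute',
                         'path': '/driver_info/ipmi_address',
                         'value': '1.2.3.4'}]}
    resp = requests.post(
        'http://127.0.0.1:5050/v1/rules', json=rule,
        headers={'X-OpenStack-Ironic-Inspector-API-Version': '1.6'})
    assert resp.status_code == 201  # 200 when requesting an API below 1.6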


@@ -0,0 +1,3 @@
---
fixes:
- Fixed the "is-empty" condition to return True on missing values.
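A hypothetical rule condition illustrating the fix; the field path syntax is an assumption::

    # With the fix, this is True both when the value is empty and when
    # driver_info contains no ipmi_address at all.
    condition = {'op': 'is-empty', 'field': 'node://driver_info.ipmi_address'}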


@@ -0,0 +1,17 @@
---
features:
- Ironic-Inspector now uses keystoneauth and proper auth plugins
instead of keystoneclient for communicating with Ironic and Swift.
This allows authentication to be tuned finely for each service
independently. For each service, the keystone session is created and
reused, minimizing the number of authentication requests to Keystone.
upgrade:
- Operators are advised to specify a proper keystoneauth plugin
and its appropriate settings in the [ironic] and [swift] config sections.
Backward compatibility with the previous authentication options is
included. Using authentication information for Ironic and Swift from
the [keystone_authtoken] config section is no longer supported.
deprecations:
- Most of the current authentication options for either Ironic or Swift
are deprecated and will be removed in a future release. Please configure
keystoneauth auth plugin authentication instead.
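A hedged illustration of per-service settings; the option names follow standard keystoneauth plugins and are assumptions here::

    from oslo_config import cfg

    CONF = cfg.CONF
    for group in ('ironic', 'swift'):
        CONF.set_override('auth_type', 'password', group)
        CONF.set_override('auth_url', 'http://127.0.0.1:5000/v3', group)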


@@ -0,0 +1,6 @@
---
fixes:
- The lookup procedure now uses all valid MACs, not only the MAC(s) that
will be used for creating port(s).
- The "enroll" node_not_found_hook now uses all valid MACs to check node
existence, not only the MAC(s) that will be used for creating port(s).


@@ -0,0 +1,5 @@
---
features:
- Added support for using Ironic node names in the API instead of UUIDs.
Note that using node names in the introspection status API requires
a call to Ironic to be made by the service.
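A hypothetical status query by node name; the host and port are assumptions::

    import requests

    # Previously only a UUID was accepted in this URL:
    resp = requests.get('http://127.0.0.1:5050/v1/introspection/node-0')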


@@ -0,0 +1,5 @@
---
features:
- Database migration downgrades were removed. More information about
database migration/rollback can be found at
http://docs.openstack.org/openstack-ops/content/ops_upgrades-roll-back.html


@@ -0,0 +1,7 @@
---
prelude: >
Starting with this release only ironic-python-agent (IPA) is supported
as an introspection ramdisk.
upgrade:
- Support for the old bash-based ramdisk was removed. Please switch to IPA
before upgrading.


@@ -0,0 +1,3 @@
---
upgrade:
- Removed the deprecated "root_device_hint" alias for the "raid_device" hook.


@@ -0,0 +1,11 @@
---
fixes:
- The ramdisk logs are now stored for all preprocessing errors, not only
those reported by the ramdisk itself. This required moving the ramdisk
logs handling from the "ramdisk_error" plugin to the generic processing
code.
upgrade:
- Handling of ramdisk logs was moved out of the "ramdisk_error" plugin, so
disabling it will no longer disable handling ramdisk logs. As before,
you can set the "ramdisk_logs_dir" option to an empty value (the default)
to disable storing ramdisk logs.


@@ -0,0 +1,4 @@
---
features:
- Introduced the API "POST /v1/introspection/UUID/data/unprocessed"
for reapplying introspection to stored data.
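A hypothetical sketch of calling the new endpoint; the host, port and UUID are assumptions::

    import requests

    node_uuid = '05ccda19-581b-49bf-8f5a-6ded99701d87'  # example UUID
    resp = requests.post('http://127.0.0.1:5050/v1/introspection/'
                         '%s/data/unprocessed' % node_uuid)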


@@ -0,0 +1,4 @@
---
fixes:
- The "size" root device hint is now always converted to an integer for
consistency with IPA.
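A trivial sketch of the coercion, matching the test_size_string case earlier in this diff::

    root_device = {'size': '10'}  # a string hint, e.g. set via the API
    root_device['size'] = int(root_device['size'])  # now always coerced
    assert root_device == {'size': 10}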


@@ -5,8 +5,8 @@
.. toctree::
:maxdepth: 1
Current (2.3.0 - unreleased) <current-series>
Mitaka (2.3.0 - unreleased) <mitaka>
Current (3.3.0 - unreleased) <current-series>
Mitaka (2.3.0 - 3.2.x) <mitaka>
Liberty (2.0.0 - 2.2.x) <liberty>


@@ -1,6 +1,6 @@
============================
Mitaka Series Release Notes
============================
=============================
Mitaka Series Release Notes
=============================
.. release-notes::
:branch: origin/master
:branch: origin/stable/mitaka


@@ -1,27 +1,27 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
alembic>=0.8.0 # MIT
Babel>=1.3 # BSD
alembic>=0.8.4 # MIT
Babel>=2.3.4 # BSD
eventlet!=0.18.3,>=0.18.2 # MIT
Flask<1.0,>=0.10 # BSD
futurist>=0.11.0 # Apache-2.0
Flask!=0.11,<1.0,>=0.10 # BSD
futurist!=0.15.0,>=0.11.0 # Apache-2.0
jsonpath-rw<2.0,>=1.2.0 # Apache-2.0
jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
keystonemiddleware!=4.1.0,>=4.0.0 # Apache-2.0
keystoneauth1>=2.10.0 # Apache-2.0
keystonemiddleware!=4.1.0,!=4.5.0,>=4.0.0 # Apache-2.0
netaddr!=0.7.16,>=0.7.12 # BSD
pbr>=1.6 # Apache-2.0
python-ironicclient>=1.1.0 # Apache-2.0
python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
python-ironicclient>=1.6.0 # Apache-2.0
python-swiftclient>=2.2.0 # Apache-2.0
oslo.concurrency>=3.5.0 # Apache-2.0
oslo.config>=3.7.0 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.config>=3.14.0 # Apache-2.0
oslo.db>=4.1.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
oslo.middleware>=3.0.0 # Apache-2.0
oslo.rootwrap>=2.0.0 # Apache-2.0
oslo.utils>=3.5.0 # Apache-2.0
oslo.rootwrap>=5.0.0 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
six>=1.9.0 # MIT
stevedore>=1.5.0 # Apache-2.0
stevedore>=1.16.0 # Apache-2.0
SQLAlchemy<1.1.0,>=1.0.10 # MIT


@@ -31,8 +31,8 @@ ironic_inspector.hooks.processing =
example = ironic_inspector.plugins.example:ExampleProcessingHook
extra_hardware = ironic_inspector.plugins.extra_hardware:ExtraHardwareHook
raid_device = ironic_inspector.plugins.raid_device:RaidDeviceDetection
# Deprecated name for raid_device, don't confuse with root_disk_selection
root_device_hint = ironic_inspector.plugins.raid_device:RootDeviceHintHook
capabilities = ironic_inspector.plugins.capabilities:CapabilitiesHook
local_link_connection = ironic_inspector.plugins.local_link_connection:GenericLocalLinkConnectionHook
ironic_inspector.hooks.node_not_found =
example = ironic_inspector.plugins.example:example_not_found_hook
enroll = ironic_inspector.plugins.discovery:enroll_node_not_found_hook
@@ -58,9 +58,13 @@ oslo.config.opts =
ironic_inspector.common.ironic = ironic_inspector.common.ironic:list_opts
ironic_inspector.common.swift = ironic_inspector.common.swift:list_opts
ironic_inspector.plugins.discovery = ironic_inspector.plugins.discovery:list_opts
ironic_inspector.plugins.capabilities = ironic_inspector.plugins.capabilities:list_opts
oslo.config.opts.defaults =
ironic_inspector = ironic_inspector.conf:set_config_defaults
tempest.test_plugins =
ironic_inspector_tests = ironic_inspector.test.inspector_tempest_plugin.plugin:InspectorTempestPlugin
[compile_catalog]
directory = ironic_inspector/locale
domain = ironic_inspector


@@ -4,11 +4,11 @@
coverage>=3.6 # Apache-2.0
doc8 # Apache-2.0
hacking<0.11,>=0.10.0
mock>=1.2 # BSD
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
mock>=2.0 # BSD
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
reno>=0.1.1 # Apache2
fixtures>=1.3.1 # Apache-2.0/BSD
reno>=1.8.0 # Apache2
fixtures>=3.0.0 # Apache-2.0/BSD
testresources>=0.2.4 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
oslotest>=1.10.0 # Apache-2.0

tox.ini

@@ -3,25 +3,44 @@ envlist = py34,py27,pep8,func
[testenv]
usedevelop = True
install_command = pip install -U -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
deps =
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/plugin-requirements.txt
commands =
coverage run --branch --include "ironic_inspector*" -m unittest discover ironic_inspector.test
coverage run --branch --include "ironic_inspector*" -m unittest discover ironic_inspector.test.unit
coverage report -m --fail-under 90
setenv = PYTHONDONTWRITEBYTECODE=1
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
[testenv:venv]
# NOTE(amrith) The setting of the install_command in this location
# is only required because currently infra does not actually
# support constraints files for the environment job, and while
# the environment variable UPPER_CONSTRAINTS_FILE is set, there's
# no file there. It can be removed when infra changes this.
install_command = pip install -U {opts} {packages}
commands = {posargs}
[testenv:releasenotes]
# NOTE(amrith) The setting of the install_command in this location
# is only required because currently infra does not actually
# support constraints files for the release notes job, and while
# the environment variable UPPER_CONSTRAINTS_FILE is set, there's
# no file there. It can be removed when infra changes this.
install_command = pip install -U {opts} {packages}
envdir = {toxworkdir}/venv
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:cover]
# NOTE(amrith) The setting of the install_command in this location
# is only required because currently infra does not actually
# support constraints files for the cover job, and while
# the environment variable UPPER_CONSTRAINTS_FILE is set, there's
# no file there. It can be removed when infra changes this.
install_command = pip install -U {opts} {packages}
commands =
coverage run --branch --include "ironic_inspector*" -m unittest discover ironic_inspector.test
coverage run --branch --include "ironic_inspector*" -m unittest discover ironic_inspector.test.unit
coverage report -m
[testenv:pep8]