ironic 6.1.0 release
meta:version: 6.1.0
meta:series: newton
meta:release-type: release
meta:announce: openstack-announce@lists.openstack.org
meta:pypi: no
meta:first: no
meta:release:Author: Jim Rollenhagen <jim@jimrollenhagen.com>
meta:release:Commit: Davanum Srinivas <davanum@gmail.com>
meta:release:Change-Id: Ib2538025a6d738a481143660c24cc84e72b94d56

-----BEGIN PGP SIGNATURE-----
iEYEABECAAYFAlesa3YACgkQgNg6eWEDv1mu8gCcC4aIWIURFHkQoHMDfKSLQhe7
IycAoMjZQm9ywk9lZmIBbP7+WjclkRR+
=oR3n
-----END PGP SIGNATURE-----

Merge tag '6.1.0' into debian/newton

ironic 6.1.0 release

* New upstream release.
* Fixed (build-)depends for this release.
* Using OpenStack's Gerrit as VCS URLs.
* Add upstream patch:
  - Fix-broken-unit-tests-for-get_ilo_object.patch

Change-Id: I08dd4ef9bf4cee9067fb0b742dded30a50bae3ed
This commit is contained in: commit 751affc19b
@@ -1,4 +1,5 @@
 [gerrit]
 host=review.openstack.org
 port=29418
-project=openstack/ironic.git
+project=openstack/deb-ironic.git
+defaultbranch=debian/newton
@@ -81,6 +81,9 @@ Response
    - uuid: uuid
    - address: port_address
    - node_uuid: node_uuid
+   - local_link_connection: local_link_connection
+   - pxe_enabled: pxe_enabled
+   - internal_info: internal_info
    - extra: extra
    - created_at: created_at
    - updated_at: updated_at
@@ -32,6 +32,11 @@ API microversion 1.8 added the ``fields`` Request parameter. When specified,
 this causes the content of the Response to include only the specified fields,
 rather than the default set.
 
+API microversion 1.19 added the ``pxe_enabled`` and ``local_link_connection``
+fields.
+
+.. TODO: add pxe_enabled and local_link_connection to all sample files
+
 Normal response code: 200
 
 Request
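The ``fields`` behaviour introduced in API microversion 1.8 can be modelled with a short sketch. This is an illustration only, not ironic's implementation; the port dict and `select_fields` helper are hypothetical:

```python
# Illustrative model of the ``fields`` request parameter (microversion 1.8):
# when given, only the named attributes of each resource are returned;
# otherwise the default field set is used.

DEFAULT_PORT_FIELDS = (
    "uuid", "address", "node_uuid", "local_link_connection",
    "pxe_enabled", "internal_info", "extra", "created_at", "updated_at",
)

def select_fields(port, fields=None):
    """Return only the requested fields of a port, or the default set."""
    wanted = fields if fields is not None else DEFAULT_PORT_FIELDS
    return {name: port[name] for name in wanted if name in port}

port = {
    "uuid": "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
    "address": "11:11:11:11:11:11",
    "node_uuid": "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
    "pxe_enabled": True,
}

# Models GET /v1/ports?fields=uuid,address
print(select_fields(port, ["uuid", "address"]))
```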
@@ -97,6 +102,9 @@ Response
    - uuid: uuid
    - address: port_address
    - node_uuid: node_uuid
+   - local_link_connection: local_link_connection
+   - pxe_enabled: pxe_enabled
+   - internal_info: internal_info
    - extra: extra
    - created_at: created_at
    - updated_at: updated_at
@@ -143,6 +151,9 @@ Response
    - uuid: uuid
    - address: port_address
    - node_uuid: node_uuid
+   - local_link_connection: local_link_connection
+   - pxe_enabled: pxe_enabled
+   - internal_info: internal_info
    - extra: extra
    - created_at: created_at
    - updated_at: updated_at
@@ -183,6 +194,9 @@ Response
    - uuid: uuid
    - address: port_address
    - node_uuid: node_uuid
+   - local_link_connection: local_link_connection
+   - pxe_enabled: pxe_enabled
+   - internal_info: internal_info
    - extra: extra
    - created_at: created_at
    - updated_at: updated_at
@@ -226,6 +240,9 @@ Response
    - uuid: uuid
    - address: port_address
    - node_uuid: node_uuid
+   - local_link_connection: local_link_connection
+   - pxe_enabled: pxe_enabled
+   - internal_info: internal_info
    - extra: extra
    - created_at: created_at
    - updated_at: updated_at
@@ -86,11 +86,13 @@ fields:
     type: array
 limit:
     description: |
-        Requests a page size of items. Returns a number
-        of items up to a limit value. Use the ``limit`` parameter to make
-        an initial limited request and use the ID of the last-seen item
-        from the response as the ``marker`` parameter value in a
-        subsequent limited request.
+        Requests a page size of items. Returns a number of items up to a limit
+        value. Use the ``limit`` parameter to make an initial limited request and
+        use the ID of the last-seen item from the response as the ``marker``
+        parameter value in a subsequent limited request. This value cannot be
+        larger than the ``max_limit`` option in the ``[api]`` section of the
+        configuration. If it is higher than ``max_limit``, only ``max_limit``
+        resources will be returned.
     in: query
     required: false
     type: integer
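The ``limit``/``marker`` paging that this description documents can be sketched as follows. This is a hypothetical model, not ironic's code: the effective page size is clamped to the configured ``max_limit``, and the ``marker`` is the ID of the last item from the previous page.

```python
# Illustrative model of ``limit``/``marker`` paging with a ``max_limit``
# clamp. MAX_LIMIT stands in for the [api]/max_limit configuration option.

MAX_LIMIT = 3

def list_page(items, limit, marker=None):
    """Return one page of items after ``marker``, at most min(limit, MAX_LIMIT)."""
    effective = min(limit, MAX_LIMIT)
    start = 0
    if marker is not None:
        start = items.index(marker) + 1
    return items[start:start + effective]

items = ["p1", "p2", "p3", "p4", "p5"]
page1 = list_page(items, limit=10)                   # limit clamped to MAX_LIMIT
page2 = list_page(items, limit=10, marker=page1[-1])
print(page1, page2)
```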
@@ -336,6 +338,13 @@ instance_uuid:
     in: body
     required: true
     type: string
+internal_info:
+    description: |
+        Internal metadata set and stored by the Port. This field is read-only.
+        Added in API microversion 1.18.
+    in: body
+    required: true
+    type: JSON
 last_error:
     description: |
         Any error from the most recent (last) transaction that started but failed to finish.
@@ -349,6 +358,17 @@ links:
     in: body
     required: true
     type: array
+local_link_connection:
+    description: |
+        The Port binding profile. If specified, must contain ``switch_id`` (only
+        a MAC address or an OpenFlow based datapath_id of the switch are accepted
+        in this field) and ``port_id`` (identifier of the physical port on the
+        switch to which node's port is connected to) fields. ``switch_info`` is an
+        optional string field to be used to store any vendor-specific information.
+        Added in API microversion 1.19.
+    in: body
+    required: true
+    type: JSON
 maintenance:
     description: |
         Whether or not this Node is currently in "maintenance mode". Setting a Node
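The constraints in the ``local_link_connection`` description above (``switch_id`` must be a MAC address or an OpenFlow datapath_id, ``port_id`` is required, ``switch_info`` is optional) can be sketched as a small validator. The helper is hypothetical, not the validation code ironic actually uses:

```python
# Illustrative validator for a local_link_connection value, per the field
# description: switch_id is a MAC or a 16-hex-digit datapath_id, port_id is
# required, switch_info is an optional free-form string.
import re

MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$", re.IGNORECASE)
DATAPATH_RE = re.compile(r"^[0-9a-f]{16}$", re.IGNORECASE)

def validate_llc(llc):
    # reject unknown keys
    if set(llc) - {"switch_id", "port_id", "switch_info"}:
        return False
    switch_id = llc.get("switch_id", "")
    if not (MAC_RE.match(switch_id) or DATAPATH_RE.match(switch_id)):
        return False
    return bool(llc.get("port_id"))

good = {"switch_id": "0a:1b:2c:3d:4e:5f", "port_id": "Ethernet3/1",
        "switch_info": "switch1"}
print(validate_llc(good))                    # True
print(validate_llc({"switch_id": "bogus"}))  # False
```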
@@ -485,6 +505,13 @@ provision_updated_at:
     in: body
     required: true
     type: string
+pxe_enabled:
+    description: |
+        Indicates whether PXE is enabled or disabled on the Port. Added in API
+        microversion 1.19.
+    in: body
+    required: true
+    type: boolean
 r_driver_name:
     description: |
         The name of the driver used to manage this Node.
@@ -16,7 +16,14 @@
             }
         ],
         "created_at" : "2016-05-05T22:30:57+00:00",
-        "uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2"
+        "uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
+        "pxe_enabled": true,
+        "local_link_connection": {
+            "switch_id": "0a:1b:2c:3d:4e:5f",
+            "port_id": "Ethernet3/1",
+            "switch_info": "switch1"
+        },
+        "internal_info": {}
     }
 ]
 }
@@ -1,4 +1,9 @@
 {
     "node_uuid": "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
-    "address": "11:11:11:11:11:11"
+    "address": "11:11:11:11:11:11",
+    "local_link_connection": {
+        "switch_id": "0a:1b:2c:3d:4e:5f",
+        "port_id": "Ethernet3/1",
+        "switch_info": "switch1"
+    }
 }
@@ -14,5 +14,12 @@
     "address" : "11:11:11:11:11:11",
     "updated_at" : null,
     "node_uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
-    "uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2"
+    "uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
+    "pxe_enabled": true,
+    "local_link_connection": {
+        "switch_id": "0a:1b:2c:3d:4e:5f",
+        "port_id": "Ethernet3/1",
+        "switch_info": "switch1"
+    },
+    "internal_info": {}
 }
@@ -16,7 +16,14 @@
                 "rel" : "bookmark"
             }
         ],
-        "created_at" : "2016-05-05T22:30:57+00:00"
+        "created_at" : "2016-05-05T22:30:57+00:00",
+        "pxe_enabled": true,
+        "local_link_connection": {
+            "switch_id": "0a:1b:2c:3d:4e:5f",
+            "port_id": "Ethernet3/1",
+            "switch_info": "switch1"
+        },
+        "internal_info": {}
     }
 ]
 }
@@ -14,5 +14,12 @@
             "rel" : "bookmark"
         }
     ],
-    "created_at" : "2016-05-05T22:30:57+00:00"
+    "created_at" : "2016-05-05T22:30:57+00:00",
+    "pxe_enabled": true,
+    "local_link_connection": {
+        "switch_id": "0a:1b:2c:3d:4e:5f",
+        "port_id": "Ethernet3/1",
+        "switch_info": "switch1"
+    },
+    "internal_info": {}
 }
@@ -1,3 +1,13 @@
+ironic (1:6.1.0-1) experimental; urgency=medium
+
+  * New upstream release.
+  * Fixed (build-)depends for this release.
+  * Using OpenStack's Gerrit as VCS URLs.
+  * Add upstream patch:
+    - Fix-broken-unit-tests-for-get_ilo_object.patch
+
+ -- Thomas Goirand <zigo@debian.org>  Mon, 19 Sep 2016 23:06:38 +0200
+
 ironic (1:6.0.0-1) experimental; urgency=medium
 
   * New upstream release.
@@ -25,13 +25,13 @@ Build-Depends-Indep: alembic (>= 0.8.4),
                      python-glanceclient (>= 1:2.0.0),
                      python-greenlet,
                      python-hacking (>= 0.10.0),
-                     python-ironic-lib (>= 1.3.0),
-                     python-ironicclient (>= 1.1.0),
+                     python-ironic-lib (>= 2.0.0),
+                     python-ironicclient (>= 1.6.0),
                      python-iso8601 (>= 0.1.11),
                      python-jinja2 (>= 2.8),
                      python-jsonpatch,
                      python-jsonschema,
-                     python-keystoneclient (>= 1:2.0.0),
+                     python-keystoneauth1 (>= 2.10.0),
                      python-keystonemiddleware (>= 4.0.0),
                      python-mock (>= 2.0),
                      python-mysqldb,
@@ -39,7 +39,7 @@ Build-Depends-Indep: alembic (>= 0.8.4),
                      python-neutronclient (>= 1:4.2.0),
                      python-os-testr (>= 0.7.0),
                      python-oslo.concurrency (>= 3.8.0),
-                     python-oslo.config (>= 1:3.10.0),
+                     python-oslo.config (>= 1:3.14.0),
                      python-oslo.context (>= 2.4.0),
                      python-oslo.db (>= 4.1.0),
                      python-oslo.i18n (>= 2.1.0),
@@ -47,17 +47,17 @@ Build-Depends-Indep: alembic (>= 0.8.4),
                      python-oslo.messaging (>= 5.2.0),
                      python-oslo.middleware (>= 3.0.0),
                      python-oslo.policy (>= 1.9.0),
-                     python-oslo.rootwrap (>= 2.0.0),
+                     python-oslo.rootwrap (>= 5.0.0),
                      python-oslo.serialization (>= 2.0.0),
                      python-oslo.service (>= 1.10.0),
-                     python-oslo.utils (>= 3.11.0),
-                     python-oslo.versionedobjects (>= 1.9.1),
+                     python-oslo.utils (>= 3.16.0),
+                     python-oslo.versionedobjects (>= 1.13.0),
                      python-oslosphinx (>= 2.5.0),
                      python-oslotest (>= 1.10.0),
                      python-paramiko (>= 2.0),
                      python-pecan (>= 1.0.0),
                      python-pil,
-                     python-proliantutils (>= 2.1.5),
+                     python-proliantutils (>= 2.1.7),
                      python-psutil,
                      python-psycopg2 (>= 2.5),
                      python-pyghmi (>= 0.8.0),
@@ -71,7 +71,7 @@ Build-Depends-Indep: alembic (>= 0.8.4),
                      python-sphinxcontrib-pecanwsme,
                      python-sphinxcontrib.seqdiag,
                      python-sqlalchemy (>= 1.0.10),
-                     python-stevedore (>= 1.10.0),
+                     python-stevedore (>= 1.16.0),
                      python-swiftclient (>= 1:2.2.0),
                      python-testresources,
                      python-testtools (>= 1.4.0),
@@ -83,8 +83,8 @@ Build-Depends-Indep: alembic (>= 0.8.4),
                      testrepository,
                      websockify (>= 0.8.0),
 Standards-Version: 3.9.8
-Vcs-Browser: https://anonscm.debian.org/cgit/openstack/ironic.git/
-Vcs-Git: https://anonscm.debian.org/git/openstack/ironic.git
+Vcs-Browser: https://git.openstack.org/cgit/openstack/deb-ironic
+Vcs-Git: https://git.openstack.org/openstack/deb-ironic
 Homepage: https://github.com/openstack/ironic
 
 Package: python-ironic
@@ -97,16 +97,16 @@ Depends: alembic (>= 0.8.4),
          python-futurist (>= 0.11.0),
          python-glanceclient (>= 1:2.0.0),
          python-greenlet,
-         python-ironic-lib (>= 1.3.0),
+         python-ironic-lib (>= 2.0.0),
          python-jinja2 (>= 2.8),
          python-jsonpatch,
          python-jsonschema,
-         python-keystoneclient (>= 1:2.0.0),
+         python-keystoneauth1 (>= 2.10.0),
          python-keystonemiddleware (>= 4.0.0),
          python-netaddr (>= 0.7.12),
          python-neutronclient (>= 1:4.2.0),
          python-oslo.concurrency (>= 3.8.0),
-         python-oslo.config (>= 1:3.10.0),
+         python-oslo.config (>= 1:3.14.0),
          python-oslo.context (>= 2.4.0),
          python-oslo.db (>= 4.1.0),
          python-oslo.i18n (>= 2.1.0),
@@ -114,15 +114,15 @@ Depends: alembic (>= 0.8.4),
          python-oslo.messaging (>= 5.2.0),
          python-oslo.middleware (>= 3.0.0),
          python-oslo.policy (>= 1.9.0),
-         python-oslo.rootwrap (>= 2.0.0),
+         python-oslo.rootwrap (>= 5.0.0),
          python-oslo.serialization (>= 2.0.0),
          python-oslo.service (>= 1.10.0),
-         python-oslo.utils (>= 3.11.0),
-         python-oslo.versionedobjects (>= 1.9.1),
+         python-oslo.utils (>= 3.16.0),
+         python-oslo.versionedobjects (>= 1.13.0),
          python-paramiko (>= 2.0),
          python-pbr (>= 1.8),
          python-pecan (>= 1.0.0),
-         python-proliantutils (>= 2.1.5),
+         python-proliantutils (>= 2.1.7),
          python-psutil,
          python-psycopg2 (>= 2.5),
          python-pyghmi (>= 0.8.0),
@@ -134,7 +134,7 @@ Depends: alembic (>= 0.8.4),
          python-sendfile,
          python-six (>= 1.9.0),
          python-sqlalchemy (>= 1.0.10),
-         python-stevedore (>= 1.10.0),
+         python-stevedore (>= 1.16.0),
          python-swiftclient (>= 1:2.2.0),
          python-tz,
          python-webob,
@@ -0,0 +1,41 @@
+Description: Fix broken unit tests for get_ilo_object
+ First, the tested function signature was wrong. We didn't catch it in gate,
+ as we mock proliantutils, but it does break e.g. Debian package build.
+ .
+ Second, the arguments override was not actually working. We didn't catch
+ it in gate, because the new values were the same as the defaults.
+Author: Dmitry Tantsur <divius.inside@gmail.com>
+Date: Wed, 21 Sep 2016 13:45:21 +0000 (+0200)
+X-Git-Url: https://review.openstack.org/gitweb?p=openstack%2Fironic.git;a=commitdiff_plain;h=87327803772ef7c80f7981d32851b90253d5c655
+Bug-Ubuntu: #1626089
+Change-Id: I2e4899e368b0b882dcd59bf33fdca98f47e5b405
+Origin: upstream, https://review.openstack.org/374161
+Last-Update: 2016-09-21
+
+diff --git a/ironic/tests/unit/drivers/modules/ilo/test_common.py b/ironic/tests/unit/drivers/modules/ilo/test_common.py
+index d61a7a0..c1bf3cb 100644
+--- a/ironic/tests/unit/drivers/modules/ilo/test_common.py
++++ b/ironic/tests/unit/drivers/modules/ilo/test_common.py
+@@ -154,9 +154,10 @@ class IloCommonMethodsTestCase(db_base.DbTestCase):
+     @mock.patch.object(ilo_client, 'IloClient', spec_set=True,
+                        autospec=True)
+     def _test_get_ilo_object(self, ilo_client_mock, isFile_mock, ca_file=None):
+-        self.info['client_timeout'] = 60
+-        self.info['client_port'] = 443
++        self.info['client_timeout'] = 600
++        self.info['client_port'] = 4433
+         self.info['ca_file'] = ca_file
++        self.node.driver_info = self.info
+         ilo_client_mock.return_value = 'ilo_object'
+         returned_ilo_object = ilo_common.get_ilo_object(self.node)
+         ilo_client_mock.assert_called_with(
+@@ -164,7 +165,8 @@ class IloCommonMethodsTestCase(db_base.DbTestCase):
+             self.info['ilo_username'],
+             self.info['ilo_password'],
+             self.info['client_timeout'],
+-            self.info['client_port'])
++            self.info['client_port'],
++            cacert=self.info['ca_file'])
+         self.assertEqual('ilo_object', returned_ilo_object)
+ 
+     def test_get_ilo_object_cafile(self):
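The patch above relies on the test mocking `IloClient` with `autospec=True`, which makes a mocked call fail when the real signature drifts. A standalone sketch of that mechanism (using a stand-in class, not proliantutils' `IloClient`):

```python
# Sketch of why autospec'd mocks catch signature drift: calling the mock
# with arguments the real callable would reject raises TypeError instead
# of silently passing. ``Client`` is a hypothetical stand-in class.
from unittest import mock

class Client:
    """Stand-in for a driver client; not proliantutils' IloClient."""
    def __init__(self, host, username, password, timeout, port, cacert=None):
        pass

# create_autospec builds a mock that enforces Client's call signature.
client_mock = mock.create_autospec(Client)

client_mock("1.2.3.4", "admin", "secret", 600, 4433, cacert=None)  # matches

try:
    client_mock("1.2.3.4", "admin")  # wrong arity: rejected by the spec
    caught = False
except TypeError:
    caught = True
print(caught)
```

A plain `mock.Mock()` would have accepted the second call silently, which is exactly how the original signature mismatch went unnoticed in gate.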
@@ -6,11 +6,11 @@ Last-Update: 2016-06-30
 --- ironic-6.0.0.orig/requirements.txt
 +++ ironic-6.0.0/requirements.txt
 @@ -15,7 +15,7 @@ python-glanceclient>=2.0.0 # Apache-2.0
- python-keystoneclient!=1.8.0,!=2.1.0,>=1.7.0 # Apache-2.0
- ironic-lib>=1.3.0 # Apache-2.0
+ keystoneauth1>=2.10.0 # Apache-2.0
+ ironic-lib>=2.0.0 # Apache-2.0
  python-swiftclient>=2.2.0 # Apache-2.0
 -pytz>=2013.6 # MIT
 +pytz
- stevedore>=1.10.0 # Apache-2.0
+ stevedore>=1.16.0 # Apache-2.0
  pysendfile>=2.0.0 # MIT
  websockify>=0.8.0 # LGPLv3
@@ -1,3 +1,4 @@
 adds-alembic.ini-in-MANIFEST.in.patch
 allow-any-pytz-version.patch
 allow-any-fixtures-version.patch
+Fix-broken-unit-tests-for-get_ilo_object.patch
@@ -51,7 +51,7 @@ ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
 		rm -rf .testrepository ; \
 		testr-python$$PYMAJOR init ; \
 		TEMP_REZ=`mktemp -t` ; \
-		PYTHONPATH=$(CURDIR) PYTHON=python$$i testr-python$$PYMAJOR run --subunit | tee $$TEMP_REZ | subunit2pyunit ; \
+		PYTHONPATH=$(CURDIR) PYTHON=python$$i testr-python$$PYMAJOR run --subunit --parallel | tee $$TEMP_REZ | subunit2pyunit ; \
 		cat $$TEMP_REZ | subunit-filter -s --no-passthrough | subunit-stats ; \
 		rm -f $$TEMP_REZ ; \
 		testr-python$$PYMAJOR slowest ; \
@@ -21,3 +21,4 @@ tftpd-hpa
 xinetd
 squashfs-tools
 libvirt-dev
+socat
@@ -16,3 +16,4 @@ tftp-server
 xinetd
 squashfs-tools
 libvirt-devel
+socat
@@ -82,6 +82,9 @@ IRONIC_HW_ARCH=${IRONIC_HW_ARCH:-x86_64}
 # *_ucs:
 # <BMC address> <MAC address> <BMC username> <BMC password> <UCS service profile>
 #
+# *_oneview:
+# <Server Hardware URI> <Server Hardware Type URI> <Enclosure Group URI> <Server Profile Template URI> <MAC of primary connection> <Applied Server Profile URI>
+#
 # IRONIC_IPMIINFO_FILE is deprecated, please use IRONIC_HWINFO_FILE. IRONIC_IPMIINFO_FILE will be removed in Ocata.
 IRONIC_IPMIINFO_FILE=${IRONIC_IPMIINFO_FILE:-""}
 if [ ! -z "$IRONIC_IPMIINFO_FILE" ]; then
@@ -108,7 +111,7 @@ IRONIC_VM_SSH_PORT=${IRONIC_VM_SSH_PORT:-22}
 IRONIC_VM_SSH_ADDRESS=${IRONIC_VM_SSH_ADDRESS:-$HOST_IP}
 IRONIC_VM_COUNT=${IRONIC_VM_COUNT:-1}
 IRONIC_VM_SPECS_CPU=${IRONIC_VM_SPECS_CPU:-1}
-IRONIC_VM_SPECS_RAM=${IRONIC_VM_SPECS_RAM:-1024}
+IRONIC_VM_SPECS_RAM=${IRONIC_VM_SPECS_RAM:-1280}
 IRONIC_VM_SPECS_CPU_ARCH=${IRONIC_VM_SPECS_CPU_ARCH:-'x86_64'}
 IRONIC_VM_SPECS_DISK=${IRONIC_VM_SPECS_DISK:-10}
 IRONIC_VM_SPECS_DISK_FORMAT=${IRONIC_VM_SPECS_DISK_FORMAT:-qcow2}
@@ -118,7 +121,7 @@ IRONIC_VM_NETWORK_BRIDGE=${IRONIC_VM_NETWORK_BRIDGE:-brbm}
 IRONIC_VM_NETWORK_RANGE=${IRONIC_VM_NETWORK_RANGE:-192.0.2.0/24}
 IRONIC_VM_MACS_CSV_FILE=${IRONIC_VM_MACS_CSV_FILE:-$IRONIC_DATA_DIR/ironic_macs.csv}
 IRONIC_AUTHORIZED_KEYS_FILE=${IRONIC_AUTHORIZED_KEYS_FILE:-$HOME/.ssh/authorized_keys}
-IRONIC_CLEAN_NET_NAME=${IRONIC_CLEAN_NET_NAME:-private}
+IRONIC_CLEAN_NET_NAME=${IRONIC_CLEAN_NET_NAME:-$PRIVATE_NETWORK_NAME}
 IRONIC_EXTRA_PXE_PARAMS=${IRONIC_EXTRA_PXE_PARAMS:-}
 IRONIC_TTY_DEV=${IRONIC_TTY_DEV:-ttyS0}
 
@@ -130,45 +133,69 @@ IRONIC_VM_LOG_ROTATE=$(trueorfalse True IRONIC_VM_LOG_ROTATE)
 # Whether to build the ramdisk or download a prebuilt one.
 IRONIC_BUILD_DEPLOY_RAMDISK=$(trueorfalse True IRONIC_BUILD_DEPLOY_RAMDISK)
 
-# Ironic IPA ramdisk type, supported types are: coreos, tinyipa and dib.
+# Ironic IPA ramdisk type, supported types are:
+IRONIC_SUPPORTED_RAMDISK_TYPES_RE="^(coreos|tinyipa|dib)$"
 IRONIC_RAMDISK_TYPE=${IRONIC_RAMDISK_TYPE:-tinyipa}
 
+# Confirm we have a supported ramdisk type or fail early.
+if [[ ! "$IRONIC_RAMDISK_TYPE" =~ $IRONIC_SUPPORTED_RAMDISK_TYPES_RE ]]; then
+    die $LINENO "Unrecognized IRONIC_RAMDISK_TYPE: $IRONIC_RAMDISK_TYPE. Expected 'coreos', 'tinyipa' or 'dib'"
+fi
+
 # If present, these files are used as deploy ramdisk/kernel.
 # (The value must be an absolute path)
 IRONIC_DEPLOY_RAMDISK=${IRONIC_DEPLOY_RAMDISK:-}
 IRONIC_DEPLOY_KERNEL=${IRONIC_DEPLOY_KERNEL:-}
+IRONIC_DEPLOY_ISO=${IRONIC_DEPLOY_ISO:-}
 
 # NOTE(jroll) this needs to be updated when stable branches are cut
 IPA_DOWNLOAD_BRANCH=${IPA_DOWNLOAD_BRANCH:-master}
 IPA_DOWNLOAD_BRANCH=$(echo $IPA_DOWNLOAD_BRANCH | tr / -)
 
-case $IRONIC_RAMDISK_TYPE in
-    coreos)
-        IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-${IPA_DOWNLOAD_BRANCH}.vmlinuz}
-        IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-${IPA_DOWNLOAD_BRANCH}.cpio.gz}
-        ;;
-    tinyipa)
-        IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/tinyipa-${IPA_DOWNLOAD_BRANCH}.vmlinuz}
-        IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/tinyipa-${IPA_DOWNLOAD_BRANCH}.gz}
-        ;;
-    dib)
-        echo "IRONIC_RAMDISK_TYPE setting 'dib' has no pre-built images"
-        ;;
-    *)
-        die $LINENO "Unrecognised IRONIC_RAMDISK_TYPE: $IRONIC_RAMDISK_TYPE. Expected 'coreos', 'tinyipa' or 'dib'"
-        ;;
-esac
+# Configure URLs required to download ramdisk if we're not building it, and
+# IRONIC_DEPLOY_RAMDISK/KERNEL or the RAMDISK/KERNEL_URLs have not been
+# preconfigured.
+if [[ "$IRONIC_BUILD_DEPLOY_RAMDISK" == "False" && \
+    ! (-e "$IRONIC_DEPLOY_RAMDISK" && -e "$IRONIC_DEPLOY_KERNEL") && \
+    (-z "$IRONIC_AGENT_KERNEL_URL" || -z "$IRONIC_AGENT_RAMDISK_URL") ]]; then
+    case $IRONIC_RAMDISK_TYPE in
+        coreos)
+            IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-${IPA_DOWNLOAD_BRANCH}.vmlinuz}
+            IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-${IPA_DOWNLOAD_BRANCH}.cpio.gz}
+            ;;
+        tinyipa)
+            IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/tinyipa-${IPA_DOWNLOAD_BRANCH}.vmlinuz}
+            IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/tinyipa-${IPA_DOWNLOAD_BRANCH}.gz}
+            ;;
+        dib)
+            die "IRONIC_RAMDISK_TYPE 'dib' has no official pre-built "\
+                "images. To fix this select a different ramdisk type, set "\
+                "IRONIC_BUILD_DEPLOY_RAMDISK=True, or manually configure "\
+                "IRONIC_DEPLOY_RAMDISK(_URL) and IRONIC_DEPLOY_KERNEL(_URL) "\
+                "to use your own pre-built ramdisk."
+            ;;
+    esac
+fi
 
 # This refers the options for disk-image-create and the platform on which
 # to build the dib based ironic-python-agent ramdisk.
 # "ubuntu" is set as the default value.
 IRONIC_DIB_RAMDISK_OPTIONS=${IRONIC_DIB_RAMDISK_OPTIONS:-'ubuntu'}
 
+# Some drivers in Ironic require deploy ramdisk in bootable ISO format.
+# Set this variable to "true" to build an ISO for deploy ramdisk and
+# upload to Glance.
+IRONIC_DEPLOY_ISO_REQUIRED=$(trueorfalse False IRONIC_DEPLOY_ISO_REQUIRED)
+if $IRONIC_DEPLOY_ISO_REQUIRED = 'True' && $IRONIC_BUILD_DEPLOY_RAMDISK = 'False'\
+    && [ -n $IRONIC_DEPLOY_ISO ]; then
+    die "Prebuilt ISOs are not available, provide an ISO via IRONIC_DEPLOY_ISO \
+or set IRONIC_BUILD_DEPLOY_RAMDISK=True to use ISOs"
+fi
 # Which deploy driver to use - valid choices right now
 # are ``pxe_ssh``, ``pxe_ipmitool``, ``agent_ssh`` and ``agent_ipmitool``.
 #
 # Additional valid choices if IRONIC_IS_HARDWARE == true are:
-# ``pxe_iscsi_cimc``, ``pxe_agent_cimc``, ``pxe_ucs`` and ``pxe_cimc``.
+# ``pxe_iscsi_cimc``, ``pxe_agent_cimc``, ``pxe_ucs``, ``pxe_cimc`` and ``*_pxe_oneview``
 IRONIC_DEPLOY_DRIVER=${IRONIC_DEPLOY_DRIVER:-pxe_ssh}
 
 # Support entry points installation of console scripts
@@ -196,6 +223,11 @@ IRONIC_VBMC_PORT_RANGE_START=${IRONIC_VBMC_PORT_RANGE_START:-6230}
 IRONIC_VBMC_CONFIG_FILE=${IRONIC_VBMC_CONFIG_FILE:-$HOME/.vbmc/virtualbmc.conf}
 IRONIC_VBMC_LOGFILE=${IRONIC_VBMC_LOGFILE:-$IRONIC_VM_LOG_DIR/virtualbmc.log}
 
+# To explicitly enable configuration of Glance with Swift
+# (which is required by some vendor drivers), set this
+# variable to true.
+IRONIC_CONFIGURE_GLANCE_WITH_SWIFT=$(trueorfalse False IRONIC_CONFIGURE_GLANCE_WITH_SWIFT)
+
 # The path to the libvirt hooks directory, used if IRONIC_VM_LOG_ROTATE is True
 IRONIC_LIBVIRT_HOOKS_PATH=${IRONIC_LIBVIRT_HOOKS_PATH:-/etc/libvirt/hooks/}
@@ -207,6 +239,41 @@ IRONIC_AUTH_STRATEGY=${IRONIC_AUTH_STRATEGY:-keystone}
 IRONIC_TERMINAL_SSL=$(trueorfalse False IRONIC_TERMINAL_SSL)
 IRONIC_TERMINAL_CERT_DIR=${IRONIC_TERMINAL_CERT_DIR:-$IRONIC_DATA_DIR/terminal_cert/}
 
+# This flag is used to allow adding Link-Local-Connection info
+# to ironic port-create command. LLC info is obtained from
+# IRONIC_{VM,HW}_NODES_FILE
+IRONIC_USE_LINK_LOCAL=$(trueorfalse False IRONIC_USE_LINK_LOCAL)
+
+# This flag is used to specify enabled network drivers
+IRONIC_ENABLED_NETWORK_INTERFACES=${IRONIC_ENABLED_NETWORK_INTERFACES:-}
+
+# This is the network interface to use for a node
+IRONIC_NETWORK_INTERFACE=${IRONIC_NETWORK_INTERFACE:-}
+
+# Ironic provision network name
+IRONIC_PROVISION_NETWORK_NAME=${IRONIC_PROVISION_NETWORK_NAME:-}
+
+# Provision network provider type. Can be flat or vlan.
+IRONIC_PROVISION_PROVIDER_NETWORK_TYPE=${IRONIC_PROVISION_PROVIDER_NETWORK_TYPE:-'vlan'}
+
+# If IRONIC_PROVISION_PROVIDER_NETWORK_TYPE is vlan. VLAN_ID may be specified. If it is not set,
+# vlan will be allocated dynamically.
+IRONIC_PROVISION_SEGMENTATION_ID=${IRONIC_PROVISION_SEGMENTATION_ID:-}
+
+# Allocation network pool for provision network
+# Example: IRONIC_PROVISION_ALLOCATION_POOL=start=10.0.5.10,end=10.0.5.100
+IRONIC_PROVISION_ALLOCATION_POOL=${IRONIC_PROVISION_ALLOCATION_POOL:-}
+
+# Ironic provision subnet name.
+IRONIC_PROVISION_PROVIDER_SUBNET_NAME=${IRONIC_PROVISION_PROVIDER_SUBNET_NAME:-${IRONIC_PROVISION_NETWORK_NAME}-subnet}
+
+# Ironic provision subnet gateway.
+IRONIC_PROVISION_SUBNET_GATEWAY=${IRONIC_PROVISION_SUBNET_GATEWAY:-}
+
+# Ironic provision subnet prefix
+# Example: IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24
+IRONIC_PROVISION_SUBNET_PREFIX=${IRONIC_PROVISION_SUBNET_PREFIX:-}
+
 # get_pxe_boot_file() - Get the PXE/iPXE boot file path
 function get_pxe_boot_file {
     local relpath=syslinux/pxelinux.0
@@ -260,6 +327,25 @@ function is_deployed_by_ucs {
     return 1
 }
 
+function is_deployed_by_oneview {
+    [[ -z "${IRONIC_DEPLOY_DRIVER##*_oneview}" ]] && return 0
+}
+
+function is_deployed_by_ilo {
+    [[ -z "${IRONIC_DEPLOY_DRIVER##*_ilo}" ]] && return 0
+    return 1
+}
+
+function is_glance_configuration_required {
+    is_deployed_by_agent || [[ "$IRONIC_CONFIGURE_GLANCE_WITH_SWIFT" == "True" ]] && return 0
+    return 1
+}
+
+function is_deploy_iso_required {
+    [[ "$IRONIC_IS_HARDWARE" == "True" && "$IRONIC_DEPLOY_ISO_REQUIRED" == "True" ]] && return 0
+    return 1
+}
+
 function setup_virtualbmc {
     # Install pyghmi from source, if requested, otherwise it will be
     # downloaded as part of the virtualbmc installation
@@ -390,6 +476,56 @@ function configure_ironic_dirs {
     fi
 }
 
+function configure_ironic_provision_network {
+
+    die_if_not_set $LINENO IRONIC_PROVISION_SUBNET_PREFIX "You must specify the IRONIC_PROVISION_SUBNET_PREFIX"
+    die_if_not_set $LINENO PHYSICAL_NETWORK "You must specify the PHYSICAL_NETWORK"
+    die_if_not_set $LINENO IRONIC_PROVISION_SUBNET_GATEWAY "You must specify the IRONIC_PROVISION_SUBNET_GATEWAY"
+
+    local net_id
+    net_id=$(openstack network create --provider-network-type $IRONIC_PROVISION_PROVIDER_NETWORK_TYPE \
+        --provider-physical-network "$PHYSICAL_NETWORK" \
+        ${IRONIC_PROVISION_SEGMENTATION_ID:+--provider-segment $IRONIC_PROVISION_SEGMENTATION_ID} \
+        ${IRONIC_PROVISION_NETWORK_NAME} -f value -c id)
+
+    die_if_not_set $LINENO net_id "Failure creating net_id for $IRONIC_PROVISION_NETWORK_NAME"
+    local subnet_id
+    subnet_id="$(openstack subnet create --ip-version 4 \
+        ${IRONIC_PROVISION_ALLOCATION_POOL:+--allocation-pool $IRONIC_PROVISION_ALLOCATION_POOL} \
+        $IRONIC_PROVISION_PROVIDER_SUBNET_NAME \
+        --gateway $IRONIC_PROVISION_SUBNET_GATEWAY --network $net_id \
+        --subnet-range $IRONIC_PROVISION_SUBNET_PREFIX -f value -c id)"
+
+    die_if_not_set $LINENO subnet_id "Failure creating SUBNET_ID for $IRONIC_PROVISION_NETWORK_NAME"
+
+    iniset $IRONIC_CONF_FILE neutron provisioning_network_uuid $net_id
+
+    IRONIC_PROVISION_SEGMENTATION_ID=${IRONIC_PROVISION_SEGMENTATION_ID:-`openstack network show ${net_id} -f value -c provider:segmentation_id`}
+    provision_net_prefix=${IRONIC_PROVISION_SUBNET_PREFIX##*/}
+
+    # Set provision network GW on physical interface
+    # Add vlan on br interface in case of IRONIC_PROVISION_PROVIDER_NETWORK_TYPE==vlan
+    # otherwise assign ip to br interface directly.
+    if [[ "$IRONIC_PROVISION_PROVIDER_NETWORK_TYPE" == "vlan" ]]; then
+        sudo vconfig add $OVS_PHYSICAL_BRIDGE $IRONIC_PROVISION_SEGMENTATION_ID
+        sudo ip link set dev $OVS_PHYSICAL_BRIDGE.$IRONIC_PROVISION_SEGMENTATION_ID up
+        sudo ip addr add dev $OVS_PHYSICAL_BRIDGE.$IRONIC_PROVISION_SEGMENTATION_ID $IRONIC_PROVISION_SUBNET_GATEWAY/$provision_net_prefix
+    else
+        sudo ip link set dev $OVS_PHYSICAL_BRIDGE up
+        sudo ip addr add dev $OVS_PHYSICAL_BRIDGE $IRONIC_PROVISION_SUBNET_GATEWAY/$provision_net_prefix
+    fi
+}
+
+function cleanup_ironic_provision_network {
+    # Cleanup OVS_PHYSICAL_BRIDGE subinterfaces
+    local bridge_subint
+    bridge_subint=$(cat /proc/net/dev | sed -n "s/^\(${OVS_PHYSICAL_BRIDGE}\.[0-9]*\).*/\1/p")
+    for sub_int in $bridge_subint; do
+        sudo ip link set dev $sub_int down
+        sudo ip link del dev $sub_int
+    done
+}
+
 # configure_ironic() - Set config files, create data dirs, etc
 function configure_ironic {
     configure_ironic_dirs
@@ -425,19 +561,9 @@ function configure_ironic {
 # API specific configuration.
 function configure_ironic_api {
     iniset $IRONIC_CONF_FILE DEFAULT auth_strategy $IRONIC_AUTH_STRATEGY
+    configure_auth_token_middleware $IRONIC_CONF_FILE ironic $IRONIC_AUTH_CACHE_DIR/api
     iniset $IRONIC_CONF_FILE oslo_policy policy_file $IRONIC_POLICY_JSON
-
-    # TODO(Yuki Nishiwaki): This is a temporary work-around until Ironic is fixed(bug#1422632).
-    # These codes need to be changed to use the function of configure_auth_token_middleware
-    # after Ironic conforms to the new auth plugin.
-    iniset $IRONIC_CONF_FILE keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
-    iniset $IRONIC_CONF_FILE keystone_authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
-    iniset $IRONIC_CONF_FILE keystone_authtoken admin_user ironic
-    iniset $IRONIC_CONF_FILE keystone_authtoken admin_password $SERVICE_PASSWORD
-    iniset $IRONIC_CONF_FILE keystone_authtoken admin_tenant_name $SERVICE_PROJECT_NAME
-    iniset $IRONIC_CONF_FILE keystone_authtoken cafile $SSL_BUNDLE_FILE
-    iniset $IRONIC_CONF_FILE keystone_authtoken signing_dir $IRONIC_AUTH_CACHE_DIR/api
 
     iniset_rpc_backend ironic $IRONIC_CONF_FILE
     iniset $IRONIC_CONF_FILE api port $IRONIC_SERVICE_PORT
 
@@ -446,9 +572,35 @@ function configure_ironic_api {
     cp -p $IRONIC_DIR/etc/ironic/policy.json $IRONIC_POLICY_JSON
 }
 
+function configure_auth_for {
+    local service_config_section
+    service_config_section=$1
+    iniset $IRONIC_CONF_FILE $service_config_section auth_type password
+    iniset $IRONIC_CONF_FILE $service_config_section auth_url $KEYSTONE_SERVICE_URI
+    iniset $IRONIC_CONF_FILE $service_config_section username ironic
+    iniset $IRONIC_CONF_FILE $service_config_section password $SERVICE_PASSWORD
+    iniset $IRONIC_CONF_FILE $service_config_section project_name $SERVICE_PROJECT_NAME
+    iniset $IRONIC_CONF_FILE $service_config_section user_domain_id default
+    iniset $IRONIC_CONF_FILE $service_config_section project_domain_id default
+    iniset $IRONIC_CONF_FILE $service_config_section cafile $SSL_BUNDLE_FILE
+
+}
+
 # configure_ironic_conductor() - Is used by configure_ironic().
 # Sets conductor specific settings.
 function configure_ironic_conductor {
+
+    # set keystone region for all services
+    iniset $IRONIC_CONF_FILE keystone region_name $REGION_NAME
+
+    # set keystone auth plugin options for services
+    configure_auth_for neutron
+    configure_auth_for swift
+    configure_auth_for glance
+    configure_auth_for inspector
+    # this one is needed for lookup of Ironic API endpoint via Keystone
+    configure_auth_for service_catalog
+
     cp $IRONIC_DIR/etc/ironic/rootwrap.conf $IRONIC_ROOTWRAP_CONF
     cp -r $IRONIC_DIR/etc/ironic/rootwrap.d $IRONIC_CONF_DIR
     local ironic_rootwrap
@@ -510,11 +662,15 @@ function configure_ironic_conductor {
    # Set these options for scenarios in which the agent fetches the image
    # directly from glance, and don't set them where the image is pushed
    # over iSCSI.
    if is_deployed_by_agent; then
    if is_glance_configuration_required; then
        if [[ "$SWIFT_ENABLE_TEMPURLS" == "True" ]] ; then
            iniset $IRONIC_CONF_FILE glance swift_temp_url_key $SWIFT_TEMPURL_KEY
        else
            die $LINENO "SWIFT_ENABLE_TEMPURLS must be True to use agent_* driver in Ironic."
            die $LINENO "SWIFT_ENABLE_TEMPURLS must be True. This is " \
                "required either because IRONIC_DEPLOY_DRIVER was " \
                "set to some agent_* driver OR configuration of " \
                "Glance with Swift was explicitly requested with " \
                "IRONIC_CONFIGURE_GLANCE_WITH_SWIFT=True"
        fi
        iniset $IRONIC_CONF_FILE glance swift_endpoint_url http://${HOST_IP}:${SWIFT_DEFAULT_BIND_PORT:-8080}
        iniset $IRONIC_CONF_FILE glance swift_api_version v1

@@ -523,7 +679,10 @@ function configure_ironic_conductor {
        iniset $IRONIC_CONF_FILE glance swift_account AUTH_${tenant_id}
        iniset $IRONIC_CONF_FILE glance swift_container glance
        iniset $IRONIC_CONF_FILE glance swift_temp_url_duration 3600
        iniset $IRONIC_CONF_FILE agent heartbeat_timeout 30
    fi

    if is_deployed_by_agent; then
        iniset $IRONIC_CONF_FILE api ramdisk_heartbeat_timeout 30
    fi

    # FIXME: this really needs to be tested in the gate. For now, any

@@ -544,6 +703,10 @@ function configure_ironic_conductor {
    if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then
        iniset $IRONIC_CONF_FILE neutron port_setup_delay 15
    fi

    if [[ -n "$IRONIC_ENABLED_NETWORK_INTERFACES" ]]; then
        iniset $IRONIC_CONF_FILE DEFAULT enabled_network_interfaces $IRONIC_ENABLED_NETWORK_INTERFACES
    fi
}

# create_ironic_cache_dir() - Part of the init_ironic() process
@@ -559,24 +722,33 @@ function create_ironic_cache_dir {

# create_ironic_accounts() - Set up common required ironic accounts

# Tenant User Roles
# Project User Roles
# ------------------------------------------------------------------
# service ironic admin # if enabled
# service ironic admin
# service nova baremetal_admin
# demo demo baremetal_observer
function create_ironic_accounts {

    # Ironic
    if [[ "$ENABLED_SERVICES" =~ "ir-api" ]]; then
    # Get ironic user if exists

    # NOTE(Shrews): This user MUST have admin level privileges!
    create_service_user "ironic" "admin"

    if [[ "$ENABLED_SERVICES" =~ "ir-api" && "$ENABLED_SERVICES" =~ "key" ]]; then
        # Define service and endpoints in Keystone
        get_or_create_service "ironic" "baremetal" "Ironic baremetal provisioning service"
        get_or_create_endpoint "baremetal" \
            "$REGION_NAME" \
            "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \
            "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \
            "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT"

        # Create ironic service user
        # TODO(deva): make this work with the 'service' role
        # https://bugs.launchpad.net/ironic/+bug/1605398
        create_service_user "ironic" "admin"

        # Create additional bare metal tenant and roles
        get_or_create_role baremetal_admin
        get_or_create_role baremetal_observer
        if is_service_enabled nova; then
            get_or_add_user_project_role baremetal_admin nova $SERVICE_PROJECT_NAME
        fi
        get_or_add_user_project_role baremetal_observer demo demo
    fi
}
@@ -730,7 +902,7 @@ function create_bridge_and_vms {
    # Call libvirt setup scripts in a new shell to ensure any new group membership
    sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/setup-network.sh"
    if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] ; then
        local log_arg="$IRONIC_VM_LOG_DIR"
        local log_arg="-l $IRONIC_VM_LOG_DIR"

        if [[ "$IRONIC_VM_LOG_ROTATE" == "True" ]] ; then
            setup_qemu_log_hook

@@ -742,14 +914,14 @@ function create_bridge_and_vms {
    local vbmc_port=$IRONIC_VBMC_PORT_RANGE_START
    local vm_name
    for vm_name in $(_ironic_bm_vm_names); do
        sudo -E su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/create-node.sh $vm_name \
            $IRONIC_VM_SPECS_CPU $IRONIC_VM_SPECS_RAM $IRONIC_VM_SPECS_DISK \
            $IRONIC_VM_SPECS_CPU_ARCH $IRONIC_VM_NETWORK_BRIDGE $IRONIC_VM_EMULATOR \
            $vbmc_port $log_arg $IRONIC_VM_SPECS_DISK_FORMAT" >> $IRONIC_VM_MACS_CSV_FILE
        sudo -E su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/create-node.sh -n $vm_name \
            -c $IRONIC_VM_SPECS_CPU -m $IRONIC_VM_SPECS_RAM -d $IRONIC_VM_SPECS_DISK \
            -a $IRONIC_VM_SPECS_CPU_ARCH -b $IRONIC_VM_NETWORK_BRIDGE -e $IRONIC_VM_EMULATOR \
            -p $vbmc_port -f $IRONIC_VM_SPECS_DISK_FORMAT $log_arg" >> $IRONIC_VM_MACS_CSV_FILE
        vbmc_port=$((vbmc_port+1))
    done
    local ironic_net_id
    ironic_net_id=$(openstack network show "private" -c id -f value)
    ironic_net_id=$(openstack network show "$PRIVATE_NETWORK_NAME" -c id -f value)
    create_ovs_taps $ironic_net_id
}
@@ -832,6 +1004,21 @@ function enroll_nodes {
            vbmc_port=$(echo $hardware_info | awk '{print $2}')
            node_options+=" -i ipmi_port=$vbmc_port"
        fi
        # Local-link-connection options
        if [[ "${IRONIC_USE_LINK_LOCAL}" == "True" ]]; then
            local llc_opts=""
            local switch_info
            local switch_id
            local port_id

            switch_info=$(echo $hardware_info | awk '{print $3}')
            switch_id=$(echo $hardware_info | awk '{print $4}')
            port_id=$(echo $hardware_info | awk '{print $5}')

            llc_opts="-l switch_id=${switch_id} -l switch_info=${switch_info} -l port_id=${port_id}"

            local ironic_api_version='--ironic-api-version latest'
        fi
    else
        # Currently we require all hardware platforms to have the same CPU/RAM/DISK info;
        # in the future, this can be enhanced to support different types, and then

@@ -858,6 +1045,31 @@ function enroll_nodes {
        ucs_service_profile=$(echo $hardware_info | awk '{print $5}')
        node_options+=" -i ucs_address=$bmc_address -i ucs_password=$bmc_passwd\
            -i ucs_username=$bmc_username -i ucs_service_profile=$ucs_service_profile"
        elif is_deployed_by_oneview; then
            local server_hardware_uri
            server_hardware_uri=$(echo $hardware_info | awk '{print $1}')
            local server_hardware_type_uri
            server_hardware_type_uri=$(echo $hardware_info | awk '{print $2}')
            local enclosure_group_uri
            enclosure_group_uri=$(echo $hardware_info | awk '{print $3}')
            local server_profile_template_uri
            server_profile_template_uri=$(echo $hardware_info | awk '{print $4}')
            mac_address=$(echo $hardware_info | awk '{print $5}')
            local applied_server_profile_uri
            applied_server_profile_uri=$(echo $hardware_info | awk '{print $6}')

            node_options+=" -i server_hardware_uri=$server_hardware_uri"
            node_options+=" -i applied_server_profile_uri=$applied_server_profile_uri"
            node_options+=" -p capabilities="
            node_options+="server_hardware_type_uri:$server_hardware_type_uri,"
            node_options+="enclosure_group_uri:$enclosure_group_uri,"
            node_options+="server_profile_template_uri:$server_profile_template_uri"
        elif is_deployed_by_ilo; then
            node_options+=" -i ilo_address=$bmc_address -i ilo_password=$bmc_passwd\
                -i ilo_username=$bmc_username"
            if [[ $IRONIC_DEPLOY_DRIVER != "pxe_ilo" ]]; then
                node_options+=" -i ilo_deploy_iso=$IRONIC_DEPLOY_ISO_ID"
            fi
        fi
    fi
@@ -887,7 +1099,20 @@ function enroll_nodes {
        ironic node-update $node_id add properties/root_device='{"vendor": "0x1af4"}'
    fi

    ironic port-create --address $mac_address --node $node_id
    # When using port groups, we need an API version that supports them;
    # otherwise the API will return a 406 error.
    ironic $ironic_api_version port-create --address $mac_address --node $node_id $llc_opts

    # NOTE(vsaienko) use node-update instead of specifying network_interface
    # during node creation. If a node is added with the latest version of the
    # API it will NOT go to available state automatically.
    if [[ -n "${IRONIC_NETWORK_INTERFACE}" ]]; then
        local n_id
        ironic node-set-maintenance $node_id true
        n_id=$(ironic $ironic_api_version node-update $node_id add network_interface=$IRONIC_NETWORK_INTERFACE)
        die_if_not_set $LINENO n_id "Failed to update network interface for node"
        ironic node-set-maintenance $node_id false
    fi

    total_nodes=$((total_nodes+1))
    total_cpus=$((total_cpus+$ironic_node_cpu))
@@ -959,6 +1184,10 @@ function configure_ironic_ssh_keypair {
        fi
        echo -e 'n\n' | ssh-keygen -q -t rsa -P '' -f $IRONIC_KEY_FILE
    fi
    # NOTE(vsaienko) check for new line character, add if doesn't exist.
    if [[ "$(tail -c1 $IRONIC_AUTHORIZED_KEYS_FILE | wc -l)" == "0" ]]; then
        echo "" >> $IRONIC_AUTHORIZED_KEYS_FILE
    fi
    cat $IRONIC_KEY_FILE.pub | tee -a $IRONIC_AUTHORIZED_KEYS_FILE
    # remove duplicate keys.
    sort -u -o $IRONIC_AUTHORIZED_KEYS_FILE $IRONIC_AUTHORIZED_KEYS_FILE
@@ -994,15 +1223,16 @@ function configure_ironic_auxiliary {
function build_ipa_ramdisk {
    local kernel_path=$1
    local ramdisk_path=$2
    local iso_path=$3
    case $IRONIC_RAMDISK_TYPE in
        'coreos')
            build_ipa_coreos_ramdisk $kernel_path $ramdisk_path
            build_ipa_coreos_ramdisk $kernel_path $ramdisk_path $iso_path
            ;;
        'tinyipa')
            build_tinyipa_ramdisk $kernel_path $ramdisk_path
            build_tinyipa_ramdisk $kernel_path $ramdisk_path $iso_path
            ;;
        'dib')
            build_ipa_dib_ramdisk $kernel_path $ramdisk_path
            build_ipa_dib_ramdisk $kernel_path $ramdisk_path $iso_path
            ;;
        *)
            die $LINENO "Unrecognised IRONIC_RAMDISK_TYPE: $IRONIC_RAMDISK_TYPE. Expected either of 'dib', 'coreos', or 'tinyipa'."

@@ -1014,6 +1244,7 @@ function build_ipa_coreos_ramdisk {
    echo "Building coreos ironic-python-agent deploy ramdisk"
    local kernel_path=$1
    local ramdisk_path=$2
    local iso_path=$3
    # on fedora services do not start by default
    restart_service docker
    git_clone $IRONIC_PYTHON_AGENT_REPO $IRONIC_PYTHON_AGENT_DIR $IRONIC_PYTHON_AGENT_BRANCH

@@ -1021,6 +1252,9 @@ function build_ipa_coreos_ramdisk {
    imagebuild/coreos/build_coreos_image.sh
    cp imagebuild/coreos/UPLOAD/coreos_production_pxe_image-oem.cpio.gz $ramdisk_path
    cp imagebuild/coreos/UPLOAD/coreos_production_pxe.vmlinuz $kernel_path
    if is_deploy_iso_required; then
        imagebuild/coreos/iso-image-create -k $kernel_path -i $ramdisk_path -o $iso_path
    fi
    sudo rm -rf UPLOAD
    cd -
}
@@ -1029,12 +1263,17 @@ function build_tinyipa_ramdisk {
    echo "Building ironic-python-agent deploy ramdisk"
    local kernel_path=$1
    local ramdisk_path=$2
    local iso_path=$3
    git_clone $IRONIC_PYTHON_AGENT_REPO $IRONIC_PYTHON_AGENT_DIR $IRONIC_PYTHON_AGENT_BRANCH
    cd $IRONIC_PYTHON_AGENT_DIR/imagebuild/tinyipa
    export BUILD_AND_INSTALL_TINYIPA=true
    make
    cp tinyipa.gz $ramdisk_path
    cp tinyipa.vmlinuz $kernel_path
    if is_deploy_iso_required; then
        make iso
        cp tinyipa.iso $iso_path
    fi
    make clean
    cd -
}
@@ -1052,15 +1291,28 @@ function install_diskimage_builder {
function build_ipa_dib_ramdisk {
    local kernel_path=$1
    local ramdisk_path=$2
    local iso_path=$3
    local tempdir
    tempdir=$(mktemp -d --tmpdir=${DEST})

    # install diskimage-builder if not present
    if ! $(type -P disk-image-create > /dev/null); then
        install_diskimage_builder
    fi

    echo "Building IPA ramdisk with DIB options: $IRONIC_DIB_RAMDISK_OPTIONS"
    if is_deploy_iso_required; then
        IRONIC_DIB_RAMDISK_OPTIONS+=" iso"
    fi
    disk-image-create "$IRONIC_DIB_RAMDISK_OPTIONS" \
        -o "$tempdir/ironic-agent" \
        ironic-agent
    chmod -R +r $tempdir
    mv "$tempdir/ironic-agent.kernel" "$kernel_path"
    mv "$tempdir/ironic-agent.initramfs" "$ramdisk_path"
    if is_deploy_iso_required; then
        mv "$tempdir/ironic-agent.iso" "$iso_path"
    fi
    rm -rf $tempdir
}
@@ -1070,22 +1322,26 @@ function upload_baremetal_ironic_deploy {
    declare -g IRONIC_DEPLOY_KERNEL_ID IRONIC_DEPLOY_RAMDISK_ID
    echo_summary "Creating and uploading baremetal images for ironic"

    if [ -z "$IRONIC_DEPLOY_KERNEL" -o -z "$IRONIC_DEPLOY_RAMDISK" ]; then
    if [ -z "$IRONIC_DEPLOY_KERNEL" -o -z "$IRONIC_DEPLOY_RAMDISK" -o -z "$IRONIC_DEPLOY_ISO" ]; then
        local IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.kernel
        local IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.initramfs
        local IRONIC_DEPLOY_ISO_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.iso
    else
        local IRONIC_DEPLOY_KERNEL_PATH=$IRONIC_DEPLOY_KERNEL
        local IRONIC_DEPLOY_RAMDISK_PATH=$IRONIC_DEPLOY_RAMDISK
        local IRONIC_DEPLOY_ISO_PATH=$IRONIC_DEPLOY_ISO
    fi

    if [ ! -e "$IRONIC_DEPLOY_RAMDISK_PATH" -o ! -e "$IRONIC_DEPLOY_KERNEL_PATH" ]; then
    if [ ! -e "$IRONIC_DEPLOY_RAMDISK_PATH" ] || \
        [ ! -e "$IRONIC_DEPLOY_KERNEL_PATH" ] || \
        ( is_deploy_iso_required && [ ! -e "$IRONIC_DEPLOY_ISO_PATH" ] ); then
        # files don't exist, need to build them
        if [ "$IRONIC_BUILD_DEPLOY_RAMDISK" = "True" ]; then
            # we can build them only if we're not offline
            if [ "$OFFLINE" != "True" ]; then
                build_ipa_ramdisk $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_DEPLOY_RAMDISK_PATH
                build_ipa_ramdisk $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_DEPLOY_RAMDISK_PATH $IRONIC_DEPLOY_ISO_PATH
            else
                die $LINENO "Deploy kernel+ramdisk files don't exist and cannot be built in OFFLINE mode"
                die $LINENO "Deploy kernel+ramdisk or iso files don't exist and cannot be built in OFFLINE mode"
            fi
        else
            # download the agent image tarball

@@ -1110,6 +1366,16 @@ function upload_baremetal_ironic_deploy {
        --container-format=ari \
        < $IRONIC_DEPLOY_RAMDISK_PATH | grep ' id ' | get_field 2)
    die_if_not_set $LINENO IRONIC_DEPLOY_RAMDISK_ID "Failed to load ramdisk image into glance"

    if is_deploy_iso_required; then
        IRONIC_DEPLOY_ISO_ID=$(openstack \
            image create \
            $(basename $IRONIC_DEPLOY_ISO_PATH) \
            --public --disk-format=iso \
            --container-format=bare \
            < $IRONIC_DEPLOY_ISO_PATH -f value -c id)
        die_if_not_set $LINENO IRONIC_DEPLOY_ISO_ID "Failed to load deploy iso into glance"
    fi
}

function prepare_baremetal_basic_ops {
@@ -1121,9 +1387,6 @@ function prepare_baremetal_basic_ops {
        configure_ironic_auxiliary
    fi
    upload_baremetal_ironic_deploy
    if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then
        create_bridge_and_vms
    fi
    enroll_nodes
    configure_tftpd
    configure_iptables

@@ -1146,6 +1409,12 @@ function cleanup_baremetal_basic_ops {
    local vm_name
    for vm_name in $(_ironic_bm_vm_names); do
        sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/cleanup-node.sh $vm_name"
        # Cleanup node bridge/interfaces
        sudo ip link set ovs-$vm_name down
        sudo ip link set br-$vm_name down
        sudo ovs-vsctl del-port ovs-$vm_name
        sudo ip link del dev ovs-$vm_name
        sudo ip link del dev br-$vm_name
    done

    sudo ovs-vsctl --if-exists del-br $IRONIC_VM_NETWORK_BRIDGE
@@ -1169,6 +1438,9 @@ function ironic_configure_tempest {
    iniset $TEMPEST_CONFIG compute flavor_ref $bm_flavor_id
    iniset $TEMPEST_CONFIG compute flavor_ref_alt $bm_flavor_id
    iniset $TEMPEST_CONFIG compute-feature-enabled disk_config False
    if [[ -n "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then
        iniset $TEMPEST_CONFIG baremetal use_provision_network True
    fi
}

# Restore xtrace + pipefail
@@ -37,11 +37,25 @@ if is_service_enabled ir-api ir-cond; then
        # Initialize ironic
        init_ironic

        if [[ "$IRONIC_BAREMETAL_BASIC_OPS" == "True" && "$IRONIC_IS_HARDWARE" == "False" ]]; then
            echo_summary "Creating bridge and VMs"
            create_bridge_and_vms
        fi
        if [[ -n "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then
            echo_summary "Configuring Ironic provisioning network"
            configure_ironic_provision_network
        fi

        # Start the ironic API and ironic taskmgr components
        echo_summary "Starting Ironic"
        start_ironic
        prepare_baremetal_basic_ops

    elif [[ "$2" == "test-config" ]]; then
        # stack/test-config - Called at the end of devstack used to configure tempest
        # or any other test environments
        if is_service_enabled tempest; then
            echo_summary "Configuring Tempest for Ironic needs"
            ironic_configure_tempest
        fi
    fi

@@ -51,6 +65,7 @@ if is_service_enabled ir-api ir-cond; then
        # unstack - Called by unstack.sh before other services are shut down.

        stop_ironic
        cleanup_ironic_provision_network
        cleanup_baremetal_basic_ops
    fi
@@ -68,10 +68,11 @@ def main():
                        help="CPU count for the VM.")
    parser.add_argument('--bootdev', default='hd',
                        help="What boot device to use (hd/network).")
    parser.add_argument('--network', default="brbm",
                        help='The libvirt network name to use')
    parser.add_argument('--libvirt-nic-driver', default='virtio',
                        help='The libvirt network driver to use')
    parser.add_argument('--bridge', default="br-seed",
                        help='The linux bridge name to use for seeding \
                              the baremetal pseudo-node\'s OS image')
    parser.add_argument('--console-log',
                        help='File to log console')
    parser.add_argument('--emulator', default=None,

@@ -89,7 +90,7 @@ def main():
        'memory': args.memory,
        'cpus': args.cpus,
        'bootdev': args.bootdev,
        'network': args.network,
        'bridge': args.bridge,
        'nicdriver': args.libvirt_nic_driver,
        'emulator': args.emulator,
        'disk_format': args.disk_format

@@ -111,7 +112,7 @@ def main():
    conn = libvirt.open("qemu:///system")

    a = conn.defineXML(libvirt_template)
    print ("Created machine %s with UUID %s" % (args.name, a.UUIDString()))
    print("Created machine %s with UUID %s" % (args.name, a.UUIDString()))

if __name__ == '__main__':
    main()
@@ -9,18 +9,24 @@ set -ex
# Keep track of the DevStack directory
TOP_DIR=$(cd $(dirname "$0")/.. && pwd)

NAME=$1
CPU=$2
MEM=$(( 1024 * $3 ))
# Extra G to allow fuzz for partition table : flavor size and registered size
# need to be different to actual size.
DISK=$(( $4 + 1))
ARCH=$5
BRIDGE=$6
EMULATOR=$7
VBMC_PORT=$8
LOGDIR=$9
DISK_FORMAT=${10}
while getopts "n:c:m:d:a:b:e:p:f:l:" arg; do
    case $arg in
        n) NAME=$OPTARG;;
        c) CPU=$OPTARG;;
        m) MEM=$(( 1024 * OPTARG ));;
        # Extra G to allow fuzz for partition table : flavor size and registered
        # size need to be different to actual size.
        d) DISK=$(( OPTARG + 1 ));;
        a) ARCH=$OPTARG;;
        b) BRIDGE=$OPTARG;;
        e) EMULATOR=$OPTARG;;
        p) VBMC_PORT=$OPTARG;;
        f) DISK_FORMAT=$OPTARG;;
        l) LOGDIR=$OPTARG;;
    esac
done

shift $(( $OPTIND - 1 ))

LIBVIRT_NIC_DRIVER=${LIBVIRT_NIC_DRIVER:-"virtio"}
LIBVIRT_STORAGE_POOL=${LIBVIRT_STORAGE_POOL:-"default"}
@@ -56,6 +62,18 @@ else
fi
VOL_NAME="${NAME}.${DISK_FORMAT}"

# Create a bridge and add the VM interface to it.
# An additional interface will be added to this bridge and
# plugged into OVS.
# This is needed in order to have an interface in OVS even
# when the VM is in a shutdown state.

sudo brctl addbr br-$NAME
sudo ip link set br-$NAME up
sudo ovs-vsctl add-port $BRIDGE ovs-$NAME -- set Interface ovs-$NAME type=internal
sudo ip link set ovs-$NAME up
sudo brctl addif br-$NAME ovs-$NAME

if ! virsh list --all | grep -q $NAME; then
    virsh vol-list --pool $LIBVIRT_STORAGE_POOL | grep -q $VOL_NAME &&
        virsh vol-delete $VOL_NAME --pool $LIBVIRT_STORAGE_POOL >&2

@@ -67,7 +85,7 @@ if ! virsh list --all | grep -q $NAME; then
    $TOP_DIR/scripts/configure-vm.py \
        --bootdev network --name $NAME --image "$volume_path" \
        --arch $ARCH --cpus $CPU --memory $MEM --libvirt-nic-driver $LIBVIRT_NIC_DRIVER \
        --emulator $EMULATOR --network $BRIDGE --disk-format $DISK_FORMAT $VM_LOGGING >&2
        --emulator $EMULATOR --bridge br-$NAME --disk-format $DISK_FORMAT $VM_LOGGING >&2

    # Create a Virtual BMC for the node if IPMI is used
    if [[ $(type -P vbmc) != "" ]]; then

@@ -78,4 +96,5 @@ fi

# echo mac
VM_MAC=$(virsh dumpxml $NAME | grep "mac address" | head -1 | cut -d\' -f2)
echo $VM_MAC $VBMC_PORT
switch_id=$(ip link show dev $BRIDGE | egrep -o "ether [A-Za-z0-9:]+" | sed "s/ether\ //")
echo $VM_MAC $VBMC_PORT $BRIDGE $switch_id ovs-$NAME
@@ -28,9 +28,8 @@
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='network'>
      <source network='%(network)s'/>
      <virtualport type='openvswitch'/>
    <interface type='bridge'>
      <source bridge='%(bridge)s'/>
      <model type='%(nicdriver)s'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
@@ -64,6 +64,11 @@ MOCK_MODULES = ['nova', 'nova.compute', 'nova.context']
for module in MOCK_MODULES:
    sys.modules[module] = mock.Mock()

# A list of glob-style patterns that should be excluded when looking for
# source files. They are matched against the source file names relative to the
# source directory, using slashes as directory separators on all platforms.
exclude_patterns = ['api/ironic_tempest_plugin.*']

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
@@ -132,7 +132,8 @@ from the ``manageable`` state to ``active`` state.::
    ironic port-create --node <node_uuid> -a <node_mac_address>

    ironic node-update testnode add \
        instance_info/image_source="http://localhost:8080/blankimage"
        instance_info/image_source="http://localhost:8080/blankimage" \
        instance_info/capabilities="{\"boot_option\": \"local\"}"

    ironic node-set-provision-state testnode manage

@@ -142,6 +143,11 @@ from the ``manageable`` state to ``active`` state.::
   In the above example, the image_source setting must reference a valid
   image or file; however, that image or file can ultimately be empty.

.. NOTE::
   The above example utilizes a capability that defines the boot operation
   to be local. It is recommended to define the node as such unless network
   booting is desired.

.. NOTE::
   The above example will fail on re-deployment, as a fake image is
   defined and no instance_info/image_checksum value is defined.
@@ -156,6 +162,12 @@ from the ``manageable`` state to ``active`` state.::

    ironic node-update <node name or uuid> add instance_uuid=<uuid>

.. NOTE::
   In Newton, coupled with API version 1.20, the concept of a
   network_interface was introduced. A user of this feature may wish to
   add new nodes with a network_interface of ``noop`` and then change
   the interface at a later point in time.
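   A minimal sketch of that workflow with python-ironicclient, assuming an
   API version that supports ``network_interface`` (the node name and the
   ``flat`` target interface are illustrative)::

       ironic --ironic-api-version latest node-update <node> add network_interface=noop
       # later, once the node is ready, switch to the desired interface
       ironic --ironic-api-version latest node-update <node> replace network_interface=flat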

Troubleshooting
===============

@@ -0,0 +1,110 @@
.. _api-audit-support:

API Audit Logging
=================

The audit middleware supports delivery of CADF audit events via the Oslo
messaging notifier capability. Based on the `notification_driver` configuration,
audit events can be routed to the messaging infrastructure
(notification_driver = messagingv2) or to a log file (notification_driver = log).

The audit middleware creates two events per REST API interaction. The first
event contains information extracted from the request data, and the second
contains the request outcome (response).

Enabling API Audit Logging
==========================

The audit middleware is available as part of the `keystonemiddleware` (>= 1.6)
library. For information on how the audit middleware functions, refer to the
`keystonemiddleware audit documentation
<http://docs.openstack.org/developer/keystonemiddleware/audit.html>`_.

Auditing can be enabled for the Bare Metal service by making the following changes
to ``/etc/ironic/ironic.conf``.

#. To enable audit logging of API requests::

    [audit]
    ...
    enabled=true

#. To customize auditing of API requests, the audit middleware requires the
   ``audit_map_file`` setting to be defined. Update the value of the
   ``audit_map_file`` configuration setting to set its location. Audit map file
   configuration options for the Bare Metal service are included in the
   etc/ironic/ironic_api_audit_map.conf.sample file. To understand the CADF
   format specified in the ironic_api_audit_map.conf file, refer to the
   `CADF Format specification
   <http://www.dmtf.org/sites/default/files/standards/documents/DSP2038_1.0.0.pdf>`_::

    [audit]
    ...
    audit_map_file=/etc/ironic/ironic_api_audit_map.conf

#. A comma-separated list of Ironic REST API HTTP methods to be ignored during
   audit, for example GET,POST. It is used only when API audit is enabled::

    [audit]
    ...
    ignore_req_list=GET,POST

Sample Audit Event
==================

The following is a sample audit event for an ironic node list request.

.. code-block:: json

   {
      "event_type":"audit.http.request",
      "timestamp":"2016-06-15 06:04:30.904397",
      "payload":{
         "typeURI":"http://schemas.dmtf.org/cloud/audit/1.0/event",
         "eventTime":"2016-06-15T06:04:30.903071+0000",
         "target":{
            "id":"ironic",
            "typeURI":"unknown",
            "addresses":[
               {
                  "url":"http://{ironic_admin_host}:6385",
                  "name":"admin"
               },
               {
                  "url":"http://{ironic_internal_host}:6385",
                  "name":"private"
               },
               {
                  "url":"http://{ironic_public_host}:6385",
                  "name":"public"
               }
            ],
            "name":"ironic"
         },
         "observer":{
            "id":"target"
         },
         "tags":[
            "correlation_id?value=685f1abb-620e-5d5d-b74a-b4135fb32373"
         ],
         "eventType":"activity",
         "initiator":{
            "typeURI":"service/security/account/user",
            "name":"admin",
            "credential":{
               "token":"***",
               "identity_status":"Confirmed"
            },
            "host":{
               "agent":"python-ironicclient",
               "address":"10.1.200.129"
            },
            "project_id":"d8f52dd7d9e1475dbbf3ba47a4a83313",
            "id":"8c1a948bad3948929aa5d5b50627a174"
         },
         "action":"read",
         "outcome":"pending",
         "id":"061b7aa7-5879-5225-a331-c002cf23cb6c",
         "requestPath":"/v1/nodes/?associated=True"
      },
      "priority":"INFO",
      "publisher_id":"ironic-api",
      "message_id":"2f61ebaa-2d3e-4023-afba-f9fca6f21fc2"
   }
@@ -201,8 +201,8 @@ out-of-band. Ironic supports using both methods to clean a node.
In-band
-------
In-band steps are performed by ironic making API calls to a ramdisk running
on the node using a Deploy driver. Currently, only the ironic-python-agent
ramdisk used with an agent_* driver supports in-band cleaning. By default,
on the node using a Deploy driver. Currently, all the drivers using
ironic-python-agent ramdisk support in-band cleaning. By default,
ironic-python-agent ships with a minimal cleaning configuration, only erasing
disks. However, with this ramdisk, you can add your own cleaning steps and/or
override default cleaning steps with a custom Hardware Manager.
@@ -116,7 +116,7 @@ configuration file must be set::
    keep_ports = present

.. note::
    During Kilo cycle we used on older verions of Inspector called
    During Kilo cycle we used an older version of Inspector called
    ironic-discoverd_. Inspector is expected to be a mostly drop-in
    replacement, and the same client library should be used to connect to both.
@ -109,12 +109,22 @@ Configure the Identity service for the Bare Metal service
|
|||
registering the service (above), to create the endpoint,
|
||||
and replace IRONIC_NODE with your Bare Metal service's API node::
|
||||
|
||||
openstack endpoint create --region RegionOne \
|
||||
baremetal admin http://IRONIC_NODE:6385
|
||||
openstack endpoint create --region RegionOne \
|
||||
baremetal public http://IRONIC_NODE:6385
|
||||
openstack endpoint create --region RegionOne \
|
||||
baremetal internal http://IRONIC_NODE:6385
|
||||
|
||||
If only keystone v2 API is available, use this command instead::
|
||||
|
||||
openstack endpoint create --region RegionOne \
|
||||
--publicurl http://IRONIC_NODE:6385 \
|
||||
--internalurl http://IRONIC_NODE:6385 \
|
||||
--adminurl http://IRONIC_NODE:6385 \
|
||||
baremetal
|
||||
|
||||
|
||||
Set up the database for Bare Metal
|
||||
----------------------------------
|
||||
|
||||
|
@ -424,6 +434,9 @@ Bare Metal service comes with an example file for configuring the
|
|||
|
||||
- Modify the ``Directory`` directive to set the path to the Ironic API code.
|
||||
|
||||
- Modify the ``ErrorLog`` and ``CustomLog`` to redirect the logs
|
||||
to the right directory (on Red Hat systems this is usually under
|
||||
/var/log/httpd).
|
||||
|
||||
4. Enable the apache ``ironic`` site and reload::
|
||||
|
||||
|
@ -466,8 +479,8 @@ Compute service's controller nodes and compute nodes.*
|
|||
firewall_driver=nova.virt.firewall.NoopFirewallDriver
|
||||
|
||||
# The scheduler host manager class to use (string value)
|
||||
#scheduler_host_manager=nova.scheduler.host_manager.HostManager
|
||||
scheduler_host_manager=nova.scheduler.ironic_host_manager.IronicHostManager
|
||||
#scheduler_host_manager=host_manager
|
||||
scheduler_host_manager=ironic_host_manager
|
||||
|
||||
# Virtual ram to physical ram allocation ratio which affects
|
||||
# all ram filters. This configuration specifies a global ratio
|
||||
|
@ -823,10 +836,10 @@ node(s) where ``ironic-conductor`` is running.
|
|||
sudo apt-get install xinetd tftpd-hpa syslinux-common pxelinux
|
||||
|
||||
Fedora 21/RHEL7/CentOS7:
|
||||
sudo yum install tftp-server syslinux-tftpboot
|
||||
sudo yum install tftp-server syslinux-tftpboot xinetd
|
||||
|
||||
Fedora 22 or higher:
|
||||
sudo dnf install tftp-server syslinux-tftpboot
|
||||
sudo dnf install tftp-server syslinux-tftpboot xinetd
|
||||
|
||||
#. Use xinetd to provide a tftp server serving ``/tftpboot``.
|
||||
Create or edit ``/etc/xinetd.d/tftp`` as below::
|
||||
|
@ -1443,7 +1456,7 @@ reboots won't happen via PXE or Virtual Media. Instead, it will boot from a
|
|||
local boot loader installed on the disk.
|
||||
|
||||
It's important to note that in order for this to work the image being
|
||||
deployed with Bare Metal serivce **must** contain ``grub2`` installed within it.
|
||||
deployed with Bare Metal service **must** contain ``grub2`` installed within it.
|
||||
|
||||
Enabling the local boot is different when Bare Metal service is used with
|
||||
Compute service and without it.
|
||||
|
@ -1905,6 +1918,9 @@ deployment. The list of support hints is:
|
|||
* wwn (STRING): unique storage identifier
|
||||
* wwn_with_extension (STRING): unique storage identifier with the vendor extension appended
|
||||
* wwn_vendor_extension (STRING): unique vendor storage identifier
|
||||
* rotational (BOOLEAN): whether it's a rotational device or not. This
|
||||
hint makes it easier to distinguish HDDs (rotational) and SSDs (not
|
||||
rotational) when choosing which disk Ironic should deploy the image onto.
|
||||
* name (STRING): the device name, e.g. /dev/md0
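Conceptually, a device matches when every supplied hint agrees with the
device's properties. The sketch below illustrates that matching logic with
made-up device data; it is not the actual ironic-python-agent implementation.

```python
# Illustrative sketch: pick the first block device satisfying every hint.
# Device dictionaries and values below are invented for the example.
def match_root_device(devices, hints):
    """Return the first device whose properties satisfy all hints."""
    for dev in devices:
        if all(dev.get(key) == value for key, value in hints.items()):
            return dev
    return None


devices = [
    {'name': '/dev/sda', 'rotational': True, 'size': 500},
    {'name': '/dev/sdb', 'rotational': False, 'size': 240},
]

# Prefer an SSD (a non-rotational device):
ssd = match_root_device(devices, {'rotational': False})
print(ssd['name'])  # /dev/sdb
```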
|
||||
|
||||
|
||||
|
@ -2191,10 +2207,11 @@ Enabling the configuration drive (configdrive)
|
|||
Starting with the Kilo release, the Bare Metal service supports exposing
|
||||
a configuration drive image to the instances.
|
||||
|
||||
Configuration drive can store metadata and attaches to the instance when it
|
||||
boots. One use case for using the configuration drive is to expose a
|
||||
networking configuration when you do not use DHCP to assign IP addresses to
|
||||
instances.
|
||||
The configuration drive is used to store instance-specific metadata and is presented to
|
||||
the instance as a disk partition labeled ``config-2``. The configuration drive has
|
||||
a maximum size of 64MB. One use case for using the configuration drive is to
|
||||
expose a networking configuration when you do not use DHCP to assign IP
|
||||
addresses to instances.
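Inside the instance, software such as cloud-init reads that partition after
mounting it. As a minimal sketch (the mount point is an assumption; the
``openstack/latest/meta_data.json`` path follows the OpenStack metadata
format referenced later in this section):

```python
# Sketch: read instance metadata from an already-mounted config-2 partition.
# The default mount point below is an assumption for illustration.
import json
import os


def read_metadata(mount_point='/mnt/config'):
    """Load meta_data.json from a mounted configuration drive."""
    path = os.path.join(mount_point, 'openstack', 'latest', 'meta_data.json')
    with open(path) as f:
        return json.load(f)
```

Tools like cloud-init perform essentially this lookup automatically when a
partition labeled ``config-2`` is present.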
|
||||
|
||||
The configuration drive is usually used in conjunction with the Compute
|
||||
service, but the Bare Metal service also offers a standalone way of using it.
|
||||
|
@ -2204,14 +2221,10 @@ The following sections will describe both methods.
|
|||
When used with Compute service
|
||||
------------------------------
|
||||
|
||||
To enable the configuration drive and passes user customized script when deploying an
|
||||
instance, pass ``--config-drive true`` parameter and ``--user-data`` to the
|
||||
``nova boot`` command, for example::
|
||||
To enable the configuration drive for a specific request, pass
|
||||
``--config-drive true`` parameter to the ``nova boot`` command, for example::
|
||||
|
||||
nova boot --config-drive true --flavor baremetal --image test-image --user-data ./my-script instance-1
|
||||
|
||||
Then ``my-script`` is accessible from the configuration drive and could be
|
||||
performed automatically by cloud-init if it is integrated with the instance image.
|
||||
nova boot --config-drive true --flavor baremetal --image test-image instance-1
|
||||
|
||||
It's also possible to enable the configuration drive automatically on
|
||||
all instances by configuring the ``OpenStack Compute service`` to always
|
||||
|
@ -2223,6 +2236,10 @@ create a configuration drive by setting the following option in the
|
|||
|
||||
force_config_drive=True
|
||||
|
||||
In some cases, you may wish to pass a user-customized script when deploying an instance.
|
||||
To do this, pass ``--user-data /path/to/file`` to the ``nova boot`` command.
|
||||
More information can be found at `Provide user data to instances <http://docs.openstack.org/user-guide/cli_provide_user_data_to_instances.html>`_.
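For instance, a user data script passed via ``--user-data`` might look like
the following. This is a hypothetical sketch: cloud-init in the instance
image executes scripts that begin with a shebang line, and the marker file
path is purely illustrative.

```python
#!/usr/bin/env python
# Hypothetical user data script: drop a marker file on first boot.
# The path is an illustrative assumption, not a required location.
with open('/tmp/provisioned-by-configdrive', 'w') as f:
    f.write('customized via config drive user data\n')
```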
|
||||
|
||||
|
||||
When used standalone
|
||||
--------------------
|
||||
|
@ -2298,6 +2315,72 @@ but in order to use it we should follow some rules:
|
|||
|
||||
.. _`expected format`: http://docs.openstack.org/user-guide/cli_config_drive.html#openstack-metadata-format
|
||||
|
||||
|
||||
Appending kernel parameters to boot instances
|
||||
=============================================
|
||||
|
||||
The Bare Metal service supports passing custom kernel parameters to boot instances to fit
|
||||
users' requirements. How the kernel parameters are appended depends on how the instance is booted.
|
||||
|
||||
Network boot
|
||||
------------
|
||||
Currently, the Bare Metal service supports assigning unified kernel parameters to PXE
|
||||
booted instances by:
|
||||
|
||||
* Modifying the ``[pxe]/pxe_append_params`` configuration option, for example::
|
||||
|
||||
[pxe]
|
||||
|
||||
pxe_append_params = quiet splash
|
||||
|
||||
* Copying one of the shipped templates to another location, for example::
|
||||
|
||||
https://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/pxe_config.template
|
||||
|
||||
Make your modifications there, then point to the custom template via the configuration
|
||||
options: ``[pxe]/pxe_config_template`` and ``[pxe]/uefi_pxe_config_template``.
|
||||
|
||||
Local boot
|
||||
----------
|
||||
For local boot instances, users can make use of configuration drive
|
||||
(see `Enabling the configuration drive (configdrive)`_) to pass a custom
|
||||
script to append kernel parameters when creating an instance. This is more
|
||||
flexible and can vary per instance.
|
||||
Here is an example for grub2 on Ubuntu; users can customize it
|
||||
to fit their use case:
|
||||
|
||||
.. code:: python
|
||||
|
||||
#!/usr/bin/env python
|
||||
import os
|
||||
|
||||
# Default grub2 config file in Ubuntu
|
||||
grub_file = '/etc/default/grub'
|
||||
# Add parameters here to pass to instance.
|
||||
kernel_parameters = ['quiet', 'splash']
|
||||
grub_cmd = 'GRUB_CMDLINE_LINUX'
|
||||
old_grub_file = grub_file+'~'
|
||||
os.rename(grub_file, old_grub_file)
|
||||
cmdline_existed = False
|
||||
with open(grub_file, 'w') as writer, \
|
||||
open(old_grub_file, 'r') as reader:
|
||||
for line in reader:
|
||||
key = line.split('=')[0]
|
||||
if key == grub_cmd:
|
||||
# If the option already has a value, append to it:
|
||||
if line.strip()[-1] == '"':
|
||||
line = line.strip()[:-1] + ' ' + ' '.join(kernel_parameters) + '"\n'
|
||||
cmdline_existed = True
|
||||
writer.write(line)
|
||||
if not cmdline_existed:
|
||||
line = grub_cmd + '=' + '"' + ' '.join(kernel_parameters) + '"'
|
||||
writer.write(line)
|
||||
|
||||
os.remove(old_grub_file)
|
||||
os.system('update-grub')
|
||||
os.system('reboot')
|
||||
|
||||
|
||||
.. _BuildingDeployRamdisk:
|
||||
|
||||
Building or downloading a deploy ramdisk image
|
||||
|
|
|
@ -19,18 +19,19 @@ In-band Inspection
|
|||
|
||||
If you used in-band inspection with **ironic-discoverd**, you have to install
|
||||
**python-ironic-inspector-client** during the upgrade. This package contains a
|
||||
client module for in-band inspection service, which was previously part of
|
||||
**ironic-discoverd** package. Ironic Liberty supports **ironic-discoverd**
|
||||
service, but does not support its in-tree client module. Please refer to
|
||||
client module for the in-band inspection service, which was previously part of
|
||||
the **ironic-discoverd** package. Ironic Liberty supports the
|
||||
**ironic-discoverd** service, but does not support its in-tree client module.
|
||||
Please refer to
|
||||
`ironic-inspector version support matrix
|
||||
<http://docs.openstack.org/developer/ironic-inspector/install.html#version-support-matrix>`_
|
||||
for details on which Ironic version can work with which
|
||||
**ironic-inspector**/**ironic-discoverd** version.
|
||||
for details on which Ironic versions can work with which
|
||||
**ironic-inspector**/**ironic-discoverd** versions.
|
||||
|
||||
It's also highly recommended that you switch to using **ironic-inspector**,
|
||||
which is a newer (and compatible on API level) version of the same service.
|
||||
|
||||
The discoverd to inspector upgrade procedure:
|
||||
The discoverd to inspector upgrade procedure is as follows:
|
||||
|
||||
#. Install **ironic-inspector** on the machine where you have
|
||||
**ironic-discoverd** (usually the same as conductor).
|
||||
|
@ -40,13 +41,13 @@ The discoverd to inspector upgrade procedure:
|
|||
`example.conf
|
||||
<https://git.openstack.org/cgit/openstack/ironic-inspector/tree/example.conf>`_.
|
||||
|
||||
The file name is provided on command line when starting
|
||||
The file name is provided on the command line when starting
|
||||
**ironic-discoverd**, and the previously recommended default was
|
||||
``/etc/ironic-discoverd/discoverd.conf``. In this case, for the sake of
|
||||
consistency it's recommended you move the configuration file to
|
||||
``/etc/ironic-inspector/inspector.conf``.
|
||||
|
||||
#. Shutdown **ironic-discoverd**, start **ironic-inspector**.
|
||||
#. Shutdown **ironic-discoverd**, and start **ironic-inspector**.
|
||||
|
||||
#. During upgrade of each conductor instance:
|
||||
|
||||
|
@ -78,8 +79,8 @@ your Nova and Ironic services are as follows:
|
|||
nova-compute if necessary.
|
||||
|
||||
Note that during the period between Nova's upgrade and Ironic's upgrades,
|
||||
instances can still be provisioned to nodes, however, any attempt by users
|
||||
to specify a config drive for an instance will cause error until Ironic's
|
||||
instances can still be provisioned to nodes. However, any attempt by users to
|
||||
specify a config drive for an instance will cause an error until Ironic's
|
||||
upgrade has completed.
|
||||
|
||||
Cleaning
|
||||
|
|
|
@ -76,8 +76,8 @@ Driver-Specific Periodic Tasks
|
|||
------------------------------
|
||||
|
||||
Drivers may run their own periodic tasks, i.e. actions run repeatedly after
|
||||
a certain amount of time. Such task is created by decorating a method on the
|
||||
driver itself or on any interface with periodic_ decorator, e.g.
|
||||
a certain amount of time. Such a task is created by decorating a method on
|
||||
an interface with the periodic_ decorator, e.g.
|
||||
|
||||
::
|
||||
|
||||
|
@ -88,18 +88,15 @@ driver itself or on any interface with periodic_ decorator, e.g.
|
|||
def task(self, manager, context):
|
||||
pass # do something
|
||||
|
||||
class FakeDriver(base.BaseDriver):
|
||||
def __init__(self):
|
||||
self.power = FakePower()
|
||||
|
||||
@periodics.periodic(spacing=42)
|
||||
def task2(self, manager, context):
|
||||
pass # do something
|
||||
|
||||
|
||||
Here the ``spacing`` argument is a period in seconds for a given periodic task.
|
||||
For example, ``spacing=5`` means the task runs every 5 seconds.
|
||||
|
||||
.. note::
|
||||
As of the Newton release, it's possible to bind periodic tasks to a driver
|
||||
object instead of an interface. This is deprecated and support for it will
|
||||
be removed in the Ocata release.
|
||||
|
||||
|
||||
Message Routing
|
||||
===============
|
||||
|
|
|
@ -8,7 +8,7 @@ This document provides some necessary points for developers to consider when
|
|||
writing and reviewing Ironic code. The checklist will help developers get
|
||||
things right.
|
||||
|
||||
Adding new features
|
||||
Adding New Features
|
||||
===================
|
||||
|
||||
Starting with the Mitaka development cycle, Ironic tracks new features using
|
||||
|
@ -49,9 +49,7 @@ Ironic:
|
|||
#. The ironic-drivers team will evaluate the RFE and may advise the submitter
|
||||
to file a spec in ironic-specs to elaborate on the feature request, in case
|
||||
the RFE requires extra scrutiny, more design discussion, etc. For the spec
|
||||
submission process, please see the
|
||||
`specs process <https://wiki.openstack.org/wiki/Ironic/Specs_Process>`_
|
||||
wiki page.
|
||||
submission process, please see the `Ironic Specs Process`_.
|
||||
|
||||
#. If a spec is not required, once the discussion has happened and there is
|
||||
positive consensus among the ironic-drivers team on the RFE, the RFE is
|
||||
|
@ -160,6 +158,54 @@ Agent driver attributes:
|
|||
These are only some fields in use. Other vendor drivers might expose more ``driver_internal_info``
|
||||
properties, please check their development documentation and/or module docstring for details.
|
||||
It is important for developers to make sure these properties follow the precedent of prefixing their
|
||||
variable names with a specific interface name(e.g., iboot_bar, amt_xyz), so as to minimize or avoid
|
||||
variable names with a specific interface name (e.g., iboot_bar, amt_xyz), so as to minimize or avoid
|
||||
any conflicts between interfaces.
|
||||
|
||||
|
||||
Ironic Specs Process
|
||||
====================
|
||||
|
||||
Specifications must follow the template which can be found at
|
||||
`specs/template.rst <http://git.openstack.org/cgit/openstack/ironic-specs/tree/
|
||||
specs/template.rst>`_, which is quite self-documenting. Specifications are
|
||||
proposed by adding them to the `specs/approved` directory, adding a soft link
|
||||
to it from the `specs/not-implemented` directory, and posting it for
|
||||
review to Gerrit. For more information, please see the `README <http://git.
|
||||
openstack.org/cgit/openstack/ironic-specs/tree/README.rst>`_.
|
||||
|
||||
The same `Gerrit process
|
||||
<http://docs.openstack.org/infra/manual/developers.html>`_ as with source code,
|
||||
using the repository `ironic-specs <http://git.openstack.org/cgit/openstack/
|
||||
ironic-specs/>`_, is used to add new specifications.
|
||||
|
||||
All approved specifications are available at:
|
||||
http://specs.openstack.org/openstack/ironic-specs. If a specification has
|
||||
been approved but not completed within one or more releases since the
|
||||
approval, it may be re-reviewed to make sure it still makes sense as written.
|
||||
|
||||
Ironic specifications are part of the `RFE (Requests for Feature Enhancements)
|
||||
process <#adding-new-features>`_.
|
||||
You are welcome to submit patches associated with an RFE, but they will have
|
||||
a -2 ("do not merge") until the specification has been approved. This is to
|
||||
ensure that the patches don't get accidentally merged beforehand. You will
|
||||
still be able to get reviewer feedback and push new patch sets, even with a -2.
|
||||
The `list of core reviewers <https://review.openstack.org/#/admin/groups/352,
|
||||
members>`_ for the specifications is small but mighty. (This is not
|
||||
necessarily the same list of core reviewers for code patches.)
|
||||
|
||||
Changes to existing specs
|
||||
-------------------------
|
||||
|
||||
For approved but not-completed specs:
|
||||
- cosmetic cleanup, fixing errors, and changing the definition of a feature
|
||||
can be done to the spec.
|
||||
|
||||
For approved and completed specs:
|
||||
- changing a previously approved and completed spec should only be done
|
||||
for cosmetic cleanup or fixing errors.
|
||||
- changing the definition of the feature should be done in a new spec.
|
||||
|
||||
|
||||
Please see the `Ironic specs process wiki page <https://wiki.openstack.org/
|
||||
wiki/Ironic/Specs_Process>`_ for further reference.
|
||||
|
||||
|
|
|
@ -8,23 +8,26 @@ This is a quick walkthrough to get you started developing code for Ironic.
|
|||
This assumes you are already familiar with submitting code reviews to
|
||||
an OpenStack project.
|
||||
|
||||
The gate currently runs the unit tests under both
|
||||
Python 2.7 and Python 3.4. It is strongly encouraged to run the unit tests
|
||||
locally under one, the other, or both prior to submitting a patch.
|
||||
The gate currently runs the unit tests under Python 2.7, Python 3.4
|
||||
and Python 3.5. It is strongly encouraged to run the unit tests locally prior
|
||||
to submitting a patch.
|
||||
|
||||
.. note::
|
||||
Do not run unit tests on the same environment as devstack due to
|
||||
conflicting configuration with system dependencies.
|
||||
|
||||
.. note::
|
||||
This document is compatible with Python (3.5), Ubuntu (16.04) and Fedora (23).
|
||||
|
||||
.. seealso::
|
||||
|
||||
http://docs.openstack.org/infra/manual/developers.html#development-workflow
|
||||
|
||||
Install prerequisites (for python 2.7):
|
||||
Install prerequisites for python 2.7:
|
||||
|
||||
- Ubuntu/Debian::
|
||||
|
||||
sudo apt-get install python-dev libssl-dev python-pip libmysqlclient-dev libxml2-dev libxslt-dev libpq-dev git git-review libffi-dev gettext ipmitool psmisc graphviz libjpeg-dev
|
||||
sudo apt-get install build-essential python-dev libssl-dev python-pip libmysqlclient-dev libxml2-dev libxslt-dev libpq-dev git git-review libffi-dev gettext ipmitool psmisc graphviz libjpeg-dev
|
||||
|
||||
- Fedora 21/RHEL7/CentOS7::
|
||||
|
||||
|
@ -50,13 +53,26 @@ Install prerequisites (for python 2.7):
|
|||
`<https://software.opensuse.org/download.html?project=graphics&package=graphviz-plugins>`_.
|
||||
|
||||
|
||||
To use Python 3.4, follow the instructions above to install prerequisites and
|
||||
If you need Python 3.4, follow the instructions above to install prerequisites for 2.7 and
|
||||
additionally install the following packages:
|
||||
|
||||
- On Ubuntu/Debian::
|
||||
- On Ubuntu 14.x/Debian::
|
||||
|
||||
sudo apt-get install python3-dev
|
||||
|
||||
- On Ubuntu 16.04::
|
||||
|
||||
wget https://www.python.org/ftp/python/3.4.4/Python-3.4.4.tgz
|
||||
sudo tar xzf Python-3.4.4.tgz
|
||||
cd Python-3.4.4
|
||||
sudo ./configure
|
||||
sudo make altinstall
|
||||
|
||||
# This will install Python 3.4 without replacing 3.5. To check if 3.4 was installed properly
|
||||
run this command:
|
||||
|
||||
python3.4 -V
|
||||
|
||||
- On Fedora 21/RHEL7/CentOS7::
|
||||
|
||||
sudo yum install python3-devel
|
||||
|
@ -65,6 +81,29 @@ additionally install the following packages:
|
|||
|
||||
sudo dnf install python3-devel
|
||||
|
||||
If you need Python 3.5, follow the instructions for installing prerequisites for Python 2.7 and
|
||||
run the following commands.
|
||||
|
||||
- On Ubuntu 14.04::
|
||||
|
||||
wget https://www.python.org/ftp/python/3.5.2/Python-3.5.2.tgz
|
||||
sudo tar xzf Python-3.5.2.tgz
|
||||
cd Python-3.5.2
|
||||
sudo ./configure
|
||||
sudo make altinstall
|
||||
|
||||
# This will install Python 3.5 without replacing 3.4. To check if 3.5 was installed properly
|
||||
run this command:
|
||||
|
||||
python3.5 -V
|
||||
|
||||
- On Fedora 23::
|
||||
|
||||
sudo dnf install -y dnf-plugins-core
|
||||
sudo dnf copr enable -y mstuchli/Python3.5
|
||||
sudo dnf install -y python35-python3
|
||||
|
||||
|
||||
If your distro has at least tox 1.8, use similar command to install
|
||||
``python-tox`` package. Otherwise install this on all distros::
|
||||
|
||||
|
@ -95,15 +134,24 @@ All unit tests should be run using tox. To run Ironic's entire test suite::
|
|||
# run all tests (unit under both py27 and py34, and pep8)
|
||||
tox
|
||||
|
||||
To run the unit tests under py34 and also run the pep8 tests::
|
||||
|
||||
# run all tests (unit under py34 and pep8)
|
||||
tox -epy34 -epep8
|
||||
|
||||
To run the unit tests under py27 and also run the pep8 tests::
|
||||
|
||||
# run all tests (unit under py27 and pep8)
|
||||
tox -epy27 -epep8
|
||||
|
||||
To run the unit tests under py34 and also run the pep8 tests::
|
||||
.. note::
|
||||
If tests are run under py27 and then under py34 or py35, the following error may occur::
|
||||
|
||||
# run all tests (unit under py34 and pep8)
|
||||
tox -epy34 -epep8
|
||||
db type could not be determined
|
||||
ERROR: InvocationError: '/home/ubuntu/ironic/.tox/py35/bin/ostestr'
|
||||
|
||||
To overcome this error, remove the file ``.testrepository/times.dbm``
|
||||
and then run the py34 or py35 test.
|
||||
|
||||
You may pass options to the test programs using positional arguments.
|
||||
To run a specific unit test, this passes the -r option and desired test
|
||||
|
@ -377,9 +425,9 @@ Switch to the stack user and clone DevStack::
|
|||
git clone https://git.openstack.org/openstack-dev/devstack.git devstack
|
||||
|
||||
Create devstack/local.conf with minimal settings required to enable Ironic.
|
||||
You can use either of two drivers for deploy: pxe_* or agent_*, see :ref:`IPA`
|
||||
You can use either of two drivers for deploy: agent\_\* or pxe\_\*, see :ref:`IPA`
|
||||
for explanation. An example local.conf that enables both types of drivers
|
||||
and uses the ``pxe_ssh`` driver by default::
|
||||
and uses the ``agent_ipmitool`` driver by default::
|
||||
|
||||
cd devstack
|
||||
cat >local.conf <<END
|
||||
|
@ -429,14 +477,13 @@ and uses the ``pxe_ssh`` driver by default::
|
|||
IRONIC_VM_SSH_PORT=22
|
||||
IRONIC_BAREMETAL_BASIC_OPS=True
|
||||
DEFAULT_INSTANCE_TYPE=baremetal
|
||||
IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA=True
|
||||
|
||||
# Enable Ironic drivers.
|
||||
IRONIC_ENABLED_DRIVERS=fake,agent_ssh,agent_ipmitool,pxe_ssh,pxe_ipmitool
|
||||
|
||||
# Change this to alter the default driver for nodes created by devstack.
|
||||
# This driver should be in the enabled list above.
|
||||
IRONIC_DEPLOY_DRIVER=pxe_ssh
|
||||
IRONIC_DEPLOY_DRIVER=agent_ipmitool
|
||||
|
||||
# The parameters below represent the minimum possible values to create
|
||||
# functional nodes.
|
||||
|
@ -465,6 +512,24 @@ and uses the ``pxe_ssh`` driver by default::
|
|||
|
||||
END
|
||||
|
||||
.. note::
|
||||
Git protocol requires access to port 9418, which is not a standard port that
|
||||
corporate firewalls always allow. If you are behind a firewall or on a proxy that
|
||||
blocks Git protocol, modify the ``enable_plugin`` line to use ``https://`` instead
|
||||
of ``git://`` and add ``GIT_BASE=https://git.openstack.org`` to the credentials::
|
||||
|
||||
GIT_BASE=https://git.openstack.org
|
||||
|
||||
# Enable Ironic plugin
|
||||
enable_plugin ironic https://git.openstack.org/openstack/ironic
|
||||
|
||||
.. note::
|
||||
The agent_ssh and pxe_ssh drivers are being deprecated in favor of the
|
||||
more production-like agent_ipmitool and pxe_ipmitool drivers. When a
|
||||
\*_ipmitool driver is set and IRONIC_IS_HARDWARE variable is false devstack
|
||||
will automatically set up `VirtualBMC <https://github.com/openstack/virtualbmc>`_
|
||||
to control the power state of the virtual baremetal nodes.
|
||||
|
||||
.. note::
|
||||
When running QEMU as non-root user (e.g. ``qemu`` on Fedora or ``libvirt-qemu`` on Ubuntu),
|
||||
make sure ``IRONIC_VM_LOG_DIR`` points to a directory where QEMU will be able to write.
|
||||
|
@ -642,7 +707,9 @@ Building developer documentation
|
|||
|
||||
If you would like to build the documentation locally, eg. to test your
|
||||
documentation changes before uploading them for review, run these
|
||||
commands to build the documentation set::
|
||||
commands to build the documentation set:
|
||||
|
||||
- On your local machine::
|
||||
|
||||
# activate your development virtualenv
|
||||
source .tox/venv/bin/activate
|
||||
|
@ -650,7 +717,26 @@ commands to build the documentation set::
|
|||
# build the docs
|
||||
tox -edocs
|
||||
|
||||
Now use your browser to open the top-level index.html located at::
|
||||
# Now use your browser to open the top-level index.html located at:
|
||||
|
||||
ironic/doc/build/html/index.html
|
||||
|
||||
|
||||
- On a remote machine::
|
||||
|
||||
# Go to the directory that contains the docs
|
||||
cd ~/ironic/doc/source/
|
||||
|
||||
# Build the docs
|
||||
tox -edocs
|
||||
|
||||
# Change directory to the newly built HTML files
|
||||
cd ~/ironic/doc/build/html/
|
||||
|
||||
# Create a server using python on port 8000
|
||||
python -m SimpleHTTPServer 8000
|
||||
|
||||
# Now use your browser to open the top-level index.html located at:
|
||||
|
||||
http://your_ip:8000
|
||||
|
||||
|
|
|
@ -0,0 +1,133 @@
|
|||
==========================================
|
||||
Ironic multitenant networking and DevStack
|
||||
==========================================
|
||||
|
||||
This guide will walk you through using OpenStack Ironic/Neutron with the ML2
|
||||
``networking-generic-switch`` plugin.
|
||||
|
||||
|
||||
Using VMs as baremetal servers
|
||||
==============================
|
||||
|
||||
This scenario shows how to set up DevStack to use Ironic/Neutron integration
|
||||
with VMs as baremetal servers and ML2 ``networking-generic-switch``
|
||||
that interacts with OVS.
|
||||
|
||||
|
||||
DevStack Configuration
|
||||
----------------------
|
||||
The following ``local.conf`` will set up DevStack with 3 VMs that are
|
||||
registered in ironic. The ``networking-generic-switch`` driver will be installed and
|
||||
configured in Neutron.
|
||||
|
||||
::
|
||||
|
||||
[[local|localrc]]
|
||||
|
||||
# Configure ironic from ironic devstack plugin.
|
||||
enable_plugin ironic https://review.openstack.org/openstack/ironic
|
||||
|
||||
# Install networking-generic-switch Neutron ML2 driver that interacts with OVS
|
||||
enable_plugin networking-generic-switch https://review.openstack.org/openstack/networking-generic-switch
|
||||
Q_PLUGIN_EXTRA_CONF_PATH=/etc/neutron/plugins/ml2
|
||||
Q_PLUGIN_EXTRA_CONF_FILES['networking-generic-switch']=ml2_conf_genericswitch.ini
|
||||
|
||||
# Add link local info when registering Ironic node
|
||||
IRONIC_USE_LINK_LOCAL=True
|
||||
|
||||
IRONIC_ENABLED_NETWORK_INTERFACES=flat,neutron
|
||||
IRONIC_NETWORK_INTERFACE=neutron
|
||||
|
||||
#Networking configuration
|
||||
OVS_PHYSICAL_BRIDGE=brbm
|
||||
PHYSICAL_NETWORK=mynetwork
|
||||
IRONIC_PROVISION_NETWORK_NAME=ironic-provision
|
||||
IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24
|
||||
IRONIC_PROVISION_SUBNET_GATEWAY=10.0.5.1
|
||||
|
||||
Q_PLUGIN=ml2
|
||||
ENABLE_TENANT_VLANS=True
|
||||
Q_ML2_TENANT_NETWORK_TYPE=vlan
|
||||
TENANT_VLAN_RANGE=100:150
|
||||
|
||||
# Credentials
|
||||
ADMIN_PASSWORD=password
|
||||
RABBIT_PASSWORD=password
|
||||
DATABASE_PASSWORD=password
|
||||
SERVICE_PASSWORD=password
|
||||
SERVICE_TOKEN=password
|
||||
SWIFT_HASH=password
|
||||
SWIFT_TEMPURL_KEY=password
|
||||
|
||||
# Enable Ironic API and Ironic Conductor
|
||||
enable_service ironic
|
||||
enable_service ir-api
|
||||
enable_service ir-cond
|
||||
|
||||
# Enable Neutron which is required by Ironic and disable nova-network.
|
||||
disable_service n-net
|
||||
disable_service n-novnc
|
||||
enable_service q-svc
|
||||
enable_service q-agt
|
||||
enable_service q-dhcp
|
||||
enable_service q-l3
|
||||
enable_service q-meta
|
||||
enable_service neutron
|
||||
|
||||
# Enable Swift for agent_* drivers
|
||||
enable_service s-proxy
|
||||
enable_service s-object
|
||||
enable_service s-container
|
||||
enable_service s-account
|
||||
|
||||
# Disable Horizon
|
||||
disable_service horizon
|
||||
|
||||
# Disable Heat
|
||||
disable_service heat h-api h-api-cfn h-api-cw h-eng
|
||||
|
||||
# Disable Cinder
|
||||
disable_service cinder c-sch c-api c-vol
|
||||
|
||||
# Disable Tempest
|
||||
disable_service tempest
|
||||
|
||||
# Swift temp URL's are required for agent_* drivers.
|
||||
SWIFT_ENABLE_TEMPURLS=True
|
||||
|
||||
# Create 3 virtual machines to pose as Ironic's baremetal nodes.
|
||||
IRONIC_VM_COUNT=3
|
||||
IRONIC_VM_SSH_PORT=22
|
||||
IRONIC_BAREMETAL_BASIC_OPS=True
|
||||
|
||||
# Enable Ironic drivers.
|
||||
IRONIC_ENABLED_DRIVERS=fake,agent_ssh,agent_ipmitool,pxe_ssh,pxe_ipmitool
|
||||
|
||||
# Change this to alter the default driver for nodes created by devstack.
|
||||
# This driver should be in the enabled list above.
|
||||
IRONIC_DEPLOY_DRIVER=agent_ssh
|
||||
|
||||
# The parameters below represent the minimum possible values to create
|
||||
# functional nodes.
|
||||
IRONIC_VM_SPECS_RAM=1024
|
||||
IRONIC_VM_SPECS_DISK=10
|
||||
|
||||
# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
|
||||
IRONIC_VM_EPHEMERAL_DISK=0
|
||||
|
||||
# To build your own IPA ramdisk from source, set this to True
|
||||
IRONIC_BUILD_DEPLOY_RAMDISK=False
|
||||
|
||||
VIRT_DRIVER=ironic
|
||||
|
||||
# By default, DevStack creates a 10.0.0.0/24 network for instances.
|
||||
# If this overlaps with the hosts network, you may adjust with the
|
||||
# following.
|
||||
NETWORK_GATEWAY=10.1.0.1
|
||||
FIXED_RANGE=10.1.0.0/24
|
||||
FIXED_NETWORK_SIZE=256
|
||||
|
||||
# Log all output to files
|
||||
LOGFILE=$HOME/devstack.log
|
||||
LOGDIR=$HOME/logs
|
||||
IRONIC_VM_LOG_DIR=$HOME/ironic-bm-logs
|
|
@ -245,7 +245,8 @@ Features
|
|||
image provisioning is done using iSCSI over data network, so this driver has
|
||||
the benefit of security enhancement with the same performance. It segregates
|
||||
management info from data channel.
|
||||
* Support for out-of-band cleaning operations.
|
||||
* Supports both out-of-band and in-band cleaning operations. For more details,
|
||||
see :ref:`InbandvsOutOfBandCleaning`.
|
||||
* Remote Console
|
||||
* HW Sensors
|
||||
* Works well for machines with resource constraints (lesser amount of memory).
|
||||
|
@ -288,6 +289,7 @@ Nodes configured for iLO driver should have the ``driver`` property set to
|
|||
- ``ilo_username``: Username for the iLO with administrator privileges.
|
||||
- ``ilo_password``: Password for the above iLO user.
|
||||
- ``ilo_deploy_iso``: The glance UUID of the deploy ramdisk ISO image.
|
||||
- ``ca_file``: (optional) CA certificate file to validate iLO.
|
||||
- ``client_port``: (optional) Port to be used for iLO operations if you are
|
||||
using a custom port on the iLO. Default port used is 443.
|
||||
- ``client_timeout``: (optional) Timeout for iLO operations. Default timeout
|
||||
|
@ -295,6 +297,14 @@ Nodes configured for iLO driver should have the ``driver`` property set to
|
|||
- ``console_port``: (optional) Node's UDP port for console access. Any unused
|
||||
port on the ironic conductor node may be used.
|
||||
|
||||
.. note::
|
||||
To update SSL certificates into iLO, you can refer to `HPE Integrated
|
||||
Lights-Out Security Technology Brief <http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504>`_.
|
||||
You can use iLO hostname or IP address as a 'Common Name (CN)' while
|
||||
generating Certificate Signing Request (CSR). Use the same value as
|
||||
`ilo_address` while enrolling node to Bare Metal service to avoid SSL
|
||||
certificate validation errors related to hostname mismatch.
|
||||
|
||||
For example, you could run a similar command like below to enroll the ProLiant
|
||||
node::
|
||||
|
||||
|
@@ -425,6 +435,7 @@ Nodes configured for iLO driver should have the ``driver`` property set to
 - ``ilo_username``: Username for the iLO with administrator privileges.
 - ``ilo_password``: Password for the above iLO user.
 - ``ilo_deploy_iso``: The glance UUID of the deploy ramdisk ISO image.
+- ``ca_file``: (optional) CA certificate file to validate iLO.
 - ``client_port``: (optional) Port to be used for iLO operations if you are
   using a custom port on the iLO. Default port used is 443.
 - ``client_timeout``: (optional) Timeout for iLO operations. Default timeout
@@ -432,6 +443,14 @@ Nodes configured for iLO driver should have the ``driver`` property set to
 - ``console_port``: (optional) Node's UDP port for console access. Any unused
   port on the ironic conductor node may be used.
 
+.. note::
+   To update SSL certificates in iLO, you can refer to the `HPE Integrated
+   Lights-Out Security Technology Brief <http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504>`_.
+   You can use the iLO hostname or IP address as the 'Common Name (CN)' while
+   generating a Certificate Signing Request (CSR). Use the same value as
+   ``ilo_address`` while enrolling the node to the Bare Metal service to
+   avoid SSL certificate validation errors related to hostname mismatch.
+
 For example, you could run a command similar to the one below to enroll a
 ProLiant node::
@@ -503,7 +522,8 @@ Features
 * Automatic detection of current boot mode.
 * Automatic setting of the required boot mode, if UEFI boot mode is requested
   by the nova flavor's extra spec.
-* Support for out-of-band cleaning operations.
+* Supports both out-of-band and in-band cleaning operations. For more details,
+  see :ref:`InbandvsOutOfBandCleaning`.
 * Support for out-of-band hardware inspection.
 * Supports UEFI Boot mode
 * Supports UEFI Secure Boot
@@ -543,6 +563,7 @@ Nodes configured for iLO driver should have the ``driver`` property set to
 - ``ilo_password``: Password for the above iLO user.
 - ``deploy_kernel``: The glance UUID of the deployment kernel.
 - ``deploy_ramdisk``: The glance UUID of the deployment ramdisk.
+- ``ca_file``: (optional) CA certificate file to validate iLO.
 - ``client_port``: (optional) Port to be used for iLO operations if you are
   using a custom port on the iLO. Default port used is 443.
 - ``client_timeout``: (optional) Timeout for iLO operations. Default timeout
@@ -550,6 +571,14 @@ Nodes configured for iLO driver should have the ``driver`` property set to
 - ``console_port``: (optional) Node's UDP port for console access. Any unused
   port on the ironic conductor node may be used.
 
+.. note::
+   To update SSL certificates in iLO, you can refer to the `HPE Integrated
+   Lights-Out Security Technology Brief <http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504>`_.
+   You can use the iLO hostname or IP address as the 'Common Name (CN)' while
+   generating a Certificate Signing Request (CSR). Use the same value as
+   ``ilo_address`` while enrolling the node to the Bare Metal service to
+   avoid SSL certificate validation errors related to hostname mismatch.
+
 For example, you could run a command similar to the one below to enroll a
 ProLiant node::
@@ -775,7 +804,6 @@ Supported **Automated** Cleaning Operations
 - clean_priority_reset_secure_boot_keys_to_default=20
 - clean_priority_clear_secure_boot_keys=0
 - clean_priority_reset_ilo_credential=30
 - clean_priority_erase_devices=10
 
 For more information on node automated cleaning, see :ref:`automated_cleaning`
@@ -10,8 +10,8 @@ Overview
 HP OneView [1]_ is a single integrated platform, packaged as an appliance that
 implements a software-defined approach to managing physical infrastructure.
 The appliance supports scenarios such as deploying bare metal servers, for
-instance. In this context, the ``HP OneView driver`` for Ironic enables the
-users of OneView to use Ironic as a bare metal provider to their managed
+instance. In this context, the ``HP OneView driver`` for ironic enables the
+users of OneView to use ironic as a bare metal provider to their managed
 physical hardware.
 
 Currently there are two OneView drivers:
@@ -20,25 +20,42 @@ Currently there are two OneView drivers:
 * ``agent_pxe_oneview``
 
 The ``iscsi_pxe_oneview`` and ``agent_pxe_oneview`` drivers implement the
-core interfaces of an Ironic Driver [2]_, and use the ``python-oneviewclient``
-[3]_ to provide communication between Ironic and OneView through OneView's
-Rest API.
+core interfaces of an ironic Driver [2]_, and use the ``python-oneviewclient``
+[3]_ to provide communication between ironic and OneView through OneView's
+REST API.
 
 To provide a bare metal instance there are four components involved in the
 process:
 
-* Ironic service
-* python-oneviewclient
-* OneView appliance
-* iscsi_pxe_oneview/agent_pxe_oneview driver
+* The ironic service
+* The ironic driver for OneView, which can be:
+
+  * `iscsi_pxe_oneview` or
+  * `agent_pxe_oneview`
+
+* The python-oneviewclient library
+* The OneView appliance
 
-The role of Ironic is to serve as a bare metal provider to OneView's managed
+The role of ironic is to serve as a bare metal provider to OneView's managed
 physical hardware and to provide communication with other necessary OpenStack
-services such as Nova and Glance. When Ironic receives a boot request, it
-works together with the Ironic OneView driver to access a machine in OneView,
+services such as Nova and Glance. When ironic receives a boot request, it
+works together with the ironic OneView driver to access a machine in OneView,
 the ``python-oneviewclient`` being responsible for the communication with the
 OneView appliance.
 
+The Mitaka version of the ironic OneView drivers only supported what we call
+**pre-allocation** of nodes, meaning that resources in OneView are allocated
+prior to the node being made available in ironic. This model is deprecated
+and will be supported until OpenStack's `P` release. From the Newton release
+on, the OneView drivers enable a new feature called **dynamic allocation** of
+nodes [6]_. In this model, the driver allocates resources in OneView only at
+boot time, allowing idle resources in ironic to be used by OneView users and
+enabling actual resource sharing between ironic and OneView users.
+
+Since OneView can claim nodes in ``available`` state at any time, a set of
+tasks runs periodically to detect nodes in use by OneView. A node in use by
+OneView is placed in ``manageable`` state and has maintenance mode set. Once
+the node is no longer in use, these tasks will place it back in ``available``
+state and clear maintenance mode.
+
 Prerequisites
 =============
@@ -51,13 +68,13 @@ The following requirements apply for both ``iscsi_pxe_oneview`` and
   Minimum version supported is 2.0.
 
 * ``python-oneviewclient`` is a python package containing a client to manage
-  the communication between Ironic and OneView.
+  the communication between ironic and OneView.
 
   Install the ``python-oneviewclient`` module to enable the communication.
-  Minimum version required is 2.0.2 but it is recommended to install the most
+  Minimum version required is 2.4.0 but it is recommended to install the most
   up-to-date version::
 
-  $ pip install "python-oneviewclient<3.0.0,>=2.0.2"
+  $ pip install "python-oneviewclient<3.0.0,>=2.4.0"
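As a quick sanity check, the version constraint above can be verified mechanically. This is an illustrative sketch in plain Python, not part of ironic or python-oneviewclient:

```python
def parse_version(v):
    """Turn a version string like '2.4.0' into (2, 4, 0) for comparison."""
    return tuple(int(part) for part in v.split("."))


def satisfies(installed, lower="2.4.0", upper="3.0.0"):
    """True when lower <= installed < upper, mirroring '<3.0.0,>=2.4.0'."""
    iv = parse_version(installed)
    return parse_version(lower) <= iv < parse_version(upper)


if __name__ == "__main__":
    print(satisfies("2.4.0"))   # True: minimum supported version
    print(satisfies("2.0.2"))   # False: too old for Newton
    print(satisfies("3.0.0"))   # False: the upper bound is excluded
```

Real deployments should let pip enforce the constraint; the sketch only illustrates the semantics of the range.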
 
 Tested platforms
 ================
@@ -71,11 +88,13 @@ Tested platforms
   OneView's ServerProfile. It has been tested with the following servers:
 
   - Proliant BL460c Gen8
+  - Proliant BL460c Gen9
   - Proliant BL465c Gen8
   - Proliant DL360 Gen9 (starting with python-oneviewclient 2.1.0)
 
-Notice here that to the driver work correctly with Gen8 and Gen9 DL servers
-in general, the hardware also needs to run version 4.2.3 of iLO, with Redfish.
+Notice that for the driver to work correctly with Gen8 and Gen9 DL servers
+in general, the hardware also needs to run version 4.2.3 of iLO, with
+Redfish enabled.
 
 Drivers
 =======
@@ -91,15 +110,28 @@ Overview
 Configuring and enabling the driver
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-1. Add ``iscsi_pxe_oneview`` to the list of ``enabled_drivers`` in
-   ``/etc/ironic/ironic.conf``. For example::
+1. Add ``iscsi_pxe_oneview`` to the list of ``enabled_drivers`` in your
+   ``ironic.conf`` file. For example::
 
    enabled_drivers = iscsi_pxe_oneview
 
 2. Update the [oneview] section of your ``ironic.conf`` file with your
    OneView credentials and CA certificate file information.
 
-3. Restart the Ironic conductor service. For Ubuntu users, do::
+.. note::
+   If you are using the deprecated ``pre-allocation`` feature (i.e.
+   ``dynamic_allocation`` is set to False on all nodes), you can disable the
+   driver periodic tasks by setting ``enable_periodic_tasks=false`` in the
+   [oneview] section of ``ironic.conf``.
+
+.. note::
+   An operator can set the ``periodic_check_interval`` option in the [oneview]
+   section to set the interval between runs of the periodic check. The default
+   value is 300 seconds (5 minutes). A lower value will reduce the likelihood
+   of races between ironic and OneView at the cost of being more resource
+   intensive.
+
+3. Restart the ironic conductor service. For Ubuntu users, do::
 
    $ sudo service ironic-conductor restart
@@ -112,10 +144,10 @@ Here is an overview of the deploy process for this driver:
 
 1. Admin configures the Proliant baremetal node to use ``iscsi_pxe_oneview``
    driver.
-2. Ironic gets a request to deploy a Glance image on the baremetal node.
+2. ironic gets a request to deploy a Glance image on the baremetal node.
 3. Driver sets the boot device to PXE.
 4. Driver powers on the baremetal node.
-5. Ironic downloads the deploy and user images from a TFTP server.
+5. ironic downloads the deploy and user images from a TFTP server.
 6. Driver reboots the baremetal node.
 7. User image is now deployed.
 8. Driver powers off the machine.
@@ -134,15 +166,28 @@ Overview
 Configuring and enabling the driver
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-1. Add ``agent_pxe_oneview`` to the list of ``enabled_drivers`` in
-   ``/etc/ironic/ironic.conf``. For example::
+1. Add ``agent_pxe_oneview`` to the list of ``enabled_drivers`` in your
+   ``ironic.conf``. For example::
 
    enabled_drivers = fake,pxe_ssh,pxe_ipmitool,agent_pxe_oneview
 
 2. Update the [oneview] section of your ``ironic.conf`` file with your
    OneView credentials and CA certificate file information.
 
-3. Restart the Ironic conductor service. For Ubuntu users, do::
+.. note::
+   If you are using the deprecated ``pre-allocation`` feature (i.e.
+   ``dynamic_allocation`` is set to False on all nodes), you can disable the
+   driver periodic tasks by setting ``enable_periodic_tasks=false`` in the
+   [oneview] section of ``ironic.conf``.
+
+.. note::
+   An operator can set the ``periodic_check_interval`` option in the [oneview]
+   section to set the interval between runs of the periodic check. The default
+   value is 300 seconds (5 minutes). A lower value will reduce the likelihood
+   of races between ironic and OneView at the cost of being more resource
+   intensive.
+
+3. Restart the ironic conductor service. For Ubuntu users, do::
 
    $ service ironic-conductor restart
@@ -155,7 +200,7 @@ Here is an overview of the deploy process for this driver:
 
 1. Admin configures the Proliant baremetal node to use ``agent_pxe_oneview``
    driver.
-2. Ironic gets a request to deploy a Glance image on the baremetal node.
+2. ironic gets a request to deploy a Glance image on the baremetal node.
 3. Driver sets the boot device to PXE.
 4. Driver powers on the baremetal node.
 5. Node downloads the agent deploy images.
@@ -167,7 +212,7 @@ Here is an overview of the deploy process for this driver:
 11. Driver powers on the machine.
 12. Baremetal node is active and ready to be used.
 
-Registering a OneView node in Ironic
+Registering a OneView node in ironic
 ====================================
 
 Nodes configured to use any of the OneView drivers should have the ``driver``
@@ -181,6 +226,12 @@ etc. In this case, to be enrolled, the node must have the following parameters:
 
   - ``server_hardware_uri``: URI of the Server Hardware on OneView.
 
+  - ``dynamic_allocation``: Boolean value to enable or disable (True/False)
+    ``dynamic allocation`` for the given node. If this parameter is not set,
+    the driver will assume the ``pre-allocation`` model to maintain
+    compatibility on ironic upgrade. Support for this key will be dropped
+    in P, where only dynamic allocation will be used.
+
 * In ``properties/capabilities``
 
   - ``server_hardware_type_uri``: URI of the Server Hardware Type of the
@@ -205,26 +256,77 @@ OneView node, do::
 $ ironic node-update $NODE_UUID add \
   properties/capabilities=server_hardware_type_uri:$SHT_URI,enclosure_group_uri:$EG_URI,server_profile_template_uri:$SPT_URI
 
-In order to deploy, a Server Profile consistent with the Server Profile
-Template of the node MUST be applied to the Server Hardware it represents.
-Server Profile Templates and Server Profiles to be utilized for deployments
-MUST have configuration such that its **first Network Interface** ``boot``
-property is set to "Primary" and connected to Ironic's provisioning network.
+In order to deploy, ironic will create and apply, at boot time, a Server
+Profile based on the Server Profile Template specified on the node to the
+Server Hardware it represents on OneView. The URI of such a Server Profile
+will be stored in the ``driver_info.applied_server_profile_uri`` field while
+the Server is allocated to ironic.
 
-To tell Ironic which NIC should be connected to the provisioning network, do::
+The Server Profile Templates and, therefore, the Server Profiles derived from
+them MUST comply with the following requirements:
+
+* The `MAC Address` option in the `Advanced` section of the Server
+  Profile/Server Profile Template should be set to `Physical`;
+* Their first `Connection` interface should be:
+
+  * connected to ironic's provisioning network, and
+  * have the `Boot` option set to primary.
+
+Node ports should be created considering the **MAC address of the first
+Interface** of the given Server Hardware.
+
+.. note::
+   Old versions of ironic using the ``pre-allocation`` model (before the
+   Newton release) and nodes with the `dynamic_allocation` flag disabled
+   shall have their Server Profiles applied during node enrollment and can
+   have their ports created using the `Virtual` MAC addresses provided on
+   Server Profile application.
+
+To tell ironic which NIC should be connected to the provisioning network, do::
 
 $ ironic port-create -n $NODE_UUID -a $MAC_ADDRESS
 
-For more information on the enrollment process of an Ironic node, see [4]_.
+For more information on the enrollment process of an ironic node, see [4]_.
 
-For more information on the definitions of ``Server Hardware``,
-``Server Profile``, ``Server Profile Template`` and many other OneView
-entities, see [1]_ or browse Help in your OneView appliance menu.
+For more information on the definitions of ``Server Hardware``, ``Server
+Profile``, ``Server Profile Template`` and other OneView entities, refer to
+[1]_ or browse Help in your OneView appliance menu.
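The ``properties/capabilities`` value used with ``ironic node-update`` is a comma-separated list of ``key:value`` pairs. A hypothetical helper (the URIs below are illustrative, not real OneView resources) can build that string:

```python
def build_capabilities(caps):
    """Render a dict as the 'k1:v1,k2:v2' string that ironic expects for
    properties/capabilities (sorted keys for deterministic output)."""
    return ",".join("%s:%s" % (k, v) for k, v in sorted(caps.items()))


# Illustrative URIs only; real values come from your OneView appliance.
value = build_capabilities({
    "server_hardware_type_uri": "/rest/server-hardware-types/1111",
    "enclosure_group_uri": "/rest/enclosure-groups/2222",
})
print(value)
```

The resulting string is what would be passed as `properties/capabilities=...` on the command line.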
+3rd Party Tools
+===============
+
+In order to ease manual tasks, which are often time-consuming, we provide
+useful tools that work nicely with the OneView drivers.
+
+ironic-oneview-cli
+^^^^^^^^^^^^^^^^^^
+
+The ``ironic-oneview-cli`` is a command line interface for management tasks
+involving OneView nodes. Its features include a facility to create ironic
+nodes with all required parameters for OneView nodes, creation of Nova
+flavors for OneView nodes and, starting from version 0.3.0, the migration of
+nodes from the ``pre-allocation`` to the ``dynamic allocation`` model.
+
+For more details on how the ironic-oneview-cli works and how to set it up,
+see [8]_.
+
+ironic-oneviewd
+^^^^^^^^^^^^^^^
+
+The ``ironic-oneviewd`` daemon monitors the ironic inventory of resources,
+providing facilities to operators managing OneView driver deployments. The
+daemon supports both allocation models (dynamic and pre-allocation) as of
+version 0.1.0.
+
+For more details on how ironic-oneviewd works and how to set it up, see [7]_.
 References
 ==========
-.. [1] HP OneView - http://www8.hp.com/us/en/business-solutions/converged-systems/oneview.html
+.. [1] HP OneView - https://www.hpe.com/us/en/integrated-systems/software.html
 .. [2] Driver interfaces - http://docs.openstack.org/developer/ironic/dev/architecture.html#drivers
 .. [3] python-oneviewclient - https://pypi.python.org/pypi/python-oneviewclient
 .. [4] Enrollment process of a node - http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enrollment-process
-.. [5] Ironic install guide - http://docs.openstack.org/developer/ironic/deploy/install-guide.html#installation-guide
+.. [5] ironic install guide - http://docs.openstack.org/developer/ironic/deploy/install-guide.html#installation-guide
 .. [6] Dynamic Allocation in OneView drivers - http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/oneview-drivers-dynamic-allocation.html
 .. [7] ironic-oneviewd - https://pypi.python.org/pypi/ironic-oneviewd/
 .. [8] ironic-oneview-cli - https://pypi.python.org/pypi/ironic-oneview-cli/
@@ -42,6 +42,7 @@ Administrator's Guide
   deploy/inspection
   deploy/security
   deploy/adoption
+  deploy/api-audit-support
   deploy/troubleshooting
   Release Notes <http://docs.openstack.org/releasenotes/ironic/>
   Dashboard (horizon) plugin <http://docs.openstack.org/developer/ironic-ui/>

@@ -68,6 +69,8 @@ Developer's Guide
   dev/code-contribution-guide
   dev/dev-quickstart
   dev/vendor-passthru
+  dev/ironic-multitenant-networking
+
   dev/faq
 
 Indices and tables
@@ -32,6 +32,27 @@ always requests the newest supported API version.
 API Versions History
 --------------------
 
+**1.22**
+
+  Added endpoints for deployment ramdisks.
+
+**1.21**
+
+  Add node ``resource_class`` field.
+
+**1.20**
+
+  Add node ``network_interface`` field.
+
+**1.19**
+
+  Add ``local_link_connection`` and ``pxe_enabled`` fields to the port object.
+
+**1.18**
+
+  Add ``internal_info`` readonly field to the port object, that will be used
+  by ironic to store internal port-related information.
+
 **1.17**
 
   Addition of provision_state verb ``adopt`` which allows an operator

@@ -52,6 +73,7 @@ API Versions History
 **1.14**
 
   Make the following endpoints discoverable via Ironic API:
 
   * '/v1/nodes/<UUID or logical name>/states'
   * '/v1/drivers/<driver name>/properties'
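A client opts into one of the microversions listed above by sending the ``X-OpenStack-Ironic-API-Version`` request header. A minimal sketch of assembling such headers (the header name is as documented; the version values are just examples):

```python
def version_headers(version="1.22"):
    """Headers a client would send to pin an ironic API microversion."""
    return {
        "X-OpenStack-Ironic-API-Version": version,
        "Accept": "application/json",
    }


# e.g. pin 1.19 to see the port object's local_link_connection/pxe_enabled
print(version_headers("1.19")["X-OpenStack-Ironic-API-Version"])  # 1.19
```

Without the header, the server falls back to its minimum supported version, which is why clients that rely on newer fields should always pin a version explicitly.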
File diff suppressed because it is too large
@@ -0,0 +1,29 @@
+[DEFAULT]
+# default target endpoint type
+# should match the endpoint type defined in the service catalog
+target_endpoint_type = None
+
+# possible end paths of API requests
+# paths of API requests for the CADF target typeURI
+# Only the top resource path is needed to identify the class
+# of resources. Ex: log an audit event for API requests
+# with a path containing the "nodes" keyword and a node uuid.
+[path_keywords]
+nodes = node
+drivers = driver
+chassis = chassis
+ports = port
+states = state
+power = None
+provision = None
+maintenance = None
+validate = None
+boot_device = None
+supported = None
+console = None
+vendor_passthrus = vendor_passthru
+
+
+# map endpoint type defined in service catalog to CADF typeURI
+[service_endpoints]
+baremetal = service/compute/baremetal
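The audit map above follows standard INI semantics, so it can be read with any INI parser. A short sketch using Python's ``configparser`` (the sample content is abridged from the file above):

```python
import configparser

# Abridged copy of the [path_keywords] section shown above.
SAMPLE = """
[path_keywords]
nodes = node
drivers = driver
ports = port
power = None
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

keywords = dict(cfg["path_keywords"])
print(keywords["nodes"])   # node
print(keywords["power"])   # None -- the literal string, not Python's None
```

Note that `None` in the file is just a string value to the parser; it is the audit middleware that interprets it specially.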
@@ -1,5 +1,5 @@
+# Beginning with the Newton release, you may leave this file empty
+# to use default policy defined in code.
 {
-    "admin_api": "role:admin or role:administrator",
-    "show_password": "!",
-    "default": "rule:admin_api"
+
 }
@@ -0,0 +1,72 @@
+# Legacy rule for cloud admin access
+"admin_api": "role:admin or role:administrator"
+# Internal flag for public API routes
+"public_api": "is_public_api:True"
+# Show or mask passwords in API responses
+"show_password": "!"
+# May be used to restrict access to specific tenants
+"is_member": "tenant:demo or tenant:baremetal"
+# Read-only API access
+"is_observer": "rule:is_member and (role:observer or role:baremetal_observer)"
+# Full read/write API access
+"is_admin": "rule:admin_api or (rule:is_member and role:baremetal_admin)"
+# Retrieve Node records
+"baremetal:node:get": "rule:is_admin or rule:is_observer"
+# Retrieve Node boot device metadata
+"baremetal:node:get_boot_device": "rule:is_admin or rule:is_observer"
+# View Node power and provision state
+"baremetal:node:get_states": "rule:is_admin or rule:is_observer"
+# Create Node records
+"baremetal:node:create": "rule:is_admin"
+# Delete Node records
+"baremetal:node:delete": "rule:is_admin"
+# Update Node records
+"baremetal:node:update": "rule:is_admin"
+# Request active validation of Nodes
+"baremetal:node:validate": "rule:is_admin"
+# Set maintenance flag, taking a Node out of service
+"baremetal:node:set_maintenance": "rule:is_admin"
+# Clear maintenance flag, placing the Node into service again
+"baremetal:node:clear_maintenance": "rule:is_admin"
+# Change Node boot device
+"baremetal:node:set_boot_device": "rule:is_admin"
+# Change Node power status
+"baremetal:node:set_power_state": "rule:is_admin"
+# Change Node provision status
+"baremetal:node:set_provision_state": "rule:is_admin"
+# Change Node RAID status
+"baremetal:node:set_raid_state": "rule:is_admin"
+# Get Node console connection information
+"baremetal:node:get_console": "rule:is_admin"
+# Change Node console status
+"baremetal:node:set_console_state": "rule:is_admin"
+# Retrieve Port records
+"baremetal:port:get": "rule:is_admin or rule:is_observer"
+# Create Port records
+"baremetal:port:create": "rule:is_admin"
+# Delete Port records
+"baremetal:port:delete": "rule:is_admin"
+# Update Port records
+"baremetal:port:update": "rule:is_admin"
+# Retrieve Chassis records
+"baremetal:chassis:get": "rule:is_admin or rule:is_observer"
+# Create Chassis records
+"baremetal:chassis:create": "rule:is_admin"
+# Delete Chassis records
+"baremetal:chassis:delete": "rule:is_admin"
+# Update Chassis records
+"baremetal:chassis:update": "rule:is_admin"
+# View list of available drivers
+"baremetal:driver:get": "rule:is_admin or rule:is_observer"
+# View driver-specific properties
+"baremetal:driver:get_properties": "rule:is_admin or rule:is_observer"
+# View driver-specific RAID metadata
+"baremetal:driver:get_raid_logical_disk_properties": "rule:is_admin or rule:is_observer"
+# Access vendor-specific Node functions
+"baremetal:node:vendor_passthru": "rule:is_admin"
+# Access vendor-specific Driver functions
+"baremetal:driver:vendor_passthru": "rule:is_admin"
+# Send heartbeats from IPA ramdisk
+"baremetal:node:ipa_heartbeat": "rule:public_api"
+# Access IPA ramdisk functions
+"baremetal:driver:ipa_lookup": "rule:public_api"
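To illustrate how rule strings such as ``rule:is_admin or rule:is_observer`` compose simpler named rules, here is a deliberately simplified evaluator. Real deployments should rely on oslo.policy; this sketch only shows the idea and handles nothing beyond a flat ``or`` of ``rule:`` references:

```python
# Named rules mapped to predicates over a credentials dict (illustrative).
RULES = {
    "admin_api": lambda creds: "admin" in creds["roles"],
    "is_observer": lambda creds: "observer" in creds["roles"],
}


def check(expr, creds):
    """Evaluate an 'a or b' expression over named 'rule:x' references."""
    return any(
        RULES[ref.split(":", 1)[1]](creds)
        for ref in expr.split(" or ")
    )


print(check("rule:admin_api or rule:is_observer",
            {"roles": ["observer"]}))  # True: observer satisfies one branch
```

oslo.policy additionally supports `and`, parentheses, negation, and attribute checks like `tenant:demo`, none of which this toy evaluator attempts.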
@@ -1,57 +0,0 @@
-# Copyright 2013 Hewlett-Packard Development Company, L.P.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_config import cfg
-
-from ironic.common.i18n import _
-
-API_SERVICE_OPTS = [
-    cfg.StrOpt('host_ip',
-               default='0.0.0.0',
-               help=_('The IP address on which ironic-api listens.')),
-    cfg.PortOpt('port',
-                default=6385,
-                help=_('The TCP port on which ironic-api listens.')),
-    cfg.IntOpt('max_limit',
-               default=1000,
-               help=_('The maximum number of items returned in a single '
-                      'response from a collection resource.')),
-    cfg.StrOpt('public_endpoint',
-               help=_("Public URL to use when building the links to the API "
-                      "resources (for example, \"https://ironic.rocks:6384\")."
-                      " If None the links will be built using the request's "
-                      "host URL. If the API is operating behind a proxy, you "
-                      "will want to change this to represent the proxy's URL. "
-                      "Defaults to None.")),
-    cfg.IntOpt('api_workers',
-               help=_('Number of workers for OpenStack Ironic API service. '
-                      'The default is equal to the number of CPUs available '
-                      'if that can be determined, else a default worker '
-                      'count of 1 is returned.')),
-    cfg.BoolOpt('enable_ssl_api',
-                default=False,
-                help=_("Enable the integrated stand-alone API to service "
-                       "requests via HTTPS instead of HTTP. If there is a "
-                       "front-end service performing HTTPS offloading from "
-                       "the service, this option should be False; note, you "
-                       "will want to change public API endpoint to represent "
-                       "SSL termination URL with 'public_endpoint' option.")),
-]
-
-CONF = cfg.CONF
-opt_group = cfg.OptGroup(name='api',
-                         title='Options for the ironic-api service')
-CONF.register_group(opt_group)
-CONF.register_opts(API_SERVICE_OPTS, opt_group)
@@ -1,34 +0,0 @@
-# -*- encoding: utf-8 -*-
-#
-# Copyright © 2012 New Dream Network, LLC (DreamHost)
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Access Control Lists (ACL's) control access the API server."""
-
-from ironic.api.middleware import auth_token
-
-
-def install(app, conf, public_routes):
-    """Install ACL check on application.
-
-    :param app: A WSGI application.
-    :param conf: Settings. Dict'ified and passed to keystonemiddleware
-    :param public_routes: The list of the routes which will be allowed to
-        access without authentication.
-    :return: The same WSGI application with ACL installed.
-
-    """
-    return auth_token.AuthTokenMiddleware(app,
-                                          conf=dict(conf),
-                                          public_api_routes=public_routes)
@@ -15,38 +15,19 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import keystonemiddleware.audit as audit_middleware
+from keystonemiddleware.audit import PycadfAuditApiConfigError
 from oslo_config import cfg
 import oslo_middleware.cors as cors_middleware
 import pecan
 
-from ironic.api import acl
 from ironic.api import config
 from ironic.api.controllers.base import Version
 from ironic.api import hooks
 from ironic.api import middleware
-from ironic.common.i18n import _
-
-api_opts = [
-    cfg.StrOpt(
-        'auth_strategy',
-        default='keystone',
-        choices=['noauth', 'keystone'],
-        help=_('Authentication strategy used by ironic-api. "noauth" should '
-               'not be used in a production environment because all '
-               'authentication will be disabled.')),
-    cfg.BoolOpt('debug_tracebacks_in_api',
-                default=False,
-                help=_('Return server tracebacks in the API response for any '
-                       'error responses. WARNING: this is insecure '
-                       'and should not be used in a production environment.')),
-    cfg.BoolOpt('pecan_debug',
-                default=False,
-                help=_('Enable pecan debug mode. WARNING: this is insecure '
-                       'and should not be used in a production environment.')),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(api_opts)
+from ironic.api.middleware import auth_token
+from ironic.common import exception
+from ironic.conf import CONF
 
 
 def get_pecan_config():
@@ -68,9 +49,6 @@ def setup_app(pecan_config=None, extra_hooks=None):
     if not pecan_config:
         pecan_config = get_pecan_config()
 
-    if pecan_config.app.enable_acl:
-        app_hooks.append(hooks.TrustedCallHook())
-
     pecan.configuration.set_config(dict(pecan_config), overwrite=True)
 
     app = pecan.make_app(
@@ -82,8 +60,23 @@ def setup_app(pecan_config=None, extra_hooks=None):
         wrap_app=middleware.ParsableErrorMiddleware,
     )
 
-    if pecan_config.app.enable_acl:
-        app = acl.install(app, cfg.CONF, pecan_config.app.acl_public_routes)
+    if CONF.audit.enabled:
+        try:
+            app = audit_middleware.AuditMiddleware(
+                app,
+                audit_map_file=CONF.audit.audit_map_file,
+                ignore_req_list=CONF.audit.ignore_req_list
+            )
+        except (EnvironmentError, OSError, PycadfAuditApiConfigError) as e:
+            raise exception.InputFileError(
+                file_name=CONF.audit.audit_map_file,
+                reason=e
+            )
+
+    if CONF.auth_strategy == "keystone":
+        app = auth_token.AuthTokenMiddleware(
+            app, dict(cfg.CONF),
+            public_api_routes=pecan_config.app.acl_public_routes)
 
     # Create a CORS wrapper, and attach ironic-specific defaults that must be
     # included in all CORS responses.
@ -100,7 +93,6 @@ def setup_app(pecan_config=None, extra_hooks=None):
|
|||
class VersionSelectorApplication(object):
|
||||
def __init__(self):
|
||||
pc = get_pecan_config()
|
||||
pc.app.enable_acl = (CONF.auth_strategy == 'keystone')
|
||||
self.v1 = setup_app(pecan_config=pc)
|
||||
|
||||
def __call__(self, environ, start_response):
|
||||
|
|
|
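The `setup_app` hunks above wrap the pecan application in several WSGI middlewares: the audit middleware is applied first, then `AuthTokenMiddleware` around it, so authentication runs before auditing on each request. A minimal stdlib-only sketch of this layering (illustrative names, not ironic code):

```python
# Each wrapper is applied around the previous app, so the wrapper added
# last sees the request first. 'audit' and 'auth' here are stand-ins for
# the real middlewares.

def make_app():
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello']
    return app

def middleware(tag, inner):
    """Record the order in which wrappers see the request."""
    def wrapped(environ, start_response):
        environ.setdefault('trace', []).append(tag)
        return inner(environ, start_response)
    return wrapped

app = make_app()
app = middleware('audit', app)   # wrapped first -> runs closer to the app
app = middleware('auth', app)    # wrapped last  -> runs first

environ = {}
body = app(environ, lambda status, headers: None)
print(environ['trace'])  # ['auth', 'audit']
```

This mirrors why `auth_token` is applied after the audit middleware in `setup_app`: the outermost wrapper handles the request first.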
@@ -26,11 +26,13 @@ app = {
    'modules': ['ironic.api'],
    'static_root': '%(confdir)s/public',
    'debug': False,
    'enable_acl': True,
    'acl_public_routes': [
        '/',
        '/v1',
        # IPA ramdisk methods
        '/v1/lookup',
        '/v1/heartbeat/[a-z0-9\-]+',
        # Old IPA ramdisk methods - will be removed in the Ocata release
        '/v1/drivers/[a-z0-9_]*/vendor_passthru/lookup',
        '/v1/nodes/[a-z0-9\-]+/vendor_passthru/heartbeat',
    ],
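Entries in `acl_public_routes` such as `'/v1/heartbeat/[a-z0-9\-]+'` are regular expressions, not literal paths: requests whose path matches one of them bypass token authentication. A hedged sketch of that kind of check (this is not keystonemiddleware's actual implementation):

```python
import re

# Hypothetical public-route check: a path is public if it fully matches
# one of the configured route patterns.
PUBLIC_ROUTES = [
    '/',
    '/v1',
    '/v1/lookup',
    r'/v1/heartbeat/[a-z0-9\-]+',
]

def is_public(path):
    return any(re.fullmatch(pattern, path) for pattern in PUBLIC_ROUTES)

print(is_public('/v1/heartbeat/1be26c0b'))   # True: matches the regex route
print(is_public('/v1/nodes'))                # False: needs a token
```

Anchoring with `fullmatch` matters: a prefix match would accidentally make every sub-path of `/v1` public.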
@@ -29,6 +29,8 @@ from ironic.api.controllers.v1 import chassis
from ironic.api.controllers.v1 import driver
from ironic.api.controllers.v1 import node
from ironic.api.controllers.v1 import port
from ironic.api.controllers.v1 import ramdisk
from ironic.api.controllers.v1 import utils
from ironic.api.controllers.v1 import versions
from ironic.api import expose
from ironic.common.i18n import _

@@ -78,6 +80,12 @@ class V1(base.APIBase):
    drivers = [link.Link]
    """Links to the drivers resource"""

    lookup = [link.Link]
    """Links to the lookup resource"""

    heartbeat = [link.Link]
    """Links to the heartbeat resource"""

    @staticmethod
    def convert():
        v1 = V1()

@@ -120,6 +128,22 @@ class V1(base.APIBase):
                                          'drivers', '',
                                          bookmark=True)
                      ]
        if utils.allow_ramdisk_endpoints():
            v1.lookup = [link.Link.make_link('self', pecan.request.public_url,
                                             'lookup', ''),
                         link.Link.make_link('bookmark',
                                             pecan.request.public_url,
                                             'lookup', '',
                                             bookmark=True)
                         ]
            v1.heartbeat = [link.Link.make_link('self',
                                                pecan.request.public_url,
                                                'heartbeat', ''),
                            link.Link.make_link('bookmark',
                                                pecan.request.public_url,
                                                'heartbeat', '',
                                                bookmark=True)
                            ]
        return v1


@@ -130,6 +154,8 @@ class Controller(rest.RestController):
    ports = port.PortsController()
    chassis = chassis.ChassisController()
    drivers = driver.DriversController()
    lookup = ramdisk.LookupController()
    heartbeat = ramdisk.HeartbeatController()

    @expose.expose(V1)
    def get(self):

@@ -179,4 +205,4 @@ class Controller(rest.RestController):
        return super(Controller, self)._route(args)


__all__ = (Controller)
__all__ = ('Controller',)
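`V1.convert` only attaches the `lookup` and `heartbeat` links when `utils.allow_ramdisk_endpoints()` says the negotiated microversion is new enough. A stdlib-only sketch of version-gated resource links; the threshold of 1.22 is my assumption about when these endpoints appeared, hedged here rather than taken from the diff:

```python
# Illustrative only: resource links are advertised conditionally based on
# the requested API minor version. RAMDISK_MINOR is an assumed threshold.
RAMDISK_MINOR = 22

def version_links(minor):
    links = {'nodes': '/v1/nodes', 'drivers': '/v1/drivers'}
    if minor >= RAMDISK_MINOR:
        links['lookup'] = '/v1/lookup'
        links['heartbeat'] = '/v1/heartbeat'
    return links

print(sorted(version_links(21)))  # ['drivers', 'nodes']
print(sorted(version_links(22)))  # ['drivers', 'heartbeat', 'lookup', 'nodes']
```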
@@ -15,6 +15,7 @@

import datetime

from ironic_lib import metrics_utils
import pecan
from pecan import rest
from six.moves import http_client

@@ -30,8 +31,11 @@ from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import policy
from ironic import objects

METRICS = metrics_utils.get_metrics_logger(__name__)


_DEFAULT_RETURN_FIELDS = ('uuid', 'description')


@@ -190,6 +194,7 @@ class ChassisController(rest.RestController):
                                            sort_key=sort_key,
                                            sort_dir=sort_dir)

    @METRICS.timer('ChassisController.get_all')
    @expose.expose(ChassisCollection, types.uuid, int,
                   wtypes.text, wtypes.text, types.listtype)
    def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc',

@@ -198,17 +203,24 @@ class ChassisController(rest.RestController):

        :param marker: pagination marker for large data sets.
        :param limit: maximum number of resources to return in a single result.
                      This value cannot be larger than the value of max_limit
                      in the [api] section of the ironic configuration, or only
                      max_limit resources will be returned.
        :param sort_key: column to sort results by. Default: id.
        :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:chassis:get', cdict, cdict)

        api_utils.check_allow_specify_fields(fields)
        if fields is None:
            fields = _DEFAULT_RETURN_FIELDS
        return self._get_chassis_collection(marker, limit, sort_key, sort_dir,
                                            fields=fields)

    @METRICS.timer('ChassisController.detail')
    @expose.expose(ChassisCollection, types.uuid, int,
                   wtypes.text, wtypes.text)
    def detail(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):

@@ -216,9 +228,15 @@ class ChassisController(rest.RestController):

        :param marker: pagination marker for large data sets.
        :param limit: maximum number of resources to return in a single result.
                      This value cannot be larger than the value of max_limit
                      in the [api] section of the ironic configuration, or only
                      max_limit resources will be returned.
        :param sort_key: column to sort results by. Default: id.
        :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:chassis:get', cdict, cdict)

        # /detail should only work against collections
        parent = pecan.request.path.split('/')[:-1][-1]
        if parent != "chassis":

@@ -228,6 +246,7 @@ class ChassisController(rest.RestController):
        return self._get_chassis_collection(marker, limit, sort_key, sort_dir,
                                            resource_url)

    @METRICS.timer('ChassisController.get_one')
    @expose.expose(Chassis, types.uuid, types.listtype)
    def get_one(self, chassis_uuid, fields=None):
        """Retrieve information about the given chassis.

@@ -236,17 +255,24 @@ class ChassisController(rest.RestController):
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:chassis:get', cdict, cdict)

        api_utils.check_allow_specify_fields(fields)
        rpc_chassis = objects.Chassis.get_by_uuid(pecan.request.context,
                                                  chassis_uuid)
        return Chassis.convert_with_links(rpc_chassis, fields=fields)

    @METRICS.timer('ChassisController.post')
    @expose.expose(Chassis, body=Chassis, status_code=http_client.CREATED)
    def post(self, chassis):
        """Create a new chassis.

        :param chassis: a chassis within the request body.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:chassis:create', cdict, cdict)

        new_chassis = objects.Chassis(pecan.request.context,
                                      **chassis.as_dict())
        new_chassis.create()

@@ -254,6 +280,7 @@ class ChassisController(rest.RestController):
        pecan.response.location = link.build_url('chassis', new_chassis.uuid)
        return Chassis.convert_with_links(new_chassis)

    @METRICS.timer('ChassisController.patch')
    @wsme.validate(types.uuid, [ChassisPatchType])
    @expose.expose(Chassis, types.uuid, body=[ChassisPatchType])
    def patch(self, chassis_uuid, patch):

@@ -262,6 +289,9 @@ class ChassisController(rest.RestController):
        :param chassis_uuid: UUID of a chassis.
        :param patch: a json PATCH document to apply to this chassis.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:chassis:update', cdict, cdict)

        rpc_chassis = objects.Chassis.get_by_uuid(pecan.request.context,
                                                  chassis_uuid)
        try:

@@ -286,12 +316,16 @@ class ChassisController(rest.RestController):
        rpc_chassis.save()
        return Chassis.convert_with_links(rpc_chassis)

    @METRICS.timer('ChassisController.delete')
    @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT)
    def delete(self, chassis_uuid):
        """Delete a chassis.

        :param chassis_uuid: UUID of a chassis.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:chassis:delete', cdict, cdict)

        rpc_chassis = objects.Chassis.get_by_uuid(pecan.request.context,
                                                  chassis_uuid)
        rpc_chassis.destroy()
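The `get_all`/`detail` docstrings above describe marker-based pagination: the client passes the UUID of the last item it saw, and the server returns up to `limit` items strictly after it. A toy sketch of that contract (data and helper names are made up for illustration):

```python
# Marker pagination over an in-memory list: return at most `limit`
# items strictly after the marker UUID.
CHASSIS = [{'uuid': u, 'description': 'rack-%s' % u} for u in 'abcdef']

def get_page(marker=None, limit=3):
    start = 0
    if marker is not None:
        # skip past the marker item itself
        start = next(i for i, c in enumerate(CHASSIS)
                     if c['uuid'] == marker) + 1
    return CHASSIS[start:start + limit]

page1 = get_page()
page2 = get_page(marker=page1[-1]['uuid'])
print([c['uuid'] for c in page1])  # ['a', 'b', 'c']
print([c['uuid'] for c in page2])  # ['d', 'e', 'f']
```

The real API also clamps `limit` to the `max_limit` option from the `[api]` section, as the docstrings note.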
@@ -13,6 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.

from ironic_lib import metrics_utils
import pecan
from pecan import rest
from six.moves import http_client

@@ -25,8 +26,11 @@ from ironic.api.controllers.v1 import types
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.common import exception
from ironic.common import policy


METRICS = metrics_utils.get_metrics_logger(__name__)

# Property information for drivers:
#   key = driver name;
#   value = dictionary of properties of that driver:

@@ -139,6 +143,7 @@ class DriverPassthruController(rest.RestController):
        'methods': ['GET']
    }

    @METRICS.timer('DriverPassthruController.methods')
    @expose.expose(wtypes.text, wtypes.text)
    def methods(self, driver_name):
        """Retrieve information about vendor methods of the given driver.

@@ -149,6 +154,9 @@ class DriverPassthruController(rest.RestController):
        :raises: DriverNotFound if the driver name is invalid or the
                 driver cannot be loaded.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:driver:vendor_passthru', cdict, cdict)

        if driver_name not in _VENDOR_METHODS:
            topic = pecan.request.rpcapi.get_topic_for_driver(driver_name)
            ret = pecan.request.rpcapi.get_driver_vendor_passthru_methods(

@@ -157,6 +165,7 @@ class DriverPassthruController(rest.RestController):

        return _VENDOR_METHODS[driver_name]

    @METRICS.timer('DriverPassthruController._default')
    @expose.expose(wtypes.text, wtypes.text, wtypes.text,
                   body=wtypes.text)
    def _default(self, driver_name, method, data=None):

@@ -167,6 +176,12 @@ class DriverPassthruController(rest.RestController):
                       implementation.
        :param data: body of data to supply to the specified method.
        """
        cdict = pecan.request.context.to_dict()
        if method == "lookup":
            policy.authorize('baremetal:driver:ipa_lookup', cdict, cdict)
        else:
            policy.authorize('baremetal:driver:vendor_passthru', cdict, cdict)

        topic = pecan.request.rpcapi.get_topic_for_driver(driver_name)
        return api_utils.vendor_passthru(driver_name, method, topic, data=data,
                                         driver_passthru=True)

@@ -178,6 +193,7 @@ class DriverRaidController(rest.RestController):
        'logical_disk_properties': ['GET']
    }

    @METRICS.timer('DriverRaidController.logical_disk_properties')
    @expose.expose(types.jsontype, wtypes.text)
    def logical_disk_properties(self, driver_name):
        """Returns the logical disk properties for the driver.

@@ -192,6 +208,10 @@ class DriverRaidController(rest.RestController):
        :raises: DriverNotFound, if driver is not loaded on any of the
            conductors.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:driver:get_raid_logical_disk_properties',
                         cdict, cdict)

        if not api_utils.allow_raid_config():
            raise exception.NotAcceptable()


@@ -222,6 +242,7 @@ class DriversController(rest.RestController):
        'properties': ['GET'],
    }

    @METRICS.timer('DriversController.get_all')
    @expose.expose(DriverList)
    def get_all(self):
        """Retrieve a list of drivers."""

@@ -229,9 +250,13 @@ class DriversController(rest.RestController):
        # will break from a single-line doc string.
        # This is a result of a bug in sphinxcontrib-pecanwsme
        # https://github.com/dreamhost/sphinxcontrib-pecanwsme/issues/8
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:driver:get', cdict, cdict)

        driver_list = pecan.request.dbapi.get_active_driver_dict()
        return DriverList.convert_with_links(driver_list)

    @METRICS.timer('DriversController.get_one')
    @expose.expose(Driver, wtypes.text)
    def get_one(self, driver_name):
        """Retrieve a single driver."""

@@ -239,6 +264,8 @@ class DriversController(rest.RestController):
        # retrieving a list of drivers using the current sqlalchemy schema, but
        # this path must be exposed for Pecan to route any paths we might
        # choose to expose below it.
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:driver:get', cdict, cdict)

        driver_dict = pecan.request.dbapi.get_active_driver_dict()
        for name, hosts in driver_dict.items():

@@ -247,6 +274,7 @@ class DriversController(rest.RestController):

        raise exception.DriverNotFound(driver_name=driver_name)

    @METRICS.timer('DriversController.properties')
    @expose.expose(wtypes.text, wtypes.text)
    def properties(self, driver_name):
        """Retrieve property information of the given driver.

@@ -257,6 +285,9 @@ class DriversController(rest.RestController):
        :raises: DriverNotFound (HTTP 404) if the driver name is invalid or
                 the driver cannot be loaded.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:driver:get_properties', cdict, cdict)

        if driver_name not in _DRIVER_PROPERTIES:
            topic = pecan.request.rpcapi.get_topic_for_driver(driver_name)
            properties = pecan.request.rpcapi.get_driver_properties(
@@ -13,9 +13,9 @@
# License for the specific language governing permissions and limitations
# under the License.

import ast
import datetime

from ironic_lib import metrics_utils
import jsonschema
from oslo_config import cfg
from oslo_log import log

@@ -37,6 +37,7 @@ from ironic.api.controllers.v1 import versions
from ironic.api import expose
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import policy
from ironic.common import states as ir_states
from ironic.conductor import utils as conductor_utils
from ironic import objects

@@ -45,6 +46,7 @@ from ironic import objects
CONF = cfg.CONF
CONF.import_opt('heartbeat_timeout', 'ironic.conductor.manager',
                group='conductor')
CONF.import_opt('enabled_network_interfaces', 'ironic.common.driver_factory')

LOG = log.getLogger(__name__)
_CLEAN_STEPS_SCHEMA = {

@@ -78,6 +80,8 @@ _CLEAN_STEPS_SCHEMA = {
    }
}

METRICS = metrics_utils.get_metrics_logger(__name__)

# Vendor information for node's driver:
#   key = driver name;
#   value = dictionary of node vendor methods of that driver:

@@ -109,7 +113,12 @@ def get_nodes_controller_reserved_names():


def hide_fields_in_newer_versions(obj):
    # if requested version is < 1.3, hide driver_internal_info
    """This method hides fields that were added in newer API versions.

    Certain node fields were introduced at certain API versions.
    These fields are only made available when the request's API version
    matches or exceeds the versions when these fields were introduced.
    """
    if pecan.request.version.minor < versions.MINOR_3_DRIVER_INTERNAL_INFO:
        obj.driver_internal_info = wsme.Unset

@@ -128,6 +137,12 @@ def hide_fields_in_newer_versions(obj):
        obj.raid_config = wsme.Unset
        obj.target_raid_config = wsme.Unset

    if pecan.request.version.minor < versions.MINOR_20_NETWORK_INTERFACE:
        obj.network_interface = wsme.Unset

    if not api_utils.allow_resource_class():
        obj.resource_class = wsme.Unset
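`hide_fields_in_newer_versions` unsets response fields whenever the requested minor version predates their introduction, so older clients never see attributes they would not understand. A stdlib-only sketch of the same idea, using a plain dict and a sentinel in place of `wsme.Unset` (the thresholds mirror the 1.3 and 1.20 constants visible in the diff):

```python
# UNSET stands in for wsme.Unset; fields are dropped when the requested
# minor version predates their introduction.
UNSET = object()
MINOR_3_DRIVER_INTERNAL_INFO = 3
MINOR_20_NETWORK_INTERFACE = 20

def hide_fields(node, requested_minor):
    if requested_minor < MINOR_3_DRIVER_INTERNAL_INFO:
        node['driver_internal_info'] = UNSET
    if requested_minor < MINOR_20_NETWORK_INTERFACE:
        node['network_interface'] = UNSET
    # return only the fields that remain visible
    return {k: v for k, v in node.items() if v is not UNSET}

node = {'uuid': 'n1', 'driver_internal_info': {}, 'network_interface': 'flat'}
print(sorted(hide_fields(dict(node), 2)))   # ['uuid']
print(sorted(hide_fields(dict(node), 20)))  # all three fields visible
```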
def update_state_in_older_versions(obj):
    """Change provision state names for API backwards compatability.

@@ -167,6 +182,7 @@ class BootDeviceController(rest.RestController):
        return pecan.request.rpcapi.get_boot_device(pecan.request.context,
                                                    rpc_node.uuid, topic)

    @METRICS.timer('BootDeviceController.put')
    @expose.expose(None, types.uuid_or_name, wtypes.text, types.boolean,
                   status_code=http_client.NO_CONTENT)
    def put(self, node_ident, boot_device, persistent=False):

@@ -182,6 +198,9 @@ class BootDeviceController(rest.RestController):
                           Default: False.

        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:set_boot_device', cdict, cdict)

        rpc_node = api_utils.get_rpc_node(node_ident)
        topic = pecan.request.rpcapi.get_topic_for(rpc_node)
        pecan.request.rpcapi.set_boot_device(pecan.request.context,

@@ -190,6 +209,7 @@ class BootDeviceController(rest.RestController):
                                             persistent=persistent,
                                             topic=topic)

    @METRICS.timer('BootDeviceController.get')
    @expose.expose(wtypes.text, types.uuid_or_name)
    def get(self, node_ident):
        """Get the current boot device for a node.

@@ -203,8 +223,12 @@ class BootDeviceController(rest.RestController):
            future boots or not, None if it is unknown.

        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:get_boot_device', cdict, cdict)

        return self._get_boot_device(node_ident)

    @METRICS.timer('BootDeviceController.supported')
    @expose.expose(wtypes.text, types.uuid_or_name)
    def supported(self, node_ident):
        """Get a list of the supported boot devices.

@@ -214,6 +238,9 @@ class BootDeviceController(rest.RestController):
            devices.

        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:get_boot_device', cdict, cdict)

        boot_devices = self._get_boot_device(node_ident, supported=True)
        return {'supported_boot_devices': boot_devices}

@@ -242,12 +269,16 @@ class ConsoleInfo(base.APIBase):

class NodeConsoleController(rest.RestController):

    @METRICS.timer('NodeConsoleController.get')
    @expose.expose(ConsoleInfo, types.uuid_or_name)
    def get(self, node_ident):
        """Get connection information about the console.

        :param node_ident: UUID or logical name of a node.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:get_console', cdict, cdict)

        rpc_node = api_utils.get_rpc_node(node_ident)
        topic = pecan.request.rpcapi.get_topic_for(rpc_node)
        try:

@@ -260,6 +291,7 @@ class NodeConsoleController(rest.RestController):

        return ConsoleInfo(console_enabled=console_state, console_info=console)

    @METRICS.timer('NodeConsoleController.put')
    @expose.expose(None, types.uuid_or_name, types.boolean,
                   status_code=http_client.ACCEPTED)
    def put(self, node_ident, enabled):

@@ -269,6 +301,9 @@ class NodeConsoleController(rest.RestController):
        :param enabled: Boolean value; whether to enable or disable the
                        console.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:set_console_state', cdict, cdict)

        rpc_node = api_utils.get_rpc_node(node_ident)
        topic = pecan.request.rpcapi.get_topic_for(rpc_node)
        pecan.request.rpcapi.set_console_mode(pecan.request.context,

@@ -350,18 +385,23 @@ class NodeStatesController(rest.RestController):
    console = NodeConsoleController()
    """Expose console as a sub-element of states"""

    @METRICS.timer('NodeStatesController.get')
    @expose.expose(NodeStates, types.uuid_or_name)
    def get(self, node_ident):
        """List the states of the node.

        :param node_ident: the UUID or logical_name of a node.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:get_states', cdict, cdict)

        # NOTE(lucasagomes): All these state values come from the
        # DB. Ironic counts with a periodic task that verify the current
        # power states of the nodes and update the DB accordingly.
        rpc_node = api_utils.get_rpc_node(node_ident)
        return NodeStates.convert(rpc_node)

    @METRICS.timer('NodeStatesController.raid')
    @expose.expose(None, types.uuid_or_name, body=types.jsontype)
    def raid(self, node_ident, target_raid_config):
        """Set the target raid config of the node.

@@ -376,6 +416,9 @@ class NodeStatesController(rest.RestController):
        :raises: NotAcceptable, if requested version of the API is less than
            1.12.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:set_raid_state', cdict, cdict)

        if not api_utils.allow_raid_config():
            raise exception.NotAcceptable()
        rpc_node = api_utils.get_rpc_node(node_ident)

@@ -388,8 +431,9 @@ class NodeStatesController(rest.RestController):
            # Change error code as 404 seems appropriate because RAID is a
            # standard interface and all drivers might not have it.
            e.code = http_client.NOT_FOUND
            raise e
            raise

    @METRICS.timer('NodeStatesController.power')
    @expose.expose(None, types.uuid_or_name, wtypes.text,
                   status_code=http_client.ACCEPTED)
    def power(self, node_ident, target):

@@ -403,6 +447,9 @@ class NodeStatesController(rest.RestController):
            state is not valid or if the node is in CLEANING state.

        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:set_power_state', cdict, cdict)

        # TODO(lucasagomes): Test if it's able to transition to the
        #                    target state from the current one
        rpc_node = api_utils.get_rpc_node(node_ident)

@@ -429,6 +476,7 @@ class NodeStatesController(rest.RestController):
        url_args = '/'.join([node_ident, 'states'])
        pecan.response.location = link.build_url('nodes', url_args)

    @METRICS.timer('NodeStatesController.provision')
    @expose.expose(None, types.uuid_or_name, wtypes.text,
                   wtypes.text, types.jsontype,
                   status_code=http_client.ACCEPTED)

@@ -479,6 +527,9 @@ class NodeStatesController(rest.RestController):
        :raises: NotAcceptable (HTTP 406) if the API version specified does
            not allow the requested state transition.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:set_provision_state', cdict, cdict)

        api_utils.check_allow_management_verbs(target)
        rpc_node = api_utils.get_rpc_node(node_ident)
        topic = pecan.request.rpcapi.get_topic_for(rpc_node)

@@ -599,7 +650,7 @@ class Node(base.APIBase):
                # Change error code because 404 (NotFound) is inappropriate
                # response for a POST request to create a Port
                e.code = http_client.BAD_REQUEST
                raise e
                raise
        elif value == wtypes.Unset:
            self._chassis_uuid = wtypes.Unset


@@ -678,6 +729,11 @@ class Node(base.APIBase):
    extra = {wtypes.text: types.jsontype}
    """This node's meta data"""

    resource_class = wsme.wsattr(wtypes.StringType(max_length=80))
    """The resource class for the node, useful for classifying or grouping
       nodes. Used, for example, to classify nodes in Nova's placement
       engine."""

    # NOTE: properties should use a class to enforce required properties
    #       current list: arch, cpus, disk, ram, image
    properties = {wtypes.text: types.jsontype}

@@ -696,6 +752,9 @@ class Node(base.APIBase):
    states = wsme.wsattr([link.Link], readonly=True)
    """Links to endpoint for retrieving and setting node states"""

    network_interface = wsme.wsattr(wtypes.text)
    """The network interface to be used for this node"""

    # NOTE(deva): "conductor_affinity" shouldn't be presented on the
    #             API because it's an internal value. Don't add it here.


@@ -742,9 +801,8 @@ class Node(base.APIBase):
                          bookmark=True)]

        if not show_password and node.driver_info != wtypes.Unset:
            node.driver_info = ast.literal_eval(strutils.mask_password(
                node.driver_info,
                "******"))
            node.driver_info = strutils.mask_dict_password(node.driver_info,
                                                           "******")

        # NOTE(lucasagomes): The numeric ID should not be exposed to
        #                    the user, it's internal only.
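The last hunk above swaps `ast.literal_eval(strutils.mask_password(...))` for `strutils.mask_dict_password`, which masks secret values in the dict directly instead of round-tripping through a string. A stdlib-only sketch of what dict-based masking accomplishes (this is an illustration, not oslo.utils' implementation; the key list is invented):

```python
# Mask values of known-sensitive keys in a flat dict, leaving the rest
# untouched. Real mask_dict_password also recurses into nested dicts.
SENSITIVE_KEYS = ('password', 'ipmi_password', 'auth_token')

def mask_dict_password(d, secret='***'):
    return {k: (secret if k in SENSITIVE_KEYS else v) for k, v in d.items()}

driver_info = {'ipmi_address': '10.0.0.5', 'ipmi_password': 'hunter2'}
print(mask_dict_password(driver_info, '******'))
```

Working on the dict avoids the fragility of the old approach, where the masked string had to parse back cleanly through `ast.literal_eval`.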
@@ -794,7 +852,8 @@ class Node(base.APIBase):
                     maintenance=False, maintenance_reason=None,
                     inspection_finished_at=None, inspection_started_at=time,
                     console_enabled=False, clean_step={},
                     raid_config=None, target_raid_config=None)
                     raid_config=None, target_raid_config=None,
                     network_interface='flat', resource_class='baremetal-gold')
        # NOTE(matty_dubs): The chassis_uuid getter() is based on the
        # _chassis_uuid variable:
        sample._chassis_uuid = 'edcad704-b2da-41d5-96d9-afd580ecfa12'

@@ -860,6 +919,7 @@ class NodeVendorPassthruController(rest.RestController):
        'methods': ['GET']
    }

    @METRICS.timer('NodeVendorPassthruController.methods')
    @expose.expose(wtypes.text, types.uuid_or_name)
    def methods(self, node_ident):
        """Retrieve information about vendor methods of the given node.

@@ -869,6 +929,9 @@ class NodeVendorPassthruController(rest.RestController):
            entries.
        :raises: NodeNotFound if the node is not found.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:vendor_passthru', cdict, cdict)

        # Raise an exception if node is not found
        rpc_node = api_utils.get_rpc_node(node_ident)

@@ -880,6 +943,7 @@ class NodeVendorPassthruController(rest.RestController):

        return _VENDOR_METHODS[rpc_node.driver]

    @METRICS.timer('NodeVendorPassthruController._default')
    @expose.expose(wtypes.text, types.uuid_or_name, wtypes.text,
                   body=wtypes.text)
    def _default(self, node_ident, method, data=None):

@@ -889,6 +953,12 @@ class NodeVendorPassthruController(rest.RestController):
        :param method: name of the method in vendor driver.
        :param data: body of data to supply to the specified method.
        """
        cdict = pecan.request.context.to_dict()
        if method == 'heartbeat':
            policy.authorize('baremetal:node:ipa_heartbeat', cdict, cdict)
        else:
            policy.authorize('baremetal:node:vendor_passthru', cdict, cdict)

        # Raise an exception if node is not found
        rpc_node = api_utils.get_rpc_node(node_ident)
        topic = pecan.request.rpcapi.get_topic_for(rpc_node)

@@ -907,10 +977,11 @@ class NodeMaintenanceController(rest.RestController):
            topic = pecan.request.rpcapi.get_topic_for(rpc_node)
        except exception.NoValidHost as e:
            e.code = http_client.BAD_REQUEST
            raise e
            raise
        pecan.request.rpcapi.update_node(pecan.request.context,
                                         rpc_node, topic=topic)

    @METRICS.timer('NodeMaintenanceController.put')
    @expose.expose(None, types.uuid_or_name, wtypes.text,
                   status_code=http_client.ACCEPTED)
    def put(self, node_ident, reason=None):

@@ -920,8 +991,12 @@ class NodeMaintenanceController(rest.RestController):
        :param reason: Optional, the reason why it's in maintenance.

        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:set_maintenance', cdict, cdict)

        self._set_maintenance(node_ident, True, reason=reason)

    @METRICS.timer('NodeMaintenanceController.delete')
    @expose.expose(None, types.uuid_or_name, status_code=http_client.ACCEPTED)
    def delete(self, node_ident):
        """Remove the node from maintenance mode.

@@ -929,6 +1004,9 @@ class NodeMaintenanceController(rest.RestController):
        :param node_ident: the UUID or logical name of a node.

        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:clear_maintenance', cdict, cdict)

        self._set_maintenance(node_ident, False)


@@ -977,6 +1055,7 @@ class NodesController(rest.RestController):
    def _get_nodes_collection(self, chassis_uuid, instance_uuid, associated,
                              maintenance, provision_state, marker, limit,
                              sort_key, sort_dir, driver=None,
                              resource_class=None,
                              resource_url=None, fields=None):
        if self.from_chassis and not chassis_uuid:
            raise exception.MissingParameterValue(

@@ -1009,6 +1088,8 @@ class NodesController(rest.RestController):
            filters['provision_state'] = provision_state
        if driver:
            filters['driver'] = driver
        if resource_class is not None:
            filters['resource_class'] = resource_class

        nodes = objects.Node.list(pecan.request.context, limit, marker_obj,
                                  sort_key=sort_key, sort_dir=sort_dir,

@@ -1096,13 +1177,14 @@ class NodesController(rest.RestController):
                "enabled. Please stop the console first.") % node_ident,
                status_code=http_client.CONFLICT)

    @METRICS.timer('NodesController.get_all')
    @expose.expose(NodeCollection, types.uuid, types.uuid, types.boolean,
                   types.boolean, wtypes.text, types.uuid, int, wtypes.text,
                   wtypes.text, wtypes.text, types.listtype)
                   wtypes.text, wtypes.text, types.listtype, wtypes.text)
    def get_all(self, chassis_uuid=None, instance_uuid=None, associated=None,
                maintenance=None, provision_state=None, marker=None,
                limit=None, sort_key='id', sort_dir='asc', driver=None,
                fields=None):
                fields=None, resource_class=None):
        """Retrieve a list of nodes.

        :param chassis_uuid: Optional UUID of a chassis, to get only nodes for

@@ -1119,30 +1201,44 @@ class NodesController(rest.RestController):
            that provision state.
        :param marker: pagination marker for large data sets.
        :param limit: maximum number of resources to return in a single result.
                      This value cannot be larger than the value of max_limit
                      in the [api] section of the ironic configuration, or only
                      max_limit resources will be returned.
        :param sort_key: column to sort results by. Default: id.
        :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
        :param driver: Optional string value to get only nodes using that
                       driver.
        :param resource_class: Optional string value to get only nodes with
                               that resource_class.
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:get', cdict, cdict)

        api_utils.check_allow_specify_fields(fields)
        api_utils.check_allowed_fields(fields)
        api_utils.check_for_invalid_state_and_allow_filter(provision_state)
        api_utils.check_allow_specify_driver(driver)
        api_utils.check_allow_specify_resource_class(resource_class)
        if fields is None:
            fields = _DEFAULT_RETURN_FIELDS
        return self._get_nodes_collection(chassis_uuid, instance_uuid,
                                          associated, maintenance,
                                          provision_state, marker,
                                          limit, sort_key, sort_dir,
                                          driver, fields=fields)
                                          driver=driver,
                                          resource_class=resource_class,
                                          fields=fields)

    @METRICS.timer('NodesController.detail')
    @expose.expose(NodeCollection, types.uuid, types.uuid, types.boolean,
                   types.boolean, wtypes.text, types.uuid, int, wtypes.text,
                   wtypes.text, wtypes.text)
                   wtypes.text, wtypes.text, wtypes.text)
    def detail(self, chassis_uuid=None, instance_uuid=None, associated=None,
               maintenance=None, provision_state=None, marker=None,
               limit=None, sort_key='id', sort_dir='asc', driver=None):
               limit=None, sort_key='id', sort_dir='asc', driver=None,
               resource_class=None):
        """Retrieve a list of nodes with detail.

        :param chassis_uuid: Optional UUID of a chassis, to get only nodes for

@@ -1159,13 +1255,22 @@ class NodesController(rest.RestController):
            that provision state.
|
||||
:param marker: pagination marker for large data sets.
|
||||
:param limit: maximum number of resources to return in a single result.
|
||||
This value cannot be larger than the value of max_limit
|
||||
in the [api] section of the ironic configuration, or only
|
||||
max_limit resources will be returned.
|
||||
:param sort_key: column to sort results by. Default: id.
|
||||
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
|
||||
:param driver: Optional string value to get only nodes using that
|
||||
driver.
|
||||
:param resource_class: Optional string value to get only nodes with
|
||||
that resource_class.
|
||||
"""
|
||||
cdict = pecan.request.context.to_dict()
|
||||
policy.authorize('baremetal:node:get', cdict, cdict)
|
||||
|
||||
api_utils.check_for_invalid_state_and_allow_filter(provision_state)
|
||||
api_utils.check_allow_specify_driver(driver)
|
||||
api_utils.check_allow_specify_resource_class(resource_class)
|
||||
# /detail should only work against collections
|
||||
parent = pecan.request.path.split('/')[:-1][-1]
|
||||
if parent != "nodes":
|
||||
|
@ -1176,8 +1281,11 @@ class NodesController(rest.RestController):
|
|||
associated, maintenance,
|
||||
provision_state, marker,
|
||||
limit, sort_key, sort_dir,
|
||||
driver, resource_url)
|
||||
driver=driver,
|
||||
resource_class=resource_class,
|
||||
resource_url=resource_url)
|
||||
|
||||
@METRICS.timer('NodesController.validate')
|
||||
@expose.expose(wtypes.text, types.uuid_or_name, types.uuid)
|
||||
def validate(self, node=None, node_uuid=None):
|
||||
"""Validate the driver interfaces, using the node's UUID or name.
|
||||
|
@ -1188,6 +1296,9 @@ class NodesController(rest.RestController):
|
|||
:param node: UUID or name of a node.
|
||||
:param node_uuid: UUID of a node.
|
||||
"""
|
||||
cdict = pecan.request.context.to_dict()
|
||||
policy.authorize('baremetal:node:validate', cdict, cdict)
|
||||
|
||||
if node is not None:
|
||||
# We're invoking this interface using positional notation, or
|
||||
# explicitly using 'node'. Try and determine which one.
|
||||
|
@ -1201,6 +1312,7 @@ class NodesController(rest.RestController):
|
|||
return pecan.request.rpcapi.validate_driver_interfaces(
|
||||
pecan.request.context, rpc_node.uuid, topic)
|
||||
|
||||
@METRICS.timer('NodesController.get_one')
|
||||
@expose.expose(Node, types.uuid_or_name, types.listtype)
|
||||
def get_one(self, node_ident, fields=None):
|
||||
"""Retrieve information about the given node.
|
||||
|
@ -1209,23 +1321,55 @@ class NodesController(rest.RestController):
|
|||
:param fields: Optional, a list with a specified set of fields
|
||||
of the resource to be returned.
|
||||
"""
|
||||
cdict = pecan.request.context.to_dict()
|
||||
policy.authorize('baremetal:node:get', cdict, cdict)
|
||||
|
||||
if self.from_chassis:
|
||||
raise exception.OperationNotPermitted()
|
||||
|
||||
api_utils.check_allow_specify_fields(fields)
|
||||
api_utils.check_allowed_fields(fields)
|
||||
|
||||
rpc_node = api_utils.get_rpc_node(node_ident)
|
||||
return Node.convert_with_links(rpc_node, fields=fields)
|
||||
|
||||
@METRICS.timer('NodesController.post')
|
||||
@expose.expose(Node, body=Node, status_code=http_client.CREATED)
|
||||
def post(self, node):
|
||||
"""Create a new node.
|
||||
|
||||
:param node: a node within the request body.
|
||||
"""
|
||||
cdict = pecan.request.context.to_dict()
|
||||
policy.authorize('baremetal:node:create', cdict, cdict)
|
||||
|
||||
if self.from_chassis:
|
||||
raise exception.OperationNotPermitted()
|
||||
|
||||
if (not api_utils.allow_resource_class() and
|
||||
node.resource_class is not wtypes.Unset):
|
||||
raise exception.NotAcceptable()
|
||||
|
||||
n_interface = node.network_interface
|
||||
if (not api_utils.allow_network_interface() and
|
||||
n_interface is not wtypes.Unset):
|
||||
raise exception.NotAcceptable()
|
||||
|
||||
# NOTE(vsaienko) The validation is performed on API side,
|
||||
# all conductors and api should have the same list of
|
||||
# enabled_network_interfaces.
|
||||
# TODO(vsaienko) remove it once driver-composition-reform
|
||||
# is implemented.
|
||||
if (n_interface is not wtypes.Unset and
|
||||
not api_utils.is_valid_network_interface(n_interface)):
|
||||
error_msg = _("Cannot create node with the invalid network "
|
||||
"interface '%(n_interface)s'. Enabled network "
|
||||
"interfaces are: %(enabled_int)s")
|
||||
raise wsme.exc.ClientSideError(
|
||||
error_msg % {'n_interface': n_interface,
|
||||
'enabled_int': CONF.enabled_network_interfaces},
|
||||
status_code=http_client.BAD_REQUEST)
|
||||
|
||||
# NOTE(deva): get_topic_for checks if node.driver is in the hash ring
|
||||
# and raises NoValidHost if it is not.
|
||||
# We need to ensure that node has a UUID before it can
|
||||
|
@ -1240,7 +1384,7 @@ class NodesController(rest.RestController):
|
|||
# list of available drivers and shouldn't request
|
||||
# one that doesn't exist.
|
||||
e.code = http_client.BAD_REQUEST
|
||||
raise e
|
||||
raise
|
||||
|
||||
if node.name != wtypes.Unset and node.name is not None:
|
||||
error_msg = _("Cannot create node with invalid name '%(name)s'")
|
||||
|
@ -1254,6 +1398,7 @@ class NodesController(rest.RestController):
|
|||
pecan.response.location = link.build_url('nodes', new_node.uuid)
|
||||
return Node.convert_with_links(new_node)
|
||||
|
||||
@METRICS.timer('NodesController.patch')
|
||||
@wsme.validate(types.uuid, [NodePatchType])
|
||||
@expose.expose(Node, types.uuid_or_name, body=[NodePatchType])
|
||||
def patch(self, node_ident, patch):
|
||||
|
@ -1262,9 +1407,31 @@ class NodesController(rest.RestController):
|
|||
:param node_ident: UUID or logical name of a node.
|
||||
:param patch: a json PATCH document to apply to this node.
|
||||
"""
|
||||
cdict = pecan.request.context.to_dict()
|
||||
policy.authorize('baremetal:node:update', cdict, cdict)
|
||||
|
||||
if self.from_chassis:
|
||||
raise exception.OperationNotPermitted()
|
||||
|
||||
resource_class = api_utils.get_patch_values(patch, '/resource_class')
|
||||
if resource_class and not api_utils.allow_resource_class():
|
||||
raise exception.NotAcceptable()
|
||||
|
||||
n_interfaces = api_utils.get_patch_values(patch, '/network_interface')
|
||||
if n_interfaces and not api_utils.allow_network_interface():
|
||||
raise exception.NotAcceptable()
|
||||
|
||||
for n_interface in n_interfaces:
|
||||
if (n_interface is not None and
|
||||
not api_utils.is_valid_network_interface(n_interface)):
|
||||
error_msg = _("Node %(node)s: Cannot change "
|
||||
"network_interface to invalid value: "
|
||||
"%(n_interface)s")
|
||||
raise wsme.exc.ClientSideError(
|
||||
error_msg % {'node': node_ident,
|
||||
'n_interface': n_interface},
|
||||
status_code=http_client.BAD_REQUEST)
|
||||
|
||||
rpc_node = api_utils.get_rpc_node(node_ident)
|
||||
|
||||
remove_inst_uuid_patch = [{'op': 'remove', 'path': '/instance_uuid'}]
|
||||
|
@ -1309,13 +1476,14 @@ class NodesController(rest.RestController):
|
|||
# list of available drivers and shouldn't request
|
||||
# one that doesn't exist.
|
||||
e.code = http_client.BAD_REQUEST
|
||||
raise e
|
||||
raise
|
||||
self._check_driver_changed_and_console_enabled(rpc_node, node_ident)
|
||||
new_node = pecan.request.rpcapi.update_node(
|
||||
pecan.request.context, rpc_node, topic)
|
||||
|
||||
return Node.convert_with_links(new_node)
|
||||
|
||||
@METRICS.timer('NodesController.delete')
|
||||
@expose.expose(None, types.uuid_or_name,
|
||||
status_code=http_client.NO_CONTENT)
|
||||
def delete(self, node_ident):
|
||||
|
@ -1323,6 +1491,9 @@ class NodesController(rest.RestController):
|
|||
|
||||
:param node_ident: UUID or logical name of a node.
|
||||
"""
|
||||
cdict = pecan.request.context.to_dict()
|
||||
policy.authorize('baremetal:node:delete', cdict, cdict)
|
||||
|
||||
if self.from_chassis:
|
||||
raise exception.OperationNotPermitted()
|
||||
|
||||
|
@ -1332,7 +1503,7 @@ class NodesController(rest.RestController):
|
|||
topic = pecan.request.rpcapi.get_topic_for(rpc_node)
|
||||
except exception.NoValidHost as e:
|
||||
e.code = http_client.BAD_REQUEST
|
||||
raise e
|
||||
raise
|
||||
|
||||
pecan.request.rpcapi.destroy_node(pecan.request.context,
|
||||
rpc_node.uuid, topic)
|
||||
|
|
|
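The hunks above gate the new `resource_class` filter behind API microversion 1.21. The gating pattern can be sketched standalone as follows; everything below except the version number is an illustrative stand-in (the real code lives in `ironic.api.controllers.v1.utils` and raises `exception.NotAcceptable` based on `pecan.request.version`):

```python
# Minimal sketch of the microversion gate used for the resource_class
# filter above. MINOR_21_RESOURCE_CLASS mirrors the patch; NotAcceptable
# here is a stand-in for ironic.common.exception.NotAcceptable.
MINOR_21_RESOURCE_CLASS = 21


class NotAcceptable(Exception):
    """Stand-in for the HTTP 406 error the API returns."""


def check_allow_specify_resource_class(resource_class, requested_minor):
    # Filtering by resource_class is only valid from API 1.21 onwards;
    # requests that do not use the filter are always accepted.
    if (resource_class is not None and
            requested_minor < MINOR_21_RESOURCE_CLASS):
        raise NotAcceptable()
```

A client pinned to 1.20 that passes `resource_class` gets the error; the same request at 1.21 succeeds.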
@@ -15,6 +15,7 @@
 import datetime

 from ironic_lib import metrics_utils
 from oslo_utils import uuidutils
 import pecan
 from pecan import rest

@@ -30,12 +31,26 @@ from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import policy
from ironic import objects

METRICS = metrics_utils.get_metrics_logger(__name__)


_DEFAULT_RETURN_FIELDS = ('uuid', 'address')


def hide_fields_in_newer_versions(obj):
    # if requested version is < 1.18, hide internal_info field
    if not api_utils.allow_port_internal_info():
        obj.internal_info = wsme.Unset
    # if requested version is < 1.19, hide local_link_connection and
    # pxe_enabled fields
    if not api_utils.allow_port_advanced_net_fields():
        obj.pxe_enabled = wsme.Unset
        obj.local_link_connection = wsme.Unset


class Port(base.APIBase):
    """API representation of a port.

@@ -64,7 +79,7 @@ class Port(base.APIBase):
                # Change error code because 404 (NotFound) is inappropriate
                # response for a POST request to create a Port
                e.code = http_client.BAD_REQUEST  # BadRequest
-               raise e
+               raise
        elif value == wtypes.Unset:
            self._node_uuid = wtypes.Unset

@@ -77,10 +92,19 @@ class Port(base.APIBase):
    extra = {wtypes.text: types.jsontype}
    """This port's meta data"""

    internal_info = wsme.wsattr({wtypes.text: types.jsontype}, readonly=True)
    """This port's internal information maintained by ironic"""

    node_uuid = wsme.wsproperty(types.uuid, _get_node_uuid, _set_node_uuid,
                                mandatory=True)
    """The UUID of the node this port belongs to"""

    pxe_enabled = types.boolean
    """Indicates whether pxe is enabled or disabled on the node."""

    local_link_connection = types.locallinkconnectiontype
    """The port binding profile for the port"""

    links = wsme.wsattr([link.Link], readonly=True)
    """A list containing a self link and associated port links"""

@@ -130,6 +154,8 @@ class Port(base.APIBase):
        if fields is not None:
            api_utils.check_for_invalid_fields(fields, port.as_dict())

        hide_fields_in_newer_versions(port)

        return cls._convert_with_links(port, pecan.request.public_url,
                                       fields=fields)

@@ -138,8 +164,13 @@ class Port(base.APIBase):
        sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c',
                     address='fe:54:00:77:07:d9',
                     extra={'foo': 'bar'},
                     internal_info={},
                     created_at=datetime.datetime.utcnow(),
-                    updated_at=datetime.datetime.utcnow())
+                    updated_at=datetime.datetime.utcnow(),
+                    pxe_enabled=True,
+                    local_link_connection={
+                        'switch_info': 'host', 'port_id': 'Gig0/1',
+                        'switch_id': 'aa:bb:cc:dd:ee:ff'})
        # NOTE(lucasagomes): node_uuid getter() method look at the
        # _node_uuid variable
        sample._node_uuid = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae'

@@ -151,6 +182,11 @@ class Port(base.APIBase):
class PortPatchType(types.JsonPatchType):
    _api_base = Port

    @staticmethod
    def internal_attrs():
        defaults = types.JsonPatchType.internal_attrs()
        return defaults + ['/internal_info']


class PortCollection(collection.Collection):
    """API representation of a collection of ports."""

@@ -187,7 +223,9 @@ class PortsController(rest.RestController):
        'detail': ['GET'],
    }

-   invalid_sort_key_list = ['extra']
+   invalid_sort_key_list = ['extra', 'internal_info', 'local_link_connection']

    advanced_net_fields = ['pxe_enabled', 'local_link_connection']

    def _get_ports_collection(self, node_ident, address, marker, limit,
                              sort_key, sort_dir, resource_url=None,

@@ -246,6 +284,7 @@ class PortsController(rest.RestController):
        except exception.PortNotFound:
            return []

    @METRICS.timer('PortsController.get_all')
    @expose.expose(PortCollection, types.uuid_or_name, types.uuid,
                   types.macaddress, types.uuid, int, wtypes.text,
                   wtypes.text, types.listtype)

@@ -264,12 +303,23 @@ class PortsController(rest.RestController):
            this MAC address.
        :param marker: pagination marker for large data sets.
        :param limit: maximum number of resources to return in a single result.
                      This value cannot be larger than the value of max_limit
                      in the [api] section of the ironic configuration, or only
                      max_limit resources will be returned.
        :param sort_key: column to sort results by. Default: id.
        :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        :raises: NotAcceptable
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:port:get', cdict, cdict)

        api_utils.check_allow_specify_fields(fields)
        if (fields and not api_utils.allow_port_advanced_net_fields() and
                set(fields).intersection(self.advanced_net_fields)):
            raise exception.NotAcceptable()

        if fields is None:
            fields = _DEFAULT_RETURN_FIELDS

@@ -285,6 +335,7 @@ class PortsController(rest.RestController):
                                          limit, sort_key, sort_dir,
                                          fields=fields)

    @METRICS.timer('PortsController.detail')
    @expose.expose(PortCollection, types.uuid_or_name, types.uuid,
                   types.macaddress, types.uuid, int, wtypes.text,
                   wtypes.text)

@@ -303,9 +354,16 @@ class PortsController(rest.RestController):
            this MAC address.
        :param marker: pagination marker for large data sets.
        :param limit: maximum number of resources to return in a single result.
                      This value cannot be larger than the value of max_limit
                      in the [api] section of the ironic configuration, or only
                      max_limit resources will be returned.
        :param sort_key: column to sort results by. Default: id.
        :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
        :raises: NotAcceptable, HTTPNotFound
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:port:get', cdict, cdict)

        if not node_uuid and node:
            # We're invoking this interface using positional notation, or
            # explicitly using 'node'. Try and determine which one.

@@ -324,6 +382,7 @@ class PortsController(rest.RestController):
                                          limit, sort_key, sort_dir,
                                          resource_url)

    @METRICS.timer('PortsController.get_one')
    @expose.expose(Port, types.uuid, types.listtype)
    def get_one(self, port_uuid, fields=None):
        """Retrieve information about the given port.

@@ -331,7 +390,11 @@ class PortsController(rest.RestController):
        :param port_uuid: UUID of a port.
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        :raises: NotAcceptable
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:port:get', cdict, cdict)

        if self.from_nodes:
            raise exception.OperationNotPermitted()

@@ -340,22 +403,34 @@ class PortsController(rest.RestController):
        rpc_port = objects.Port.get_by_uuid(pecan.request.context, port_uuid)
        return Port.convert_with_links(rpc_port, fields=fields)

    @METRICS.timer('PortsController.post')
    @expose.expose(Port, body=Port, status_code=http_client.CREATED)
    def post(self, port):
        """Create a new port.

        :param port: a port within the request body.
        :raises: NotAcceptable
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:port:create', cdict, cdict)

        if self.from_nodes:
            raise exception.OperationNotPermitted()

        pdict = port.as_dict()
        if not api_utils.allow_port_advanced_net_fields():
            if set(pdict).intersection(self.advanced_net_fields):
                raise exception.NotAcceptable()

        new_port = objects.Port(pecan.request.context,
-                               **port.as_dict())
+                               **pdict)

        new_port.create()
        # Set the HTTP Location Header
        pecan.response.location = link.build_url('ports', new_port.uuid)
        return Port.convert_with_links(new_port)

    @METRICS.timer('PortsController.patch')
    @wsme.validate(types.uuid, [PortPatchType])
    @expose.expose(Port, types.uuid, body=[PortPatchType])
    def patch(self, port_uuid, patch):

@@ -363,9 +438,19 @@ class PortsController(rest.RestController):

        :param port_uuid: UUID of a port.
        :param patch: a json PATCH document to apply to this port.
        :raises: NotAcceptable
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:port:update', cdict, cdict)

        if self.from_nodes:
            raise exception.OperationNotPermitted()
        if not api_utils.allow_port_advanced_net_fields():
            for field in self.advanced_net_fields:
                field_path = '/%s' % field
                if (api_utils.get_patch_values(patch, field_path) or
                        api_utils.is_path_removed(patch, field_path)):
                    raise exception.NotAcceptable()

        rpc_port = objects.Port.get_by_uuid(pecan.request.context, port_uuid)
        try:

@@ -400,12 +485,16 @@ class PortsController(rest.RestController):

        return Port.convert_with_links(new_port)

    @METRICS.timer('PortsController.delete')
    @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT)
    def delete(self, port_uuid):
        """Delete a port.

        :param port_uuid: UUID of a port.
        """
        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:port:delete', cdict, cdict)

        if self.from_nodes:
            raise exception.OperationNotPermitted()
        rpc_port = objects.Port.get_by_uuid(pecan.request.context,
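The `hide_fields_in_newer_versions()` helper above blanks out port fields that postdate the requested microversion. A standalone sketch of that idea, with `UNSET` and `FakePort` as illustrative stand-ins for `wsme.Unset` and the real `Port` object:

```python
# Sketch of version-based field hiding: fields introduced after the
# requested microversion are replaced with a sentinel so they are not
# serialized. UNSET stands in for wsme.Unset.
UNSET = object()


class FakePort(object):
    def __init__(self):
        self.internal_info = {'cleaning': False}   # added in API 1.18
        self.pxe_enabled = True                    # added in API 1.19
        self.local_link_connection = {'port_id': 'Gig0/1'}  # added in 1.19


def hide_fields_for_version(port, minor):
    if minor < 18:
        port.internal_info = UNSET
    if minor < 19:
        port.pxe_enabled = UNSET
        port.local_link_connection = UNSET
    return port


old = hide_fields_for_version(FakePort(), 17)   # pre-1.18 client
new = hide_fields_for_version(FakePort(), 19)   # 1.19+ client
```

An older client simply never sees the new attributes, which keeps responses stable across microversions.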
@@ -0,0 +1,150 @@
# Copyright 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
import pecan
from pecan import rest
from six.moves import http_client
from wsme import types as wtypes

from ironic.api.controllers import base
from ironic.api.controllers.v1 import node as node_ctl
from ironic.api.controllers.v1 import types
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.common import exception
from ironic.common import policy
from ironic.common import states
from ironic import objects


CONF = cfg.CONF

_LOOKUP_RETURN_FIELDS = ('uuid', 'properties', 'instance_info',
                         'driver_internal_info')
_LOOKUP_ALLOWED_STATES = {states.DEPLOYING, states.DEPLOYWAIT,
                          states.CLEANING, states.CLEANWAIT,
                          states.INSPECTING}


def config():
    return {
        'metrics': {
            'backend': CONF.metrics.agent_backend,
            'prepend_host': CONF.metrics.agent_prepend_host,
            'prepend_uuid': CONF.metrics.agent_prepend_uuid,
            'prepend_host_reverse': CONF.metrics.agent_prepend_host_reverse,
            'global_prefix': CONF.metrics.agent_global_prefix
        },
        'metrics_statsd': {
            'statsd_host': CONF.metrics_statsd.agent_statsd_host,
            'statsd_port': CONF.metrics_statsd.agent_statsd_port
        },
        'heartbeat_timeout': CONF.api.ramdisk_heartbeat_timeout
    }


class LookupResult(base.APIBase):
    """API representation of the node lookup result."""

    node = node_ctl.Node
    """The short node representation."""

    config = {wtypes.text: types.jsontype}
    """The configuration to pass to the ramdisk."""

    @classmethod
    def sample(cls):
        return cls(node=node_ctl.Node.sample(),
                   config={'heartbeat_timeout': 600})

    @classmethod
    def convert_with_links(cls, node):
        node = node_ctl.Node.convert_with_links(node, _LOOKUP_RETURN_FIELDS)
        return cls(node=node, config=config())


class LookupController(rest.RestController):
    """Controller handling node lookup for a deploy ramdisk."""

    @expose.expose(LookupResult, types.list_of_macaddress, types.uuid)
    def get_all(self, addresses=None, node_uuid=None):
        """Look up a node by its MAC addresses and optionally UUID.

        If the "restrict_lookup" option is set to True (the default), limit
        the search to nodes in certain transient states (e.g. deploy wait).

        :param addresses: list of MAC addresses for a node.
        :param node_uuid: UUID of a node.
        :raises: NotFound if requested API version does not allow this
            endpoint.
        :raises: NotFound if suitable node was not found.
        """
        if not api_utils.allow_ramdisk_endpoints():
            raise exception.NotFound()

        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:driver:ipa_lookup', cdict, cdict)

        if not addresses and not node_uuid:
            raise exception.IncompleteLookup()

        try:
            if node_uuid:
                node = objects.Node.get_by_uuid(
                    pecan.request.context, node_uuid)
            else:
                node = objects.Node.get_by_port_addresses(
                    pecan.request.context, addresses)
        except exception.NotFound:
            # NOTE(dtantsur): we are reraising the same exception to make sure
            # we don't disclose the difference between nodes that are not found
            # at all and nodes in a wrong state by different error messages.
            raise exception.NotFound()

        if (CONF.api.restrict_lookup and
                node.provision_state not in _LOOKUP_ALLOWED_STATES):
            raise exception.NotFound()

        return LookupResult.convert_with_links(node)


class HeartbeatController(rest.RestController):
    """Controller handling heartbeats from deploy ramdisk."""

    @expose.expose(None, types.uuid_or_name, wtypes.text,
                   status_code=http_client.ACCEPTED)
    def post(self, node_ident, callback_url):
        """Process a heartbeat from the deploy ramdisk.

        :param node_ident: the UUID or logical name of a node.
        :param callback_url: the URL to reach back to the ramdisk.
        """
        if not api_utils.allow_ramdisk_endpoints():
            raise exception.NotFound()

        cdict = pecan.request.context.to_dict()
        policy.authorize('baremetal:node:ipa_heartbeat', cdict, cdict)

        rpc_node = api_utils.get_rpc_node(node_ident)

        try:
            topic = pecan.request.rpcapi.get_topic_for(rpc_node)
        except exception.NoValidHost as e:
            e.code = http_client.BAD_REQUEST
            raise

        pecan.request.rpcapi.heartbeat(pecan.request.context,
                                       rpc_node.uuid, callback_url,
                                       topic=topic)
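`LookupController.get_all()` above only allows ramdisk lookup of nodes in transient provision states when `restrict_lookup` is enabled. The decision logic can be sketched standalone; the state strings below are illustrative placeholders (the real code compares against constants from `ironic.common.states`, whose string values differ):

```python
# Sketch of the lookup restriction: with restrict_lookup on (the default),
# only nodes mid-deploy/clean/inspect may be looked up by the ramdisk.
# State names here are illustrative, not ironic's actual state strings.
LOOKUP_ALLOWED_STATES = {'deploying', 'deploy-wait',
                         'cleaning', 'clean-wait', 'inspecting'}


def lookup_is_allowed(provision_state, restrict_lookup=True):
    if restrict_lookup and provision_state not in LOOKUP_ALLOWED_STATES:
        return False
    return True
```

In the controller a disallowed state yields the same `NotFound` as a missing node, deliberately not disclosing which case occurred.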
@@ -176,6 +176,26 @@ class ListType(wtypes.UserType):
        return ListType.validate(value)


class ListOfMacAddressesType(ListType):
    """List of MAC addresses."""

    @staticmethod
    def validate(value):
        """Validate and convert the input to a ListOfMacAddressesType.

        :param value: A comma separated string of MAC addresses.
        :returns: A list of unique MACs, whose order is not guaranteed.
        """
        items = ListType.validate(value)
        return [MacAddressType.validate(item) for item in items]

    @staticmethod
    def frombasetype(value):
        if value is None:
            return None
        return ListOfMacAddressesType.validate(value)


macaddress = MacAddressType()
uuid_or_name = UuidOrNameType()
name = NameType()

@@ -184,6 +204,7 @@ boolean = BooleanType()
listtype = ListType()
# Can't call it 'json' because that's the name of the stdlib module
jsontype = JsonType()
list_of_macaddress = ListOfMacAddressesType()


class JsonPatchType(wtypes.Base):

@@ -255,3 +276,76 @@ class JsonPatchType(wtypes.Base):
        if patch.value is not wsme.Unset:
            ret['value'] = patch.value
        return ret


class LocalLinkConnectionType(wtypes.UserType):
    """A type describing local link connection."""

    basetype = wtypes.DictType
    name = 'locallinkconnection'

    mandatory_fields = {'switch_id',
                        'port_id'}
    valid_fields = mandatory_fields.union({'switch_info'})

    @staticmethod
    def validate(value):
        """Validate and convert the input to a LocalLinkConnectionType.

        :param value: A dictionary of values to validate, switch_id is a MAC
            address or an OpenFlow based datapath_id, switch_info is an
            optional field.

        For example::

         {
            'switch_id': mac_or_datapath_id(),
            'port_id': 'Ethernet3/1',
            'switch_info': 'switch1'
         }

        :returns: A dictionary.
        :raises: Invalid if some of the keys in the dictionary being validated
            are unknown, invalid, or some required ones are missing.
        """
        wtypes.DictType(wtypes.text, wtypes.text).validate(value)

        keys = set(value)

        # This is to workaround an issue when an API object is initialized from
        # RPC object, in which dictionary fields that are set to None become
        # empty dictionaries
        if not keys:
            return value

        invalid = keys - LocalLinkConnectionType.valid_fields
        if invalid:
            raise exception.Invalid(_('%s are invalid keys') % (invalid))

        # Check all mandatory fields are present
        missing = LocalLinkConnectionType.mandatory_fields - keys
        if missing:
            msg = _('Missing mandatory keys: %s') % missing
            raise exception.Invalid(msg)

        # Check switch_id is either a valid mac address or
        # OpenFlow datapath_id and normalize it.
        try:
            value['switch_id'] = utils.validate_and_normalize_mac(
                value['switch_id'])
        except exception.InvalidMAC:
            try:
                value['switch_id'] = utils.validate_and_normalize_datapath_id(
                    value['switch_id'])
            except exception.InvalidDatapathID:
                raise exception.InvalidSwitchID(switch_id=value['switch_id'])

        return value

    @staticmethod
    def frombasetype(value):
        if value is None:
            return None
        return LocalLinkConnectionType.validate(value)


locallinkconnectiontype = LocalLinkConnectionType()
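The key checking in `LocalLinkConnectionType.validate()` above is plain set arithmetic over mandatory and valid field names. A standalone sketch (the MAC/datapath_id normalization step is omitted, and `ValueError` stands in for `exception.Invalid`):

```python
# Sketch of the set-based key validation used by LocalLinkConnectionType:
# reject unknown keys, require the mandatory ones, tolerate an empty dict
# (which an RPC object with a None field can produce).
MANDATORY_FIELDS = {'switch_id', 'port_id'}
VALID_FIELDS = MANDATORY_FIELDS | {'switch_info'}


def validate_local_link_connection(value):
    keys = set(value)
    if not keys:
        # workaround: None dictionary fields arrive as {} from RPC objects
        return value
    invalid = keys - VALID_FIELDS
    if invalid:
        raise ValueError('%s are invalid keys' % invalid)
    missing = MANDATORY_FIELDS - keys
    if missing:
        raise ValueError('Missing mandatory keys: %s' % missing)
    return value
```

Doing both checks with set differences keeps the error messages precise: the caller learns exactly which keys were unknown or missing.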
@@ -99,6 +99,20 @@ def get_patch_values(patch, path):
            if p['path'] == path and p['op'] != 'remove']


def is_path_removed(patch, path):
    """Returns whether the patch includes removal of the path (or subpath of).

    :param patch: HTTP PATCH request body.
    :param path: the path to check.
    :returns: True if path or subpath being removed, False otherwise.
    """
    path = path.rstrip('/')
    for p in patch:
        if ((p['path'] == path or p['path'].startswith(path + '/')) and
                p['op'] == 'remove'):
            return True


def allow_node_logical_names():
    # v1.5 added logical name aliases
    return pecan.request.version.minor >= versions.MINOR_5_NODE_NAME

@@ -226,6 +240,35 @@ def check_allow_specify_fields(fields):
        raise exception.NotAcceptable()


def check_allowed_fields(fields):
    """Check if fetching a particular field is allowed.

    This method checks if the required version is being requested for fields
    that are only allowed to be fetched in a particular API version.
    """
    if fields is None:
        return
    if 'network_interface' in fields and not allow_network_interface():
        raise exception.NotAcceptable()
    if 'resource_class' in fields and not allow_resource_class():
        raise exception.NotAcceptable()


# NOTE(vsaienko) The validation is performed on API side, all conductors
# and api should have the same list of enabled_network_interfaces.
# TODO(vsaienko) remove it once driver-composition-reform is implemented.
def is_valid_network_interface(network_interface):
    """Determine if the provided network_interface is valid.

    Check to see that the provided network_interface is in the enabled
    network interfaces list.

    :param: network_interface: the node network interface to check.
    :returns: True if the network_interface is valid, False otherwise.
    """
    return network_interface in CONF.enabled_network_interfaces


def check_allow_management_verbs(verb):
    min_version = MIN_VERB_VERSIONS.get(verb)
    if min_version is not None and pecan.request.version.minor < min_version:

@@ -261,6 +304,20 @@ def check_allow_specify_driver(driver):
            'opr': versions.MINOR_16_DRIVER_FILTER})


def check_allow_specify_resource_class(resource_class):
    """Check if filtering nodes by resource_class is allowed.

    Version 1.21 of the API allows filtering nodes by resource_class.
    """
    if (resource_class is not None and pecan.request.version.minor <
            versions.MINOR_21_RESOURCE_CLASS):
        raise exception.NotAcceptable(_(
            "Request not acceptable. The minimal required API version "
            "should be %(base)s.%(opr)s") %
            {'base': versions.BASE_VERSION,
             'opr': versions.MINOR_21_RESOURCE_CLASS})


def initial_node_provision_state():
    """Return node state to use by default when creating new nodes.

@@ -290,6 +347,50 @@ def allow_links_node_states_and_driver_properties():
        versions.MINOR_14_LINKS_NODESTATES_DRIVERPROPERTIES)


def allow_port_internal_info():
    """Check if accessing internal_info is allowed for the port.

    Version 1.18 of the API exposes internal_info readonly field for the port.
    """
    return (pecan.request.version.minor >=
            versions.MINOR_18_PORT_INTERNAL_INFO)


def allow_port_advanced_net_fields():
    """Check if we should return local_link_connection and pxe_enabled fields.

    Version 1.19 of the API added support for these new fields in port object.
    """
    return (pecan.request.version.minor >=
            versions.MINOR_19_PORT_ADVANCED_NET_FIELDS)


def allow_network_interface():
|
||||
"""Check if we should support network_interface node field.
|
||||
|
||||
Version 1.20 of the API added support for network interfaces.
|
||||
"""
|
||||
return (pecan.request.version.minor >=
|
||||
versions.MINOR_20_NETWORK_INTERFACE)
|
||||
|
||||
|
||||
def allow_resource_class():
|
||||
"""Check if we should support resource_class node field.
|
||||
|
||||
Version 1.21 of the API added support for resource_class.
|
||||
"""
|
||||
return (pecan.request.version.minor >=
|
||||
versions.MINOR_21_RESOURCE_CLASS)
|
||||
|
||||
|
||||
def allow_ramdisk_endpoints():
|
||||
"""Check if heartbeat and lookup endpoints are allowed.
|
||||
|
||||
Version 1.22 of the API introduced them.
|
||||
"""
|
||||
return pecan.request.version.minor >= versions.MINOR_22_LOOKUP_HEARTBEAT
|
||||
|
||||
|
||||
def get_controller_reserved_names(cls):
|
||||
"""Get reserved names for a given controller.
|
||||
|
||||
|
|
|
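For reference, the new `is_path_removed()` helper can be exercised outside of ironic. The sketch below re-implements it standalone (adding an explicit `return False`, which the diffed code leaves implicit) against a hypothetical sample JSON PATCH body; the sample data is illustrative, not from the commit:

```python
def is_path_removed(patch, path):
    """Return True if the JSON PATCH removes ``path`` or any subpath of it."""
    path = path.rstrip('/')
    for p in patch:
        if ((p['path'] == path or p['path'].startswith(path + '/')) and
                p['op'] == 'remove'):
            return True
    return False


# A removal of a nested key counts as touching the parent path.
patch = [{'op': 'remove', 'path': '/local_link_connection/switch_id'},
         {'op': 'replace', 'path': '/pxe_enabled', 'value': True}]
print(is_path_removed(patch, '/local_link_connection'))  # True (subpath removed)
print(is_path_removed(patch, '/pxe_enabled'))            # False (replace, not remove)
```

This is why the API can reject a PATCH that would strip a mandatory sub-field even when the parent path itself is never named in the request.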
@@ -47,6 +47,11 @@ BASE_VERSION = 1
 # v1.15: Add ability to do manual cleaning of nodes
 # v1.16: Add ability to filter nodes by driver.
 # v1.17: Add 'adopt' verb for ADOPTING active nodes.
+# v1.18: Add port.internal_info.
+# v1.19: Add port.local_link_connection and port.pxe_enabled.
+# v1.20: Add node.network_interface
+# v1.21: Add node.resource_class
+# v1.22: Ramdisk lookup and heartbeat endpoints.
 
 MINOR_0_JUNO = 0
 MINOR_1_INITIAL_VERSION = 1

@@ -66,11 +71,16 @@ MINOR_14_LINKS_NODESTATES_DRIVERPROPERTIES = 14
 MINOR_15_MANUAL_CLEAN = 15
 MINOR_16_DRIVER_FILTER = 16
 MINOR_17_ADOPT_VERB = 17
+MINOR_18_PORT_INTERNAL_INFO = 18
+MINOR_19_PORT_ADVANCED_NET_FIELDS = 19
+MINOR_20_NETWORK_INTERFACE = 20
+MINOR_21_RESOURCE_CLASS = 21
+MINOR_22_LOOKUP_HEARTBEAT = 22
 
 # When adding another version, update MINOR_MAX_VERSION and also update
 # doc/source/webapi/v1.rst with a detailed explanation of what the version has
 # changed.
-MINOR_MAX_VERSION = MINOR_17_ADOPT_VERB
+MINOR_MAX_VERSION = MINOR_22_LOOKUP_HEARTBEAT
 
 # String representations of the minor and maximum versions
 MIN_VERSION_STRING = '{}.{}'.format(BASE_VERSION, MINOR_1_INITIAL_VERSION)
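The constants above gate fields and endpoints by comparing the requested microversion's minor number against a threshold. A minimal standalone sketch of that gating logic (simplified: the minor version is passed as an argument instead of being read from `pecan.request`):

```python
MINOR_19_PORT_ADVANCED_NET_FIELDS = 19
MINOR_22_LOOKUP_HEARTBEAT = 22


def allow_port_advanced_net_fields(requested_minor):
    # local_link_connection and pxe_enabled are only exposed from 1.19 on.
    return requested_minor >= MINOR_19_PORT_ADVANCED_NET_FIELDS


def allow_ramdisk_endpoints(requested_minor):
    # The lookup and heartbeat endpoints only exist from 1.22 on.
    return requested_minor >= MINOR_22_LOOKUP_HEARTBEAT


print(allow_port_advanced_net_fields(18))  # False
print(allow_port_advanced_net_fields(19))  # True
print(allow_ramdisk_endpoints(22))         # True
```

A client that negotiates `X-OpenStack-Ironic-API-Version: 1.18` therefore never sees the new port fields, even against a Newton server that implements them.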
@@ -17,7 +17,6 @@
 from oslo_config import cfg
 from pecan import hooks
 from six.moves import http_client
-from webob import exc
 
 from ironic.common import context
 from ironic.common import policy

@@ -69,6 +68,7 @@ class ContextHook(hooks.PecanHook):
         # Do not pass any token with context for noauth mode
         auth_token = (None if cfg.CONF.auth_strategy == 'noauth' else
                       headers.get('X-Auth-Token'))
+        is_public_api = state.request.environ.get('is_public_api', False)
 
         creds = {
             'user': headers.get('X-User') or headers.get('X-User-Id'),

@@ -77,16 +77,17 @@ class ContextHook(hooks.PecanHook):
             'domain_name': headers.get('X-User-Domain-Name'),
             'auth_token': auth_token,
             'roles': headers.get('X-Roles', '').split(','),
+            'is_public_api': is_public_api,
         }
 
-        is_admin = policy.enforce('admin_api', creds, creds)
-        is_public_api = state.request.environ.get('is_public_api', False)
-        show_password = policy.enforce('show_password', creds, creds)
+        # TODO(deva): refactor this so enforce is called directly at relevant
+        # places in code, not globally and for every request
+        show_password = policy.check('show_password', creds, creds)
+        is_admin = policy.check('is_admin', creds, creds)
 
         state.request.context = context.RequestContext(
-            is_admin=is_admin,
-            is_public_api=is_public_api,
             show_password=show_password,
+            is_admin=is_admin,
             **creds)
 
     def after(self, state):

@@ -106,22 +107,6 @@ class RPCHook(hooks.PecanHook):
         state.request.rpcapi = rpcapi.ConductorAPI()
 
 
-class TrustedCallHook(hooks.PecanHook):
-    """Verify that the user has admin rights.
-
-    Checks whether the API call is performed against a public
-    resource or the user has admin privileges in the appropriate
-    tenant, domain or other administrative unit.
-
-    """
-    def before(self, state):
-        ctx = state.request.context
-        if ctx.is_public_api:
-            return
-        policy.enforce('admin_api', ctx.to_dict(), ctx.to_dict(),
-                       do_raise=True, exc=exc.HTTPForbidden)
-
-
 class NoExceptionTracebackHook(hooks.PecanHook):
     """Workaround rpc.common: deserialize_remote_exception.
@@ -19,5 +19,5 @@ from ironic.api.middleware import parsable_error
 ParsableErrorMiddleware = parsable_error.ParsableErrorMiddleware
 AuthTokenMiddleware = auth_token.AuthTokenMiddleware
 
-__all__ = (ParsableErrorMiddleware,
-           AuthTokenMiddleware)
+__all__ = ('ParsableErrorMiddleware',
+           'AuthTokenMiddleware')
@@ -25,10 +25,37 @@ from oslo_config import cfg
 from oslo_log import log
 from oslo_service import service
 
+from ironic.common.i18n import _LW
 from ironic.common import service as ironic_service
+from ironic.conf import auth
 
 CONF = cfg.CONF
 
+LOG = log.getLogger(__name__)
+
+SECTIONS_WITH_AUTH = (
+    'service_catalog', 'neutron', 'glance', 'swift', 'inspector')
+
+
+# TODO(pas-ha) remove this check after deprecation period
+def _check_auth_options(conf):
+    missing = []
+    for section in SECTIONS_WITH_AUTH:
+        if not auth.load_auth(conf, section):
+            missing.append('[%s]' % section)
+    if missing:
+        link = "http://docs.openstack.org/releasenotes/ironic/newton.html"
+        LOG.warning(_LW("Failed to load authentication credentials from "
+                        "%(missing)s config sections. "
+                        "The corresponding service users' credentials "
+                        "will be loaded from [%(old)s] config section, "
+                        "which is deprecated for this purpose. "
+                        "Please update the config file. "
+                        "For more info see %(link)s."),
+                    dict(missing=", ".join(missing),
+                         old=auth.LEGACY_SECTION,
+                         link=link))
+
 
 def main():
     # Parse config file and command line options, then start logging

@@ -38,9 +65,7 @@ def main():
                                      'ironic.conductor.manager',
                                      'ConductorManager')
 
-    LOG = log.getLogger(__name__)
     LOG.debug("Configuration:")
     CONF.log_opt_values(LOG, log.DEBUG)
+    _check_auth_options(CONF)
 
     launcher = service.launch(CONF, mgr)
     launcher.wait()
@@ -25,12 +25,10 @@ from oslo_config import cfg
 
 from ironic.common.i18n import _
 from ironic.common import service
+from ironic.conf import CONF
 from ironic.db import migration
 
 
-CONF = cfg.CONF
-
 
 class DBCommand(object):
 
     def upgrade(self):
@@ -16,7 +16,7 @@ from oslo_context import context
 
 
 class RequestContext(context.RequestContext):
-    """Extends security contexts from the OpenStack common library."""
+    """Extends security contexts from the oslo.context library."""
 
     def __init__(self, auth_token=None, domain_id=None, domain_name=None,
                  user=None, tenant=None, is_admin=False, is_public_api=False,

@@ -92,9 +92,8 @@ class RequestContext(context.RequestContext):
 
 
 def get_admin_context():
-    """Create an administrator context.
-    """
+    """Create an administrator context."""
     context = RequestContext(None,
                              tenant=None,
                              is_admin=True,
@@ -16,36 +16,18 @@
 import collections
 
 from oslo_concurrency import lockutils
-from oslo_config import cfg
 from oslo_log import log
 from stevedore import dispatch
 
 from ironic.common import exception
-from ironic.common.i18n import _
 from ironic.common.i18n import _LI
 from ironic.common.i18n import _LW
+from ironic.conf import CONF
 from ironic.drivers import base as driver_base
 
 
 LOG = log.getLogger(__name__)
 
-driver_opts = [
-    cfg.ListOpt('enabled_drivers',
-                default=['pxe_ipmitool'],
-                help=_('Specify the list of drivers to load during service '
-                       'initialization. Missing drivers, or drivers which '
-                       'fail to initialize, will prevent the conductor '
-                       'service from starting. The option default is a '
-                       'recommended set of production-oriented drivers. A '
-                       'complete list of drivers present on your system may '
-                       'be found by enumerating the "ironic.drivers" '
-                       'entrypoint. An example may be found in the '
-                       'developer documentation online.')),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(driver_opts)
 
 EM_SEMAPHORE = 'extension_manager'

@@ -76,6 +58,16 @@ def _attach_interfaces_to_driver(driver, node, driver_name=None):
         impl = getattr(driver_singleton, iface, None)
         setattr(driver, iface, impl)
 
+    network_iface = node.network_interface
+    network_factory = NetworkInterfaceFactory()
+    try:
+        net_driver = network_factory.get_driver(network_iface)
+    except KeyError:
+        raise exception.DriverNotFoundInEntrypoint(
+            driver_name=network_iface,
+            entrypoint=network_factory._entrypoint_name)
+    driver.network = net_driver
 
 
 def get_driver(driver_name):
     """Simple method to get a ref to an instance of a driver.

@@ -93,7 +85,7 @@ def get_driver(driver_name):
 
     try:
         factory = DriverFactory()
-        return factory[driver_name].obj
+        return factory.get_driver(driver_name)
     except KeyError:
         raise exception.DriverNotFound(driver_name=driver_name)

@@ -109,8 +101,11 @@ def drivers():
                for name in factory.names)
 
 
-class DriverFactory(object):
-    """Discover, load and manage the drivers available."""
+class BaseDriverFactory(object):
+    """Discover, load and manage the drivers available.
+
+    This is subclassed to load both main drivers and extra interfaces.
+    """
 
     # NOTE(deva): loading the _extension_manager as a class member will break
     #             stevedore when it loads a driver, because the driver will

@@ -119,13 +114,25 @@ class DriverFactory(object):
     #             once, the first time DriverFactory.__init__ is called.
     _extension_manager = None
 
+    # Entrypoint name containing the list of all available drivers/interfaces
+    _entrypoint_name = None
+    # Name of the [DEFAULT] section config option containing a list of enabled
+    # drivers/interfaces
+    _enabled_driver_list_config_option = ''
+    # This field will contain the list of the enabled drivers/interfaces names
+    # without duplicates
+    _enabled_driver_list = None
+
     def __init__(self):
-        if not DriverFactory._extension_manager:
-            DriverFactory._init_extension_manager()
+        if not self.__class__._extension_manager:
+            self.__class__._init_extension_manager()
 
     def __getitem__(self, name):
         return self._extension_manager[name]
 
+    def get_driver(self, name):
+        return self[name].obj
+
     # NOTE(deva): Use lockutils to avoid a potential race in eventlet
     #             that might try to create two driver factories.
     @classmethod

@@ -136,19 +143,24 @@ class DriverFactory(object):
         #     creation of multiple NameDispatchExtensionManagers.
         if cls._extension_manager:
             return
+        enabled_drivers = getattr(CONF, cls._enabled_driver_list_config_option,
+                                  [])
 
         # Check for duplicated driver entries and warn the operator
         # about them
-        counter = collections.Counter(CONF.enabled_drivers).items()
-        duplicated_drivers = list(dup for (dup, i) in counter if i > 1)
+        counter = collections.Counter(enabled_drivers).items()
+        duplicated_drivers = []
+        cls._enabled_driver_list = []
+        for item, cnt in counter:
+            if cnt > 1:
+                duplicated_drivers.append(item)
+            cls._enabled_driver_list.append(item)
         if duplicated_drivers:
             LOG.warning(_LW('The driver(s) "%s" is/are duplicated in the '
                             'list of enabled_drivers. Please check your '
                             'configuration file.'),
                         ', '.join(duplicated_drivers))
 
-        enabled_drivers = set(CONF.enabled_drivers)
-
         # NOTE(deva): Drivers raise "DriverLoadError" if they are unable to be
         #             loaded, eg. due to missing external dependencies.
         #             We capture that exception, and, only if it is for an

@@ -160,30 +172,31 @@ class DriverFactory(object):
         def _catch_driver_not_found(mgr, ep, exc):
             # NOTE(deva): stevedore loads plugins *before* evaluating
             #             _check_func, so we need to check here, too.
-            if ep.name in enabled_drivers:
+            if ep.name in cls._enabled_driver_list:
                 if not isinstance(exc, exception.DriverLoadError):
                     raise exception.DriverLoadError(driver=ep.name, reason=exc)
                 raise exc
 
         def _check_func(ext):
-            return ext.name in enabled_drivers
+            return ext.name in cls._enabled_driver_list
 
         cls._extension_manager = (
             dispatch.NameDispatchExtensionManager(
-                'ironic.drivers',
+                cls._entrypoint_name,
                 _check_func,
                 invoke_on_load=True,
                 on_load_failure_callback=_catch_driver_not_found))
 
         # NOTE(deva): if we were unable to load any configured driver, perhaps
         #             because it is not present on the system, raise an error.
-        if (sorted(enabled_drivers) !=
+        if (sorted(cls._enabled_driver_list) !=
                 sorted(cls._extension_manager.names())):
             found = cls._extension_manager.names()
-            names = [n for n in enabled_drivers if n not in found]
+            names = [n for n in cls._enabled_driver_list if n not in found]
             # just in case more than one could not be found ...
             names = ', '.join(names)
-            raise exception.DriverNotFound(driver_name=names)
+            raise exception.DriverNotFoundInEntrypoint(
+                driver_name=names, entrypoint=cls._entrypoint_name)
 
         LOG.info(_LI("Loaded the following drivers: %s"),
                  cls._extension_manager.names())

@@ -192,3 +205,13 @@ class DriverFactory(object):
     def names(self):
         """The list of driver names available."""
         return self._extension_manager.names()
+
+
+class DriverFactory(BaseDriverFactory):
+    _entrypoint_name = 'ironic.drivers'
+    _enabled_driver_list_config_option = 'enabled_drivers'
+
+
+class NetworkInterfaceFactory(BaseDriverFactory):
+    _entrypoint_name = 'ironic.hardware.interfaces.network'
+    _enabled_driver_list_config_option = 'enabled_network_interfaces'
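The duplicate-detection loop added to `_init_extension_manager` deduplicates the configured driver list while remembering which entries were repeated. The same logic in isolation, against a hypothetical driver list (list contents are illustrative only; first-seen order is preserved on Python 3.7+, where `collections.Counter` keeps insertion order):

```python
import collections


def split_enabled_drivers(enabled_drivers):
    """Return (deduplicated list, duplicated entries), as in the factory."""
    counter = collections.Counter(enabled_drivers).items()
    duplicated_drivers = []
    enabled_driver_list = []
    for item, cnt in counter:
        if cnt > 1:
            duplicated_drivers.append(item)
        enabled_driver_list.append(item)
    return enabled_driver_list, duplicated_drivers


enabled, dupes = split_enabled_drivers(
    ['pxe_ipmitool', 'agent_ipmitool', 'pxe_ipmitool'])
print(enabled)  # ['pxe_ipmitool', 'agent_ipmitool']
print(dupes)    # ['pxe_ipmitool']
```

Keeping the duplicates out of `_enabled_driver_list` is what lets the later `sorted(...)` comparison against the extension manager's names succeed instead of flagging a phantom missing driver.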
@@ -20,29 +20,16 @@ SHOULD include dedicated exception logging.
 
 """
 
-from oslo_config import cfg
 from oslo_log import log as logging
 import six
 from six.moves import http_client
 
 from ironic.common.i18n import _
 from ironic.common.i18n import _LE
+from ironic.conf import CONF
 
 LOG = logging.getLogger(__name__)
 
-exc_log_opts = [
-    cfg.BoolOpt('fatal_exception_format_errors',
-                default=False,
-                help=_('Used if there is a formatting error when generating '
-                       'an exception message (a programming error). If True, '
-                       'raise an exception; if False, use the unformatted '
-                       'message.')),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(exc_log_opts)
 
 
 class IronicException(Exception):
     """Base Ironic Exception

@@ -173,11 +160,11 @@ class DuplicateName(Conflict):
 
 
 class InvalidUUID(Invalid):
-    _msg_fmt = _("Expected a uuid but received %(uuid)s.")
+    _msg_fmt = _("Expected a UUID but received %(uuid)s.")
 
 
 class InvalidUuidOrName(Invalid):
-    _msg_fmt = _("Expected a logical name or uuid but received %(name)s.")
+    _msg_fmt = _("Expected a logical name or UUID but received %(name)s.")
 
 
 class InvalidName(Invalid):

@@ -185,13 +172,23 @@ class InvalidName(Invalid):
 
 
 class InvalidIdentity(Invalid):
-    _msg_fmt = _("Expected an uuid or int but received %(identity)s.")
+    _msg_fmt = _("Expected a UUID or int but received %(identity)s.")
 
 
 class InvalidMAC(Invalid):
     _msg_fmt = _("Expected a MAC address but received %(mac)s.")
 
 
+class InvalidSwitchID(Invalid):
+    _msg_fmt = _("Expected a MAC address or OpenFlow datapath ID but "
+                 "received %(switch_id)s.")
+
+
+class InvalidDatapathID(Invalid):
+    _msg_fmt = _("Expected an OpenFlow datapath ID but received "
+                 "%(datapath_id)s.")
+
+
 class InvalidStateRequested(Invalid):
     _msg_fmt = _('The requested action "%(action)s" can not be performed '
                  'on node "%(node)s" while it is in state "%(state)s".')

@@ -241,6 +238,11 @@ class DriverNotFound(NotFound):
     _msg_fmt = _("Could not find the following driver(s): %(driver_name)s.")
 
 
+class DriverNotFoundInEntrypoint(DriverNotFound):
+    _msg_fmt = _("Could not find the following driver(s) in the "
+                 "'%(entrypoint)s' entrypoint: %(driver_name)s.")
+
+
 class ImageNotFound(NotFound):
     _msg_fmt = _("Image %(image_id)s could not be found.")

@@ -253,6 +255,10 @@ class InstanceNotFound(NotFound):
     _msg_fmt = _("Instance %(instance)s could not be found.")
 
 
+class InputFileError(IronicException):
+    _msg_fmt = _("Error with file %(file_name)s. Reason: %(reason)s")
+
+
 class NodeNotFound(NotFound):
     _msg_fmt = _("Node %(node)s could not be found.")

@@ -424,8 +430,8 @@ class CommunicationError(IronicException):
     _msg_fmt = _("Unable to communicate with the server.")
 
 
-class HTTPForbidden(Forbidden):
-    pass
+class HTTPForbidden(NotAuthorized):
+    _msg_fmt = _("Access was denied to the following resource: %(resource)s")
 
 
 class Unauthorized(IronicException):

@@ -483,10 +489,6 @@ class PasswordFileFailedToCreate(IronicException):
     _msg_fmt = _("Failed to create the password file. %(error)s")
 
 
-class IBootOperationError(IronicException):
-    pass
-
-
 class IloOperationError(IronicException):
     _msg_fmt = _("%(operation)s failed, error: %(error)s")

@@ -593,5 +595,19 @@ class OneViewError(IronicException):
     _msg_fmt = _("OneView exception occurred. Error: %(error)s")
 
 
+class OneViewInvalidNodeParameter(OneViewError):
+    _msg_fmt = _("Error while obtaining OneView info from node %(node_uuid)s. "
+                 "Error: %(error)s")
+
+
+class NodeTagNotFound(IronicException):
+    _msg_fmt = _("Node %(node_id)s doesn't have a tag '%(tag)s'")
+
+
+class NetworkError(IronicException):
+    _msg_fmt = _("Network operation failure.")
+
+
+class IncompleteLookup(Invalid):
+    _msg_fmt = _("At least one of 'addresses' and 'node_uuid' parameters "
+                 "is required")
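The `_msg_fmt` pattern used throughout this module interpolates keyword arguments into a class-level format string. A stripped-down sketch of the mechanism (a hypothetical stand-in, not ironic's full `IronicException`, which additionally handles logging and the `fatal_exception_format_errors` policy):

```python
class FakeIronicException(Exception):
    """Minimal stand-in for ironic's base exception class."""
    _msg_fmt = "An unknown exception occurred."

    def __init__(self, message=None, **kwargs):
        if not message:
            # Substitute the keyword arguments into the class template.
            message = self._msg_fmt % kwargs
        super(FakeIronicException, self).__init__(message)


class InvalidMAC(FakeIronicException):
    """Each subclass only has to declare its template."""
    _msg_fmt = "Expected a MAC address but received %(mac)s."


print(InvalidMAC(mac='not-a-mac'))  # Expected a MAC address but received not-a-mac.
```

This is why new exception classes in the diff, such as `InvalidSwitchID` or `IncompleteLookup`, consist of nothing but a `_msg_fmt` assignment.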
@@ -70,7 +70,7 @@ class FSM(machines.FiniteMachine):
 
         :param state: the state of interest
         :raises: InvalidState if the state is invalid
-        :returns True if it is a stable state; False otherwise
+        :returns: True if it is a stable state; False otherwise
         """
         try:
             return self._states[state]['stable']
@@ -16,7 +16,6 @@
 import collections
 import time
 
-from oslo_config import cfg
 from oslo_utils import uuidutils
 import six
 from six.moves.urllib import parse as urlparse

@@ -27,101 +26,7 @@ from ironic.common.glance_service import base_image_service
 from ironic.common.glance_service import service
 from ironic.common.glance_service import service_utils
 from ironic.common.i18n import _
 
 
-glance_opts = [
-    cfg.ListOpt('allowed_direct_url_schemes',
-                default=[],
-                help=_('A list of URL schemes that can be downloaded directly '
-                       'via the direct_url. Currently supported schemes: '
-                       '[file].')),
-    # To upload this key to Swift:
-    # swift post -m Temp-Url-Key:secretkey
-    # When using radosgw, temp url key could be uploaded via the above swift
-    # command, or with:
-    # radosgw-admin user modify --uid=user --temp-url-key=secretkey
-    cfg.StrOpt('swift_temp_url_key',
-               help=_('The secret token given to Swift to allow temporary URL '
-                      'downloads. Required for temporary URLs.'),
-               secret=True),
-    cfg.IntOpt('swift_temp_url_duration',
-               default=1200,
-               help=_('The length of time in seconds that the temporary URL '
-                      'will be valid for. Defaults to 20 minutes. If some '
-                      'deploys get a 401 response code when trying to '
-                      'download from the temporary URL, try raising this '
-                      'duration. This value must be greater than or equal to '
-                      'the value for '
-                      'swift_temp_url_expected_download_start_delay')),
-    cfg.BoolOpt('swift_temp_url_cache_enabled',
-                default=False,
-                help=_('Whether to cache generated Swift temporary URLs. '
-                       'Setting it to true is only useful when an image '
-                       'caching proxy is used. Defaults to False.')),
-    cfg.IntOpt('swift_temp_url_expected_download_start_delay',
-               default=0, min=0,
-               help=_('This is the delay (in seconds) from the time of the '
-                      'deploy request (when the Swift temporary URL is '
-                      'generated) to when the IPA ramdisk starts up and URL '
-                      'is used for the image download. This value is used to '
-                      'check if the Swift temporary URL duration is large '
-                      'enough to let the image download begin. Also if '
-                      'temporary URL caching is enabled this will determine '
-                      'if a cached entry will still be valid when the '
-                      'download starts. swift_temp_url_duration value must be '
-                      'greater than or equal to this option\'s value. '
-                      'Defaults to 0.')),
-    cfg.StrOpt(
-        'swift_endpoint_url',
-        help=_('The "endpoint" (scheme, hostname, optional port) for '
-               'the Swift URL of the form '
-               '"endpoint_url/api_version/[account/]container/object_id". '
-               'Do not include trailing "/". '
-               'For example, use "https://swift.example.com". If using RADOS '
-               'Gateway, endpoint may also contain /swift path; if it does '
-               'not, it will be appended. Required for temporary URLs.')),
-    cfg.StrOpt(
-        'swift_api_version',
-        default='v1',
-        help=_('The Swift API version to create a temporary URL for. '
-               'Defaults to "v1". Swift temporary URL format: '
-               '"endpoint_url/api_version/[account/]container/object_id"')),
-    cfg.StrOpt(
-        'swift_account',
-        help=_('The account that Glance uses to communicate with '
-               'Swift. The format is "AUTH_uuid". "uuid" is the '
-               'UUID for the account configured in the glance-api.conf. '
-               'Required for temporary URLs when Glance backend is Swift. '
-               'For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". '
-               'Swift temporary URL format: '
-               '"endpoint_url/api_version/[account/]container/object_id"')),
-    cfg.StrOpt(
-        'swift_container',
-        default='glance',
-        help=_('The Swift container Glance is configured to store its '
-               'images in. Defaults to "glance", which is the default '
-               'in glance-api.conf. '
-               'Swift temporary URL format: '
-               '"endpoint_url/api_version/[account/]container/object_id"')),
-    cfg.IntOpt('swift_store_multiple_containers_seed',
-               default=0,
-               help=_('This should match a config by the same name in the '
-                      'Glance configuration file. When set to 0, a '
-                      'single-tenant store will only use one '
-                      'container to store all images. When set to an integer '
-                      'value between 1 and 32, a single-tenant store will use '
-                      'multiple containers to store images, and this value '
-                      'will determine how many containers are created.')),
-    cfg.StrOpt('temp_url_endpoint_type',
-               default='swift',
-               choices=['swift', 'radosgw'],
-               help=_('Type of endpoint to use for temporary URLs. If the '
-                      'Glance backend is Swift, use "swift"; if it is CEPH '
-                      'with RADOS gateway, use "radosgw".')),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(glance_opts, group='glance')
+from ironic.conf import CONF
 
 TempUrlCacheElement = collections.namedtuple('TempUrlCacheElement',
                                              ['url', 'url_expires_at'])
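The relationship between `swift_temp_url_duration` and `swift_temp_url_expected_download_start_delay` described in the relocated option help can be expressed as simple arithmetic. A hedged sketch of the validity check a cached `TempUrlCacheElement` would have to pass (the namedtuple mirrors the one above; the helper function is an illustration of the documented rule, not ironic's exact code):

```python
import collections

TempUrlCacheElement = collections.namedtuple('TempUrlCacheElement',
                                             ['url', 'url_expires_at'])


def cache_entry_usable(entry, now, expected_start_delay):
    # The URL must still be valid when the ramdisk actually starts
    # downloading, i.e. expected_start_delay seconds from now.
    return entry.url_expires_at >= now + expected_start_delay


# URL generated at t=1000 with the default 1200-second duration.
entry = TempUrlCacheElement(url='https://swift.example.com/v1/AUTH_x/c/o',
                            url_expires_at=1000 + 1200)
print(cache_entry_usable(entry, now=1000, expected_start_delay=0))    # True
print(cache_entry_usable(entry, now=2150, expected_start_delay=120))  # False
```

This is why the help text insists that `swift_temp_url_duration` be greater than or equal to the expected download start delay: otherwise even a freshly generated URL could expire before the ramdisk's first request.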
@@ -18,44 +18,13 @@ import hashlib
 import threading
 import time
 
-from oslo_config import cfg
 import six
 
 from ironic.common import exception
 from ironic.common.i18n import _
+from ironic.conf import CONF
 from ironic.db import api as dbapi
 
-hash_opts = [
-    cfg.IntOpt('hash_partition_exponent',
-               default=5,
-               help=_('Exponent to determine number of hash partitions to use '
-                      'when distributing load across conductors. Larger '
-                      'values will result in more even distribution of load '
-                      'and less load when rebalancing the ring, but more '
-                      'memory usage. Number of partitions per conductor is '
-                      '(2^hash_partition_exponent). This determines the '
-                      'granularity of rebalancing: given 10 hosts, and an '
-                      'exponent of the 2, there are 40 partitions in the ring.'
-                      'A few thousand partitions should make rebalancing '
-                      'smooth in most cases. The default is suitable for up '
-                      'to a few hundred conductors. Too many partitions has a '
-                      'CPU impact.')),
-    cfg.IntOpt('hash_distribution_replicas',
-               default=1,
-               help=_('[Experimental Feature] '
-                      'Number of hosts to map onto each hash partition. '
-                      'Setting this to more than one will cause additional '
-                      'conductor services to prepare deployment environments '
-                      'and potentially allow the Ironic cluster to recover '
-                      'more quickly if a conductor instance is terminated.')),
-    cfg.IntOpt('hash_ring_reset_interval',
-               default=180,
-               help=_('Interval (in seconds) between hash ring resets.')),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(hash_opts)
 
 
 class HashRing(object):
     """A stable hash ring.
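The relocated `hash_partition_exponent` help text contains a worked example: each conductor claims 2^exponent partitions, so 10 hosts with an exponent of 2 give 40 partitions in the ring. As arithmetic (a sketch of the documented formula, not code from the module):

```python
def total_partitions(num_hosts, hash_partition_exponent):
    # Each host contributes 2**exponent partitions to the ring.
    return num_hosts * 2 ** hash_partition_exponent


print(total_partitions(10, 2))  # 40, the example from the option help
print(total_partitions(10, 5))  # 320, with the default exponent of 5
```

Raising the exponent makes load distribution and rebalancing smoother at the cost of memory and CPU, which is the trade-off the help text warns about.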
@@ -20,7 +20,6 @@ import datetime
 import os
 import shutil
 
-from oslo_config import cfg
 from oslo_utils import importutils
 import requests
 import sendfile

@@ -32,52 +31,18 @@ from ironic.common import exception
 from ironic.common.i18n import _
 from ironic.common import keystone
 from ironic.common import utils
+from ironic.conf import CONF
 
 IMAGE_CHUNK_SIZE = 1024 * 1024  # 1mb
 
+_GLANCE_SESSION = None
 
-CONF = cfg.CONF
-# Import this opt early so that it is available when registering
-# glance_opts below.
-CONF.import_opt('my_ip', 'ironic.netconf')
-
-glance_opts = [
-    cfg.StrOpt('glance_host',
-               default='$my_ip',
-               help=_('Default glance hostname or IP address.')),
-    cfg.PortOpt('glance_port',
-                default=9292,
-                help=_('Default glance port.')),
-    cfg.StrOpt('glance_protocol',
-               default='http',
-               choices=['http', 'https'],
-               help=_('Default protocol to use when connecting to glance. '
-                      'Set to https for SSL.')),
-    cfg.ListOpt('glance_api_servers',
-                help=_('A list of the glance api servers available to ironic. '
-                       'Prefix with https:// for SSL-based glance API '
-                       'servers. Format is [hostname|IP]:port.')),
-    cfg.BoolOpt('glance_api_insecure',
-                default=False,
-                help=_('Allow to perform insecure SSL (https) requests to '
-                       'glance.')),
-    cfg.IntOpt('glance_num_retries',
-               default=0,
-               help=_('Number of retries when downloading an image from '
-                      'glance.')),
-    cfg.StrOpt('auth_strategy',
-               default='keystone',
-               choices=['keystone', 'noauth'],
-               help=_('Authentication strategy to use when connecting to '
-                      'glance.')),
-    cfg.StrOpt('glance_cafile',
-               help=_('Optional path to a CA certificate bundle to be used to '
-                      'validate the SSL certificate served by glance. It is '
-                      'used when glance_api_insecure is set to False.')),
-]
-
-CONF.register_opts(glance_opts, group='glance')
+def _get_glance_session():
+    global _GLANCE_SESSION
+    if not _GLANCE_SESSION:
+        _GLANCE_SESSION = keystone.get_session('glance')
+    return _GLANCE_SESSION
 
 
 def import_versioned_module(version, submodule=None):

@@ -92,7 +57,8 @@ def GlanceImageService(client=None, version=1, context=None):
     service_class = getattr(module, 'GlanceImageService')
     if (context is not None and CONF.glance.auth_strategy == 'keystone'
             and not context.auth_token):
-        context.auth_token = keystone.get_admin_auth_token()
+        session = _get_glance_session()
+        context.auth_token = keystone.get_admin_auth_token(session)
     return service_class(client, version, context)
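The new `_get_glance_session()` is a lazy module-level singleton: the keystone session is created on first use and then reused for every subsequent request. The pattern in isolation (a hypothetical `factory` argument stands in for the `keystone.get_session('glance')` call):

```python
_SESSION = None


def _get_session(factory):
    """Create the session on first use and reuse it afterwards."""
    global _SESSION
    if _SESSION is None:
        _SESSION = factory()
    return _SESSION


calls = []


def make_session():
    # Stand-in for an expensive session/connection setup.
    calls.append(1)
    return object()


s1 = _get_session(make_session)
s2 = _get_session(make_session)
print(s1 is s2)    # True: the same session object is returned
print(len(calls))  # 1: the factory ran only once
```

Caching the session this way avoids re-authenticating against Keystone on every Glance call, which is the point of the keystoneauth1 migration this commit is part of.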
@@ -26,7 +26,6 @@ from ironic_lib import disk_utils
 from ironic_lib import utils as ironic_utils
 import jinja2
 from oslo_concurrency import processutils
-from oslo_config import cfg
 from oslo_log import log as logging
 from oslo_utils import fileutils

@@ -35,31 +34,11 @@ from ironic.common.glance_service import service_utils as glance_utils
 from ironic.common.i18n import _
 from ironic.common.i18n import _LE
 from ironic.common import image_service as service
-from ironic.common import paths
 from ironic.common import utils
+from ironic.conf import CONF
 
 LOG = logging.getLogger(__name__)
 
-image_opts = [
-    cfg.BoolOpt('force_raw_images',
-                default=True,
-                help=_('If True, convert backing images to "raw" disk image '
-                       'format.')),
-    cfg.StrOpt('isolinux_bin',
-               default='/usr/lib/syslinux/isolinux.bin',
-               help=_('Path to isolinux binary file.')),
-    cfg.StrOpt('isolinux_config_template',
-               default=paths.basedir_def('common/isolinux_config.template'),
-               help=_('Template file for isolinux configuration file.')),
-    cfg.StrOpt('grub_config_template',
-               default=paths.basedir_def('common/grub_conf.template'),
-               help=_('Template file for grub configuration file.')),
-]
-
-
-CONF = cfg.CONF
-CONF.register_opts(image_opts)
 
 
 def _create_root_fs(root_directory, files_info):
     """Creates a filesystem root in given directory.
@@ -12,141 +12,125 @@
# License for the specific language governing permissions and limitations
# under the License.

from keystoneclient import exceptions as ksexception
from oslo_concurrency import lockutils
from oslo_config import cfg
from six.moves.urllib import parse
"""Central place for handling Keystone authorization and service lookup."""

from keystoneauth1 import exceptions as kaexception
from keystoneauth1 import loading as kaloading
from oslo_log import log as logging
import six
from six.moves.urllib import parse  # for legacy options loading only

from ironic.common import exception
from ironic.common.i18n import _

CONF = cfg.CONF

keystone_opts = [
    cfg.StrOpt('region_name',
               help=_('The region used for getting endpoints of OpenStack'
                      ' services.')),
]

CONF.register_opts(keystone_opts, group='keystone')
CONF.import_group('keystone_authtoken', 'keystonemiddleware.auth_token')

_KS_CLIENT = None
from ironic.common.i18n import _LE
from ironic.conf import auth as ironic_auth
from ironic.conf import CONF


LOG = logging.getLogger(__name__)


# FIXME(pas-ha): for backward compat with legacy options loading only
def _is_apiv3(auth_url, auth_version):
    """Checks if V3 version of API is being used or not.
    """Check if V3 version of API is being used or not.

    This method inspects auth_url and auth_version, and checks whether V3
    version of the API is being used or not.

    When no auth_version is specified and auth_url is not a versioned
    endpoint, v2.0 is assumed.
    :param auth_url: a http or https url to be inspected (like
        'http://127.0.0.1:9898/').
    :param auth_version: a string containing the version (like 'v2', 'v3.0')
        or None
    :returns: True if V3 of the API is being used.
    """
    return auth_version == 'v3.0' or '/v3' in parse.urlparse(auth_url).path


def _get_ksclient(token=None):
    auth_url = CONF.keystone_authtoken.auth_uri
    if not auth_url:
        raise exception.KeystoneFailure(_('Keystone API endpoint is missing'))

    auth_version = CONF.keystone_authtoken.auth_version
    api_v3 = _is_apiv3(auth_url, auth_version)

    if api_v3:
        from keystoneclient.v3 import client
    else:
        from keystoneclient.v2_0 import client

    auth_url = get_keystone_url(auth_url, auth_version)
    try:
        if token:
            return client.Client(token=token, auth_url=auth_url)
        else:
            params = {'username': CONF.keystone_authtoken.admin_user,
                      'password': CONF.keystone_authtoken.admin_password,
                      'tenant_name': CONF.keystone_authtoken.admin_tenant_name,
                      'region_name': CONF.keystone.region_name,
                      'auth_url': auth_url}
            return _get_ksclient_from_conf(client, **params)
    except ksexception.Unauthorized:
        raise exception.KeystoneUnauthorized()
    except ksexception.AuthorizationFailure as err:
        raise exception.KeystoneFailure(_('Could not authorize in Keystone:'
                                          ' %s') % err)
def ks_exceptions(f):
    """Wraps keystoneclient functions and centralizes exception handling."""
    @six.wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except kaexception.EndpointNotFound:
            service_type = kwargs.get('service_type', 'baremetal')
            endpoint_type = kwargs.get('endpoint_type', 'internal')
            raise exception.CatalogNotFound(
                service_type=service_type, endpoint_type=endpoint_type)
        except (kaexception.Unauthorized, kaexception.AuthorizationFailure):
            raise exception.KeystoneUnauthorized()
        except (kaexception.NoMatchingPlugin,
                kaexception.MissingRequiredOptions) as e:
            raise exception.ConfigInvalid(six.text_type(e))
        except Exception as e:
            LOG.exception(_LE('Keystone request failed: %(msg)s'),
                          {'msg': six.text_type(e)})
            raise exception.KeystoneFailure(six.text_type(e))
    return wrapper


@lockutils.synchronized('keystone_client', 'ironic-')
def _get_ksclient_from_conf(client, **params):
    global _KS_CLIENT
    # NOTE(yuriyz): use Keystone client default gap, to determine whether the
    # given token is about to expire
    if _KS_CLIENT is None or _KS_CLIENT.auth_ref.will_expire_soon():
        _KS_CLIENT = client.Client(**params)
    return _KS_CLIENT
@ks_exceptions
def get_session(group):
    auth = ironic_auth.load_auth(CONF, group) or _get_legacy_auth()
    if not auth:
        msg = _("Failed to load auth from either [%(new)s] or [%(old)s] "
                "config sections.")
        raise exception.ConfigInvalid(message=msg, new=group,
                                      old=ironic_auth.LEGACY_SECTION)
    session = kaloading.load_session_from_conf_options(
        CONF, group, auth=auth)
    return session


def get_keystone_url(auth_url, auth_version):
    """Gives an http/https url to contact keystone.
# FIXME(pas-ha) remove legacy path after deprecation
def _get_legacy_auth():
    """Load auth from keystone_authtoken config section

    Given an auth_url and auth_version, this method generates the url in
    which keystone can be reached.

    :param auth_url: a http or https url to be inspected (like
        'http://127.0.0.1:9898/').
    :param auth_version: a string containing the version (like v2, v3.0, etc)
    :returns: a string containing the keystone url
    Used only to provide backward compatibility with old configs.
    """
    api_v3 = _is_apiv3(auth_url, auth_version)
    api_version = 'v3' if api_v3 else 'v2.0'
    # NOTE(lucasagomes): Get rid of the trailing '/' otherwise urljoin()
    # fails to override the version in the URL
    return parse.urljoin(auth_url.rstrip('/'), api_version)
    conf = getattr(CONF, ironic_auth.LEGACY_SECTION)
    legacy_loader = kaloading.get_plugin_loader('password')
    auth_params = {
        'auth_url': conf.auth_uri,
        'username': conf.admin_user,
        'password': conf.admin_password,
        'tenant_name': conf.admin_tenant_name
    }
    api_v3 = _is_apiv3(conf.auth_uri, conf.auth_version)
    if api_v3:
        # NOTE(pas-ha): mimic defaults of keystoneclient
        auth_params.update({
            'project_domain_id': 'default',
            'user_domain_id': 'default',
        })
    return legacy_loader.load_from_options(**auth_params)


def get_service_url(service_type='baremetal', endpoint_type='internal'):
@ks_exceptions
def get_service_url(session, service_type='baremetal',
                    endpoint_type='internal'):
    """Wrapper for get service url from keystone service catalog.

    Given a service_type and an endpoint_type, this method queries keystone
    service catalog and provides the url for the desired endpoint.
    Given a service_type and an endpoint_type, this method queries
    keystone service catalog and provides the url for the desired
    endpoint.

    :param service_type: the keystone service for which url is required.
    :param endpoint_type: the type of endpoint for the service.
    :returns: an http/https url for the desired endpoint.
    """
    ksclient = _get_ksclient()

    if not ksclient.has_service_catalog():
        raise exception.KeystoneFailure(_('No Keystone service catalog '
                                          'loaded'))

    try:
        endpoint = ksclient.service_catalog.url_for(
            service_type=service_type,
            endpoint_type=endpoint_type,
            region_name=CONF.keystone.region_name)

    except ksexception.EndpointNotFound:
        raise exception.CatalogNotFound(service_type=service_type,
                                        endpoint_type=endpoint_type)

    return endpoint
    return session.get_endpoint(service_type=service_type,
                                interface_type=endpoint_type,
                                region=CONF.keystone.region_name)


def get_admin_auth_token():
    """Get an admin auth_token from the Keystone."""
    ksclient = _get_ksclient()
    return ksclient.auth_token
@ks_exceptions
def get_admin_auth_token(session):
    """Get admin token.


def token_expires_soon(token, duration=None):
    """Determines if token expiration is about to occur.

    :param duration: time interval in seconds
    :returns: boolean : true if expiration is within the given duration
    Currently used for inspector, glance and swift clients.
    Only swift client does not actually support using sessions directly,
    LP #1518938, others will be updated in ironic code.
    """
    ksclient = _get_ksclient(token=token)
    return ksclient.auth_ref.will_expire_soon(stale_duration=duration)
    return session.get_token()
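The new `ks_exceptions` decorator above centralizes the translation of keystoneauth errors into ironic's own exception types. A minimal standalone sketch of the same wrapping pattern, with illustrative exception classes standing in for the `keystoneauth1` and `ironic.common.exception` ones (names here are placeholders, not the real APIs):

```python
import functools


class EndpointNotFound(Exception):
    """Stand-in for keystoneauth1.exceptions.EndpointNotFound."""


class CatalogNotFound(Exception):
    """Stand-in for ironic.common.exception.CatalogNotFound."""


def ks_exceptions(f):
    """Translate library-level errors into application-level ones."""
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except EndpointNotFound:
            # Re-raise with the context the caller cares about.
            raise CatalogNotFound(kwargs.get('service_type', 'baremetal'))
        except Exception as e:
            # Catch-all: surface anything unexpected as one app-level type.
            raise RuntimeError(str(e))
    return wrapper


@ks_exceptions
def lookup(service_type='baremetal'):
    # Simulate the library failing to find an endpoint.
    raise EndpointNotFound()
```

The payoff is that callers only ever handle application exceptions, never the underlying client library's.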
@@ -32,12 +32,20 @@ def get_node_vif_ids(task):
    portgroup_vifs = {}
    port_vifs = {}
    for portgroup in task.portgroups:
        vif = portgroup.extra.get('vif_port_id')
        # NOTE(vdrok): We are booting the node only in one network at a time,
        # and presence of cleaning_vif_port_id means we're doing cleaning, of
        # provisioning_vif_port_id - provisioning. Otherwise it's a tenant
        # network
        vif = (portgroup.internal_info.get('cleaning_vif_port_id') or
               portgroup.internal_info.get('provisioning_vif_port_id') or
               portgroup.extra.get('vif_port_id'))
        if vif:
            portgroup_vifs[portgroup.uuid] = vif
    vifs['portgroups'] = portgroup_vifs
    for port in task.ports:
        vif = port.extra.get('vif_port_id')
        vif = (port.internal_info.get('cleaning_vif_port_id') or
               port.internal_info.get('provisioning_vif_port_id') or
               port.extra.get('vif_port_id'))
        if vif:
            port_vifs[port.uuid] = vif
    vifs['ports'] = port_vifs
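The chained `or` lookup above gives cleaning and provisioning VIFs precedence over the legacy `extra['vif_port_id']` value. A small illustrative sketch of that precedence with plain dicts (field names copied from the code above; the helper name is ours):

```python
def pick_vif(internal_info, extra):
    """Return the active VIF: cleaning wins, then provisioning, then legacy."""
    return (internal_info.get('cleaning_vif_port_id') or
            internal_info.get('provisioning_vif_port_id') or
            extra.get('vif_port_id'))
```

Because `or` short-circuits on the first truthy value, a port in cleaning always reports its cleaning VIF even if a stale tenant VIF remains in `extra`.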
@@ -0,0 +1,259 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from neutronclient.common import exceptions as neutron_exceptions
from neutronclient.v2_0 import client as clientv20
from oslo_log import log

from ironic.common import exception
from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common.i18n import _LI
from ironic.common.i18n import _LW
from ironic.common import keystone
from ironic.conf import CONF

LOG = log.getLogger(__name__)

DEFAULT_NEUTRON_URL = 'http://%s:9696' % CONF.my_ip

_NEUTRON_SESSION = None


def _get_neutron_session():
    global _NEUTRON_SESSION
    if not _NEUTRON_SESSION:
        _NEUTRON_SESSION = keystone.get_session('neutron')
    return _NEUTRON_SESSION


def get_client(token=None):
    params = {'retries': CONF.neutron.retries}
    url = CONF.neutron.url
    if CONF.neutron.auth_strategy == 'noauth':
        params['endpoint_url'] = url or DEFAULT_NEUTRON_URL
        params['auth_strategy'] = 'noauth'
        params.update({
            'timeout': CONF.neutron.url_timeout or CONF.neutron.timeout,
            'insecure': CONF.neutron.insecure,
            'ca_cert': CONF.neutron.cafile})
    else:
        session = _get_neutron_session()
        if token is None:
            params['session'] = session
            # NOTE(pas-ha) endpoint_override==None will auto-discover
            # endpoint from Keystone catalog.
            # Region is needed only in this case.
            # SSL related options are ignored as they are already embedded
            # in keystoneauth Session object
            if url:
                params['endpoint_override'] = url
            else:
                params['region_name'] = CONF.keystone.region_name
        else:
            params['token'] = token
            params['endpoint_url'] = url or keystone.get_service_url(
                session, service_type='network')
            params.update({
                'timeout': CONF.neutron.url_timeout or CONF.neutron.timeout,
                'insecure': CONF.neutron.insecure,
                'ca_cert': CONF.neutron.cafile})

    return clientv20.Client(**params)


def add_ports_to_network(task, network_uuid, is_flat=False):
    """Create neutron ports to boot the ramdisk.

    Create neutron ports for each pxe_enabled port on task.node to boot
    the ramdisk.

    :param task: a TaskManager instance.
    :param network_uuid: UUID of a neutron network where ports will be
        created.
    :param is_flat: Indicates whether it is a flat network or not.
    :raises: NetworkError
    :returns: a dictionary in the form {port.uuid: neutron_port['id']}
    """
    client = get_client(task.context.auth_token)
    node = task.node

    LOG.debug('For node %(node)s, creating neutron ports on network '
              '%(network_uuid)s using %(net_iface)s network interface.',
              {'net_iface': task.driver.network.__class__.__name__,
               'node': node.uuid, 'network_uuid': network_uuid})
    body = {
        'port': {
            'network_id': network_uuid,
            'admin_state_up': True,
            'binding:vnic_type': 'baremetal',
            'device_owner': 'baremetal:none',
        }
    }

    if not is_flat:
        # NOTE(vdrok): It seems that change
        # I437290affd8eb87177d0626bf7935a165859cbdd to neutron broke the
        # possibility to always bind port. Set binding:host_id only in
        # case of non flat network.
        body['port']['binding:host_id'] = node.uuid

    # Since instance_uuid will not be available during cleaning
    # operations, we need to check that and populate them only when
    # available
    body['port']['device_id'] = node.instance_uuid or node.uuid

    ports = {}
    failures = []
    portmap = get_node_portmap(task)
    pxe_enabled_ports = [p for p in task.ports if p.pxe_enabled]
    for ironic_port in pxe_enabled_ports:
        body['port']['mac_address'] = ironic_port.address
        binding_profile = {'local_link_information':
                           [portmap[ironic_port.uuid]]}
        body['port']['binding:profile'] = binding_profile
        try:
            port = client.create_port(body)
        except neutron_exceptions.NeutronClientException as e:
            rollback_ports(task, network_uuid)
            msg = (_('Could not create neutron port for ironic port '
                     '%(ir-port)s on given network %(net)s from node '
                     '%(node)s. %(exc)s') %
                   {'net': network_uuid, 'node': node.uuid,
                    'ir-port': ironic_port.uuid, 'exc': e})
            LOG.exception(msg)
            raise exception.NetworkError(msg)

        try:
            ports[ironic_port.uuid] = port['port']['id']
        except KeyError:
            failures.append(ironic_port.uuid)

    if failures:
        if len(failures) == len(pxe_enabled_ports):
            raise exception.NetworkError(_(
                "Failed to update vif_port_id for any PXE enabled port "
                "on node %s.") % node.uuid)
        else:
            LOG.warning(_LW("Some errors were encountered when updating "
                            "vif_port_id for node %(node)s on "
                            "the following ports: %(ports)s."),
                        {'node': node.uuid, 'ports': failures})
    else:
        LOG.info(_LI('Successfully created ports for node %(node_uuid)s in '
                     'network %(net)s.'),
                 {'node_uuid': node.uuid, 'net': network_uuid})

    return ports


def remove_ports_from_network(task, network_uuid):
    """Deletes the neutron ports created for booting the ramdisk.

    :param task: a TaskManager instance.
    :param network_uuid: UUID of a neutron network ports will be deleted from.
    :raises: NetworkError
    """
    macs = [p.address for p in task.ports if p.pxe_enabled]
    if macs:
        params = {
            'network_id': network_uuid,
            'mac_address': macs,
        }
        LOG.debug("Removing ports on network %(net)s on node %(node)s.",
                  {'net': network_uuid, 'node': task.node.uuid})

        remove_neutron_ports(task, params)


def remove_neutron_ports(task, params):
    """Deletes the neutron ports matched by params.

    :param task: a TaskManager instance.
    :param params: Dict of params to filter ports.
    :raises: NetworkError
    """
    client = get_client(task.context.auth_token)
    node_uuid = task.node.uuid

    try:
        response = client.list_ports(**params)
    except neutron_exceptions.NeutronClientException as e:
        msg = (_('Could not get given network VIF for %(node)s '
                 'from neutron, possible network issue. %(exc)s') %
               {'node': node_uuid, 'exc': e})
        LOG.exception(msg)
        raise exception.NetworkError(msg)

    ports = response.get('ports', [])
    if not ports:
        LOG.debug('No ports to remove for node %s', node_uuid)
        return

    for port in ports:
        if not port['id']:
            # TODO(morgabra) client.list_ports() sometimes returns
            # port objects with null ids. It's unclear why this happens.
            LOG.warning(_LW("Deleting neutron port failed, missing 'id'. "
                            "Node: %(node)s, neutron port: %(port)s."),
                        {'node': node_uuid, 'port': port})
            continue

        LOG.debug('Deleting neutron port %(vif_port_id)s of node '
                  '%(node_id)s.',
                  {'vif_port_id': port['id'], 'node_id': node_uuid})

        try:
            client.delete_port(port['id'])
        except neutron_exceptions.NeutronClientException as e:
            msg = (_('Could not remove VIF %(vif)s of node %(node)s, possibly '
                     'a network issue: %(exc)s') %
                   {'vif': port['id'], 'node': node_uuid, 'exc': e})
            LOG.exception(msg)
            raise exception.NetworkError(msg)

    LOG.info(_LI('Successfully removed node %(node_uuid)s neutron ports.'),
             {'node_uuid': node_uuid})


def get_node_portmap(task):
    """Extract the switch port information for the node.

    :param task: a task containing the Node object.
    :returns: a dictionary in the form {port.uuid: port.local_link_connection}
    """

    portmap = {}
    for port in task.ports:
        portmap[port.uuid] = port.local_link_connection
    return portmap
    # TODO(jroll) raise InvalidParameterValue if a port doesn't have the
    # necessary info? (probably)


def rollback_ports(task, network_uuid):
    """Attempts to delete any ports created by cleaning/provisioning

    Purposefully will not raise any exceptions so error handling can
    continue.

    :param task: a TaskManager instance.
    :param network_uuid: UUID of a neutron network.
    """
    try:
        remove_ports_from_network(task, network_uuid)
    except exception.NetworkError:
        # Only log the error
        LOG.exception(_LE(
            'Failed to rollback port changes for node %(node)s '
            'on network %(network)s'), {'node': task.node.uuid,
                                        'network': network_uuid})
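`get_node_portmap` above is a plain projection from a node's ports to their `local_link_connection` data, which neutron then receives as `local_link_information` in the binding profile. A sketch of the resulting mapping shape, using a hypothetical stub in place of the real ironic Port object:

```python
import collections

# Hypothetical stand-in for an ironic Port; real ports carry many more fields.
Port = collections.namedtuple('Port', ['uuid', 'local_link_connection'])


def get_node_portmap(ports):
    """Map each port UUID to its switch connectivity info."""
    return {p.uuid: p.local_link_connection for p in ports}
```

Each value is the switch-side description (switch ID plus switch port) that lets neutron's ML2 mechanism drivers configure the physical port for the baremetal node.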
@@ -17,27 +17,7 @@

import os

from oslo_config import cfg

from ironic.common.i18n import _

path_opts = [
    cfg.StrOpt('pybasedir',
               default=os.path.abspath(os.path.join(os.path.dirname(__file__),
                                                    '../')),
               sample_default='/usr/lib/python/site-packages/ironic/ironic',
               help=_('Directory where the ironic python module is '
                      'installed.')),
    cfg.StrOpt('bindir',
               default='$pybasedir/bin',
               help=_('Directory where ironic binaries are installed.')),
    cfg.StrOpt('state_path',
               default='$pybasedir',
               help=_("Top-level directory for maintaining ironic's state.")),
]

CONF = cfg.CONF
CONF.register_opts(path_opts)
from ironic.conf import CONF


def basedir_def(*args):
@ -17,10 +17,165 @@
|
|||
|
||||
from oslo_concurrency import lockutils
|
||||
from oslo_config import cfg
|
||||
from oslo_log import log
|
||||
from oslo_policy import policy
|
||||
|
||||
from ironic.common import exception
|
||||
from ironic.common.i18n import _LW
|
||||
|
||||
_ENFORCER = None
|
||||
CONF = cfg.CONF
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
default_policies = [
|
||||
# Legacy setting, don't remove. Likely to be overridden by operators who
|
||||
# forget to update their policy.json configuration file.
|
||||
# This gets rolled into the new "is_admin" rule below.
|
||||
policy.RuleDefault('admin_api',
|
||||
'role:admin or role:administrator',
|
||||
description='Legacy rule for cloud admin access'),
|
||||
# is_public_api is set in the environment from AuthTokenMiddleware
|
||||
policy.RuleDefault('public_api',
|
||||
'is_public_api:True',
|
||||
description='Internal flag for public API routes'),
|
||||
# Generic default to hide passwords
|
||||
policy.RuleDefault('show_password',
|
||||
'!',
|
||||
description='Show or mask passwords in API responses'),
|
||||
# Roles likely to be overriden by operator
|
||||
policy.RuleDefault('is_member',
|
||||
'tenant:demo or tenant:baremetal',
|
||||
description='May be used to restrict access to specific tenants'), # noqa
|
||||
policy.RuleDefault('is_observer',
|
||||
'rule:is_member and (role:observer or role:baremetal_observer)', # noqa
|
||||
description='Read-only API access'),
|
||||
policy.RuleDefault('is_admin',
|
||||
'rule:admin_api or (rule:is_member and role:baremetal_admin)', # noqa
|
||||
description='Full read/write API access'),
|
||||
]
|
||||
|
||||
# NOTE(deva): to follow policy-in-code spec, we define defaults for
|
||||
# the granular policies in code, rather than in policy.json.
|
||||
# All of these may be overridden by configuration, but we can
|
||||
# depend on their existence throughout the code.
|
||||
|
||||
node_policies = [
|
||||
policy.RuleDefault('baremetal:node:get',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='Retrieve Node records'),
|
||||
policy.RuleDefault('baremetal:node:get_boot_device',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='Retrieve Node boot device metadata'),
|
||||
policy.RuleDefault('baremetal:node:get_states',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='View Node power and provision state'),
|
||||
policy.RuleDefault('baremetal:node:create',
|
||||
'rule:is_admin',
|
||||
description='Create Node records'),
|
||||
policy.RuleDefault('baremetal:node:delete',
|
||||
'rule:is_admin',
|
||||
description='Delete Node records'),
|
||||
policy.RuleDefault('baremetal:node:update',
|
||||
'rule:is_admin',
|
||||
description='Update Node records'),
|
||||
policy.RuleDefault('baremetal:node:validate',
|
||||
'rule:is_admin',
|
||||
description='Request active validation of Nodes'),
|
||||
policy.RuleDefault('baremetal:node:set_maintenance',
|
||||
'rule:is_admin',
|
||||
description='Set maintenance flag, taking a Node '
|
||||
'out of service'),
|
||||
policy.RuleDefault('baremetal:node:clear_maintenance',
|
||||
'rule:is_admin',
|
||||
description='Clear maintenance flag, placing the Node '
|
||||
'into service again'),
|
||||
policy.RuleDefault('baremetal:node:set_boot_device',
|
||||
'rule:is_admin',
|
||||
description='Change Node boot device'),
|
||||
policy.RuleDefault('baremetal:node:set_power_state',
|
||||
'rule:is_admin',
|
||||
description='Change Node power status'),
|
||||
policy.RuleDefault('baremetal:node:set_provision_state',
|
||||
'rule:is_admin',
|
||||
description='Change Node provision status'),
|
||||
policy.RuleDefault('baremetal:node:set_raid_state',
|
||||
'rule:is_admin',
|
||||
description='Change Node RAID status'),
|
||||
policy.RuleDefault('baremetal:node:get_console',
|
||||
'rule:is_admin',
|
||||
description='Get Node console connection information'),
|
||||
policy.RuleDefault('baremetal:node:set_console_state',
|
||||
'rule:is_admin',
|
||||
description='Change Node console status'),
|
||||
]
|
||||
|
||||
port_policies = [
|
||||
policy.RuleDefault('baremetal:port:get',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='Retrieve Port records'),
|
||||
policy.RuleDefault('baremetal:port:create',
|
||||
'rule:is_admin',
|
||||
description='Create Port records'),
|
||||
policy.RuleDefault('baremetal:port:delete',
|
||||
'rule:is_admin',
|
||||
description='Delete Port records'),
|
||||
policy.RuleDefault('baremetal:port:update',
|
||||
'rule:is_admin',
|
||||
description='Update Port records'),
|
||||
]
|
||||
|
||||
chassis_policies = [
|
||||
policy.RuleDefault('baremetal:chassis:get',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='Retrieve Chassis records'),
|
||||
policy.RuleDefault('baremetal:chassis:create',
|
||||
'rule:is_admin',
|
||||
description='Create Chassis records'),
|
||||
policy.RuleDefault('baremetal:chassis:delete',
|
||||
'rule:is_admin',
|
||||
description='Delete Chassis records'),
|
||||
policy.RuleDefault('baremetal:chassis:update',
|
||||
'rule:is_admin',
|
||||
description='Update Chassis records'),
|
||||
]
|
||||
|
||||
driver_policies = [
|
||||
policy.RuleDefault('baremetal:driver:get',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='View list of available drivers'),
|
||||
policy.RuleDefault('baremetal:driver:get_properties',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='View driver-specific properties'),
|
||||
policy.RuleDefault('baremetal:driver:get_raid_logical_disk_properties',
|
||||
'rule:is_admin or rule:is_observer',
|
||||
description='View driver-specific RAID metadata'),
|
||||
|
||||
]
|
||||
|
||||
extra_policies = [
|
||||
policy.RuleDefault('baremetal:node:vendor_passthru',
|
||||
'rule:is_admin',
|
||||
description='Access vendor-specific Node functions'),
|
||||
policy.RuleDefault('baremetal:driver:vendor_passthru',
|
||||
'rule:is_admin',
|
||||
description='Access vendor-specific Driver functions'),
|
||||
policy.RuleDefault('baremetal:node:ipa_heartbeat',
|
||||
'rule:public_api',
|
||||
description='Send heartbeats from IPA ramdisk'),
|
||||
policy.RuleDefault('baremetal:driver:ipa_lookup',
|
||||
'rule:public_api',
|
||||
description='Access IPA ramdisk functions'),
|
||||
]
|
||||
|
||||
|
||||
def list_policies():
|
||||
policies = (default_policies
|
||||
+ node_policies
|
||||
+ port_policies
|
||||
+ chassis_policies
|
||||
+ driver_policies
|
||||
+ extra_policies)
|
||||
return policies
|
||||
|
||||
|
||||
@lockutils.synchronized('policy_enforcer', 'ironic-')
|
||||
|
@ -29,10 +184,11 @@ def init_enforcer(policy_file=None, rules=None,
|
|||
"""Synchronously initializes the policy enforcer
|
||||
|
||||
:param policy_file: Custom policy file to use, if none is specified,
|
||||
`CONF.policy_file` will be used.
|
||||
`CONF.oslo_policy.policy_file` will be used.
|
||||
:param rules: Default dictionary / Rules to use. It will be
|
||||
considered just in the first instantiation.
|
||||
:param default_rule: Default rule to use, CONF.default_rule will
|
||||
:param default_rule: Default rule to use,
|
||||
CONF.oslo_policy.policy_default_rule will
|
||||
be used if none is specified.
|
||||
:param use_conf: Whether to load rules from config file.
|
||||
|
||||
|
@ -42,10 +198,15 @@ def init_enforcer(policy_file=None, rules=None,
|
|||
if _ENFORCER:
|
||||
return
|
||||
|
||||
# NOTE(deva): Register defaults for policy-in-code here so that they are
|
||||
# loaded exactly once - when this module-global is initialized.
|
||||
# Defining these in the relevant API modules won't work
|
||||
# because API classes lack singletons and don't use globals.
|
||||
        _ENFORCER = policy.Enforcer(CONF, policy_file=policy_file,
                                    rules=rules,
                                    default_rule=default_rule,
                                    use_conf=use_conf)
        _ENFORCER.register_defaults(list_policies())


def get_enforcer():

@@ -57,12 +218,57 @@ def get_enforcer():
    return _ENFORCER


# NOTE(deva): We can't call these methods from within decorators because the
# 'target' and 'creds' parameter must be fetched from the call time
# context-local pecan.request magic variable, but decorators are compiled
# at module-load time.


def authorize(rule, target, creds, *args, **kwargs):
    """A shortcut for policy.Enforcer.authorize()

    Checks authorization of a rule against the target and credentials, and
    raises an exception if the rule is not defined.
    Always returns true if CONF.auth_strategy == noauth.

    Beginning with the Newton cycle, this should be used in place of 'enforce'.
    """
    if CONF.auth_strategy == 'noauth':
        return True
    enforcer = get_enforcer()
    try:
        return enforcer.authorize(rule, target, creds, do_raise=True,
                                  *args, **kwargs)
    except policy.PolicyNotAuthorized:
        raise exception.HTTPForbidden(resource=rule)


def check(rule, target, creds, *args, **kwargs):
    """A shortcut for policy.Enforcer.enforce()

    Checks authorization of a rule against the target and credentials
    and returns True or False.
    """
    enforcer = get_enforcer()
    return enforcer.enforce(rule, target, creds, *args, **kwargs)


def enforce(rule, target, creds, do_raise=False, exc=None, *args, **kwargs):
    """A shortcut for policy.Enforcer.enforce()

    Checks authorization of a rule against the target and credentials.
    Always returns true if CONF.auth_strategy == noauth.

    """
    # NOTE(deva): this method is obsoleted by authorize(), but retained for
    # backwards compatibility in case it has been used downstream.
    # It may be removed in the 'P' cycle.
    LOG.warning(_LW(
        "Deprecation warning: calls to ironic.common.policy.enforce() "
        "should be replaced with authorize(). This method may be removed "
        "in a future release."))
    if CONF.auth_strategy == 'noauth':
        return True
    enforcer = get_enforcer()
    return enforcer.enforce(rule, target, creds, do_raise=do_raise,
                            exc=exc, *args, **kwargs)
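The `authorize()` and `enforce()` shortcuts above both short-circuit to `True` when `auth_strategy` is `noauth`, and otherwise delegate to an oslo.policy enforcer that raises on failure. A minimal stdlib-only sketch of that control flow (the `RULES` table and `PolicyNotAuthorized` class here are hypothetical stand-ins, not ironic's real oslo.policy machinery):

```python
# Hypothetical rule table: rule name -> predicate over the credentials dict.
RULES = {'baremetal:node:get': lambda creds: creds.get('is_admin', False)}


class PolicyNotAuthorized(Exception):
    pass


def authorize(rule, target, creds, auth_strategy='keystone'):
    # Mirror the noauth short-circuit: always allow when auth is disabled.
    if auth_strategy == 'noauth':
        return True
    checker = RULES.get(rule)
    # Unknown rules and failed checks both raise, as do_raise=True does.
    if checker is None or not checker(creds):
        raise PolicyNotAuthorized(rule)
    return True
```

This is only a sketch of the short-circuit ordering; the real code consults an `oslo_policy.Enforcer` built from policy files.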
@@ -30,12 +30,7 @@ ALLOWED_EXMODS = [
]
EXTRA_EXMODS = []

# NOTE(lucasagomes): The ironic.openstack.common.rpc entries are for
# backwards compat with IceHouse rpc_backend configuration values.
TRANSPORT_ALIASES = {
    'ironic.openstack.common.rpc.impl_kombu': 'rabbit',
    'ironic.openstack.common.rpc.impl_qpid': 'qpid',
    'ironic.openstack.common.rpc.impl_zmq': 'zmq',
    'ironic.rpc.impl_kombu': 'rabbit',
    'ironic.rpc.impl_qpid': 'qpid',
    'ironic.rpc.impl_zmq': 'zmq',
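`TRANSPORT_ALIASES` maps legacy IceHouse-era `rpc_backend` module paths onto the modern short backend names; resolving a configured value is a dict lookup with pass-through for values that are already short names. A sketch (the `resolve_backend` helper is illustrative, not part of ironic):

```python
# Same alias table as in ironic.common.rpc above.
TRANSPORT_ALIASES = {
    'ironic.openstack.common.rpc.impl_kombu': 'rabbit',
    'ironic.openstack.common.rpc.impl_qpid': 'qpid',
    'ironic.openstack.common.rpc.impl_zmq': 'zmq',
    'ironic.rpc.impl_kombu': 'rabbit',
    'ironic.rpc.impl_qpid': 'qpid',
    'ironic.rpc.impl_zmq': 'zmq',
}


def resolve_backend(rpc_backend):
    # Legacy module paths are translated via the alias table; modern short
    # names ('rabbit', 'zmq', ...) fall through unchanged.
    return TRANSPORT_ALIASES.get(rpc_backend, rpc_backend)
```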
@@ -15,10 +15,8 @@
# under the License.

import signal
import socket

from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log
import oslo_messaging as messaging
from oslo_service import service

@@ -33,26 +31,12 @@ from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common.i18n import _LI
from ironic.common import rpc
from ironic.conf import CONF
from ironic import objects
from ironic.objects import base as objects_base


service_opts = [
    cfg.StrOpt('host',
               default=socket.getfqdn(),
               sample_default='localhost',
               help=_('Name of this node. This can be an opaque identifier. '
                      'It is not necessarily a hostname, FQDN, or IP address. '
                      'However, the node name must be valid within '
                      'an AMQP key, and if using ZeroMQ, a valid '
                      'hostname, FQDN, or IP address.')),
]

CONF = cfg.CONF
LOG = log.getLogger(__name__)

CONF.register_opts(service_opts)


class RPCService(service.Service):

@@ -124,7 +108,6 @@ def prepare_service(argv=None):
        'qpid.messaging=INFO',
        'oslo_messaging=INFO',
        'sqlalchemy=WARNING',
        'keystoneclient=INFO',
        'stevedore=INFO',
        'eventlet.wsgi.server=INFO',
        'iso8601=WARNING',
@@ -14,7 +14,7 @@
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
import six
from six.moves import http_client
from six.moves.urllib import parse
from swiftclient import client as swift_client

@@ -24,72 +24,41 @@ from swiftclient import utils as swift_utils
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import keystone

swift_opts = [
    cfg.IntOpt('swift_max_retries',
               default=2,
               help=_('Maximum number of times to retry a Swift request, '
                      'before failing.'))
]
from ironic.conf import CONF


CONF = cfg.CONF
CONF.register_opts(swift_opts, group='swift')
_SWIFT_SESSION = None

CONF.import_opt('admin_user', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')
CONF.import_opt('admin_tenant_name', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')
CONF.import_opt('admin_password', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')
CONF.import_opt('auth_uri', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')
CONF.import_opt('auth_version', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')
CONF.import_opt('insecure', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')
CONF.import_opt('cafile', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')
CONF.import_opt('region_name', 'keystonemiddleware.auth_token',
                group='keystone_authtoken')


def _get_swift_session():
    global _SWIFT_SESSION
    if not _SWIFT_SESSION:
        _SWIFT_SESSION = keystone.get_session('swift')
    return _SWIFT_SESSION


class SwiftAPI(object):
    """API for communicating with Swift."""

    def __init__(self,
                 user=None,
                 tenant_name=None,
                 key=None,
                 auth_url=None,
                 auth_version=None,
                 region_name=None):
        """Constructor for creating a SwiftAPI object.

        :param user: the name of the user for Swift account
        :param tenant_name: the name of the tenant for Swift account
        :param key: the 'password' or key to authenticate with
        :param auth_url: the url for authentication
        :param auth_version: the version of api to use for authentication
        :param region_name: the region used for getting endpoints of swift
        """
        user = user or CONF.keystone_authtoken.admin_user
        tenant_name = tenant_name or CONF.keystone_authtoken.admin_tenant_name
        key = key or CONF.keystone_authtoken.admin_password
        auth_url = auth_url or CONF.keystone_authtoken.auth_uri
        auth_version = auth_version or CONF.keystone_authtoken.auth_version
        auth_url = keystone.get_keystone_url(auth_url, auth_version)
        params = {'retries': CONF.swift.swift_max_retries,
                  'insecure': CONF.keystone_authtoken.insecure,
                  'cacert': CONF.keystone_authtoken.cafile,
                  'user': user,
                  'tenant_name': tenant_name,
                  'key': key,
                  'authurl': auth_url,
                  'auth_version': auth_version}
        region_name = region_name or CONF.keystone_authtoken.region_name
        if region_name:
            params['os_options'] = {'region_name': region_name}
    def __init__(self):
        # TODO(pas-ha): swiftclient does not support keystone sessions ATM.
        # Must be reworked when LP bug #1518938 is fixed.
        session = _get_swift_session()
        params = {
            'retries': CONF.swift.swift_max_retries,
            'preauthurl': keystone.get_service_url(
                session,
                service_type='object-store'),
            'preauthtoken': keystone.get_admin_auth_token(session)
        }
        # NOTE(pas-ha):session.verify is for HTTPS urls and can be
        # - False (do not verify)
        # - True (verify but try to locate system CA certificates)
        # - Path (verify using specific CA certificate)
        verify = session.verify
        params['insecure'] = not verify
        if verify and isinstance(verify, six.string_types):
            params['cacert'] = verify

        self.connection = swift_client.Connection(**params)

@@ -142,8 +111,7 @@ class SwiftAPI(object):
            raise exception.SwiftOperationError(operation=operation,
                                                error=e)

        storage_url, token = self.connection.get_auth()
        parse_result = parse.urlparse(storage_url)
        parse_result = parse.urlparse(self.connection.url)
        swift_object_path = '/'.join((parse_result.path, container, object))
        temp_url_key = account_info['x-account-meta-temp-url-key']
        url_path = swift_utils.generate_temp_url(swift_object_path, timeout,
@@ -30,7 +30,6 @@ import tempfile

import netaddr
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import timeutils
import paramiko

@@ -41,21 +40,7 @@ from ironic.common import exception
from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common.i18n import _LW

utils_opts = [
    cfg.StrOpt('rootwrap_config',
               default="/etc/ironic/rootwrap.conf",
               help=_('Path to the rootwrap configuration file to use for '
                      'running commands as root.')),
    cfg.StrOpt('tempdir',
               default=tempfile.gettempdir(),
               sample_default='/tmp',
               help=_('Temporary working directory, default is Python temp '
                      'dir.')),
]

CONF = cfg.CONF
CONF.register_opts(utils_opts)
from ironic.conf import CONF

LOG = logging.getLogger(__name__)

@@ -185,6 +170,22 @@ def is_valid_mac(address):
            re.match(m, address.lower()))


def is_valid_datapath_id(datapath_id):
    """Verify the format of an OpenFlow datapath_id.

    Check if a datapath_id is valid and contains 16 hexadecimal digits.
    Datapath ID format: the lower 48-bits are for a MAC address,
    while the upper 16-bits are implementer-defined.

    :param datapath_id: OpenFlow datapath_id to be validated.
    :returns: True if valid. False if not.

    """
    m = "^[0-9a-f]{16}$"
    return (isinstance(datapath_id, six.string_types) and
            re.match(m, datapath_id.lower()))


_is_valid_logical_name_re = re.compile(r'^[A-Z0-9-._~]+$', re.I)

# old is_hostname_safe() regex, retained for backwards compat

@@ -284,6 +285,23 @@ def validate_and_normalize_mac(address):
    return address.lower()


def validate_and_normalize_datapath_id(datapath_id):
    """Validate an OpenFlow datapath_id and return normalized form.

    Checks whether the supplied OpenFlow datapath_id is formally correct and
    normalize it to all lower case.

    :param datapath_id: OpenFlow datapath_id to be validated and normalized.
    :returns: Normalized and validated OpenFlow datapath_id.
    :raises: InvalidDatapathID If an OpenFlow datapath_id is not valid.

    """
    if not is_valid_datapath_id(datapath_id):
        raise exception.InvalidDatapathID(datapath_id=datapath_id)
    return datapath_id.lower()


def is_valid_ipv6_cidr(address):
    try:
        str(netaddr.IPNetwork(address, version=6).cidr)
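The two datapath-id helpers added above reduce to a 16-hex-digit regex check plus lower-casing. A self-contained sketch of the same logic, with a plain `ValueError` standing in for ironic's `InvalidDatapathID` exception:

```python
import re

# Exactly 16 lowercase hex digits, per the OpenFlow datapath_id format
# (lower 48 bits a MAC address, upper 16 bits implementer-defined).
_DATAPATH_ID_RE = re.compile(r'^[0-9a-f]{16}$')


def is_valid_datapath_id(datapath_id):
    # Valid iff the value is a string of exactly 16 hex digits.
    return (isinstance(datapath_id, str) and
            bool(_DATAPATH_ID_RE.match(datapath_id.lower())))


def validate_and_normalize_datapath_id(datapath_id):
    # Normalize to lower case, rejecting malformed input.
    if not is_valid_datapath_id(datapath_id):
        raise ValueError('Invalid datapath_id: %r' % (datapath_id,))
    return datapath_id.lower()
```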
@@ -83,8 +83,12 @@ class BaseConductorManager(object):
        self.ring_manager = hash.HashRingManager()
        """Consistent hash ring which maps drivers to conductors."""

        # NOTE(deva): this call may raise DriverLoadError or DriverNotFound
        # NOTE(deva): these calls may raise DriverLoadError or DriverNotFound
        # NOTE(vdrok): instantiate network interface factory on startup so that
        # all the network interfaces are loaded at the very beginning, and
        # failures prevent the conductor from starting.
        drivers = driver_factory.drivers()
        driver_factory.NetworkInterfaceFactory()
        if not drivers:
            msg = _LE("Conductor %s cannot be started because no drivers "
                      "were loaded. This could be because no drivers were "

@@ -106,6 +110,8 @@ class BaseConductorManager(object):
        periodic_task_classes = set()
        self._collect_periodic_tasks(self, (admin_context,))
        for driver_obj in drivers.values():
            # TODO(dtantsur): collecting tasks from driver objects is
            # deprecated and should be removed in Ocata.
            self._collect_periodic_tasks(driver_obj, (self, admin_context))
            for iface_name in driver_obj.all_interfaces:
                iface = getattr(driver_obj, iface_name, None)

@@ -125,6 +131,8 @@ class BaseConductorManager(object):
            self._periodic_task_callables,
            executor_factory=periodics.ExistingExecutor(self._executor))

        # clear all target_power_state with locks by this conductor
        self.dbapi.clear_node_target_power_state(self.host)
        # clear all locks held by this conductor before registering
        self.dbapi.clear_node_reservations_for_conductor(self.host)
        try:
@@ -81,7 +81,7 @@ class ConductorManager(base_manager.BaseConductorManager):
    """Ironic Conductor manager main class."""

    # NOTE(rloo): This must be in sync with rpcapi.ConductorAPI's.
    RPC_API_VERSION = '1.33'
    RPC_API_VERSION = '1.34'

    target = messaging.Target(version=RPC_API_VERSION)

@@ -91,7 +91,8 @@ class ConductorManager(base_manager.BaseConductorManager):

    @messaging.expected_exceptions(exception.InvalidParameterValue,
                                   exception.MissingParameterValue,
                                   exception.NodeLocked)
                                   exception.NodeLocked,
                                   exception.InvalidState)
    def update_node(self, context, node_obj):
        """Update a node with the supplied data.

@@ -113,6 +114,27 @@ class ConductorManager(base_manager.BaseConductorManager):
        if 'maintenance' in delta and not node_obj.maintenance:
            node_obj.maintenance_reason = None

        if 'network_interface' in delta:
            allowed_update_states = [states.ENROLL, states.INSPECTING,
                                     states.MANAGEABLE]
            if not (node_obj.provision_state in allowed_update_states or
                    node_obj.maintenance):
                action = _("Node %(node)s can not have network_interface "
                           "updated unless it is in one of allowed "
                           "(%(allowed)s) states or in maintenance mode.")
                raise exception.InvalidState(
                    action % {'node': node_obj.uuid,
                              'allowed': ', '.join(allowed_update_states)})
            net_iface = node_obj.network_interface
            if net_iface not in CONF.enabled_network_interfaces:
                raise exception.InvalidParameterValue(
                    _("Cannot change network_interface to invalid value "
                      "%(n_interface)s for node %(node)s, valid interfaces "
                      "are: %(valid_choices)s.") % {
                        'n_interface': net_iface, 'node': node_obj.uuid,
                        'valid_choices': CONF.enabled_network_interfaces,
                    })

        driver_name = node_obj.driver if 'driver' in delta else None
        with task_manager.acquire(context, node_id, shared=False,
                                  driver_name=driver_name,

@@ -142,8 +164,8 @@ class ConductorManager(base_manager.BaseConductorManager):

        """
        LOG.debug("RPC change_node_power_state called for node %(node)s. "
                  "The desired new state is %(state)s."
                  % {'node': node_id, 'state': new_state})
                  "The desired new state is %(state)s.",
                  {'node': node_id, 'state': new_state})

        with task_manager.acquire(context, node_id, shared=False,
                                  purpose='changing node power state') as task:

@@ -446,8 +468,8 @@ class ConductorManager(base_manager.BaseConductorManager):
        except (exception.InvalidParameterValue,
                exception.MissingParameterValue) as e:
            raise exception.InstanceDeployFailure(
                _("RPC do_node_deploy failed to validate deploy or "
                  "power info for node %(node_uuid)s. Error: %(msg)s") %
                _("Failed to validate deploy or power info for node "
                  "%(node_uuid)s. Error: %(msg)s") %
                {'node_uuid': node.uuid, 'msg': e})

        LOG.debug("do_node_deploy Calling event: %(event)s for node: "

@@ -644,8 +666,8 @@ class ConductorManager(base_manager.BaseConductorManager):
        try:
            task.driver.power.validate(task)
        except exception.InvalidParameterValue as e:
            msg = (_('RPC do_node_clean failed to validate power info.'
                     ' Cannot clean node %(node)s. Error: %(msg)s') %
            msg = (_('Failed to validate power info. '
                     'Cannot clean node %(node)s. Error: %(msg)s') %
                   {'node': node.uuid, 'msg': e})
            raise exception.InvalidParameterValue(msg)

@@ -1558,8 +1580,7 @@ class ConductorManager(base_manager.BaseConductorManager):
            async task
        """
        LOG.debug('RPC set_console_mode called for node %(node)s with '
                  'enabled %(enabled)s' % {'node': node_id,
                                           'enabled': enabled})
                  'enabled %(enabled)s', {'node': node_id, 'enabled': enabled})

        with task_manager.acquire(context, node_id, shared=False,
                                  purpose='setting console mode') as task:

@@ -1994,8 +2015,8 @@ class ConductorManager(base_manager.BaseConductorManager):
            task.driver.inspect.validate(task)
        except (exception.InvalidParameterValue,
                exception.MissingParameterValue) as e:
            error = (_("RPC inspect_hardware failed to validate "
                       "inspection or power info. Error: %(msg)s")
            error = (_("Failed to validate inspection or power info. "
                       "Error: %(msg)s")
                     % {'msg': e})
            raise exception.HardwareInspectionFailure(error=error)

@@ -2093,6 +2114,25 @@ class ConductorManager(base_manager.BaseConductorManager):

        return driver.raid.get_logical_disk_properties()

    @messaging.expected_exceptions(exception.NoFreeConductorWorker)
    def heartbeat(self, context, node_id, callback_url):
        """Process a heartbeat from the ramdisk.

        :param context: request context.
        :param node_id: node id or uuid.
        :param callback_url: URL to reach back to the ramdisk.
        :raises: NoFreeConductorWorker if there are no conductors to process
            this heartbeat request.
        """
        LOG.debug('RPC heartbeat called for node %s', node_id)

        # NOTE(dtantsur): we acquire a shared lock to begin with, drivers are
        # free to promote it to an exclusive one.
        with task_manager.acquire(context, node_id, shared=True,
                                  purpose='heartbeat') as task:
            task.spawn_after(self._spawn_worker, task.driver.deploy.heartbeat,
                             task, callback_url)

    def _object_dispatch(self, target, method, context, args, kwargs):
        """Dispatch a call to an object method.

@@ -2127,7 +2167,7 @@ class ConductorManager(base_manager.BaseConductorManager):
        :param args: The positional arguments to the action method
        :param kwargs: The keyword arguments to the action method
        :returns: The result of the action method, which may (or may not)
        be an instance of the implementing VersionedObject class.
            be an instance of the implementing VersionedObject class.
        """
        objclass = objects_base.IronicObject.obj_class_from_name(
            objname, object_versions[objname])

@@ -2485,8 +2525,8 @@ def _do_inspect_hardware(task):

    if new_state == states.MANAGEABLE:
        task.process_event('done')
        LOG.info(_LI('Successfully inspected node %(node)s')
                 % {'node': node.uuid})
        LOG.info(_LI('Successfully inspected node %(node)s'),
                 {'node': node.uuid})
    elif new_state != states.INSPECTING:
        error = (_("During inspection, driver returned unexpected "
                   "state %(state)s") % {'state': new_state})
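The `update_node` hunk above only permits changing `network_interface` when the node's provision state is ENROLL, INSPECTING, or MANAGEABLE, or when the node is in maintenance mode. The guard itself is a plain membership test; a stdlib-only sketch (state names and the `ValueError` are stand-ins for ironic's real `states` constants and `InvalidState` exception):

```python
# Stand-ins for states.ENROLL, states.INSPECTING, states.MANAGEABLE.
ALLOWED_UPDATE_STATES = ('enroll', 'inspecting', 'manageable')


def check_network_interface_update(provision_state, maintenance):
    # Permit the update only in an allowed provision state or when the
    # node is in maintenance mode; otherwise signal an invalid state.
    if provision_state in ALLOWED_UPDATE_STATES or maintenance:
        return True
    raise ValueError('network_interface can only be updated in states '
                     '%s or in maintenance mode' %
                     ', '.join(ALLOWED_UPDATE_STATES))
```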
@@ -80,11 +80,12 @@ class ConductorAPI(object):
    |       object_backport_versions
    |    1.32 - Add do_node_clean
    |    1.33 - Added update and destroy portgroup.
    |    1.34 - Added heartbeat

    """

    # NOTE(rloo): This must be in sync with manager.ConductorManager's.
    RPC_API_VERSION = '1.33'
    RPC_API_VERSION = '1.34'

    def __init__(self, topic=None):
        super(ConductorAPI, self).__init__()

@@ -646,6 +647,18 @@ class ConductorAPI(object):
        return cctxt.call(context, 'do_node_clean',
                          node_id=node_id, clean_steps=clean_steps)

    def heartbeat(self, context, node_id, callback_url, topic=None):
        """Process a node heartbeat.

        :param context: request context.
        :param node_id: node ID or UUID.
        :param callback_url: URL to reach back to the ramdisk.
        :param topic: RPC topic. Defaults to self.topic.
        """
        cctxt = self.client.prepare(topic=topic or self.topic, version='1.34')
        return cctxt.call(context, 'heartbeat', node_id=node_id,
                          callback_url=callback_url)

    def object_class_action_versions(self, context, objname, objmethod,
                                     object_versions, args, kwargs):
        """Perform an action on a VersionedObject class.
@@ -15,18 +15,62 @@

from oslo_config import cfg

from ironic.conf import agent
from ironic.conf import api
from ironic.conf import audit
from ironic.conf import cimc
from ironic.conf import cisco_ucs
from ironic.conf import conductor
from ironic.conf import console
from ironic.conf import database
from ironic.conf import default
from ironic.conf import deploy
from ironic.conf import dhcp
from ironic.conf import glance
from ironic.conf import iboot
from ironic.conf import ilo
from ironic.conf import inspector
from ironic.conf import ipmi
from ironic.conf import irmc
from ironic.conf import keystone
from ironic.conf import metrics
from ironic.conf import metrics_statsd
from ironic.conf import neutron
from ironic.conf import oneview
from ironic.conf import seamicro
from ironic.conf import service_catalog
from ironic.conf import snmp
from ironic.conf import ssh
from ironic.conf import swift
from ironic.conf import virtualbox

CONF = cfg.CONF

agent.register_opts(CONF)
api.register_opts(CONF)
audit.register_opts(CONF)
cimc.register_opts(CONF)
cisco_ucs.register_opts(CONF)
conductor.register_opts(CONF)
console.register_opts(CONF)
database.register_opts(CONF)
default.register_opts(CONF)
deploy.register_opts(CONF)
dhcp.register_opts(CONF)
glance.register_opts(CONF)
iboot.register_opts(CONF)
ilo.register_opts(CONF)
inspector.register_opts(CONF)
ipmi.register_opts(CONF)
irmc.register_opts(CONF)
keystone.register_opts(CONF)
metrics.register_opts(CONF)
metrics_statsd.register_opts(CONF)
neutron.register_opts(CONF)
oneview.register_opts(CONF)
seamicro.register_opts(CONF)
service_catalog.register_opts(CONF)
snmp.register_opts(CONF)
ssh.register_opts(CONF)
swift.register_opts(CONF)
virtualbox.register_opts(CONF)
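The `ironic/conf/__init__.py` hunk above centralizes option registration: every driver and service module exposes a `register_opts(conf)` hook, and the package calls each hook once against a single global `CONF`. A stdlib-only sketch of the pattern (the `Conf` class and the two hook functions are illustrative stand-ins, not oslo.config):

```python
class Conf:
    """Toy stand-in for oslo.config's ConfigOpts: group -> {name: default}."""

    def __init__(self):
        self.groups = {}

    def register_opts(self, opts, group='DEFAULT'):
        self.groups.setdefault(group, {}).update(opts)


# Each "module" contributes its own options via a register_opts() hook,
# mirroring ironic.conf.api / ironic.conf.swift above.
def register_api_opts(conf):
    conf.register_opts({'port': 6385, 'host_ip': '0.0.0.0'}, group='api')


def register_swift_opts(conf):
    conf.register_opts({'swift_max_retries': 2}, group='swift')


CONF = Conf()
for hook in (register_api_opts, register_swift_opts):
    hook(CONF)
```

The payoff of this layout is that every consumer imports the one already-populated `CONF` object instead of scattering `register_opts` calls across modules.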
@@ -0,0 +1,91 @@
# Copyright 2016 Intel Corporation
# Copyright 2014 Rackspace, Inc.
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from ironic.common.i18n import _


opts = [
    cfg.BoolOpt('manage_agent_boot',
                default=True,
                help=_('Whether Ironic will manage booting of the agent '
                       'ramdisk. If set to False, you will need to configure '
                       'your mechanism to allow booting the agent '
                       'ramdisk.')),
    cfg.IntOpt('memory_consumed_by_agent',
               default=0,
               help=_('The memory size in MiB consumed by agent when it is '
                      'booted on a bare metal node. This is used for '
                      'checking if the image can be downloaded and deployed '
                      'on the bare metal node after booting agent ramdisk. '
                      'This may be set according to the memory consumed by '
                      'the agent ramdisk image.')),
    cfg.BoolOpt('stream_raw_images',
                default=True,
                help=_('Whether the agent ramdisk should stream raw images '
                       'directly onto the disk or not. By streaming raw '
                       'images directly onto the disk the agent ramdisk will '
                       'not spend time copying the image to a tmpfs partition '
                       '(therefore consuming less memory) prior to writing it '
                       'to the disk. Unless the disk where the image will be '
                       'copied to is really slow, this option should be set '
                       'to True. Defaults to True.')),
    cfg.IntOpt('post_deploy_get_power_state_retries',
               default=6,
               help=_('Number of times to retry getting power state to check '
                      'if bare metal node has been powered off after a soft '
                      'power off.')),
    cfg.IntOpt('post_deploy_get_power_state_retry_interval',
               default=5,
               help=_('Amount of time (in seconds) to wait between polling '
                      'power state after trigger soft poweroff.')),
    cfg.StrOpt('agent_api_version',
               default='v1',
               help=_('API version to use for communicating with the ramdisk '
                      'agent.')),
    cfg.StrOpt('deploy_logs_collect',
               choices=['always', 'on_failure', 'never'],
               default='on_failure',
               help=_('Whether Ironic should collect the deployment logs on '
                      'deployment failure (on_failure), always or never.')),
    cfg.StrOpt('deploy_logs_storage_backend',
               choices=['local', 'swift'],
               default='local',
               help=_('The name of the storage backend where the logs '
                      'will be stored.')),
    cfg.StrOpt('deploy_logs_local_path',
               default='/var/log/ironic/deploy',
               help=_('The path to the directory where the logs should be '
                      'stored, used when the deploy_logs_storage_backend '
                      'is configured to "local".')),
    cfg.StrOpt('deploy_logs_swift_container',
               default='ironic_deploy_logs_container',
               help=_('The name of the Swift container to store the logs, '
                      'used when the deploy_logs_storage_backend is '
                      'configured to "swift".')),
    cfg.IntOpt('deploy_logs_swift_days_to_expire',
               default=30,
               help=_('Number of days before a log object is marked as '
                      'expired in Swift. If None, the logs will be kept '
                      'forever or until manually deleted. Used when the '
                      'deploy_logs_storage_backend is configured to '
                      '"swift".')),
]


def register_opts(conf):
    conf.register_opts(opts, group='agent')
@@ -0,0 +1,68 @@
# Copyright 2016 Intel Corporation
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.StrOpt('host_ip',
               default='0.0.0.0',
               help=_('The IP address on which ironic-api listens.')),
    cfg.PortOpt('port',
                default=6385,
                help=_('The TCP port on which ironic-api listens.')),
    cfg.IntOpt('max_limit',
               default=1000,
               help=_('The maximum number of items returned in a single '
                      'response from a collection resource.')),
    cfg.StrOpt('public_endpoint',
               help=_("Public URL to use when building the links to the API "
                      "resources (for example, \"https://ironic.rocks:6384\")."
                      " If None the links will be built using the request's "
                      "host URL. If the API is operating behind a proxy, you "
                      "will want to change this to represent the proxy's URL. "
                      "Defaults to None.")),
    cfg.IntOpt('api_workers',
               help=_('Number of workers for OpenStack Ironic API service. '
                      'The default is equal to the number of CPUs available '
                      'if that can be determined, else a default worker '
                      'count of 1 is returned.')),
    cfg.BoolOpt('enable_ssl_api',
                default=False,
                help=_("Enable the integrated stand-alone API to service "
                       "requests via HTTPS instead of HTTP. If there is a "
                       "front-end service performing HTTPS offloading from "
                       "the service, this option should be False; note, you "
                       "will want to change public API endpoint to represent "
                       "SSL termination URL with 'public_endpoint' option.")),
    cfg.BoolOpt('restrict_lookup',
                default=True,
                help=_('Whether to restrict the lookup API to only nodes '
                       'in certain states.')),
    cfg.IntOpt('ramdisk_heartbeat_timeout',
               default=300,
               deprecated_group='agent', deprecated_name='heartbeat_timeout',
               help=_('Maximum interval (in seconds) for agent heartbeats.')),
]

opt_group = cfg.OptGroup(name='api',
                         title='Options for the ironic-api service')


def register_opts(conf):
    conf.register_group(opt_group)
    conf.register_opts(opts, group=opt_group)
@@ -0,0 +1,38 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.BoolOpt('enabled',
                default=False,
                help=_('Enable auditing of API requests'
                       ' (for ironic-api service).')),

    cfg.StrOpt('audit_map_file',
               default='/etc/ironic/ironic_api_audit_map.conf',
               help=_('Path to audit map file for ironic-api service. '
                      'Used only when API audit is enabled.')),

    cfg.StrOpt('ignore_req_list',
               help=_('Comma separated list of Ironic REST API HTTP methods '
                      'to be ignored during audit. For example: auditing '
                      'will not be done on any GET or POST requests '
                      'if this is set to "GET,POST". It is used '
                      'only when API audit is enabled.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='audit')
@@ -0,0 +1,79 @@
# Copyright 2016 Mirantis Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from keystoneauth1 import exceptions as kaexception
from keystoneauth1 import loading as kaloading
from oslo_config import cfg


LEGACY_SECTION = 'keystone_authtoken'
OLD_SESSION_OPTS = {
    'certfile': [cfg.DeprecatedOpt('certfile', LEGACY_SECTION)],
    'keyfile': [cfg.DeprecatedOpt('keyfile', LEGACY_SECTION)],
    'cafile': [cfg.DeprecatedOpt('cafile', LEGACY_SECTION)],
    'insecure': [cfg.DeprecatedOpt('insecure', LEGACY_SECTION)],
    'timeout': [cfg.DeprecatedOpt('timeout', LEGACY_SECTION)],
}

# FIXME(pas-ha) remove import of auth_token section after deprecation period
cfg.CONF.import_group(LEGACY_SECTION, 'keystonemiddleware.auth_token')


def load_auth(conf, group):
    try:
        auth = kaloading.load_auth_from_conf_options(conf, group)
    except kaexception.MissingRequiredOptions:
        auth = None
    return auth


def register_auth_opts(conf, group):
    """Register session- and auth-related options

    Registers only basic auth options shared by all auth plugins.
    The rest are registered at runtime depending on auth plugin used.
    """
    kaloading.register_session_conf_options(
        conf, group, deprecated_opts=OLD_SESSION_OPTS)
    kaloading.register_auth_conf_options(conf, group)


def add_auth_opts(options):
    """Add auth options to sample config

    As these are dynamically registered at runtime,
    this adds options for most used auth_plugins
    when generating sample config.
    """
    def add_options(opts, opts_to_add):
        for new_opt in opts_to_add:
            for opt in opts:
                if opt.name == new_opt.name:
                    break
            else:
                opts.append(new_opt)

    opts = copy.deepcopy(options)
    opts.insert(0, kaloading.get_auth_common_conf_options()[0])
    # NOTE(dims): There are a lot of auth plugins, we just generate
    # the config options for a few common ones
    plugins = ['password', 'v2password', 'v3password']
    for name in plugins:
        plugin = kaloading.get_plugin_loader(name)
        add_options(opts, kaloading.get_auth_plugin_conf_options(plugin))
    add_options(opts, kaloading.get_session_conf_options())
    opts.sort(key=lambda x: x.name)
    return opts
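The dedup-then-append merge inside `add_auth_opts()` above is easy to miss behind the `for/else`. A minimal stdlib-only sketch of the same logic (the `Opt` namedtuple stands in for real `cfg.Opt` objects):

```python
# Sketch of add_options() from add_auth_opts(): a new option is appended
# only if no already-collected option carries the same name.
from collections import namedtuple

Opt = namedtuple('Opt', 'name')


def add_options(opts, opts_to_add):
    for new_opt in opts_to_add:
        if not any(opt.name == new_opt.name for opt in opts):
            opts.append(new_opt)


opts = [Opt('timeout'), Opt('cafile')]
add_options(opts, [Opt('cafile'), Opt('insecure')])
# 'cafile' is skipped as a duplicate, 'insecure' is appended.
assert [o.name for o in opts] == ['timeout', 'cafile', 'insecure']
```

This is why a session option already present in the sample config is not duplicated when the per-plugin option lists are merged in.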
@@ -21,10 +21,12 @@ from ironic.common.i18n import _
 opts = [
     cfg.StrOpt('terminal',
                default='shellinaboxd',
-               help=_('Path to serial console terminal program')),
+               help=_('Path to serial console terminal program. Used only '
+                      'by Shell In A Box console.')),
     cfg.StrOpt('terminal_cert_dir',
-               help=_('Directory containing the terminal SSL cert(PEM) for '
-                      'serial console access')),
+               help=_('Directory containing the terminal SSL cert (PEM) for '
+                      'serial console access. Used only by Shell In A Box '
+                      'console.')),
     cfg.StrOpt('terminal_pid_dir',
                help=_('Directory for holding terminal pid files. '
                       'If not specified, the temporary directory '
@@ -0,0 +1,203 @@
# Copyright 2016 Intel Corporation
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# Copyright 2013 Red Hat, Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import socket
import tempfile

from oslo_config import cfg
from oslo_utils import netutils

from ironic.common.i18n import _

api_opts = [
    cfg.StrOpt(
        'auth_strategy',
        default='keystone',
        choices=['noauth', 'keystone'],
        help=_('Authentication strategy used by ironic-api. "noauth" should '
               'not be used in a production environment because all '
               'authentication will be disabled.')),
    cfg.BoolOpt('debug_tracebacks_in_api',
                default=False,
                help=_('Return server tracebacks in the API response for any '
                       'error responses. WARNING: this is insecure '
                       'and should not be used in a production environment.')),
    cfg.BoolOpt('pecan_debug',
                default=False,
                help=_('Enable pecan debug mode. WARNING: this is insecure '
                       'and should not be used in a production environment.')),
]

driver_opts = [
    cfg.ListOpt('enabled_drivers',
                default=['pxe_ipmitool'],
                help=_('Specify the list of drivers to load during service '
                       'initialization. Missing drivers, or drivers which '
                       'fail to initialize, will prevent the conductor '
                       'service from starting. The option default is a '
                       'recommended set of production-oriented drivers. A '
                       'complete list of drivers present on your system may '
                       'be found by enumerating the "ironic.drivers" '
                       'entrypoint. An example may be found in the '
                       'developer documentation online.')),
    cfg.ListOpt('enabled_network_interfaces',
                default=['flat', 'noop'],
                help=_('Specify the list of network interfaces to load during '
                       'service initialization. Missing network interfaces, '
                       'or network interfaces which fail to initialize, will '
                       'prevent the conductor service from starting. The '
                       'option default is a recommended set of '
                       'production-oriented network interfaces. A complete '
                       'list of network interfaces present on your system may '
                       'be found by enumerating the '
                       '"ironic.hardware.interfaces.network" entrypoint. '
                       'This value must be the same on all ironic-conductor '
                       'and ironic-api services, because it is used by '
                       'ironic-api service to validate a new or updated '
                       'node\'s network_interface value.')),
    cfg.StrOpt('default_network_interface',
               help=_('Default network interface to be used for nodes that '
                      'do not have network_interface field set. A complete '
                      'list of network interfaces present on your system may '
                      'be found by enumerating the '
                      '"ironic.hardware.interfaces.network" entrypoint.'))
]

exc_log_opts = [
    cfg.BoolOpt('fatal_exception_format_errors',
                default=False,
                help=_('Used if there is a formatting error when generating '
                       'an exception message (a programming error). If True, '
                       'raise an exception; if False, use the unformatted '
                       'message.')),
]

hash_opts = [
    cfg.IntOpt('hash_partition_exponent',
               default=5,
               help=_('Exponent to determine number of hash partitions to use '
                      'when distributing load across conductors. Larger '
                      'values will result in more even distribution of load '
                      'and less load when rebalancing the ring, but more '
                      'memory usage. Number of partitions per conductor is '
                      '(2^hash_partition_exponent). This determines the '
                      'granularity of rebalancing: given 10 hosts, and an '
                      'exponent of 2, there are 40 partitions in the ring. '
                      'A few thousand partitions should make rebalancing '
                      'smooth in most cases. The default is suitable for up '
                      'to a few hundred conductors. Too many partitions have '
                      'a CPU impact.')),
    cfg.IntOpt('hash_distribution_replicas',
               default=1,
               help=_('[Experimental Feature] '
                      'Number of hosts to map onto each hash partition. '
                      'Setting this to more than one will cause additional '
                      'conductor services to prepare deployment environments '
                      'and potentially allow the Ironic cluster to recover '
                      'more quickly if a conductor instance is terminated.')),
    cfg.IntOpt('hash_ring_reset_interval',
               default=180,
               help=_('Interval (in seconds) between hash ring resets.')),
]

image_opts = [
    cfg.BoolOpt('force_raw_images',
                default=True,
                help=_('If True, convert backing images to "raw" disk image '
                       'format.')),
    cfg.StrOpt('isolinux_bin',
               default='/usr/lib/syslinux/isolinux.bin',
               help=_('Path to isolinux binary file.')),
    cfg.StrOpt('isolinux_config_template',
               default=os.path.join('$pybasedir',
                                    'common/isolinux_config.template'),
               help=_('Template file for isolinux configuration file.')),
    cfg.StrOpt('grub_config_template',
               default=os.path.join('$pybasedir',
                                    'common/grub_conf.template'),
               help=_('Template file for grub configuration file.')),
]

img_cache_opts = [
    cfg.BoolOpt('parallel_image_downloads',
                default=False,
                help=_('Run image downloads and raw format conversions in '
                       'parallel.')),
]

netconf_opts = [
    cfg.StrOpt('my_ip',
               default=netutils.get_my_ipv4(),
               sample_default='127.0.0.1',
               help=_('IP address of this host. If unset, will determine the '
                      'IP programmatically. If unable to do so, will use '
                      '"127.0.0.1".')),
]

path_opts = [
    cfg.StrOpt('pybasedir',
               default=os.path.abspath(os.path.join(os.path.dirname(__file__),
                                                    '../')),
               sample_default='/usr/lib/python/site-packages/ironic/ironic',
               help=_('Directory where the ironic python module is '
                      'installed.')),
    cfg.StrOpt('bindir',
               default='$pybasedir/bin',
               help=_('Directory where ironic binaries are installed.')),
    cfg.StrOpt('state_path',
               default='$pybasedir',
               help=_("Top-level directory for maintaining ironic's state.")),
]

service_opts = [
    cfg.StrOpt('host',
               default=socket.getfqdn(),
               sample_default='localhost',
               help=_('Name of this node. This can be an opaque identifier. '
                      'It is not necessarily a hostname, FQDN, or IP address. '
                      'However, the node name must be valid within '
                      'an AMQP key, and if using ZeroMQ, a valid '
                      'hostname, FQDN, or IP address.')),
]

utils_opts = [
    cfg.StrOpt('rootwrap_config',
               default="/etc/ironic/rootwrap.conf",
               help=_('Path to the rootwrap configuration file to use for '
                      'running commands as root.')),
    cfg.StrOpt('tempdir',
               default=tempfile.gettempdir(),
               sample_default='/tmp',
               help=_('Temporary working directory, default is Python temp '
                      'dir.')),
]


def register_opts(conf):
    conf.register_opts(api_opts)
    conf.register_opts(driver_opts)
    conf.register_opts(exc_log_opts)
    conf.register_opts(hash_opts)
    conf.register_opts(image_opts)
    conf.register_opts(img_cache_opts)
    conf.register_opts(netconf_opts)
    conf.register_opts(path_opts)
    conf.register_opts(service_opts)
    conf.register_opts(utils_opts)
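The `hash_partition_exponent` help text above hides a small bit of arithmetic: each conductor claims `2**hash_partition_exponent` partitions. A stdlib-only sanity check of the numbers quoted in the help string (the helper name is made up for illustration):

```python
# Hypothetical helper reproducing the help-text arithmetic: with N hosts
# and exponent e, the ring holds N * 2**e partitions.
def ring_partitions(num_hosts, exponent=5):
    return num_hosts * 2 ** exponent


# The example from the help string: 10 hosts, exponent 2 -> 40 partitions.
assert ring_partitions(10, 2) == 40
# With the option's default exponent of 5, 10 hosts give 320 partitions.
assert ring_partitions(10) == 320
```

This is why large exponents trade memory and CPU for smoother rebalancing: the partition count grows exponentially with the option's value.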
@@ -0,0 +1,68 @@
# Copyright 2016 Intel Corporation
# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.StrOpt('http_url',
               help=_("ironic-conductor node's HTTP server URL. "
                      "Example: http://192.1.2.3:8080")),
    cfg.StrOpt('http_root',
               default='/httpboot',
               help=_("ironic-conductor node's HTTP root path.")),
    cfg.IntOpt('erase_devices_priority',
               help=_('Priority to run in-band erase devices via the Ironic '
                      'Python Agent ramdisk. If unset, will use the priority '
                      'set in the ramdisk (defaults to 10 for the '
                      'GenericHardwareManager). If set to 0, will not run '
                      'during cleaning.')),
    # TODO(mmitchell): Remove the deprecated name/group during Ocata cycle.
    cfg.IntOpt('shred_random_overwrite_iterations',
               deprecated_name='erase_devices_iterations',
               deprecated_group='deploy',
               default=1,
               min=0,
               help=_('During shred, overwrite all block devices N times with '
                      'random data. This is only used if a device could not '
                      'be ATA Secure Erased. Defaults to 1.')),
    cfg.BoolOpt('shred_final_overwrite_with_zeros',
                default=True,
                help=_("Whether to write zeros to a node's block devices "
                       "after writing random data. This will write zeros to "
                       "the device even when "
                       "deploy.shred_random_overwrite_iterations is 0. This "
                       "option is only used if a device could not be ATA "
                       "Secure Erased. Defaults to True.")),
    cfg.BoolOpt('continue_if_disk_secure_erase_fails',
                default=False,
                help=_('Defines what to do if an ATA secure erase operation '
                       'fails during cleaning in the Ironic Python Agent. '
                       'If False, the cleaning operation will fail and the '
                       'node will be put in ``clean failed`` state. '
                       'If True, shred will be invoked and cleaning will '
                       'continue.')),
    cfg.BoolOpt('power_off_after_deploy_failure',
                default=True,
                help=_('Whether to power off a node after deploy failure. '
                       'Defaults to True.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='deploy')
@@ -0,0 +1,153 @@
# Copyright 2016 Intel Corporation
# Copyright 2010 OpenStack Foundation
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _
from ironic.conf import auth

opts = [
    cfg.ListOpt('allowed_direct_url_schemes',
                default=[],
                help=_('A list of URL schemes that can be downloaded directly '
                       'via the direct_url. Currently supported schemes: '
                       '[file].')),
    # To upload this key to Swift:
    # swift post -m Temp-Url-Key:secretkey
    # When using radosgw, temp url key could be uploaded via the above swift
    # command, or with:
    # radosgw-admin user modify --uid=user --temp-url-key=secretkey
    cfg.StrOpt('swift_temp_url_key',
               help=_('The secret token given to Swift to allow temporary URL '
                      'downloads. Required for temporary URLs.'),
               secret=True),
    cfg.IntOpt('swift_temp_url_duration',
               default=1200,
               help=_('The length of time in seconds that the temporary URL '
                      'will be valid for. Defaults to 20 minutes. If some '
                      'deploys get a 401 response code when trying to '
                      'download from the temporary URL, try raising this '
                      'duration. This value must be greater than or equal to '
                      'the value for '
                      'swift_temp_url_expected_download_start_delay')),
    cfg.BoolOpt('swift_temp_url_cache_enabled',
                default=False,
                help=_('Whether to cache generated Swift temporary URLs. '
                       'Setting it to true is only useful when an image '
                       'caching proxy is used. Defaults to False.')),
    cfg.IntOpt('swift_temp_url_expected_download_start_delay',
               default=0, min=0,
               help=_('This is the delay (in seconds) from the time of the '
                      'deploy request (when the Swift temporary URL is '
                      'generated) to when the IPA ramdisk starts up and URL '
                      'is used for the image download. This value is used to '
                      'check if the Swift temporary URL duration is large '
                      'enough to let the image download begin. Also if '
                      'temporary URL caching is enabled this will determine '
                      'if a cached entry will still be valid when the '
                      'download starts. swift_temp_url_duration value must be '
                      'greater than or equal to this option\'s value. '
                      'Defaults to 0.')),
    cfg.StrOpt(
        'swift_endpoint_url',
        help=_('The "endpoint" (scheme, hostname, optional port) for '
               'the Swift URL of the form '
               '"endpoint_url/api_version/[account/]container/object_id". '
               'Do not include trailing "/". '
               'For example, use "https://swift.example.com". If using RADOS '
               'Gateway, endpoint may also contain /swift path; if it does '
               'not, it will be appended. Required for temporary URLs.')),
    cfg.StrOpt(
        'swift_api_version',
        default='v1',
        help=_('The Swift API version to create a temporary URL for. '
               'Defaults to "v1". Swift temporary URL format: '
               '"endpoint_url/api_version/[account/]container/object_id"')),
    cfg.StrOpt(
        'swift_account',
        help=_('The account that Glance uses to communicate with '
               'Swift. The format is "AUTH_uuid". "uuid" is the '
               'UUID for the account configured in the glance-api.conf. '
               'Required for temporary URLs when Glance backend is Swift. '
               'For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". '
               'Swift temporary URL format: '
               '"endpoint_url/api_version/[account/]container/object_id"')),
    cfg.StrOpt(
        'swift_container',
        default='glance',
        help=_('The Swift container Glance is configured to store its '
               'images in. Defaults to "glance", which is the default '
               'in glance-api.conf. '
               'Swift temporary URL format: '
               '"endpoint_url/api_version/[account/]container/object_id"')),
    cfg.IntOpt('swift_store_multiple_containers_seed',
               default=0,
               help=_('This should match a config by the same name in the '
                      'Glance configuration file. When set to 0, a '
                      'single-tenant store will only use one '
                      'container to store all images. When set to an integer '
                      'value between 1 and 32, a single-tenant store will use '
                      'multiple containers to store images, and this value '
                      'will determine how many containers are created.')),
    cfg.StrOpt('temp_url_endpoint_type',
               default='swift',
               choices=['swift', 'radosgw'],
               help=_('Type of endpoint to use for temporary URLs. If the '
                      'Glance backend is Swift, use "swift"; if it is CEPH '
                      'with RADOS gateway, use "radosgw".')),
    cfg.StrOpt('glance_host',
               default='$my_ip',
               help=_('Default glance hostname or IP address.')),
    cfg.PortOpt('glance_port',
                default=9292,
                help=_('Default glance port.')),
    cfg.StrOpt('glance_protocol',
               default='http',
               choices=['http', 'https'],
               help=_('Default protocol to use when connecting to glance. '
                      'Set to https for SSL.')),
    cfg.ListOpt('glance_api_servers',
                help=_('A list of the glance api servers available to ironic. '
                       'Prefix with https:// for SSL-based glance API '
                       'servers. Format is [hostname|IP]:port.')),
    cfg.BoolOpt('glance_api_insecure',
                default=False,
                help=_('Allow insecure SSL (https) requests to '
                       'glance.')),
    cfg.IntOpt('glance_num_retries',
               default=0,
               help=_('Number of retries when downloading an image from '
                      'glance.')),
    cfg.StrOpt('auth_strategy',
               default='keystone',
               choices=['keystone', 'noauth'],
               help=_('Authentication strategy to use when connecting to '
                      'glance.')),
    cfg.StrOpt('glance_cafile',
               help=_('Optional path to a CA certificate bundle to be used to '
                      'validate the SSL certificate served by glance. It is '
                      'used when glance_api_insecure is set to False.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='glance')
    auth.register_auth_opts(conf, 'glance')


def list_opts():
    return auth.add_auth_opts(opts)
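Several of the `[glance]` options above (`swift_temp_url_key`, `swift_temp_url_duration`, `swift_endpoint_url`, …) exist to build Swift temporary URLs. The signature scheme Swift's tempurl middleware uses is an HMAC-SHA1 over the method, expiry and object path; a stdlib sketch (the key and path values here are made-up examples, not defaults):

```python
# Sketch of Swift temp-URL signing: HMAC-SHA1 over "METHOD\nexpires\npath".
import hmac
from hashlib import sha1


def temp_url_signature(key, method, expires, path):
    body = '\n'.join([method, str(expires), path])
    return hmac.new(key.encode(), body.encode(), sha1).hexdigest()


sig = temp_url_signature('secretkey', 'GET', 1500000000,
                         '/v1/AUTH_account/glance/image-uuid')
# The final URL then looks like:
# {swift_endpoint_url}{path}?temp_url_sig={sig}&temp_url_expires={expires}
```

`swift_temp_url_duration` feeds the `expires` timestamp, and `swift_temp_url_key` is the HMAC key uploaded with `swift post -m Temp-Url-Key:...`.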
@@ -0,0 +1,42 @@
# Copyright 2016 Intel Corporation
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.IntOpt('max_retry',
               default=3,
               help=_('Maximum retries for iBoot operations')),
    cfg.IntOpt('retry_interval',
               default=1,
               help=_('Time (in seconds) between retry attempts for iBoot '
                      'operations')),
    cfg.IntOpt('reboot_delay',
               default=5,
               min=0,
               help=_('Time (in seconds) to sleep between when rebooting '
                      '(powering off and on again).'))
]

opt_group = cfg.OptGroup(name='iboot',
                         title='Options for the iBoot power driver')


def register_opts(conf):
    conf.register_group(opt_group)
    conf.register_opts(opts, group=opt_group)
@@ -0,0 +1,87 @@
# Copyright 2016 Intel Corporation
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.IntOpt('client_timeout',
               default=60,
               help=_('Timeout (in seconds) for iLO operations')),
    cfg.PortOpt('client_port',
                default=443,
                help=_('Port to be used for iLO operations')),
    cfg.StrOpt('swift_ilo_container',
               default='ironic_ilo_container',
               help=_('The Swift iLO container to store data.')),
    cfg.IntOpt('swift_object_expiry_timeout',
               default=900,
               help=_('Amount of time in seconds for Swift objects to '
                      'auto-expire.')),
    cfg.BoolOpt('use_web_server_for_images',
                default=False,
                help=_('Set this to True to use http web server to host '
                       'floppy images and generated boot ISO. This '
                       'requires http_root and http_url to be configured '
                       'in the [deploy] section of the config file. If this '
                       'is set to False, then Ironic will use Swift '
                       'to host the floppy images and generated '
                       'boot_iso.')),
    cfg.IntOpt('clean_priority_erase_devices',
               deprecated_for_removal=True,
               deprecated_reason=_('This configuration option is duplicated '
                                   'by [deploy] erase_devices_priority, '
                                   'please use that instead.'),
               help=_('Priority for erase devices clean step. If unset, '
                      'it defaults to 10. If set to 0, the step will be '
                      'disabled and will not run during cleaning.')),
    cfg.IntOpt('clean_priority_reset_ilo',
               default=0,
               help=_('Priority for reset_ilo clean step.')),
    cfg.IntOpt('clean_priority_reset_bios_to_default',
               default=10,
               help=_('Priority for reset_bios_to_default clean step.')),
    cfg.IntOpt('clean_priority_reset_secure_boot_keys_to_default',
               default=20,
               help=_('Priority for reset_secure_boot_keys clean step. This '
                      'step will reset the secure boot keys to manufacturing '
                      'defaults.')),
    cfg.IntOpt('clean_priority_clear_secure_boot_keys',
               default=0,
               help=_('Priority for clear_secure_boot_keys clean step. This '
                      'step is not enabled by default. It can be enabled to '
                      'clear all secure boot keys enrolled with iLO.')),
    cfg.IntOpt('clean_priority_reset_ilo_credential',
               default=30,
               help=_('Priority for reset_ilo_credential clean step. This '
                      'step requires "ilo_change_password" parameter to be '
                      'updated in node\'s driver_info with the new '
                      'password.')),
    cfg.IntOpt('power_retry',
               default=6,
               help=_('Number of times a power operation needs to be '
                      'retried')),
    cfg.IntOpt('power_wait',
               default=2,
               help=_('Amount of time in seconds to wait in between power '
                      'operations')),
    cfg.StrOpt('ca_file',
               help=_('CA certificate file to validate iLO.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='ilo')
@@ -0,0 +1,39 @@
# Copyright 2016 Intel Corporation

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _
from ironic.conf import auth

opts = [
    cfg.BoolOpt('enabled', default=False,
                help=_('whether to enable inspection using ironic-inspector')),
    cfg.StrOpt('service_url',
               help=_('ironic-inspector HTTP endpoint. If this is not set, '
                      'the ironic-inspector client default '
                      '(http://127.0.0.1:5050) will be used.')),
    cfg.IntOpt('status_check_period', default=60,
               help=_('period (in seconds) to check status of nodes '
                      'on inspection')),
]


def register_opts(conf):
    conf.register_opts(opts, group='inspector')
    auth.register_auth_opts(conf, 'inspector')


def list_opts():
    return auth.add_auth_opts(opts)
@@ -0,0 +1,41 @@
# Copyright 2016 Intel Corporation
#
# Copyright 2013 International Business Machines Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.IntOpt('retry_timeout',
               default=60,
               help=_('Maximum time in seconds to retry IPMI operations. '
                      'There is a tradeoff when setting this value. Setting '
                      'this too low may cause older BMCs to crash and require '
                      'a hard reset. However, setting too high can cause the '
                      'sync power state periodic task to hang when there are '
                      'slow or unresponsive BMCs.')),
    cfg.IntOpt('min_command_interval',
               default=5,
               help=_('Minimum time, in seconds, between IPMI operations '
                      'sent to a server. There is a risk with some hardware '
                      'that setting this too low may cause the BMC to crash. '
                      'Recommended setting is 5 seconds.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='ipmi')
@@ -0,0 +1,73 @@
# Copyright 2016 Intel Corporation
# Copyright 2015 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.StrOpt('remote_image_share_root',
               default='/remote_image_share_root',
               help=_('Ironic conductor node\'s "NFS" or "CIFS" root path')),
    cfg.StrOpt('remote_image_server',
               help=_('IP of remote image server')),
    cfg.StrOpt('remote_image_share_type',
               default='CIFS',
               choices=['CIFS', 'NFS'],
               ignore_case=True,
               help=_('Share type of virtual media')),
    cfg.StrOpt('remote_image_share_name',
               default='share',
               help=_('share name of remote_image_server')),
    cfg.StrOpt('remote_image_user_name',
               help=_('User name of remote_image_server')),
    cfg.StrOpt('remote_image_user_password', secret=True,
               help=_('Password of remote_image_user_name')),
    cfg.StrOpt('remote_image_user_domain',
               default='',
               help=_('Domain name of remote_image_user_name')),
    cfg.PortOpt('port',
                default=443,
                choices=[443, 80],
                help=_('Port to be used for iRMC operations')),
    cfg.StrOpt('auth_method',
               default='basic',
               choices=['basic', 'digest'],
               help=_('Authentication method to be used for iRMC '
                      'operations')),
    cfg.IntOpt('client_timeout',
               default=60,
               help=_('Timeout (in seconds) for iRMC operations')),
    cfg.StrOpt('sensor_method',
               default='ipmitool',
               choices=['ipmitool', 'scci'],
               help=_('Sensor data retrieval method.')),
    cfg.StrOpt('snmp_version',
               default='v2c',
               choices=['v1', 'v2c', 'v3'],
               help=_('SNMP protocol version')),
    cfg.PortOpt('snmp_port',
                default=161,
                help=_('SNMP port')),
    cfg.StrOpt('snmp_community',
               default='public',
               help=_('SNMP community. Required for versions "v1" and "v2c"')),
    cfg.StrOpt('snmp_security',
               help=_('SNMP security name. Required for version "v3"')),
]


def register_opts(conf):
    conf.register_opts(opts, group='irmc')
@@ -0,0 +1,27 @@
# Copyright 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.StrOpt('region_name',
               help=_('The region used for getting endpoints of OpenStack'
                      ' services.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='keystone')
@@ -0,0 +1,55 @@
# Copyright 2016 Intel Corporation
# Copyright 2014 Rackspace, Inc.
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from ironic.common.i18n import _


opts = [
    # IPA config options: used by IPA to configure how it reports metric data
    cfg.StrOpt('agent_backend',
               default='noop',
               help=_('Backend for the agent ramdisk to use for metrics. '
                      'Default possible backends are "noop" and "statsd".')),
    cfg.BoolOpt('agent_prepend_host',
                default=False,
                help=_('Prepend the hostname to all metric names sent by the '
                       'agent ramdisk. The format of metric names is '
                       '[global_prefix.][uuid.][host_name.]prefix.'
                       'metric_name.')),
    cfg.BoolOpt('agent_prepend_uuid',
                default=False,
                help=_('Prepend the node\'s Ironic uuid to all metric names '
                       'sent by the agent ramdisk. The format of metric names '
                       'is [global_prefix.][uuid.][host_name.]prefix.'
                       'metric_name.')),
    cfg.BoolOpt('agent_prepend_host_reverse',
                default=True,
                help=_('Split the prepended host value by "." and reverse it '
                       'for metrics sent by the agent ramdisk (to better '
                       'match the reverse hierarchical form of domain '
                       'names).')),
    cfg.StrOpt('agent_global_prefix',
               help=_('Prefix all metric names sent by the agent ramdisk '
                      'with this value. The format of metric names is '
                      '[global_prefix.][uuid.][host_name.]prefix.'
                      'metric_name.'))
]


def register_opts(conf):
    conf.register_opts(opts, group='metrics')
@@ -0,0 +1,36 @@
# Copyright 2016 Intel Corporation
# Copyright 2014 Rackspace, Inc.
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from ironic.common.i18n import _


opts = [
    cfg.StrOpt('agent_statsd_host',
               default='localhost',
               help=_('Host for the agent ramdisk to use with the statsd '
                      'backend. This must be accessible from networks the '
                      'agent is booted on.')),
    cfg.PortOpt('agent_statsd_port',
                default=8125,
                help=_('Port for the agent ramdisk to use with the statsd '
                       'backend.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='metrics_statsd')
@@ -0,0 +1,66 @@
# Copyright 2016 Intel Corporation
# Copyright 2014 OpenStack Foundation
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _
from ironic.conf import auth

opts = [
    cfg.StrOpt('url',
               help=_("URL for connecting to neutron. "
                      "Default value translates to 'http://$my_ip:9696' "
                      "when auth_strategy is 'noauth', "
                      "and to discovery from Keystone catalog "
                      "when auth_strategy is 'keystone'.")),
    cfg.IntOpt('url_timeout',
               default=30,
               help=_('Timeout value for connecting to neutron in seconds.')),
    cfg.IntOpt('port_setup_delay',
               default=0,
               min=0,
               help=_('Delay value to wait for Neutron agents to setup '
                      'sufficient DHCP configuration for port.')),
    cfg.IntOpt('retries',
               default=3,
               help=_('Client retries in the case of a failed request.')),
    cfg.StrOpt('auth_strategy',
               default='keystone',
               choices=['keystone', 'noauth'],
               help=_('Authentication strategy to use when connecting to '
                      'neutron. Running neutron in noauth mode (related to '
                      'but not affected by this setting) is insecure and '
                      'should only be used for testing.')),
    cfg.StrOpt('cleaning_network_uuid',
               help=_('Neutron network UUID for the ramdisk to be booted '
                      'into for cleaning nodes. Required for "neutron" '
                      'network interface. It is also required if cleaning '
                      'nodes when using "flat" network interface or "neutron" '
                      'DHCP provider.')),
    cfg.StrOpt('provisioning_network_uuid',
               help=_('Neutron network UUID for the ramdisk to be booted '
                      'into for provisioning nodes. Required for "neutron" '
                      'network interface.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='neutron')
    auth.register_auth_opts(conf, 'neutron')


def list_opts():
    return auth.add_auth_opts(opts)
@@ -0,0 +1,53 @@
# Copyright 2016 Intel Corporation
# Copyright 2015 Hewlett Packard Development Company, LP
# Copyright 2015 Universidade Federal de Campina Grande
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.StrOpt('manager_url',
               help=_('URL where OneView is available.')),
    cfg.StrOpt('username',
               help=_('OneView username to be used.')),
    cfg.StrOpt('password',
               secret=True,
               help=_('OneView password to be used.')),
    cfg.BoolOpt('allow_insecure_connections',
                default=False,
                help=_('Option to allow insecure connection with OneView.')),
    cfg.StrOpt('tls_cacert_file',
               help=_('Path to CA certificate.')),
    cfg.IntOpt('max_polling_attempts',
               default=12,
               help=_('Max connection retries to check changes on OneView.')),
    cfg.BoolOpt('enable_periodic_tasks',
                default=True,
                help=_('Whether to enable the periodic tasks for OneView '
                       'driver be aware when OneView hardware resources are '
                       'taken and released by Ironic or OneView users '
                       'and proactively manage nodes in clean fail state '
                       'according to Dynamic Allocation model of hardware '
                       'resources allocation in OneView.')),
    cfg.IntOpt('periodic_check_interval',
               default=300,
               help=_('Period (in seconds) for periodic tasks to be '
                      'executed when enable_periodic_tasks=True.')),
]


def register_opts(conf):
    conf.register_opts(opts, group='oneview')
@@ -12,101 +12,60 @@

import itertools

import ironic.api
import ironic.api.app
import ironic.common.driver_factory
import ironic.common.exception
import ironic.common.glance_service.v2.image_service
import ironic.common.hash_ring
import ironic.common.image_service
import ironic.common.images
import ironic.common.keystone
import ironic.common.paths
import ironic.common.service
import ironic.common.swift
import ironic.common.utils
import ironic.dhcp.neutron
import ironic.drivers.modules.agent
import ironic.drivers.modules.agent_base_vendor
import ironic.drivers.modules.agent_client
import ironic.drivers.modules.amt.common
import ironic.drivers.modules.amt.power
import ironic.drivers.modules.deploy_utils
import ironic.drivers.modules.iboot
import ironic.drivers.modules.ilo.common
import ironic.drivers.modules.ilo.deploy
import ironic.drivers.modules.ilo.management
import ironic.drivers.modules.ilo.power
import ironic.drivers.modules.image_cache
import ironic.drivers.modules.inspector
import ironic.drivers.modules.ipminative
import ironic.drivers.modules.irmc.boot
import ironic.drivers.modules.irmc.common
import ironic.drivers.modules.iscsi_deploy
import ironic.drivers.modules.oneview.common
import ironic.drivers.modules.pxe
import ironic.drivers.modules.seamicro
import ironic.drivers.modules.snmp
import ironic.drivers.modules.ssh
import ironic.drivers.modules.virtualbox
import ironic.netconf

_default_opt_lists = [
    ironic.api.app.api_opts,
    ironic.common.driver_factory.driver_opts,
    ironic.common.exception.exc_log_opts,
    ironic.common.hash_ring.hash_opts,
    ironic.common.images.image_opts,
    ironic.common.paths.path_opts,
    ironic.common.service.service_opts,
    ironic.common.utils.utils_opts,
    ironic.drivers.modules.image_cache.img_cache_opts,
    ironic.netconf.netconf_opts,
    ironic.conf.default.api_opts,
    ironic.conf.default.driver_opts,
    ironic.conf.default.exc_log_opts,
    ironic.conf.default.hash_opts,
    ironic.conf.default.image_opts,
    ironic.conf.default.img_cache_opts,
    ironic.conf.default.netconf_opts,
    ironic.conf.default.path_opts,
    ironic.conf.default.service_opts,
    ironic.conf.default.utils_opts,
]

_opts = [
    ('DEFAULT', itertools.chain(*_default_opt_lists)),
    ('agent', itertools.chain(
        ironic.drivers.modules.agent.agent_opts,
        ironic.drivers.modules.agent_base_vendor.agent_opts,
        ironic.drivers.modules.agent_client.agent_opts)),
    ('agent', ironic.conf.agent.opts),
    ('amt', itertools.chain(
        ironic.drivers.modules.amt.common.opts,
        ironic.drivers.modules.amt.power.opts)),
    ('api', ironic.api.API_SERVICE_OPTS),
    ('api', ironic.conf.api.opts),
    ('audit', ironic.conf.audit.opts),
    ('cimc', ironic.conf.cimc.opts),
    ('cisco_ucs', ironic.conf.cisco_ucs.opts),
    ('conductor', ironic.conf.conductor.opts),
    ('console', ironic.conf.console.opts),
    ('database', ironic.conf.database.opts),
    ('deploy', ironic.drivers.modules.deploy_utils.deploy_opts),
    ('deploy', ironic.conf.deploy.opts),
    ('dhcp', ironic.conf.dhcp.opts),
    ('glance', itertools.chain(
        ironic.common.glance_service.v2.image_service.glance_opts,
        ironic.common.image_service.glance_opts)),
    ('iboot', ironic.drivers.modules.iboot.opts),
    ('ilo', itertools.chain(
        ironic.drivers.modules.ilo.common.opts,
        ironic.drivers.modules.ilo.deploy.clean_opts,
        ironic.drivers.modules.ilo.management.clean_step_opts,
        ironic.drivers.modules.ilo.power.opts)),
    ('inspector', ironic.drivers.modules.inspector.inspector_opts),
    ('ipmi', ironic.drivers.modules.ipminative.opts),
    ('irmc', itertools.chain(
        ironic.drivers.modules.irmc.boot.opts,
        ironic.drivers.modules.irmc.common.opts)),
    ('glance', ironic.conf.glance.list_opts()),
    ('iboot', ironic.conf.iboot.opts),
    ('ilo', ironic.conf.ilo.opts),
    ('inspector', ironic.conf.inspector.list_opts()),
    ('ipmi', ironic.conf.ipmi.opts),
    ('irmc', ironic.conf.irmc.opts),
    ('iscsi', ironic.drivers.modules.iscsi_deploy.iscsi_opts),
    ('keystone', ironic.common.keystone.keystone_opts),
    ('neutron', ironic.dhcp.neutron.neutron_opts),
    ('oneview', ironic.drivers.modules.oneview.common.opts),
    ('keystone', ironic.conf.keystone.opts),
    ('metrics', ironic.conf.metrics.opts),
    ('metrics_statsd', ironic.conf.metrics_statsd.opts),
    ('neutron', ironic.conf.neutron.list_opts()),
    ('oneview', ironic.conf.oneview.opts),
    ('pxe', itertools.chain(
        ironic.drivers.modules.iscsi_deploy.pxe_opts,
        ironic.drivers.modules.pxe.pxe_opts)),
    ('seamicro', ironic.drivers.modules.seamicro.opts),
    ('snmp', ironic.drivers.modules.snmp.opts),
    ('ssh', ironic.drivers.modules.ssh.libvirt_opts),
    ('swift', ironic.common.swift.swift_opts),
    ('virtualbox', ironic.drivers.modules.virtualbox.opts),
    ('seamicro', ironic.conf.seamicro.opts),
    ('service_catalog', ironic.conf.service_catalog.list_opts()),
    ('snmp', ironic.conf.snmp.opts),
    ('ssh', ironic.conf.ssh.opts),
    ('swift', ironic.conf.swift.list_opts()),
    ('virtualbox', ironic.conf.virtualbox.opts),
]
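The hunk above migrates `opts.py` from per-driver option lists to the centralized `ironic.conf` modules, but the aggregation pattern is unchanged: per-module option lists are flattened with `itertools.chain` into a single `(group, options)` sequence that oslo-config-generator can walk. A stdlib-only sketch of that pattern (plain dicts stand in for oslo.config `Opt` objects, and the option names are hypothetical):

```python
import itertools

# Hypothetical per-module option lists destined for the DEFAULT section.
_default_opt_lists = [
    [{'name': 'auth_strategy'}],
    [{'name': 'debug'}, {'name': 'pybasedir'}],
]

# The (group, options) pairs an oslo-config-generator entry point returns.
_opts = [
    ('DEFAULT', itertools.chain(*_default_opt_lists)),
    ('irmc', [{'name': 'port'}]),
]

# Materialize each chained iterator exactly once: chain objects are
# single-use, so consumers list() them before iterating further.
grouped = {group: list(options) for group, options in _opts}
assert sorted(grouped) == ['DEFAULT', 'irmc']
assert [o['name'] for o in grouped['DEFAULT']] == ['auth_strategy', 'debug',
                                                   'pybasedir']
```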
@@ -0,0 +1,34 @@
# Copyright 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _

opts = [
    cfg.IntOpt('max_retry',
               default=3,
               help=_('Maximum retries for SeaMicro operations')),
    cfg.IntOpt('action_timeout',
               default=10,
               help=_('Seconds to wait for power action to be completed'))
]

opt_group = cfg.OptGroup(name='seamicro',
                         title='Options for the seamicro power driver')


def register_opts(conf):
    conf.register_group(opt_group)
    conf.register_opts(opts, group=opt_group)
@@ -0,0 +1,33 @@
# Copyright 2016 Mirantis Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from ironic.common.i18n import _
from ironic.conf import auth

SERVICE_CATALOG_GROUP = cfg.OptGroup(
    'service_catalog',
    title='Access info for Ironic service user',
    help=_('Holds credentials and session options to access '
           'Keystone catalog for Ironic API endpoint resolution.'))


def register_opts(conf):
    auth.register_auth_opts(conf, SERVICE_CATALOG_GROUP.name)


def list_opts():
    return auth.add_auth_opts([])