ironic 6.2.0 release

meta:version: 6.2.0
 meta:diff-start: -
 meta:series: newton
 meta:release-type: release
 meta:announce: openstack-announce@lists.openstack.org
 meta:pypi: no
 meta:first: no
 meta:release:Author: Jim Rollenhagen <jim@jimrollenhagen.com>
 meta:release:Commit: Jim Rollenhagen <jim@jimrollenhagen.com>
 meta:release:Change-Id: I85c67e643719736fe3bacc4b8774c6d10e3edd8f
 meta:release:Code-Review+1: Jay Faulkner <jay@jvf.cc>
 meta:release:Code-Review+1: Dmitry Tantsur <divius.inside@gmail.com>
 meta:release:Code-Review+2: Davanum Srinivas (dims) <davanum@gmail.com>
 meta:release:Workflow+1: Davanum Srinivas (dims) <davanum@gmail.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJX48joAAoJENljH+rwzGInnosH/2ccTb1bZMkeFwKNlRKjK2iK
 4hVdanveh1ccpu+Dc7l2RAqyUjhgpjKqu/socnoiZLD+TA3Rt9Mg6EDy0vymobTq
 VkJSzCrbfJaeIj+GcfCY9r9Yth9yyZLfCe1O1EY2RSDZTDLMeJBrOqenvEa0qHmo
 BKe0ZtmrUVJBOvRP5cJhs5iWq0G68Zb2uHIaAusu28UDgBlCBMOmf7anvqlEz/3R
 1DB4mW0T6y60ITkxTOjUc24OQ3mTE3du5aw7MT/+UuscEda9AYzmGn+wMCfgu4wA
 bmMXSW1L08ZUzuMrkcnunrzKh8tIdRCZpTtrmO5tDyPgm1Elwgq0/MwytLUPhEg=
 =7PW7
 -----END PGP SIGNATURE-----

Merge tag '6.2.0' into debian/newton

ironic 6.2.0 release

  * New upstream release.
  * Fixed (build-)depends for this release.
  * Fixed oslotest EPOCH.
  * Removed Fix-broken-unit-tests-for-get_ilo_object.patch applied upstream.
  * Rebased requirements patches.

Change-Id: If529b5a15f540fcc7c35c2e8b60551f4f2e79a4f
Thomas Goirand 2016-09-28 09:28:42 +02:00
commit 8bd6eed0de
308 changed files with 10903 additions and 3410 deletions

View File

@ -1,30 +1,29 @@
======
Ironic
======
Ironic is an integrated OpenStack project which aims to provision bare
metal machines instead of virtual machines, forked from the Nova Baremetal
driver. It is best thought of as a bare metal hypervisor **API** and a set
of plugins which interact with the bare metal hypervisors. By default, it
will use PXE and IPMI together to provision and turn on/off machines,
but Ironic also supports vendor-specific plugins which may implement
additional functionality.
Ironic consists of an API and plug-ins for managing and provisioning
physical machines in a security-aware and fault-tolerant manner. It can be
used with nova as a hypervisor driver, or as a standalone service using bifrost.
By default, it will use PXE and IPMI to interact with bare metal machines.
Ironic also supports vendor-specific plug-ins which may implement additional
functionality.
-----------------
Project Resources
-----------------
Ironic is distributed under the terms of the Apache License, Version 2.0. The
full terms and conditions of this license are detailed in the LICENSE file.
Project resources
~~~~~~~~~~~~~~~~~
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/ironic
* Source: http://git.openstack.org/cgit/openstack/ironic
* Bugs: http://bugs.launchpad.net/ironic
* Wiki: https://wiki.openstack.org/wiki/Ironic
* APIs: http://developer.openstack.org/api-ref/baremetal/index.html
Project status, bugs and RFEs (requests for feature enhancements)
are tracked on Launchpad:
Project status, bugs, and requests for feature enhancements (RFEs) are tracked
on Launchpad:
http://launchpad.net/ironic
http://launchpad.net/ironic
Anyone wishing to contribute to an OpenStack project should
find a good reference here:
http://docs.openstack.org/infra/manual/developers.html
For information on how to contribute to ironic, see
http://docs.openstack.org/developer/ironic/dev/code-contribution-guide.html

api-ref/regenerate-samples.sh (new executable file, 171 lines)
View File

@ -0,0 +1,171 @@
#!/bin/bash
set -e -x
if [ ! -x /usr/bin/jq ]; then
echo "This script relies on 'jq' to process JSON output."
echo "Please install it before continuing."
exit 1
fi
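# Grab a keystone token to authenticate the API calls below; this assumes the
# usual OS_* credential environment variables are already set for the
# openstack CLI.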
OS_AUTH_TOKEN=$(openstack token issue | grep ' id ' | awk '{print $4}')
IRONIC_URL="http://127.0.0.1:6385"
export OS_AUTH_TOKEN IRONIC_URL
function GET {
# GET $RESOURCE
curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \
-H 'X-OpenStack-Ironic-API-Version: 1.22' \
${IRONIC_URL}/$1 | jq -S '.'
}
function POST {
# POST $RESOURCE $FILENAME
curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \
-H 'X-OpenStack-Ironic-API-Version: 1.22' \
-H "Content-Type: application/json" \
-X POST --data @$2 \
${IRONIC_URL}/$1 | jq -S '.'
}
function PATCH {
# PATCH $RESOURCE $FILENAME
curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \
-H 'X-OpenStack-Ironic-API-Version: 1.22' \
-H "Content-Type: application/json" \
-X PATCH --data @$2 \
${IRONIC_URL}/$1 | jq -S '.'
}
function PUT {
# PUT $RESOURCE $FILENAME
curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \
-H 'X-OpenStack-Ironic-API-Version: 1.22' \
-H "Content-Type: application/json" \
-X PUT --data @$2 \
${IRONIC_URL}/$1
}
pushd source/samples
###########
# ROOT APIs
GET '' > api-root-response.json
GET 'v1' > api-v1-root-response.json
###########
# DRIVER APIs
GET v1/drivers > drivers-list-response.json
GET v1/drivers/agent_ipmitool > driver-get-response.json
GET v1/drivers/agent_ipmitool/properties > driver-property-response.json
GET v1/drivers/agent_ipmitool/raid/logical_disk_properties > driver-logical-disk-properties-response.json
GET v1/drivers/agent_ipmitool/vendor_passthru/methods > driver-passthru-methods-response.json
#########
# CHASSIS
POST v1/chassis chassis-create-request.json > chassis-show-response.json
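# Pull the UUID of the newly created chassis out of the JSON response.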
CID=$(cat chassis-show-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/')
if [ "$CID" == "" ]; then
exit 1
else
echo "Chassis created. UUID: $CID"
fi
GET v1/chassis > chassis-list-response.json
GET v1/chassis/detail > chassis-list-details-response.json
PATCH v1/chassis/$CID chassis-update-request.json > chassis-update-response.json
# skip GET /v1/chassis/$UUID because the response is same as POST
#######
# NODES
# Create a node with a real driver, but missing ipmi_address,
# then do basic commands with it
POST v1/nodes node-create-request.json > node-create-response.json
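# Same grep/sed extraction as above, this time for the new node's UUID.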
NID=$(cat node-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/')
if [ "$NID" == "" ]; then
exit 1
else
echo "Node created. UUID: $NID"
fi
# get the list of passthru methods from agent* driver
GET v1/nodes/$NID/vendor_passthru/methods > node-vendor-passthru-response.json
# Change to the fake driver and then move the node into the AVAILABLE
# state without saving any output.
# NOTE that these three JSON files are not included in the docs
PATCH v1/nodes/$NID node-update-driver.json
PUT v1/nodes/$NID/states/provision node-set-manage-state.json
PUT v1/nodes/$NID/states/provision node-set-available-state.json
GET v1/nodes/$NID/validate > node-validate-response.json
PUT v1/nodes/$NID/states/power node-set-power-off.json
GET v1/nodes/$NID/states > node-get-state-response.json
GET v1/nodes > nodes-list-response.json
GET v1/nodes/detail > nodes-list-details-response.json
GET v1/nodes/$NID > node-show-response.json
# Put the Node in maintenance mode, then continue doing everything else
PUT v1/nodes/$NID/maintenance node-maintenance-request.json
###########
# PORTS
# Before we can create a port, we must
# write NODE ID into the create request document body
sed -i "s/.*node_uuid.*/ \"node_uuid\": \"$NID\",/" port-create-request.json
POST v1/ports port-create-request.json > port-create-response.json
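# And again, capture the new port's UUID.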
PID=$(cat port-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/')
if [ "$PID" == "" ]; then
exit 1
else
echo "Port created. UUID: $PID"
fi
GET v1/ports > port-list-respone.json
GET v1/ports/detail > port-list-detail-response.json
PATCH v1/ports/$PID port-update-request.json > port-update-response.json
# skip GET $PID because same result as POST
# skip DELETE
################
# NODE PORT APIs
GET v1/nodes/$NID/ports > node-port-list-response.json
GET v1/nodes/$NID/ports/detail > node-port-detail-response.json
############
# LOOKUP API
GET v1/lookup?node_uuid=$NID > lookup-node-response.json
#####################
# NODES MANAGEMENT API
# These need to be done while the node is in maintenance mode,
# and the node's driver is "fake", to avoid potential races
# with internal processes that lock the Node
# This corrects an intentional omission in some of the samples
PATCH v1/nodes/$NID node-update-driver-info-request.json > node-update-driver-info-response.json
GET v1/nodes/$NID/management/boot_device/supported > node-get-supported-boot-devices-response.json
PUT v1/nodes/$NID/management/boot_device node-set-boot-device.json
GET v1/nodes/$NID/management/boot_device > node-get-boot-device-response.json

View File

@ -21,7 +21,8 @@ List chassis with details
Lists all chassis with details.
Normal response codes: 200
Error response codes:413,405,404,403,401,400,503,
.. TODO: add error codes
Request
-------
@ -58,7 +59,8 @@ Show chassis details
Shows details for a chassis.
Normal response codes: 200
Error response codes:413,405,404,403,401,400,503,
.. TODO: add error codes
Request
-------
@ -83,6 +85,7 @@ Response Example
.. literalinclude:: samples/chassis-show-response.json
:language: javascript
Update chassis
==============
@ -91,11 +94,15 @@ Update chassis
Updates a chassis.
Normal response codes: 200
Error response codes:413,415,405,404,403,401,400,503,409,
.. TODO: add error codes
Request
-------
The BODY of the PATCH request must be a JSON PATCH document, adhering to
`RFC 6902 <https://tools.ietf.org/html/rfc6902>`_.
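As a rough sketch (the chassis UUID below is taken from the sample responses,
``$OS_AUTH_TOKEN`` is assumed to hold a valid token, and the default endpoint
is assumed), such an update could be sent with ``curl``::

    # Placeholder UUID and token; body matches samples/chassis-update-request.json
    curl -s -X PATCH \
        -H "X-Auth-Token: $OS_AUTH_TOKEN" \
        -H 'X-OpenStack-Ironic-API-Version: 1.22' \
        -H 'Content-Type: application/json' \
        --data '[{"op": "replace", "path": "/description", "value": "Updated Chassis"}]' \
        http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1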
.. rest_parameters:: parameters.yaml
- chassis: chassis
@ -125,7 +132,7 @@ Response Parameters
Response Example
----------------
.. literalinclude:: samples/chassis-show-response.json
.. literalinclude:: samples/chassis-update-response.json
:language: javascript
@ -136,7 +143,7 @@ Delete chassis
Deletes a chassis.
Error response codes:204,413,415,405,404,403,401,400,503,409,
.. TODO: add error codes
Request
-------
@ -178,6 +185,12 @@ Response Parameters
- nodes: nodes
- uuid: uuid
Response Example
----------------
.. literalinclude:: samples/chassis-show-response.json
:language: javascript
List chassis
============
@ -186,7 +199,8 @@ List chassis
Lists all chassis.
Normal response codes: 200
Error response codes:413,405,404,403,401,400,503,
.. TODO: add error codes
Request
-------

View File

@ -0,0 +1,108 @@
.. -*- rst -*-
=======
Utility
=======
This section describes two API endpoints used by the ``ironic-python-agent``
ramdisk as it communicates with the Bare Metal service. These were previously
exposed as vendor passthrough methods; however, as ironic-python-agent has
become the standard ramdisk agent, these methods have been made a part of the
official REST API.
.. note::
**Operators are reminded not to expose the Bare Metal Service's API to
unsecured networks.** Both API endpoints listed below are available to
*unauthenticated* clients because the default method for booting the
``ironic-python-agent`` ramdisk does not provide the agent with keystone
credentials.
.. note::
It is possible to include keys in your ramdisk, or pass keys in via the
boot method, if your driver supports it; if that is done, you may configure
these endpoints to require authentication by changing the policy rules
``baremetal:driver:ipa_lookup`` and ``baremetal:node:ipa_heartbeat``.
In light of that, operators should ensure that these endpoints are
only available on the ``provisioning`` and ``cleaning`` networks.
Agent Lookup
============
.. rest_method:: GET /v1/lookup
Beginning with the v1.22 API, a ``/lookup`` method is exposed at the root of
the REST API. This should only be used by the ``ironic-python-agent`` ramdisk
to retrieve required configuration data from the Bare Metal service.
By default, ``/v1/lookup`` will only match Nodes that are expected to be
running the ``ironic-python-agent`` ramdisk (for instance, because the Bare
Metal service has just initiated a deployment). It cannot be used as a
generic search mechanism, though this behaviour may be changed by setting
the ``[api] restrict_lookup = false`` configuration option for the ironic-api
service.
The query string should include a ``node_uuid`` query parameter, an
``addresses`` query parameter, or both. If a matching Node is found, information about
that Node shall be returned, including instance-specific information such as
the configdrive.
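As a minimal sketch (the node UUID is illustrative, reused from the sample
files), a lookup by node UUID looks like::

    # No X-Auth-Token header: this endpoint is available to unauthenticated clients
    curl -s \
        -H 'X-OpenStack-Ironic-API-Version: 1.22' \
        "http://127.0.0.1:6385/v1/lookup?node_uuid=6d85703a-565d-469a-96ce-30b6de53079d"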
This deprecates the ``agent``-driver specific ``vendor_passthru`` method of the
same name, previously accessible at
``/v1/drivers/agent_*/vendor_passthru?method=lookup``.
Normal response codes: 200
Error response codes: 400 404
Request
-------
.. rest_parameters:: parameters.yaml
- node_uuid: r_node_uuid
- addresses: r_addresses
Response
--------
Returns only the information about the corresponding Node that the
``ironic-python-agent`` process requires.
.. rest_parameters:: parameters.yaml
- node: agent_node
- config: agent_config
Response Example
----------------
.. literalinclude:: samples/lookup-node-response.json
:language: javascript
Agent Heartbeat
===============
.. rest_method:: POST /v1/heartbeat/{node_ident}
Beginning with the v1.22 API, a ``/heartbeat`` method is exposed at the root of
the REST API. This is used as a callback from within the ``ironic-python-agent``
ramdisk, so that an active ramdisk may periodically contact the Bare Metal
service and provide the current URL at which to contact the agent.
This deprecates the ``agent``-driver specific ``vendor_passthru`` method of the
same name, previously accessible at
``/v1/nodes/{node_ident}/vendor_passthru?method=heartbeat``.
Normal response codes: 202
Error response codes: 400 404
Request
-------
.. rest_parameters:: parameters.yaml
- node_ident: node_ident
- callback_url: callback_url
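A minimal sketch of such a callback, based on the parameters above (the node
UUID and agent URL are placeholders, and ``callback_url`` is passed as a query
parameter as documented here), is::

    # Placeholder node UUID and agent URL; a 202 response carries no body
    curl -s -X POST \
        -H 'X-OpenStack-Ironic-API-Version: 1.22' \
        "http://127.0.0.1:6385/v1/heartbeat/6d85703a-565d-469a-96ce-30b6de53079d?callback_url=http://192.0.2.1:9999"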

View File

@ -155,7 +155,7 @@ Request
**Example JSON request body to set boot device:**
.. literalinclude:: samples/node-get-or-set-boot-device.json
.. literalinclude:: samples/node-set-boot-device.json
Get Boot Device
@ -190,7 +190,7 @@ Response
**Example JSON response to get boot device:**
.. literalinclude:: samples/node-get-or-set-boot-device.json
.. literalinclude:: samples/node-get-boot-device-response.json
Get Supported Boot Devices

View File

@ -17,6 +17,11 @@ and by a unique human-readable "name" in any request. Throughout this
documentation, this is referred to as the ``node_ident``. Responses clearly
indicate whether a given field is a ``uuid`` or a ``name``.
Depending on the Roles assigned to the authenticated OpenStack User, and upon
the configuration of the Bare Metal service, API responses may change. For
example, the default value of the "show_password" setting causes all API
responses to mask passwords within ``driver_info`` with the literal string
"\*\*\*\*\*\*".
Create Node
===========
@ -71,7 +76,15 @@ API microversion 1.7 introduced the ``clean_step`` field.
API microversion 1.12 introduced support for the ``raid_config`` and
``target_raid_config`` fields.
The list and example below are representative of the response as of API microversion 1.16.
API microversion 1.20 introduced the ``network_interface`` field. If this field
is not supplied when creating the Node, the default value will be used.
API microversion 1.21 introduced the ``resource_class`` field, which may be used to
store a resource designation for the proposed OpenStack Placement Engine. This
field has no effect within Ironic.
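As an illustration of these newer fields (the ``resource_class`` value is a
placeholder and ``$OS_AUTH_TOKEN`` is assumed to hold a valid token), a create
request pinned to microversion 1.22 might look like::

    # Illustrative values; driver and network_interface match the sample files
    curl -s -X POST \
        -H "X-Auth-Token: $OS_AUTH_TOKEN" \
        -H 'X-OpenStack-Ironic-API-Version: 1.22' \
        -H 'Content-Type: application/json' \
        --data '{"name": "test_node", "driver": "agent_ipmitool", "network_interface": "flat", "resource_class": "baremetal-example"}' \
        http://127.0.0.1:6385/v1/nodes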
The list and example below are representative of the response as of API microversion 1.22.
.. rest_parameters:: parameters.yaml
@ -100,6 +113,8 @@ The list and example below are representative of the response as of API microver
- links: links
- ports: n_ports
- states: n_states
- network_interface: network_interface
- resource_class: resource_class
**Example JSON representation of a Node:**
@ -128,6 +143,9 @@ the list of returned Nodes to be filtered by their current state.
API microversion 1.16 added the ``driver`` Request parameter, allowing
the list of returned Nodes to be filtered by their driver name.
API microversion 1.21 added the ``resource_class`` Request parameter,
allowing the list of returned Nodes to be filtered by this field.
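For instance, assuming a deployment that tags nodes with an illustrative
``baremetal-example`` resource class and that ``$OS_AUTH_TOKEN`` holds a valid
token, the filtered listing could be requested as::

    # resource_class value is a placeholder; filter requires microversion 1.21+
    curl -s \
        -H "X-Auth-Token: $OS_AUTH_TOKEN" \
        -H 'X-OpenStack-Ironic-API-Version: 1.21' \
        "http://127.0.0.1:6385/v1/nodes?resource_class=baremetal-example"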
Normal response codes: 200
.. TODO: add error codes
@ -142,6 +160,7 @@ Request
- associated: r_associated
- provision_state: r_provision_state
- driver: r_driver
- resource_class: r_resource_class
- fields: fields
- limit: limit
- marker: marker
@ -192,6 +211,7 @@ Request
- associated: r_associated
- provision_state: r_provision_state
- driver: r_driver
- resource_class: r_resource_class
- limit: limit
- marker: marker
- sort_dir: sort_dir
@ -227,6 +247,8 @@ Response
- links: links
- ports: n_ports
- states: n_states
- network_interface: network_interface
- resource_class: resource_class
**Example detailed list of Nodes:**
@ -285,6 +307,8 @@ Response
- links: links
- ports: n_ports
- states: n_states
- network_interface: network_interface
- resource_class: resource_class
**Example JSON representation of a Node:**
@ -350,6 +374,8 @@ Response
- links: links
- ports: n_ports
- states: n_states
- network_interface: network_interface
- resource_class: resource_class
**Example JSON representation of a Node:**

View File

@ -14,6 +14,8 @@ supports versioning. There are two kinds of versions in Ironic.
- ''microversions'', which can be requested through the use of the
``X-OpenStack-Ironic-API-Version`` header.
The Version APIs work differently from other APIs as they *do not* require authentication.
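For example, assuming the default endpoint, the root version document (the one
shown in ``samples/api-root-response.json``) can be fetched without any token
or microversion header::

    # No authentication and no version header needed for version discovery
    curl -s http://127.0.0.1:6385/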
Beginning with the Kilo release, all API requests support the
``X-OpenStack-Ironic-API-Version`` header. This header SHOULD be supplied
with every request; in the absence of this header, each request is treated
@ -75,4 +77,4 @@ Response Example
- x-openstack-ironic-api-max-version: x-openstack-ironic-api-max-version
.. literalinclude:: samples/api-v1-root-response.json
:language: javascript
:language: javascript

View File

@ -28,6 +28,18 @@ import os
import subprocess
import sys
import openstackdocstheme
html_theme = 'openstackdocs'
html_theme_path = [openstackdocstheme.get_html_theme_path()]
html_theme_options = {
"sidebar_mode": "toc",
}
extensions = [
'os_api_ref',
]
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
@ -40,11 +52,6 @@ sys.path.insert(0, os.path.abspath('./'))
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'os_api_ref',
'oslosphinx',
]
# The suffix of source filenames.
source_suffix = '.rst'
@ -69,6 +76,14 @@ release = version_info.release_string()
# The short X.Y version.
version = version_info.version_string()
# Config logABug feature
giturl = u'http://git.openstack.org/cgit/openstack/ironic/tree/api-ref/source'
# source tree
# html_context allows us to pass arbitrary values into the html template
html_context = {"bug_tag": "api-ref",
"giturl": giturl,
"bug_project": "ironic"}
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#

View File

@ -4,14 +4,6 @@
Bare Metal API
================
This documentation describes the REST API for the Ironic service, beginning with the
5.1.0 (Mitaka) release.
Version negotiation is implemented in the server. When the negotiated version
is not the current maximum version, both request and response may not match what
is presented in this document. Significant changes may be noted inline.
.. rest_expand_all::
.. include:: baremetal-api-versions.inc
@ -23,4 +15,5 @@ is presented in this document. Significant changes may be noted inline.
.. include:: baremetal-api-v1-drivers.inc
.. include:: baremetal-api-v1-driver-passthru.inc
.. include:: baremetal-api-v1-chassis.inc
.. include:: baremetal-api-v1-misc.inc

View File

@ -15,7 +15,7 @@ openstack-request-id:
type: string
x-openstack-ironic-api-max-version:
description: |
Maximum API microversion supported by this endpoint, eg. "1.16"
Maximum API microversion supported by this endpoint, eg. "1.22"
in: header
required: true
type: string
@ -70,6 +70,14 @@ port_ident:
type: string
callback_url:
description: |
The URL of an active ironic-python-agent ramdisk, sent back to the Bare
Metal service and stored temporarily during a provisioning action.
in: query
required: true
type: string
# variables common to all query strings
fields:
description: |
@ -114,6 +122,14 @@ method_name:
required: true
type: string
# variable in the lookup query string
r_addresses:
description: |
Optional list of one or more Port addresses.
in: query
required: false
type: list
# variables in the node query string
r_associated:
description: |
@ -143,6 +159,13 @@ r_maintenance:
in: query
required: false
type: boolean
# variable in the lookup query string
r_node_uuid:
description: |
Optional Node UUID.
in: query
required: false
type: string
r_port_address:
description: |
Filter the list of returned Ports, and only return the ones with the
@ -172,6 +195,13 @@ r_provision_state:
in: query
required: false
type: string
r_resource_class:
description: |
Filter the list of returned nodes, and only return the ones with the
specified resource class. Introduced in API version 1.21.
in: query
required: false
type: string
sort_dir:
description: |
Sorts the response by the requested sort
@ -196,6 +226,22 @@ sort_key:
type: string
# variable returned from /lookup
agent_config:
description: |
JSON document of configuration data for the ironic-python-agent process.
in: body
required: true
type: JSON
agent_node:
description: |
JSON document containing a subset of Node fields, used by the
ironic-python-agent process as it operates on the Node.
in: body
required: true
type: JSON
# variables in the API response body
boot_device:
description: |
@ -309,15 +355,15 @@ id:
type: string
inspection_finished_at:
description: |
The UTC date and time when the resource was created,
`ISO 8601 <https://en.wikipedia.org/wiki/ISO_8601>`_ format.
The UTC date and time when the last hardware inspection finished
successfully, `ISO 8601 <https://en.wikipedia.org/wiki/ISO_8601>`_ format.
May be "null".
in: body
required: true
type: string
inspection_started_at:
description: |
The UTC date and time when the resource was created,
The UTC date and time when the hardware inspection was started,
`ISO 8601 <https://en.wikipedia.org/wiki/ISO_8601>`_ format.
May be "null".
in: body
@ -412,6 +458,13 @@ name:
in: body
required: true
type: string
network_interface:
description: |
Which Network Interface provider to use when plumbing the network
connections for this Node. Added in API microversion 1.20.
in: body
required: true
type: string
node_name:
description: |
Human-readable identifier for the Node resource. May be undefined. Certain
@ -545,6 +598,14 @@ reservation:
in: body
required: true
type: string
resource_class:
description: |
A string which can be used by external schedulers to identify this Node as
a unit of a specific type of resource. This will be used by the openstack
Placement Engine in a future release. Added in API microversion 1.21.
in: body
required: true
type: string
supported_boot_devices:
description: |
List of boot devices which this Node's driver supports.
@ -637,7 +698,7 @@ v_raid:
version:
description: |
Versioning of this API response, eg. "1.16".
Versioning of this API response, eg. "1.22".
in: body
required: true
type: string

View File

@ -1,30 +1,30 @@
{
"name" : "OpenStack Ironic API",
"description" : "Ironic is an OpenStack project which aims to provision baremetal machines.",
"default_version" : {
"status" : "CURRENT",
"version" : "1.16",
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/"
}
],
"id" : "v1",
"min_version" : "1.1"
},
"versions" : [
"default_version": {
"id": "v1",
"links": [
{
"status" : "CURRENT",
"links" : [
{
"href" : "http://127.0.0.1:6385/v1/",
"rel" : "self"
}
],
"id" : "v1",
"version" : "1.16",
"min_version" : "1.1"
"href": "http://127.0.0.1:6385/v1/",
"rel": "self"
}
]
],
"min_version": "1.1",
"status": "CURRENT",
"version": "1.22"
},
"description": "Ironic is an OpenStack project which aims to provision baremetal machines.",
"name": "OpenStack Ironic API",
"versions": [
{
"id": "v1",
"links": [
{
"href": "http://127.0.0.1:6385/v1/",
"rel": "self"
}
],
"min_version": "1.1",
"status": "CURRENT",
"version": "1.22"
}
]
}

View File

@ -1,60 +1,80 @@
{
"chassis" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/chassis/"
},
{
"href" : "http://127.0.0.1:6385/chassis/",
"rel" : "bookmark"
}
],
"links" : [
{
"href" : "http://127.0.0.1:6385/v1/",
"rel" : "self"
},
{
"rel" : "describedby",
"type" : "text/html",
"href" : "http://docs.openstack.org/developer/ironic/dev/api-spec-v1.html"
}
],
"nodes" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/nodes/"
}
],
"ports" : [
{
"href" : "http://127.0.0.1:6385/v1/ports/",
"rel" : "self"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/ports/"
}
],
"media_types" : [
{
"type" : "application/vnd.openstack.ironic.v1+json",
"base" : "application/json"
}
],
"drivers" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/drivers/"
},
{
"href" : "http://127.0.0.1:6385/drivers/",
"rel" : "bookmark"
}
],
"id" : "v1"
}
"chassis": [
{
"href": "http://127.0.0.1:6385/v1/chassis/",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/",
"rel": "bookmark"
}
],
"drivers": [
{
"href": "http://127.0.0.1:6385/v1/drivers/",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/",
"rel": "bookmark"
}
],
"heartbeat": [
{
"href": "http://127.0.0.1:6385/v1/heartbeat/",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/heartbeat/",
"rel": "bookmark"
}
],
"id": "v1",
"links": [
{
"href": "http://127.0.0.1:6385/v1/",
"rel": "self"
},
{
"href": "http://docs.openstack.org/developer/ironic/dev/api-spec-v1.html",
"rel": "describedby",
"type": "text/html"
}
],
"lookup": [
{
"href": "http://127.0.0.1:6385/v1/lookup/",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/lookup/",
"rel": "bookmark"
}
],
"media_types": [
{
"base": "application/json",
"type": "application/vnd.openstack.ironic.v1+json"
}
],
"nodes": [
{
"href": "http://127.0.0.1:6385/v1/nodes/",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/",
"rel": "bookmark"
}
],
"ports": [
{
"href": "http://127.0.0.1:6385/v1/ports/",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/ports/",
"rel": "bookmark"
}
]
}

View File

@ -1,7 +1,3 @@
{
"chassis": [
{
"description": "Sample chassis"
}
]
"description": "Sample chassis"
}

View File

@ -1,18 +1,31 @@
{
"chassis": [
"chassis": [
{
"created_at": "2016-08-18T22:28:48.165105+00:00",
"description": "Sample chassis",
"extra": {},
"links": [
{
"description": "Sample chassis",
"links": [
{
"href": "http://localhost:6385/v1/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89",
"rel": "self"
},
{
"href": "http://localhost:6385/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89",
"rel": "bookmark"
}
],
"uuid": "eaaca217-e7d8-47b4-bb41-3f99f20eed89"
"href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "bookmark"
}
]
],
"nodes": [
{
"href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes",
"rel": "bookmark"
}
],
"updated_at": null,
"uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1"
}
]
}

View File

@ -1,18 +1,18 @@
{
"chassis": [
"chassis": [
{
"description": "Sample chassis",
"links": [
{
"description": "Sample chassis",
"links": [
{
"href": "http://localhost:6385/v1/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89",
"rel": "self"
},
{
"href": "http://localhost:6385/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89",
"rel": "bookmark"
}
],
"uuid": "eaaca217-e7d8-47b4-bb41-3f99f20eed89"
"href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "bookmark"
}
]
],
"uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1"
}
]
}

View File

@ -1,27 +1,27 @@
{
"created_at": "2000-01-01T12:00:00",
"description": "Sample chassis",
"extra": {},
"links": [
{
"href": "http://localhost:6385/v1/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89",
"rel": "self"
},
{
"href": "http://localhost:6385/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89",
"rel": "bookmark"
}
],
"nodes": [
{
"href": "http://localhost:6385/v1/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89/nodes",
"rel": "self"
},
{
"href": "http://localhost:6385/chassis/eaaca217-e7d8-47b4-bb41-3f99f20eed89/nodes",
"rel": "bookmark"
}
],
"updated_at": "2000-01-01T12:00:00",
"uuid": "eaaca217-e7d8-47b4-bb41-3f99f20eed89"
"created_at": "2016-08-18T22:28:48.165105+00:00",
"description": "Sample chassis",
"extra": {},
"links": [
{
"href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "bookmark"
}
],
"nodes": [
{
"href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes",
"rel": "bookmark"
}
],
"updated_at": null,
"uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1"
}

View File

@ -1,7 +1,7 @@
{
"chassis": [
{
"description": "Sample chassis"
}
]
}
[
{
"op": "replace",
"path": "/description",
"value": "Updated Chassis"
}
]

View File

@ -0,0 +1,27 @@
{
"created_at": "2016-08-18T22:28:48.165105+00:00",
"description": "Updated Chassis",
"extra": {},
"links": [
{
"href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1",
"rel": "bookmark"
}
],
"nodes": [
{
"href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes",
"rel": "bookmark"
}
],
"updated_at": "2016-08-18T22:28:48.556556+00:00",
"uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1"
}

View File

@ -1,26 +1,26 @@
{
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/drivers/agent_ipmitool"
},
{
"href" : "http://127.0.0.1:6385/drivers/agent_ipmitool",
"rel" : "bookmark"
}
],
"name" : "agent_ipmitool",
"properties" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/drivers/agent_ipmitool/properties"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/drivers/agent_ipmitool/properties"
}
],
"hosts" : [
"localhost"
]
"hosts": [
"897ab1dad809"
],
"links": [
{
"href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/agent_ipmitool",
"rel": "bookmark"
}
],
"name": "agent_ipmitool",
"properties": [
{
"href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool/properties",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/agent_ipmitool/properties",
"rel": "bookmark"
}
]
}

View File

@ -1,12 +1,12 @@
{
"share_physical_disks" : "Specifies whether other logical disks can share physical disks with this logical disk. By default, this is False. Optional.",
"controller" : "Controller to use for this logical disk. If not specified, the driver will choose a suitable RAID controller on the bare metal node. Optional.",
"disk_type" : "The type of disk preferred. Valid values are 'hdd' and 'ssd'. If this is not specified, disk type will not be a selection criterion for choosing backing physical disks. Optional.",
"physical_disks" : "The physical disks to use for this logical disk. If not specified, the driver will choose suitable physical disks to use. Optional.",
"volume_name" : "Name of the volume to be created. If this is not specified, it will be auto-generated. Optional.",
"number_of_physical_disks" : "Number of physical disks to use for this logical disk. By default, the driver uses the minimum number of disks required for that RAID level. Optional.",
"raid_level" : "RAID level for the logical disk. Valid values are '0', '1', '2', '5', '6', '1+0', '5+0' and '6+0'. Required.",
"size_gb" : "Size in GiB (Integer) for the logical disk. Use 'MAX' as size_gb if this logical disk is supposed to use the rest of the space available. Required.",
"interface_type" : "The interface type of disk. Valid values are 'sata', 'scsi' and 'sas'. If this is not specified, interface type will not be a selection criterion for choosing backing physical disks. Optional.",
"is_root_volume" : "Specifies whether this disk is a root volume. By default, this is False. Optional."
"controller": "Controller to use for this logical disk. If not specified, the driver will choose a suitable RAID controller on the bare metal node. Optional.",
"disk_type": "The type of disk preferred. Valid values are 'hdd' and 'ssd'. If this is not specified, disk type will not be a selection criterion for choosing backing physical disks. Optional.",
"interface_type": "The interface type of disk. Valid values are 'sata', 'scsi' and 'sas'. If this is not specified, interface type will not be a selection criterion for choosing backing physical disks. Optional.",
"is_root_volume": "Specifies whether this disk is a root volume. By default, this is False. Optional.",
"number_of_physical_disks": "Number of physical disks to use for this logical disk. By default, the driver uses the minimum number of disks required for that RAID level. Optional.",
"physical_disks": "The physical disks to use for this logical disk. If not specified, the driver will choose suitable physical disks to use. Optional.",
"raid_level": "RAID level for the logical disk. Valid values are 'JBOD', 0', '1', '2', '5', '6', '1+0', '5+0' and '6+0'. Required.",
"share_physical_disks": "Specifies whether other logical disks can share physical disks with this logical disk. By default, this is False. Optional.",
"size_gb": "Size in GiB (Integer) for the logical disk. Use 'MAX' as size_gb if this logical disk is supposed to use the rest of the space available. Required.",
"volume_name": "Name of the volume to be created. If this is not specified, it will be auto-generated. Optional."
}

View File

@ -1,10 +1,10 @@
{
"lookup" : {
"http_methods" : [
"POST"
],
"attach" : false,
"description" : "",
"async" : false
}
"lookup": {
"async": false,
"attach": false,
"description": "",
"http_methods": [
"POST"
]
}
}

View File

@ -1,22 +1,22 @@
{
"ipmi_force_boot_device" : "Whether Ironic should specify the boot device to the BMC each time the server is turned on, eg. because the BMC is not capable of remembering the selected boot device across power cycles; default value is False. Optional.",
"deploy_forces_oob_reboot" : "Whether Ironic should force a reboot of the Node via the out-of-band channel after deployment is complete. Provides compatiblity with older deploy ramdisks. Defaults to False. Optional.",
"ipmi_target_address" : "destination address for bridged request. Required only if ipmi_bridging is set to \"single\" or \"dual\".",
"image_https_proxy" : "URL of a proxy server for HTTPS connections. Optional.",
"ipmi_password" : "password. Optional.",
"ipmi_bridging" : "bridging_type; default is \"no\". One of \"single\", \"dual\", \"no\". Optional.",
"deploy_kernel" : "UUID (from Glance) of the deployment kernel. Required.",
"ipmi_address" : "IP address or hostname of the node. Required.",
"image_no_proxy" : "A comma-separated list of host names, IP addresses and domain names (with optional :port) that will be excluded from proxying. To denote a doman name, use a dot to prefix the domain name. This value will be ignored if ``image_http_proxy`` and ``image_https_proxy`` are not specified. Optional.",
"ipmi_local_address" : "local IPMB address for bridged requests. Used only if ipmi_bridging is set to \"single\" or \"dual\". Optional.",
"ipmi_transit_channel" : "transit channel for bridged request. Required only if ipmi_bridging is set to \"dual\".",
"ipmi_transit_address" : "transit address for bridged request. Required only if ipmi_bridging is set to \"dual\".",
"ipmi_username" : "username; default is NULL user. Optional.",
"deploy_ramdisk" : "UUID (from Glance) of the ramdisk that is mounted at boot time. Required.",
"ipmi_target_channel" : "destination channel for bridged request. Required only if ipmi_bridging is set to \"single\" or \"dual\".",
"ipmi_terminal_port" : "node's UDP port to connect to. Only required for console access.",
"image_http_proxy" : "URL of a proxy server for HTTP connections. Optional.",
"ipmi_priv_level" : "privilege level; default is ADMINISTRATOR. One of ADMINISTRATOR, CALLBACK, OPERATOR, USER. Optional.",
"ipmi_protocol_version" : "the version of the IPMI protocol; default is \"2.0\". One of \"1.5\", \"2.0\". Optional.",
"ipmi_port" : "remote IPMI RMCP port. Optional."
"deploy_forces_oob_reboot": "Whether Ironic should force a reboot of the Node via the out-of-band channel after deployment is complete. Provides compatibility with older deploy ramdisks. Defaults to False. Optional.",
"deploy_kernel": "UUID (from Glance) of the deployment kernel. Required.",
"deploy_ramdisk": "UUID (from Glance) of the ramdisk that is mounted at boot time. Required.",
"image_http_proxy": "URL of a proxy server for HTTP connections. Optional.",
"image_https_proxy": "URL of a proxy server for HTTPS connections. Optional.",
"image_no_proxy": "A comma-separated list of host names, IP addresses and domain names (with optional :port) that will be excluded from proxying. To denote a doman name, use a dot to prefix the domain name. This value will be ignored if ``image_http_proxy`` and ``image_https_proxy`` are not specified. Optional.",
"ipmi_address": "IP address or hostname of the node. Required.",
"ipmi_bridging": "bridging_type; default is \"no\". One of \"single\", \"dual\", \"no\". Optional.",
"ipmi_force_boot_device": "Whether Ironic should specify the boot device to the BMC each time the server is turned on, eg. because the BMC is not capable of remembering the selected boot device across power cycles; default value is False. Optional.",
"ipmi_local_address": "local IPMB address for bridged requests. Used only if ipmi_bridging is set to \"single\" or \"dual\". Optional.",
"ipmi_password": "password. Optional.",
"ipmi_port": "remote IPMI RMCP port. Optional.",
"ipmi_priv_level": "privilege level; default is ADMINISTRATOR. One of ADMINISTRATOR, CALLBACK, OPERATOR, USER. Optional.",
"ipmi_protocol_version": "the version of the IPMI protocol; default is \"2.0\". One of \"1.5\", \"2.0\". Optional.",
"ipmi_target_address": "destination address for bridged request. Required only if ipmi_bridging is set to \"single\" or \"dual\".",
"ipmi_target_channel": "destination channel for bridged request. Required only if ipmi_bridging is set to \"single\" or \"dual\".",
"ipmi_terminal_port": "node's UDP port to connect to. Only required for console access.",
"ipmi_transit_address": "transit address for bridged request. Required only if ipmi_bridging is set to \"dual\".",
"ipmi_transit_channel": "transit channel for bridged request. Required only if ipmi_bridging is set to \"dual\".",
"ipmi_username": "username; default is NULL user. Optional."
}

View File

@ -1,30 +1,108 @@
{
"drivers" : [
{
"hosts" : [
"localhost"
],
"links" : [
{
"href" : "http://127.0.0.1:6385/v1/drivers/agent_ipmitool",
"rel" : "self"
},
{
"href" : "http://127.0.0.1:6385/drivers/agent_ipmitool",
"rel" : "bookmark"
}
],
"name" : "agent_ipmitool",
"properties" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/drivers/agent_ipmitool/properties"
},
{
"href" : "http://127.0.0.1:6385/drivers/agent_ipmitool/properties",
"rel" : "bookmark"
}
]
}
]
"drivers": [
{
"hosts": [
"897ab1dad809"
],
"links": [
{
"href": "http://127.0.0.1:6385/v1/drivers/agent_ssh",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/agent_ssh",
"rel": "bookmark"
}
],
"name": "agent_ssh",
"properties": [
{
"href": "http://127.0.0.1:6385/v1/drivers/agent_ssh/properties",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/agent_ssh/properties",
"rel": "bookmark"
}
]
},
{
"hosts": [
"897ab1dad809"
],
"links": [
{
"href": "http://127.0.0.1:6385/v1/drivers/pxe_ipmitool",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/pxe_ipmitool",
"rel": "bookmark"
}
],
"name": "pxe_ipmitool",
"properties": [
{
"href": "http://127.0.0.1:6385/v1/drivers/pxe_ipmitool/properties",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/pxe_ipmitool/properties",
"rel": "bookmark"
}
]
},
{
"hosts": [
"897ab1dad809"
],
"links": [
{
"href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/agent_ipmitool",
"rel": "bookmark"
}
],
"name": "agent_ipmitool",
"properties": [
{
"href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool/properties",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/agent_ipmitool/properties",
"rel": "bookmark"
}
]
},
{
"hosts": [
"897ab1dad809"
],
"links": [
{
"href": "http://127.0.0.1:6385/v1/drivers/fake",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/fake",
"rel": "bookmark"
}
],
"name": "fake",
"properties": [
{
"href": "http://127.0.0.1:6385/v1/drivers/fake/properties",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/drivers/fake/properties",
"rel": "bookmark"
}
]
}
]
}

View File

@ -0,0 +1,34 @@
{
"config": {
"heartbeat_timeout": 300,
"metrics": {
"backend": "noop",
"global_prefix": null,
"prepend_host": false,
"prepend_host_reverse": true,
"prepend_uuid": false
},
"metrics_statsd": {
"statsd_host": "localhost",
"statsd_port": 8125
}
},
"node": {
"driver_internal_info": {
"clean_steps": null
},
"instance_info": {},
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "bookmark"
}
],
"properties": {},
"uuid": "6d85703a-565d-469a-96ce-30b6de53079d"
}
}

View File

@ -1,4 +1,8 @@
{
"name": "test_node",
"driver": "agent_ipmitool",
"name": "test_node"
"driver_info": {
"ipmi_username": "ADMIN",
"ipmi_password": "password"
}
}

View File

@ -1,58 +1,63 @@
{
"last_error" : null,
"extra" : {},
"reservation" : null,
"driver" : "agent_ipmitool",
"instance_info" : {},
"created_at" : "2016-05-04T22:59:49.300836+00:00",
"raid_config" : {},
"uuid" : "14deb747-127c-4fe4-be9d-906c43006cd4",
"maintenance_reason" : null,
"target_provision_state" : null,
"ports" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/14deb747-127c-4fe4-be9d-906c43006cd4/ports"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/nodes/14deb747-127c-4fe4-be9d-906c43006cd4/ports"
}
],
"power_state" : null,
"instance_uuid" : null,
"name" : "test_node_",
"properties" : {},
"clean_step" : {},
"console_enabled" : false,
"driver_internal_info" : {},
"target_power_state" : null,
"inspection_started_at" : null,
"provision_state" : "enroll",
"provision_updated_at" : null,
"driver_info" : {},
"inspection_finished_at" : null,
"updated_at" : null,
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/14deb747-127c-4fe4-be9d-906c43006cd4"
},
{
"href" : "http://127.0.0.1:6385/nodes/14deb747-127c-4fe4-be9d-906c43006cd4",
"rel" : "bookmark"
}
],
"target_raid_config" : {},
"maintenance" : false,
"states" : [
{
"href" : "http://127.0.0.1:6385/v1/nodes/14deb747-127c-4fe4-be9d-906c43006cd4/states",
"rel" : "self"
},
{
"href" : "http://127.0.0.1:6385/nodes/14deb747-127c-4fe4-be9d-906c43006cd4/states",
"rel" : "bookmark"
}
]
"clean_step": {},
"console_enabled": false,
"created_at": "2016-08-18T22:28:48.643434+00:00",
"driver": "agent_ipmitool",
"driver_info": {
"ipmi_password": "******",
"ipmi_username": "ADMIN"
},
"driver_internal_info": {},
"extra": {},
"inspection_finished_at": null,
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"last_error": null,
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "bookmark"
}
],
"maintenance": false,
"maintenance_reason": null,
"name": "test_node",
"network_interface": "flat",
"ports": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "bookmark"
}
],
"power_state": null,
"properties": {},
"provision_state": "enroll",
"provision_updated_at": null,
"raid_config": {},
"reservation": null,
"resource_class": null,
"states": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "bookmark"
}
],
"target_power_state": null,
"target_provision_state": null,
"target_raid_config": {},
"updated_at": null,
"uuid": "6d85703a-565d-469a-96ce-30b6de53079d"
}

View File

@ -0,0 +1,4 @@
{
"boot_device": "pxe",
"persistent": false
}

View File

@ -1,11 +1,11 @@
{
"last_error" : "",
"target_raid_config" : {},
"target_power_state" : null,
"console_enabled" : false,
"target_provision_state" : null,
"provision_updated_at" : null,
"power_state" : "power off",
"raid_config" : {},
"provision_state" : "available"
"console_enabled": false,
"last_error": null,
"power_state": "power off",
"provision_state": "available",
"provision_updated_at": "2016-08-18T22:28:49.382814+00:00",
"raid_config": {},
"target_power_state": null,
"target_provision_state": null,
"target_raid_config": {}
}

View File

@ -1,9 +1,5 @@
{
"supported_boot_devices" : [
"pxe",
"disk",
"cdrom",
"bios",
"safe"
]
"supported_boot_devices": [
"pxe"
]
}

View File

@ -1,29 +1,29 @@
{
"ports" : [
{
"extra" : {},
"address" : "22:22:22:22:22:22",
"updated_at" : "2016-05-05T22:48:52+00:00",
"node_uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2"
},
{
"href" : "http://127.0.0.1:6385/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "bookmark"
}
],
"created_at" : "2016-05-05T22:30:57+00:00",
"uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"pxe_enabled": true,
"local_link_connection": {
"switch_id": "0a:1b:2c:3d:4e:5f",
"port_id": "Ethernet3/1",
"switch_info": "switch1"
},
"internal_info": {}
}
]
"ports": [
{
"address": "22:22:22:22:22:22",
"created_at": "2016-08-18T22:28:49.946416+00:00",
"extra": {},
"internal_info": {},
"links": [
{
"href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "bookmark"
}
],
"local_link_connection": {
"port_id": "Ethernet3/1",
"switch_id": "0a:1b:2c:3d:4e:5f",
"switch_info": "switch1"
},
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"pxe_enabled": true,
"updated_at": "2016-08-18T22:28:50.148137+00:00",
"uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1"
}
]
}

View File

@ -1,18 +1,18 @@
{
"ports" : [
{
"uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"links" : [
{
"href" : "http://127.0.0.1:6385/v1/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "self"
},
{
"href" : "http://127.0.0.1:6385/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "bookmark"
}
],
"address" : "22:22:22:22:22:22"
}
]
"ports": [
{
"address": "22:22:22:22:22:22",
"links": [
{
"href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "bookmark"
}
],
"uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1"
}
]
}

View File

@ -0,0 +1,3 @@
{
"target": "provide"
}

View File

@ -2,11 +2,11 @@
"target": "clean",
"clean_steps": [
{
'interface': 'deploy',
'step': 'upgrade_firmware',
'args': {
'force': True
"interface": "deploy",
"step": "upgrade_firmware",
"args": {
"force": "True"
}
}
]
}
}

View File

@ -0,0 +1,3 @@
{
"target": "manage"
}

View File

@ -1,71 +1,65 @@
{
"target_provision_state" : null,
"instance_info" : {},
"updated_at" : "2016-05-05T00:28:40+00:00",
"maintenance_reason" : null,
"inspection_started_at" : null,
"target_power_state" : null,
"ports" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/ports"
},
{
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/ports",
"rel" : "bookmark"
}
],
"maintenance" : false,
"driver" : "fake",
"provision_state" : "available",
"reservation" : null,
"uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"extra" : {
"foo" : "bar"
},
"driver_internal_info" : {},
"states" : [
{
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/states",
"rel" : "self"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/states"
}
],
"target_raid_config" : {},
"console_enabled" : false,
"clean_step" : {},
"last_error" : null,
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb"
}
],
"provision_updated_at" : null,
"name" : "test_node",
"properties" : {
"local_gb" : 10,
"cpu_arch" : "x86_64",
"cpus" : 1,
"memory_mb" : 1024
},
"power_state" : "power off",
"created_at" : "2016-04-20T16:51:03+00:00",
"instance_uuid" : null,
"raid_config" : {},
"driver_info" : {
"ipmi_password" : "***",
"ipmi_username" : "ADMIN",
"ipmi_address" : "1.2.3.4",
"deploy_kernel" : "http://127.0.0.1/images/kernel",
"deploy_ramdisk" : "http://127.0.0.1/images/ramdisk"
},
"inspection_finished_at" : null
"clean_step": {},
"console_enabled": false,
"created_at": "2016-08-18T22:28:48.643434+00:00",
"driver": "fake",
"driver_info": {
"ipmi_password": "******",
"ipmi_username": "ADMIN"
},
"driver_internal_info": {
"clean_steps": null
},
"extra": {},
"inspection_finished_at": null,
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"last_error": null,
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "bookmark"
}
],
"maintenance": false,
"maintenance_reason": null,
"name": "test_node",
"network_interface": "flat",
"ports": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "bookmark"
}
],
"power_state": "power off",
"properties": {},
"provision_state": "available",
"provision_updated_at": "2016-08-18T22:28:49.382814+00:00",
"raid_config": {},
"reservation": null,
"resource_class": null,
"states": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "bookmark"
}
],
"target_power_state": null,
"target_provision_state": null,
"target_raid_config": {},
"updated_at": "2016-08-18T22:28:49.653974+00:00",
"uuid": "6d85703a-565d-469a-96ce-30b6de53079d"
}

View File

@ -1,12 +1,17 @@
[
{
"op" : "replace",
"path" : "/driver_info/ipmi_username",
"value" : "OPERATOR"
"op": "replace",
"path": "/driver_info/ipmi_username",
"value": "OPERATOR"
},
{
"value" : "10.0.0.123",
"op" : "replace",
"path" : "/driver_info/ipmi_address"
"op": "add",
"path": "/driver_info/deploy_kernel",
"value": "http://127.0.0.1/images/kernel"
},
{
"op": "add",
"path": "/driver_info/deploy_ramdisk",
"value": "http://127.0.0.1/images/ramdisk"
}
]

View File

@ -1,71 +1,67 @@
{
"properties" : {
"memory_mb" : 1024,
"cpus" : 1,
"local_gb" : 10,
"cpu_arch" : "x86_64"
},
"maintenance_reason" : null,
"instance_info" : {},
"states" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/states"
},
{
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/states",
"rel" : "bookmark"
}
],
"driver_internal_info" : {},
"power_state" : "power off",
"console_enabled" : false,
"last_error" : null,
"target_raid_config" : {},
"maintenance" : false,
"provision_state" : "available",
"uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb"
},
{
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"rel" : "bookmark"
}
],
"clean_step" : {},
"created_at" : "2016-04-20T16:51:03+00:00",
"instance_uuid" : null,
"target_power_state" : null,
"driver_info" : {
"ipmi_address" : "10.0.0.123",
"deploy_ramdisk" : "http://127.0.0.1/images/ramdisk",
"deploy_kernel" : "http://127.0.0.1/images/kernel",
"ipmi_password" : "***",
"ipmi_username" : "OPERATOR"
},
"inspection_started_at" : null,
"raid_config" : {},
"inspection_finished_at" : null,
"reservation" : null,
"target_provision_state" : null,
"extra" : {
"foo" : "bar"
},
"driver" : "fake",
"name" : "test_node",
"updated_at" : "2016-05-05T18:43:41+00:00",
"ports" : [
{
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/ports",
"rel" : "self"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/ports"
}
],
"provision_updated_at" : null
"clean_step": {},
"console_enabled": false,
"created_at": "2016-08-18T22:28:48+00:00",
"driver": "fake",
"driver_info": {
"deploy_kernel": "http://127.0.0.1/images/kernel",
"deploy_ramdisk": "http://127.0.0.1/images/ramdisk",
"ipmi_password": "******",
"ipmi_username": "OPERATOR"
},
"driver_internal_info": {
"clean_steps": null
},
"extra": {},
"inspection_finished_at": null,
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"last_error": null,
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "bookmark"
}
],
"maintenance": true,
"maintenance_reason": "Replacing the hard drive",
"name": "test_node",
"network_interface": "flat",
"ports": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "bookmark"
}
],
"power_state": "power off",
"properties": {},
"provision_state": "available",
"provision_updated_at": "2016-08-18T22:28:49+00:00",
"raid_config": {},
"reservation": null,
"resource_class": null,
"states": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "bookmark"
}
],
"target_power_state": null,
"target_provision_state": null,
"target_raid_config": {},
"updated_at": "2016-08-18T22:28:50+00:00",
"uuid": "6d85703a-565d-469a-96ce-30b6de53079d"
}

View File

@ -0,0 +1,7 @@
[
{
"op" : "replace",
"path" : "/driver",
"value" : "fake"
}
]

View File

@ -1,27 +1,26 @@
{
"management" : {
"result" : true
},
"inspect" : {
"result" : null,
"reason" : "not supported"
},
"power" : {
"result" : true
},
"raid" : {
"result" : true
},
"boot" : {
"result" : false,
"reason" : "Cannot validate image information for node ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb because one or more parameters are missing from its instance_info.. Missing are: ['ramdisk', 'kernel', 'image_source']"
},
"console" : {
"result" : false,
"reason" : "Missing 'ipmi_terminal_port' parameter in node's driver_info."
},
"deploy" : {
"reason" : "Cannot validate image information for node ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb because one or more parameters are missing from its instance_info.. Missing are: ['ramdisk', 'kernel', 'image_source']",
"result" : false
}
"boot": {
"result": true
},
"console": {
"result": true
},
"deploy": {
"result": true
},
"inspect": {
"result": true
},
"management": {
"result": true
},
"network": {
"result": true
},
"power": {
"result": true
},
"raid": {
"result": true
}
}

View File

@ -1,26 +1,29 @@
{
"bmc_reset" : {
"async" : true,
"description" : "",
"http_methods" : [
"POST"
],
"attach" : false
},
"send_raw" : {
"description" : "",
"attach" : false,
"http_methods" : [
"POST"
],
"async" : true
},
"heartbeat" : {
"async" : true,
"attach" : false,
"http_methods" : [
"POST"
],
"description" : ""
}
"bmc_reset": {
"async": true,
"attach": false,
"description": "",
"http_methods": [
"POST"
],
"require_exclusive_lock": true
},
"heartbeat": {
"async": true,
"attach": false,
"description": "",
"http_methods": [
"POST"
],
"require_exclusive_lock": true
},
"send_raw": {
"async": true,
"attach": false,
"description": "",
"http_methods": [
"POST"
],
"require_exclusive_lock": true
}
}

View File

@ -1,75 +1,69 @@
{
"nodes" : [
{
"reservation" : null,
"driver" : "agent_ipmitool",
"uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"extra" : {
"foo" : "bar"
},
"provision_updated_at" : null,
"provision_state" : "available",
"clean_step" : {},
"maintenance" : false,
"driver_internal_info" : {},
"console_enabled" : false,
"raid_config" : {},
"target_raid_config" : {},
"inspection_started_at" : null,
"instance_info" : {},
"states" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/states"
},
{
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/states",
"rel" : "bookmark"
}
],
"last_error" : null,
"properties" : {
"cpus" : 1,
"memory_mb" : 1024,
"local_gb" : 10,
"cpu_arch" : "x86_64"
},
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb"
}
],
"name" : "test_node",
"ports" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/ports"
},
{
"rel" : "bookmark",
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb/ports"
}
],
"created_at" : "2016-04-20T16:51:03+00:00",
"updated_at" : "2016-05-04T23:24:20+00:00",
"maintenance_reason" : null,
"inspection_finished_at" : null,
"driver_info" : {
"deploy_kernel" : "http://127.0.0.1/images/kernel",
"ipmi_address" : "1.2.3.4",
"deploy_ramdisk" : "http://127.0.0.1/images/ramdisk",
"ipmi_password" : "******",
"ipmi_username" : "ADMIN",
},
"instance_uuid" : null,
"power_state" : "power off",
"target_power_state" : null,
"target_provision_state" : null
}
]
"nodes": [
{
"clean_step": {},
"console_enabled": false,
"created_at": "2016-08-18T22:28:48.643434+00:00",
"driver": "fake",
"driver_info": {
"ipmi_password": "******",
"ipmi_username": "ADMIN"
},
"driver_internal_info": {
"clean_steps": null
},
"extra": {},
"inspection_finished_at": null,
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"last_error": null,
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "bookmark"
}
],
"maintenance": false,
"maintenance_reason": null,
"name": "test_node",
"network_interface": "flat",
"ports": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports",
"rel": "bookmark"
}
],
"power_state": "power off",
"properties": {},
"provision_state": "available",
"provision_updated_at": "2016-08-18T22:28:49.382814+00:00",
"raid_config": {},
"reservation": null,
"resource_class": null,
"states": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states",
"rel": "bookmark"
}
],
"target_power_state": null,
"target_provision_state": null,
"target_raid_config": {},
"updated_at": "2016-08-18T22:28:49.653974+00:00",
"uuid": "6d85703a-565d-469a-96ce-30b6de53079d"
}
]
}

View File

@ -1,22 +1,22 @@
{
"nodes" : [
{
"provision_state" : "available",
"name" : "test_node",
"maintenance" : false,
"uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"links" : [
{
"href" : "http://127.0.0.1:6385/v1/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"rel" : "self"
},
{
"href" : "http://127.0.0.1:6385/nodes/ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"rel" : "bookmark"
}
],
"instance_uuid" : null,
"power_state" : "power off"
}
]
"nodes": [
{
"instance_uuid": null,
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d",
"rel": "bookmark"
}
],
"maintenance": false,
"name": "test_node",
"power_state": "power off",
"provision_state": "available",
"uuid": "6d85703a-565d-469a-96ce-30b6de53079d"
}
]
}

View File

@ -1,5 +1,5 @@
{
"node_uuid": "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"address": "11:11:11:11:11:11",
"local_link_connection": {
"switch_id": "0a:1b:2c:3d:4e:5f",

View File

@ -1,25 +1,25 @@
{
"created_at" : "2016-05-05T22:30:57.924480+00:00",
"links" : [
{
"rel" : "self",
"href" : "http://127.0.0.1:6385/v1/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2"
},
{
"href" : "http://127.0.0.1:6385/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "bookmark"
}
],
"extra" : {},
"address" : "11:11:11:11:11:11",
"updated_at" : null,
"node_uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"pxe_enabled": true,
"local_link_connection": {
"switch_id": "0a:1b:2c:3d:4e:5f",
"port_id": "Ethernet3/1",
"switch_info": "switch1"
},
"internal_info": {}
"address": "11:11:11:11:11:11",
"created_at": "2016-08-18T22:28:49.946416+00:00",
"extra": {},
"internal_info": {},
"links": [
{
"href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "bookmark"
}
],
"local_link_connection": {
"port_id": "Ethernet3/1",
"switch_id": "0a:1b:2c:3d:4e:5f",
"switch_info": "switch1"
},
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"pxe_enabled": true,
"updated_at": null,
"uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1"
}

View File

@ -1,29 +1,29 @@
{
"ports" : [
{
"node_uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"extra" : {},
"updated_at" : "2016-05-05T22:48:52+00:00",
"uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"address" : "22:22:22:22:22:22",
"links" : [
{
"href" : "http://127.0.0.1:6385/v1/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "self"
},
{
"href" : "http://127.0.0.1:6385/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "bookmark"
}
],
"created_at" : "2016-05-05T22:30:57+00:00",
"pxe_enabled": true,
"local_link_connection": {
"switch_id": "0a:1b:2c:3d:4e:5f",
"port_id": "Ethernet3/1",
"switch_info": "switch1"
},
"internal_info": {}
}
]
"ports": [
{
"address": "11:11:11:11:11:11",
"created_at": "2016-08-18T22:28:49.946416+00:00",
"extra": {},
"internal_info": {},
"links": [
{
"href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "bookmark"
}
],
"local_link_connection": {
"port_id": "Ethernet3/1",
"switch_id": "0a:1b:2c:3d:4e:5f",
"switch_info": "switch1"
},
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"pxe_enabled": true,
"updated_at": null,
"uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1"
}
]
}

View File

@ -0,0 +1,18 @@
{
"ports": [
{
"address": "11:11:11:11:11:11",
"links": [
{
"href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "bookmark"
}
],
"uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1"
}
]
}

View File

@ -1,25 +1,25 @@
{
"node_uuid" : "ecddf26d-8c9c-4ddf-8f45-fd57e09ccddb",
"extra" : {},
"updated_at" : "2016-05-05T22:48:52+00:00",
"uuid" : "c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"address" : "22:22:22:22:22:22",
"links" : [
{
"href" : "http://127.0.0.1:6385/v1/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "self"
},
{
"href" : "http://127.0.0.1:6385/ports/c933a251-486f-4c27-adb2-8b5f59bd9cd2",
"rel" : "bookmark"
}
],
"created_at" : "2016-05-05T22:30:57+00:00",
"pxe_enabled": true,
"local_link_connection": {
"switch_id": "0a:1b:2c:3d:4e:5f",
"port_id": "Ethernet3/1",
"switch_info": "switch1"
},
"internal_info": {}
"address": "22:22:22:22:22:22",
"created_at": "2016-08-18T22:28:49+00:00",
"extra": {},
"internal_info": {},
"links": [
{
"href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1",
"rel": "bookmark"
}
],
"local_link_connection": {
"port_id": "Ethernet3/1",
"switch_id": "0a:1b:2c:3d:4e:5f",
"switch_info": "switch1"
},
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"pxe_enabled": true,
"updated_at": "2016-08-18T22:28:50+00:00",
"uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1"
}

12
debian/changelog vendored
View File

@ -1,9 +1,17 @@
ironic (1:6.1.0-2) UNRELEASED; urgency=medium
ironic (1:6.2.0-1) experimental; urgency=medium
[ Ondřej Nový ]
* d/s/options: extend-diff-ignore of .gitreview
* d/control: Use correct branch in Vcs-* fields
-- Ondřej Nový <onovy@debian.org> Mon, 26 Sep 2016 19:02:33 +0200
[ Thomas Goirand ]
* New upstream release.
* Fixed (build-)depends for this release.
* Fixed oslotest EPOCH.
* Removed Fix-broken-unit-tests-for-get_ilo_object.patch applied upstream.
* Rebased requirements patches.
-- Thomas Goirand <zigo@debian.org> Wed, 28 Sep 2016 09:28:56 +0200
ironic (1:6.1.0-1) experimental; urgency=medium

27
debian/control vendored
View File

@ -22,9 +22,9 @@ Build-Depends-Indep: alembic (>= 0.8.4),
python-eventlet (>= 0.18.4),
python-fixtures (>= 3.0.0),
python-futurist (>= 0.11.0),
python-glanceclient (>= 1:2.0.0),
python-glanceclient (>= 1:2.3.0),
python-greenlet,
python-hacking (>= 0.10.0),
python-hacking (>= 0.11.0),
python-ironic-lib (>= 2.0.0),
python-ironicclient (>= 1.6.0),
python-iso8601 (>= 0.1.11),
@ -35,13 +35,14 @@ Build-Depends-Indep: alembic (>= 0.8.4),
python-keystonemiddleware (>= 4.0.0),
python-mock (>= 2.0),
python-mysqldb,
python-netaddr (>= 0.7.12),
python-neutronclient (>= 1:4.2.0),
python-netaddr (>= 0.7.13),
python-neutronclient (>= 1:5.1.0),
python-openstackdocstheme (>= 1.5.0),
python-os-testr (>= 0.7.0),
python-oslo.concurrency (>= 3.8.0),
python-oslo.config (>= 1:3.14.0),
python-oslo.context (>= 2.4.0),
python-oslo.db (>= 4.1.0),
python-oslo.context (>= 2.9.0),
python-oslo.db (>= 4.10.0),
python-oslo.i18n (>= 2.1.0),
python-oslo.log (>= 2.0.0),
python-oslo.messaging (>= 5.2.0),
@ -53,7 +54,7 @@ Build-Depends-Indep: alembic (>= 0.8.4),
python-oslo.utils (>= 3.16.0),
python-oslo.versionedobjects (>= 1.13.0),
python-oslosphinx (>= 2.5.0),
python-oslotest (>= 1.10.0),
python-oslotest (>= 1:1.10.0),
python-paramiko (>= 2.0),
python-pecan (>= 1.0.0),
python-pil,
@ -81,7 +82,6 @@ Build-Depends-Indep: alembic (>= 0.8.4),
python-wsme (>= 0.8),
subunit,
testrepository,
websockify (>= 0.8.0),
Standards-Version: 3.9.8
Vcs-Browser: https://git.openstack.org/cgit/openstack/deb-ironic?h=debian%2Fnewton
Vcs-Git: https://git.openstack.org/openstack/deb-ironic -b debian/newton
@ -95,7 +95,7 @@ Depends: alembic (>= 0.8.4),
python-automaton,
python-eventlet (>= 0.18.4),
python-futurist (>= 0.11.0),
python-glanceclient (>= 1:2.0.0),
python-glanceclient (>= 1:2.3.0),
python-greenlet,
python-ironic-lib (>= 2.0.0),
python-jinja2 (>= 2.8),
@ -103,12 +103,12 @@ Depends: alembic (>= 0.8.4),
python-jsonschema,
python-keystoneauth1 (>= 2.10.0),
python-keystonemiddleware (>= 4.0.0),
python-netaddr (>= 0.7.12),
python-neutronclient (>= 1:4.2.0),
python-netaddr (>= 0.7.13),
python-neutronclient (>= 1:5.1.0),
python-oslo.concurrency (>= 3.8.0),
python-oslo.config (>= 1:3.14.0),
python-oslo.context (>= 2.4.0),
python-oslo.db (>= 4.1.0),
python-oslo.context (>= 2.9.0),
python-oslo.db (>= 4.10.0),
python-oslo.i18n (>= 2.1.0),
python-oslo.log (>= 2.0.0),
python-oslo.messaging (>= 5.2.0),
@ -139,7 +139,6 @@ Depends: alembic (>= 0.8.4),
python-tz,
python-webob,
python-wsme (>= 0.8),
websockify (>= 0.8.0),
${misc:Depends},
${python:Depends},
Conflicts: python-cjson,

View File

@ -1,41 +0,0 @@
Description: Fix broken unit tests for get_ilo_object
First, the tested function signature was wrong. We didn't catch it in gate,
as we mock proliantutils, but it does break e.g. Debian package build.
.
Second, the arguments override was not actually working. We didn't catch
it in gate, because the new values were the same as the defaults.
Author: Dmitry Tantsur <divius.inside@gmail.com>
Date: Wed, 21 Sep 2016 13:45:21 +0000 (+0200)
X-Git-Url: https://review.openstack.org/gitweb?p=openstack%2Fironic.git;a=commitdiff_plain;h=87327803772ef7c80f7981d32851b90253d5c655
Bug-Ubuntu: #1626089
Change-Id: I2e4899e368b0b882dcd59bf33fdca98f47e5b405
Origin: upstream, https://review.openstack.org/374161
Last-Update: 2016-09-21
diff --git a/ironic/tests/unit/drivers/modules/ilo/test_common.py b/ironic/tests/unit/drivers/modules/ilo/test_common.py
index d61a7a0..c1bf3cb 100644
--- a/ironic/tests/unit/drivers/modules/ilo/test_common.py
+++ b/ironic/tests/unit/drivers/modules/ilo/test_common.py
@@ -154,9 +154,10 @@ class IloCommonMethodsTestCase(db_base.DbTestCase):
@mock.patch.object(ilo_client, 'IloClient', spec_set=True,
autospec=True)
def _test_get_ilo_object(self, ilo_client_mock, isFile_mock, ca_file=None):
- self.info['client_timeout'] = 60
- self.info['client_port'] = 443
+ self.info['client_timeout'] = 600
+ self.info['client_port'] = 4433
self.info['ca_file'] = ca_file
+ self.node.driver_info = self.info
ilo_client_mock.return_value = 'ilo_object'
returned_ilo_object = ilo_common.get_ilo_object(self.node)
ilo_client_mock.assert_called_with(
@@ -164,7 +165,8 @@ class IloCommonMethodsTestCase(db_base.DbTestCase):
self.info['ilo_username'],
self.info['ilo_password'],
self.info['client_timeout'],
- self.info['client_port'])
+ self.info['client_port'],
+ cacert=self.info['ca_file'])
self.assertEqual('ilo_object', returned_ilo_object)
def test_get_ilo_object_cafile(self):

View File

@ -3,14 +3,16 @@ Author: Thomas Goirand <zigo@debian.org>
Forwarded: not-needed
Last-Update: 2016-06-30
--- ironic-6.0.0.orig/test-requirements.txt
+++ ironic-6.0.0/test-requirements.txt
Index: deb-ironic/test-requirements.txt
===================================================================
--- deb-ironic.orig/test-requirements.txt
+++ deb-ironic/test-requirements.txt
@@ -4,7 +4,7 @@
hacking<0.11,>=0.10.0
hacking<0.12,>=0.11.0 # Apache-2.0
coverage>=3.6 # Apache-2.0
doc8 # Apache-2.0
-fixtures>=3.0.0 # Apache-2.0/BSD
+fixtures
mock>=2.0 # BSD
Babel>=2.3.4 # BSD
PyMySQL>=0.6.2 # MIT License
PyMySQL!=0.7.7,>=0.6.2 # MIT License

View File

@ -3,9 +3,11 @@ Author: Thomas Goirand <zigo@debian.org>
Forwarded: not-needed
Last-Update: 2016-06-30
--- ironic-6.0.0.orig/requirements.txt
+++ ironic-6.0.0/requirements.txt
@@ -15,7 +15,7 @@ python-glanceclient>=2.0.0 # Apache-2.0
Index: deb-ironic/requirements.txt
===================================================================
--- deb-ironic.orig/requirements.txt
+++ deb-ironic/requirements.txt
@@ -15,7 +15,7 @@ python-glanceclient!=2.4.0,>=2.3.0 # Apa
keystoneauth1>=2.10.0 # Apache-2.0
ironic-lib>=2.0.0 # Apache-2.0
python-swiftclient>=2.2.0 # Apache-2.0
@ -13,4 +15,4 @@ Last-Update: 2016-06-30
+pytz
stevedore>=1.16.0 # Apache-2.0
pysendfile>=2.0.0 # MIT
websockify>=0.8.0 # LGPLv3
oslo.concurrency>=3.8.0 # Apache-2.0

View File

@ -1,4 +1,3 @@
adds-alembic.ini-in-MANIFEST.in.patch
allow-any-pytz-version.patch
allow-any-fixtures-version.patch
Fix-broken-unit-tests-for-get_ilo_object.patch

View File

@ -85,6 +85,9 @@ IRONIC_HW_ARCH=${IRONIC_HW_ARCH:-x86_64}
# *_oneview:
# <Server Hardware URI> <Server Hardware Type URI> <Enclosure Group URI> <Server Profile Template URI> <MAC of primary connection> <Applied Server Profile URI>
#
# *_drac:
# <BMC address> <MAC address> <BMC username> <BMC password>
#
# IRONIC_IPMIINFO_FILE is deprecated, please use IRONIC_HWINFO_FILE. IRONIC_IPMIINFO_FILE will be removed in Ocata.
IRONIC_IPMIINFO_FILE=${IRONIC_IPMIINFO_FILE:-""}
if [ ! -z "$IRONIC_IPMIINFO_FILE" ]; then
@ -121,7 +124,7 @@ IRONIC_VM_NETWORK_BRIDGE=${IRONIC_VM_NETWORK_BRIDGE:-brbm}
IRONIC_VM_NETWORK_RANGE=${IRONIC_VM_NETWORK_RANGE:-192.0.2.0/24}
IRONIC_VM_MACS_CSV_FILE=${IRONIC_VM_MACS_CSV_FILE:-$IRONIC_DATA_DIR/ironic_macs.csv}
IRONIC_AUTHORIZED_KEYS_FILE=${IRONIC_AUTHORIZED_KEYS_FILE:-$HOME/.ssh/authorized_keys}
IRONIC_CLEAN_NET_NAME=${IRONIC_CLEAN_NET_NAME:-$PRIVATE_NETWORK_NAME}
IRONIC_CLEAN_NET_NAME=${IRONIC_CLEAN_NET_NAME:-${IRONIC_PROVISION_NETWORK_NAME:-${PRIVATE_NETWORK_NAME}}}
IRONIC_EXTRA_PXE_PARAMS=${IRONIC_EXTRA_PXE_PARAMS:-}
IRONIC_TTY_DEV=${IRONIC_TTY_DEV:-ttyS0}
@ -186,8 +189,9 @@ IRONIC_DIB_RAMDISK_OPTIONS=${IRONIC_DIB_RAMDISK_OPTIONS:-'ubuntu'}
# Set this variable to "true" to build an ISO for deploy ramdisk and
# upload to Glance.
IRONIC_DEPLOY_ISO_REQUIRED=$(trueorfalse False IRONIC_DEPLOY_ISO_REQUIRED)
if $IRONIC_DEPLOY_ISO_REQUIRED = 'True' && $IRONIC_BUILD_DEPLOY_RAMDISK = 'False'\
&& [ -n $IRONIC_DEPLOY_ISO ]; then
if [[ "$IRONIC_DEPLOY_ISO_REQUIRED" = "True" \
&& "$IRONIC_BUILD_DEPLOY_RAMDISK" = "False" \
&& -n "$IRONIC_DEPLOY_ISO" ]]; then
die "Prebuilt ISOs are not available, provide an ISO via IRONIC_DEPLOY_ISO \
or set IRONIC_BUILD_DEPLOY_RAMDISK=True to use ISOs"
fi
@ -195,7 +199,8 @@ fi
# are ``pxe_ssh``, ``pxe_ipmitool``, ``agent_ssh`` and ``agent_ipmitool``.
#
# Additional valid choices if IRONIC_IS_HARDWARE == true are:
# ``pxe_iscsi_cimc``, ``pxe_agent_cimc``, ``pxe_ucs``, ``pxe_cimc`` and ``*_pxe_oneview``
# ``pxe_iscsi_cimc``, ``pxe_agent_cimc``, ``pxe_ucs``, ``pxe_cimc``,
# ``*_pxe_oneview`` and ``pxe_drac``
IRONIC_DEPLOY_DRIVER=${IRONIC_DEPLOY_DRIVER:-pxe_ssh}
# Support entry points installation of console scripts
@ -208,6 +213,8 @@ IRONIC_HOSTPORT=${IRONIC_HOSTPORT:-$SERVICE_HOST:$IRONIC_SERVICE_PORT}
# Enable iPXE
IRONIC_IPXE_ENABLED=$(trueorfalse True IRONIC_IPXE_ENABLED)
# Options below are only applied when IRONIC_IPXE_ENABLED is True
IRONIC_IPXE_USE_SWIFT=$(trueorfalse False IRONIC_IPXE_USE_SWIFT)
IRONIC_HTTP_DIR=${IRONIC_HTTP_DIR:-$IRONIC_DATA_DIR/httpboot}
IRONIC_HTTP_SERVER=${IRONIC_HTTP_SERVER:-$IRONIC_TFTPSERVER_IP}
IRONIC_HTTP_PORT=${IRONIC_HTTP_PORT:-3928}
@ -274,6 +281,22 @@ IRONIC_PROVISION_SUBNET_GATEWAY=${IRONIC_PROVISION_SUBNET_GATEWAY:-}
# Example: IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24
IRONIC_PROVISION_SUBNET_PREFIX=${IRONIC_PROVISION_SUBNET_PREFIX:-}
# Retrieving logs from the deploy ramdisk
#
# IRONIC_DEPLOY_LOGS_COLLECT possible values are:
# * always: Collect the ramdisk logs from the deployment on success or
# failure (Default in DevStack for debugging purpose).
# * on_failure: Collect the ramdisk logs upon a deployment failure
# (Default in Ironic).
# * never: Never collect the ramdisk logs.
IRONIC_DEPLOY_LOGS_COLLECT=${IRONIC_DEPLOY_LOGS_COLLECT:-always}
# IRONIC_DEPLOY_LOGS_STORAGE_BACKEND possible values are:
# * local: To store the logs in the local filesystem (Default in Ironic and DevStack).
# * swift: To store the logs in Swift.
IRONIC_DEPLOY_LOGS_STORAGE_BACKEND=${IRONIC_DEPLOY_LOGS_STORAGE_BACKEND:-local}
# The path to the directory where Ironic should put the logs when IRONIC_DEPLOY_LOGS_STORAGE_BACKEND is set to "local"
IRONIC_DEPLOY_LOGS_LOCAL_PATH=${IRONIC_DEPLOY_LOGS_LOCAL_PATH:-$IRONIC_VM_LOG_DIR/deploy_logs}
# get_pxe_boot_file() - Get the PXE/iPXE boot file path
function get_pxe_boot_file {
local relpath=syslinux/pxelinux.0
@ -336,6 +359,11 @@ function is_deployed_by_ilo {
return 1
}
function is_deployed_by_drac {
[[ -z "${IRONIC_DEPLOY_DRIVER##*_drac}" ]] && return 0
return 1
}
function is_glance_configuration_required {
is_deployed_by_agent || [[ "$IRONIC_CONFIGURE_GLANCE_WITH_SWIFT" == "True" ]] && return 0
return 1
@ -474,6 +502,29 @@ function configure_ironic_dirs {
# More info: http://www.syslinux.org/wiki/index.php/Library_modules
cp -aR $(dirname $IRONIC_PXE_BOOT_IMAGE)/*.{c32,0} $IRONIC_TFTPBOOT_DIR
fi
# Create the logs directory when saving the deploy logs to the filesystem
if [ "$IRONIC_DEPLOY_LOGS_STORAGE_BACKEND" = "local"] && [ "$IRONIC_DEPLOY_LOGS_COLLECT" != "never" ]; then
sudo install -d -o $STACK_USER $IRONIC_DEPLOY_LOGS_LOCAL_PATH
fi
}
function configure_ironic_networks {
if [[ -n "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then
echo_summary "Configuring Ironic provisioning network"
configure_ironic_provision_network
fi
echo_summary "Configuring Ironic cleaning network"
configure_ironic_cleaning_network
}
function configure_ironic_cleaning_network {
local cleaning_network_uuid
cleaning_network_uuid=$(openstack network show "$IRONIC_CLEAN_NET_NAME" -c id -f value)
die_if_not_set $LINENO cleaning_network_uuid "Failed to get ironic cleaning network id"
iniset $IRONIC_CONF_FILE neutron cleaning_network_uuid ${cleaning_network_uuid}
}
function configure_ironic_provision_network {
@ -537,6 +588,13 @@ function configure_ironic {
iniset $IRONIC_CONF_FILE database connection `database_connection_url ironic`
iniset $IRONIC_CONF_FILE DEFAULT state_path $IRONIC_STATE_PATH
iniset $IRONIC_CONF_FILE DEFAULT use_syslog $SYSLOG
# NOTE(vsaienko) with multinode each conductor should have its own host.
iniset $IRONIC_CONF_FILE DEFAULT host $LOCAL_HOSTNAME
# Retrieve deployment logs
iniset $IRONIC_CONF_FILE agent deploy_logs_collect $IRONIC_DEPLOY_LOGS_COLLECT
iniset $IRONIC_CONF_FILE agent deploy_logs_storage_backend $IRONIC_DEPLOY_LOGS_STORAGE_BACKEND
iniset $IRONIC_CONF_FILE agent deploy_logs_local_path $IRONIC_DEPLOY_LOGS_LOCAL_PATH
# Configure Ironic conductor, if it was enabled.
if is_service_enabled ir-cond; then
configure_ironic_conductor
@ -645,16 +703,6 @@ function configure_ironic_conductor {
pxe_params+=" $IRONIC_EXTRA_PXE_PARAMS"
# When booting with less than 1GB, we need to switch from default tmpfs
# to ramfs for ramdisks to decompress successfully.
if ([[ "$IRONIC_IS_HARDWARE" == "True" ]] &&
[[ "$IRONIC_HW_NODE_RAM" -lt 1024 ]]) ||
([[ "$IRONIC_IS_HARDWARE" == "False" ]] &&
[[ "$IRONIC_VM_SPECS_RAM" -lt 1024 ]]); then
pxe_params+=" rootfstype=ramfs"
fi
if [[ -n "$pxe_params" ]]; then
iniset $IRONIC_CONF_FILE pxe pxe_append_params "$pxe_params"
fi
@ -698,6 +746,9 @@ function configure_ironic_conductor {
iniset $IRONIC_CONF_FILE pxe pxe_bootfile_name $pxebin
iniset $IRONIC_CONF_FILE deploy http_root $IRONIC_HTTP_DIR
iniset $IRONIC_CONF_FILE deploy http_url "http://$IRONIC_HTTP_SERVER:$IRONIC_HTTP_PORT"
if [[ "$IRONIC_IPXE_USE_SWIFT" == "True" ]]; then
iniset $IRONIC_CONF_FILE pxe ipxe_use_swift True
fi
fi
if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then
@ -755,14 +806,6 @@ function create_ironic_accounts {
# init_ironic() - Initialize databases, etc.
function init_ironic {
if is_service_enabled neutron; then
# Save private network as cleaning network
local cleaning_network_uuid
cleaning_network_uuid=$(openstack network show "$IRONIC_CLEAN_NET_NAME" -c id -f value)
die_if_not_set $LINENO cleaning_network_uuid "Failed to get ironic cleaning network id"
iniset $IRONIC_CONF_FILE neutron cleaning_network_uuid ${cleaning_network_uuid}
fi
# (Re)create ironic database
recreate_database ironic
@ -1057,9 +1100,17 @@ function enroll_nodes {
mac_address=$(echo $hardware_info |awk '{print $5}')
local applied_server_profile_uri
applied_server_profile_uri=$(echo $hardware_info |awk '{print $6}')
local dynamic_allocation
dynamic_allocation=$(echo $hardware_info |awk '{print $7}')
dynamic_allocation=$(trueorfalse False dynamic_allocation)
node_options+=" -i server_hardware_uri=$server_hardware_uri"
node_options+=" -i applied_server_profile_uri=$applied_server_profile_uri"
if [[ -n "$applied_server_profile_uri" ]]; then
node_options+=" -i applied_server_profile_uri=$applied_server_profile_uri"
fi
if [[ "$dynamic_allocation" == "True" ]]; then
node_options+=" -i dynamic_allocation=$dynamic_allocation"
fi
node_options+=" -p capabilities="
node_options+="server_hardware_type_uri:$server_hardware_type_uri,"
node_options+="enclosure_group_uri:$enclosure_group_uri,"
@ -1070,6 +1121,9 @@ function enroll_nodes {
if [[ $IRONIC_DEPLOY_DRIVER -ne "pxe_ilo" ]]; then
node_options+=" -i ilo_deploy_iso=$IRONIC_DEPLOY_ISO_ID"
fi
elif is_deployed_by_drac; then
node_options+=" -i drac_host=$bmc_address -i drac_password=$bmc_passwd\
-i drac_username=$bmc_username"
fi
fi

View File

@ -41,9 +41,10 @@ if is_service_enabled ir-api ir-cond; then
echo_summary "Creating bridge and VMs"
create_bridge_and_vms
fi
if [[ -n "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then
echo_summary "Configuring Ironic provisioning network"
configure_ironic_provision_network
if is_service_enabled neutron; then
echo_summary "Configuring Ironic networks"
configure_ironic_networks
fi
# Start the ironic API and ironic taskmgr components

View File

@ -1 +1,6 @@
enable_service ironic ir-api ir-cond
# Neutron public network type was changed to flat by default recently:
# https://review.openstack.org/#/c/346282/
# TODO(vsaienko) remove once ironic-multitenant job variable and ironic
# developer documentation are updated.
Q_USE_PROVIDERNET_FOR_PUBLIC=False

View File

@ -80,7 +80,7 @@ def main():
parser.add_argument('--disk-format', default='qcow2',
help='Disk format to use.')
args = parser.parse_args()
with file(templatedir + '/vm.xml', 'rb') as f:
with open(templatedir + '/vm.xml', 'rb') as f:
source_template = f.read()
params = {
'name': args.name,

View File

@ -1,5 +1,6 @@
.. _api-audit-support:
=================
API Audit Logging
=================

View File

@ -0,0 +1,224 @@
.. _console:
=================================
Configuring Web or Serial Console
=================================
Overview
--------
Two types of console are available in the Bare Metal service: the web console
(`Node web console`_), which is accessible directly from a web browser, and the
serial console (`Node serial console`_).
Node web console
----------------
The web console can be configured in Bare Metal service in the following way:
* Install shellinabox on the ironic conductor node. For RHEL/CentOS, the shellinabox
package is not present in the base repositories; the user must enable the EPEL
repository. You can find more information on the `FedoraProject page`_.
Installation example::
Ubuntu:
sudo apt-get install shellinabox
Fedora 21/RHEL7/CentOS7:
sudo yum install shellinabox
Fedora 22 or higher:
sudo dnf install shellinabox
You can find more about shellinabox on the `shellinabox page`_.
You can optionally use an SSL certificate with shellinabox. If you want to do so,
you should install openssl and generate the SSL certificate as follows.
1. Install openssl, for example::
Ubuntu:
sudo apt-get install openssl
Fedora 21/RHEL7/CentOS7:
sudo yum install openssl
Fedora 22 or higher:
sudo dnf install openssl
2. Generate the SSL certificate. Here is an example; you can find more about openssl on
the `openssl page`_::
cd /tmp/ca
openssl genrsa -des3 -out my.key 1024
openssl req -new -key my.key -out my.csr
cp my.key my.key.org
openssl rsa -in my.key.org -out my.key
openssl x509 -req -days 3650 -in my.csr -signkey my.key -out my.crt
cat my.crt my.key > certificate.pem
* Customize the console section in the Bare Metal service configuration
file (/etc/ironic/ironic.conf). If you want to use an SSL certificate in
shellinabox, you should specify ``terminal_cert_dir``,
for example::
[console]
#
# Options defined in ironic.drivers.modules.console_utils
#
# Path to serial console terminal program. Used only by Shell
# In A Box console. (string value)
#terminal=shellinaboxd
# Directory containing the terminal SSL cert (PEM) for serial
# console access. Used only by Shell In A Box console. (string
# value)
terminal_cert_dir=/tmp/ca
# Directory for holding terminal pid files. If not specified,
# the temporary directory will be used. (string value)
#terminal_pid_dir=<None>
# Time interval (in seconds) for checking the status of
# console subprocess. (integer value)
#subprocess_checking_interval=1
# Time (in seconds) to wait for the console subprocess to
# start. (integer value)
#subprocess_timeout=10
* Append console parameters for bare metal PXE boot in the Bare Metal service
configuration file (/etc/ironic/ironic.conf), including the right serial port
terminal and serial speed. The serial speed should match the serial
configuration in the BIOS settings, so that the OS boot process can be seen in
the web console, for example::
pxe_* driver:
[pxe]
#Additional append parameters for bare metal PXE boot. (string value)
pxe_append_params = nofb nomodeset vga=normal console=tty0 console=ttyS0,115200n8
* Configure node web console.
Enable the web console, for example::
ironic node-update <node-uuid> add driver_info/<terminal_port>=<customized_port>
ironic node-set-console-mode <node-uuid> true
Check whether the console is enabled, for example::
ironic node-validate <node-uuid>
Disable the web console, for example::
ironic node-set-console-mode <node-uuid> false
ironic node-update <node-uuid> remove driver_info/<terminal_port>
The ``<terminal_port>`` is driver dependent. The actual name of this field can be
checked in driver properties, for example::
ironic driver-properties <driver>
For ``*_ipmitool`` and ``*_ipminative`` drivers, this option is ``ipmi_terminal_port``.
For the ``seamicro`` driver, this option is ``seamicro_terminal_port``. Give a customized
port number as ``<customized_port>``, for example ``8023``; this customized port is used
in the web console URL.
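For instance, with an ``*_ipmitool`` driver and ``8023`` chosen as a hypothetical
customized port, the whole sequence could look like this (the node UUID is a
placeholder)::
ironic node-update <node-uuid> add driver_info/ipmi_terminal_port=8023
ironic node-set-console-mode <node-uuid> true
ironic node-validate <node-uuid>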
Get web console information for a node as follows::
ironic node-get-console <node-uuid>
+-----------------+----------------------------------------------------------------------+
| Property | Value |
+-----------------+----------------------------------------------------------------------+
| console_enabled | True |
| console_info | {u'url': u'http://<url>:<customized_port>', u'type': u'shellinabox'} |
+-----------------+----------------------------------------------------------------------+
You can open the web console using the above ``url`` in a web browser. If ``console_enabled`` is
``false``, ``console_info`` is ``None`` and the web console is disabled. If you want to launch
the web console, see the ``Configure node web console`` part.
.. _`shellinabox page`: https://code.google.com/p/shellinabox/
.. _`openssl page`: https://www.openssl.org/
.. _`FedoraProject page`: https://fedoraproject.org/wiki/Infrastructure/Mirroring
Node serial console
-------------------
Serial consoles for nodes are implemented using `socat`_.
In Newton, the following drivers support socat consoles for nodes:
* agent_ipmitool_socat
* fake_ipmitool_socat
* pxe_ipmitool_socat
Serial consoles can be configured in the Bare Metal service as follows:
* Install socat on the ironic conductor node. Also, ``socat`` needs to be in
the $PATH environment variable that the ironic-conductor service uses.
Installation example::
Ubuntu:
sudo apt-get install socat
Fedora 21/RHEL7/CentOS7:
sudo yum install socat
Fedora 22 or higher:
sudo dnf install socat
* Append ``console`` parameters for bare metal PXE boot in the Bare Metal
service configuration file
(``[pxe]`` section in ``/etc/ironic/ironic.conf``),
including the serial port terminal and serial speed. Serial speed must be
the same as the serial configuration in the BIOS settings, so that the
operating system boot process can be seen in the serial console.
In the following example, the console parameter 'console=ttyS0,115200n8'
uses ttyS0 for console output at 115200bps, 8bit, non-parity::
pxe_* driver:
[pxe]
#Additional append parameters for bare metal PXE boot. (string value)
pxe_append_params = nofb nomodeset vga=normal console=ttyS0,115200n8
* Configure node console.
Enable the serial console, for example::
ironic node-update <node-uuid> add driver_info/ipmi_terminal_port=<port>
ironic node-set-console-mode <node-uuid> true
Check whether the serial console is enabled, for example::
ironic node-validate <node-uuid>
Disable the serial console, for example::
ironic node-set-console-mode <node-uuid> false
ironic node-update <node-uuid> remove driver_info/ipmi_terminal_port
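For instance, with the ``agent_ipmitool_socat`` driver and ``10000`` as a hypothetical
TCP port, enabling the serial console could look like this::
ironic node-update <node-uuid> add driver_info/ipmi_terminal_port=10000
ironic node-set-console-mode <node-uuid> true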
Serial console information is available from the Bare Metal service. Get
serial console information for a node from the Bare Metal service as follows::
ironic node-get-console <node-uuid>
+-----------------+----------------------------------------------------------------------+
| Property | Value |
+-----------------+----------------------------------------------------------------------+
| console_enabled | True |
| console_info | {u'url': u'tcp://<host>:<port>', u'type': u'socat'} |
+-----------------+----------------------------------------------------------------------+
If ``console_enabled`` is ``false`` or ``console_info`` is ``None``, then
the serial console is disabled. If you want to launch the serial console, see the
``Configure node console`` part.
.. _`socat`: http://www.dest-unreach.org/socat

View File

@ -26,13 +26,14 @@ includes:
- the OpenStack Image service (glance) from which to retrieve images and image meta-data
- the OpenStack Networking service (neutron) for DHCP and network configuration
- the OpenStack Compute service (nova) works with the Bare Metal service and acts as
a user-facing API for instance management, while the Bare Metal service provides
the admin/operator API for hardware management.
The OpenStack Compute service also provides scheduling facilities (matching
flavors <-> images <-> hardware), tenant quotas, IP assignment, and other
services which the Bare Metal service does not, in and of itself, provide.
a user-facing API for instance management, while the Bare Metal service
provides the admin/operator API for hardware management. The OpenStack
Compute service also provides scheduling facilities (matching flavors <->
images <-> hardware), tenant quotas, IP assignment, and other services which
the Bare Metal service does not, in and of itself, provide.
- the OpenStack Block Storage (cinder) provides volumes, but this aspect is not yet available.
- the OpenStack Block Storage (cinder) provides volumes, but this aspect is not
yet available.
The Bare Metal service includes the following components:
@ -91,38 +92,82 @@ have already been set up.
Configure the Identity service for the Bare Metal service
---------------------------------------------------------
#. Create the Bare Metal service user (for example,``ironic``).
#. Create the Bare Metal service user (for example, ``ironic``).
The service uses this to authenticate with the Identity service.
Use the ``service`` tenant and give the user the ``admin`` role::
openstack user create --password IRONIC_PASSWORD \
--email ironic@example.com ironic
--email ironic@example.com ironic
openstack role add --project service --user ironic admin
#. You must register the Bare Metal service with the Identity service so that
other OpenStack services can locate it. To register the service::
openstack service create --name ironic --description \
"Ironic baremetal provisioning service" baremetal
"Ironic baremetal provisioning service" baremetal
#. Use the ``id`` property that is returned from the Identity service when
registering the service (above), to create the endpoint,
and replace IRONIC_NODE with your Bare Metal service's API node::
openstack endpoint create --region RegionOne \
baremetal admin http://IRONIC_NODE:6385
baremetal admin http://IRONIC_NODE:6385
openstack endpoint create --region RegionOne \
baremetal public http://IRONIC_NODE:6385
baremetal public http://IRONIC_NODE:6385
openstack endpoint create --region RegionOne \
baremetal internal http://IRONIC_NODE:6385
baremetal internal http://IRONIC_NODE:6385
If only keystone v2 API is available, use this command instead::
openstack endpoint create --region RegionOne \
--publicurl http://IRONIC_NODE:6385 \
--internalurl http://IRONIC_NODE:6385 \
--adminurl http://IRONIC_NODE:6385 \
baremetal
--publicurl http://IRONIC_NODE:6385 \
--internalurl http://IRONIC_NODE:6385 \
--adminurl http://IRONIC_NODE:6385 \
baremetal
#. You may delegate limited privileges related to the Bare Metal service
to your Users by creating Roles with the OpenStack Identity service. By
default, the Bare Metal service expects the "baremetal_admin" and
"baremetal_observer" Roles to exist, in addition to the default "admin"
Role. There is no negative consequence if you choose not to create these
Roles. They can be created with the following commands::
openstack role create baremetal_admin
openstack role create baremetal_observer
If you choose to customize the names of Roles used with the Bare Metal
service, do so by changing the "is_member", "is_observer", and "is_admin"
policy settings in ``/etc/ironic/policy.json``.
More complete documentation on managing Users and Roles within your
OpenStack deployment is outside the scope of this document, but may be
found here_.
#. You can further restrict access to the Bare Metal service by creating a
separate "baremetal" Project, so that Bare Metal resources (Nodes, Ports,
etc) are only accessible to members of this Project::
openstack project create baremetal
At this point, you may grant read-only access to the Bare Metal service API
without granting any other access by issuing the following commands::
openstack user create \
--domain default --project-domain default --project baremetal \
--password PASSWORD USERNAME
openstack role add \
--user-domain default --project-domain default --project baremetal\
--user USERNAME baremetal_observer
#. Further documentation is available elsewhere for the ``openstack``
`command-line client`_ and the Identity_ service. A policy.json.sample_
file, which enumerates the service's default policies, is provided for
your convenience with the Bare Metal Service.
.. _Identity: http://docs.openstack.org/admin-guide/identity-management.html
.. _`command-line client`: http://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html
.. _here: http://docs.openstack.org/admin-guide/identity-concepts.html#user-management
.. _policy.json.sample: https://github.com/openstack/ironic/blob/master/etc/ironic/policy.json.sample
Set up the database for Bare Metal
@ -138,9 +183,9 @@ MySQL database that is used by other OpenStack services.
# mysql -u root -p
mysql> CREATE DATABASE ironic CHARACTER SET utf8;
mysql> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
IDENTIFIED BY 'IRONIC_DBPASSWORD';
mysql> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
IDENTIFIED BY 'IRONIC_DBPASSWORD';
Install the Bare Metal service
------------------------------
@ -152,13 +197,13 @@ Install the Bare Metal service
Fedora 21/RHEL7/CentOS7:
sudo yum install openstack-ironic-api openstack-ironic-conductor \
python-ironicclient
python-ironicclient
sudo systemctl enable openstack-ironic-api openstack-ironic-conductor
sudo systemctl start openstack-ironic-api openstack-ironic-conductor
Fedora 22 or higher:
sudo dnf install openstack-ironic-api openstack-ironic-conductor \
python-ironicclient
python-ironicclient
sudo systemctl enable openstack-ironic-api openstack-ironic-conductor
sudo systemctl start openstack-ironic-api openstack-ironic-conductor
@ -227,17 +272,18 @@ Configuring ironic-api service
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
#auth_strategy=keystone
auth_strategy=keystone
[keystone_authtoken]
...
# Complete public Identity API endpoint (string value)
auth_uri=http://IDENTITY_IP:5000/
# Authentication type to load (string value)
auth_type = v3password
# Complete admin Identity API endpoint. This should specify
# the unversioned root endpoint e.g. https://localhost:35357/
# (string value)
identity_uri=http://IDENTITY_IP:35357/
# Complete public Identity API endpoint (string value)
auth_uri=http://PUBLIC_IDENTITY_IP:5000/v3/
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:35357/v3/
# Service username. (string value)
admin_user=ironic
@ -504,6 +550,14 @@ Compute service's controller nodes and compute nodes.*
#scheduler_tracks_instance_changes=True
scheduler_tracks_instance_changes=False
# New instances will be scheduled on a host chosen randomly from a subset
# of the N best hosts, where N is the value set by this option. Valid
# values are 1 or greater. Any value less than one will be treated as 1.
# For ironic, this should be set to a number >= the number of ironic nodes
# to more evenly distribute instances across the nodes.
#scheduler_host_subset_size=1
scheduler_host_subset_size=9999999
2. Change these configuration options in the ``ironic`` section.
Replace:
@ -572,9 +626,6 @@ An example of this is shown in the `Enrollment`_ section.
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_vlan]
network_vlan_ranges = physnet1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
@ -647,6 +698,11 @@ An example of this is shown in the `Enrollment`_ section.
--ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \
start=$START_IP,end=$END_IP --enable-dhcp
Configuring Tenant Networks
===========================
See :ref:`multitenancy`
.. _CleaningNetworkSetup:
Configure the Bare Metal service for cleaning
@ -667,9 +723,6 @@ Configure the Bare Metal service for cleaning
[neutron]
...
# UUID of the network to create Neutron ports on, when booting
# to a ramdisk for cleaning using Neutron DHCP. (string value)
#cleaning_network_uuid=<None>
cleaning_network_uuid = NETWORK_UUID
#. Restart the Bare Metal service's ironic-conductor::
@ -915,17 +968,7 @@ PXE UEFI setup
If you want to deploy on a UEFI supported bare metal, perform these additional
steps on the ironic conductor node to configure the PXE UEFI environment.
#. Download and untar the elilo bootloader version >= 3.16 from
http://sourceforge.net/projects/elilo/::
sudo tar zxvf elilo-3.16-all.tar.gz
#. Copy the elilo boot loader image to ``/tftpboot`` directory::
sudo cp ./elilo-3.16-x86_64.efi /tftpboot/elilo.efi
#. Grub2 is an alternate UEFI bootloader supported in Bare Metal service.
Install grub2 and shim packages::
#. Install Grub2 and shim packages::
Ubuntu: (14.04LTS and later)
sudo apt-get install grub-efi-amd64-signed shim-signed
@ -975,18 +1018,6 @@ steps on the ironic conductor node to configure the PXE UEFI environment.
sudo chmod 644 $GRUB_DIR/grub.cfg
#. Update bootfile and template file configuration parameters for UEFI PXE boot
in the Bare Metal Service's configuration file (/etc/ironic/ironic.conf)::
[pxe]
# Bootfile DHCP parameter for UEFI boot mode. (string value)
uefi_pxe_bootfile_name=bootx64.efi
# Template file for PXE configuration for UEFI boot loader.
# (string value)
uefi_pxe_config_template=$pybasedir/drivers/modules/pxe_grub_config.template
#. Update the bare metal node with ``boot_mode`` capability in node's properties
field::
@ -999,7 +1030,37 @@ steps on the ironic conductor node to configure the PXE UEFI environment.
boot device on the bare metal node. So this step is not required for
``pxe_ilo`` driver.
For more information on configuring boot modes, refer boot_mode_support_.
.. note::
For more information on configuring boot modes, see boot_mode_support_.
Elilo: an alternative to Grub2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Elilo is a UEFI bootloader. It is an alternative to Grub2, although it is
not recommended since it is no longer supported.
#. Download and untar the elilo bootloader version >= 3.16 from
http://sourceforge.net/projects/elilo/::
sudo tar zxvf elilo-3.16-all.tar.gz
#. Copy the elilo boot loader image to ``/tftpboot`` directory::
sudo cp ./elilo-3.16-x86_64.efi /tftpboot/elilo.efi
#. Update bootfile and template file configuration parameters for UEFI
PXE boot in the Bare Metal Service's configuration file
(/etc/ironic/ironic.conf)::
[pxe]
# Bootfile DHCP parameter for UEFI boot mode. (string value)
uefi_pxe_bootfile_name=elilo.efi
# Template file for PXE configuration for UEFI boot loader.
# (string value)
uefi_pxe_config_template=$pybasedir/drivers/modules/elilo_efi_pxe_config.template
iPXE setup
@ -1208,136 +1269,7 @@ Telemetry, they are:
Configure node web console
--------------------------
The web console can be configured in Bare Metal service in the following way:
* Install shellinabox in ironic conductor node. For RHEL/CentOS, shellinabox package
is not present in base repositories, user must enable EPEL repository, you can find
more from `FedoraProject page`_.
Installation example::
Ubuntu:
sudo apt-get install shellinabox
Fedora 21/RHEL7/CentOS7:
sudo yum install shellinabox
Fedora 22 or higher:
sudo dnf install shellinabox
You can find more about shellinabox on the `shellinabox page`_.
You can optionally use the SSL certificate in shellinabox. If you want to use the SSL
certificate in shellinabox, you should install openssl and generate the SSL certificate.
1. Install openssl, for example::
Ubuntu:
sudo apt-get install openssl
Fedora 21/RHEL7/CentOS7:
sudo yum install openssl
Fedora 22 or higher:
sudo dnf install openssl
2. Generate the SSL certificate, here is an example, you can find more about openssl on
the `openssl page`_::
cd /tmp/ca
openssl genrsa -des3 -out my.key 1024
openssl req -new -key my.key -out my.csr
cp my.key my.key.org
openssl rsa -in my.key.org -out my.key
openssl x509 -req -days 3650 -in my.csr -signkey my.key -out my.crt
cat my.crt my.key > certificate.pem
* Customize the console section in the Bare Metal service configuration
file (/etc/ironic/ironic.conf), if you want to use SSL certificate in
shellinabox, you should specify ``terminal_cert_dir``.
for example::
[console]
#
# Options defined in ironic.drivers.modules.console_utils
#
# Path to serial console terminal program (string value)
#terminal=shellinaboxd
# Directory containing the terminal SSL cert(PEM) for serial
# console access (string value)
terminal_cert_dir=/tmp/ca
# Directory for holding terminal pid files. If not specified,
# the temporary directory will be used. (string value)
#terminal_pid_dir=<None>
# Time interval (in seconds) for checking the status of
# console subprocess. (integer value)
#subprocess_checking_interval=1
# Time (in seconds) to wait for the console subprocess to
# start. (integer value)
#subprocess_timeout=10
* Append console parameters for bare metal PXE boot in the Bare Metal service
configuration file (/etc/ironic/ironic.conf), including right serial port
terminal and serial speed, serial speed should be same serial configuration
with BIOS settings, so that os boot process can be seen in web console,
for example::
pxe_* driver:
[pxe]
#Additional append parameters for bare metal PXE boot. (string value)
pxe_append_params = nofb nomodeset vga=normal console=tty0 console=ttyS0,115200n8
* Configure node web console.
Enable the web console, for example::
ironic node-update <node-uuid> add driver_info/<terminal_port>=<customized_port>
ironic node-set-console-mode <node-uuid> true
Check whether the console is enabled, for example::
ironic node-validate <node-uuid>
Disable the web console, for example::
ironic node-set-console-mode <node-uuid> false
ironic node-update <node-uuid> remove driver_info/<terminal_port>
The ``<terminal_port>`` is driver dependent. The actual name of this field can be
checked in driver properties, for example::
ironic driver-properties <driver>
For ``*_ipmitool`` and ``*_ipminative`` drivers, this option is ``ipmi_terminal_port``.
For ``seamicro`` driver, this option is ``seamicro_terminal_port``. Give a customized port
number to ``<customized_port>``, for example ``8023``, this customized port is used in
web console url.
* Get web console information::
ironic node-get-console <node-uuid>
+-----------------+----------------------------------------------------------------------+
| Property | Value |
+-----------------+----------------------------------------------------------------------+
| console_enabled | True |
| console_info | {u'url': u'http://<url>:<customized_port>', u'type': u'shellinabox'} |
+-----------------+----------------------------------------------------------------------+
You can open web console using above ``url`` through web browser. If ``console_enabled`` is
``false``, ``console_info`` is ``None``, web console is disabled. If you want to launch web
console, refer to ``Enable web console`` part.
.. _`shellinabox page`: https://code.google.com/p/shellinabox/
.. _`openssl page`: https://www.openssl.org/
.. _`FedoraProject page`: https://fedoraproject.org/wiki/Infrastructure/Mirroring
See :ref:`console`.
.. _boot_mode_support:

View File

@ -0,0 +1,102 @@
.. _metrics:
=========================
Emitting Software Metrics
=========================
Beginning with the Newton (6.1.0) release, the ironic services support
emitting internal performance data to
`statsd <https://github.com/etsy/statsd>`_. This allows operators to graph
and understand performance bottlenecks in their system.
This guide assumes you have a statsd server setup. For information on using
and configuring statsd, please see the
`statsd <https://github.com/etsy/statsd>`_ README and documentation.
These performance measurements, herein referred to as "metrics", can be
emitted from the Bare Metal service, including ironic-api, ironic-conductor,
and ironic-python-agent. By default, none of the services will emit metrics.
Configuring the Bare Metal Service to Enable Metrics
====================================================
Enabling metrics in ironic-api and ironic-conductor
---------------------------------------------------
The ironic-api and ironic-conductor services can be configured to emit metrics
to statsd by adding the following to the ironic configuration file, usually
located at ``/etc/ironic/ironic.conf``::
[metrics]
backend = statsd
If a statsd daemon is installed and configured on every host running an ironic
service, listening on the default UDP port (8125), no further configuration is
needed. If you are using a remote statsd server, you must also supply
connection information in the ironic configuration file::
[metrics_statsd]
# Point this at your environment's statsd host
statsd_host = 192.0.2.1
statsd_port = 8125
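As an optional sanity check that the statsd host is reachable from an ironic host,
you can send a throwaway counter in the statsd wire format over UDP by hand (a
sketch only; the metric name and address below are placeholders, and ``nc`` must
be installed)::
echo "ironic.connectivity.test:1|c" | nc -u -w1 192.0.2.1 8125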
Enabling metrics in ironic-python-agent
---------------------------------------
The ironic-python-agent process receives its configuration in the response to
the initial lookup request it makes to the ironic-api service. This means that,
to configure ironic-python-agent to emit metrics, you must enable the agent
metrics backend in your ironic configuration file on all ironic-conductor hosts::
[metrics]
agent_backend = statsd
In order to reliably emit metrics from the ironic-python-agent, you must
provide a statsd server that is reachable from both the configured provisioning
and cleaning networks. The agent statsd connection information is configured
in the ironic configuration file as well::
[metrics_statsd]
# Point this at a statsd host reachable from the provisioning and cleaning nets
agent_statsd_host = 198.51.100.2
agent_statsd_port = 8125
Types of Metrics Emitted
========================
The Bare Metal service emits timing metrics for every API method, as well as
for most driver methods. These metrics measure how long a given method takes
to execute.
A deployer with metrics enabled should expect between 100 and 500 distinctly
named data points to be emitted from the Bare Metal service. This will
increase if the metrics.preserve_host option is set to true or if multiple
drivers are used in the Bare Metal deployment. This estimate may be used to
determine if a deployer needs to scale their metrics backend to handle the
additional load before enabling metrics.
.. note::
With the default statsd configuration, each timing metric may create
additional metrics due to how statsd handles timing metrics. For more
information, see the statsd documentation on
`metric types <https://github.com/etsy/statsd/blob/master/docs/metric_types.md#timing>`_.
The ironic-python-agent ramdisk emits timing metrics for every API method.
Deployers who use custom HardwareManagers can emit custom metrics for their
hardware. For more information on custom HardwareManagers, and emitting
metrics from them, please see the
`ironic-python-agent documentation <http://docs.openstack.org/developer/ironic-python-agent/>`_.
Adding New Metrics
==================
If you're a developer, and would like to add additional metrics to ironic,
please see the ironic-lib developer documentation for details on how to use
the metrics library.
.. TODO::
Link to ironic-lib developer documentation once it's published.

View File

@ -0,0 +1,226 @@
.. _multitenancy:
==================================
Multitenancy in Bare Metal service
==================================
Overview
========
It is possible to use dedicated tenant networks for provisioned nodes, which
extends the current Bare Metal service capabilities of providing flat networks.
This works in conjunction with the Networking service to allow provisioning of
nodes in a separate provisioning network. The result of this is that multiple
tenants can use nodes in an isolated fashion. However, this configuration does
not support trunk ports belonging to multiple networks.
The network interface is one of the driver interfaces that manages network
switching for nodes. Currently, there are three network interfaces available in
the Bare Metal service:
- ``noop`` interface is used for standalone deployments, and does not perform
any network switching;
- ``flat`` interface places all provisioned nodes and nodes being deployed into
a single layer 2 network, separated from the cleaning network;
- ``neutron`` interface provides tenant-defined networking by integrating with
neutron, while also separating tenant networks from the provisioning and
cleaning provider networks.
Configuring Ironic
==================
Below is an example flow of how to set up ironic so that node provisioning
happens in a multitenant environment (which means using the ``neutron`` network
interface, as stated above):
#. Network interfaces can be enabled on ironic-conductor by adding them to
``enabled_network_interfaces`` configuration option under the default
section::
[DEFAULT]
...
enabled_network_interfaces=noop,flat,neutron
Please note that ``enabled_network_interfaces`` has to be set in the
ironic-api configuration file too, as its value is used on the API side to
check if the requested network interface is available.
Keep in mind that, ideally, all conductors should have the same list of
enabled network interfaces, but it may not be the case during conductor
upgrades. This may cause problems if one of the conductors dies and some
node that is taken over is mapped to a conductor that does not support the
node's network interface. Any actions that involve calling the node's driver
will fail until that network interface is installed and enabled on that
conductor.
#. It is recommended to set the default network interface via
``default_network_interface`` configuration option under the default
section::
[DEFAULT]
...
default_network_interface=neutron
It will be used for all nodes created without an explicitly specified network
interface.
If it is not set, the default network interface is determined by the value of
the ``[dhcp]dhcp_provider`` configuration option: if it is ``neutron``, the
``flat`` network interface becomes the default; otherwise ``noop`` is the
default.
#. Define a provider network in neutron, which we shall refer to as the
"provisioning" network, and add it under the neutron section of the
ironic-conductor configuration file. Using the ``neutron`` network interface
requires that the ``provisioning_network_uuid`` and ``cleaning_network_uuid``
configuration options are set to valid neutron network UUIDs; otherwise,
ironic-conductor will fail to start::
[neutron]
...
cleaning_network_uuid=$CLEAN_UUID
provisioning_network_uuid=$PROVISION_UUID
Please refer to :ref:`CleaningNetworkSetup` for more information about
cleaning.
.. note::
The "provisioning" and "cleaning" networks may be the same neutron
provider network, or may be distinct networks. To ensure communication
between ironic and the deploy ramdisk works, it's important to ensure
that security groups are disabled for these networks, *or* the default
security groups allow:
* DHCP
* TFTP
* egress port used for ironic (6385 by default)
* ingress port used for ironic-python-agent (9999 by default)
* if using the iSCSI deploy method (``pxe_*`` and ``iscsi_*`` drivers),
the egress port used for iSCSI (3260 by default)
* if using the direct deploy method (``agent_*`` drivers), the egress
port used for swift (typically 80 or 443)
* if using iPXE, the egress port used for the HTTP server running
on the ironic conductor nodes (typically 80).
#. Install and configure a compatible ML2 mechanism driver which supports bare
metal provisioning for your switch. See `ML2 plugin configuration manual
<http://docs.openstack.org/networking-guide/config-ml2-plug-in.html>`_
for details.
#. Restart the ironic conductor and API services after the modifications:
- Fedora/RHEL7/CentOS7::
sudo systemctl restart openstack-ironic-api
sudo systemctl restart openstack-ironic-conductor
- Ubuntu::
sudo service ironic-api restart
sudo service ironic-conductor restart
#. Make sure that the conductor is reachable over the provisioning network
by trying to download a file from a TFTP server on it, from some
non-control-plane server in that network::
tftp $TFTP_IP -c get $FILENAME
where ``$FILENAME`` is a file present on the TFTP server.
Configuring nodes
=================
#. Multitenancy support was added in the 1.20 API version. The following
examples assume you are using python-ironicclient version 1.5.0 or higher.
They show the usage of both ``ironic`` and ``openstack baremetal`` commands.
If you're going to use ``ironic`` command, set the following variable in
your shell environment::
export IRONIC_API_VERSION=1.20
If you're using the ironic client plugin for the openstack client (the
``openstack baremetal`` commands), export the following variable::
export OS_BAREMETAL_API_VERSION=1.20
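Alternatively, the API version can be passed on a per-command basis instead of
being exported; this is a sketch and assumes client versions recent enough to
accept these options::

    ironic --ironic-api-version 1.20 node-list
    openstack --os-baremetal-api-version 1.20 baremetal node list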
#. The node's ``network_interface`` field should be set to a valid network
interface that is listed in the ``[DEFAULT]/enabled_network_interfaces``
configuration option in the ironic-api config. Set it to ``neutron`` to use
the neutron ML2 driver:
- ``ironic`` command::
ironic node-create --network-interface neutron \
--driver agent-ipmitool
- ``openstack`` command::
openstack baremetal node create --network-interface neutron \
--driver agent-ipmitool
.. note::
If the ``[DEFAULT]/default_network_interface`` configuration option was
set, the ``--network-interface`` option does not need to be specified
when defining the node.
#. To update an existing node's network interface, use the following commands:
- ``ironic`` command::
ironic node-update $NODE_UUID_OR_NAME add network_interface=neutron
- ``openstack`` command::
openstack baremetal node set $NODE_UUID_OR_NAME \
--network-interface neutron
#. The Bare Metal service provides the ``local_link_connection`` information to
the Networking service ML2 driver. The ML2 driver uses that information to
plug the specified port to the tenant network.
.. list-table:: ``local_link_connection`` fields
:header-rows: 1
* - Field
- Description
* - ``switch_id``
- Required. Identifies a switch and can be a MAC address or an
OpenFlow-based ``datapath_id``.
* - ``port_id``
- Required. Port ID on the switch, for example, Gig0/1.
* - ``switch_info``
- Optional. Used to distinguish between different switch models or other
vendor-specific identifiers. Some ML2 plugins may require this field.
Create a port as follows:
- ``ironic`` command::
ironic port-create -a $HW_MAC_ADDRESS -n $NODE_UUID \
-l switch_id=$SWITCH_MAC_ADDRESS -l switch_info=$SWITCH_HOSTNAME \
-l port_id=$SWITCH_PORT --pxe-enabled true
- ``openstack`` command::
openstack baremetal port create $HW_MAC_ADDRESS --node $NODE_UUID \
--local-link-connection switch_id=$SWITCH_MAC_ADDRESS \
--local-link-connection switch_info=$SWITCH_HOSTNAME \
--local-link-connection port_id=$SWITCH_PORT --pxe-enabled true
#. Check the port configuration:
- ``ironic`` command::
ironic port-show $PORT_UUID
- ``openstack`` command::
openstack baremetal port show $PORT_UUID
After these steps, the provisioning of the created node will happen in the
provisioning network, and then the node will be moved to the tenant network
that was requested.
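As a quick check, a bare metal instance can then be booted on a tenant network
through the Compute service. This is only a sketch: the network name
(``tenant-net``), image and keypair are assumptions and must already exist in
your cloud::

    net_id=$(openstack network show -f value -c id tenant-net)
    openstack server create --flavor baremetal --image $IMAGE \
        --nic net-id=$net_id --key-name default multitenant-test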

@ -1,29 +1,82 @@
.. _security:
=================
Security Overview
=================
While the Bare Metal service is intended to be a secure application, it is
important to understand what it does and does not cover today.
Deployers must properly evaluate their use case and take the appropriate
actions to secure their environment(s). This document is intended to provide an
overview of what risks an operator of the Bare Metal service should be aware
of. It is not intended as a How-To guide for securing a data center or an
OpenStack deployment.
.. TODO: add "Security Considerations for Network Boot" section
.. TODO: add "Credential Storage and Management" section
.. TODO: add "Securing Ironic's REST API" section
.. TODO: add "Multi-tenancy Considerations" section
REST API: user roles and policy settings
========================================
Beginning with the Newton (6.1.0) release, the Bare Metal service allows
operators significant control over API access:
* Access may be restricted to each method (GET, PUT, etc) for each
REST resource. Defaults are provided with the release and defined in code.
* Access may be divided between an "administrative" role with full access and
"observer" role with read-only access. By default, these roles are assigned
the names ``baremetal_admin`` and ``baremetal_observer``, respectively.
* As before, passwords may be hidden in ``driver_info``.
Prior to the Newton (6.1.0) release, the Bare Metal service only supported two
policy options:
* API access may be secured by a simple policy rule: users with administrative
privileges may access all API resources, whereas users without administrative
privileges may only access public API resources.
* Passwords contained in the ``driver_info`` field may be hidden from all API
responses with the ``show_password`` policy setting. This defaults to always
hide passwords, regardless of the user's role.
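For example, a read-only account for a monitoring system could be set up with
the Identity service CLI; this is a sketch, and the user and project names are
assumptions::

    openstack role create baremetal_observer
    openstack role add --user monitoring --project infra baremetal_observer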
Multi-tenancy
=============
There are two aspects of multitenancy to consider when evaluating a deployment
of the Bare Metal Service: interactions between tenants on the network, and
actions one tenant can take on a machine that will affect the next tenant.
Network Interactions
--------------------
Interactions between tenants' workloads running simultaneously on separate
servers include, but are not limited to: IP spoofing, packet sniffing, and
network man-in-the-middle attacks.
By default, the Bare Metal service provisions all nodes on a "flat" network, and
does not take any precautions to avoid or prevent interaction between tenants.
This can be addressed by integration with the OpenStack Identity, Compute, and
Networking services, so as to provide tenant-network isolation. Additional
documentation on `network multi-tenancy <multitenancy>`_ is available.
Lingering Effects
-----------------
Interactions between tenants placed sequentially on the same server include, but
are not limited to: changes in BIOS settings, modifications to firmware, or
files left on disk or peripheral storage devices (if these devices are not
erased between uses).
By default, the Bare Metal service will erase (clean) the local disk drives
during the "cleaning" phase, after deleting an instance. It *does not* reset
BIOS or reflash firmware or peripheral devices. This can be addressed through
customizing the utility ramdisk used during the "cleaning" phase. See details in
the `Firmware security`_ section.
Firmware security
=================

@ -106,3 +106,139 @@ API Errors
The `debug_tracebacks_in_api` config option may be set to return tracebacks
in the API response for all 4xx and 5xx errors.
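For example, the option could be enabled temporarily while debugging and the
API service restarted afterwards. This is a sketch that assumes the ``crudini``
utility and a systemd-based installation; editing ``/etc/ironic/ironic.conf``
directly works just as well::

    crudini --set /etc/ironic/ironic.conf DEFAULT debug_tracebacks_in_api True
    sudo systemctl restart openstack-ironic-api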
Retrieving logs from the deploy ramdisk
=======================================
When troubleshooting deployments (especially in case of a deploy failure)
it's important to have access to the deploy ramdisk logs to be able to
identify the source of the problem. By default, Ironic will retrieve the
logs from the deploy ramdisk when the deployment fails and save them on the
local filesystem at ``/var/log/ironic/deploy``.
To change this behavior, operators can make the following changes to
``/etc/ironic/ironic.conf`` under the ``[agent]`` group (an example of setting
these options from the command line follows the list):
* ``deploy_logs_collect``: Whether Ironic should collect the deployment
logs on deployment. Valid values for this option are:
* ``on_failure`` (**default**): Retrieve the deployment logs upon a
deployment failure.
* ``always``: Always retrieve the deployment logs, even if the
deployment succeeds.
* ``never``: Disable retrieving the deployment logs.
* ``deploy_logs_storage_backend``: The name of the storage backend where
the logs will be stored. Valid values for this option are:
* ``local`` (**default**): Store the logs in the local filesystem.
* ``swift``: Store the logs in Swift.
* ``deploy_logs_local_path``: The path to the directory where the
logs should be stored, used when the ``deploy_logs_storage_backend``
is configured to ``local``. By default logs will be stored at
**/var/log/ironic/deploy**.
* ``deploy_logs_swift_container``: The name of the Swift container in which to
store the logs, used when ``deploy_logs_storage_backend`` is configured to
``swift``. Defaults to **ironic_deploy_logs_container**.
* ``deploy_logs_swift_days_to_expire``: Number of days before a log object
is marked as expired in Swift. If None, the logs will be kept forever
or until manually deleted. Used when ``deploy_logs_storage_backend`` is
configured to ``swift``. Defaults to **30** days.
When the logs are collected, Ironic will store a *tar.gz* file containing
all the logs according to the ``deploy_logs_storage_backend``
configuration option. All log objects will be named with the following
pattern::
<node-uuid>[_<instance-uuid>]_<timestamp yyyy-mm-dd-hh:mm:ss>.tar.gz
.. note::
The *instance_uuid* field is not required for deploying a node when
Ironic is configured to be used in standalone mode. If present it
will be appended to the name.
Accessing the log data
----------------------
When storing in the local filesystem
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When storing the logs in the local filesystem, the log files can
be found at the path configured in the ``deploy_logs_local_path``
configuration option. For example, to find the logs from the node
``5e9258c4-cfda-40b6-86e2-e192f523d668``:
.. code-block:: bash
$ ls /var/log/ironic/deploy | grep 5e9258c4-cfda-40b6-86e2-e192f523d668
5e9258c4-cfda-40b6-86e2-e192f523d668_88595d8a-6725-4471-8cd5-c0f3106b6898_2016-08-08-13:52:12.tar.gz
5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz
.. note::
When saving the logs to the filesystem, operators may want to enable
some form of rotation for the logs to avoid disk space problems.
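For example, a simple cron job could delete archives older than 30 days; this
is a sketch, and the path and retention period should be adjusted to your
environment::

    find /var/log/ironic/deploy -name '*.tar.gz' -mtime +30 -delete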
When storing in Swift
~~~~~~~~~~~~~~~~~~~~~
When using Swift, operators can associate the objects in the
container with the nodes in Ironic and search for the logs for the node
``5e9258c4-cfda-40b6-86e2-e192f523d668`` using the **prefix** parameter.
For example:
.. code-block:: bash
$ swift list ironic_deploy_logs_container -p 5e9258c4-cfda-40b6-86e2-e192f523d668
5e9258c4-cfda-40b6-86e2-e192f523d668_88595d8a-6725-4471-8cd5-c0f3106b6898_2016-08-08-13:52:12.tar.gz
5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz
To download a specific log from Swift, do:
.. code-block:: bash
$ swift download ironic_deploy_logs_container "5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz"
5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz [auth 0.341s, headers 0.391s, total 0.391s, 0.531 MB/s]
The contents of the log file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The log is just a ``.tar.gz`` file that can be extracted as:
.. code-block:: bash
$ tar xvf <file path>
The contents of the file may differ slightly depending on the distribution
that the deploy ramdisk is using:
* For distributions using ``systemd`` there will be a file called
**journal** which contains all the system logs collected via the
``journalctl`` command.
* For other distributions, the ramdisk will collect all the contents of
the ``/var/log`` directory.
For all distributions, the log file will also contain the output of
the following commands (if present): ``ps``, ``df``, ``ip addr`` and
``iptables``.
Here's an example of extracting the contents of a log file for a
distribution that uses ``systemd``:
.. code-block:: bash
$ tar xvf 5e9258c4-cfda-40b6-86e2-e192f523d668_88595d8a-6725-4471-8cd5-c0f3106b6898_2016-08-08-13:52:12.tar.gz
df
ps
journal
ip_addr
iptables

@ -5,12 +5,37 @@ Bare Metal Service Upgrade Guide
================================
This document outlines various steps and notes for operators to consider when
upgrading their ironic-driven clouds from previous versions of OpenStack.
The ironic service is tightly coupled with the ironic driver that is shipped
with nova. Some special considerations must be taken into account
when upgrading your cloud from previous versions of OpenStack.
The `release notes <http://docs.openstack.org/releasenotes/ironic/>`_
should always be read carefully when upgrading the ironic service. Starting
with the Mitaka series, specific upgrade steps and considerations are
well-documented in the release notes. Specific upgrade considerations prior
to the Mitaka series are documented below.
Upgrades are only supported one series at a time, or within a series.
General upgrades - all versions
===============================
Starting with the Liberty release, the ironic service should always be upgraded
before the nova service. The ironic virt driver in nova always uses a specific
version of the ironic REST API. This API version may be one that was introduced
in the same development cycle, so upgrading nova first may result in nova being
unable to use ironic's API.
When upgrading ironic, the following steps should always be taken (a sketch of
the corresponding commands follows the list):
* Update the ironic code, without restarting services yet.
* Run the database migrations.
* Restart the ironic-conductor and ironic-api services.
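A sketch of these steps on a systemd-based, package-based installation might
look like the following; package names, paths and service names vary between
distributions and deployment tools::

    # update the ironic code without restarting the services
    sudo yum update openstack-ironic-api openstack-ironic-conductor

    # run the database migrations
    ironic-dbsync --config-file /etc/ironic/ironic.conf upgrade

    # restart the services
    sudo systemctl restart openstack-ironic-conductor openstack-ironic-api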
Upgrading from Kilo to Liberty
==============================
@ -25,7 +50,7 @@ the **ironic-discoverd** package. Ironic Liberty supports the
Please refer to
`ironic-inspector version support matrix
<http://docs.openstack.org/developer/ironic-inspector/install.html#version-support-matrix>`_
for details on which ironic versions can work with which
**ironic-inspector**/**ironic-discoverd** versions.
It's also highly recommended that you switch to using **ironic-inspector**,
@ -62,25 +87,25 @@ The discoverd to inspector upgrade procedure is as follows:
Upgrading from Juno to Kilo
===========================
When upgrading a cloud from Juno to Kilo, users must ensure the nova
service is upgraded prior to upgrading the ironic service. Additionally,
users need to set a special config flag in nova prior to upgrading to ensure
the newer version of nova is not attempting to take advantage of new ironic
features until the ironic service has been upgraded. The steps for upgrading
your nova and ironic services are as follows:
- Edit nova.conf and ensure force_config_drive=False is set in the [DEFAULT]
group. Restart nova-compute if necessary.
- Install new nova code, run database migrations
- Install new python-ironicclient code.
- Restart nova services.
- Install new ironic code, run database migrations, restart ironic services.
- Edit nova.conf and set force_config_drive to your liking, restarting
nova-compute if necessary.
Note that during the period between nova's upgrade and ironic's upgrades,
instances can still be provisioned to nodes. However, any attempt by users to
specify a config drive for an instance will cause an error until ironic's
upgrade has completed.
Cleaning
@ -90,8 +115,8 @@ workloads to ensure the node is ready for another workload. This can include
erasing the hard drives, updating firmware, and other steps. For more
information, see :ref:`automated_cleaning`.
If ironic is configured with automated cleaning enabled (defaults to True) and
to use Neutron as the DHCP provider (also the default), you will need to set the
`cleaning_network_uuid` option in the ironic configuration file before starting
the Kilo ironic service. See :ref:`CleaningNetworkSetup` for information on
how to set up the cleaning network for ironic.

@ -4,30 +4,25 @@
Introduction to Ironic
======================
Ironic is an OpenStack project which provisions bare metal (as opposed to
virtual) machines. It may be used independently or as part of an OpenStack
Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova),
Network (neutron), Image (glance) and Object (swift) services.
When the Bare Metal service is appropriately configured with the Compute and
Network services, it is possible to provision both virtual and physical
machines through the Compute service's API. However, the set of instance
actions is limited, arising from the different characteristics of physical
servers and switch hardware. For example, live migration can not be performed
on a bare metal instance.
The community maintains reference drivers that leverage open-source
technologies (eg. PXE and IPMI) to cover a wide range of hardware. Ironic's
pluggable driver architecture also allows hardware vendors to write and
contribute drivers that may improve performance or add functionality not
provided by the community drivers.
.. TODO: the remainder of this file needs to be cleaned up still
Why Provision Bare Metal
========================

@ -93,9 +93,9 @@ Here the ``spacing`` argument is a period in seconds for a given periodic task.
For example 'spacing=5' means every 5 seconds.
.. note::
In releases prior to and including the Newton release, it's possible to
bind periodic tasks to a driver object instead of an interface. This is
deprecated and support for it will be removed in the Ocata release.
Message Routing

@ -8,6 +8,60 @@ This document provides some necessary points for developers to consider when
writing and reviewing Ironic code. The checklist will help developers get
things right.
Getting Started
===============
If you're completely new to OpenStack and want to contribute to the ironic
project, please start by familiarizing yourself with the `Infra Team's Developer
Guide <http://docs.openstack.org/infra/manual/developers.html>`_. This will help
you get your accounts set up in Launchpad and Gerrit, familiarize you with the
workflow for the OpenStack continuous integration and testing systems, and help
you with your first commit.
LaunchPad Project
-----------------
Most of the tools used for OpenStack require a launchpad.net ID for
authentication.
.. seealso::
* https://launchpad.net
* https://launchpad.net/ironic
Related Projects
----------------
There are several projects that are tightly integrated with ironic and which are
developed by the same community.
.. seealso::
* https://launchpad.net/bifrost
* https://launchpad.net/ironic-inspector
* https://launchpad.net/ironic-lib
* https://launchpad.net/ironic-python-agent
* https://launchpad.net/python-ironicclient
* https://launchpad.net/python-ironic-inspector-client
Project Hosting Details
-----------------------
Bug tracker
http://launchpad.net/ironic
Mailing list (prefix Subject line with ``[ironic]``)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Wiki
http://wiki.openstack.org/Ironic
Code Hosting
https://git.openstack.org/cgit/openstack/ironic
Code Review
https://review.openstack.org/#/q/status:open+project:openstack/ironic,n,z
Adding New Features
===================
@ -208,4 +262,3 @@ For approved and completed specs:
Please see the `Ironic specs process wiki page <https://wiki.openstack.org/
wiki/Ironic/Specs_Process>`_ for further reference.

@ -1,60 +0,0 @@
.. _contributing:
======================
Contributing to Ironic
======================
If you're interested in contributing to the Ironic project,
the following will help get you started.
Contributor License Agreement
-----------------------------
.. index::
single: license; agreement
In order to contribute to the Ironic project, you need to have
signed OpenStack's contributor's agreement.
.. seealso::
* http://docs.openstack.org/infra/manual/developers.html
* http://wiki.openstack.org/CLA
LaunchPad Project
-----------------
Most of the tools used for OpenStack depend on a launchpad.net ID for
authentication.
.. seealso::
* https://launchpad.net
* https://launchpad.net/ironic
Related Projects
----------------
* https://launchpad.net/ironic-inspector
* https://launchpad.net/python-ironicclient
* https://launchpad.net/python-ironic-inspector-client
* https://launchpad.net/bifrost
Project Hosting Details
-----------------------
Bug tracker
http://launchpad.net/ironic
Mailing list (prefix subjects with ``[ironic]`` for faster responses)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Wiki
http://wiki.openstack.org/Ironic
Code Hosting
https://git.openstack.org/cgit/openstack/ironic
Code Review
https://review.openstack.org/#/q/status:open+project:openstack/ironic,n,z

@ -18,12 +18,23 @@ to submitting a patch.
.. note::
This document is compatible with Python (3.5), Ubuntu (16.04) and Fedora (23).
When referring to different versions of Python and OS distributions, this
is explicitly stated.
.. seealso::
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Prepare Development System
==========================
System Prerequisites
--------------------
The following packages cover the prerequisites for a local development
environment on most current distributions. Instructions for getting set up with
non-default versions of Python and on older distributions are included below as
well.
- Ubuntu/Debian::
@ -53,8 +64,11 @@ Install prerequisites for python 2.7:
`<https://software.opensuse.org/download.html?project=graphics&package=graphviz-plugins>`_.
(Optional) Installing Py34 requirements
---------------------------------------
If you need Python 3.4, follow the instructions above to install prerequisites
and *additionally* install the following packages:
- On Ubuntu 14.x/Debian::
@ -81,8 +95,12 @@ additionally install the following packages:
sudo dnf install python3-devel
(Optional) Installing Py35 requirements
---------------------------------------
If you need Python 3.5 support on an older distro that does not already have
it, follow the instructions for installing prerequisites above and
*additionally* run the following commands.
- On Ubuntu 14.04::
@ -103,45 +121,43 @@ run the following commands.
sudo dnf copr enable -y mstuchli/Python3.5
dnf install -y python35-python3
Python Prerequisites
--------------------
If your distro has at least tox 1.8, use a similar command to install the
``python-tox`` package. Otherwise, install tox on all distros::
sudo pip install -U tox
You may need to explicitly upgrade virtualenv if you've installed the one
from your OS distribution and it is too old (tox will complain). You can
upgrade it individually, if you need to::
sudo pip install -U virtualenv
Running Unit Tests Locally
==========================
If you haven't already, Ironic source code should be pulled directly from git::
# from your home or source directory
cd ~
git clone https://git.openstack.org/openstack/ironic
cd ironic
Setting up a local environment for development and testing should be done with
tox, for example::
# create a virtualenv for development
tox -evenv --notest
Running Unit and Style Tests
----------------------------
All unit tests should be run using tox. To run Ironic's entire test suite::
# to run the py27, py34, py35 unit tests, and the style tests
tox
To run a specific test or tests, use the "-e" option followed by the tox target
name. For example::
# run the unit tests under py27 and also run the pep8 tests
tox -epy27 -epep8
.. note::
@ -160,10 +176,6 @@ To run a specific unit test, this passes the -r option and desired test
# run a specific test for Python 2.7
tox -epy27 -- -r test_conductor
To run only the pep8/flake8 syntax and style checks::
tox -epep8
Debugging unit tests
--------------------
@ -184,25 +196,97 @@ Then run ``tox`` with the debug environment as one of the following::
For more information see the `oslotest documentation
<http://docs.openstack.org/developer/oslotest/features.html#debugging-with-oslo-debug-helper>`_.
Additional Tox Targets
----------------------
There are several additional tox targets not included in the default list, such
as the target which builds the documentation site. See the ``tox.ini`` file
for a complete listing of tox targets. These can be run directly by specifying
the target name::
# generate the documentation pages locally
tox -edocs
# generate the sample configuration file
tox -egenconfig
Exercising the Services Locally
===============================
In addition to running automated tests, sometimes it can be helpful to actually
run the services locally, without needing a server in a remote datacenter.
If you would like to exercise the Ironic services in isolation within your local
environment, you can do this without starting any other OpenStack services. For
example, this is useful for rapidly prototyping and debugging interactions over
the RPC channel, testing database migrations, and so forth.
Here we describe two ways to install and configure the dependencies, either run
directly on your local machine or encapsulated in a virtual machine or
container.
Step 1: Create a Python virtualenv
----------------------------------
#. If you haven't already downloaded the source code, do that first::
cd ~
git clone https://git.openstack.org/openstack/ironic
cd ironic
#. Create the Python virtualenv::
tox -evenv --notest --develop -r
#. Activate the virtual environment::
source .tox/venv/bin/activate
#. Install the ironic client::
pip install python-ironicclient
.. NOTE: You can install python-ironicclient from source by cloning the git
repository and running `pip install .` while in the root of the
cloned repository.
#. Export some ENV vars so the client will connect to the local services
that you'll start in the next section::
export OS_AUTH_TOKEN=fake-token
export IRONIC_URL=http://localhost:6385/
Next, install and configure system dependencies. Two different approaches are
described below; you should only do one of these.
Step 2a: System Dependencies In A Virtual Machine
-------------------------------------------------
This option requires `virtualbox <https://www.virtualbox.org>`_,
`vagrant <https://www.vagrantup.com>`_, and
`ansible <https://www.ansible.com>`_. You may install these using your
favorite package manager, or by downloading from the provided links.
#. Let vagrant do the work::
vagrant up
This will create a VM available to your local system at `192.168.99.11`,
will install all the necessary service dependencies,
and configure some default users. It will also generate
`./etc/ironic/ironic.conf.local` preconfigured for local dev work.
We recommend you compare and familiarize yourself with the settings in
`./etc/ironic/ironic.conf.sample` so you can adjust it to meet your own needs.
Step 2b: Install System Dependencies Locally
--------------------------------------------
This option will install RabbitMQ and MySQL on your local system. This may not
be desirable in some situations (eg, you're developing from a laptop and do not
want to run a MySQL server on it all the time).
#. Install rabbitmq-server::
# install rabbit message broker
# Ubuntu/Debian:
@ -220,7 +304,7 @@ Option 1: Manual Install
sudo zypper install rabbitmq-server
sudo systemctl start rabbitmq-server.service
#. Install mysql-server::
# Ubuntu/Debian:
# sudo apt-get install mysql-server
@ -237,17 +321,11 @@ Option 1: Manual Install
# sudo zypper install mariadb
# sudo systemctl start mysql.service
#. Clone the ``Ironic`` repository and install it within a virtualenv::
# If using MySQL, you need to create the initial database
mysql -u root -pMYSQL_ROOT_PWD -e "create schema ironic"
# activate the virtualenv
cd ~
git clone https://git.openstack.org/openstack/ironic
cd ironic
tox -evenv --notest
source .tox/venv/bin/activate
# install ironic within the virtualenv
python setup.py develop
.. NOTE: if you choose not to install mysql-server, ironic will default to
using a local sqlite database.
#. Create a configuration file within the ironic source directory::
@ -266,95 +344,41 @@ Option 1: Manual Install
# turn off the periodic sync_power_state task, to avoid getting NodeLocked exceptions
sed -i "s/#sync_power_state_interval = 60/sync_power_state_interval = -1/" etc/ironic/ironic.conf.local
#. Initialize the ironic database (optional)::
# ironic defaults to storing data in ./ironic/ironic.sqlite
# If using MySQL, you need to create the initial database
mysql -u root -pMYSQL_ROOT_PWD -e "create schema ironic"
# and switch the DB connection from sqlite to something else, eg. mysql
# if you opted to install mysql-server, switch the DB connection from sqlite to mysql
sed -i "s/#connection = .*/connection = mysql\+pymysql:\/\/root:MYSQL_ROOT_PWD@localhost\/ironic/" etc/ironic/ironic.conf.local
At this point, you can continue to Step 2.
Step 3: Start the Services
--------------------------
From within the python virtualenv, run the following command to prepare the
database before you start the ironic services::
Step 2: Start the API
---------------------
#. Activate the virtual environment created in the previous section to run
the API::
# switch to the ironic source (Not necessary if you followed Option 1)
cd ironic
# activate the virtualenv
source .tox/venv/bin/activate
# install ironic within the virtualenv
python setup.py develop
# This creates the database tables.
# initialize the database for ironic
ironic-dbsync --config-file etc/ironic/ironic.conf.local create_schema
Next, open two new terminals for this section, and run each of the examples
here in a separate terminal. In this way, the services will *not* be run as
daemons; you can observe their output and stop them with Ctrl-C at any time.
#. Start the API service in debug mode and watch its output::
# start the API service
cd ~/ironic
source .tox/venv/bin/activate
ironic-api -v -d --config-file etc/ironic/ironic.conf.local
#. Start the Conductor service in debug mode and watch its output::
Step 3: Install the Client
--------------------------
#. Clone the ``python-ironicclient`` repository and install it within a
virtualenv::
# from your home or source directory
cd ~
git clone https://git.openstack.org/openstack/python-ironicclient
cd python-ironicclient
tox -evenv --notest
cd ~/ironic
source .tox/venv/bin/activate
Step 4: Start the Conductor Service
-----------------------------------
Open one more window (or screen session), again activate the venv, and then
start the conductor service and watch its output::
# activate the virtualenv
cd ironic
source .tox/venv/bin/activate
# start the conductor service
ironic-conductor -v -d --config-file etc/ironic/ironic.conf.local
Step 4: Interact with the running services
------------------------------------------
You should now be able to interact with ironic via the python client, which is
present in the python virtualenv, and observe both services' debug outputs in
the other two windows. This is a good way to test new features or play with the
functionality without necessarily starting DevStack.
To get started, list the available commands and resources::
@ -394,11 +418,35 @@ Here is an example walkthrough of creating a node::
# its power state from ironic!
ironic node-set-power-state $NODE on
If you make some code changes and want to test their effects, install
again with "python setup.py develop", stop the services with Ctrl-C,
and restart them.
If you make some code changes and want to test their effects, simply stop the
services with Ctrl-C and restart them.
Step 5: Fixing your test environment
------------------------------------
If you are testing changes that add or remove python entrypoints, or making
significant changes to ironic's python modules, or simply keep the virtualenv
around for a long time, your development environment may reach an inconsistent
state. It may help to delete cached ".pyc" files, update dependencies,
reinstall ironic, or even recreate the virtualenv. The following commands may
help with that, but are not an exhaustive troubleshooting guide.::
# clear cached pyc files
cd ~/ironic/ironic
find ./ -name '*.pyc' | xargs rm
# reinstall ironic modules
cd ~/ironic
source .tox/venv/bin/activate
pip uninstall ironic
pip install -e .
# install and upgrade ironic and all python dependencies
cd ~/ironic
source .tox/venv/bin/activate
pip install -U -e .
==============================
Deploying Ironic with DevStack
==============================
@ -407,12 +455,21 @@ driver and provide hardware resources (network, baremetal compute nodes)
using a combination of OpenVSwitch and libvirt. It is highly recommended
to deploy on an expendable virtual machine and not on your personal work
station. Deploying Ironic with DevStack requires a machine running Ubuntu
14.04 (or later) or Fedora 20 (or later). Make sure your machine is fully
up to date and has the latest packages installed before beginning this process.
.. seealso::
http://docs.openstack.org/developer/devstack/
.. note::
The devstack "demo" tenant is now granted the "baremetal_observer" role
and thereby has read-only access to ironic's API. This is sufficient for
all the examples below. Should you want to create or modify bare metal
resources directly (ie. through ironic rather than through nova) you will
need to use the devstack "admin" tenant.
Devstack will no longer create the user 'stack' with the desired
permissions, but does provide a script to perform the task::
@ -487,7 +544,7 @@ and uses the ``agent_ipmitool`` driver by default::
# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1280
IRONIC_VM_SPECS_DISK=10
# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
@ -559,7 +616,7 @@ Run stack.sh::
./stack.sh
Source credentials, create a key, and spawn an instance as the ``demo`` user::
source ~/devstack/openrc
@ -568,22 +625,22 @@ Source credentials, create a key, and spawn an instance::
# create keypair
ssh-keygen
openstack keypair create --public-key ~/.ssh/id_rsa.pub default
# spawn instance
openstack server create --flavor baremetal --image $image --key-name default testing
.. note::
Because devstack creates multiple networks, we need to pass an additional parameter
``--nic net-id`` to the server create command when using the admin account, for example::
net_id=$(openstack network list | egrep "$PRIVATE_NETWORK_NAME"'[^-]' | awk '{ print $2 }')
openstack server create --flavor baremetal --nic net-id=$net_id --image $image --key-name default testing
You should now see a Nova instance building::
openstack server list
+--------------------------------------+---------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+----------+
@ -594,9 +651,7 @@ Nova will be interfacing with Ironic conductor to spawn the node. On the
Ironic side, you should see an Ironic node associated with this Nova instance.
It should be powered on and in a 'wait call-back' provisioning state::
openstack baremetal node list
+--------------------------------------+--------------------------------------+-------------+--------------------+
| UUID | Instance UUID | Power State | Provisioning State |
+--------------------------------------+--------------------------------------+-------------+--------------------+
@ -621,7 +676,7 @@ This provisioning process may take some time depending on the performance of
the host system, but Ironic should eventually show the node as having an
'active' provisioning state::
openstack baremetal node list
+--------------------------------------+--------------------------------------+-------------+--------------------+
| UUID | Instance UUID | Power State | Provisioning State |
+--------------------------------------+--------------------------------------+-------------+--------------------+
@ -633,9 +688,7 @@ the host system, but Ironic should eventually show the node as having an
This should also be reflected in the Nova instance state, which at this point
should be ACTIVE, Running and an associated private IP::
openstack server list
+--------------------------------------+---------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+------------------+
@ -647,7 +700,6 @@ The server should now be accessible via SSH::
ssh cirros@10.1.0.4
$
=====================
Running Tempest tests
=====================
@ -701,7 +753,6 @@ For more information about the supported parameters see::
Always be careful when running debuggers in time sensitive code,
they may cause timeout errors that weren't there before.
================================
Building developer documentation
================================
@ -739,4 +790,3 @@ commands to build the documentation set:
#Now use your browser to open the top-level index.html located at:
http://your_ip:8000

@ -0,0 +1,192 @@
.. _notifications:
=============
Notifications
=============
Ironic notifications are events intended for consumption by external services
like a billing or usage system, a monitoring data store, or other OpenStack
services. Notifications are sent to these services over a message bus by
oslo.messaging's Notifier class [1]_. The consumer sees the notification as a
JSON object structured in the following way as defined by oslo.messaging::
{
"priority": <string, defined by the sender>,
"event_type": <string, defined by the sender>,
"timestamp": <string, the isotime of when the notification emitted>,
"publisher_id": <string, defined by the sender>,
"message_id": <uuid, generated by oslo>,
"payload": <json serialized dict, defined by the sender>
}
Versioned notifications in ironic
---------------------------------
To make it easier for consumers to use ironic's notifications predictably,
ironic defines each notification and its payload as oslo versioned objects
[2]_.
An increase in the minor version of the payload will indicate that only
new fields have been added since the last version, so the consumer can still
use the notification as it did previously. An increase in the major version of
the payload indicates that the consumer can no longer parse the notification as
it did previously, indicating that a field was removed or the type of the
payload field changed.
Ironic exposes a configuration option in the ``DEFAULT`` section called
``notification_level`` that indicates the minimum level for which
notifications will be emitted. This option is not defined by default, which
indicates that no notifications will be sent by ironic. Notification levels
may be "debug", "info", "warning", "error", or "critical", and each
level follows the OpenStack logging guidelines [3]_. If it's desired that
ironic emit all notifications, the config option should be set to "debug", for
example. If only "warning", "error", and "critical" notifications are needed,
the config option should be set to "warning". This level gets exposed in the
notification on the wire as the "priority" field.
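For example, to emit only notifications of level "info" and above, the option
could be set as follows. This is a sketch assuming the ``crudini`` utility and
a systemd-based installation; editing ``ironic.conf`` directly works just as
well::

    crudini --set /etc/ironic/ironic.conf DEFAULT notification_level info
    sudo systemctl restart openstack-ironic-conductor openstack-ironic-api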
All ironic versioned notifications will be sent on the message bus via the
``ironic_versioned_notifications`` topic.
Ironic also has a set of base classes that assist in clearly defining the
notification itself, the payload, and the other fields not auto-generated by
oslo (level, event_type and publisher_id). Below describes how to use these
base classes to add a new notification to ironic.
Adding a new notification to ironic
-----------------------------------
To add a new notification to ironic, new versioned notification classes should
be created by subclassing the NotificationBase class to define the notification
itself and the NotificationPayloadBase class to define which fields the new
notification will contain inside its payload. You may also define a schema to
allow the payload to be automatically populated by the fields of an ironic
object. Here's an example::
# The ironic object whose fields you want to use in your schema
@base.IronicObjectRegistry.register
class ExampleObject(base.IronicObject):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'id': fields.IntegerField(),
'uuid': fields.UUIDField(),
'a_useful_field': fields.StringField(),
'not_useful_field': fields.StringField()
}
# A class for your new notification
@base.IronicObjectRegistry.register
class ExampleNotification(notification.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': fields.ObjectField('ExampleNotifPayload')
}
# A class for your notification's payload
@base.IronicObjectRegistry.register
class ExampleNotifPayload(notification.NotificationPayloadBase):
# Schemas are optional. They just allow you to reuse other objects'
# fields by passing in that object and calling populate_schema with
# a kwarg set to the other object.
SCHEMA = {
'a_useful_field': ('example_obj', 'a_useful_field')
}
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'a_useful_field': fields.StringField(),
'an_extra_field': fields.StringField(nullable=True)
}
SCHEMA defines how to populate the payload fields. It's an optional
attribute that subclasses may use to easily populate notifications with
data from other objects.
It is a dictionary where every key value pair has the following format::
<payload_field_name>: (<data_source_name>,
<field_of_the_data_source>)
The ``<payload_field_name>`` is the name where the data will be stored in the
payload object; this field has to be defined as a field of the payload.
The ``<data_source_name>`` shall refer to name of the parameter passed as
kwarg to the payload's ``populate_schema()`` call and this object will be
used as the source of the data. The ``<field_of_the_data_source>`` shall be
a valid field of the passed argument.
The SCHEMA needs to be applied with the ``populate_schema()`` call before the
notification can be emitted.
The value of the ``payload.<payload_field_name>`` field will be set by the
``<data_source_name>.<field_of_the_data_source>`` field. The
``<data_source_name>`` will not be part of the payload object internal or
external representation.
Payload fields that are not set by the SCHEMA can be filled in the same
way as in any versioned object.
Then, to create a payload, you would do something like the following. Note
that if you choose to define a schema in the SCHEMA class variable, you must
populate the schema by calling ``populate_schema(example_obj=my_example_obj)``
before the notification can be emitted::
my_example_obj = ExampleObject(id=1,
a_useful_field='important',
not_useful_field='blah')
# an_extra_field is optional since it's not a part of the SCHEMA and is a
# nullable field in the class fields
my_notify_payload = ExampleNotifPayload(an_extra_field='hello')
# populate the schema with the ExampleObject fields
my_notify_payload.populate_schema(example_obj=my_example_obj)
You then create the notification with the oslo required fields (event_type,
publisher_id, and level, all sender fields needed by oslo that are defined
in the ironic notification base classes) and emit it::
notify = ExampleNotification(
event_type=notification.EventType(object='example_obj',
action='do_something', status='start'),
publisher=notification.NotificationPublisher(service='conductor',
host='cond-hostname01'),
level=fields.NotificationLevel.DEBUG,
payload=my_notify_payload)
notify.emit(context)
When specifying the event_type, ``object`` will specify the object being acted
on, ``action`` will be a string describing what action is being performed on
that object, and ``status`` will be one of "start", "end", "error", or
"success". "start" and "end" are used to indicate when actions that are not
immediate begin and succeed. "success" is used to indicate when actions that
are immediate succeed. "error" is used to indicate when any type of action
fails, regardless of whether it's immediate or not. As a result of specifying
these parameters, event_type will be formatted as
``baremetal.<object>.<action>.<status>`` on the message bus.
This example will send the following notification over the message bus::
{
"priority": "debug",
"payload":{
"ironic_object.namespace":"ironic",
"ironic_object.name":"ExampleNotifyPayload",
"ironic_object.version":"1.0",
"ironic_object.data":{
"a_useful_field":"important",
"an_extra_field":"hello"
}
},
"event_type":"baremetal.example_obj.do_something.start",
"publisher_id":"conductor.cond-hostname01"
}
Existing notifications
----------------------
Descriptions of notifications emitted by ironic will be documented here when
they are added.
.. [1] http://docs.openstack.org/developer/oslo.messaging/notifier.html
.. [2] http://docs.openstack.org/developer/oslo.versionedobjects
.. [3] https://wiki.openstack.org/wiki/LoggingStandards#Log_level_definitions

@ -0,0 +1,146 @@
========================
REST API Version History
========================
**1.22**
Added endpoints for deployment ramdisks.
**1.21**
Add node ``resource_class`` field.
**1.20**
Add node ``network_interface`` field.
**1.19**
Add ``local_link_connection`` and ``pxe_enabled`` fields to the port object.
**1.18**
Add ``internal_info`` readonly field to the port object, that will be used
by ironic to store internal port-related information.
**1.17**
Addition of provision_state verb ``adopt`` which allows an operator
to move a node from ``manageable`` state to ``active`` state without
performing a deployment operation on the node. This is intended for
nodes that have already been deployed by external means.
**1.16**
Add ability to filter nodes by driver.
**1.15**
Add ability to do manual cleaning when a node is in the manageable
provision state via PUT v1/nodes/<identifier>/states/provision,
target:clean, clean_steps:[...].
**1.14**
Make the following endpoints discoverable via Ironic API:
* '/v1/nodes/<UUID or logical name>/states'
* '/v1/drivers/<driver name>/properties'
**1.13**
Add a new verb ``abort`` to the API used to abort nodes in
``CLEANWAIT`` state.
**1.12**
This API version adds the following abilities:
* Get/set ``node.target_raid_config`` and to get
``node.raid_config``.
* Retrieve the logical disk properties for the driver.
**1.11** (breaking change)
Newly registered nodes begin in the ``enroll`` provision state by default,
instead of ``available``. To get them to the ``available`` state,
the ``manage`` action must first be run to verify basic hardware control.
On success the node moves to ``manageable`` provision state. Then the
``provide`` action must be run. Automated cleaning of the node is done and
the node is made ``available``.
**1.10**
Logical node names support all RFC 3986 unreserved characters.
Previously only valid fully qualified domain names could be used.
**1.9**
Add ability to filter nodes by provision state.
**1.8**
Add ability to return a subset of resource fields.
**1.7**
Add node ``clean_step`` field.
**1.6**
Add :ref:`inspection` process: introduce ``inspecting`` and ``inspectfail``
provision states, and ``inspect`` action that can be used when a node is in
``manageable`` provision state.
**1.5**
Add logical node names that can be used to address a node in addition to
the node UUID. Name is expected to be a valid `fully qualified domain
name`_ in this version of API.
**1.4**
Add ``manageable`` state and ``manage`` transition, which can be used to
move a node to ``manageable`` state from ``available``.
The node cannot be deployed in ``manageable`` state.
This change is mostly a preparation for future inspection work
and introduction of ``enroll`` provision state.
**1.3**
Add node ``driver_internal_info`` field.
**1.2** (breaking change)
Renamed NOSTATE (``None`` in Python, ``null`` in JSON) node state to
``available``. This is needed to reduce confusion around ``None`` state,
especially when future additions to the state machine land.
**1.1**
This was the initial version when API versioning was introduced.
Includes the following changes from Kilo release cycle:
* Add node ``maintenance_reason`` field and an API endpoint to
set/unset the node maintenance mode.
* Add sync and async support for vendor passthru methods.
* Vendor passthru endpoints support different HTTP methods, not only
``POST``.
* Make vendor methods discoverable via the Ironic API.
* Add logic to store the config drive passed by Nova.
This has been the minimum supported version since versioning was
introduced.
**1.0**
This version denotes Juno API and was never explicitly supported, as API
versioning was not implemented in Juno, and **1.1** became the minimum
supported version in Kilo.
.. _fully qualified domain name: https://en.wikipedia.org/wiki/Fully_qualified_domain_name

doc/source/dev/webapi.rst Normal file
@ -0,0 +1,76 @@
=========================
REST API Conceptual Guide
=========================
Versioning
==========
The ironic REST API supports two types of versioning:
- "major versions", which have dedicated urls.
- "microversions", which can be requested through the use of the
``X-OpenStack-Ironic-API-Version`` header.
There is only one major version supported currently, "v1". As such, most URLs
in this documentation are written with the "/v1/" prefix.
Starting with the Kilo release, ironic supports microversions. In this context,
a version is defined as a string of 2 integers separated by a dot: **X.Y**.
Here ``X`` is a major version, always equal to ``1``, and ``Y`` is
a minor version. Server minor version is increased every time the API behavior
is changed (note `Exceptions from Versioning`_).
.. note::
`Nova versioning documentation`_ has a nice guide for developers on when to
bump an API version.
The server indicates its minimum and maximum supported API versions in the
``X-OpenStack-Ironic-API-Minimum-Version`` and
``X-OpenStack-Ironic-API-Maximum-Version`` headers respectively, returned
with every response. A client may request a specific API version by providing
the ``X-OpenStack-Ironic-API-Version`` header with the request.
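For example, the supported version range and a specific microversion request
can be exercised with plain HTTP tools; this is a sketch, and the endpoint URL
and token handling are assumptions that depend on your deployment::

    # inspect the minimum and maximum supported versions
    curl -i http://localhost:6385/v1/ | grep -i Ironic-API

    # request a specific microversion
    curl -H "X-OpenStack-Ironic-API-Version: 1.20" \
         -H "X-Auth-Token: $TOKEN" \
         http://localhost:6385/v1/nodes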
The requested microversion determines both the allowable requests and the
response format for all requests. A resource may be represented differently
based on the requested microversion.
If no version is requested by the client, the minimum supported version will be
assumed. In this way, a client is only exposed to those API features that are
supported in the requested (explicitly or implicitly) API version (again note
`Exceptions from Versioning`_, they are not covered by this rule).
We recommend clients that require a stable API to always request a specific
version of API that they have been tested against.
.. note::
A special value ``latest`` can be requested instead of a numerical
microversion, which always requests the newest supported API version from
the server.
.. _Nova versioning documentation: http://docs.openstack.org/developer/nova/api_microversion_dev.html#when-do-i-need-a-new-microversion
REST API Versions History
-------------------------
.. toctree::
:maxdepth: 1
API Version History <dev/webapi-version-history>
Exceptions from Versioning
--------------------------
The following API-visible things are not covered by the API versioning:
* Current node state is always exposed as it is, even if not supported by the
requested API version, with exception of ``available`` state, which is
returned in version 1.1 as ``None`` (in Python) or ``null`` (in JSON).
* Data within free-form JSON attributes: ``properties``, ``driver_info``,
``instance_info``, ``driver_internal_info`` fields on a node object;
``extra`` fields on all objects.
* Addition of new drivers.
* All vendor passthru methods.


@ -43,9 +43,9 @@ Prerequisites
which contains set of modules for managing HPE ProLiant hardware.
Install ``proliantutils`` module on the ironic conductor node. Minimum
version required is 2.1.7.::
version required is 2.1.11.::
$ pip install "proliantutils>=2.1.7"
$ pip install "proliantutils>=2.1.11"
* ``ipmitool`` command must be present on the service node(s) where
``ironic-conductor`` is running. On most distros, this is provided as part
@ -626,6 +626,11 @@ mode (Legacy BIOS or UEFI).
* When boot mode capability is not configured:
- If the config variable ``default_boot_mode`` in the ``[ilo]`` section of
the ironic configuration file is set to either 'bios' or 'uefi', then the
iLO drivers use that boot mode for provisioning the bare metal ProLiant
servers (see the example below).
- If the pending boot mode is set on the node, then the iLO drivers use that
boot mode for provisioning the bare metal ProLiant servers.
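For instance, an operator who always wants UEFI provisioning for nodes that do
not advertise a ``boot_mode`` capability could set the following. This is a
minimal sketch of the relevant ``ironic.conf`` section; the allowed values are
``auto``, ``bios`` and ``uefi``, with ``auto`` being the default::

    [ilo]
    # Used only when "boot_mode" is not present in properties/capabilities.
    default_boot_mode = uefi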


@ -21,7 +21,7 @@ Prerequisites
* Install `python-scciclient package <https://pypi.python.org/pypi/python-scciclient>`_::
$ pip install "python-scciclient>=0.3.0"
$ pip install "python-scciclient>=0.4.0"
Drivers
=======


@ -44,7 +44,7 @@ OneView appliance.
The Mitaka version of the ironic OneView drivers only supported what we call
**pre-allocation** of nodes, meaning that resources in OneView are allocated
prior to the node being made available in ironic. This model is deprecated and
will be supported until OpenStack's `P` release. From the Newton release on,
will be supported until OpenStack's Pike release. From the Newton release on,
the OneView drivers enable a new feature called **dynamic allocation** of nodes
[6]_. In this model, the driver allocates resources in OneView only at boot
time, allowing idle resources in ironic to be used by OneView users, enabling
@ -224,22 +224,22 @@ etc. In this case, to be enrolled, the node must have the following parameters:
* In ``driver_info``
- ``server_hardware_uri``: URI of the Server Hardware on OneView.
- ``server_hardware_uri``: URI of the ``Server Hardware`` on OneView.
- ``dynamic_allocation``: Boolean value to enable or disable (True/False)
``dynamic allocation`` for the given node. If this parameter is not set,
the driver will assume the ``pre-allocation`` model to maintain
compatibility on ironic upgrade. Support for this key will be dropped
in P, where only dynamic allocation will be used.
in the Pike release, where only dynamic allocation will be used.
* In ``properties/capabilities``
- ``server_hardware_type_uri``: URI of the Server Hardware Type of the
Server Hardware.
- ``server_profile_template_uri``: URI of the Server Profile Template used
to create the Server Profile of the Server Hardware.
- ``enclosure_group_uri`` (optional): URI of the Enclosure Group of the
Server Hardware.
- ``server_hardware_type_uri``: URI of the ``Server Hardware Type`` of the
``Server Hardware``.
- ``server_profile_template_uri``: URI of the ``Server Profile Template`` used
to create the ``Server Profile`` of the ``Server Hardware``.
- ``enclosure_group_uri`` (optional): URI of the ``Enclosure Group`` of the
``Server Hardware``.
To enroll a node with any of the OneView drivers, do::
@ -256,30 +256,32 @@ OneView node, do::
$ ironic node-update $NODE_UUID add \
properties/capabilities=server_hardware_type_uri:$SHT_URI,enclosure_group_uri:$EG_URI,server_profile_template_uri:$SPT_URI
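Similarly, the ``driver_info`` parameters described earlier can be set with
``node-update``. A minimal sketch, where ``$SH_URI`` is a placeholder for the
URI of the ``Server Hardware`` in OneView::

    $ ironic node-update $NODE_UUID add \
        driver_info/server_hardware_uri=$SH_URI \
        driver_info/dynamic_allocation=True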
In order to deploy, ironic will create and apply, at boot time, a Server
Profile based on the Server Profile Template specified on the node to the
Server Hardware it represents on OneView. The URI of such Server Profile will
be stored in ``driver_info.applied_server_profile_uri`` field while the Server
is allocated to ironic.
In order to deploy, ironic will create and apply, at boot time, a ``Server
Profile`` based on the ``Server Profile Template`` specified on the node to the
``Server Hardware`` it represents on OneView. The URI of such ``Server Profile``
will be stored in ``driver_info.applied_server_profile_uri`` field while the
Server is allocated to ironic.
The Server Profile Templates and, therefore, the Server Profiles derived from
them MUST comply with the following requirements:
The ``Server Profile Templates`` and, therefore, the ``Server Profiles`` derived
from them MUST comply with the following requirements:
* The option `MAC Address` in the `Advanced` section of
``Server Profile``/``Server Profile Template`` should be set to `Physical`
option;
* The option `MAC Address` in the `Advanced` section of Server Profile/Server
Profile Template should be set to `Physical` option;
* Their first `Connection` interface should be:
* Connected to ironic's provisioning network and;
* The `Boot` option should be set to primary.
* Connected to ironic's provisioning network and;
* The `Boot` option should be set to primary.
Node ports should be created considering the **MAC address of the first
Interface** of the given Server Hardware.
Interface** of the given ``Server Hardware``.
.. note::
Old versions of ironic using ``pre-allocation`` model (before Newton
release) and nodes with `dynamic_allocation` flag disabled shall have their
Server Profiles applied during node enrollment and can have their ports
created using the `Virtual` MAC addresses provided on Server Profile
``Server Profiles`` applied during node enrollment and can have their ports
created using the `Virtual` MAC addresses provided on ``Server Profile``
application.
To tell ironic which NIC should be connected to the provisioning network, do::
@ -292,6 +294,94 @@ For more information on the definitions of ``Server Hardware``, ``Server
Profile``, ``Server Profile Template`` and other OneView entities, refer to
[1]_ or browse Help in your OneView appliance menu.
Migrating from pre-allocation to dynamic allocation
===================================================
The migration of a node from an ironic deployment using the ``pre-allocation``
model to the new ``dynamic allocation`` model can be done using the
``ironic-oneview-cli`` migration facilities (further details in [8]_).
However, the same results can be achieved using the ironic CLI as explained
below.
Checking if a node can be migrated
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is recommended to migrate nodes which are in a stable `provision state`. That
means the conductor is not performing an operation on the node that could
interfere with the migration. The possible stable `provision_state`
values [9]_ are: `enroll`, `manageable`, `available`, `active`, `error`,
`clean failed` and `inspect failed`.
Dynamic allocation mode changes the way a ``Server Profile`` is associated with
a node. In ``pre-allocation`` mode, when a node is registered in ironic, there
must be a ``Server Profile`` applied to the ``Server Hardware`` represented by
the given node, which means that, from the OneView point of view, the hardware
is in use. In ``dynamic allocation`` mode, a ``Server Hardware`` is associated
only while the node is in use by the Compute service or by OneView itself. As a
result, there are different steps to perform if the node has an instance
provisioned, in other words, when the `provision_state` is set to `active`.
.. note::
Verify that the node has not already been migrated by checking whether the
`dynamic_allocation` field is set to ``True`` in the `driver_info` namespace::
$ ironic node-show <node-uuid> --fields driver_info
Migrating nodes in `active` state
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
List the nodes that are in the `active` state::
$ ironic node-list --provision-state active --fields uuid driver_info
Execute the following steps for each node:
1. Remove the node's ``Server Profile`` from the ``Server Hardware`` in OneView.
To identify which ``Server Profile`` is associated with a node, check the
``server_hardware_uri`` property in the ``driver_info`` namespace::
$ ironic node-show <node-uuid> --fields driver_info
2. Then, using the ``server_hardware_uri``, log into OneView and remove the
``Server Profile``.
3. Finally, set the `dynamic_allocation` flag in the ``driver_info`` namespace
to ``True`` to finish the migration of the node::
$ ironic node-update <node-uuid> add driver_info/dynamic_allocation=True
Other cases for migration
^^^^^^^^^^^^^^^^^^^^^^^^^
Remember that these steps are valid for nodes in the following states: `enroll`,
`manageable`, `available`, `error`, `clean failed` and `inspect failed`. List
the nodes in a given state, then execute the following migration steps for
each node (a consolidated sketch of the sequence is shown after the steps):
1. Place the node in maintenance mode to prevent ironic from working on the node
during the migration::
$ ironic node-set-maintenance --reason "Migrating node to dynamic allocation" <node_uuid> true
.. note::
It is recommended to verify that the node's state has not changed, as there is
no way to lock the node between these commands.
2. Identify which ``Server Profile`` is associated with the node by checking the
``server_hardware_uri`` property in the ``driver_info`` namespace. Then, using
the ``server_hardware_uri``, log into OneView and remove the ``Server Profile``.
3. Set the `dynamic_allocation` flag in the ``driver_info`` namespace
to ``True``::
$ ironic node-update $NODE_UUID add driver_info/dynamic_allocation=True
4. Finally, to put the node back into the resource pool, remove the node
from maintenance mode::
$ ironic node-set-maintenance <node_uuid> false
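Putting the whole sequence together for a single node, the migration looks
roughly as follows (a sketch only; ``$NODE_UUID`` is a placeholder and the
``Server Profile`` removal itself is a manual step performed in OneView)::

    $ ironic node-set-maintenance --reason "Migrating node to dynamic allocation" $NODE_UUID true
    # remove the node's Server Profile through the OneView interface,
    # using the server_hardware_uri shown by "ironic node-show"
    $ ironic node-update $NODE_UUID add driver_info/dynamic_allocation=True
    $ ironic node-set-maintenance $NODE_UUID false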
3rd Party Tools
===============
@ -330,3 +420,4 @@ References
.. [6] Dynamic Allocation in OneView drivers - http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/oneview-drivers-dynamic-allocation.html
.. [7] ironic-oneviewd - https://pypi.python.org/pypi/ironic-oneviewd/
.. [8] ironic-oneview-cli - https://pypi.python.org/pypi/ironic-oneview-cli/
.. [9] Ironic's State Machine - http://docs.openstack.org/developer/ironic/dev/states.html#states


@ -6,72 +6,199 @@ Introduction
============
Ironic is an OpenStack project which provisions bare metal (as opposed to
virtual) machines by leveraging common technologies such as PXE boot and IPMI
to cover a wide range of hardware, while supporting pluggable drivers to allow
vendor-specific functionality to be added.
virtual) machines. It may be used independently or as part of an OpenStack
Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova),
Network (neutron), Image (glance), and Object (swift) services.
If one thinks of traditional hypervisor functionality (eg, creating a VM,
enumerating virtual devices, managing the power state, loading an OS onto the
VM, and so on), then Ironic may be thought of as a *hypervisor API* gluing
together multiple drivers, each of which implement some portion of that
functionality with respect to physical hardware.
The Bare Metal service manages hardware through both common (eg. PXE and IPMI)
and vendor-specific remote management protocols. It provides the cloud operator
with a unified interface to a heterogeneous fleet of servers while also
providing the Compute service with an interface that allows physical servers to
be managed as though they were virtual machines.
The documentation provided here is continually kept up-to-date based
on the latest code, and may not represent the state of the project at any
specific prior release.
`An introduction to ironic's conceptual architecture <deploy/user-guide.html>`_
is available for those new to the project.
For information on any current or prior version of Ironic, see `the release
notes`_.
Site Notes
----------
.. _the release notes: http://docs.openstack.org/releasenotes/ironic/
This site is primarily intended to provide documentation for developers
interested in contributing to or working with ironic. It *also* contains
references and guides for administrators which are not yet hosted elsewhere on
the OpenStack documentation sites.
Administrator's Guide
=====================
This documentation is continually updated and may not represent the state of
the project at any specific prior release. To access documentation for a
previous release of ironic, append the OpenStack release name to the URL, for
example:
http://docs.openstack.org/developer/ironic/mitaka/
Bare Metal API References
=========================
Ironic's REST API has changed since its first release, and continues to evolve
to meet the changing needs of the community. Here we provide a conceptual
guide as well as more detailed reference documentation.
.. toctree::
:maxdepth: 1
API Concept Guide <dev/webapi>
API Reference (latest) <http://developer.openstack.org/api-ref/baremetal/>
API Version History <dev/webapi-version-history>
Developer's Guide
=================
Getting Started
---------------
If you are new to ironic, this section contains information that should help
you get started as a developer working on the project or contributing to the
project.
.. toctree::
:maxdepth: 1
Developer Contribution Guide <dev/code-contribution-guide>
Setting Up Your Development Environment <dev/dev-quickstart>
Frequently Asked Questions <dev/faq>
The following pages describe the architecture of the Bare Metal service
and may be helpful to anyone working on or with the service, but are written
primarily for developers.
.. toctree::
:maxdepth: 1
Ironic System Architecture <dev/architecture>
Provisioning State Machine <dev/states>
Notifications <dev/notifications>
Writing Drivers
---------------
Ironic's community includes many hardware vendors who contribute drivers that
enable more advanced functionality when Ironic is used in conjunction with that
hardware. To do this, the Ironic developer community is committed to
standardizing on a `Python Driver API <api/ironic.drivers.base.html>`_ that
meets the common needs of all hardware vendors, and evolving this API without
breaking backwards compatibility. However, it is sometimes necessary for driver
authors to implement functionality - and expose it through the REST API - that
can not be done through any existing API.
To facilitate that, we also provide the means for API calls to be "passed
through" ironic and directly to the driver. Some guidelines on how to implement
this are provided below. Driver authors are strongly encouraged to talk with
the developer community about any implementation using this functionality.
.. toctree::
:maxdepth: 1
Driver Overview <dev/drivers>
Driver Base Class Definition <api/ironic.drivers.base.html>
Writing "vendor_passthru" methods <dev/vendor-passthru>
Testing Network Integration
---------------------------
In order to test the integration between the Bare Metal and Networking
services, support has been added to `devstack <http://launchpad.net/devstack>`_
to mimic an external physical switch. Here we include a recommended
configuration for devstack to bring up this environment.
.. toctree::
:maxdepth: 1
Configuring Devstack for multitenant network testing <dev/ironic-multitenant-networking>
Administrator's Guide
=====================
Installation & Operations
-------------------------
If you are a system administrator running Ironic, this section contains
information that should help you understand how to deploy, operate, and upgrade
the services.
.. toctree::
:maxdepth: 1
deploy/user-guide
Installation Guide <deploy/install-guide>
Upgrade Guide <deploy/upgrade-guide>
Configuration Reference (Mitaka) <http://docs.openstack.org/mitaka/config-reference/bare-metal.html>
drivers/ipa
deploy/drivers
deploy/cleaning
deploy/raid
deploy/inspection
deploy/security
deploy/adoption
deploy/api-audit-support
deploy/troubleshooting
Release Notes <http://docs.openstack.org/releasenotes/ironic/>
Troubleshooting FAQ <deploy/troubleshooting>
Configuration
-------------
There are many aspects of the Bare Metal service which are environment
specific. The following pages will be helpful in configuring specific aspects
of ironic that may or may not be suitable to every situation.
.. toctree::
:maxdepth: 1
Guide to Node Cleaning <deploy/cleaning>
Configuring Node Inspection <deploy/inspection>
Configuring RAID during deployment <deploy/raid>
Security considerations for your Bare Metal installation <deploy/security>
Adopting Nodes in an ACTIVE state <deploy/adoption>
Auditing API Traffic <deploy/api-audit-support>
Configuring for Multi-tenant Networking <deploy/multitenancy>
Configuring node web or serial console <deploy/console>
Emitting software metrics <deploy/metrics>
A reference guide listing all available configuration options is published for
every major release. Additionally, a `sample configuration file`_ is included
within the project and kept continually up to date.
.. toctree::
:maxdepth: 1
Configuration Reference (Mitaka) <http://docs.openstack.org/mitaka/config-reference/bare-metal.html>
.. _sample configuration file: https://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/ironic.conf.sample
Dashboard Integration
---------------------
A plugin for the OpenStack Dashboard (horizon) service is under development.
Documentation for that can be found within the ironic-ui project.
.. toctree::
:maxdepth: 1
Dashboard (horizon) plugin <http://docs.openstack.org/developer/ironic-ui/>
Commands and API References
===========================
Driver References
=================
Every driver author is expected to document the use and configuration of their
driver. These pages are linked below.
.. toctree::
:maxdepth: 2
Driver Documentation pages <deploy/drivers>
Further Considerations for the Agent Drivers <drivers/ipa>
Command References
==================
Here are references for commands not elsewhere documented.
.. toctree::
:maxdepth: 1
cmds/ironic-dbsync
webapi/v1
dev/drivers
Developer's Guide
=================
.. toctree::
:maxdepth: 1
dev/architecture
dev/states
dev/contributing
dev/code-contribution-guide
dev/dev-quickstart
dev/vendor-passthru
dev/ironic-multitenant-networking
dev/faq
Indices and tables
==================


@ -1,277 +1,5 @@
====================
RESTful Web API (v1)
====================
========
REST API
========
API Versioning
==============
Starting with the Kilo release ironic supports versioning of API. Version is
defined as a string of 2 integers separated by a dot: **X.Y**. Here ``X`` is a
major version, always equal to ``1`` at the moment of writing, ``Y`` is
a minor version. Server minor version is increased every time the API behavior
is changed (note `Exceptions from Versioning`_). `Nova versioning
documentation`_ has a nice guide on when to bump an API version.
Server indicates its minimum and maximum supported API versions in the
``X-OpenStack-Ironic-API-Minimum-Version`` and
``X-OpenStack-Ironic-API-Maximum-Version`` headers respectively, returned
with every response. Client may request a specific API version by providing
``X-OpenStack-Ironic-API-Version`` header with request.
If no version is requested by the client, minimum supported version - **1.1**,
is assumed. The client is only exposed to those API features that are supported
in the requested (explicitly or implicitly) API version (again note `Exceptions
from Versioning`_, they are not covered by this rule).
We recommend clients requiring stable API to always request a specific version
of API. However, a special value ``latest`` can be requested instead, which
always requests the newest supported API version.
.. _Nova versioning documentation: http://docs.openstack.org/developer/nova/api_microversion_dev.html#when-do-i-need-a-new-microversion
API Versions History
--------------------
**1.22**
Added endpoints for deployment ramdisks.
**1.21**
Add node ``resource_class`` field.
**1.20**
Add node ``network_interface`` field.
**1.19**
Add ``local_link_connection`` and ``pxe_enabled`` fields to the port object.
**1.18**
Add ``internal_info`` readonly field to the port object, that will be used
by ironic to store internal port-related information.
**1.17**
Addition of provision_state verb ``adopt`` which allows an operator
to move a node from ``manageable`` state to ``active`` state without
performing a deployment operation on the node. This is intended for
nodes that have already been deployed by external means.
**1.16**
Add ability to filter nodes by driver.
**1.15**
Add ability to do manual cleaning when a node is in the manageable
provision state via PUT v1/nodes/<identifier>/states/provision,
target:clean, clean_steps:[...].
**1.14**
Make the following endpoints discoverable via Ironic API:
* '/v1/nodes/<UUID or logical name>/states'
* '/v1/drivers/<driver name>/properties'
**1.13**
Add a new verb ``abort`` to the API used to abort nodes in
``CLEANWAIT`` state.
**1.12**
This API version adds the following abilities:
* Get/set ``node.target_raid_config`` and to get
``node.raid_config``.
* Retrieve the logical disk properties for the driver.
**1.11** (breaking change)
Newly registered nodes begin in the ``enroll`` provision state by default,
instead of ``available``. To get them to the ``available`` state,
the ``manage`` action must first be run to verify basic hardware control.
On success the node moves to ``manageable`` provision state. Then the
``provide`` action must be run. Automated cleaning of the node is done and
the node is made ``available``.
**1.10**
Logical node names support all RFC 3986 unreserved characters.
Previously only valid fully qualified domain names could be used.
**1.9**
Add ability to filter nodes by provision state.
**1.8**
Add ability to return a subset of resource fields.
**1.7**
Add node ``clean_step`` field.
**1.6**
Add :ref:`inspection` process: introduce ``inspecting`` and ``inspectfail``
provision states, and ``inspect`` action that can be used when a node is in
``manageable`` provision state.
**1.5**
Add logical node names that can be used to address a node in addition to
the node UUID. Name is expected to be a valid `fully qualified domain
name`_ in this version of API.
**1.4**
Add ``manageable`` state and ``manage`` transition, which can be used to
move a node to ``manageable`` state from ``available``.
The node cannot be deployed in ``manageable`` state.
This change is mostly a preparation for future inspection work
and introduction of ``enroll`` provision state.
**1.3**
Add node ``driver_internal_info`` field.
**1.2** (breaking change)
Renamed NOSTATE (``None`` in Python, ``null`` in JSON) node state to
``available``. This is needed to reduce confusion around ``None`` state,
especially when future additions to the state machine land.
**1.1**
This was the initial version when API versioning was introduced.
Includes the following changes from Kilo release cycle:
* Add node ``maintenance_reason`` field and an API endpoint to
set/unset the node maintenance mode.
* Add sync and async support for vendor passthru methods.
* Vendor passthru endpoints support different HTTP methods, not only
``POST``.
* Make vendor methods discoverable via the Ironic API.
* Add logic to store the config drive passed by Nova.
This has been the minimum supported version since versioning was
introduced.
**1.0**
This version denotes Juno API and was never explicitly supported, as API
versioning was not implemented in Juno, and **1.1** became the minimum
supported version in Kilo.
.. _fully qualified domain name: https://en.wikipedia.org/wiki/Fully_qualified_domain_name
Exceptions from Versioning
--------------------------
The following API-visible things are not covered by the API versioning:
* Current node state is always exposed as it is, even if not supported by the
requested API version, with exception of ``available`` state, which is
returned in version 1.1 as ``None`` (in Python) or ``null`` (in JSON).
* Data within free-form JSON attributes: ``properties``, ``driver_info``,
``instance_info``, ``driver_internal_info`` fields on a node object;
``extra`` fields on all objects.
* Addition of new drivers.
* All vendor passthru methods.
Chassis
=======
.. rest-controller:: ironic.api.controllers.v1.chassis:ChassisController
:webprefix: /v1/chassis
.. autotype:: ironic.api.controllers.v1.chassis.ChassisCollection
:members:
.. autotype:: ironic.api.controllers.v1.chassis.Chassis
:members:
Drivers
=======
.. rest-controller:: ironic.api.controllers.v1.driver:DriversController
:webprefix: /v1/drivers
.. rest-controller:: ironic.api.controllers.v1.driver:DriverRaidController
:webprefix: /v1/drivers/(driver_name)/raid
.. rest-controller:: ironic.api.controllers.v1.driver:DriverPassthruController
:webprefix: /v1/drivers/(driver_name)/vendor_passthru
.. autotype:: ironic.api.controllers.v1.driver.DriverList
:members:
.. autotype:: ironic.api.controllers.v1.driver.Driver
:members:
Links
=====
.. autotype:: ironic.api.controllers.link.Link
:members:
Nodes
=====
.. rest-controller:: ironic.api.controllers.v1.node:NodesController
:webprefix: /v1/nodes
.. rest-controller:: ironic.api.controllers.v1.node:NodeMaintenanceController
:webprefix: /v1/nodes/(node_ident)/maintenance
.. rest-controller:: ironic.api.controllers.v1.node:BootDeviceController
:webprefix: /v1/nodes/(node_ident)/management/boot_device
.. rest-controller:: ironic.api.controllers.v1.node:NodeStatesController
:webprefix: /v1/nodes/(node_ident)/states
.. rest-controller:: ironic.api.controllers.v1.node:NodeConsoleController
:webprefix: /v1/nodes/(node_ident)/states/console
.. rest-controller:: ironic.api.controllers.v1.node:NodeVendorPassthruController
:webprefix: /v1/nodes/(node_ident)/vendor_passthru
.. autotype:: ironic.api.controllers.v1.node.ConsoleInfo
:members:
.. autotype:: ironic.api.controllers.v1.node.Node
:members:
.. autotype:: ironic.api.controllers.v1.node.NodeCollection
:members:
.. autotype:: ironic.api.controllers.v1.node.NodeStates
:members:
Ports
=====
.. rest-controller:: ironic.api.controllers.v1.port:PortsController
:webprefix: /v1/ports
.. autotype:: ironic.api.controllers.v1.port.PortCollection
:members:
.. autotype:: ironic.api.controllers.v1.port.Port
:members:
The API documentation reference `has been moved here <../dev/webapi.html>`_.


@ -4,15 +4,15 @@
# python projects they should package as optional dependencies for Ironic.
# These are available on pypi
proliantutils>=2.1.7
proliantutils>=2.1.11
pyghmi>=0.8.0
pysnmp
python-ironic-inspector-client
python-ironic-inspector-client>=1.5.0
python-oneviewclient<3.0.0,>=2.0.2
python-scciclient>=0.3.0
python-scciclient>=0.4.0
python-seamicroclient>=0.4.0
UcsSdk==0.8.2.2
python-dracclient>=0.0.5
python-dracclient>=0.1.0
# The amt driver imports a python module called "pywsman", but this does not
# exist on pypi.


@ -105,6 +105,12 @@
# (string value)
#my_ip = 127.0.0.1
# Specifies the minimum level for which to send notifications.
# If not set, no notifications will be sent. The default is
# for this option to be unset. (string value)
# Allowed values: debug, info, warning, error, critical
#notification_level = <None>
# Directory where the ironic python module is installed.
# (string value)
#pybasedir = /usr/lib/python/site-packages/ironic/ironic
@ -247,28 +253,42 @@
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# The pool size limit for connections expiration policy
# (integer value)
#conn_pool_min_size = 2
# The time-to-live in sec of idle connections in the pool
# (integer value)
#conn_pool_ttl = 1200
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default
@ -276,42 +296,65 @@
# of 0 specifies no linger period. Pending messages shall be
# discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll
# raises timeout exception when timeout expired. (integer
# value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about
# existing target ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about
# existing target. (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses
# proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for
# serializing/deserializing outgoing/incoming messages (string
# value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in zmq socket. True
# means not keeping a queue when server side disconnects.
# False means to keep queue and messages even if server is
# disconnected, when the server appears we send all
# accumulated messages to it. (boolean value)
#zmq_immediate = false
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
@ -838,10 +881,10 @@
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool.
# (integer value)
# Setting a value of 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = <None>
#max_pool_size = 5
# Maximum number of database connection retries during
# startup. Set to -1 to specify an infinite retry count.
@ -864,6 +907,8 @@
# Verbosity of SQL debugging information: 0=None,
# 100=Everything. (integer value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
@ -919,6 +964,13 @@
# set to 0, will not run during cleaning. (integer value)
#erase_devices_priority = <None>
# Priority to run in-band clean step that erases metadata from
# devices, via the Ironic Python Agent ramdisk. If unset, will
# use the priority set in the ramdisk (defaults to 99 for the
# GenericHardwareManager). If set to 0, will not run during
# cleaning. (integer value)
#erase_devices_metadata_priority = <None>
# During shred, overwrite all block devices N times with
# random data. This is only used if a device could not be ATA
# Secure Erased. Defaults to 1. (integer value)
@ -928,7 +980,7 @@
# Whether to write zeros to a node's block devices after
# writing random data. This will write zeros to the device
# even when deploy.shred_random_overwrite_interations is 0.
# even when deploy.shred_random_overwrite_iterations is 0.
# This option is only used if a device could not be ATA Secure
# Erased. Defaults to True. (boolean value)
#shred_final_overwrite_with_zeros = true
@ -998,6 +1050,19 @@
#iscsi_verify_attempts = 3
[drac]
#
# From ironic
#
# Interval (in seconds) between periodic RAID job status
# checks to determine whether the asynchronous RAID
# configuration was successfully finished or not. (integer
# value)
#query_raid_config_job_status_interval = 120
[glance]
#
@ -1292,6 +1357,15 @@
# CA certificate file to validate iLO. (string value)
#ca_file = <None>
# Default boot mode to be used in provisioning when
# "boot_mode" capability is not provided in the
# "properties/capabilities" of the node. The default is "auto"
# for backward compatibility. When "auto" is specified,
# default boot mode will be selected based on boot mode
# settings on the system. (string value)
# Allowed values: auto, bios, uefi
#default_boot_mode = auto
[inspector]
@ -1357,8 +1431,7 @@
#project_name = <None>
# ironic-inspector HTTP endpoint. If this is not set, the
# ironic-inspector client default (http://127.0.0.1:5050) will
# be used. (string value)
# service catalog will be used. (string value)
#service_url = <None>
# period (in seconds) to check status of nodes on inspection
@ -1442,6 +1515,8 @@
#remote_image_user_domain =
# Port to be used for iRMC operations (port value)
# Minimum value: 0
# Maximum value: 65535
# Allowed values: 443, 80
#port = 443
@ -1545,7 +1620,11 @@
# with Identity API Server. (integer value)
#http_request_max_retries = 3
# Env key for the swift cache. (string value)
# Request environment key where the Swift cache object is
# stored. When auth_token middleware is deployed with a Swift
# cache, use this option to have the middleware share a
# caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if identity server requires client certificate
@ -1714,11 +1793,11 @@
# Time in ms to wait between connection attempts. (integer
# value)
#wait_timeout = 5000
#wait_timeout = 2000
# Time in ms to wait before the transaction is killed.
# (integer value)
#check_timeout = 60000
#check_timeout = 20000
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000
@ -1998,22 +2077,8 @@
# From oslo.messaging
#
# address prefix used when sending to a specific server
# (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# address prefix used when broadcasting to all servers (string
# value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# address prefix when sending to any server in group (string
# value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Name for the AMQP container (string value)
# Name for the AMQP container. must be globally unique.
# Defaults to a generated UUID (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
@ -2073,6 +2138,122 @@
# Deprecated group/name - [amqp1]/password
#password =
# Seconds to pause before attempting to re-connect. (integer
# value)
# Minimum value: 1
#connection_retry_interval = 1
# Increase the connection_retry_interval by this many seconds
# after each unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2
# Maximum limit for connection_retry_interval +
# connection_retry_backoff (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30
# Time to pause between re-connecting an AMQP 1.0 link that
# failed due to a recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10
# The deadline for an rpc reply message delivery. Only used
# when caller does not provide a timeout expiry. (integer
# value)
# Minimum value: 5
#default_reply_timeout = 30
# The deadline for an rpc cast or call message delivery. Only
# used when caller does not provide a timeout expiry. (integer
# value)
# Minimum value: 5
#default_send_timeout = 30
# The deadline for a sent notification message delivery. Only
# used when caller does not provide a timeout expiry. (integer
# value)
# Minimum value: 5
#default_notify_timeout = 30
# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy' - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic' - use legacy addresses if the message bus does
# not support routing otherwise use routable addressing
# (string value)
#addressing_mode = dynamic
# address prefix used when sending to a specific server
# (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# address prefix used when broadcasting to all servers (string
# value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# address prefix when sending to any server in group (string
# value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Address prefix for all generated RPC addresses (string
# value)
#rpc_address_prefix = openstack.org/om/rpc
# Address prefix for all generated Notification addresses
# (string value)
#notify_address_prefix = openstack.org/om/notify
# Appended to the address prefix when sending a fanout
# message. Used by the message bus to identify fanout
# messages. (string value)
#multicast_address = multicast
# Appended to the address prefix when sending to a particular
# RPC/Notification server. Used by the message bus to identify
# messages sent to a single destination. (string value)
#unicast_address = unicast
# Appended to the address prefix when sending to a group of
# consumers. Used by the message bus to identify messages that
# should be delivered in a round-robin fashion across
# consumers. (string value)
#anycast_address = anycast
# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>
# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200
# Window size for incoming RPC Request messages (integer
# value)
# Minimum value: 1
#rpc_server_credit = 100
# Window size for incoming Notification messages (integer
# value)
# Minimum value: 1
#notify_server_credit = 100
[oslo_messaging_notifications]
@ -2138,11 +2319,11 @@
#kombu_reconnect_delay = 1.0
# EXPERIMENTAL: Possible values are: gzip, bz2. If not set
# compression will not be used. This option may notbe
# compression will not be used. This option may not be
# available in future versions. (string value)
#kombu_compression = <None>
# How long to wait a missing client beforce abandoning to send
# How long to wait a missing client before abandoning to send
# it its replies. This value should not be longer than
# rpc_response_timeout. (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
@ -2223,9 +2404,11 @@
# 30 seconds. (integer value)
#rabbit_interval_max = 30
# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# DEPRECATED: Maximum number of RabbitMQ connection retries.
# Default is 0 (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0
# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you
@ -2375,6 +2558,107 @@
#rpc_retry_delay = 0.25
[oslo_messaging_zmq]
#
# From oslo.messaging
#
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default
# value of -1 specifies an infinite linger period. The value
# of 0 specifies no linger period. Pending messages shall be
# discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll
# raises timeout exception when timeout expired. (integer
# value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about
# existing target ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about
# existing target. (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses
# proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for
# serializing/deserializing outgoing/incoming messages (string
# value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in zmq socket. True
# means not keeping a queue when server side disconnects.
# False means to keep queue and messages even if server is
# disconnected, when the server appears we send all
# accumulated messages to it. (boolean value)
#zmq_immediate = false
[oslo_policy]
#
@ -2431,17 +2715,13 @@
# (integer value)
#image_cache_ttl = 10080
# The disk devices to scan while doing the deploy. (string
# value)
#disk_devices = cciss/c0d0,sda,hda,vda
# On ironic-conductor node, template file for PXE
# configuration. (string value)
#pxe_config_template = $pybasedir/drivers/modules/pxe_config.template
# On ironic-conductor node, template file for PXE
# configuration for UEFI boot loader. (string value)
#uefi_pxe_config_template = $pybasedir/drivers/modules/elilo_efi_pxe_config.template
#uefi_pxe_config_template = $pybasedir/drivers/modules/pxe_grub_config.template
# IP address of ironic-conductor node's TFTP server. (string
# value)
@ -2460,7 +2740,7 @@
#pxe_bootfile_name = pxelinux.0
# Bootfile DHCP parameter for UEFI boot mode. (string value)
#uefi_pxe_bootfile_name = elilo.efi
#uefi_pxe_bootfile_name = bootx64.efi
# Enable iPXE boot. (boolean value)
#ipxe_enabled = false
@ -2478,6 +2758,13 @@
# Allowed values: 4, 6
#ip_version = 4
# Download deploy images directly from swift using temporary
# URLs. If set to false (default), images are downloaded to
# the ironic-conductor node and served over its local HTTP
# server. Applicable only when 'ipxe_enabled' option is set to
# true. (boolean value)
#ipxe_use_swift = false
[seamicro]


@ -2,8 +2,10 @@
"admin_api": "role:admin or role:administrator"
# Internal flag for public API routes
"public_api": "is_public_api:True"
# Show or mask passwords in API responses
# Show or mask secrets within driver_info in API responses
"show_password": "!"
# Show or mask secrets within instance_info in API responses
"show_instance_secrets": "!"
# May be used to restrict access to specific tenants
"is_member": "tenant:demo or tenant:baremetal"
# Read-only API access
@ -27,7 +29,7 @@
# Set maintenance flag, taking a Node out of service
"baremetal:node:set_maintenance": "rule:is_admin"
# Clear maintenance flag, placing the Node into service again
"baremetal:node:clear_maintenance": "role:is_admin"
"baremetal:node:clear_maintenance": "rule:is_admin"
# Change Node boot device
"baremetal:node:set_boot_device": "rule:is_admin"
# Change Node power status
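These policies use the same rule syntax as the entries above. For example, an
operator who wants ``driver_info`` secrets to be visible to administrators
could, as a hypothetical local override rather than a shipped default, change
the first entry to::

    "show_password": "rule:is_admin"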


@ -14,6 +14,8 @@ blockdev: CommandFilter, blockdev, root
hexdump: CommandFilter, hexdump, root
qemu-img: CommandFilter, qemu-img, root
wipefs: CommandFilter, wipefs, root
sgdisk: CommandFilter, sgdisk, root
partprobe: CommandFilter, partprobe, root
# ironic_lib/utils.py
mkswap: CommandFilter, mkswap, root


@ -0,0 +1,6 @@
Prerequisites
-------------
Before you install and configure the Bare Metal service,
you must follow the `install and configure the prerequisites <http://docs.openstack.org/developer/ironic/deploy/install-guide.html#install-and-configure-prerequisites>`_
section of the legacy installation guide.


@ -0,0 +1,301 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
# import sys
import openstackdocstheme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
# TODO(ajaeger): enable PDF building, for example add 'rst2pdf.pdfbuilder'
# extensions =
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Installation Guide for Bare Metal Service'
bug_tag = u'install-guide'
copyright = u'2016, OpenStack contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# A few variables have to be set for the log-a-bug feature.
# giturl: The location of conf.py on Git. Must be set manually.
# gitsha: The SHA checksum of the bug description. Automatically extracted
# from git log.
# bug_tag: Tag for categorizing the bug. Must be set manually.
# These variables are passed to the logabug code via html_context.
giturl = u'http://git.openstack.org/cgit/openstack/ironic/tree/install-guide/source' # noqa
git_cmd = "/usr/bin/git log | head -n1 | cut -f2 -d' '"
gitsha = os.popen(git_cmd).read().strip('\n')
html_context = {"gitsha": gitsha, "bug_tag": bug_tag,
"giturl": giturl,
"bug_project": "ironic"}
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["common_prerequisites.rst"]
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [openstackdocstheme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = []
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# So that we can enable "log-a-bug" links from each output HTML page, this
# variable must be set to a format that includes year, month, day, hours and
# minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'install-guide'
# If true, publish source files
html_copy_source = False
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'InstallGuide.tex', u'Install Guide',
u'OpenStack contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'installguide', u'Install Guide',
[u'OpenStack contributors'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'InstallGuide', u'Install Guide',
u'OpenStack contributors', 'InstallGuide',
'This guide shows OpenStack end users how to install '
'an OpenStack cloud.', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# -- Options for PDF output --------------------------------------------------
pdf_documents = [
('index', u'InstallGuide', u'Install Guide',
u'OpenStack contributors')
]


@ -0,0 +1,9 @@
===========================
Bare Metal service overview
===========================
The Bare Metal service is a collection of components that provides support to manage and provision physical machines.
Please read the `Service overview`_ section of the legacy installation guide.
.. _Service overview: http://docs.openstack.org/developer/ironic/deploy/install-guide.html#service-overview


@ -0,0 +1,20 @@
==================
Bare Metal service
==================
.. toctree::
:maxdepth: 2
get_started.rst
install.rst
verify.rst
next-steps.rst
The Bare Metal service is a collection of components that provides support for
managing and provisioning physical machines.
This new installation guide is still a work in progress. For the time being,
please read the legacy `Ironic Installation Guide <http://docs.openstack.org/developer/ironic/deploy/install-guide.html>`_.
This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial <http://docs.openstack.org/#install-guides>`_.


@ -0,0 +1,15 @@
.. _install-obs:
Install and configure for openSUSE and SUSE Linux Enterprise
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Bare Metal service
for openSUSE Leap 42.1 and SUSE Linux Enterprise Server 12 SP1.
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
Please follow the `Install the Bare Metal service <http://docs.openstack.org/developer/ironic/deploy/install-guide.html#install-the-bare-metal-service>`_ section of the legacy installation guide.


@ -0,0 +1,15 @@
.. _install-rdo:
Install and configure for Red Hat Enterprise Linux and CentOS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Bare Metal service
for Red Hat Enterprise Linux 7 and CentOS 7.
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
Please follow the `Install the Bare Metal service <http://docs.openstack.org/developer/ironic/deploy/install-guide.html#install-the-bare-metal-service>`_ section of the legacy installation guide.


@ -0,0 +1,14 @@
.. _install-ubuntu:
Install and configure for Ubuntu
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Bare Metal
service for Ubuntu 14.04 (LTS).
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
Please follow the `Install the Bare Metal service <http://docs.openstack.org/developer/ironic/deploy/install-guide.html#install-the-bare-metal-service>`_ section of the legacy installation guide.


@ -0,0 +1,16 @@
.. _install:
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the
Bare Metal service, code-named ironic.
Note that installation and configuration vary by distribution.
.. toctree::
:maxdepth: 2
install-obs.rst
install-rdo.rst
install-ubuntu.rst


@ -0,0 +1,6 @@
.. _next-steps:
Next steps
~~~~~~~~~~
Your OpenStack environment now includes the Bare Metal service.


@ -0,0 +1,9 @@
.. _verify:
Verify operation
~~~~~~~~~~~~~~~~
To verify the operation of the Bare Metal service, please see the
`Troubleshooting`_ section of the legacy installation guide.
.. _`Troubleshooting`: http://docs.openstack.org/developer/ironic/deploy/install-guide.html#troubleshooting
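
As a rough complement to those steps, the sketch below uses python-ironicclient to authenticate and list the drivers registered by running conductors; if it succeeds, ironic-api and at least one ironic-conductor are reachable. The credential values are placeholders, and the keyword arguments accepted by get_client vary between client releases.

from ironicclient import client

# All credential values below are placeholders for this sketch.
ironic = client.get_client(
    1,  # Bare Metal API major version
    os_auth_url='http://controller:5000/v3',
    os_username='admin',
    os_password='ADMIN_PASS',
    os_project_name='admin',
    os_user_domain_name='Default',
    os_project_domain_name='Default')

# Succeeds only if ironic-api is reachable and conductors have registered.
for driver in ironic.driver.list():
    print('%s: %s' % (driver.name, driver.hosts))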


@ -16,13 +16,12 @@
# under the License.
import keystonemiddleware.audit as audit_middleware
from keystonemiddleware.audit import PycadfAuditApiConfigError
from oslo_config import cfg
import oslo_middleware.cors as cors_middleware
import pecan
from ironic.api import config
from ironic.api.controllers.base import Version
from ironic.api.controllers import base
from ironic.api import hooks
from ironic.api import middleware
from ironic.api.middleware import auth_token
@ -67,7 +66,8 @@ def setup_app(pecan_config=None, extra_hooks=None):
audit_map_file=CONF.audit.audit_map_file,
ignore_req_list=CONF.audit.ignore_req_list
)
except (EnvironmentError, OSError, PycadfAuditApiConfigError) as e:
except (EnvironmentError, OSError,
audit_middleware.PycadfAuditApiConfigError) as e:
raise exception.InputFileError(
file_name=CONF.audit.audit_map_file,
reason=e
@ -82,9 +82,11 @@ def setup_app(pecan_config=None, extra_hooks=None):
# included in all CORS responses.
app = cors_middleware.CORS(app, CONF)
app.set_latent(
allow_headers=[Version.max_string, Version.min_string, Version.string],
allow_headers=[base.Version.max_string, base.Version.min_string,
base.Version.string],
allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'],
expose_headers=[Version.max_string, Version.min_string, Version.string]
expose_headers=[base.Version.max_string, base.Version.min_string,
base.Version.string]
)
return app
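
The first hunk above only changes how PycadfAuditApiConfigError is referenced; the pattern itself is unchanged. As a standalone sketch of that pattern, the snippet below wraps a WSGI application with the keystonemiddleware audit middleware and converts a bad audit map file into a plain error. The option names mirror the diff; everything else (the wrapper function, the RuntimeError) is illustrative only.

import keystonemiddleware.audit as audit_middleware


def add_audit_middleware(app, audit_map_file, ignore_req_list):
    # Wrap the WSGI app; a missing or malformed audit map file surfaces as a
    # configuration error instead of failing later at request time.
    try:
        return audit_middleware.AuditMiddleware(
            app,
            audit_map_file=audit_map_file,
            ignore_req_list=ignore_req_list)
    except (EnvironmentError, OSError,
            audit_middleware.PycadfAuditApiConfigError) as e:
        # ironic raises exception.InputFileError here; a plain error keeps
        # this sketch self-contained.
        raise RuntimeError('invalid audit map file %s: %s'
                           % (audit_map_file, e))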


@ -105,8 +105,11 @@ class Version(object):
"Invalid value for %s header") % Version.string)
return version
def __gt__(a, b):
return (a.major, a.minor) > (b.major, b.minor)
def __gt__(self, other):
return (self.major, self.minor) > (other.major, other.minor)
def __eq__(a, b):
return (a.major, a.minor) == (b.major, b.minor)
def __eq__(self, other):
return (self.major, self.minor) == (other.major, other.minor)
def __ne__(self, other):
return not self.__eq__(other)
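
The hunk renames the comparison arguments from (a, b) to the conventional (self, other); the ordering semantics are untouched. The self-contained sketch below (not ironic's class) shows those semantics: versions compare as (major, minor) tuples, so 1.22 is greater than 1.9.

class SimpleVersion(object):
    """Standalone stand-in for the API Version comparisons above."""

    def __init__(self, major, minor):
        self.major = major
        self.minor = minor

    def __gt__(self, other):
        return (self.major, self.minor) > (other.major, other.minor)

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __ne__(self, other):
        return not self.__eq__(other)


assert SimpleVersion(1, 22) > SimpleVersion(1, 9)    # numeric, not lexical
assert SimpleVersion(1, 10) != SimpleVersion(1, 1)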


@ -51,7 +51,7 @@ class Chassis(base.APIBase):
uuid = types.uuid
"""The UUID of the chassis"""
description = wtypes.text
description = wtypes.StringType(max_length=255)
"""The description of the chassis"""
extra = {wtypes.text: types.jsontype}


@ -17,7 +17,6 @@ import datetime
from ironic_lib import metrics_utils
import jsonschema
from oslo_config import cfg
from oslo_log import log
from oslo_utils import strutils
from oslo_utils import uuidutils
@ -40,13 +39,11 @@ from ironic.common.i18n import _
from ironic.common import policy
from ironic.common import states as ir_states
from ironic.conductor import utils as conductor_utils
import ironic.conf
from ironic import objects
CONF = cfg.CONF
CONF.import_opt('heartbeat_timeout', 'ironic.conductor.manager',
group='conductor')
CONF.import_opt('enabled_network_interfaces', 'ironic.common.driver_factory')
CONF = ironic.conf.CONF
LOG = log.getLogger(__name__)
_CLEAN_STEPS_SCHEMA = {
@ -778,8 +775,7 @@ class Node(base.APIBase):
setattr(self, 'chassis_uuid', kwargs.get('chassis_id', wtypes.Unset))
@staticmethod
def _convert_with_links(node, url, fields=None, show_password=True,
show_states_links=True):
def _convert_with_links(node, url, fields=None, show_states_links=True):
# NOTE(lucasagomes): Since we are able to return a specified set of
# fields the "uuid" can be unset, so we need to save it in another
# variable to use when building the links
@ -800,10 +796,6 @@ class Node(base.APIBase):
node_uuid + "/states",
bookmark=True)]
if not show_password and node.driver_info != wtypes.Unset:
node.driver_info = strutils.mask_dict_password(node.driver_info,
"******")
# NOTE(lucasagomes): The numeric ID should not be exposed to
# the user, it's internal only.
node.chassis_id = wtypes.Unset
@ -822,14 +814,35 @@ class Node(base.APIBase):
if fields is not None:
api_utils.check_for_invalid_fields(fields, node.as_dict())
cdict = pecan.request.context.to_dict()
# NOTE(deva): the 'show_password' policy setting name exists for legacy
# purposes and can not be changed. Changing it will cause
# upgrade problems for any operators who have customized
# the value of this field
show_driver_secrets = policy.check("show_password", cdict, cdict)
show_instance_secrets = policy.check("show_instance_secrets",
cdict, cdict)
if not show_driver_secrets and node.driver_info != wtypes.Unset:
node.driver_info = strutils.mask_dict_password(
node.driver_info, "******")
if not show_instance_secrets and node.instance_info != wtypes.Unset:
node.instance_info = strutils.mask_dict_password(
node.instance_info, "******")
# NOTE(deva): agent driver may store a swift temp_url on the
# instance_info, which shouldn't be exposed to non-admin users.
# Now that ironic supports additional policies, we need to hide
# it here, based on this policy.
# Related to bug #1613903
if node.instance_info.get('image_url'):
node.instance_info['image_url'] = "******"
update_state_in_older_versions(node)
hide_fields_in_newer_versions(node)
show_password = pecan.request.context.show_password
show_states_links = (
api_utils.allow_links_node_states_and_driver_properties())
return cls._convert_with_links(node, pecan.request.public_url,
fields=fields,
show_password=show_password,
show_states_links=show_states_links)
@classmethod
@ -1026,19 +1039,12 @@ class NodesController(rest.RestController):
"""A resource used for vendors to expose a custom functionality in
the API"""
ports = port.PortsController()
"""Expose ports as a sub-element of nodes"""
management = NodeManagementController()
"""Expose management as a sub-element of nodes"""
maintenance = NodeMaintenanceController()
"""Expose maintenance as a sub-element of nodes"""
# Set the flag to indicate that the requests to this resource are
# coming from a top-level resource
ports.from_nodes = True
from_chassis = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource Chassis"""
@ -1052,6 +1058,20 @@ class NodesController(rest.RestController):
'instance_info', 'driver_internal_info',
'clean_step', 'raid_config', 'target_raid_config']
_subcontroller_map = {
'ports': port.PortsController
}
@pecan.expose()
def _lookup(self, ident, subres, *remainder):
try:
ident = types.uuid_or_name.validate(ident)
except exception.InvalidUuidOrName as e:
pecan.abort(http_client.BAD_REQUEST, e.args[0])
subcontroller = self._subcontroller_map.get(subres)
if subcontroller:
return subcontroller(node_ident=ident), remainder
def _get_nodes_collection(self, chassis_uuid, instance_uuid, associated,
maintenance, provision_state, marker, limit,
sort_key, sort_dir, driver=None,
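
The masking logic that moves into _convert_with_links relies on oslo.utils to scrub password-like keys from both driver_info and instance_info, and additionally blanks any instance_info image_url. The snippet below is a minimal illustration of the helper itself; the sample driver_info values are made up.

from oslo_utils import strutils

# Made-up driver_info for illustration only.
driver_info = {
    'ipmi_address': '10.0.0.10',
    'ipmi_username': 'admin',
    'ipmi_password': 'secret',
}

masked = strutils.mask_dict_password(driver_info, '******')
# 'ipmi_password' comes back as '******'; the other keys are untouched.
print(masked)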


@ -215,10 +215,6 @@ class PortCollection(collection.Collection):
class PortsController(rest.RestController):
"""REST controller for Ports."""
from_nodes = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource Nodes."""
_custom_actions = {
'detail': ['GET'],
}
@ -227,12 +223,13 @@ class PortsController(rest.RestController):
advanced_net_fields = ['pxe_enabled', 'local_link_connection']
def __init__(self, node_ident=None):
super(PortsController, self).__init__()
self.parent_node_ident = node_ident
def _get_ports_collection(self, node_ident, address, marker, limit,
sort_key, sort_dir, resource_url=None,
fields=None):
if self.from_nodes and not node_ident:
raise exception.MissingParameterValue(
_("Node identifier not specified."))
limit = api_utils.validate_limit(limit)
sort_dir = api_utils.validate_sort_dir(sort_dir)
@ -247,6 +244,7 @@ class PortsController(rest.RestController):
_("The sort_key value %(key)s is an invalid field for "
"sorting") % {'key': sort_key})
node_ident = self.parent_node_ident or node_ident
if node_ident:
# FIXME(comstud): Since all we need is the node ID, we can
# make this more efficient by only querying
@ -395,7 +393,7 @@ class PortsController(rest.RestController):
cdict = pecan.request.context.to_dict()
policy.authorize('baremetal:port:get', cdict, cdict)
if self.from_nodes:
if self.parent_node_ident:
raise exception.OperationNotPermitted()
api_utils.check_allow_specify_fields(fields)
@ -414,7 +412,7 @@ class PortsController(rest.RestController):
cdict = pecan.request.context.to_dict()
policy.authorize('baremetal:port:create', cdict, cdict)
if self.from_nodes:
if self.parent_node_ident:
raise exception.OperationNotPermitted()
pdict = port.as_dict()
@ -443,7 +441,7 @@ class PortsController(rest.RestController):
cdict = pecan.request.context.to_dict()
policy.authorize('baremetal:port:update', cdict, cdict)
if self.from_nodes:
if self.parent_node_ident:
raise exception.OperationNotPermitted()
if not api_utils.allow_port_advanced_net_fields():
for field in self.advanced_net_fields:
@ -495,7 +493,7 @@ class PortsController(rest.RestController):
cdict = pecan.request.context.to_dict()
policy.authorize('baremetal:port:delete', cdict, cdict)
if self.from_nodes:
if self.parent_node_ident:
raise exception.OperationNotPermitted()
rpc_port = objects.Port.get_by_uuid(pecan.request.context,
port_uuid)
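
Taken together with the _lookup hunk in the nodes controller above, this removes the from_nodes flag: the ports sub-controller now receives the parent node identifier at construction time. The sketch below (not ironic's code) shows the underlying pecan pattern, in which _lookup returns a sub-controller instance plus the remaining path segments.

import pecan
from pecan import rest


class PortsSubController(rest.RestController):
    def __init__(self, node_ident=None):
        super(PortsSubController, self).__init__()
        # The parent identifier replaces the old from_nodes flag.
        self.parent_node_ident = node_ident

    @pecan.expose('json')
    def get_all(self):
        return {'ports_for': self.parent_node_ident}


class NodesRootController(rest.RestController):
    _subcontroller_map = {'ports': PortsSubController}

    @pecan.expose()
    def _lookup(self, ident, subres, *remainder):
        subcontroller = self._subcontroller_map.get(subres)
        if subcontroller:
            # pecan continues routing 'remainder' on the returned instance.
            return subcontroller(node_ident=ident), remainder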


@ -22,7 +22,7 @@ import pecan
from pecan import rest
import six
from six.moves import http_client
from webob.static import FileIter
from webob import static
import wsme
from ironic.api.controllers.v1 import versions
@ -203,7 +203,7 @@ def vendor_passthru(ident, method, topic, data=None, driver_passthru=False):
# If unicode, convert to bytes
return_value = return_value.encode('utf-8')
file_ = wsme.types.File(content=return_value)
pecan.response.app_iter = FileIter(file_.file)
pecan.response.app_iter = static.FileIter(file_.file)
# Since we've attached the return value to the response
# object the response body should now be empty.
return_value = None
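
This hunk is an import-style change only: the module is imported and FileIter is referenced through it. Functionally, webob's FileIter just wraps a file-like object as a WSGI iterable, as the small illustration below shows; the payload bytes are a placeholder.

import io

from webob import static

buf = io.BytesIO(b'vendor passthru payload')
app_iter = static.FileIter(buf)

print(b''.join(app_iter))   # b'vendor passthru payload'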


@ -17,8 +17,8 @@
BASE_VERSION = 1
# Here goes a short log of changes in every version.
# Refer to doc/source/webapi/v1.rst for a detailed explanation of what
# each version contains.
# Refer to doc/source/dev/webapi-version-history.rst for a detailed explanation
# of what each version contains.
#
# v1.0: corresponds to Juno API, not supported since Kilo
# v1.1: API at the point in time when versioning support was added,
@ -78,8 +78,8 @@ MINOR_21_RESOURCE_CLASS = 21
MINOR_22_LOOKUP_HEARTBEAT = 22
# When adding another version, update MINOR_MAX_VERSION and also update
# doc/source/webapi/v1.rst with a detailed explanation of what the version has
# changed.
# doc/source/dev/webapi-version-history.rst with a detailed explanation of
# what the version has changed.
MINOR_MAX_VERSION = MINOR_22_LOOKUP_HEARTBEAT
# String representations of the minor and maximum versions
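
The advertised version strings are derived from BASE_VERSION and the minor-version bounds defined in this file. The sketch below shows that derivation with the constant values from the diff; the *_VERSION_STRING names are quoted from memory and may not match the module exactly.

BASE_VERSION = 1
MINOR_1_INITIAL_VERSION = 1
MINOR_22_LOOKUP_HEARTBEAT = 22
MINOR_MAX_VERSION = MINOR_22_LOOKUP_HEARTBEAT

MIN_VERSION_STRING = '%s.%s' % (BASE_VERSION, MINOR_1_INITIAL_VERSION)
MAX_VERSION_STRING = '%s.%s' % (BASE_VERSION, MINOR_MAX_VERSION)

print('%s %s' % (MIN_VERSION_STRING, MAX_VERSION_STRING))   # 1.1 1.22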

Some files were not shown because too many files have changed in this diff.