Merge "Rework drivers page in the admin documentation"

This commit is contained in:
Zuul 2017-11-28 13:53:57 +00:00 committed by Gerrit Code Review
commit 38b8ae5b82
10 changed files with 188 additions and 158 deletions

View File

@ -1,107 +1,38 @@
.. _enabling_drivers:

===============================================
Drivers, Hardware Types and Hardware Interfaces
===============================================

Generic Interfaces
------------------

.. toctree::
   :maxdepth: 2

   interfaces/boot
   interfaces/deploy

Hardware Types
--------------

.. toctree::
   :maxdepth: 1

   drivers/cimc
   drivers/idrac
   drivers/ilo
   drivers/ipmitool
   drivers/irmc
   drivers/oneview
   drivers/redfish
   drivers/snmp
   drivers/ucs

Unsupported drivers
-------------------

The following drivers were declared unsupported in the ironic Newton release
and were removed from ironic as of the Ocata release:

- AMT driver - available as part of ironic-staging-drivers_
- iBoot driver - available as part of ironic-staging-drivers_
@ -110,4 +41,9 @@ and as of Ocata release they are removed form ironic:
- SeaMicro drivers
- MSFT OCS drivers

The SSH drivers were removed in the Pike release. Similar functionality can be
achieved either with VirtualBMC_ or using libvirt drivers from
ironic-staging-drivers_.

.. _ironic-staging-drivers: http://ironic-staging-drivers.readthedocs.io
.. _VirtualBMC: https://git.openstack.org/cgit/openstack/virtualbmc

View File

@ -1,5 +1,3 @@
===================
Ironic Python Agent
===================
@ -23,16 +21,16 @@ Starting with the Kilo release all drivers (except for fake ones) are using
IPA for deployment. There are two types of them, which can be distinguished
by prefix:

* For nodes using the :ref:`iscsi-deploy` interface, IPA exposes the root hard
  drive as an iSCSI share and calls back to the ironic conductor. The
  conductor mounts the share and copies an image there. It then signals back
  to IPA for post-installation actions like setting up a bootloader for local
  boot support.

* For nodes using the :ref:`direct-deploy` interface, the conductor prepares
  a swift temporary URL for an image. IPA then handles the whole deployment
  process: downloading an image from swift, putting it on the machine and
  doing any post-deploy actions.

Which one to choose depends on your environment. iSCSI-based drivers put a
higher load on conductors, while agent-based drivers currently require the whole

View File

@ -1,26 +1 @@
See :ref:`pxe-boot`.

View File

@ -8,7 +8,7 @@ the services.

.. toctree::
   :maxdepth: 1

   Drivers, Hardware Types and Hardware Interfaces <drivers>
   Ironic Python Agent <drivers/ipa>
   Node Hardware Inspection <inspection>
   Node Cleaning <cleaning>

View File

@ -0,0 +1,70 @@
===============
Boot interfaces
===============

The boot interface manages booting of both the deploy ramdisk and the user
instances on the bare metal node.

The `PXE boot`_ interface is generic and works with all hardware that supports
booting from the network. Alternatively, several vendors provide *virtual
media* implementations of the boot interface. They work by pushing an ISO
image to the node's `management controller`_, and do not require either PXE
or iPXE. Check your driver documentation at :doc:`../drivers` for details.

.. _pxe-boot:

PXE boot
--------

The ``pxe`` boot interface uses PXE_ or iPXE_ to deliver the target
kernel/ramdisk pair. PXE uses the relatively slow and unreliable TFTP protocol
for transfer, while iPXE uses HTTP. The downside of iPXE is that it is less
common and usually requires bootstrapping via PXE first.

The ``pxe`` boot interface works by preparing a PXE/iPXE environment for a
node on the file system, then instructing the DHCP provider (for example,
the Networking service) to boot the node from it. See
:ref:`iscsi-deploy-example` and :ref:`direct-deploy-example` for a better
understanding of the whole deployment process.

.. note::
   Both PXE and iPXE are configured differently when UEFI boot is used
   instead of conventional BIOS boot. This is particularly important for CPU
   architectures that do not have BIOS support at all.

The ``pxe`` boot interface is used by default for many hardware types,
including ``ipmi``, and for all classic drivers with names starting with
``pxe_``. Some hardware types, notably ``ilo`` and ``irmc``, have their own
specific implementations of the PXE boot interface.

Additional configuration is required for this boot interface; see
:doc:`/install/configure-pxe` for details.
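
As a quick orientation, here is a minimal sketch of the relevant
``ironic.conf`` options; the values are placeholders, and the authoritative
list of options lives in :doc:`/install/configure-pxe`:

.. code-block:: ini

   [DEFAULT]
   # Make the pxe boot interface available to enabled hardware types.
   enabled_boot_interfaces = pxe

   [pxe]
   # Placeholder address of the TFTP server used for the initial boot.
   tftp_server = 192.0.2.1
   # Directory served by the TFTP server.
   tftp_root = /tftpboot
   # Chainload iPXE so that kernel/ramdisk transfers happen over HTTP.
   ipxe_enabled = True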

Enable persistent boot device for deploy/clean operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, the PXE boot interface requests non-persistent boot for the
cleaning and deploying phases. For some drivers, a persistent change is far
more costly than a non-persistent one, so this default can bring performance
improvements.

To change this behaviour, set the flag ``force_persistent_boot_device`` to
``True`` in the node's ``driver_info``::

    $ openstack baremetal node set --driver-info force_persistent_boot_device=True <node>

.. note::
   It is recommended to check that the node's state has not changed, as there
   is no way of locking the node between these commands.

Once the flag is present, the next cleaning and deploy steps will be done
with persistent boot for that node.

.. _PXE: https://en.wikipedia.org/wiki/Preboot_Execution_Environment
.. _iPXE: https://en.wikipedia.org/wiki/IPXE
.. _management controller: https://en.wikipedia.org/wiki/Out-of-band_management

.. toctree::
   :hidden:

   ../drivers/pxe

View File

@ -0,0 +1,57 @@
=================
Deploy Interfaces
=================
A *deploy* interface plays a critical role in the provisioning process. It
orchestrates the whole deployment and defines how the image gets transferred
to the target disk.
.. _iscsi-deploy:
iSCSI deploy
============

With the ``iscsi`` deploy interface (and also ``oneview-iscsi``, specific to
the ``oneview`` hardware type), the deploy ramdisk publishes the node's hard
drive as an iSCSI_ share. The ironic-conductor then copies the image to this
share. See the :ref:`iSCSI deploy diagram <iscsi-deploy-example>` for a
detailed explanation of how this deploy interface works.

This interface is used by default, if enabled (see
:ref:`enable-hardware-interfaces`). You can specify it explicitly
when creating or updating a node::

    openstack baremetal node create --driver ipmi --deploy-interface iscsi
    openstack baremetal node set <NODE> --deploy-interface iscsi
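
For reference, a minimal sketch of enabling this interface in ``ironic.conf``
(see :ref:`enable-hardware-interfaces` for the authoritative procedure):

.. code-block:: ini

   [DEFAULT]
   enabled_hardware_types = ipmi
   # Enable both in-tree deploy interfaces; nodes can then use either.
   enabled_deploy_interfaces = iscsi,direct
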
The ``iscsi`` deploy interface is also used in all of the *classic drivers*
with names starting with ``pxe_`` (except for ``pxe_agent_cimc``)
and ``iscsi_``.
.. _iSCSI: https://en.wikipedia.org/wiki/ISCSI
.. _direct-deploy:
Direct deploy
=============

With the ``direct`` deploy interface (and also ``oneview-direct``, specific to
the ``oneview`` hardware type), the deploy ramdisk fetches the image from an
HTTP location. It can be an object storage (swift or RadosGW) temporary URL or
a user-provided HTTP URL. The deploy ramdisk then copies the image to the
target disk. See the :ref:`direct deploy diagram <direct-deploy-example>` for
a detailed explanation of how this deploy interface works.

You can specify this deploy interface when creating or updating a node::

    openstack baremetal node create --driver ipmi --deploy-interface direct
    openstack baremetal node set <NODE> --deploy-interface direct

The ``direct`` deploy interface is also used in all *classic drivers*
whose names include ``agent``.

.. note::
   For historical reasons the ``direct`` deploy interface is sometimes called
   ``agent``, and some *classic drivers* using it are called ``agent_*``.
   This is because before the Kilo release **ironic-python-agent** only
   supported this deploy interface.
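
When the image comes from swift, the conductor must be able to generate
temporary URLs for it. As a sketch, assuming a swift backend, the relevant
options live in the ``[glance]`` section of ``ironic.conf`` (verify the
option names against the configuration reference for your release):

.. code-block:: ini

   [glance]
   # Secret key set on the swift account, used to sign temporary URLs.
   swift_temp_url_key = secret-key
   # Lifetime of a generated temporary URL, in seconds.
   swift_temp_url_duration = 1200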

View File

@ -429,9 +429,10 @@ Switch to the stack user and clone DevStack::
git clone https://git.openstack.org/openstack-dev/devstack.git devstack
Create devstack/local.conf with minimal settings required to enable Ironic.
You can use either of two drivers for deploy: agent\_\* or pxe\_\*, see
:doc:`/admin/interfaces/deploy` for an explanation. An example local.conf that
enables both types of drivers and uses the ``agent_ipmitool`` driver
by default::

    cd devstack
    cat >local.conf <<END

View File

@ -55,7 +55,9 @@ There are several types of hardware interfaces:
boot
  manages booting of both the deploy ramdisk and the user instances on the
  bare metal node. See :doc:`/admin/interfaces/boot` for details.
  Boot interface implementations are often vendor specific, and can be
  enabled via the ``enabled_boot_interfaces`` option:

  .. code-block:: ini
@ -71,21 +73,24 @@ console
  manages access to the serial console of a bare metal node.
  See :doc:`/admin/console` for details.
deploy
  defines how the image gets transferred to the target disk. See
  :doc:`/admin/interfaces/deploy` for an explanation of the difference
  between the supported deploy interfaces ``direct`` and ``iscsi``.

  The deploy interfaces can be enabled as follows:

  .. code-block:: ini

     [DEFAULT]
     enabled_hardware_types = ipmi,redfish
     enabled_deploy_interfaces = iscsi,direct

  Additionally,

  * the ``iscsi`` deploy interface requires :doc:`configure-iscsi`
  * the ``direct`` deploy interface requires the Object Storage service
    or an HTTP service

inspect
  implements fetching hardware information from nodes. Can be implemented
  out-of-band (via contacting the node's BMC) or in-band (via booting
@ -312,5 +317,4 @@ See :doc:`/admin/drivers` for the required configuration of each driver.
.. _driver composition reform specification: https://specs.openstack.org/openstack/ironic-specs/specs/approved/driver-composition-reform.html
.. _setup.cfg: https://git.openstack.org/cgit/openstack/ironic/tree/setup.cfg
.. _ironic-inspector: https://docs.openstack.org/ironic-inspector/latest/

View File

@ -89,32 +89,19 @@ The boot interface of a node manages booting of both the deploy ramdisk and
the user instances on the bare metal node. The deploy interface orchestrates
the deployment and defines how the image gets transferred to the target disk.

The main alternatives are to use PXE/iPXE or virtual media; see
:doc:`/admin/interfaces/boot` for a detailed explanation. If a virtual media
implementation is available for the hardware, it is recommended to use it
for better scalability and security. Otherwise, it is recommended to use
iPXE when the target hardware supports it.
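
For example, assuming a node whose hardware type enables it, the ``pxe`` boot
interface can be selected explicitly (a sketch using the standard client)::

    openstack baremetal node set <NODE> --boot-interface pxe
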
Deploy interface
~~~~~~~~~~~~~~~~
There are two deploy interfaces in-tree, ``iscsi`` and ``direct``. See
:doc:`../../admin/interfaces/deploy` for an explanation of the difference.
With the ``iscsi`` deploy method, most of the deployment operations happen on
the conductor. If the Object Storage service (swift) or RadosGW is present in
the environment, it is recommended to use the ``direct`` deploy method for
better scalability and reliability.
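
For example, to opt a node into the ``direct`` deploy method::

    openstack baremetal node set <NODE> --deploy-interface direct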

View File

@ -250,9 +250,8 @@ the same.
#. The node boots the deploy ramdisk.
#. Depending on the exact driver used, either the conductor copies the image
   over iSCSI to the physical node (:ref:`iscsi-deploy`) or the deploy ramdisk
   downloads the image from a temporary URL (:ref:`direct-deploy`).

   The temporary URL can be generated by Swift API-compatible object stores,
   for example Swift itself or RadosGW.
@ -323,11 +322,12 @@ The following two examples describe what ironic is doing in more detail,
leaving out the actions performed by nova and some of the more advanced
options.
.. _iscsi-deploy-example:
Example 1: PXE Boot and iSCSI Deploy Process
--------------------------------------------

This process is how :ref:`iscsi-deploy` works.

.. seqdiag::
   :scale: 75
@ -381,10 +381,12 @@ is ``pxe_agent_cimc`` driver).
(From a `talk`_ and `slides`_)
.. _direct-deploy-example:
Example 2: PXE Boot and Direct Deploy Process
---------------------------------------------

This process is how :ref:`direct-deploy` works.

.. seqdiag::
   :scale: 75