Merge "Adjust globaltoc"

Authored by Jenkins on 2016-04-15 06:47:15 +00:00; committed by Gerrit Code Review
commit c27b2657b4
10 changed files with 117 additions and 111 deletions

View File

@@ -4,6 +4,6 @@ stylesheet = css/basic.css
pygments_style = native
[options]
globaltoc_depth = 3
globaltoc_depth = 4
globaltoc_includehidden = true
analytics_tracking_code = UA-17511903-1
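
For reference, theme options declared in the ``[options]`` section of a
``theme.conf`` like the one above can usually also be overridden per project
from Sphinx's ``conf.py``. A minimal sketch, assuming this theme exposes its
keys through Sphinx's standard ``html_theme_options`` mechanism (the key
names come from the file above; the dictionary itself is not part of this
change):

.. code-block:: python

   # conf.py -- illustrative only; assumes the theme supports overriding its
   # theme.conf [options] keys via Sphinx's html_theme_options.
   html_theme_options = {
       'globaltoc_depth': 4,            # depth raised from 3 to 4 in this change
       'globaltoc_includehidden': True,
   }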

View File

@@ -40,7 +40,7 @@ the following tasks:
OpenStack environment.
Boot workflow of a Fuel Slave node
----------------------------------
++++++++++++++++++++++++++++++++++
The boot workflow of a Fuel Slave node does not require any user interaction.
For general understanding of the processes that take place in the system when

View File

@@ -38,3 +38,108 @@ The sample OpenStack environment includes:
| | database, you must add three|
| | more servers. |
+--------------------------+-----------------------------+
.. _sysreqs_sample_target_node_config_controller:
Controller nodes
++++++++++++++++
In this example, we use Ceph as a back end for ephemeral storage.
Therefore, in addition to the basic OpenStack components and a MySQL
database,
controller nodes require sufficient resources to run Ceph monitors.
Each controller node must include:
+--------------+-----------------------------------+
| CPU | 2 CPUs with at least six physical |
| | cores each |
+--------------+-----------------------------------+
| RAM | * For testing: 2 GB |
| | * For production: |
| | |
| | * 24 GB (minimum) |
| | * 64 GB for deployments of |
| | 1000 VMs or more |
+--------------+-----------------------------------+
| Network | * For testing: 2 x 1 Gbit/s NICs |
| | * For production: 2 x 10 Gbit/s |
| | NICs |
+--------------+-----------------------------------+
| Storage | Hardware RAID 1 with at least 1 TB|
| | formatted capacity for the |
| | host operating system disk |
| | |
| | Larger disks may be warranted |
| | depending on the expected database|
| | and log storage requirements. |
+--------------+-----------------------------------+
.. _sysreqs_sample_target_node_config_compute:
Compute nodes
+++++++++++++
Your virtual machines are hosted on the compute nodes; therefore,
you must allocate enough resources to run these virtual machines.
Each compute node must include:
+---------------+----------------------------------+
| CPU | Dual-socket CPU with a minimum |
| | of 4 cores per socket |
+---------------+----------------------------------+
| RAM | 64 GB |
+---------------+----------------------------------+
| Storage | Hardware RAID 1 controller with |
| | at least 500 GB capacity for |
| | the host operating system disk |
+---------------+----------------------------------+
| Network | * For testing: 2 x 1 Gbit/s NICs |
| | * For production: 2 x 10 Gbit/s |
| | NICs |
+---------------+----------------------------------+
.. _sysreqs_sample_target_node_config_storage:
Storage nodes
+++++++++++++
We recommend that you use separate nodes for Ceph for scalability and
robustness.
The hardware estimate provided below is based on the rule of thumb of
0.5 CPU cores per Ceph OSD and 1 GB of RAM per 1 TB of Ceph OSD space.
You can configure all Ceph storage and journal hard disks in JBOD
(Just a Bunch of Disks) mode on the RAID controller or plug them directly
into the available SATA or SAS ports on the mainboard.
Each storage node must include:
+------------------------+---------------------------------+
| CPU | Single-socket CPU with at least |
| | 4 physical cores |
+------------------------+---------------------------------+
| RAM | 24 GB |
+------------------------+---------------------------------+
| Storage | RAID 1 controller with at least |
| | 500 GB capacity for the host |
| | operating system disk |
| | |
| | For production installations, |
| | set the Ceph object replication |
| | factor to 3 or greater. |
+------------------------+---------------------------------+
| Network | * For testing: 2 x 1 Gbit/s NICs|
| | * For production: 2 x 10 Gbit/s |
| | NICs |
+------------------------+---------------------------------+
| Storage | * 18 TB for Ceph storage |
| | (6 x 3 TB) |
| | * 1-2 x 64 GB SSDs or more, for |
| | the Ceph journal |
+------------------------+---------------------------------+
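
To make the sizing rule above concrete, here is a small back-of-the-envelope
sketch that applies the 0.5 cores per OSD and 1 GB of RAM per 1 TB guideline
to the 6 x 3 TB disk layout from the table. The variable names are purely
illustrative and not part of any Fuel tooling:

.. code-block:: python

   # Rough Ceph storage-node sizing using the rule of thumb above:
   # ~0.5 CPU cores per Ceph OSD and ~1 GB of RAM per 1 TB of OSD space.
   osd_disks = 6          # one OSD per data disk, as in the 6 x 3 TB example
   disk_size_tb = 3

   raw_capacity_tb = osd_disks * disk_size_tb   # 18 TB of raw Ceph storage
   min_cores = osd_disks * 0.5                  # 3 cores -> a 4-core CPU fits
   min_ram_gb = raw_capacity_tb * 1             # 18 GB -> 24 GB leaves headroom

   print(raw_capacity_tb, min_cores, min_ram_gb)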

View File

@@ -1,24 +0,0 @@
.. _sysreqs_sample_target_node_config_compute:
Compute nodes
-------------
Your virtual machines are hosted on the compute nodes; therefore,
you must allocate enough resources to run these virtual machines.
Each compute node must include:
+---------------+----------------------------------+
| CPU | Dual-socket CPU with a minimum |
| | of 4 cores per socket |
+---------------+----------------------------------+
| RAM | 64 GB |
+---------------+----------------------------------+
| Storage | Hardware RAID 1 controller with |
| | at least 500 GB capacity for |
| | the host operating system disk |
+---------------+----------------------------------+
| Network | * For testing: 2 x 1 Gbit/s NICs |
| | * For production: 2 x 10 Gbit/s |
| | NICs |
+---------------+----------------------------------+

View File

@@ -1,34 +0,0 @@
.. _sysreqs_sample_target_node_config_controller:
Controller nodes
----------------
In this example, we use Ceph as a back end for ephemeral storage.
Therefore, in addition to the basic OpenStack components and a MySQL database,
controller nodes require sufficient resources to run Ceph monitors.
Each controller node must include:
+--------------+-----------------------------------+
| CPU | 2 CPUs with at least six physical |
| | cores each |
+--------------+-----------------------------------+
| RAM | * For testing: 2 GB |
| | * For production: |
| | |
| | * 24 GB (minimum) |
| | * 64 GB for deployments of |
| | 1000 VMs or more |
+--------------+-----------------------------------+
| Network | * For testing: 2 x 1 Gbit/s NICs |
| | * For production: 2 x 10 Gbit/s |
| | NICs |
+--------------+-----------------------------------+
| Storage | Hardware RAID 1 with at least 1 TB|
| | formatted capacity for the |
| | host operating system disk |
| | |
| | Larger disks may be warranted |
| | depending on the expected database|
| | and log storage requirements. |
+--------------+-----------------------------------+

View File

@@ -1,38 +0,0 @@
.. _sysreqs_sample_target_node_config_storage:
Storage nodes
-------------
We recommend that you use separate nodes for Ceph for scalability and robustness.
The hardware estimate provided below is based on the rule of thumb of 0.5 CPU
cores per Ceph OSD and 1 GB of RAM per 1 TB of Ceph OSD space. You can configure
all Ceph storage and journal hard disks in JBOD (Just a Bunch of Disks) mode
on the RAID controller or plug them directly into the available SATA or SAS
ports on the mainboard.
Each storage node must include:
+------------------------+---------------------------------+
| CPU | Single-socket CPU with at least |
| | 4 physical cores |
+------------------------+---------------------------------+
| RAM | 24 GB |
+------------------------+---------------------------------+
| Storage | RAID 1 controller with at least |
| | 500 GB capacity for the host |
| | operating system disk |
| | |
| | For production installations, |
| | set the Ceph object replication |
| | factor to 3 or greater. |
+------------------------+---------------------------------+
| Network | * For testing: 2 x 1 Gbit/s NICs|
| | * For production: 2 x 10 Gbit/s |
| | NICs |
+------------------------+---------------------------------+
| Storage | * 18 TB for Ceph storage |
| | (6 x 3 TB) |
| | * 1-2 x 64 GB SSDs or more, for |
| | the Ceph journal |
+------------------------+---------------------------------+

View File

@@ -26,6 +26,3 @@ This section includes the following topics:
sysreq/sysreq_ironic_prereq
sysreq/sysreq_ironic_limitations
sysreq/sysreq_sample_configuration
sysreq/sysreq_sample_configuration_controllers
sysreq/sysreq_sample_configuration_compute
sysreq/sysreq_sample_configuration_storage

View File

@@ -47,7 +47,7 @@ the upgrade packages from these repositories.
test staging environment before applying the updates to production.
Patch the Fuel Master node
--------------------------
++++++++++++++++++++++++++
#. Back up your data with ``dockerctl backup``. This saves the data
to ``/var/backup/fuel/``.
@@ -57,7 +57,7 @@ Patch the Fuel Master node
#. Wait for the new containers deployment to finish.
Patch an Ubuntu slave node
--------------------------
++++++++++++++++++++++++++
#. Run ``apt-get update``.
#. Run ``apt-get upgrade``.
@@ -66,7 +66,7 @@ Patch an Ubuntu slave node
#. Reboot the node.
Apply Puppet changes on a slave node
------------------------------------
++++++++++++++++++++++++++++++++++++
You may want to apply all changes on a slave node or run a single
granular task so that Fuel Puppet changes take effect.
@@ -88,7 +88,7 @@ granular task so that Fuel Puppet changes take effect.
**Verify a patch:**
Verify a patch on the Fuel Master node
--------------------------------------
++++++++++++++++++++++++++++++++++++++
To verify the packages on the Fuel Master node:
@@ -101,7 +101,7 @@ To verify the packages on the Fuel Master node:
yum -y update
Verify a patch on a Fuel slave node
-----------------------------------
+++++++++++++++++++++++++++++++++++
To verify that the packages are up to date on the Fuel Slave nodes:
@@ -132,7 +132,7 @@ To verify the packages are up-to-date on the Fuel Slave nodes:
The rollback instructions listed here are for advanced administrators.
Roll back the Fuel Master node
------------------------------
++++++++++++++++++++++++++++++
#. Roll back the packages on the Fuel Master node.
`Refer to this article <https://access.redhat.com/solutions/64069>`__ as an example.
@@ -143,7 +143,7 @@ Roll back the Fuel Master node
#. Wait for bootstrap to complete.
Roll back an Ubuntu slave node
------------------------------
++++++++++++++++++++++++++++++
You must identify the packages to roll back and where to get
their specific versions, install the packages and roll back the

View File

@@ -44,7 +44,7 @@ or through Fuel CLI using the ``fuel-createmirror`` script.
Alternatively (recommended), reboot the Fuel Master node.
About the fuel-createmirror script
----------------------------------
++++++++++++++++++++++++++++++++++
``fuel-createmirror`` is a built-in Fuel script that enables
you to modify the Fuel repository sources from the CLI.

View File

@@ -7,7 +7,7 @@ To enable inter-node communication, you must configure networks on
VMware vCenter.
Configure a network for Fuel Admin (PXE) traffic
------------------------------------------------
++++++++++++++++++++++++++++++++++++++++++++++++
You must configure a network for the Fuel Admin (PXE) traffic
and enable Promiscuous mode.
@@ -21,7 +21,7 @@ and enable Promiscuous mode.
#. Click the **Add Host Networking** icon.
Create a vCenter Port Group network
-----------------------------------
+++++++++++++++++++++++++++++++++++
You must create a Port Group with Promiscuous mode enabled.