Add versioning to installation and developer guides

For both installation and developer guides:
- Move 2018_10 into versioned sub folder
- Add latest version (copy of 2018_10) to latest folder (to be updated for next release)
- Update intro to each versioned guide
- Update relative links (version specific) for each version
- Update all version references to use the standard format stx.Year.Mo
- Clean up headings to use sentence casing per OpenStack guidelines
- Clean up capitalization: only capitalize official names/terms and keep
  casing consistent with command line examples

Change-Id: Id5fd0a78a1d81fdf8c63d132f2d4c50f9ed2f2bf
Signed-off-by: Kristal Dale <kristal.dale@intel.com>
Kristal Dale 2019-03-11 16:30:26 -07:00 committed by Abraham Arce
parent 8fd61ad17e
commit 5a51a00b57
30 changed files with 7461 additions and 1833 deletions


@@ -0,0 +1,839 @@
===========================
Developer guide stx.2018.10
===========================
This section contains the steps for building a StarlingX ISO from
the stx.2018.10 branch.
If a developer guide is needed for a previous release, review the
:doc:`developer guides for all previous releases </developer_guide/index>`.
------------
Requirements
------------
The recommended minimum requirements include:
*********************
Hardware requirements
*********************
A workstation computer with:
- Processor: x86_64 is the only supported architecture
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Network adapter with active Internet connection
*********************
Software requirements
*********************
A workstation computer with:
- Operating System: Ubuntu 16.04 LTS 64-bit
- Docker
- Android Repo Tool
- Proxy settings configured (if required)
- See
http://lists.starlingx.io/pipermail/starlingx-discuss/2018-July/000136.html
for more details
- Public SSH key
-----------------------------
Development environment setup
-----------------------------
This section describes how to set up a StarlingX development system on a
workstation computer. After completing these steps, you can
build a StarlingX ISO image on the following Linux distribution:
- Ubuntu 16.04 LTS 64-bit
****************************
Update your operating system
****************************
Before proceeding with the build, ensure your Ubuntu distribution is up to date.
You first need to update the local database list of available packages:
.. code:: sh
$ sudo apt-get update
******************************************
Installation requirements and dependencies
******************************************
^^^^
User
^^^^
1. Make sure you are a non-root user with sudo enabled when you build the
StarlingX ISO. You also need to either use your existing user or create a
separate *<user>*:
.. code:: sh
$ sudo useradd -m -d /home/<user> <user>
2. Your *<user>* should have sudo privileges:
.. code:: sh
$ sudo sh -c "echo '<user> ALL=(ALL:ALL) ALL' >> /etc/sudoers"
$ sudo su - <user>
^^^
Git
^^^
3. Install the required packages on the Ubuntu host system:
.. code:: sh
$ sudo apt-get install make git curl
4. Make sure to set up your identity using the following two commands.
Be sure to provide your actual name and email address:
.. code:: sh
$ git config --global user.name "Name LastName"
$ git config --global user.email "Email Address"
^^^^^^^^^
Docker CE
^^^^^^^^^
5. Install the required Docker CE packages in the Ubuntu host system. See
`Get Docker CE for
Ubuntu <https://docs.docker.com/install/linux/docker-ce/ubuntu/#os-requirements>`__
for more information.
6. Add your *<user>* to the Docker group, then log out and back in so the change takes effect:
.. code:: sh
$ sudo usermod -aG docker <user>
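Optionally, after logging back in, confirm that Docker runs without sudo. A minimal check using the public hello-world image:
.. code:: sh
$ docker run --rm hello-world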
^^^^^^^^^^^^^^^^^
Android Repo Tool
^^^^^^^^^^^^^^^^^
7. Install the required Android Repo Tool in the Ubuntu host system. Follow
the steps in the `Installing
Repo <https://source.android.com/setup/build/downloading#installing-repo>`__
section.
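For reference, a typical installation following those instructions looks like the sketch below; the download URL is the one published by the Android project and may change:
.. code:: sh
$ mkdir -p ~/bin
$ curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
$ chmod a+x ~/bin/repo
$ export PATH=~/bin:$PATH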
**********************
Install public SSH key
**********************
#. Follow these instructions on GitHub to `Generate a Public SSH
Key <https://help.github.com/articles/connecting-to-github-with-ssh>`__.
Then upload your public key to your GitHub and Gerrit account
profiles:
- `Upload to
Github <https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account>`__
- `Upload to
Gerrit <https://review.openstack.org/#/settings/ssh-keys>`__
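For example, to generate a key pair and print the public key to copy into those profiles (the comment string is a placeholder):
.. code:: sh
$ ssh-keygen -t rsa -C "your.email@example.com"
$ cat ~/.ssh/id_rsa.pub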
****************************
Create a workspace directory
****************************
#. Create a *starlingx* workspace directory on your system.
Best practices dictate creating the workspace directory
in your $HOME directory:
.. code:: sh
$ mkdir -p $HOME/starlingx/
*************************
Install stx-tools project
*************************
#. Under your $HOME directory, clone the <stx-tools> project:
.. code:: sh
$ cd $HOME
$ git clone https://git.starlingx.io/stx-tools
#. Navigate to the *<$HOME/stx-tools>* project
directory:
.. code:: sh
$ cd $HOME/stx-tools/
-----------------------------
Prepare the base Docker image
-----------------------------
The StarlingX base Docker image handles all steps related to StarlingX ISO
creation. This section describes how to customize the base Docker image
building process.
********************
Configuration values
********************
You can customize values for the StarlingX base Docker image using a
text-based configuration file named ``localrc``:
- ``HOST_PREFIX`` points to the directory that hosts the 'designer'
subdirectory for source code, the 'loadbuild' subdirectory for
the build environment, generated RPMs, and the ISO image.
- ``HOST_MIRROR_DIR`` points to the directory that hosts the CentOS mirror
repository.
^^^^^^^^^^^^^^^^^^^^^^^^^^
localrc configuration file
^^^^^^^^^^^^^^^^^^^^^^^^^^
Create your ``localrc`` configuration file. For example:
.. code:: sh
# tbuilder localrc
MYUNAME=<your user name>
PROJECT=starlingx
HOST_PREFIX=$HOME/starlingx/workspace
HOST_MIRROR_DIR=$HOME/starlingx/mirror
***************************
Build the base Docker image
***************************
Once the ``localrc`` configuration file has been customized, it is time
to build the base Docker image.
#. If your environment requires a proxy, set the http/https proxy variables
in your Dockerfile before building the Docker image:
.. code:: sh
ENV http_proxy " http://your.actual_http_proxy.com:your_port "
ENV https_proxy " https://your.actual_https_proxy.com:your_port "
ENV ftp_proxy " http://your.actual_ftp_proxy.com:your_port "
RUN echo " proxy=http://your-proxy.com:port " >> /etc/yum.conf
#. The ``tb.sh`` script automates the Base Docker image build:
.. code:: sh
./tb.sh create
----------------------------------
Build the CentOS mirror repository
----------------------------------
The creation of the StarlingX ISO relies on a repository of RPM binaries,
RPM sources, and compressed tar files. This section describes how to build
this CentOS mirror repository.
*******************************
Run repository Docker container
*******************************
| Run the following commands in a terminal identified as "**One**":
#. Navigate to the *$HOME/stx-tools/centos-mirror-tools* project
directory:
.. code:: sh
$ cd $HOME/stx-tools/centos-mirror-tools/
#. Launch the Docker container using the previously created base Docker image
*<repository>:<tag>*. As /localdisk is defined as the workdir of the
container, you should use the same folder name to define the volume.
The container starts to run and populate 'logs' and 'output' folders in
this directory. The container runs from the same directory in which the
scripts are stored.
.. code:: sh
$ docker run -it --volume $(pwd):/localdisk local/$USER-stx-builder:7.4 bash
*****************
Download packages
*****************
#. Inside the Docker container, enter the following commands to download
the required packages to populate the CentOS mirror repository:
::
# cd localdisk && bash download_mirror.sh
#. Monitor the download of packages until it is complete. When the download
is complete, the following message appears:
::
totally 17 files are downloaded!
step #3: done successfully
IMPORTANT: The following 3 files are just bootstrap versions. Based on them, the workable images
for StarlingX could be generated by running "update-pxe-network-installer" command after "build-iso"
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz
***************
Verify packages
***************
#. Verify no missing or failed packages exist:
::
# cat logs/*_missing_*.log
# cat logs/*_failmove_*.log
#. If missing or failed packages exist, which is usually caused by network
instability or timeouts, you need to download the packages manually.
Doing so ensures you get all RPMs listed in
*rpms_3rdparties.lst*, *rpms_centos.lst*, and *rpms_centos3rdparties.lst*.
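As a quick sketch, the following command counts entries across both log patterns shown above; a total of 0 means no manual downloads are needed:
::
# cat logs/*_missing_*.log logs/*_failmove_*.log | wc -l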
******************
Packages structure
******************
The following is a general overview of the package structure that results
from downloading the packages:
::
/home/<user>/stx-tools/centos-mirror-tools/output
└── stx-r1
└── CentOS
└── pike
├── Binary
│   ├── EFI
│   ├── images
│   ├── isolinux
│   ├── LiveOS
│   ├── noarch
│   └── x86_64
├── downloads
│   ├── integrity
│   └── puppet
└── Source
*******************************
Create CentOS mirror repository
*******************************
Outside your Repository Docker container, in another terminal identified
as "**Two**", run the following commands:
#. From the terminal identified as "**Two**", create a *mirror/CentOS*
directory under your *starlingx* workspace directory:
.. code:: sh
$ mkdir -p $HOME/starlingx/mirror/CentOS/
#. Copy the CentOS mirror repository built under
*$HOME/stx-tools/centos-mirror-tools* to the *$HOME/starlingx/mirror/*
workspace directory:
.. code:: sh
$ cp -r $HOME/stx-tools/centos-mirror-tools/output/stx-r1/ $HOME/starlingx/mirror/CentOS/
-------------------------
Create StarlingX packages
-------------------------
*****************************
Run building Docker container
*****************************
#. From the terminal identified as "**Two**", create the workspace folder:
.. code:: sh
$ mkdir -p $HOME/starlingx/workspace
#. Navigate to the *$HOME/stx-tools* project directory:
.. code:: sh
$ cd $HOME/stx-tools
#. Verify environment variables:
.. code:: sh
$ bash tb.sh env
#. Run the building Docker container:
.. code:: sh
$ bash tb.sh run
#. Execute the building Docker container:
.. code:: sh
$ bash tb.sh exec
*********************************
Download source code repositories
*********************************
#. From the terminal identified as "**Two**", which is now inside the
Building Docker container, start the internal environment:
.. code:: sh
$ eval $(ssh-agent)
$ ssh-add
#. Use the repo tool to create a local clone of the stx-manifest
Git repository based on the "r/2018.10" branch:
.. code:: sh
$ cd $MY_REPO_ROOT_DIR
$ repo init -u https://git.starlingx.io/stx-manifest -m default.xml -b r/2018.10
**NOTE:** To use the "repo" command to clone the stx-manifest repository and
check out the "master" branch, omit the "-b r/2018.10" option.
Following is an example:
.. code:: sh
$ repo init -u https://git.starlingx.io/stx-manifest -m default.xml
#. Synchronize the repository:
.. code:: sh
$ repo sync -j`nproc`
#. Create a tarballs repository:
.. code:: sh
$ ln -s /import/mirrors/CentOS/stx-r1/CentOS/pike/downloads/ $MY_REPO/stx/
Alternatively, you can run the "populate_downloads.sh" script to copy
the tarballs instead of using a symlink:
.. code:: sh
$ populate_downloads.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
Outside the container:
#. From another terminal identified as "**Three**", create mirror binaries:
.. code:: sh
$ mkdir -p $HOME/starlingx/mirror/CentOS/stx-installer
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img $HOME/starlingx/mirror/CentOS/stx-installer/initrd.img
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz $HOME/starlingx/mirror/CentOS/stx-installer/vmlinuz
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img $HOME/starlingx/mirror/CentOS/stx-installer/squashfs.img
**************
Build packages
**************
#. Go back to the terminal identified as "**Two**", which is the Building Docker container.
#. **Temporary!** Build-pkgs errors: be prepared for some missing or
corrupted RPM and tarball packages generated during
`Build the CentOS Mirror Repository`_, which will cause the next step
to fail. If that step does fail, manually download those missing or
corrupted packages.
#. Update the symbolic links:
.. code:: sh
$ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
#. Build the packages:
.. code:: sh
$ build-pkgs
#. **Optional!** Generate-Cgcs-Tis-Repo:
While this step is optional, it improves performance on subsequent
builds. The cgcs-tis-repo has the dependency information that
sequences the build order. To generate or update the information, you
need to execute the following command after building modified or new
packages.
.. code:: sh
$ generate-cgcs-tis-repo
-------------------
Build StarlingX ISO
-------------------
#. Build the image:
.. code:: sh
$ build-iso
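The resulting ISO is written under your build workspace. As a sketch (the *export* subdirectory name is assumed here), you can confirm the image exists with:
.. code:: sh
$ ls -lh $MY_WORKSPACE/export/*.iso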
---------------
Build installer
---------------
To get your StarlingX ISO ready to use, you must create the initialization
files used to boot the ISO, additional controllers, and compute nodes.
**NOTE:** You only need this procedure during your first build and
every time you upgrade the kernel.
After running "build-iso", run:
.. code:: sh
$ build-pkgs --installer
This builds *rpm* and *anaconda* packages. Then run:
.. code:: sh
$ update-pxe-network-installer
The *update-pxe-network-installer* script covers the steps detailed in
*$MY_REPO/stx/stx-metal/installer/initrd/README* and
creates three files in
*/localdisk/loadbuild/pxe-network-installer/output*:
::
new-initrd.img
new-squashfs.img
new-vmlinuz
Rename the files as follows:
::
initrd.img
squashfs.img
vmlinuz
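For example, from the output directory listed above:
.. code:: sh
$ cd /localdisk/loadbuild/pxe-network-installer/output
$ mv new-initrd.img initrd.img
$ mv new-squashfs.img squashfs.img
$ mv new-vmlinuz vmlinuz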
Two ways exist for using these files:
#. Store the files in the */import/mirror/CentOS/stx-installer/* folder
for future use.
#. Store the files in an arbitrary location and modify the
*$MY_REPO/stx/stx-metal/installer/pxe-network-installer/centos/build_srpm.data*
file to point to these files.
Recreate the *pxe-network-installer* package and rebuild the image:
.. code:: sh
$ build-pkgs --clean pxe-network-installer
$ build-pkgs pxe-network-installer
$ build-iso
Your ISO image should be able to boot.
****************
Additional notes
****************
- This complete procedure is required to get the first boot working. However,
once the init files are created, they can be stored in a shared location
where different developers can use them. Updating these files is not a
frequent task and is only needed when the kernel is upgraded.
- StarlingX is in active development. Consequently, it is possible that in the
future the **0.2** version will change to a more generic solution.
---------------
Build avoidance
---------------
*******
Purpose
*******
Greatly reduce build times after using "repo" to synchronize a local
repository with an upstream source (i.e. "repo sync").
Build avoidance works well for designers working
within a regional office. Starting from a new workspace, "build-pkgs"
typically requires three or more hours to complete. Build avoidance
reduces this step to approximately 20 minutes.
***********
Limitations
***********
- Little or no benefit for designers who refresh a pre-existing
workspace at least daily (e.g. download_mirror.sh, repo sync,
generate-cgcs-centos-repo.sh, build-pkgs, build-iso). In these cases,
an incremental build (i.e. reuse of the same workspace without a "build-pkgs
--clean") is often just as efficient.
- Not likely to be useful to solo designers, or teleworkers who wish
to build using their home computers. Build avoidance downloads build
artifacts from a reference build, and WAN speeds are generally too
slow.
*****************
Method (in brief)
*****************
#. Reference Builds
- A server in the regional office performs regular (e.g. daily)
automated builds using existing methods. These builds are called
"reference builds".
- The builds are timestamped and preserved for some time (i.e. a
number of weeks).
- A build CONTEXT, which is a file produced by "build-pkgs"
at location *$MY_WORKSPACE/CONTEXT*, is captured. It is a bash script that can
cd to each Git repository and check out the SHA that contributed to
the build.
- For each package built, a file captures the md5sums of all the
source code inputs required to build that package. These files are
also produced by "build-pkgs" at location
*$MY_WORKSPACE//rpmbuild/SOURCES//srpm_reference.md5*.
- All these build products are accessible locally (e.g. a regional
office) using "rsync".
**NOTE:** Other protocols can be added later.
#. Designers
- Request a build avoidance build. This is recommended after you have
synchronized the repository (i.e. "repo sync").
::
repo sync
generate-cgcs-centos-repo.sh
populate_downloads.sh
build-pkgs --build-avoidance
- Use combinations of additional arguments, environment variables, and a
configuration file unique to the regional office to specify a URL
to the reference builds.
- Using a configuration file to specify the location of your reference build:
::
mkdir -p $MY_REPO/local-build-data
cat <<- EOF > $MY_REPO/local-build-data/build_avoidance_source
# Optional, these are already the default values.
BUILD_AVOIDANCE_DATE_FORMAT="%Y%m%d"
BUILD_AVOIDANCE_TIME_FORMAT="%H%M%S"
BUILD_AVOIDANCE_DATE_TIME_DELIM="T"
BUILD_AVOIDANCE_DATE_TIME_POSTFIX="Z"
BUILD_AVOIDANCE_DATE_UTC=1
BUILD_AVOIDANCE_FILE_TRANSFER="rsync"
# Required, unique values for each regional office
BUILD_AVOIDANCE_USR="jenkins"
BUILD_AVOIDANCE_HOST="stx-builder.mycompany.com"
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
EOF
- Using command-line arguments to specify the location of your reference
build:
::
build-pkgs --build-avoidance --build-avoidance-dir /localdisk/loadbuild/jenkins/StarlingX_Reference_Build --build-avoidance-host stx-builder.mycompany.com --build-avoidance-user jenkins
- Prior to your build attempt, you need to accept the host key.
Doing so prevents "rsync" failures on a "yes/no" prompt.
You only have to do this once.
::
grep -q $BUILD_AVOIDANCE_HOST $HOME/.ssh/known_hosts
if [ $? != 0 ]; then
ssh-keyscan $BUILD_AVOIDANCE_HOST >> $HOME/.ssh/known_hosts
fi
- "build-pkgs" does the following:
- From newest to oldest, scans the CONTEXTs of the various
reference builds. Selects the first (i.e. most recent) context that
satisfies the following requirement: every Git SHA specified
in the CONTEXT is present in the designer's local repositories.
- The selected context might be slightly out of date, but not by
more than a day. This assumes daily reference builds are run.
- If the context has not been downloaded previously, it is downloaded now.
This means select portions of the reference build workspace are downloaded
into the designer's workspace. This includes all the SRPMS, RPMS, MD5SUMS,
and miscellaneous supporting files. Downloading these files usually takes
about 10 minutes over an office LAN.
- The designer could have additional commits or uncommitted changes
not present in the reference builds. Affected packages are
identified by the differing md5sums. In these cases, the packages
are re-built. Re-builds usually take five or more minutes,
depending on the packages that have changed.
- What if no valid reference build is found? Then build-pkgs will fall
back to a regular build.
****************
Reference builds
****************
- The regional office implements an automated build that pulls the
latest StarlingX software and builds it on a regular basis (e.g.
daily builds). Jenkins, cron, or similar tools can trigger these builds.
- Each build is saved to a unique directory and preserved for a time that
reflects how long a designer might be expected to work on a private
branch without synchronizing with the master branch. This is typically
about two weeks.
- The *MY_WORKSPACE* directory for the build shall have a common root
directory, and a leaf directory that is a sortable time stamp. The
suggested format is *YYYYMMDDThhmmss*.
.. code:: sh
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
BUILD_TIMESTAMP=$(date -u '+%Y%m%dT%H%M%SZ')
MY_WORKSPACE=${BUILD_AVOIDANCE_DIR}/${BUILD_TIMESTAMP}
- Designers can access all build products over the internal network of
the regional office. The current prototype employs "rsync". Other
protocols that can efficiently share, copy, or transfer large directories
of content can be added as needed.
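As an illustrative sketch only, reusing the example user, host, and directory from the configuration above, a designer could list the available reference builds with:
::
rsync jenkins@stx-builder.mycompany.com:/localdisk/loadbuild/jenkins/StarlingX_Reference_Build/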
^^^^^^^^^^^^^^
Advanced usage
^^^^^^^^^^^^^^
Can the reference build itself use build avoidance? Yes it can.
Can it reference itself? Yes it can.
In both these cases, caution is advised. To protect against any possible
'divergence from reality', you should limit how many steps you remove
a build avoidance build from a full build.
Suppose we want to implement a self-referencing daily build in an
environment where a full build already occurs every Saturday.
To protect ourselves from a
build failure on Saturday we also want a limit of seven days since
the last full build. Your build script might look like this ...
::
...
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
BUILD_AVOIDANCE_HOST="stx-builder.mycompany.com"
FULL_BUILD_DAY="Saturday"
MAX_AGE_DAYS=7
LAST_FULL_BUILD_LINK="$BUILD_AVOIDANCE_DIR/latest_full_build"
LAST_FULL_BUILD_DAY=""
NOW_DAY=$(date -u "+%A")
BUILD_TIMESTAMP=$(date -u '+%Y%m%dT%H%M%SZ')
MY_WORKSPACE=${BUILD_AVOIDANCE_DIR}/${BUILD_TIMESTAMP}
# update software
repo init -u ${BUILD_REPO_URL} -b ${BUILD_BRANCH}
repo sync --force-sync
$MY_REPO_ROOT_DIR/stx-tools/toCOPY/generate-cgcs-centos-repo.sh
$MY_REPO_ROOT_DIR/stx-tools/toCOPY/populate_downloads.sh
# User can optionally define BUILD_METHOD equal to one of 'FULL', 'AVOIDANCE', or 'AUTO'
# Sanitize BUILD_METHOD
if [ "$BUILD_METHOD" != "FULL" ] && [ "$BUILD_METHOD" != "AVOIDANCE" ]; then
BUILD_METHOD="AUTO"
fi
# First build test
if [ "$BUILD_METHOD" != "FULL" ] && [ ! -L $LAST_FULL_BUILD_LINK ]; then
echo "latest_full_build symlink missing, forcing full build"
BUILD_METHOD="FULL"
fi
# Build day test
if [ "$BUILD_METHOD" == "AUTO" ] && [ "$NOW_DAY" == "$FULL_BUILD_DAY" ]; then
echo "Today is $FULL_BUILD_DAY, forcing full build"
BUILD_METHOD="FULL"
fi
# Build age test
if [ "$BUILD_METHOD" != "FULL" ]; then
LAST_FULL_BUILD_DATE=$(basename $(readlink $LAST_FULL_BUILD_LINK) | cut -d '_' -f 1)
LAST_FULL_BUILD_DAY=$(date -d $LAST_FULL_BUILD_DATE "+%A")
AGE_SECS=$(( $(date "+%s") - $(date -d $LAST_FULL_BUILD_DATE "+%s") ))
AGE_DAYS=$(( $AGE_SECS/60/60/24 ))
if [ $AGE_DAYS -ge $MAX_AGE_DAYS ]; then
echo "Haven't had a full build in $AGE_DAYS days, forcing full build"
BUILD_METHOD="FULL"
else
BUILD_METHOD="AVOIDANCE"
fi
fi
#Build it
if [ "$BUILD_METHOD" == "FULL" ]; then
build-pkgs --no-build-avoidance
else
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER
fi
if [ $? -ne 0 ]; then
echo "Build failed in build-pkgs"
exit 1
fi
build-iso
if [ $? -ne 0 ]; then
echo "Build failed in build-iso"
exit 1
fi
if [ "$BUILD_METHOD" == "FULL" ]; then
# A successful full build. Set last full build symlink.
if [ -L $LAST_FULL_BUILD_LINK ]; then
rm -rf $LAST_FULL_BUILD_LINK
fi
ln -sf $MY_WORKSPACE $LAST_FULL_BUILD_LINK
fi
...
A final note: to use the full build day as your avoidance build reference
point, modify the "build-pkgs" commands above to pass "--build-avoidance-day <day>",
as shown in the following two examples:
::
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER --build-avoidance-day $FULL_BUILD_DAY
# Here is another example with a bit more shuffling of the above script.
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER --build-avoidance-day $LAST_FULL_BUILD_DAY
The advantage is that our build is never more than one step removed
from a full build. This assumes the full build was successful.
The disadvantage is that by the end of the week, the reference build is getting
rather old. During active weeks, build times could approach build times for
full builds.


@@ -1,838 +1,16 @@
.. _developer-guide:
================
Developer guides
================
Developer guides for StarlingX are release specific. To build a
StarlingX ISO from the latest release, use the
:doc:`/developer_guide/2018_10/index`.
To build an ISO from a previous release of StarlingX, use the
developer guide for your specific release:
.. toctree::
:maxdepth: 1
/developer_guide/latest/index
/developer_guide/2018_10/index


@@ -0,0 +1,839 @@
===========================
Developer guide stx.2019.05
===========================
This section contains the steps for building a StarlingX ISO from
the stx.2019.05 branch.
If a developer guide is needed for a previous release, review the
:doc:`developer guides for all previous releases </developer_guide/index>`.
------------
Requirements
------------
The recommended minimum requirements include:
*********************
Hardware requirements
*********************
A workstation computer with:
- Processor: x86_64 is the only supported architecture
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Network adapter with active Internet connection
*********************
Software requirements
*********************
A workstation computer with:
- Operating System: Ubuntu 16.04 LTS 64-bit
- Docker
- Android Repo Tool
- Proxy settings configured (if required)
- See
http://lists.starlingx.io/pipermail/starlingx-discuss/2018-July/000136.html
for more details
- Public SSH key
-----------------------------
Development environment setup
-----------------------------
This section describes how to set up a StarlingX development system on a
workstation computer. After completing these steps, you can
build a StarlingX ISO image on the following Linux distribution:
- Ubuntu 16.04 LTS 64-bit
****************************
Update your operating system
****************************
Before proceeding with the build, ensure your Ubuntu distribution is up to date.
You first need to update the local database list of available packages:
.. code:: sh
$ sudo apt-get update
******************************************
Installation requirements and dependencies
******************************************
^^^^
User
^^^^
1. Make sure you are a non-root user with sudo enabled when you build the
StarlingX ISO. You also need to either use your existing user or create a
separate *<user>*:
.. code:: sh
$ sudo useradd -m -d /home/<user> <user>
2. Your *<user>* should have sudo privileges:
.. code:: sh
$ sudo sh -c "echo '<user> ALL=(ALL:ALL) ALL' >> /etc/sudoers"
$ sudo su - <user>
^^^
Git
^^^
3. Install the required packages on the Ubuntu host system:
.. code:: sh
$ sudo apt-get install make git curl
4. Make sure to set up your identity using the following two commands.
Be sure to provide your actual name and email address:
.. code:: sh
$ git config --global user.name "Name LastName"
$ git config --global user.email "Email Address"
^^^^^^^^^
Docker CE
^^^^^^^^^
5. Install the required Docker CE packages in the Ubuntu host system. See
`Get Docker CE for
Ubuntu <https://docs.docker.com/install/linux/docker-ce/ubuntu/#os-requirements>`__
for more information.
6. Add your *<user>* to the Docker group, then log out and back in so the change takes effect:
.. code:: sh
$ sudo usermod -aG docker <user>
^^^^^^^^^^^^^^^^^
Android Repo Tool
^^^^^^^^^^^^^^^^^
7. Install the required Android Repo Tool in the Ubuntu host system. Follow
the steps in the `Installing
Repo <https://source.android.com/setup/build/downloading#installing-repo>`__
section.
**********************
Install public SSH key
**********************
#. Follow these instructions on GitHub to `Generate a Public SSH
Key <https://help.github.com/articles/connecting-to-github-with-ssh>`__.
Then upload your public key to your GitHub and Gerrit account
profiles:
- `Upload to
Github <https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account>`__
- `Upload to
Gerrit <https://review.openstack.org/#/settings/ssh-keys>`__
****************************
Create a workspace directory
****************************
#. Create a *starlingx* workspace directory on your system.
Best practices dictate creating the workspace directory
in your $HOME directory:
.. code:: sh
$ mkdir -p $HOME/starlingx/
*************************
Install stx-tools project
*************************
#. Under your $HOME directory, clone the <stx-tools> project:
.. code:: sh
$ cd $HOME
$ git clone https://git.starlingx.io/stx-tools
#. Navigate to the *<$HOME/stx-tools>* project
directory:
.. code:: sh
$ cd $HOME/stx-tools/
-----------------------------
Prepare the base Docker image
-----------------------------
The StarlingX base Docker image handles all steps related to StarlingX ISO
creation. This section describes how to customize the base Docker image
building process.
********************
Configuration values
********************
You can customize values for the StarlingX base Docker image using a
text-based configuration file named ``localrc``:
- ``HOST_PREFIX`` points to the directory that hosts the 'designer'
subdirectory for source code, the 'loadbuild' subdirectory for
the build environment, generated RPMs, and the ISO image.
- ``HOST_MIRROR_DIR`` points to the directory that hosts the CentOS mirror
repository.
^^^^^^^^^^^^^^^^^^^^^^^^^^
localrc configuration file
^^^^^^^^^^^^^^^^^^^^^^^^^^
Create your ``localrc`` configuration file. For example:
.. code:: sh
# tbuilder localrc
MYUNAME=<your user name>
PROJECT=starlingx
HOST_PREFIX=$HOME/starlingx/workspace
HOST_MIRROR_DIR=$HOME/starlingx/mirror
***************************
Build the base Docker image
***************************
Once the ``localrc`` configuration file has been customized, it is time
to build the base Docker image.
#. If your environment requires a proxy, set the http/https proxy variables
in your Dockerfile before building the Docker image:
.. code:: sh
ENV http_proxy " http://your.actual_http_proxy.com:your_port "
ENV https_proxy " https://your.actual_https_proxy.com:your_port "
ENV ftp_proxy " http://your.actual_ftp_proxy.com:your_port "
RUN echo " proxy=http://your-proxy.com:port " >> /etc/yum.conf
#. The ``tb.sh`` script automates the Base Docker image build:
.. code:: sh
./tb.sh create
----------------------------------
Build the CentOS mirror repository
----------------------------------
The creation of the StarlingX ISO relies on a repository of RPM binaries,
RPM sources, and compressed tar files. This section describes how to build
this CentOS mirror repository.
*******************************
Run repository Docker container
*******************************
| Run the following commands in a terminal identified as "**One**":
#. Navigate to the *$HOME/stx-tools/centos-mirror-tools* project
directory:
.. code:: sh
$ cd $HOME/stx-tools/centos-mirror-tools/
#. Launch the Docker container using the previously created base Docker image
*<repository>:<tag>*. As /localdisk is defined as the workdir of the
container, you should use the same folder name to define the volume.
The container starts to run and populate 'logs' and 'output' folders in
this directory. The container runs from the same directory in which the
scripts are stored.
.. code:: sh
$ docker run -it --volume $(pwd):/localdisk local/$USER-stx-builder:7.4 bash
*****************
Download packages
*****************
#. Inside the Docker container, enter the following commands to download
the required packages to populate the CentOS mirror repository:
::
# cd localdisk && bash download_mirror.sh
#. Monitor the download of packages until it is complete. When the download
is complete, the following message appears:
::
totally 17 files are downloaded!
step #3: done successfully
IMPORTANT: The following 3 files are just bootstrap versions. Based on them, the workable images
for StarlingX could be generated by running "update-pxe-network-installer" command after "build-iso"
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz
***************
Verify packages
***************
#. Verify no missing or failed packages exist:
::
# cat logs/*_missing_*.log
# cat logs/*_failmove_*.log
#. If missing or failed packages exist, which is usually caused by network
instability or timeouts, you need to download the packages manually.
Doing so ensures you get all RPMs listed in
*rpms_3rdparties.lst*, *rpms_centos.lst*, and *rpms_centos3rdparties.lst*.
******************
Packages structure
******************
The following is a general overview of the package structure that results
from downloading the packages:
::
/home/<user>/stx-tools/centos-mirror-tools/output
└── stx-r1
└── CentOS
└── pike
├── Binary
│   ├── EFI
│   ├── images
│   ├── isolinux
│   ├── LiveOS
│   ├── noarch
│   └── x86_64
├── downloads
│   ├── integrity
│   └── puppet
└── Source
*******************************
Create CentOS mirror repository
*******************************
Outside your Repository Docker container, in another terminal identified
as "**Two**", run the following commands:
#. From terminal identified as "**Two**", create a *mirror/CentOS*
directory under your *starlingx* workspace directory:
.. code:: sh
$ mkdir -p $HOME/starlingx/mirror/CentOS/
#. Copy the CentOS mirror repository built under
*$HOME/stx-tools/centos-mirror-tools* to the *$HOME/starlingx/mirror/*
workspace directory:
.. code:: sh
$ cp -r $HOME/stx-tools/centos-mirror-tools/output/stx-r1/ $HOME/starlingx/mirror/CentOS/
-------------------------
Create StarlingX packages
-------------------------
*****************************
Run building Docker container
*****************************
#. From the terminal identified as "**Two**", create the workspace folder:
.. code:: sh
$ mkdir -p $HOME/starlingx/workspace
#. Navigate to the *$HOME/stx-tools* project directory:
.. code:: sh
$ cd $HOME/stx-tools
#. Verify environment variables:
.. code:: sh
$ bash tb.sh env
#. Run the building Docker container:
.. code:: sh
$ bash tb.sh run
#. Execute the building Docker container:
.. code:: sh
$ bash tb.sh exec
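Once inside the building container, you can optionally confirm that the build
environment variables used in the following steps are set. The variables below
are the ones referenced later in this guide; if any are empty, review your
``localrc`` and tbuilder setup:
.. code:: sh
$ echo $MY_REPO_ROOT_DIR
$ echo $MY_REPO
$ echo $MY_WORKSPACE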
*********************************
Download source code repositories
*********************************
#. From the terminal identified as "**Two**", which is now inside the
Building Docker container, start the internal environment:
.. code:: sh
$ eval $(ssh-agent)
$ ssh-add
#. Use the repo tool to create a local clone of the stx-manifest
Git repository based on the "r/2018.10" branch:
.. code:: sh
$ cd $MY_REPO_ROOT_DIR
$ repo init -u https://git.starlingx.io/stx-manifest -m default.xml -b r/2018.10
**NOTE:** To use the "repo" command to clone the stx-manifest repository and
check out the "master" branch, omit the "-b r/2018.10" option.
Following is an example:
.. code:: sh
$ repo init -u https://git.starlingx.io/stx-manifest -m default.xml
#. Synchronize the repository:
.. code:: sh
$ repo sync -j`nproc`
#. Create a tarballs repository:
.. code:: sh
$ ln -s /import/mirrors/CentOS/stx-r1/CentOS/pike/downloads/ $MY_REPO/stx/
Alternatively, you can run the "populate_downloads.sh" script to copy
the tarballs instead of using a symlink:
.. code:: sh
$ populate_downloads.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
Outside the container:
#. From another terminal identified as "**Three**", create mirror binaries:
.. code:: sh
$ mkdir -p $HOME/starlingx/mirror/CentOS/stx-installer
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img $HOME/starlingx/mirror/CentOS/stx-installer/initrd.img
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz $HOME/starlingx/mirror/CentOS/stx-installer/vmlinuz
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img $HOME/starlingx/mirror/CentOS/stx-installer/squashfs.img
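Optionally verify that the three installer files are now in place. This is
just a quick check of the copy commands above:
.. code:: sh
$ ls -l $HOME/starlingx/mirror/CentOS/stx-installer/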
**************
Build packages
**************
#. Go back to the terminal identified as "**Two**", which is the Building Docker container.
#. **Temporary:** build-pkgs errors. Be prepared for some missing or
corrupted rpm and tarball packages generated during
`Build the CentOS mirror repository`_, which will cause the next step
to fail. If that step does fail, manually download those missing or
corrupted packages.
#. Update the symbolic links:
.. code:: sh
$ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
#. Build the packages:
.. code:: sh
$ build-pkgs
#. **Optional:** Generate the cgcs-tis-repo:
While this step is optional, it improves performance on subsequent
builds. The cgcs-tis-repo has the dependency information that
sequences the build order. To generate or update the information,
execute the following command after building modified or new
packages.
.. code:: sh
$ generate-cgcs-tis-repo
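Note that ``build-pkgs`` also accepts individual package names, as used later
in this guide for *pxe-network-installer*. If only a few packages changed, a
selective rebuild is usually much faster than a full ``build-pkgs`` run. A
minimal sketch, where the package name is an example placeholder:
.. code:: sh
$ build-pkgs --clean <package-name>
$ build-pkgs <package-name>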
-------------------
Build StarlingX ISO
-------------------
#. Build the image:
.. code:: sh
$ build-iso
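When ``build-iso`` completes successfully, the ISO is written under your build
workspace. The path below assumes the default workspace layout with an
``export`` subdirectory; adjust it if your environment differs:
.. code:: sh
$ ls -lh $MY_WORKSPACE/export/*.iso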
---------------
Build installer
---------------
To get your StarlingX ISO ready to use, you must create the initialization
files used to boot the ISO, additional controllers, and compute nodes.
**NOTE:** You only need this procedure for your first build and
whenever you upgrade the kernel.
After running "build-iso", run:
.. code:: sh
$ build-pkgs --installer
This builds *rpm* and *anaconda* packages. Then run:
.. code:: sh
$ update-pxe-network-installer
The *update-pxe-network-installer* covers the steps detailed in
*$MY_REPO/stx/stx-metal/installer/initrd/README*. This script
creates three files on
*/localdisk/loadbuild/pxe-network-installer/output*.
::
new-initrd.img
new-squashfs.img
new-vmlinuz
Rename the files as follows:
::
initrd.img
squashfs.img
vmlinuz
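For example, the files can be renamed in place (using the output directory
given above):
.. code:: sh
$ cd /localdisk/loadbuild/pxe-network-installer/output
$ mv new-initrd.img initrd.img
$ mv new-squashfs.img squashfs.img
$ mv new-vmlinuz vmlinuz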
Two ways exist for using these files:
#. Store the files in the */import/mirrors/CentOS/stx-installer/* folder
for future use.
#. Store the files in an arbitrary location and modify the
*$MY_REPO/stx/stx-metal/installer/pxe-network-installer/centos/build_srpm.data*
file to point to these files.
Recreate the *pxe-network-installer* package and rebuild the image:
.. code:: sh
$ build-pkgs --clean pxe-network-installer
$ build-pkgs pxe-network-installer
$ build-iso
Your ISO image should be able to boot.
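If you want a quick smoke test before installing on hardware, one option (not
part of the official procedure, and assuming QEMU/KVM is installed on your
workstation) is to boot the ISO in a throwaway virtual machine and confirm
that the installer menu appears:
.. code:: sh
$ qemu-system-x86_64 -enable-kvm -m 8192 -smp 4 -boot d -cdrom <path-to-iso>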
****************
Additional notes
****************
- To get the first boot working, this complete procedure needs
to be done. However, once the init files are created, they can be
stored in a shared location where different developers can make use
of them. Updating these files is not a frequent task and should be
done whenever the kernel is upgraded.
- StarlingX is in active development. Consequently, it is possible that in the
future the **0.2** version will change to a more generic solution.
---------------
Build avoidance
---------------
*******
Purpose
*******
Greatly reduce build times after using "repo" to synchronize a local
repository with an upstream source (i.e. "repo sync").
Build avoidance works well for designers working
within a regional office. Starting from a new workspace, "build-pkgs"
typically requires three or more hours to complete. Build avoidance
reduces this step to approximately 20 minutes.
***********
Limitations
***********
- Little or no benefit for designers who refresh a pre-existing
workspace at least daily (e.g. download_mirror.sh, repo sync,
generate-cgcs-centos-repo.sh, build-pkgs, build-iso). In these cases,
an incremental build (i.e. reuse of same workspace without a "build-pkgs
--clean") is often just as efficient.
- Not likely to be useful to solo designers, or to teleworkers who wish
to compile using their home computers. Build avoidance downloads build
artifacts from a reference build, and WAN speeds are generally too
slow.
*****************
Method (in brief)
*****************
#. Reference Builds
- A server in the regional office performs regular (e.g. daily)
automated builds using existing methods. These builds are called
"reference builds".
- The builds are timestamped and preserved for some time (i.e. a
number of weeks).
- A build CONTEXT, which is a file produced by "build-pkgs"
at location *$MY_WORKSPACE/CONTEXT*, is captured. It is a bash script that can
cd into each and every Git repository and check out the SHA that contributed to
the build.
- For each package built, a file captures the md5sums of all the
source code inputs required to build that package. These files are
also produced by "build-pkgs" at location
*$MY_WORKSPACE//rpmbuild/SOURCES//srpm_reference.md5*.
- All these build products are accessible locally (e.g. a regional
office) using "rsync".
**NOTE:** Other protocols can be added later.
#. Designers
- Request a build avoidance build. This is recommended after you have
synchronized the repository (i.e. "repo sync").
::
repo sync
generate-cgcs-centos-repo.sh
populate_downloads.sh
build-pkgs --build-avoidance
- Use combinations of additional arguments, environment variables, and a
configuration file unique to the regional office to specify a URL
to the reference builds.
- Using a configuration file to specify the location of your reference build:
::
mkdir -p $MY_REPO/local-build-data
cat <<- EOF > $MY_REPO/local-build-data/build_avoidance_source
# Optional, these are already the default values.
BUILD_AVOIDANCE_DATE_FORMAT="%Y%m%d"
BUILD_AVOIDANCE_TIME_FORMAT="%H%M%S"
BUILD_AVOIDANCE_DATE_TIME_DELIM="T"
BUILD_AVOIDANCE_DATE_TIME_POSTFIX="Z"
BUILD_AVOIDANCE_DATE_UTC=1
BUILD_AVOIDANCE_FILE_TRANSFER="rsync"
# Required, unique values for each regional office
BUILD_AVOIDANCE_USR="jenkins"
BUILD_AVOIDANCE_HOST="stx-builder.mycompany.com"
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
EOF
- Using command-line arguments to specify the location of your reference
build:
::
build-pkgs --build-avoidance --build-avoidance-dir /localdisk/loadbuild/jenkins/StarlingX_Reference_Build --build-avoidance-host stx-builder.mycompany.com --build-avoidance-user jenkins
- Prior to your build attempt, you need to accept the host key.
Doing so prevents "rsync" failures on a "yes/no" prompt.
You only have to do this once.
::
grep -q $BUILD_AVOIDANCE_HOST $HOME/.ssh/known_hosts
if [ $? != 0 ]; then
ssh-keyscan $BUILD_AVOIDANCE_HOST >> $HOME/.ssh/known_hosts
fi
- "build-pkgs" does the following:
- From newest to oldest, scans the CONTEXTs of the various
reference builds. Selects the first (i.e. most recent) context that
satisfies the following requirement: for every Git repository, the SHA
specified in the CONTEXT is present.
- The selected context might be slightly out of date, but not by
more than a day. This assumes daily reference builds are run.
- If the context has not been previously downloaded, it is downloaded
now. This means downloading select portions of the
reference build workspace into the designer's workspace, including
all the SRPMS, RPMS, MD5SUMS, and miscellaneous supporting
files. Downloading these files usually takes about 10 minutes
over an office LAN.
- The designer could have additional commits or uncommitted changes
not present in the reference builds. Affected packages are
identified by the differing md5sum's. In these cases, the packages
are re-built. Re-builds usually take five or more minutes,
depending on the packages that have changed.
- What if no valid reference build is found? Then build-pkgs will fall
back to a regular build.
****************
Reference builds
****************
- The regional office implements an automated build that pulls the
latest StarlingX software and builds it on a regular basis (e.g.
daily builds). Jenkins, cron, or similar tools can trigger these builds.
- Each build is saved to a unique directory, and preserved for a time
that reflects how long a designer might be expected to work on a
private branch without synchronizing with the master branch,
typically about two weeks.
- The *MY_WORKSPACE* directory for the build shall have a common root
directory, and a leaf directory that is a sortable time stamp. The
suggested format is *YYYYMMDDThhmmss*.
.. code:: sh
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
BUILD_TIMESTAMP=$(date -u '+%Y%m%dT%H%M%SZ')
MY_WORKSPACE=${BUILD_AVOIDANCE_DIR}/${BUILD_TIMESTAMP}
- Designers can access all build products over the internal network of
the regional office. The current prototype employs "rsync". Other
protocols that can efficiently share, copy, or transfer large directories
of content can be added as needed.
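To see which reference builds are currently available from a designer's
workstation, the remote directory can be listed with rsync (when given a
remote source and no destination, rsync lists the directory contents). This
assumes the same user, host, and directory values shown in the configuration
example above:
::
rsync jenkins@stx-builder.mycompany.com:/localdisk/loadbuild/jenkins/StarlingX_Reference_Build/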
^^^^^^^^^^^^^^
Advanced usage
^^^^^^^^^^^^^^
Can the reference build itself use build avoidance? Yes it can.
Can it reference itself? Yes it can.
In both these cases, caution is advised. To protect against any possible
'divergence from reality', you should limit how many steps a build
avoidance build is removed from a full build.
Suppose we want to implement a self-referencing daily build in an
environment where a full build already occurs every Saturday.
To protect ourselves from a
build failure on Saturday we also want a limit of seven days since
the last full build. Your build script might look like this ...
::
...
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
BUILD_AVOIDANCE_HOST="stx-builder.mycompany.com"
FULL_BUILD_DAY="Saturday"
MAX_AGE_DAYS=7
LAST_FULL_BUILD_LINK="$BUILD_AVOIDANCE_DIR/latest_full_build"
LAST_FULL_BUILD_DAY=""
NOW_DAY=$(date -u "+%A")
BUILD_TIMESTAMP=$(date -u '+%Y%m%dT%H%M%SZ')
MY_WORKSPACE=${BUILD_AVOIDANCE_DIR}/${BUILD_TIMESTAMP}
# update software
repo init -u ${BUILD_REPO_URL} -b ${BUILD_BRANCH}
repo sync --force-sync
$MY_REPO_ROOT_DIR/stx-tools/toCOPY/generate-cgcs-centos-repo.sh
$MY_REPO_ROOT_DIR/stx-tools/toCOPY/populate_downloads.sh
# User can optionally define BUILD_METHOD equal to one of 'FULL', 'AVOIDANCE', or 'AUTO'
# Sanitize BUILD_METHOD
if [ "$BUILD_METHOD" != "FULL" ] && [ "$BUILD_METHOD" != "AVOIDANCE" ]; then
BUILD_METHOD="AUTO"
fi
# First build test
if [ "$BUILD_METHOD" != "FULL" ] && [ ! -L $LAST_FULL_BUILD_LINK ]; then
echo "latest_full_build symlink missing, forcing full build"
BUILD_METHOD="FULL"
fi
# Build day test
if [ "$BUILD_METHOD" == "AUTO" ] && [ "$NOW_DAY" == "$FULL_BUILD_DAY" ]; then
echo "Today is $FULL_BUILD_DAY, forcing full build"
BUILD_METHOD="FULL"
fi
# Build age test
if [ "$BUILD_METHOD" != "FULL" ]; then
LAST_FULL_BUILD_DATE=$(basename $(readlink $LAST_FULL_BUILD_LINK) | cut -d '_' -f 1)
LAST_FULL_BUILD_DAY=$(date -d $LAST_FULL_BUILD_DATE "+%A")
AGE_SECS=$(( $(date "+%s") - $(date -d $LAST_FULL_BUILD_DATE "+%s") ))
AGE_DAYS=$(( $AGE_SECS/60/60/24 ))
if [ $AGE_DAYS -ge $MAX_AGE_DAYS ]; then
echo "Haven't had a full build in $AGE_DAYS days, forcing full build"
BUILD_METHOD="FULL"
else
# Only use build avoidance when the age limit has not been exceeded
BUILD_METHOD="AVOIDANCE"
fi
fi
#Build it
if [ "$BUILD_METHOD" == "FULL" ]; then
build-pkgs --no-build-avoidance
else
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER
fi
if [ $? -ne 0 ]; then
echo "Build failed in build-pkgs"
exit 1
fi
build-iso
if [ $? -ne 0 ]; then
echo "Build failed in build-iso"
exit 1
fi
if [ "$BUILD_METHOD" == "FULL" ]; then
# A successful full build. Set last full build symlink.
if [ -L $LAST_FULL_BUILD_LINK ]; then
rm -rf $LAST_FULL_BUILD_LINK
fi
ln -sf $MY_WORKSPACE $LAST_FULL_BUILD_LINK
fi
...
A final note:
To use the full build day as your avoidance build reference point,
modify the "build-pkgs" commands above to add the "--build-avoidance-day" option,
as shown in the following two examples:
::
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER --build-avoidance-day $FULL_BUILD_DAY
# Here is another example with a bit more shuffling of the above script.
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER --build-avoidance-day $LAST_FULL_BUILD_DAY
The advantage is that our build is never more than one step removed
from a full build. This assumes the full build was successful.
The disadvantage is that by the end of the week, the reference build is getting
rather old. During active weeks, build times could approach build times for
full builds.
View File
@ -2,16 +2,10 @@
StarlingX Documentation
=======================
Welcome to the StarlingX documentation. This is the documentation
for release stx.2018.10.
Additional information about this release is available in the
Welcome to the StarlingX documentation. This is the documentation for release
stx.2018.10. Additional information about this release is available in the
:ref:`release-notes`.
.. Add the additional version info here e.g.
The following documentation versions are available:
StarlingX stx.2019.09 | StarlingX stx.2019.04
For more information about the project, consult the
`Project Specifications <specs/index.html>`__.
View File
@ -1,8 +1,6 @@
.. _controller-storage:
===================================
Controller Storage Deployment Guide
===================================
===============================================
Controller storage deployment guide stx.2018.10
===============================================
.. contents::
:local:
@ -15,94 +13,94 @@ For approved instructions, see the
`StarlingX Cloud with Controller Storage wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandard>`__.
----------------------
Deployment Description
Deployment description
----------------------
The Controller Storage deployment option provides a 2x Node High Availability
Controller / Storage Cluster with:
The Controller Storage deployment option provides a 2x node high availability
controller / storage cluster with:
- A pool of up to seven Compute Nodes (pool size limit due to the capacity of
the Storage Function).
- A growth path for Storage to the full Standard solution with an independent
CEPH Storage Cluster.
- High Availability Services runnning across the Controller Nodes in either
Active/Active or Active/Standby mode.
- Storage Function running on top of LVM on single second disk, DRBD-sync'd
between the Controller Nodes.
- A pool of up to seven compute nodes (pool size limit due to the capacity of
the storage function).
- A growth path for storage to the full standard solution with an independent
CEPH storage cluster.
- High availability services runnning across the controller nodes in either
active/active or active/standby mode.
- Storage function running on top of LVM on single second disk, DRBD-sync'd
between the controller nodes.
.. figure:: figures/starlingx-deployment-options-controller-storage.png
:scale: 50%
:alt: Controller Storage Deployment Configuration
:alt: Controller Storage deployment configuration
*Controller Storage Deployment Configuration*
*Controller Storage deployment configuration*
A Controller Storage deployment provides protection against overall Controller
Node and Compute Node failure:
A Controller Storage deployment provides protection against overall controller
node and compute node failure:
- On overall Controller Node failure, all Controller High Availability Services
go Active on the remaining healthy Controller Node.
- On overall Compute Node failure, Virtual Machines on failed Compute Node are
recovered on the remaining healthy Compute Nodes.
- On overall controller node failure, all controller high availability services
go active on the remaining healthy controller node.
- On overall compute node failure, virtual machines on failed compute node are
recovered on the remaining healthy compute nodes.
------------------------------------
Preparing Controller Storage Servers
Preparing controller storage servers
------------------------------------
**********
Bare Metal
Bare metal
**********
Required Servers:
Required servers:
- Controllers: 2
- Computes: 2 - 100
^^^^^^^^^^^^^^^^^^^^^
Hardware Requirements
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
Controller Storage will be deployed, include:
- Minimum Processor:
- Minimum processor:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Memory:
- 64 GB Controller
- 32 GB Compute
- 64 GB controller
- 32 GB compute
- BIOS:
- Hyper-Threading Tech: Enabled
- Virtualization Technology: Enabled
- VT for Directed I/O: Enabled
- CPU Power and Performance Policy: Performance
- CPU C State Control: Disabled
- Plug & Play BMC Detection: Disabled
- Hyper-Threading technology: Enabled
- Virtualization technology: Enabled
- VT for directed I/O: Enabled
- CPU power and performance policy: Performance
- CPU C state control: Disabled
- Plug & play BMC detection: Disabled
- Primary Disk:
- Primary disk:
- 500 GB SDD or NVMe Controller
- 120 GB (min. 10K RPM) Compute
- 500 GB SDD or NVMe controller
- 120 GB (min. 10K RPM) compute
- Additional Disks:
- Additional disks:
- 1 or more 500 GB disks (min. 10K RPM) Compute
- 1 or more 500 GB disks (min. 10K RPM) compute
- Network Ports\*
- Network ports\*
- Management: 10GE Controller, Compute
- OAM: 10GE Controller
- Data: n x 10GE Compute
- Management: 10GE controller, compute
- OAM: 10GE controller
- Data: n x 10GE compute
*******************
Virtual Environment
Virtual environment
*******************
Run the libvirt qemu setup scripts. Setting up virtualized OAM and
Management networks:
management networks:
::
@ -123,7 +121,7 @@ are:
- controllerstorage-compute-1
^^^^^^^^^^^^^^^^^^^^^^^^^
Power Up a Virtual Server
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^
To power up a virtual server, run the following command:
@ -139,7 +137,7 @@ e.g.
$ sudo virsh start controllerstorage-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access Virtual Server Consoles
Access virtual server consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML for virtual servers in stx-tools repo, deployment/libvirt,
@ -151,9 +149,9 @@ domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.
When booting the Controller-0 for the first time, both the serial and
When booting the controller-0 for the first time, both the serial and
graphical consoles will present the initial configuration menu for the
cluster. One can select serial or graphical console for Controller-0.
cluster. One can select serial or graphical console for controller-0.
For the other nodes however only serial is used, regardless of which
option is selected.
@ -164,35 +162,35 @@ sequence which follows the boot device selection. One has a few seconds
to do this.
--------------------------------
Installing the Controller-0 Host
Installing the controller-0 host
--------------------------------
Installing Controller-0 involves initializing a host with software and
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes Controller-0.
configured bootstrapped host becomes controller-0.
Procedure:
#. Power on the server that will be Controller-0 with the StarlingX ISO
#. Power on the server that will be controller-0 with the StarlingX ISO
on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.
*************************
Initializing Controller-0
Initializing controller-0
*************************
This section describes how to initialize StarlingX in host Controller-0.
This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as Controller-0, with the StarlingX
Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **Standard Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the Controller-0 host, select the type of installation
appears in the controller-0 host, select the type of installation
"Standard Controller Configuration".
- **Graphical Console**
@ -202,13 +200,13 @@ StarlingX ISO booting options:
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the Security Profile.
- Select "Standard Security Boot Profile" as the security profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the Controller-0 host, briefly displays a GNU GRUB screen, and then
on the controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The
Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
@ -229,28 +227,27 @@ Enter the new password again to confirm it:
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for
configuration.
controller-0 is initialized with StarlingX, and is ready for configuration.
************************
Configuring Controller-0
Configuring controller-0
************************
This section describes how to perform the Controller-0 configuration
This section describes how to perform the controller-0 configuration
interactively just to bootstrap system with minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be Controller-0).
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the Virtual Environment, you can accept all the default values
- For the virtual environment, you can accept all the default values
immediately after system date and time.
- For a Physical Deployment, answer the bootstrap configuration
- For a physical deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as Controller-0. The prompts are grouped by configuration
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:
@ -283,21 +280,21 @@ Accept all the default values immediately after system date and time.
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the Controller-0 OAM IP Address. The
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions will use the CLI.
------------------------------------
Provisioning Controller-0 and System
Provisioning controller-0 and system
------------------------------------
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*********************************************
Configuring Provider Networks at Installation
Configuring provider networks at installation
*********************************************
You must set up provider networks at installation so that you can attach
@ -311,11 +308,11 @@ Set up one provider network of the vlan type, named providernet-a:
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
*************************************
Configuring Cinder on Controller Disk
Configuring Cinder on controller disk
*************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk
physical disk:
::
@ -328,7 +325,7 @@ physical disk
| 89694799-0dd8-4532-8636-c0d8aabfe215 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
Create the 'cinder-volumes' local volume group
Create the 'cinder-volumes' local volume group:
::
@ -353,7 +350,7 @@ Create the 'cinder-volumes' local volume group
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group
Create a disk partition to add to the volume group:
::
@ -377,7 +374,7 @@ Create a disk partition to add to the volume group
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
Wait for the new partition to be created (i.e. status=Ready):
::
@ -391,7 +388,7 @@ Wait for the new partition to be created (i.e. status=Ready)
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group
Add the partition to the volume group:
::
@ -416,14 +413,14 @@ Add the partition to the volume group
| updated_at | None |
+--------------------------+--------------------------------------------------+
Enable LVM Backend.
Enable LVM backend:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed
Wait for the storage backend to leave "configuring" state. Confirm LVM
Backend storage is configured:
backend storage is configured:
::
@ -436,11 +433,11 @@ Backend storage is configured:
+--------------------------------------+------------+---------+------------+------+----------+...
**********************
Unlocking Controller-0
Unlocking controller-0
**********************
You must unlock Controller-0 so that you can use it to install the
remaining hosts. On Controller-0, acquire Keystone administrative
You must unlock controller-0 so that you can use it to install the
remaining hosts. On controller-0, acquire Keystone administrative
privileges. Use the system host-unlock command:
::
@ -449,13 +446,13 @@ privileges. Use the system host-unlock command:
The host is rebooted. During the reboot, the command line is
unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the Controller-0 console.
progress of the reboot, use the controller-0 console.
****************************************
Verifying the Controller-0 Configuration
Verifying the controller-0 configuration
****************************************
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
@ -475,7 +472,7 @@ Verify that the StarlingX controller services are running:
...
+-----+-------------------------------+--------------+----------------+
Verify that Controller-0 is unlocked, enabled, and available:
Verify that controller-0 is unlocked, enabled, and available:
::
@ -487,7 +484,7 @@ Verify that Controller-0 is unlocked, enabled, and available:
+----+--------------+-------------+----------------+-------------+--------------+
---------------------------------------
Installing Controller-1 / Compute Hosts
Installing controller-1 / compute hosts
---------------------------------------
After initializing and configuring an active controller, you can add and
@ -495,7 +492,7 @@ configure a backup controller and additional compute hosts. For each
host do the following:
*****************
Initializing Host
Initializing host
*****************
Power on Host. In host console you will see:
@ -508,16 +505,16 @@ Power on Host. In host console you will see:
controller node in order to proceed.
***************************************
Updating Host Host Name and Personality
Updating host hostname and personality
***************************************
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for Controller-0 to discover new host, list the host until new
Wait for controller-0 to discover new host, list the host until new
UNKNOWN host shows up in table:
::
@ -542,22 +539,22 @@ Or for compute-0:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0
See also: 'system help host-update'
See also: 'system help host-update'.
Unless it is known that the host's configuration can support the
installation of more than one node, it is recommended that the
installation and configuration of each node be serialized. For example,
if the entire cluster has its virtual disks hosted on the host's root
disk which happens to be a single rotational type hard disk, then the
host cannot (reliably) support parallel node installation.
Unless it is known that the host's configuration can support the installation of
more than one node, it is recommended that the installation and configuration of
each node be serialized. For example, if the entire cluster has its virtual
disks hosted on the host's root disk which happens to be a single rotational
type hard disk, then the host cannot (reliably) support parallel node
installation.
***************
Monitoring Host
Monitoring host
***************
On Controller-0, you can monitor the installation progress by running
the system host-show command for the host periodically. Progress is
shown in the install_state field.
On controller-0, you can monitor the installation progress by running the system
host-show command for the host periodically. Progress is shown in the
install_state field:
::
@ -566,16 +563,16 @@ shown in the install_state field.
| install_state | booting |
| install_state_info | None |
Wait while the host is configured and rebooted. Up to 20 minutes may be
required for a reboot, depending on hardware. When the reboot is
complete, the host is reported as Locked, Disabled, and Online.
Wait while the host is configured and rebooted. Up to 20 minutes may be required
for a reboot, depending on hardware. When the reboot is complete, the host is
reported as locked, disabled, and online.
*************
Listing Hosts
Listing hosts
*************
Once all Nodes have been installed, configured and rebooted, on
Controller-0 list the hosts:
Once all nodes have been installed, configured and rebooted, on controller-0
list the hosts:
::
@ -590,10 +587,10 @@ Controller-0 list the hosts:
+----+--------------+-------------+----------------+-------------+--------------+
-------------------------
Provisioning Controller-1
Provisioning controller-1
-------------------------
On Controller-0, list hosts
On controller-0, list hosts:
::
@ -607,28 +604,28 @@ On Controller-0, list hosts
+----+--------------+-------------+----------------+-------------+--------------+
***********************************************
Provisioning Network Interfaces on Controller-1
Provisioning network interfaces on controller-1
***********************************************
In order to list out hardware port names, types, pci-addresses that have
In order to list out hardware port names, types, PCI addresses that have
been discovered:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
Provision the oam interface for Controller-1:
Provision the OAM interface for controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
************************************
Provisioning Storage on Controller-1
Provisioning storage on controller-1
************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk
physical disk:
::
@ -641,7 +638,7 @@ physical disk
| 70b83394-968e-4f0d-8a99-7985cd282a21 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 |
+--------------------------------------+-----------+---------+---------+-------+------------+
Assign Cinder storage to the physical disk
Assign Cinder storage to the physical disk:
::
@ -667,7 +664,7 @@ Assign Cinder storage to the physical disk
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group based on uuid of the
physical disk
physical disk:
::
@ -691,7 +688,7 @@ physical disk
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
Wait for the new partition to be created (i.e. status=Ready):
::
@ -705,7 +702,7 @@ Wait for the new partition to be created (i.e. status=Ready)
| |...| | ... | | |
+--------------------------------------+...+------------+...+--------+----------------------+
Add the partition to the volume group
Add the partition to the volume group:
::
@ -731,25 +728,24 @@ Add the partition to the volume group
+--------------------------+--------------------------------------------------+
**********************
Unlocking Controller-1
Unlocking controller-1
**********************
Unlock Controller-1
Unlock controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while the Controller-1 is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware.
Wait while the controller-1 is rebooted. Up to 10 minutes may be required for a
reboot, depending on hardware.
**REMARK:** Controller-1 will remain in 'degraded' state until
data-syncing is complete. The duration is dependant on the
virtualization host's configuration - i.e., the number and configuration
of physical disks used to host the nodes' virtual disks. Also, the
management network is expected to have link capacity of 10000 (1000 is
not supported due to excessive data-sync time). Use 'fm alarm-list' to
confirm status.
**REMARK:** controller-1 will remain in 'degraded' state until data-syncing is
complete. The duration is dependant on the virtualization host's configuration -
i.e., the number and configuration of physical disks used to host the nodes'
virtual disks. Also, the management network is expected to have link capacity of
10000 (1000 is not supported due to excessive data-sync time). Use
'fm alarm-list' to confirm status.
::
@ -762,26 +758,26 @@ confirm status.
...
---------------------------
Provisioning a Compute Host
Provisioning a compute host
---------------------------
You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each Compute Host do the following:
host before you can unlock it. For each compute host do the following:
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*************************************************
Provisioning Network Interfaces on a Compute Host
Provisioning network interfaces on a compute host
*************************************************
On Controller-0, in order to list out hardware port names, types,
On controller-0, in order to list out hardware port names, types,
pci-addresses that have been discovered:
- **Only in Virtual Environment**: Ensure that the interface used is
- **Only in virtual environment**: Ensure that the interface used is
one of those attached to host bridge with model type "virtio" (i.e.,
eth1000 and eth1001). The model type "e1000" emulated devices will
not work for provider networks:
@ -790,21 +786,21 @@ pci-addresses that have been discovered:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0
Provision the data interface for Compute:
Provision the data interface for compute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
***************************
VSwitch Virtual Environment
VSwitch virtual environment
***************************
**Only in Virtual Environment**. If the compute has more than 4 cpus,
the system will auto-configure the vswitch to use 2 cores. However some
virtual environments do not properly support multi-queue required in a
multi-cpu environment. Therefore run the following command to reduce the
vswitch cores to 1:
**Only in virtual environment**. If the compute has more than 4 cpus, the system
will auto-configure the vswitch to use 2 cores. However some virtual
environments do not properly support multi-queue required in a multi-CPU
environment. Therefore run the following command to reduce the vswitch cores to
1:
::
@ -820,7 +816,7 @@ vswitch cores to 1:
+--------------------------------------+-------+-----------+-------+--------+...
**************************************
Provisioning Storage on a Compute Host
Provisioning storage on a compute host
**************************************
Review the available disk space and capacity and obtain the uuid(s) of
@ -915,31 +911,30 @@ nova-local:
+-----------------+-------------------------------------------------------------------+
************************
Unlocking a Compute Host
Unlocking a compute host
************************
On Controller-0, use the system host-unlock command to unlock the
Compute node:
On controller-0, use the system host-unlock command to unlock the compute node:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while the Compute node is rebooted. Up to 10 minutes may be
Wait while the compute node is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware. The host is rebooted, and
its Availability State is reported as In-Test, followed by
its availability state is reported as in-test, followed by
unlocked/enabled.
-------------------
System Health Check
System health check
-------------------
***********************
Listing StarlingX Nodes
Listing StarlingX nodes
***********************
On Controller-0, after a few minutes, all nodes shall be reported as
Unlocked, Enabled, and Available:
On controller-0, after a few minutes, all nodes shall be reported as
unlocked, enabled, and available:
::
@ -954,18 +949,20 @@ Unlocked, Enabled, and Available:
+----+--------------+-------------+----------------+-------------+--------------+
*****************
System Alarm List
System alarm-list
*****************
When all nodes are Unlocked, Enabled and Available: check 'fm alarm-list' for issues.
When all nodes are unlocked, enabled and available: check 'fm alarm-list' for
issues.
Your StarlingX deployment is now up and running with 2x HA Controllers with Cinder
Storage, 2x Computes and all OpenStack services up and running. You can now proceed
with standard OpenStack APIs, CLIs and/or Horizon to load Glance Images, configure
Nova Flavors, configure Neutron networks and launch Nova Virtual Machines.
Your StarlingX deployment is now up and running with 2x HA controllers with
Cinder storage, 2x computes, and all OpenStack services up and running. You can
now proceed with standard OpenStack APIs, CLIs and/or Horizon to load Glance
images, configure Nova Flavors, configure Neutron networks and launch Nova
virtual machines.
----------------------
Deployment Terminology
Deployment terminology
----------------------
.. include:: deployment_terminology.rst
View File
@ -1,8 +1,6 @@
.. _dedicated-storage:
==================================
Dedicated Storage Deployment Guide
==================================
==============================================
Dedicated storage deployment guide stx.2018.10
==============================================
.. contents::
:local:
@ -15,11 +13,11 @@ For approved instructions, see the
`StarlingX Cloud with Dedicated Storage wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandardStorage>`__.
----------------------
Deployment Description
Deployment description
----------------------
Cloud with Dedicated Storage is the standard StarlingX deployment option with
independent Controller, Compute, and Storage Nodes.
independent controller, compute, and storage nodes.
This deployment option provides the maximum capacity for a single region
deployment, with a supported growth path to a multi-region deployment option by
@ -27,31 +25,31 @@ adding a secondary region.
.. figure:: figures/starlingx-deployment-options-dedicated-storage.png
:scale: 50%
:alt: Dedicated Storage Deployment Configuration
:alt: Dedicated Storage deployment configuration
*Dedicated Storage Deployment Configuration*
*Dedicated Storage deployment configuration*
Cloud with Dedicated Storage includes:
- 2x Node HA Controller Cluster with HA Services running across the Controller
Nodes in either Active/Active or Active/Standby mode.
- Pool of up to 100 Compute Nodes for hosting virtual machines and virtual
- 2x node HA controller cluster with HA services running across the controller
nodes in either active/active or active/standby mode.
- Pool of up to 100 compute nodes for hosting virtual machines and virtual
networks.
- 2-9x Node HA CEPH Storage Cluster for hosting virtual volumes, images, and
- 2-9x node HA CEPH storage cluster for hosting virtual volumes, images, and
object storage that supports a replication factor of 2 or 3.
Storage Nodes are deployed in replication groups of 2 or 3. Replication
Storage nodes are deployed in replication groups of 2 or 3. Replication
of objects is done strictly within the replication group.
Supports up to 4 groups of 2x Storage Nodes, or up to 3 groups of 3x Storage
Nodes.
Supports up to 4 groups of 2x storage nodes, or up to 3 groups of 3x storage
nodes.
-----------------------------------
Preparing Dedicated Storage Servers
Preparing dedicated storage servers
-----------------------------------
**********
Bare Metal
Bare metal
**********
Required Servers:
@ -65,51 +63,51 @@ Required Servers:
- Computes: 2 - 100
^^^^^^^^^^^^^^^^^^^^^
Hardware Requirements
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
Dedicated Storage will be deployed, include:
- Minimum Processor:
- Minimum processor:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Memory:
- 64 GB Controller, Storage
- 32 GB Compute
- 64 GB controller, storage
- 32 GB compute
- BIOS:
- Hyper-Threading Tech: Enabled
- Virtualization Technology: Enabled
- VT for Directed I/O: Enabled
- CPU Power and Performance Policy: Performance
- CPU C State Control: Disabled
- Plug & Play BMC Detection: Disabled
- Hyper-Threading technology: Enabled
- Virtualization technology: Enabled
- VT for directed I/O: Enabled
- CPU power and performance policy: Performance
- CPU C state control: Disabled
- Plug & play BMC detection: Disabled
- Primary Disk:
- Primary disk:
- 500 GB SDD or NVMe Controller
- 120 GB (min. 10K RPM) Compute, Storage
- 500 GB SDD or NVMe controller
- 120 GB (min. 10K RPM) compute and storage
- Additional Disks:
- Additional disks:
- 1 or more 500 GB disks (min. 10K RPM) Storage, Compute
- 1 or more 500 GB disks (min. 10K RPM) storage, compute
- Network Ports\*
- Network ports\*
- Management: 10GE Controller, Storage, Compute
- OAM: 10GE Controller
- Data: n x 10GE Compute
- Management: 10GE controller, storage, compute
- OAM: 10GE controller
- Data: n x 10GE compute
*******************
Virtual Environment
Virtual environment
*******************
Run the libvirt qemu setup scripts. Setting up virtualized OAM and
Management networks:
management networks:
::
@ -132,7 +130,7 @@ are:
- dedicatedstorage-storage-1
^^^^^^^^^^^^^^^^^^^^^^^^^
Power Up a Virtual Server
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^
To power up a virtual server, run the following command:
@ -148,7 +146,7 @@ e.g.
$ sudo virsh start dedicatedstorage-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access Virtual Server Consoles
Access virtual server consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML for virtual servers in stx-tools repo, deployment/libvirt,
@ -173,12 +171,12 @@ sequence which follows the boot device selection. One has a few seconds
to do this.
--------------------------------
Installing the Controller-0 Host
Installing the controller-0 host
--------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes Controller-0.
configured bootstrapped host becomes controller-0.
Procedure:
@ -187,21 +185,21 @@ Procedure:
#. Configure the controller using the config_controller script.
*************************
Initializing Controller-0
Initializing controller-0
*************************
This section describes how to initialize StarlingX in host Controller-0.
This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as Controller-0, with the StarlingX
Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **Standard Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the Controller-0 host, select the type of installation
appears in the controller-0 host, select the type of installation
"Standard Controller Configuration".
- **Graphical Console**
@ -211,13 +209,13 @@ StarlingX ISO booting options:
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the Security Profile.
- Select "Standard Security Boot Profile" as the security profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the Controller-0 host, briefly displays a GNU GRUB screen, and then
on the controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The
Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
@ -238,14 +236,13 @@ Enter the new password again to confirm it:
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for
configuration.
controller-0 is initialized with StarlingX, and is ready for configuration.
************************
Configuring Controller-0
Configuring controller-0
************************
This section describes how to perform the Controller-0 configuration
This section describes how to perform the controller-0 configuration
interactively just to bootstrap system with minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
@ -253,9 +250,9 @@ of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the Virtual Environment, you can accept all the default values
- For the virtual environment, you can accept all the default values
immediately after system date and time.
- For a Physical Deployment, answer the bootstrap configuration
- For a physical deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
@ -271,7 +268,7 @@ with no parameters:
Enter ! at any prompt to abort...
...
Accept all the default values immediately after system date and time
Accept all the default values immediately after system date and time:
::
@ -292,21 +289,21 @@ Accept all the default values immediately after system date and time
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the controller-0 OAM IP Address. The
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions will use the CLI.
------------------------------------
Provisioning Controller-0 and System
Provisioning controller-0 and system
------------------------------------
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*********************************************
Configuring Provider Networks at Installation
Configuring provider networks at installation
*********************************************
You must set up provider networks at installation so that you can attach
@ -320,7 +317,7 @@ Set up one provider network of the vlan type, named providernet-a:
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
*********************************************
Adding a Ceph Storage Backend at Installation
Adding a Ceph storage backend at installation
*********************************************
Add CEPH Storage backend:
@ -353,7 +350,7 @@ Add CEPH Storage backend:
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file | configured | None | glance |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
Confirm CEPH storage is configured
Confirm CEPH storage is configured:
::
@ -370,25 +367,25 @@ Confirm CEPH storage is configured
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
**********************
Unlocking Controller-0
Unlocking controller-0
**********************
You must unlock controller-0 so that you can use it to install the
remaining hosts. Use the system host-unlock command:
You must unlock controller-0 so that you can use it to install the remaining
hosts. Use the system host-unlock command:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is
unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
The host is rebooted. During the reboot, the command line is unavailable, and
any ssh connections are dropped. To monitor the progress of the reboot, use the
controller-0 console.
****************************************
Verifying the Controller-0 Configuration
Verifying the controller-0 configuration
****************************************
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
@ -420,10 +417,10 @@ Verify that controller-0 is unlocked, enabled, and available:
+----+--------------+-------------+----------------+-------------+--------------+
*******************************
Provisioning Filesystem Storage
Provisioning filesystem storage
*******************************
List the controller filesystems with status and current sizes
List the controller file systems with status and current sizes:
::
@ -449,7 +446,7 @@ Modify filesystem sizes
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12
-------------------------------------------------------
Installing Controller-1 / Storage Hosts / Compute Hosts
Installing controller-1 / storage hosts / compute hosts
-------------------------------------------------------
After initializing and configuring an active controller, you can add and
@ -457,7 +454,7 @@ configure a backup controller and additional compute or storage hosts.
For each host do the following:
*****************
Initializing Host
Initializing host
*****************
Power on Host. In host console you will see:
@ -470,16 +467,16 @@ Power on Host. In host console you will see:
controller node in order to proceed.
**********************************
Updating Host Name and Personality
Updating host name and personality
**********************************
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for Controller-0 to discover new host, list the host until new
Wait for controller-0 to discover new host, list the host until new
UNKNOWN host shows up in table:
::
@ -498,19 +495,19 @@ Use the system host-add to update host personality attribute:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n <controller_name> -p <personality> -m <mac address>
**REMARK:** use the Mac Address for the specific network interface you
are going to be connected. e.g. OAM network interface for "Controller-1"
node, Management network interface for "Computes" and "Storage" nodes.
**REMARK:** use the Mac address for the specific network interface you
are going to be connected. e.g. OAM network interface for controller-1
node, management network interface for compute and storage nodes.
Check the **NIC** MAC Address from "Virtual Manager GUI" under *"Show
Check the **NIC** MAC address from "Virtual Manager GUI" under *"Show
virtual hardware details -*\ **i**\ *" Main Banner --> NIC: --> specific
"Bridge name:" under MAC Address text field.*
"Bridge name:" under MAC address text field.*
***************
Monitoring Host
Monitoring host
***************
On Controller-0, you can monitor the installation progress by running
On controller-0, you can monitor the installation progress by running
the system host-show command for the host periodically. Progress is
shown in the install_state field.
@ -524,14 +521,14 @@ shown in the install_state field.
Wait while the host is configured and rebooted. Up to 20 minutes may be
required for a reboot, depending on hardware. When the reboot is
complete, the host is reported as Locked, Disabled, and Online.
complete, the host is reported as locked, disabled, and online.
*************
Listing Hosts
Listing hosts
*************
Once all nodes have been installed, configured and rebooted, on
controller-0 list the hosts:
::
+----+--------------+-------------+----------------+-------------+--------------+
-------------------------
Provisioning controller-1
-------------------------
On controller-0, list hosts:
::
+----+--------------+-------------+----------------+-------------+--------------+
***********************************************
Provisioning network interfaces on controller-1
***********************************************
In order to list out hardware port names, types, and PCI addresses that have
been discovered:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
Provision the OAM interface for controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
**********************
Unlocking controller-1
**********************
Unlock controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while controller-1 is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware.
**REMARK:** controller-1 will remain in a degraded state until
data-syncing is complete. The duration is dependent on the
virtualization host's configuration - i.e., the number and configuration
of physical disks used to host the nodes' virtual disks. Use 'fm alarm-list'
to confirm status.
...
-------------------------
Provisioning storage host
-------------------------
**************************************
Provisioning storage on a storage host
**************************************
Available physical disks in storage-N:
::
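    # for example, for storage-0:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0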
| | | | | | | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
Available storage tiers in storage-N:
::
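    # for example (ceph_cluster as the cluster name is an assumption):
    [wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster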
| 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
+--------------------------------------+---------+--------+--------------------------------------+
Create a storage function (i.e. OSD) in storage-N. At least two unlocked and
enabled hosts with monitors are required. Candidates are: controller-0,
controller-1, and storage-0.
::
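    # for example, using one of the disk uuids listed above (placeholder value):
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 <disk uuid>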
| updated_at | 2018-08-16T00:40:07.626762+00:00 |
+------------------+--------------------------------------------------+
Create the remaining available storage functions (OSDs) in storage-N
based on the number of available physical disks.
List the OSDs:
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd | 0 | {} | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
Unlock storage-N:
::
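    # for example, for storage-0:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-0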
remaining storage nodes.
---------------------------
Provisioning a compute host
---------------------------
You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each compute host do the following:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*************************************************
Provisioning network interfaces on a compute host
*************************************************
On controller-0, in order to list out hardware port names, types, and
PCI addresses that have been discovered:
- **Only in virtual environment**: Ensure that the interface used is
one of those attached to host bridge with model type "virtio" (i.e.,
eth1000 and eth1001). The model type "e1000" emulated devices will
not work for provider networks.
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0
Provision the data interface for compute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
***************************
VSwitch virtual environment
***************************
**Only in virtual environment**. If the compute has more than 4 CPUs,
the system will auto-configure the vswitch to use 2 cores. However some
virtual environments do not properly support multi-queue required in a
multi-CPU environment. Therefore run the following command to reduce the
vswitch cores to 1:
::
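    # assumed form of the vswitch core change for compute-0:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1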
+--------------------------------------+-------+-----------+-------+--------+...
**************************************
Provisioning storage on a compute host
**************************************
Review the available disk space and capacity and obtain the uuid(s) of
the physical disks to be used for nova-local volumes:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local
************************
Unlocking a compute host
************************
On controller-0, use the system host-unlock command to unlock the
compute-N:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while compute-N is rebooted. Up to 10 minutes may be required
for a reboot, depending on hardware. The host is rebooted, and its
availability state is reported as in-test, followed by unlocked/enabled.
-------------------
System health check
-------------------
***********************
Listing StarlingX nodes
***********************
On controller-0, after a few minutes, all nodes shall be reported as
unlocked, enabled, and available:
::
+----+--------------+-------------+----------------+-------------+--------------+
******************************
Checking StarlingX CEPH health
******************************
::
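    # Ceph health can be checked with the standard Ceph client, for example:
    controller-0:~$ ceph -s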
controller-0:~$
*****************
System alarm list
*****************
When all nodes are unlocked, enabled and available: check 'fm alarm-list' for
issues.
Your StarlingX deployment is now up and running with 2x HA controllers with
Cinder storage, 1x compute, 3x storage nodes, and all OpenStack services up and
running. You can now proceed with standard OpenStack APIs, CLIs and/or Horizon
to load Glance images, configure Nova flavors, configure Neutron networks, and
launch Nova virtual machines.
----------------------
Deployment terminology
----------------------
.. include:: deployment_terminology.rst
.. _incl-simplex-deployment-terminology:
**All-in-one controller node**
A single physical node that provides a controller function, compute
function, and storage function.
.. _incl-simplex-deployment-terminology-end:
.. _incl-standard-controller-deployment-terminology:
**Controller node / function**
A node that runs the cloud control functions for managing cloud resources.
- Runs all OpenStack control functions (e.g. managing images, virtual
volumes, virtual network, and virtual machines).
- Can be part of a two-node HA control node cluster for running control
functions either active/active or active/standby.
**Compute (& network) node / function**
A node that hosts applications in virtual machines using compute resources
such as CPU, memory, and disk.
- Runs virtual switch for realizing virtual networks.
- Provides L3 routing and NET services.
.. _incl-standard-controller-deployment-terminology-end:
.. _incl-dedicated-storage-deployment-terminology:
**Storage node / function**
A node that contains a set of disks (e.g. SATA, SAS, SSD, and/or NVMe).
- Runs CEPH distributed storage software.
- Part of an HA multi-node CEPH storage cluster supporting a replication
factor of two or three, journal caching, and class tiering.
- Provides HA persistent storage for images, virtual volumes
(i.e. block storage), and object storage.
.. _incl-dedicated-storage-deployment-terminology-end:
.. _incl-common-deployment-terminology:
**OAM network**
The network on which all external StarlingX platform APIs are exposed,
(i.e. REST APIs, Horizon web server, SSH, and SNMP), typically 1GE.
Only controller type nodes are required to be connected to the OAM
network.
**Management network**
A private network (i.e. not connected externally), typically 10GE,
used for the following:
- Internal OpenStack / StarlingX monitoring and control.
- VM I/O access to a storage cluster.
All nodes are required to be connected to the management network.
**Data network(s)**
Networks on which the OpenStack / Neutron provider networks are realized
and become the VM tenant networks.
Only compute type and all-in-one type nodes are required to be connected
to the data network(s); these node types require one or more interface(s)
on the data network(s).
**IPMI network**
An optional network on which IPMI interfaces of all nodes are connected.
The network must be reachable using L3/IP from the controller's OAM
interfaces.
You can optionally connect all node types to the IPMI network.
**PXEBoot network**
An optional network for controllers to boot/install other nodes over the
network.
By default, controllers use the management network for boot/install of other
nodes in the OpenStack cloud. If this optional network is used, all node
types are required to be connected to the PXEBoot network.
A PXEBoot network is required for a variety of special case situations:
- Cases where the management network must be IPv6:
- IPv6 does not support PXEBoot. Therefore, an IPv4 PXEBoot network must be
configured.
- Cases where the management network must be VLAN tagged:
- Most servers' BIOS do not support PXE booting over tagged networks.
Therefore, you must configure an untagged PXEBoot network.
- Cases where a management network must be shared across regions but
individual regions' controllers want to only network boot/install nodes
of their own region:
- You must configure separate, per-region PXEBoot networks.
**Infra network**
A deprecated optional network that was historically used for access to the
storage cluster.
If this optional network is used, all node types are required to be
connected to the INFRA network.
**Node interfaces**
All nodes' network interfaces can, in general, optionally be either:
- Untagged single port.
- Untagged two-port LAG and optionally split between redundant L2 switches
running vPC (Virtual Port-Channel), also known as multichassis
EtherChannel (MEC).
- VLAN on either single-port ETH interface or two-port LAG interface.
.. _incl-common-deployment-terminology-end:
.. _duplex:
==============================================
All-In-One Duplex deployment guide stx.2018.10
==============================================
.. contents::
:local:
:depth: 1
**NOTE:** The instructions to set up a StarlingX All-in-One Duplex
(AIO-DX) with containerized OpenStack services in this guide
are under development.
For approved instructions, see the
`All in One Duplex Configuration wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnAIODX>`__.
----------------------
Deployment description
----------------------
*****************
All-In-One Duplex
*****************
The All-In-One Duplex (AIO-DX) deployment option provides all three cloud
functions (controller, compute, and storage) on two physical servers. With cloud
technologies, multiple diverse application types can be deployed and
consolidated onto a protected pair of physical servers. For example:
.. figure:: figures/starlingx-deployment-options-duplex.png
:scale: 50%
:alt: All-In-One Duplex deployment configuration

*All-In-One Duplex deployment configuration*
This two-node cluster enables:
- High availability services running on the controller function across the
  two physical servers in either active/active or active/standby mode.
- Storage function running on top of LVM on a single second disk, DRBD-sync'd
between the servers.
- Virtual machines being scheduled on both compute functions.
An All-In-One Duplex deployment provides protection against overall server
hardware fault. Should an overall server hardware fault occur:
- All controller high availability services go active on remaining
healthy server.
- All virtual machines are recovered on remaining healthy server.
The All-In-One Duplex deployment solution is required for a variety of special
case situations, for example:
- Small amount of cloud processing/storage.
- Protection against overall server hardware fault.
**************************
All-In-One Duplex extended
**************************
The All-In-One Duplex Extended deployment option extends the capacity of the
All-In-One Duplex deployment by adding up to four compute nodes to the
deployment. The extended deployment option provides a capacity growth path for
someone starting with an All-In-One Duplex deployment.
With this option, virtual machines can be scheduled on either of the
all-in-one controller nodes and/or the compute nodes.
.. figure:: figures/starlingx-deployment-options-duplex-extended.png
:scale: 50%
:alt: All-In-One Duplex Extended deployment configuration

*All-In-One Duplex Extended deployment configuration*
This configuration is limited to four compute nodes as the controller function
on the all-in-one controllers has only a portion of the processing power of the
overall server.
-----------------------------------
Preparing All-In-One Duplex servers
-----------------------------------
**********
Bare metal
**********
Required servers:
- Combined servers (controller + compute): 2
^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
All-In-One Duplex will be deployed include:
- Minimum processor:
- Typical hardware form factor:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Low cost / low power hardware form factor
- Single-CPU Intel Xeon D-15xx family, 8 cores
- Memory: 64 GB
- BIOS:
- Hyper-Threading technology: Enabled
- Virtualization technology: Enabled
- VT for directed I/O: Enabled
- CPU power and performance policy: Performance
- CPU C state control: Disabled
- Plug & play BMC detection: Disabled
- Primary disk:
- 500 GB SDD or NVMe
- Additional disks:
- Zero or more 500 GB disks (min. 10K RPM)
- Network ports:
**NOTE:** The All-In-One Duplex configuration requires one or more data ports.
- Data: n x 10GE
*******************
Virtual environment
*******************
Run the libvirt QEMU setup scripts to set up the virtualized OAM and
management networks:
::
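    # for example, from the stx-tools deployment/libvirt directory
    # (see the installation libvirt qemu guide for details):
    $ bash setup_network.sh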
The default XML server definitions created for this configuration are:
- duplex-controller-0
- duplex-controller-1
^^^^^^^^^^^^^^^^^^^^^^^^^
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^
To power up a virtual server, run the following command:
For example::
$ sudo virsh start duplex-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access virtual server consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML for virtual servers in stx-tools repo, deployment/libvirt,
sequence which follows the boot device selection. One has a few seconds
to do this.
--------------------------------
Installing the controller-0 host
--------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes controller-0.
Procedure:
#. Configure the controller using the config_controller script.
*************************
Initializing controller-0
*************************
This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **All-in-one Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the controller-0 host, select the type of installation
"All-in-one Controller Configuration".
- **Graphical Console**
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the Security Profile.
- Select "Standard Security Boot Profile" as the security profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
Enter the new password again to confirm it:
Retype new password:
controller-0 is initialized with StarlingX, and is ready for
configuration.
************************
Configuring controller-0
************************
This section describes how to perform the controller-0 configuration
interactively just to bootstrap system with minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the virtual environment, you can accept all the default values
immediately after system date and time.
- For a physical deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as controller-0. To run it interactively, start the script
with no parameters:
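::

    # the bootstrap script is typically invoked as follows:
    controller-0:~$ sudo config_controller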
Enter ! at any prompt to abort...
...
Select [y] for System date and time:
::
For System mode choose "duplex":
3) simplex - single node non-redundant configuration
System mode [duplex-direct]: 2
After System date and time and System mode:
::
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions will use the CLI.
----------------------------------
Provisioning the controller-0 host
----------------------------------
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*********************************************
Configuring provider networks at installation
*********************************************
Set up one provider network of the vlan type, named providernet-a:
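::

    # providernet-a is created first with the vlan type (standard step in these guides):
    [wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan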
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
*****************************************
Providing data interfaces on controller-0
*****************************************
List all interfaces:
::
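    # for example:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-list -a controller-0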
| f59b9469-7702-4b46-bad5-683b95f0a1cb | enp0s8 | platform |...| None | [u'enp0s8'] | [] | [] | MTU=1500 |..
+--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
Configure the data interfaces:
::
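    # for example, assigning eth1000 (example interface name) to providernet-a:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data controller-0 eth1000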
+------------------+--------------------------------------+
*************************************
Configuring Cinder on controller disk
*************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'cinder-volumes' local volume group:
::
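    # for example:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes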
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group:
::
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group:
::
+--------------------------+--------------------------------------------------+
*********************************************
Adding an LVM storage backend at installation
*********************************************
Ensure requirements are met to add LVM storage:
::
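    # running the add command without confirmation reports what is still required:
    [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder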
storage. Set the 'confirmed' field to execute this operation
for the lvm backend.
Add the LVM storage backend:
::
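    # the same command with the --confirmed flag applies the change:
    [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed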
| e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store | lvm | configuring |...| cinder | {} |
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
Wait for the LVM storage backend to be configured (i.e. state=configured):
::
+--------------------------------------+------------+---------+------------+------+----------+--------------+
***********************************************
Configuring VM local storage on controller disk
***********************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'nova-local' local volume group:
::
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Create a disk partition to add to the volume group:
::
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group:
::
+--------------------------+--------------------------------------------------+
**********************
Unlocking controller-0
**********************
You must unlock controller-0 so that you can use it to install
controller-1. Use the system host-unlock command:
::
unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
****************************************
Verifying the controller-0 configuration
****************************************
On controller-0, acquire Keystone administrative privileges:
::
Verify that the controller-0 services are running:
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 has controller and compute subfunctions:
::
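    # one way to check (assumed form):
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-show controller-0 | grep subfunctions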
Verify that controller-0 is unlocked, enabled, and available:
+----+--------------+-------------+----------------+-------------+--------------+
--------------------------------
Installing the controller-1 host
--------------------------------
After initializing and configuring controller-0, you can add and
configure a backup controller, controller-1.
******************************
Initializing controller-1 host
******************************
Power on controller-1. In the controller-1 console you will see:
::
controller node in order to proceed.
****************************************************
Updating controller-1 host hostname and personality
****************************************************
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for controller-0 to discover new host, list the host until new
UNKNOWN host shows up in table:
::
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
Use the system host-update command to update the controller-1 host personality
attribute:
::
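    # assuming the new host was discovered with id 2:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-update 2 personality=controller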
+---------------------+--------------------------------------+
****************************
Monitoring controller-1 host
****************************
On controller-0, you can monitor the installation progress by running
the system host-show command for the host periodically. Progress is
shown in the install_state field.
| install_state | booting |
| install_state_info | None |
Wait while controller-1 is configured and rebooted. Up to 20 minutes
may be required for a reboot, depending on hardware. When the reboot is
complete, controller-1 is reported as locked, disabled, and online.
*************************
Listing controller-1 host
*************************
Once controller-1 has been installed, configured and rebooted, on
controller-0 list the hosts:
::
+----+--------------+-------------+----------------+-------------+--------------+
----------------------------------
Provisioning the controller-1 host
----------------------------------
On controller-0, list hosts:
::
+----+--------------+-------------+----------------+-------------+--------------+
***********************************************
Provisioning network interfaces on controller-1
***********************************************
In order to list out hardware port names, types, and PCI addresses that have
been discovered:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
Provision the controller-1 OAM interface:
::
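    # same form as in the controller storage guide; <oam interface> comes from host-port-list:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>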
+------------------+--------------------------------------+
*****************************************
Providing data interfaces on controller-1
*****************************************
List all interfaces:
::
| e78ad9a9-e74d-4c6c-9de8-0e41aad8d7b7 | eth1000 | None |...| None | [u'eth1000'] | [] | [] | MTU=1500 |..
+--------------------------------------+---------+---------+...+------+--------------+------+------+------------+..
Configure the data interfaces:
::
+------------------+--------------------------------------+
************************************
Provisioning storage on controller-1
************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
| 623bbfc0-2b38-432a-acf4-a28db6066cce | /dev/sdc | 2080 | HDD | 16240 | 16237 |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+...
Assign Cinder storage to the physical disk:
::
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group based on uuid of the
physical disk:
::
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
| 7a41aab0-6695-4d16-9003-73238adda75b |...| /dev/sdb1 |...| None | 16237 | Creating (on unlock) |
+--------------------------------------+...+-------------+...+-----------+----------+----------------------+
Add the partition to the volume group:
::
| updated_at | None |
+--------------------------+--------------------------------------------------+
.. _configuring-vm-local-storage-on-controller-disk-1:
***********************************************
Configuring VM local storage on controller disk
***********************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
| 623bbfc0-2b38-432a-acf4-a28db6066cce | /dev/sdc | 2080 | HDD | 16240 | 16237 |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+...
Create the 'nova-local' local volume group:
::
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Create a disk partition to add to the volume group:
::
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
| f7bc6095-9375-49fe-83c7-12601c202376 |...| /dev/sdc1 |...| None | 16237 | Creating (on unlock) |
+--------------------------------------+...+-------------+...+-----------+----------+----------------------+
Add the partition to the volume group:
::
+--------------------------+--------------------------------------------------+
**********************
Unlocking controller-1
**********************
Unlock controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while controller-1 is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware.
REMARK: controller-1 will remain in a degraded state until data-syncing
is complete. The duration is dependent on the virtualization host's
configuration - i.e., the number and configuration of physical disks
used to host the nodes' virtual disks. Use 'fm alarm-list' to confirm status.
+----+--------------+-------------+----------------+-------------+--------------+
-----------------------------------
Extending the compute node capacity
-----------------------------------
You can add up to four compute nodes to the All-in-One Duplex deployment.
**************************
Compute hosts installation
**************************
After initializing and configuring the two controllers, you can add up
to four additional compute hosts. To add a host, do the following:
^^^^^^^^^^^^^^^^^
Initializing host
^^^^^^^^^^^^^^^^^
Power on the host. The following appears in the host console:
controller node in order to proceed.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Updating the hostname and personality
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for controller-0 to both discover the new host and to list that host
as UNKNOWN in the table:
::
Use the system host-update command to update the host personality attribute:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0
See also: 'system help host-update'.
Unless it is known that the host's configuration can support the
installation of more than one node, it is recommended that the
root disk and that disk happens to be a single rotational type hard disk,
then the host cannot reliably support parallel node installation.
^^^^^^^^^^^^^^^
Monitoring host
^^^^^^^^^^^^^^^
On controller-0, you can monitor the installation progress by periodically
running the system host-show command for the host. Progress is
shown in the install_state field.
Wait while the host is installed, configured, and rebooted. Depending on
hardware, it could take up to 20 minutes for this process to complete.
When the reboot is complete, the host is reported as locked, disabled,
and online.
^^^^^^^^^^^^^
Listing hosts
^^^^^^^^^^^^^
You can use the system host-list command to list the hosts once the node
has been installed, configured, and rebooted:
+----+--------------+-------------+----------------+-------------+--------------+
*****************************
Provisioning the compute host
*****************************
You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each compute host, do the following:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Provisioning network interfaces on a compute host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to identify hardware port names, types, and discovered
pci-addresses on controller-0, list the host ports:
- **Only in virtual environment**: Ensure that the interface used is
one of those attached to the host bridge with model type "virtio" (i.e.
eth1000 and eth1001). The model type "e1000" emulated devices will
not work for provider networks:
| c1694675-643d-4ba7-b821-cd147450112e | eth1001 | ethernet | 0000:02:04.0 |...
+--------------------------------------+---------+----------+--------------+...
Use the following command to provision the data interface for compute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
^^^^^^^^^^^^^^^^^^^^^^^^^^^
VSwitch virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Only in virtual environment**. If the compute node has more than four CPUs,
the system auto-configures the vswitch to use two cores. However, some virtual
environments do not properly support multi-queue, which is required in a
multi-CPU environment. Therefore, run the following command to reduce the
vswitch cores to one:
+--------------------------------------+-------+-----------+-------+--------+...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Provisioning storage on a compute host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Review the available disk space and capacity and then obtain the uuid(s) of
the physical disks. Create the 'nova-local' volume group and add a partition
to it based on the uuid of the physical disk:
+--------------------------+--------------------------------------------+
^^^^^^^^^^^^^^^^^^^^^^^^
Unlocking a compute host
^^^^^^^^^^^^^^^^^^^^^^^^
On controller-0, use the system host-unlock command to unlock the
compute node:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while the compute node is rebooted and re-configured. Depending on
hardware, it can take up to 10 minutes for the reboot to complete. Once
the reboot is complete, the node's availability state reports as "in-test"
and is followed by unlocked/enabled.
-------------------
System health check
-------------------
***********************
Listing StarlingX nodes
***********************
On controller-0, after a few minutes, all nodes are reported as
unlocked, enabled, and available:
::
+----+--------------+-------------+----------------+-------------+--------------+
*****************
System alarm list
*****************
When all nodes are unlocked, enabled, and available: check 'fm alarm-list' for
issues.
Your StarlingX deployment is now up and running with 2x HA controllers with
Cinder storage and all OpenStack services up and running. You can now proceed
with standard OpenStack APIs, CLIs and/or Horizon to load Glance images,
configure Nova flavors, configure Neutron networks, and launch Nova virtual
machines.
----------------------
Deployment terminology
----------------------
.. include:: deployment_terminology.rst
==============================
Installation guide stx.2018.10
==============================
This is the installation guide for release stx.2018.10. If an installation
guide is needed for a previous release, review the
:doc:`installation guides for previous releases </installation_guide/index>`.
------------
Introduction
------------
StarlingX may be installed in:
- **Bare metal**: Real deployments of StarlingX are only supported on
physical servers.
- **Virtual environment**: It should only be used for evaluation or
development purposes.
StarlingX installed in virtual environments has two options:
- :doc:`Libvirt/QEMU </installation_guide/2018_10/installation_libvirt_qemu>`
- VirtualBox
------------
Requirements
------------
Different use cases require different configurations.
**********
Bare metal
**********
The minimum requirements for the physical servers where StarlingX might
be deployed include:
- **Controller hosts**
- Minimum processor is:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
cores/socket
- Minimum memory: 64 GB
- Hard drives:
- Primary hard drive, minimum 500 GB for OS and system databases.
- Secondary hard drive, minimum 500 GB for persistent VM storage.
- 2 physical Ethernet interfaces: OAM and MGMT network.
- USB boot support.
- PXE boot support.
- **Storage hosts**
- Minimum processor is:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
cores/socket.
- Minimum memory: 64 GB.
- Hard drives:
- Primary hard drive, minimum 500 GB for OS.
- 1 or more additional hard drives for CEPH OSD storage, and
- Optionally 1 or more SSD or NVMe drives for CEPH journals.
- 1 physical Ethernet interface: MGMT network
- PXE boot support.
- **Compute hosts**
- Minimum processor is:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
cores/socket.
- Minimum memory: 32 GB.
- Hard drives:
- Primary hard drive, minimum 500 GB for OS.
- 1 or more additional hard drives for ephemeral VM storage.
- 2 or more physical Ethernet interfaces: MGMT network and 1 or more
provider networks.
- PXE boot support.
- **All-In-One Simplex or Duplex, controller + compute hosts**
- Minimum processor is:
- Typical hardware form factor:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Low cost / low power hardware form factor
- Single-CPU Intel Xeon D-15xx family, 8 cores
- Minimum memory: 64 GB.
- Hard drives:
- Primary hard drive, minimum 500 GB SDD or NVMe.
- 0 or more 500 GB disks (min. 10K RPM).
- Network ports:
**NOTE:** Duplex and Simplex configurations require one or more data
ports.
The Duplex configuration requires a management port.
- Management: 10GE (Duplex only)
- OAM: 10GE
- Data: n x 10GE
The recommended minimum requirements for the physical servers are
described later in each StarlingX deployment guide.
^^^^^^^^^^^^^^^^^^^^^^^^
NVMe drive as boot drive
^^^^^^^^^^^^^^^^^^^^^^^^
To use a Non-Volatile Memory Express (NVMe) drive as the boot drive for any of
your nodes, you must configure your host and adjust kernel parameters during
installation:
- Configure the host to be in UEFI mode.
- Edit the kernel boot parameter. After you are presented with the StarlingX
ISO boot options and after you have selected the preferred installation option
(e.g. Standard Configuration / All-in-One Controller Configuration), press the
TAB key to edit the kernel boot parameters. Modify the **boot_device** and
**rootfs_device** from the default **sda** so that it is the correct device
name for the NVMe drive (e.g. "nvme0n1").
::
vmlinuz rootwait console=tty0 inst.text inst.stage2=hd:LABEL=oe_iso_boot
inst.ks=hd:LABEL=oe_iso_boot:/smallsystem_ks.cfg boot_device=nvme0n1
rootfs_device=nvme0n1 biosdevname=0 usbcore.autosuspend=-1 inst.gpt
security_profile=standard user_namespace.enable=1 initrd=initrd.img
*******************
Virtual environment
*******************
The recommended minimum requirements for the workstation, hosting the
virtual machine(s) where StarlingX will be deployed, include:
^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Processor: x86_64 is the only supported architecture, with hardware
  virtualization extensions enabled in the BIOS
- Cores: 8 (4 with careful monitoring of CPU load)
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Two network adapters with active Internet connection
^^^^^^^^^^^^^^^^^^^^^
Software requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit
- Proxy settings configured (if required)
- Git
- KVM/VirtManager
- Libvirt library
- QEMU full-system emulation binaries
- stx-tools project
- StarlingX ISO image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deployment environment setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section describes how to set up the workstation computer which will
host the virtual machine(s) where StarlingX will be deployed.
''''''''''''''''''''''''''''''
Updating your operating system
''''''''''''''''''''''''''''''
Before proceeding with the build, ensure your OS is up to date. You'll
first need to update the local database list of available packages:
::
$ sudo apt-get update
'''''''''''''''''''''''''
Install stx-tools project
'''''''''''''''''''''''''
Clone the stx-tools project. Usually you'll want to clone it under your
user's home directory.
::
$ cd $HOME
$ git clone https://git.starlingx.io/stx-tools
''''''''''''''''''''''''''''''''''''''''
Installing requirements and dependencies
''''''''''''''''''''''''''''''''''''''''
Navigate to the stx-tools installation libvirt directory:
::
$ cd $HOME/stx-tools/deployment/libvirt/
Install the required packages:
::
$ bash install_packages.sh
''''''''''''''''''
Disabling firewall
''''''''''''''''''
Unload firewall and disable firewall on boot:
::
$ sudo ufw disable
Firewall stopped and disabled on system startup
$ sudo ufw status
Status: inactive
-------------------------------
Getting the StarlingX ISO image
-------------------------------
Follow the instructions from the :doc:`/developer_guide/2018_10/index` to build a
StarlingX ISO image.
**********
Bare metal
**********
A bootable USB flash drive containing StarlingX ISO image.
*******************
Virtual environment
*******************
Copy the StarlingX ISO image to the stx-tools deployment libvirt project
directory:
::
$ cp <starlingx iso image> $HOME/stx-tools/deployment/libvirt/
------------------
Deployment options
------------------
- Standard controller
- :doc:`StarlingX Cloud with Dedicated Storage </installation_guide/2018_10/dedicated_storage>`
- :doc:`StarlingX Cloud with Controller Storage </installation_guide/2018_10/controller_storage>`
- All-in-one
- :doc:`StarlingX Cloud Duplex </installation_guide/2018_10/duplex>`
- :doc:`StarlingX Cloud Simplex </installation_guide/2018_10/simplex>`
.. toctree::
:hidden:
installation_libvirt_qemu
controller_storage
dedicated_storage
duplex
simplex
=====================================
Installation libvirt qemu stx.2018.10
=====================================
Installation for StarlingX stx.2018.10 using Libvirt/QEMU virtualization.
---------------------
Hardware requirements
---------------------
A workstation computer with:
- Processor: x86_64 is the only supported architecture, with hardware
  virtualization extensions enabled in the BIOS
- Memory: At least 32GB RAM
- Hard disk: 500GB HDD
- Network: One network adapter with active Internet connection
---------------------
Software requirements
---------------------
A workstation computer with:
- Operating system: This process is known to work on Ubuntu 16.04 and
is likely to work on other Linux OS's with some appropriate adjustments.
- Proxy settings configured (if required)
- Git
- KVM/VirtManager
- Libvirt library
- QEMU full-system emulation binaries
- stx-tools project
- StarlingX ISO image
----------------------------
Deployment environment setup
----------------------------
*************
Configuration
*************
These scripts are configured using environment variables that all have
built-in defaults. On shared systems you probably do not want to use the
defaults. The simplest way to handle this is to keep an rc file that can
be sourced into an interactive shell that configures everything. Here's
an example called stxcloud.rc:
::
export CONTROLLER=stxcloud
export COMPUTE=stxnode
export STORAGE=stxstorage
export BRIDGE_INTERFACE=stxbr
export INTERNAL_NETWORK=172.30.20.0/24
export INTERNAL_IP=172.30.20.1/24
export EXTERNAL_NETWORK=192.168.20.0/24
export EXTERNAL_IP=192.168.20.1/24
This rc file shows the defaults baked into the scripts:
::
export CONTROLLER=controller
export COMPUTE=compute
export STORAGE=storage
export BRIDGE_INTERFACE=stxbr
export INTERNAL_NETWORK=10.10.10.0/24
export INTERNAL_IP=10.10.10.1/24
export EXTERNAL_NETWORK=192.168.204.0/24
export EXTERNAL_IP=192.168.204.1/24
*************************
Install stx-tools project
*************************
Clone the stx-tools project into a working directory.
::
git clone git://git.openstack.org/openstack/stx-tools.git
It is convenient to set up a shortcut to the deployment script
directory:
::
SCRIPTS=$(pwd)/stx-tools/deployment/libvirt
If you created a configuration, load it from stxcloud.rc:
::
source stxcloud.rc
****************************************
Installing requirements and dependencies
****************************************
Install the required packages and configure QEMU. This only needs to be
done once per host. (NOTE: this script only knows about Ubuntu at this
time):
::
$SCRIPTS/install_packages.sh
******************
Disabling firewall
******************
Unload firewall and disable firewall on boot:
::
sudo ufw disable
sudo ufw status
******************
Configure networks
******************
Configure the network bridges using setup_network.sh before doing
anything else. It will create 4 bridges named stxbr1, stxbr2, stxbr3 and
stxbr4. Set the BRIDGE_INTERFACE environment variable if you need to
change stxbr to something unique.
::
$SCRIPTS/setup_network.sh
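Optionally, confirm that the bridges were created before continuing. The stxbr
names below assume the default BRIDGE_INTERFACE value:
::
ip link show | grep stxbr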
The destroy_network.sh script does the reverse, and should not be used
lightly. It should also only be used after all of the VMs created below
have been destroyed.
There is also a script cleanup_network.sh that will remove networking
configuration from libvirt.
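Both scripts live in the same directory as setup_network.sh and, as a sketch,
are invoked the same way. Review them before running either one on a shared
host:
::
$SCRIPTS/destroy_network.sh
$SCRIPTS/cleanup_network.sh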
*********************
Configure controllers
*********************
One script exists for building different StarlingX cloud configurations:
setup_configuration.sh.
The script uses the cloud configuration with the -c option:
- simplex
- duplex
- controllerstorage
- dedicatedstorage
You need an ISO file for the installation; the script takes the file name
with the -i option:
::
$SCRIPTS/setup_configuration.sh -c <cloud configuration> -i <starlingx iso image>
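For example, to define a simplex cloud using an ISO copied into the deployment
directory (the ISO file name here is illustrative):
::
$SCRIPTS/setup_configuration.sh -c simplex -i $HOME/stx-tools/deployment/libvirt/bootimage.iso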
And the setup will begin. The scripts create one or more VMs and start
the boot of the first controller, named oddly enough ``controller-0``.
If you have Xwindows available you will get virt-manager running. If
not, Ctrl-C out of that attempt if it doesn't return to a shell prompt.
Then connect to the serial console:
::
virsh console controller-0
Continue the usual StarlingX installation from this point forward.
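If you want to confirm which domains the script created, list them with virsh
(the names vary with the cloud configuration chosen):
::
virsh list --all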
Tear down the VMs using destroy_configuration.sh.
::
$SCRIPTS/destroy_configuration.sh -c <cloud configuration>
--------
Continue
--------
Pick up the installation in one of the existing guides at the initializing
controller-0 step.
- Standard controller
- :doc:`StarlingX Cloud with Dedicated Storage Virtual Environment </installation_guide/2018_10/dedicated_storage>`
- :doc:`StarlingX Cloud with Controller Storage Virtual Environment </installation_guide/2018_10/controller_storage>`
- All-in-one
- :doc:`StarlingX Cloud Duplex Virtual Environment </installation_guide/2018_10/duplex>`
- :doc:`StarlingX Cloud Simplex Virtual Environment </installation_guide/2018_10/simplex>`

@ -1,8 +1,6 @@
.. _simplex:
===================================
All-In-One Simplex Deployment Guide
===================================
===============================================
All-In-One Simplex deployment guide stx.2018.10
===============================================
.. contents::
:local:
@ -15,14 +13,14 @@ For approved instructions, see the
`One Node Configuration wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/Installation>`__.
----------------------
Deployment Description
Deployment description
----------------------
The All-In-One Simplex deployment option provides all three Cloud Functions
(Controller, Compute, and Storage) on a single physical server. With these Cloud
Functions, multiple application types can be deployed and consolidated onto a
single physical server. For example, with a All-In-One Simplex deployment you
can:
The All-In-One Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single physical server. With
these cloud functions, multiple application types can be deployed and
consolidated onto a single physical server. For example, with an AIO-SX
deployment you can:
- Consolidate legacy applications that must run standalone on a server by using
multiple virtual machines on a single physical server.
@ -30,14 +28,14 @@ can:
different distributions of operating systems by using multiple virtual
machines on a single physical server.
Only a small amount of Cloud Processing / Storage power is required with an
Only a small amount of cloud processing / storage power is required with an
All-In-One Simplex deployment.
.. figure:: figures/starlingx-deployment-options-simplex.png
:scale: 50%
:alt: All-In-One Simplex Deployment Configuration
:alt: All-In-One Simplex deployment configuration
*All-In-One Simplex Deployment Configuration*
*All-In-One Simplex deployment configuration*
An All-In-One Simplex deployment provides no protection against an overall
server hardware fault. Protection against overall server hardware fault is
@ -46,52 +44,52 @@ could be enabled if, for example, an HW RAID or 2x Port LAG is used in the
deployment.
--------------------------------------
Preparing an All-In-One Simplex Server
Preparing an All-In-One Simplex server
--------------------------------------
**********
Bare Metal
Bare metal
**********
Required Server:
- Combined Server (Controller + Compute): 1
- Combined server (controller + compute): 1
^^^^^^^^^^^^^^^^^^^^^
Hardware Requirements
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
All-In-One Simplex will be deployed are:
- Minimum Processor:
- Minimum processor:
- Typical Hardware Form Factor:
- Typical hardware form factor:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
- Low Cost / Low Power Hardware Form Factor
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Low cost / low power hardware form factor
- Single-CPU Intel Xeon D-15xx Family, 8 cores
- Single-CPU Intel Xeon D-15xx family, 8 cores
- Memory: 64 GB
- BIOS:
- Hyper-Threading Tech: Enabled
- Virtualization Technology: Enabled
- VT for Directed I/O: Enabled
- CPU Power and Performance Policy: Performance
- CPU C State Control: Disabled
- Plug & Play BMC Detection: Disabled
- Hyper-Threading technology: Enabled
- Virtualization technology: Enabled
- VT for directed I/O: Enabled
- CPU power and performance policy: Performance
- CPU C state control: Disabled
- Plug & play BMC detection: Disabled
- Primary Disk:
- Primary disk:
- 500 GB SSD or NVMe
- Additional Disks:
- Additional disks:
- Zero or more 500 GB disks (min. 10K RPM)
- Network Ports
- Network ports
**NOTE:** All-In-One Simplex configuration requires one or more data ports.
This configuration does not require a management port.
@ -100,11 +98,11 @@ All-In-One Simplex will be deployed are:
- Data: n x 10GE
*******************
Virtual Environment
Virtual environment
*******************
Run the libvirt qemu setup scripts. Setting up virtualized OAM and
Management networks:
management networks:
::
@ -121,7 +119,7 @@ The default XML server definition created by the previous script is:
- simplex-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^
Power Up a Virtual Server
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^
To power up the virtual server, run the following command:
@ -137,7 +135,7 @@ e.g.
$ sudo virsh start simplex-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access a Virtual Server Console
Access a virtual server console
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML for virtual servers in stx-tools repo, deployment/libvirt,
@ -162,12 +160,12 @@ sequence which follows the boot device selection. One has a few seconds
to do this.
------------------------------
Installing the Controller Host
Installing the controller host
------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes Controller-0.
configured bootstrapped host becomes controller-0.
Procedure:
@ -176,21 +174,21 @@ Procedure:
#. Configure the controller using the config_controller script.
*************************
Initializing Controller-0
Initializing controller-0
*************************
This section describes how to initialize StarlingX in host Controller-0.
This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as Controller-0, with the StarlingX
Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **All-in-one Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the Controller-0 host, select the type of installation
appears in the controller-0 host, select the type of installation
"All-in-one Controller Configuration".
- **Graphical Console**
@ -203,10 +201,10 @@ StarlingX ISO booting options:
- Select "Standard Security Boot Profile" as the Security Profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the Controller-0 host, briefly displays a GNU GRUB screen, and then
on the controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The
Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
@ -228,14 +226,13 @@ Enter the new password again to confirm it:
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for
configuration.
controller-0 is initialized with StarlingX, and is ready for configuration.
************************
Configuring Controller-0
Configuring controller-0
************************
This section describes how to perform the Controller-0 configuration
This section describes how to perform the controller-0 configuration
interactively just to bootstrap system with minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
@ -243,9 +240,9 @@ of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the Virtual Environment, you can accept all the default values
- For the virtual environment, you can accept all the default values
immediately after system date and time.
- For a Physical Deployment, answer the bootstrap configuration
- For a physical deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
@ -261,7 +258,7 @@ with no parameters:
Enter ! at any prompt to abort...
...
Select [y] for System Date and Time:
Select [y] for System date and time:
::
@ -281,7 +278,7 @@ For System mode choose "simplex":
3) simplex - single node non-redundant configuration
System mode [duplex-direct]: 3
After System Date / Time and System mode:
After System date and time and System mode:
::
@ -302,21 +299,21 @@ After System Date / Time and System mode:
commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the controller-0 OAM IP Address. The
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions will use the CLI.
--------------------------------
Provisioning the Controller Host
Provisioning the controller host
--------------------------------
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*********************************************
Configuring Provider Networks at Installation
Configuring provider networks at installation
*********************************************
Set up one provider network of the vlan type, named providernet-a:
@ -327,10 +324,10 @@ Set up one provider network of the vlan type, named providernet-a:
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
*****************************************
Providing Data Interfaces on Controller-0
Providing data interfaces on controller-0
*****************************************
List all interfaces
List all interfaces:
::
@ -345,7 +342,7 @@ List all interfaces
| f59b9469-7702-4b46-bad5-683b95f0a1cb | enp0s8 | platform |...| None | [u'enp0s8'] | [] | [] | MTU=1500 |..
+--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
Configure the data interfaces
Configure the data interfaces:
::
@ -377,11 +374,11 @@ Configure the data interfaces
+------------------+--------------------------------------+
*************************************
Configuring Cinder on Controller Disk
Configuring Cinder on controller disk
*************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk
physical disk:
::
@ -401,7 +398,7 @@ physical disk
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'cinder-volumes' local volume group
Create the 'cinder-volumes' local volume group:
::
@ -424,7 +421,7 @@ Create the 'cinder-volumes' local volume group
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group
Create a disk partition to add to the volume group:
::
@ -448,7 +445,7 @@ Create a disk partition to add to the volume group
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
Wait for the new partition to be created (i.e. status=Ready):
::
@ -462,7 +459,7 @@ Wait for the new partition to be created (i.e. status=Ready)
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group
Add the partition to the volume group:
::
@ -488,10 +485,10 @@ Add the partition to the volume group
+--------------------------+--------------------------------------------------+
*********************************************
Adding an LVM Storage Backend at Installation
Adding an LVM storage backend at installation
*********************************************
Ensure requirements are met to add LVM storage
Ensure requirements are met to add LVM storage:
::
@ -505,7 +502,7 @@ Ensure requirements are met to add LVM storage
storage. Set the 'confirmed' field to execute this operation
for the lvm backend.
Add the LVM storage backend
Add the LVM storage backend:
::
@ -521,8 +518,7 @@ Add the LVM storage backend
| e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store | lvm | configuring |...| cinder | {} |
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
Wait for the LVM storage backend to be configured (i.e.
state=Configured)
Wait for the LVM storage backend to be configured (i.e. state=configured):
::
@ -535,11 +531,11 @@ state=Configured)
+--------------------------------------+------------+---------+------------+------+----------+--------------+
***********************************************
Configuring VM Local Storage on Controller Disk
Configuring VM local storage on controller disk
***********************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk
physical disk:
::
@ -559,7 +555,7 @@ physical disk
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'nova-local' volume group
Create the 'nova-local' volume group:
::
@ -584,7 +580,7 @@ Create the 'nova-local' volume group
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Create a disk partition to add to the volume group
Create a disk partition to add to the volume group:
::
@ -608,7 +604,7 @@ Create a disk partition to add to the volume group
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
Wait for the new partition to be created (i.e. status=Ready):
::
@ -622,7 +618,7 @@ Wait for the new partition to be created (i.e. status=Ready)
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group
Add the partition to the volume group:
::
@ -648,11 +644,11 @@ Add the partition to the volume group
+--------------------------+--------------------------------------------------+
**********************
Unlocking Controller-0
Unlocking controller-0
**********************
You must unlock controller-0 so that you can use it to install
Controller-1. Use the system host-unlock command:
controller-1. Use the system host-unlock command:
::
@ -663,10 +659,10 @@ unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
****************************************
Verifying the Controller-0 Configuration
Verifying the controller-0 configuration
****************************************
On Controller-0, acquire Keystone administrative privileges:
On controller-0, acquire Keystone administrative privileges:
::
@ -686,7 +682,7 @@ Verify that the controller-0 services are running:
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 has controller and compute subfunctions
Verify that controller-0 has controller and compute subfunctions:
::
@ -705,18 +701,19 @@ Verify that controller-0 is unlocked, enabled, and available:
+----+--------------+-------------+----------------+-------------+--------------+
*****************
System Alarm List
System alarm list
*****************
When all nodes are Unlocked, Enabled and Available: check 'fm alarm-list' for issues.
When all nodes are unlocked, enabled, and available: check 'fm alarm-list' for
issues.
Your StarlingX deployment is now up and running with 1 Controller with Cinder Storage
and all OpenStack services up and running. You can now proceed with standard OpenStack
APIs, CLIs and/or Horizon to load Glance Images, configure Nova Flavors, configure
Neutron networks and launch Nova Virtual Machines.
Your StarlingX deployment is now up and running with one controller with Cinder
storage, and all OpenStack services are enabled. You can now proceed with
standard OpenStack APIs, CLIs and/or Horizon to load Glance images, configure
Nova flavors, configure Neutron networks, and launch Nova virtual machines.
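As a minimal smoke test, those steps can be sketched with the OpenStack CLI.
The image file, flavor sizing, and network settings below are placeholders; in
particular, tenant networks normally reference the provider network configured
earlier in this guide:
::
# names, image file, and subnet range below are examples only
[wrsroot@controller-0 ~(keystone_admin)]$ openstack image create --disk-format qcow2 --container-format bare --file cirros.img cirros
[wrsroot@controller-0 ~(keystone_admin)]$ openstack flavor create --ram 512 --vcpus 1 --disk 1 m1.tiny
[wrsroot@controller-0 ~(keystone_admin)]$ openstack network create net-a
[wrsroot@controller-0 ~(keystone_admin)]$ openstack subnet create --network net-a --subnet-range 192.168.101.0/24 subnet-a
[wrsroot@controller-0 ~(keystone_admin)]$ openstack server create --image cirros --flavor m1.tiny --network net-a vm-1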
----------------------
Deployment Terminology
Deployment terminology
----------------------
.. include:: deployment_terminology.rst

@ -1,119 +0,0 @@
.. _incl-simplex-deployment-terminology:
**All-In-One Controller Node**
A single physical node that provides a Controller Function, Compute
Function, and Storage Function.
.. _incl-simplex-deployment-terminology-end:
.. _incl-standard-controller-deployment-terminology:
**Controller Node / Function**
A node that runs Cloud Control Function for managing Cloud Resources.
- Runs Cloud Control Functions for managing Cloud Resources.
- Runs all OpenStack Control Functions (e.g. managing Images, Virtual
Volumes, Virtual Network, and Virtual Machines).
- Can be part of a two-node HA Control Node Cluster for running Control
Functions either Active/Active or Active/Standby.
**Compute ( & Network ) Node / Function**
A node that hosts applications in Virtual Machines using Compute Resources
such as CPU, Memory, and Disk.
- Runs Virtual Switch for realizing virtual networks.
- Provides L3 Routing and NET Services.
.. _incl-standard-controller-deployment-terminology-end:
.. _incl-dedicated-storage-deployment-terminology:
**Storage Node / Function**
A node that contains a set of Disks (e.g. SATA, SAS, SSD, and/or NVMe).
- Runs CEPH Distributed Storage Software.
- Part of an HA multi-node CEPH Storage Cluster supporting a replication
factor of two or three, Journal Caching, and Class Tiering.
- Provides HA Persistent Storage for Images, Virtual Volumes
(i.e. Block Storage), and Object Storage.
.. _incl-dedicated-storage-deployment-terminology-end:
.. _incl-common-deployment-terminology:
**OAM Network**
The network on which all external StarlingX Platform APIs are exposed,
(i.e. REST APIs, Horizon Web Server, SSH, and SNMP), typically 1GE.
Only Controller type nodes are required to be connected to the OAM
Network.
**Management Network**
A private network (i.e. not connected externally), typically 10GE,
used for the following:
- Internal OpenStack / StarlingX monitoring and control.
- VM I/O access to a storage cluster.
All nodes are required to be connected to the Management Network.
**Data Network(s)**
Networks on which the OpenStack / Neutron Provider Networks are realized
and become the VM Tenant Networks.
Only Compute type and All-in-One type nodes are required to be connected
to the Data Network(s); these node types require one or more interface(s)
on the Data Network(s).
**IPMI Network**
An optional network on which IPMI interfaces of all nodes are connected.
The network must be reachable using L3/IP from the Controller's OAM
Interfaces.
You can optionally connect all node types to the IPMI Network.
**PXEBoot Network**
An optional network for Controllers to boot/install other nodes over the
network.
By default, Controllers use the Management Network for boot/install of other
nodes in the openstack cloud. If this optional network is used, all node
types are required to be connected to the PXEBoot Network.
A PXEBoot network is required for a variety of special case situations:
- Cases where the Management Network must be IPv6:
- IPv6 does not support PXEBoot. Therefore, IPv4 PXEBoot network must be
configured.
- Cases where the Management Network must be VLAN tagged:
- Most server BIOSes do not support PXE booting over tagged networks.
Therefore, you must configure an untagged PXEBoot network.
- Cases where a Management Network must be shared across regions but
individual regions' Controllers want to only network boot/install nodes
of their own region:
- You must configure separate, per-region PXEBoot Networks.
**Infra Network**
A deprecated optional network that was historically used for access to the
Storage cluster.
If this optional network is used, all node types are required to be
connected to the INFRA Network.
**Node Interfaces**
All Nodes' Network Interfaces can, in general, optionally be either:
- Untagged single port.
- Untagged two-port LAG and optionally split between redundant L2 Switches
running vPC (Virtual Port-Channel), also known as multichassis
EtherChannel (MEC).
- VLAN on either single-port ETH interface or two-port LAG interface.
.. _incl-common-deployment-terminology-end:

@ -1,287 +1,38 @@
==================
Installation Guide
==================
===================
Installation guides
===================
-----
Intro
-----
StarlingX may be installed in:
- **Bare Metal**: Real deployments of StarlingX are only supported on
physical servers.
- **Virtual Environment**: It should only be used for evaluation or
development purposes.
StarlingX installed in virtual environments has two options:
- :ref:`Libvirt/QEMU <Installation-libvirt-qemu>`
- VirtualBox
------------
Requirements
------------
Different use cases require different configurations.
**********
Bare Metal
**********
The minimum requirements for the physical servers where StarlingX might
be deployed, include:
- **Controller Hosts**
- Minimum Processor is:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8
cores/socket
- Minimum Memory: 64 GB
- Hard Drives:
- Primary Hard Drive, minimum 500 GB for OS and system databases.
- Secondary Hard Drive, minimum 500 GB for persistent VM storage.
- 2 physical Ethernet interfaces: OAM and MGMT Network.
- USB boot support.
- PXE boot support.
- **Storage Hosts**
- Minimum Processor is:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8
cores/socket.
- Minimum Memory: 64 GB.
- Hard Drives:
- Primary Hard Drive, minimum 500 GB for OS.
- 1 or more additional Hard Drives for CEPH OSD storage, and
- Optionally 1 or more SSD or NVMe Drives for CEPH Journals.
- 1 physical Ethernet interface: MGMT Network
- PXE boot support.
- **Compute Hosts**
- Minimum Processor is:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8
cores/socket.
- Minimum Memory: 32 GB.
- Hard Drives:
- Primary Hard Drive, minimum 500 GB for OS.
- 1 or more additional Hard Drives for ephemeral VM Storage.
- 2 or more physical Ethernet interfaces: MGMT Network and 1 or more
Provider Networks.
- PXE boot support.
- **All-In-One Simplex or Duplex, Controller + Compute Hosts**
- Minimum Processor is:
- Typical Hardware Form Factor:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
- Low Cost / Low Power Hardware Form Factor
- Single-CPU Intel Xeon D-15xx Family, 8 cores
- Minimum Memory: 64 GB.
- Hard Drives:
- Primary Hard Drive, minimum 500 GB SSD or NVMe.
- 0 or more 500 GB disks (min. 10K RPM).
- Network Ports:
**NOTE:** Duplex and Simplex configurations require one or more data
ports.
The Duplex configuration requires a management port.
- Management: 10GE (Duplex only)
- OAM: 10GE
- Data: n x 10GE
The recommended minimum requirements for the physical servers are
described later in each StarlingX Deployment Options guide.
^^^^^^^^^^^^^^^^^^^^^^^^
NVMe Drive as Boot Drive
^^^^^^^^^^^^^^^^^^^^^^^^
To use a Non-Volatile Memory Express (NVMe) drive as the boot drive for any of
your nodes, you must configure your host and adjust kernel parameters during
installation:
- Configure the host to be in UEFI mode.
- Edit the kernel boot parameter. After you are presented with the StarlingX
ISO boot options and after you have selected the preferred installation option
(e.g. Standard Configuration / All-in-One Controller Configuration), press the
TAB key to edit the Kernel boot parameters. Modify the **boot_device** and
**rootfs_device** from the default **sda** so that it is the correct device
name for the NVMe drive (e.g. "nvme0n1").
::
vmlinuz rootwait console=tty0 inst.text inst.stage2=hd:LABEL=oe_iso_boot
inst.ks=hd:LABEL=oe_iso_boot:/smallsystem_ks.cfg boot_device=nvme0n1
rootfs_device=nvme0n1 biosdevname=0 usbcore.autosuspend=-1 inst.gpt
security_profile=standard user_namespace.enable=1 initrd=initrd.img
*******************
Virtual Environment
*******************
The recommended minimum requirements for the workstation, hosting the
Virtual Machine(s) where StarlingX will be deployed, include:
^^^^^^^^^^^^^^^^^^^^^
Hardware Requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Processor: x86_64 only supported architecture with BIOS enabled
hardware virtualization extensions
- Cores: 8 (4 with careful monitoring of cpu load)
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Two network adapters with active Internet connection
^^^^^^^^^^^^^^^^^^^^^
Software Requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit
- Proxy settings configured (if applies)
- Git
- KVM/VirtManager
- Libvirt Library
- QEMU Full System Emulation Binaries
- stx-tools project
- StarlingX ISO Image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deployment Environment Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section describes how to set up the workstation computer which will
host the Virtual Machine(s) where StarlingX will be deployed.
''''''''''''''''''''''''''''''
Updating Your Operating System
''''''''''''''''''''''''''''''
Before proceeding with the build, ensure your OS is up to date. Youll
first need to update the local database list of available packages:
::
$ sudo apt-get update
'''''''''''''''''''''''''
Install stx-tools project
'''''''''''''''''''''''''
Clone the stx-tools project. Usually youll want to clone it under your
users home directory.
::
$ cd $HOME
$ git clone https://git.starlingx.io/stx-tools
''''''''''''''''''''''''''''''''''''''''
Installing Requirements and Dependencies
''''''''''''''''''''''''''''''''''''''''
Navigate to the stx-tools installation libvirt directory:
::
$ cd $HOME/stx-tools/deployment/libvirt/
Install the required packages:
::
$ bash install_packages.sh
''''''''''''''''''
Disabling Firewall
''''''''''''''''''
Unload firewall and disable firewall on boot:
::
$ sudo ufw disable
Firewall stopped and disabled on system startup
$ sudo ufw status
Status: inactive
-------------------------------
Getting the StarlingX ISO Image
-------------------------------
Follow the instructions from the :ref:`developer-guide` to build a
StarlingX ISO image.
**********
Bare Metal
**********
A bootable USB flash drive containing StarlingX ISO image.
*******************
Virtual Environment
*******************
Copy the StarlingX ISO Image to the stx-tools deployment libvirt project
directory:
::
$ cp <starlingx iso image> $HOME/stx-tools/deployment/libvirt/
------------------
Deployment Options
------------------
- Standard Controller
- :ref:`StarlingX Cloud with Dedicated Storage <dedicated-storage>`
- :ref:`StarlingX Cloud with Controller Storage <controller-storage>`
- All-in-one
- :ref:`StarlingX Cloud Duplex <duplex>`
- :ref:`StarlingX Cloud Simplex <simplex>`
Installation steps for StarlingX are release specific. To install the
latest release of StarlingX, use the :doc:`/installation_guide/2018_10/index`.
To install a previous release of StarlingX, use the installation guide
for your specific release:
.. toctree::
:hidden:
:maxdepth: 1
installation_libvirt_qemu
controller_storage
dedicated_storage
duplex
simplex
/installation_guide/latest/index
/installation_guide/2018_10/index
.. How to add a new release (installer and developer guides):
1. Archive previous release
1. Rename old 'latest' folder to the release name e.g. Year_Month
2. Update links in old 'latest' to use new path e.g.
:doc:`Libvirt/QEMU </installation_guide/latest/installation_libvirt_qemu>`
becomes
:doc:`Libvirt/QEMU </installation_guide/2018_10/installation_libvirt_qemu>`
2. Add new release
1. Add a new 'latest' dir and add the new version - likely this will be a copy of the previous version, with updates applied
2. Make sure the new files have the correct version in the page title and intro sentence e.g. '2018.10.rc1 Installation Guide'
3. Make sure all files in new 'latest' link to the correct versions of supporting docs (do this via doc link, so that it goes to top of page e.g. :doc:`/installation_guide/latest/index`)
4. Make sure the new release index is labeled with the correct version name e.g
.. _index-2019-05:
3. Add the archived version to the toctree on this page
4. If adding a new version *before* it is available (e.g. to begin work on new docs),
make sure page text still directs user to the *actual* current release, not the
future-not-yet-released version.
5. When the release is *actually* available, make sure to update these pages:
- index
- installation guide
- developer guide
- release notes

@ -0,0 +1,974 @@
===============================================
Controller storage deployment guide stx.2019.05
===============================================
.. contents::
:local:
:depth: 1
**NOTE:** The instructions to setup a StarlingX Cloud with Controller
Storage with containerized openstack services in this guide
are under development.
For approved instructions, see the
`StarlingX Cloud with Controller Storage wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandard>`__.
----------------------
Deployment description
----------------------
The Controller Storage deployment option provides a 2x node high availability
controller / storage cluster with:
- A pool of up to seven compute nodes (pool size limit due to the capacity of
the storage function).
- A growth path for storage to the full standard solution with an independent
CEPH storage cluster.
- High availability services running across the controller nodes in either
active/active or active/standby mode.
- Storage function running on top of LVM on a single second disk, DRBD-sync'd
between the controller nodes.
.. figure:: figures/starlingx-deployment-options-controller-storage.png
:scale: 50%
:alt: Controller Storage deployment configuration
*Controller Storage deployment configuration*
A Controller Storage deployment provides protection against overall controller
node and compute node failure:
- On overall controller node failure, all controller high availability services
go active on the remaining healthy controller node.
- On overall compute node failure, virtual machines on failed compute node are
recovered on the remaining healthy compute nodes.
------------------------------------
Preparing controller storage servers
------------------------------------
**********
Bare metal
**********
Required servers:
- Controllers: 2
- Computes: 2 - 100
^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
Controller Storage will be deployed include:
- Minimum processor:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Memory:
- 64 GB controller
- 32 GB compute
- BIOS:
- Hyper-Threading technology: Enabled
- Virtualization technology: Enabled
- VT for directed I/O: Enabled
- CPU power and performance policy: Performance
- CPU C state control: Disabled
- Plug & play BMC detection: Disabled
- Primary disk:
- 500 GB SSD or NVMe controller
- 120 GB (min. 10K RPM) compute
- Additional disks:
- 1 or more 500 GB disks (min. 10K RPM) compute
- Network ports\*
- Management: 10GE controller, compute
- OAM: 10GE controller
- Data: n x 10GE compute
*******************
Virtual environment
*******************
Run the libvirt qemu setup scripts. Setting up virtualized OAM and
management networks:
::
$ bash setup_network.sh
Building XML for definition of virtual servers:
::
$ bash setup_configuration.sh -c controllerstorage -i <starlingx iso image>
The default XML server definitions that are created by the previous script
are:
- controllerstorage-controller-0
- controllerstorage-controller-1
- controllerstorage-compute-0
- controllerstorage-compute-1
^^^^^^^^^^^^^^^^^^^^^^^^^
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^
To power up a virtual server, run the following command:
::
$ sudo virsh start <server-xml-name>
e.g.
::
$ sudo virsh start controllerstorage-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access virtual server consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML for virtual servers in stx-tools repo, deployment/libvirt,
provides both graphical and text consoles.
Access the graphical console in virt-manager by right-click on the
domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.
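For example, to attach to the serial console of the first controller defined
by the setup script:
::
$ sudo virsh console controllerstorage-controller-0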
When booting the controller-0 for the first time, both the serial and
graphical consoles will present the initial configuration menu for the
cluster. One can select serial or graphical console for controller-0.
For the other nodes, however, only serial is used, regardless of which
option is selected.
Open the graphic console on all servers before powering them on to
observe the boot device selection and PXE boot progress. Run "virsh
console $DOMAIN" command promptly after power on to see the initial boot
sequence which follows the boot device selection. One has a few seconds
to do this.
--------------------------------
Installing the controller-0 host
--------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes controller-0.
Procedure:
#. Power on the server that will be controller-0 with the StarlingX ISO
on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.
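If you still need to create the bootable USB flash drive referenced in step 1,
one common approach on a Linux workstation is dd. The ISO file name and target
device below are examples only; verify the device name first, because dd
overwrites it:
::
# /dev/sdX is a placeholder for the USB device; confirm it with lsblk
$ lsblk
$ sudo dd if=bootimage.iso of=/dev/sdX bs=4M status=progress oflag=sync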
*************************
Initializing controller-0
*************************
This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **Standard Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the controller-0 host, select the type of installation
"Standard Controller Configuration".
- **Graphical Console**
- Select the "Graphical Console" as the console to use during
installation.
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the security profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
::
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
::
New password:
Enter the new password again to confirm it:
::
Retype new password:
controller-0 is initialized with StarlingX, and is ready for configuration.
************************
Configuring controller-0
************************
This section describes how to perform the controller-0 configuration
interactively just to bootstrap system with minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the virtual environment, you can accept all the default values
immediately after system date and time.
- For a physical deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:
::
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Accept all the default values immediately after system date and time.
::
...
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05:08: Creating system configuration ... DONE
06:08: Applying controller manifest ... DONE
07:08: Finalize controller configuration ... DONE
08:08: Waiting for service activation ... DONE
Configuration was applied
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions will use the CLI.
------------------------------------
Provisioning controller-0 and system
------------------------------------
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*********************************************
Configuring provider networks at installation
*********************************************
You must set up provider networks at installation so that you can attach
data interfaces and unlock the compute nodes.
Set up one provider network of the vlan type, named providernet-a:
::
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
*************************************
Configuring Cinder on controller disk
*************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid | device_no | device_ | device_ | size_ | available_ | rpm |...
| | de | num | type | gib | gib | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| 004f4c09-2f61-46c5-8def-99b2bdeed83c | /dev/sda | 2048 | HDD | 200.0 | 0.0 | |...
| 89694799-0dd8-4532-8636-c0d8aabfe215 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
Create the 'cinder-volumes' local volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
+-----------------+--------------------------------------+
| Property | Value |
+-----------------+--------------------------------------+
| lvm_vg_name | cinder-volumes |
| vg_state | adding |
| uuid | ece4c755-241c-4363-958e-85e9e3d12917 |
| ihost_uuid | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T03:59:30.685718+00:00 |
| updated_at | None |
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 89694799-0dd8-4532-8636-c0d8aabfe215 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 203776 |
| uuid | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 |
| ihost_uuid | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| idisk_uuid | 89694799-0dd8-4532-8636-c0d8aabfe215 |
| ipv_uuid | None |
| status | Creating |
| created_at | 2018-08-22T04:03:40.761221+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| uuid |...| device_nod |...| type_name | size_mib | status |
| |...| e |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 |...| /dev/sdb1 |...| LVM Physical Volume | 199.0 | Ready |
| |...| |...| | | |
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 060dc47e-bc17-40f4-8f09-5326ef0e86a5 |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 |
| disk_or_part_device_node | /dev/sdb1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name | /dev/sdb1 |
| lvm_vg_name | cinder-volumes |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| created_at | 2018-08-22T04:06:54.008632+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
Enable LVM backend:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed
Wait for the storage backend to leave "configuring" state. Confirm LVM
backend storage is configured:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+------+----------+...
| uuid | name | backend | state | task | services |...
+--------------------------------------+------------+---------+------------+------+----------+...
| 1daf3e5b-4122-459f-9dba-d2e92896e718 | file-store | file | configured | None | glance |...
| a4607355-be7e-4c5c-bf87-c71a0e2ad380 | lvm-store | lvm | configured | None | cinder |...
+--------------------------------------+------------+---------+------------+------+----------+...
**********************
Unlocking controller-0
**********************
You must unlock controller-0 so that you can use it to install the
remaining hosts. On controller-0, acquire Keystone administrative
privileges. Use the system host-unlock command:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is
unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
****************************************
Verifying the controller-0 configuration
****************************************
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Verify that the StarlingX controller services are running:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id | service_name | hostname | state |
+-----+-------------------------------+--------------+----------------+
...
| 1 | oam-ip | controller-0 | enabled-active |
| 2 | management-ip | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 is unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
---------------------------------------
Installing controller-1 / compute hosts
---------------------------------------
After initializing and configuring an active controller, you can add and
configure a backup controller and additional compute hosts. For each
host do the following:
*****************
Initializing host
*****************
Power on the host. In the host console you will see:
::
Waiting for this node to be configured.
Please configure the personality for this node from the
controller node in order to proceed.
***************************************
Updating host hostname and personality
***************************************
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for controller-0 to discover new host, list the host until new
UNKNOWN host shows up in table:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
Use the system host-update to update host personality attribute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 2 personality=controller hostname=controller-1
Or for compute-0:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0
See also: 'system help host-update'.
Unless it is known that the host's configuration can support the installation of
more than one node, it is recommended that the installation and configuration of
each node be serialized. For example, if the entire cluster has its virtual
disks hosted on the host's root disk which happens to be a single rotational
type hard disk, then the host cannot (reliably) support parallel node
installation.
***************
Monitoring host
***************
On controller-0, you can monitor the installation progress by running the system
host-show command for the host periodically. Progress is shown in the
install_state field:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show <host> | grep install
| install_output | text |
| install_state | booting |
| install_state_info | None |
Wait while the host is configured and rebooted. Up to 20 minutes may be required
for a reboot, depending on hardware. When the reboot is complete, the host is
reported as locked, disabled, and online.
*************
Listing hosts
*************
Once all nodes have been installed, configured and rebooted, on controller-0
list the hosts:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | compute-0 | compute | locked | disabled | online |
| 4 | compute-1 | compute | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
-------------------------
Provisioning controller-1
-------------------------
On controller-0, list hosts:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2 | controller-1 | controller | locked | disabled | online |
...
+----+--------------+-------------+----------------+-------------+--------------+
***********************************************
Provisioning network interfaces on controller-1
***********************************************
To list the hardware port names, types, and PCI addresses that have
been discovered:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
Provision the OAM interface for controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
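For example, if system host-port-list reports the OAM port as enp0s3 (the
interface name here is purely illustrative; substitute the name reported for
your hardware):
::
# enp0s3 below is an example port name taken from host-port-list output
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n enp0s3 -c platform --networks oam controller-1 enp0s3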
************************************
Provisioning storage on controller-1
************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-1
+--------------------------------------+-----------+---------+---------+-------+------------+
| uuid | device_no | device_ | device_ | size_ | available_ |
| | de | num | type | gib | gib |
+--------------------------------------+-----------+---------+---------+-------+------------+
| f7ce53db-7843-457e-8422-3c8f9970b4f2 | /dev/sda | 2048 | HDD | 200.0 | 0.0 |
| 70b83394-968e-4f0d-8a99-7985cd282a21 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 |
+--------------------------------------+-----------+---------+---------+-------+------------+
Assign Cinder storage to the physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-1 cinder-volumes
+-----------------+--------------------------------------+
| Property | Value |
+-----------------+--------------------------------------+
| lvm_vg_name | cinder-volumes |
| vg_state | adding |
| uuid | 22d8b94a-200a-4fd5-b1f5-7015ddf10d0b |
| ihost_uuid | 06827025-eacb-45e6-bb88-1a649f7404ec |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T05:33:44.608913+00:00 |
| updated_at | None |
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group based on uuid of the
physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-1 70b83394-968e-4f0d-8a99-7985cd282a21 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 203776 |
| uuid | 16a1c5cb-620c-47a3-be4b-022eafd122ee |
| ihost_uuid | 06827025-eacb-45e6-bb88-1a649f7404ec |
| idisk_uuid | 70b83394-968e-4f0d-8a99-7985cd282a21 |
| ipv_uuid | None |
| status | Creating (on unlock) |
| created_at | 2018-08-22T05:36:42.123770+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-1 --disk 70b83394-968e-4f0d-8a99-7985cd282a21
+--------------------------------------+...+------------+...+-------+--------+----------------------+
| uuid |...| device_nod | ... | size_g | status |
| |...| e | ... | ib | |
+--------------------------------------+...+------------+ ... +--------+----------------------+
| 16a1c5cb-620c-47a3-be4b-022eafd122ee |...| /dev/sdb1 | ... | 199.0 | Creating (on unlock) |
| |...| | ... | | |
| |...| | ... | | |
+--------------------------------------+...+------------+...+--------+----------------------+
Add the partition to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-1 cinder-volumes 16a1c5cb-620c-47a3-be4b-022eafd122ee
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 01d79ed2-717f-428e-b9bc-23894203b35b |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 16a1c5cb-620c-47a3-be4b-022eafd122ee |
| disk_or_part_device_node | /dev/sdb1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name | /dev/sdb1 |
| lvm_vg_name | cinder-volumes |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 06827025-eacb-45e6-bb88-1a649f7404ec |
| created_at | 2018-08-22T05:44:34.715289+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
**********************
Unlocking controller-1
**********************
Unlock controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while the controller-1 is rebooted. Up to 10 minutes may be required for a
reboot, depending on hardware.
**REMARK:** controller-1 will remain in 'degraded' state until data-syncing is
complete. The duration is dependent on the virtualization host's configuration,
i.e., the number and configuration of physical disks used to host the nodes'
virtual disks. Also, the management network is expected to have a link capacity
of 10000 Mbps (1000 Mbps is not supported due to excessive data-sync time). Use
'fm alarm-list' to confirm status.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
...
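To check whether any data-syncing alarms remain (output omitted here; the
alarms clear once synchronization completes):
::
[wrsroot@controller-0 ~(keystone_admin)]$ fm alarm-list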
---------------------------
Provisioning a compute host
---------------------------
You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each compute host do the following:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*************************************************
Provisioning network interfaces on a compute host
*************************************************
On controller-0, list the hardware port names, types, and PCI addresses that
have been discovered:
- **Only in virtual environment**: Ensure that the interface used is
one of those attached to the host bridge with model type "virtio" (i.e.,
eth1000 and eth1001). Emulated devices with model type "e1000" will
not work for provider networks:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0
Provision the data interface for compute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
***************************
VSwitch virtual environment
***************************
**Only in virtual environment**. If the compute host has more than 4 CPUs, the
system will auto-configure the vswitch to use 2 cores. However, some virtual
environments do not properly support the multi-queue feature required in a
multi-CPU environment. Therefore, run the following command to reduce the
vswitch cores to 1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1
+--------------------------------------+-------+-----------+-------+--------+...
| uuid | log_c | processor | phy_c | thread |...
| | ore | | ore | |...
+--------------------------------------+-------+-----------+-------+--------+...
| a3b5620c-28b1-4fe0-9e97-82950d8582c2 | 0 | 0 | 0 | 0 |...
| f2e91c2b-bfc5-4f2a-9434-bceb7e5722c3 | 1 | 0 | 1 | 0 |...
| 18a98743-fdc4-4c0c-990f-3c1cb2df8cb3 | 2 | 0 | 2 | 0 |...
| 690d25d2-4f99-4ba1-a9ba-0484eec21cc7 | 3 | 0 | 3 | 0 |...
+--------------------------------------+-------+-----------+-------+--------+...
**************************************
Provisioning storage on a compute host
**************************************
Review the available disk space and capacity, and obtain the UUID(s) of
the physical disk(s) to be used for nova-local:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-----------+---------+---------+-------+------------+...
| uuid | device_no | device_ | device_ | size_ | available_ |...
| | de | num | type | gib | gib |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
| 8a9d2c09-d3a7-4781-bd06-f7abf603713a | /dev/sda | 2048 | HDD | 200.0 | 172.164 |...
| 5ad61bd1-795a-4a76-96ce-39433ef55ca5 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
Create the 'nova-local' local volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 18898640-c8b7-4bbd-a323-4bf3e35fee4d |
| ihost_uuid | da1cbe93-cec5-4f64-b211-b277e4860ab3 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T08:00:51.945160+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Add the physical disk to the 'nova-local' volume group as a physical volume,
using the UUID of the physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local 5ad61bd1-795a-4a76-96ce-39433ef55ca5
+--------------------------+--------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------+
| uuid | 4c81745b-286a-4850-ba10-305e19cee78c |
| pv_state | adding |
| pv_type | disk |
| disk_or_part_uuid | 5ad61bd1-795a-4a76-96ce-39433ef55ca5 |
| disk_or_part_device_node | /dev/sdb |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0 |
| lvm_pv_name | /dev/sdb |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | da1cbe93-cec5-4f64-b211-b277e4860ab3 |
| created_at | 2018-08-22T08:07:14.205690+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------+
Specify the local storage space as local copy-on-write image volumes in
nova-local:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b image -s 10240 compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 18898640-c8b7-4bbd-a323-4bf3e35fee4d |
| ihost_uuid | da1cbe93-cec5-4f64-b211-b277e4860ab3 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T08:00:51.945160+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
************************
Unlocking a compute host
************************
On controller-0, use the system host-unlock command to unlock the compute node:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while the compute node is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware. The host is rebooted, and
its availability state is reported as in-test, followed by
unlocked/enabled.
-------------------
System health check
-------------------
***********************
Listing StarlingX nodes
***********************
On controller-0, after a few minutes, all nodes should be reported as
unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | compute | unlocked | enabled | available |
| 4 | compute-1 | compute | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
*****************
System alarm list
*****************
When all nodes are unlocked, enabled, and available, check 'fm alarm-list' for
any issues.
Your StarlingX deployment is now up and running with 2x HA controllers with
Cinder storage, 2x compute nodes, and all OpenStack services running. You can
now proceed with the standard OpenStack APIs, CLIs, and/or Horizon to load Glance
images, configure Nova flavors, configure Neutron networks, and launch Nova
virtual machines.
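As a minimal sketch of those next steps (the image file, flavor, network, and
server names below are illustrative placeholders and not part of this guide),
the OpenStack CLI can be used from the active controller:
::
[wrsroot@controller-0 ~(keystone_admin)]$ openstack image create --disk-format qcow2 --container-format bare --file cirros.qcow2 cirros
[wrsroot@controller-0 ~(keystone_admin)]$ openstack flavor create --vcpus 1 --ram 2048 --disk 20 m1.small
[wrsroot@controller-0 ~(keystone_admin)]$ openstack network create net-a
[wrsroot@controller-0 ~(keystone_admin)]$ openstack subnet create --network net-a --subnet-range 192.168.101.0/24 subnet-a
[wrsroot@controller-0 ~(keystone_admin)]$ openstack server create --image cirros --flavor m1.small --network net-a vm-1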
----------------------
Deployment terminology
----------------------
.. include:: deployment_terminology.rst
:start-after: incl-standard-controller-deployment-terminology:
:end-before: incl-standard-controller-deployment-terminology-end:
.. include:: deployment_terminology.rst
:start-after: incl-common-deployment-terminology:
:end-before: incl-common-deployment-terminology-end:

View File

@ -0,0 +1,918 @@
==============================================
Dedicated storage deployment guide stx.2019.05
==============================================
.. contents::
:local:
:depth: 1
**NOTE:** The instructions to set up a StarlingX Cloud with Dedicated
Storage with containerized OpenStack services in this guide
are under development.
For approved instructions, see the
`StarlingX Cloud with Dedicated Storage wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandardStorage>`__.
----------------------
Deployment description
----------------------
Cloud with Dedicated Storage is the standard StarlingX deployment option with
independent controller, compute, and storage nodes.
This deployment option provides the maximum capacity for a single region
deployment, with a supported growth path to a multi-region deployment option by
adding a secondary region.
.. figure:: figures/starlingx-deployment-options-dedicated-storage.png
:scale: 50%
:alt: Dedicated Storage deployment configuration
*Dedicated Storage deployment configuration*
Cloud with Dedicated Storage includes:
- 2x node HA controller cluster with HA services running across the controller
nodes in either active/active or active/standby mode.
- Pool of up to 100 compute nodes for hosting virtual machines and virtual
networks.
- 2-9x node HA Ceph storage cluster for hosting virtual volumes, images, and
object storage that supports a replication factor of 2 or 3.
Storage nodes are deployed in replication groups of 2 or 3. Replication
of objects is done strictly within the replication group.
Supports up to 4 groups of 2x storage nodes, or up to 3 groups of 3x storage
nodes.
-----------------------------------
Preparing dedicated storage servers
-----------------------------------
**********
Bare metal
**********
Required Servers:
- Controllers: 2
- Storage
- Replication factor of 2: 2 - 8
- Replication factor of 3: 3 - 9
- Computes: 2 - 100
^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
Dedicated Storage will be deployed, include:
- Minimum processor:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Memory:
- 64 GB controller, storage
- 32 GB compute
- BIOS:
- Hyper-Threading technology: Enabled
- Virtualization technology: Enabled
- VT for directed I/O: Enabled
- CPU power and performance policy: Performance
- CPU C state control: Disabled
- Plug & play BMC detection: Disabled
- Primary disk:
- 500 GB SSD or NVMe controller
- 120 GB (min. 10K RPM) compute and storage
- Additional disks:
- 1 or more 500 GB disks (min. 10K RPM) storage, compute
- Network ports\*
- Management: 10GE controller, storage, compute
- OAM: 10GE controller
- Data: n x 10GE compute
*******************
Virtual environment
*******************
Run the libvirt/QEMU setup scripts. Set up the virtualized OAM and
management networks:
::
$ bash setup_network.sh
Build the XML definitions for the virtual servers:
::
$ bash setup_configuration.sh -c dedicatedstorage -i <starlingx iso image>
The default XML server definitions that are created by the previous script
are:
- dedicatedstorage-controller-0
- dedicatedstorage-controller-1
- dedicatedstorage-compute-0
- dedicatedstorage-compute-1
- dedicatedstorage-storage-0
- dedicatedstorage-storage-1
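As an optional check, you can confirm that the domains were defined by listing
them with virsh:
::
$ sudo virsh list --all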
^^^^^^^^^^^^^^^^^^^^^^^^^
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^
To power up a virtual server, run the following command:
::
$ sudo virsh start <server-xml-name>
e.g.
::
$ sudo virsh start dedicatedstorage-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access virtual server consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML definitions for the virtual servers in the stx-tools repo, under
deployment/libvirt, provide both graphical and text consoles.
Access the graphical console in virt-manager by right-clicking on the
domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.
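For example, assuming the default server definitions created above, the console
of the first controller can be opened with:
::
$ sudo virsh console dedicatedstorage-controller-0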
When booting controller-0 for the first time, both the serial and
graphical consoles will present the initial configuration menu for the
cluster. You can select either the serial or the graphical console for
controller-0. For the other nodes, however, only the serial console is used,
regardless of which option is selected.
Open the graphical console on all servers before powering them on to
observe the boot device selection and PXE boot progress. Run the "virsh
console $DOMAIN" command promptly after power on to see the initial boot
sequence which follows the boot device selection. You only have a few seconds
to do this.
--------------------------------
Installing the controller-0 host
--------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes controller-0.
Procedure:
#. Power on the server that will be controller-0 with the StarlingX ISO
on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.
*************************
Initializing controller-0
*************************
This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **Standard Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the controller-0 host, select the type of installation
"Standard Controller Configuration".
- **Graphical Console**
- Select the "Graphical Console" as the console to use during
installation.
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the security profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
::
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
::
New password:
Enter the new password again to confirm it:
::
Retype new password:
controller-0 is initialized with StarlingX, and is ready for configuration.
************************
Configuring controller-0
************************
This section describes how to perform the controller-0 configuration
interactively, just to bootstrap the system with the minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the virtual environment, you can accept all the default values
immediately after system date and time.
- For a physical deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:
::
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Accept all the default values immediately after system date and time:
::
...
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05:08: Creating system configuration ... DONE
06:08: Applying controller manifest ... DONE
07:08: Finalize controller configuration ... DONE
08:08: Waiting for service activation ... DONE
Configuration was applied
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
After the config_controller bootstrap configuration, the REST API, CLI, and
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions will use the CLI.
------------------------------------
Provisioning controller-0 and system
------------------------------------
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*********************************************
Configuring provider networks at installation
*********************************************
You must set up provider networks at installation so that you can attach
data interfaces and unlock the compute nodes.
Set up one provider network of the vlan type, named providernet-a:
::
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
*********************************************
Adding a Ceph storage backend at installation
*********************************************
Add the Ceph storage backend:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova
WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.
By confirming this operation, Ceph backend will be created.
A minimum of 2 storage nodes are required to complete the configuration.
Please set the 'confirmed' field to execute this operation for the ceph backend.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova --confirmed
System configuration has changed.
Please follow the administrator guide to complete configuring the system.
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| uuid | name | backend | state | task | services |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph | configuring | applying-manifests | cinder, |...
| | | | | | glance, |...
| | | | | | swift |...
| | | | | | nova |...
| | | | | | |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file | configured | None | glance |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
Confirm that the Ceph storage backend is configured:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| uuid | name | backend | state | task | services |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph | configured | provision-storage | cinder, |...
| | | | | | glance, |...
| | | | | | swift |...
| | | | | | nova |...
| | | | | | |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file | configured | None | glance |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
**********************
Unlocking controller-0
**********************
You must unlock controller-0 so that you can use it to install the remaining
hosts. Use the system host-unlock command:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is unavailable, and
any ssh connections are dropped. To monitor the progress of the reboot, use the
controller-0 console.
****************************************
Verifying the controller-0 configuration
****************************************
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Verify that the StarlingX controller services are running:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id | service_name | hostname | state |
+-----+-------------------------------+--------------+----------------+
...
| 1 | oam-ip | controller-0 | enabled-active |
| 2 | management-ip | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 is unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
*******************************
Provisioning filesystem storage
*******************************
List the controller file systems with status and current sizes:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| UUID | FS Name | Size | Logical Volume | Replicated | State |
| | | in | | | |
| | | GiB | | | |
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| 4e31c4ea-6970-4fc6-80ba-431fdcdae15f | backup | 5 | backup-lv | False | None |
| 6c689cd7-2bef-4755-a2fb-ddd9504692f3 | database | 5 | pgsql-lv | True | None |
| 44c7d520-9dbe-41be-ac6a-5d02e3833fd5 | extension | 1 | extension-lv | True | None |
| 809a5ed3-22c0-4385-9d1e-dd250f634a37 | glance | 8 | cgcs-lv | True | None |
| 9c94ef09-c474-425c-a8ba-264e82d9467e | gnocchi | 5 | gnocchi-lv | False | None |
| 895222b3-3ce5-486a-be79-9fe21b94c075 | img-conversions | 8 | img-conversions-lv | False | None |
| 5811713f-def2-420b-9edf-6680446cd379 | scratch | 8 | scratch-lv | False | None |
+--------------------------------------+-----------------+------+--------------------+------------+-------+
Modify the file system sizes:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12
-------------------------------------------------------
Installing controller-1 / storage hosts / compute hosts
-------------------------------------------------------
After initializing and configuring an active controller, you can add and
configure a backup controller and additional compute or storage hosts.
For each host do the following:
*****************
Initializing host
*****************
Power on the host. In the host console you will see:
::
Waiting for this node to be configured.
Please configure the personality for this node from the
controller node in order to proceed.
**********************************
Updating host name and personality
**********************************
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for controller-0 to discover the new host. List the hosts until the new
UNKNOWN host shows up in the table:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
Use the system host-add command to update the host personality attribute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n <controller_name> -p <personality> -m <mac address>
**REMARK:** use the MAC address of the specific network interface the host
will be connected on, e.g. the OAM network interface for the controller-1
node, or the management network interface for compute and storage nodes.
In a virtual environment, check the **NIC** MAC address in the virt-manager
GUI: open *"Show virtual hardware details"* (the **i** icon in the main
banner), select the NIC attached to the specific bridge, and read the *MAC
address* field.
***************
Monitoring host
***************
On controller-0, you can monitor the installation progress by running
the system host-show command for the host periodically. Progress is
shown in the install_state field.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show <host> | grep install
| install_output | text |
| install_state | booting |
| install_state_info | None |
Wait while the host is configured and rebooted. Up to 20 minutes may be
required for a reboot, depending on hardware. When the reboot is
complete, the host is reported as locked, disabled, and online.
*************
Listing hosts
*************
Once all nodes have been installed, configured and rebooted, on
controller-0 list the hosts:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | compute-0 | compute | locked | disabled | online |
| 4 | compute-1 | compute | locked | disabled | online |
| 5 | storage-0 | storage | locked | disabled | online |
| 6 | storage-1 | storage | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
-------------------------
Provisioning controller-1
-------------------------
On controller-0, list hosts:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2 | controller-1 | controller | locked | disabled | online |
...
+----+--------------+-------------+----------------+-------------+--------------+
***********************************************
Provisioning network interfaces on controller-1
***********************************************
To list the hardware port names, types, and PCI addresses that have
been discovered:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
Provision the OAM interface for controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
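For example, if the port reported by system host-port-list for the OAM
connection is enp0s3 (a placeholder name; use the port discovered on your
controller-1):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n enp0s3 -c platform --networks oam controller-1 enp0s3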
**********************
Unlocking controller-1
**********************
Unlock controller-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while controller-1 is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware.
**REMARK:** controller-1 will remain in the degraded state until
data syncing is complete. The duration depends on the
virtualization host's configuration, i.e., the number and configuration
of physical disks used to host the nodes' virtual disks. Also, the
management network is expected to have a link capacity of 10000 Mbps
(1000 Mbps is not supported due to excessive data-sync time). Use
'fm alarm-list' to confirm status.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
...
-------------------------
Provisioning storage host
-------------------------
**************************************
Provisioning storage on a storage host
**************************************
List the available physical disks in storage-N:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid | device_no | device_ | device_ | size_ | available_ | rpm |...
| | de | num | type | gib | gib | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined |...
| | | | | 968 | | |...
| | | | | | | |...
| c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb | 2064 | HDD | 100.0 | 99.997 | Undetermined |...
| | | | | | | |...
| | | | | | | |...
| 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc  | 2080    | HDD     | 4.0   | 3.997      | Undetermined |...
| | | | | | | |...
| | | | | | | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
List the available storage tiers in the Ceph cluster:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster
+--------------------------------------+---------+--------+--------------------------------------+
| uuid | name | status | backend_using |
+--------------------------------------+---------+--------+--------------------------------------+
| 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
+--------------------------------------+---------+--------+--------------------------------------+
Create a storage function (i.e. OSD) in storage-N. At least two unlocked and
enabled hosts with monitors are required. Candidates are: controller-0,
controller-1, and storage-0.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 c7cc08e6-ff18-4229-a79d-a04187de7b8d
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| journal_location | 34989bad-67fc-49ea-9e9c-38ca4be95fad |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 34989bad-67fc-49ea-9e9c-38ca4be95fad |
| ihost_uuid | 4a5ed4fc-1d2b-4607-acf9-e50a3759c994 |
| idisk_uuid | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
| tier_uuid | 4398d910-75e4-4e99-a57f-fc147fb87bdb |
| tier_name | storage |
| created_at | 2018-08-16T00:39:44.409448+00:00 |
| updated_at | 2018-08-16T00:40:07.626762+00:00 |
+------------------+--------------------------------------------------+
Create the remaining available storage functions (OSDs) in storage-N,
based on the number of available physical disks.
List the OSDs:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-list storage-0
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| uuid | function | osdid | capabilities | idisk_uuid |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd | 0 | {} | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
Unlock storage-N:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-0
**REMARK:** Before you continue, repeat the storage provisioning steps on the
remaining storage nodes.
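For example, for storage-1 the same sequence applies; take the disk UUID
placeholder below from the system host-disk-list output for storage-1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-1
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-1 <disk uuid>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-1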
---------------------------
Provisioning a compute host
---------------------------
You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each compute host do the following:
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*************************************************
Provisioning network interfaces on a compute host
*************************************************
On controller-0, list the hardware port names, types, and PCI addresses that
have been discovered:
- **Only in virtual environment**: Ensure that the interface used is
one of those attached to the host bridge with model type "virtio" (i.e.,
eth1000 and eth1001). Emulated devices with model type "e1000" will
not work for provider networks.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0
Provision the data interface for compute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
***************************
VSwitch virtual environment
***************************
**Only in virtual environment**. If the compute host has more than 4 CPUs,
the system will auto-configure the vswitch to use 2 cores. However, some
virtual environments do not properly support the multi-queue feature required
in a multi-CPU environment. Therefore, run the following command to reduce the
vswitch cores to 1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1
+--------------------------------------+-------+-----------+-------+--------+...
| uuid | log_c | processor | phy_c | thread |...
| | ore | | ore | |...
+--------------------------------------+-------+-----------+-------+--------+...
| a3b5620c-28b1-4fe0-9e97-82950d8582c2 | 0 | 0 | 0 | 0 |...
| f2e91c2b-bfc5-4f2a-9434-bceb7e5722c3 | 1 | 0 | 1 | 0 |...
| 18a98743-fdc4-4c0c-990f-3c1cb2df8cb3 | 2 | 0 | 2 | 0 |...
| 690d25d2-4f99-4ba1-a9ba-0484eec21cc7 | 3 | 0 | 3 | 0 |...
+--------------------------------------+-------+-----------+-------+--------+...
**************************************
Provisioning storage on a compute host
**************************************
Review the available disk space and capacity, and obtain the UUID(s) of
the physical disk(s) to be used for nova-local:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-----------+---------+---------+-------+------------+...
| uuid | device_no | device_ | device_ | size_ | available_ |...
| | de | num | type | gib | gib |...
+--------------------------------------+-----------+---------+---------+-------+------------+
| 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda | 2048 | HDD | 292. | 265.132 |...
| a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb | 2064 | HDD | 100.0 | 99.997 |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
Create the 'nova-local' local volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 37f4c178-f0fe-422d-b66e-24ae057da674 |
| ihost_uuid | f56921a6-8784-45ac-bd72-c0372cd95964 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-16T00:57:46.340454+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Add the physical disk to the 'nova-local' volume group as a physical volume,
using the UUID of the physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local a639914b-23a9-4071-9f25-a5f1960846cc
+--------------------------+--------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------+
| uuid | 56fdb63a-1078-4394-b1ce-9a0b3bff46dc |
| pv_state | adding |
| pv_type | disk |
| disk_or_part_uuid | a639914b-23a9-4071-9f25-a5f1960846cc |
| disk_or_part_device_node | /dev/sdb |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| lvm_pv_name | /dev/sdb |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | f56921a6-8784-45ac-bd72-c0372cd95964 |
| created_at | 2018-08-16T01:05:59.013257+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------+
Remote RAW Ceph-backed storage will be used to back the nova-local ephemeral
volumes:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local
************************
Unlocking a compute host
************************
On controller-0, use the system host-unlock command to unlock
compute-N:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while the compute-N is rebooted. Up to 10 minutes may be required
for a reboot, depending on hardware. The host is rebooted, and its
availability state is reported as in-test, followed by unlocked/enabled.
-------------------
System health check
-------------------
***********************
Listing StarlingX nodes
***********************
On controller-0, after a few minutes, all nodes should be reported as
unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | compute | unlocked | enabled | available |
| 4 | compute-1 | compute | unlocked | enabled | available |
| 5 | storage-0 | storage | unlocked | enabled | available |
| 6 | storage-1 | storage | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
******************************
Checking StarlingX Ceph health
******************************
::
[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
cluster e14ebfd6-5030-4592-91c3-7e6146b3c910
health HEALTH_OK
monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.204:6789/0}
election epoch 22, quorum 0,1,2 controller-0,controller-1,storage-0
osdmap e84: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v168: 1600 pgs, 5 pools, 0 bytes data, 0 objects
87444 kB used, 197 GB / 197 GB avail
1600 active+clean
controller-0:~$
*****************
System alarm list
*****************
When all nodes are unlocked, enabled, and available, check 'fm alarm-list' for
any issues.
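For example, from the active controller (an empty table means there are no
active alarms):
::
[wrsroot@controller-0 ~(keystone_admin)]$ fm alarm-list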
Your StarlingX deployment is now up and running with 2x HA controllers, 2x
compute nodes, 2x storage nodes providing Ceph storage, and all OpenStack
services running. You can now proceed with the standard OpenStack APIs, CLIs,
and/or Horizon to load Glance images, configure Nova flavors, configure
Neutron networks, and launch Nova virtual machines.
----------------------
Deployment terminology
----------------------
.. include:: deployment_terminology.rst
:start-after: incl-standard-controller-deployment-terminology:
:end-before: incl-standard-controller-deployment-terminology-end:
.. include:: deployment_terminology.rst
:start-after: incl-dedicated-storage-deployment-terminology:
:end-before: incl-dedicated-storage-deployment-terminology-end:
.. include:: deployment_terminology.rst
:start-after: incl-common-deployment-terminology:
:end-before: incl-common-deployment-terminology-end:

View File

@ -0,0 +1,119 @@
.. _incl-simplex-deployment-terminology:
**All-in-one controller node**
A single physical node that provides a controller function, compute
function, and storage function.
.. _incl-simplex-deployment-terminology-end:
.. _incl-standard-controller-deployment-terminology:
**Controller node / function**
A node that runs the cloud control functions for managing cloud resources.
- Runs all OpenStack control functions (e.g. managing images, virtual
volumes, virtual networks, and virtual machines).
- Can be part of a two-node HA control node cluster for running control
functions either active/active or active/standby.
**Compute (& network) node / function**
A node that hosts applications in virtual machines using compute resources
such as CPU, memory, and disk.
- Runs a virtual switch for realizing virtual networks.
- Provides L3 routing and NET services.
.. _incl-standard-controller-deployment-terminology-end:
.. _incl-dedicated-storage-deployment-terminology:
**Storage node / function**
A node that contains a set of disks (e.g. SATA, SAS, SSD, and/or NVMe).
- Runs the Ceph distributed storage software.
- Part of an HA multi-node Ceph storage cluster supporting a replication
factor of two or three, journal caching, and class tiering.
- Provides HA persistent storage for images, virtual volumes
(i.e. block storage), and object storage.
.. _incl-dedicated-storage-deployment-terminology-end:
.. _incl-common-deployment-terminology:
**OAM network**
The network on which all external StarlingX platform APIs are exposed
(i.e. REST APIs, Horizon web server, SSH, and SNMP), typically 1GE.
Only controller type nodes are required to be connected to the OAM
network.
**Management network**
A private network (i.e. not connected externally), typically 10GE,
used for the following:
- Internal OpenStack / StarlingX monitoring and control.
- VM I/O access to a storage cluster.
All nodes are required to be connected to the management network.
**Data network(s)**
Networks on which the OpenStack / Neutron provider networks are realized
and become the VM tenant networks.
Only compute type and all-in-one type nodes are required to be connected
to the data network(s); these node types require one or more interface(s)
on the data network(s).
**IPMI network**
An optional network on which IPMI interfaces of all nodes are connected.
The network must be reachable using L3/IP from the controller's OAM
interfaces.
You can optionally connect all node types to the IPMI network.
**PXEBoot network**
An optional network for controllers to boot/install other nodes over the
network.
By default, controllers use the management network for the boot/install of
other nodes in the OpenStack cloud. If this optional network is used, all node
types are required to be connected to the PXEBoot network.
A PXEBoot network is required for a variety of special case situations:
- Cases where the management network must be IPv6:
- IPv6 does not support PXEBoot. Therefore, an IPv4 PXEBoot network must be
configured.
- Cases where the management network must be VLAN tagged:
- Most server BIOSes do not support PXE booting over tagged networks.
Therefore, you must configure an untagged PXEBoot network.
- Cases where a management network must be shared across regions but
individual regions' controllers want to only network boot/install nodes
of their own region:
- You must configure separate, per-region PXEBoot networks.
**Infra network**
A deprecated optional network that was historically used for access to the
storage cluster.
If this optional network is used, all node types are required to be
connected to the INFRA network.
**Node interfaces**
All nodes' network interfaces can, in general, optionally be either:
- Untagged single port.
- Untagged two-port LAG and optionally split between redundant L2 switches
running vPC (Virtual Port-Channel), also known as multichassis
EtherChannel (MEC).
- VLAN on either single-port ETH interface or two-port LAG interface.
.. _incl-common-deployment-terminology-end:

File diff suppressed because it is too large.


View File

@ -0,0 +1,288 @@
==============================
Installation guide stx.2019.05
==============================
This is the installation guide for release stx.2019.05. If an installation
guide is needed for a previous release, review the
:doc:`installation guides for previous releases </installation_guide/index>`.
------------
Introduction
------------
StarlingX may be installed in:
- **Bare metal**: Real deployments of StarlingX are only supported on
physical servers.
- **Virtual environment**: It should only be used for evaluation or
development purposes.
StarlingX installed in virtual environments has two options:
- :doc:`Libvirt/QEMU </installation_guide/latest/installation_libvirt_qemu>`
- VirtualBox
------------
Requirements
------------
Different use cases require different configurations.
**********
Bare metal
**********
The minimum requirements for the physical servers where StarlingX might
be deployed include:
- **Controller hosts**
- Minimum processor is:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
cores/socket
- Minimum memory: 64 GB
- Hard drives:
- Primary hard drive, minimum 500 GB for OS and system databases.
- Secondary hard drive, minimum 500 GB for persistent VM storage.
- 2 physical Ethernet interfaces: OAM and MGMT network.
- USB boot support.
- PXE boot support.
- **Storage hosts**
- Minimum processor is:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
cores/socket.
- Minimum memory: 64 GB.
- Hard drives:
- Primary hard drive, minimum 500 GB for OS.
- 1 or more additional hard drives for CEPH OSD storage, and
- Optionally 1 or more SSD or NVMe drives for CEPH journals.
- 1 physical Ethernet interface: MGMT network
- PXE boot support.
- **Compute hosts**
- Minimum processor is:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
cores/socket.
- Minimum memory: 32 GB.
- Hard drives:
- Primary hard drive, minimum 500 GB for OS.
- 1 or more additional hard drives for ephemeral VM storage.
- 2 or more physical Ethernet interfaces: MGMT network and 1 or more
provider networks.
- PXE boot support.
- **All-In-One Simplex or Duplex, controller + compute hosts**
- Minimum processor is:
- Typical hardware form factor:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Low cost / low power hardware form factor
- Single-CPU Intel Xeon D-15xx family, 8 cores
- Minimum memory: 64 GB.
- Hard drives:
- Primary hard drive, minimum 500 GB SSD or NVMe.
- 0 or more 500 GB disks (min. 10K RPM).
- Network ports:
**NOTE:** Duplex and Simplex configurations require one or more data
ports.
The Duplex configuration requires a management port.
- Management: 10GE (Duplex only)
- OAM: 10GE
- Data: n x 10GE
The recommended minimum requirements for the physical servers are
described later in each StarlingX deployment guide.
^^^^^^^^^^^^^^^^^^^^^^^^
NVMe drive as boot drive
^^^^^^^^^^^^^^^^^^^^^^^^
To use a Non-Volatile Memory Express (NVMe) drive as the boot drive for any of
your nodes, you must configure your host and adjust kernel parameters during
installation:
- Configure the host to be in UEFI mode.
- Edit the kernel boot parameters. After you are presented with the StarlingX
ISO boot options and have selected the preferred installation option
(e.g. Standard Configuration / All-in-One Controller Configuration), press the
TAB key to edit the kernel boot parameters. Change the **boot_device** and
**rootfs_device** from the default **sda** to the correct device
name for the NVMe drive (e.g. "nvme0n1").
::
vmlinuz rootwait console=tty0 inst.text inst.stage2=hd:LABEL=oe_iso_boot
inst.ks=hd:LABEL=oe_iso_boot:/smallsystem_ks.cfg boot_device=nvme0n1
rootfs_device=nvme0n1 biosdevname=0 usbcore.autosuspend=-1 inst.gpt
security_profile=standard user_namespace.enable=1 initrd=initrd.img
*******************
Virtual environment
*******************
The recommended minimum requirements for the workstation, hosting the
virtual machine(s) where StarlingX will be deployed, include:
^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Processor: x86_64 is the only supported architecture, with hardware
virtualization extensions enabled in the BIOS
- Cores: 8 (4 with careful monitoring of CPU load)
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Two network adapters with active Internet connection
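As a quick, optional check (not part of the official procedure), you can
confirm that the processor exposes hardware virtualization extensions; a
non-zero count indicates VT-x or AMD-V support:
::
$ egrep -c '(vmx|svm)' /proc/cpuinfo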
^^^^^^^^^^^^^^^^^^^^^
Software requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit
- Proxy settings configured (if applicable)
- Git
- KVM/VirtManager
- Libvirt library
- QEMU full-system emulation binaries
- stx-tools project
- StarlingX ISO image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deployment environment setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section describes how to set up the workstation computer which will
host the virtual machine(s) where StarlingX will be deployed.
''''''''''''''''''''''''''''''
Updating your operating system
''''''''''''''''''''''''''''''
Before proceeding with the build, ensure your OS is up to date. You'll
first need to update the local database list of available packages:
::
$ sudo apt-get update
'''''''''''''''''''''''''
Install stx-tools project
'''''''''''''''''''''''''
Clone the stx-tools project. Usually you'll want to clone it under your
user's home directory.
::
$ cd $HOME
$ git clone https://git.starlingx.io/stx-tools
''''''''''''''''''''''''''''''''''''''''
Installing requirements and dependencies
''''''''''''''''''''''''''''''''''''''''
Navigate to the stx-tools installation libvirt directory:
::
$ cd $HOME/stx-tools/deployment/libvirt/
Install the required packages:
::
$ bash install_packages.sh
''''''''''''''''''
Disabling firewall
''''''''''''''''''
Unload firewall and disable firewall on boot:
::
$ sudo ufw disable
Firewall stopped and disabled on system startup
$ sudo ufw status
Status: inactive
-------------------------------
Getting the StarlingX ISO image
-------------------------------
Follow the instructions from the :doc:`/developer_guide/2018_10/index` to build a
StarlingX ISO image.
**********
Bare metal
**********
A bootable USB flash drive containing the StarlingX ISO image.
*******************
Virtual environment
*******************
Copy the StarlingX ISO image to the stx-tools deployment/libvirt project
directory:
::
$ cp <starlingx iso image> $HOME/stx-tools/deployment/libvirt/
------------------
Deployment options
------------------
- Standard controller
- :doc:`StarlingX Cloud with Dedicated Storage </installation_guide/latest/dedicated_storage>`
- :doc:`StarlingX Cloud with Controller Storage </installation_guide/latest/controller_storage>`
- All-in-one
- :doc:`StarlingX Cloud Duplex </installation_guide/latest/duplex>`
- :doc:`StarlingX Cloud Simplex </installation_guide/latest/simplex>`
.. toctree::
:hidden:
installation_libvirt_qemu
controller_storage
dedicated_storage
duplex
simplex

View File

@ -1,13 +1,11 @@
.. _Installation-libvirt-qemu:
=====================================
Installation libvirt qemu stx.2019.05
=====================================
=========================
Installation libvirt qemu
=========================
Installation for StarlingX using Libvirt/QEMU virtualization.
Installation for StarlingX stx.2019.05 using Libvirt/QEMU virtualization.
---------------------
Hardware Requirements
Hardware requirements
---------------------
A workstation computer with:
@ -15,28 +13,27 @@ A workstation computer with:
- Processor: x86_64 only supported architecture with BIOS enabled
hardware virtualization extensions
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Hard disk: 500GB HDD
- Network: One network adapter with active Internet connection
---------------------
Software Requirements
Software requirements
---------------------
A workstation computer with:
- Operating System: This process is known to work on Ubuntu 16.04 and
is likely to work on other Linux OS's with some appropriate
adjustments.
- Operating system: This process is known to work on Ubuntu 16.04 and
is likely to work on other Linux OS's with some appropriate adjustments.
- Proxy settings configured (if applies)
- Git
- KVM/VirtManager
- Libvirt Library
- QEMU Full System Emulation Binaries
- Libvirt library
- QEMU full-system emulation binaries
- stx-tools project
- StarlingX ISO Image
- StarlingX ISO image
----------------------------
Deployment Environment Setup
Deployment environment setup
----------------------------
*************
@ -76,7 +73,7 @@ This rc file shows the defaults baked into the scripts:
*************************
Install stx-tools Project
Install stx-tools project
*************************
Clone the stx-tools project into a working directory.
@ -102,7 +99,7 @@ If you created a configuration, load it from stxcloud.rc:
****************************************
Installing Requirements and Dependencies
Installing requirements and dependencies
****************************************
Install the required packages and configure QEMU. This only needs to be
@ -115,7 +112,7 @@ time):
******************
Disabling Firewall
Disabling firewall
******************
Unload firewall and disable firewall on boot:
@ -127,7 +124,7 @@ Unload firewall and disable firewall on boot:
******************
Configure Networks
Configure networks
******************
Configure the network bridges using setup_network.sh before doing
@ -148,11 +145,11 @@ There is also a script cleanup_network.sh that will remove networking
configuration from libvirt.
*********************
Configure Controllers
Configure controllers
*********************
One script exists for building different StarlingX cloud
configurations: setup_configuration.sh.
One script exists for building different StarlingX cloud configurations:
setup_configuration.sh.
The script uses the cloud configuration with the -c option:
@ -193,15 +190,15 @@ Tear down the VMs using destroy_configuration.sh.
Continue
--------
Pick up the installation in one of the existing guides at the
'Initializing Controller-0' step.
Pick up the installation in one of the existing guides at the initializing
controller-0 step.
- Standard Controller
- Standard controller
- :ref:`StarlingX Cloud with Dedicated Storage Virtual Environment <dedicated-storage>`
- :ref:`StarlingX Cloud with Controller Storage Virtual Environment <controller-storage>`
- :doc:`StarlingX Cloud with Dedicated Storage Virtual Environment </installation_guide/latest/dedicated_storage>`
- :doc:`StarlingX Cloud with Controller Storage Virtual Environment </installation_guide/latest/controller_storage>`
- All-in-one
- :ref:`StarlingX Cloud Duplex Virtual Environment <duplex>`
- :ref:`StarlingX Cloud Simplex Virtual Environment <simplex>`
- :doc:`StarlingX Cloud Duplex Virtual Environment </installation_guide/latest/duplex>`
- :doc:`StarlingX Cloud Simplex Virtual Environment </installation_guide/latest/simplex>`

View File

@ -0,0 +1,729 @@
===============================================
All-In-One Simplex deployment guide stx.2019.05
===============================================
.. contents::
:local:
:depth: 1
**NOTE:** The instructions to set up a StarlingX One Node Configuration
(AIO-SX) system with containerized OpenStack services in this guide
are under development.
For approved instructions, see the
`One Node Configuration wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/Installation>`__.
----------------------
Deployment description
----------------------
The All-In-One Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single physical server. With
these cloud functions, multiple application types can be deployed and
consolidated onto a single physical server. For example, with an AIO-SX
deployment you can:
- Consolidate legacy applications that must run standalone on a server by using
multiple virtual machines on a single physical server.
- Consolidate legacy applications that run on different operating systems or
different distributions of operating systems by using multiple virtual
machines on a single physical server.
Only a small amount of cloud processing / storage power is required with an
All-In-One Simplex deployment.
.. figure:: figures/starlingx-deployment-options-simplex.png
:scale: 50%
:alt: All-In-One Simplex deployment configuration
*All-In-One Simplex deployment configuration*
An All-In-One Simplex deployment provides no protection against an overall
server hardware fault. Protection against overall server hardware fault is
either not required, or done at a higher level. Hardware component protection
could be enabled if, for example, an HW RAID or 2x Port LAG is used in the
deployment.
--------------------------------------
Preparing an All-In-One Simplex server
--------------------------------------
**********
Bare metal
**********
Required Server:
- Combined server (controller + compute): 1
^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
All-In-One Simplex will be deployed are:
- Minimum processor:
- Typical hardware form factor:
- Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket
- Low cost / low power hardware form factor
- Single-CPU Intel Xeon D-15xx family, 8 cores
- Memory: 64 GB
- BIOS:
- Hyper-Threading technology: Enabled
- Virtualization technology: Enabled
- VT for directed I/O: Enabled
- CPU power and performance policy: Performance
- CPU C state control: Disabled
- Plug & play BMC detection: Disabled
- Primary disk:
- 500 GB SSD or NVMe
- Additional disks:
- Zero or more 500 GB disks (min. 10K RPM)
- Network ports
**NOTE:** All-In-One Simplex configuration requires one or more data ports.
This configuration does not require a management port.
- OAM: 10GE
- Data: n x 10GE
*******************
Virtual environment
*******************
Run the libvirt/QEMU setup scripts. First, set up the virtualized OAM and
management networks:
::
$ bash setup_network.sh
Then build the XML definitions for the virtual servers:
::
$ bash setup_configuration.sh -c simplex -i <starlingx iso image>
The default XML server definition created by the previous script is:
- simplex-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^
To power up the virtual server, run the following command:
::
$ sudo virsh start <server-xml-name>
For example:
::
$ sudo virsh start simplex-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access a virtual server console
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML definitions for the virtual servers in the stx-tools repo, under
deployment/libvirt, provide both graphical and text consoles.
Access the graphical console in virt-manager by right-clicking the
domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.
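For example, for the simplex controller defined earlier:

::

   $ sudo virsh console simplex-controller-0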
When booting controller-0 for the first time, both the serial and
graphical consoles present the initial configuration menu for the
cluster. You can select either the serial or the graphical console for
controller-0. For the other nodes, however, only the serial console is
used, regardless of which option is selected.
Open the graphical console on all servers before powering them on to
observe the boot device selection and PXE boot progress. Run the "virsh
console $DOMAIN" command promptly after power on to see the initial boot
sequence that follows the boot device selection. You have only a few
seconds to do this.
------------------------------
Installing the controller host
------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes controller-0.
Procedure:
#. Power on the server that will be controller-0 with the StarlingX ISO
on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.
*************************
Initializing controller-0
*************************
This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **All-in-one Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the controller-0 host, select the type of installation
"All-in-one Controller Configuration".
- **Graphical Console**
- Select the "Graphical Console" as the console to use during
installation.
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the Security Profile.
Monitor the initialization. When it is complete, the controller-0 host
reboots, briefly displays a GNU GRUB screen, and then boots
automatically into the StarlingX image.
Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
::
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
::
New password:
Enter the new password again to confirm it:
::
Retype new password:
controller-0 is initialized with StarlingX, and is ready for configuration.
************************
Configuring controller-0
************************
This section describes how to perform the controller-0 configuration
interactively, just to bootstrap the system with the minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the virtual environment, you can accept all the default values
immediately after system date and time.
- For a physical deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:
::
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Select [y] for System date and time:
::
System date and time:
-----------------------------
Is the current date and time correct? [y/N]: y
For System mode choose "simplex":
::
...
1) duplex-direct: two node-redundant configuration. Management and
infrastructure networks are directly connected to peer ports
2) duplex - two node redundant configuration
3) simplex - single node non-redundant configuration
System mode [duplex-direct]: 3
After System date and time and System mode:
::
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05:08: Creating system configuration ... DONE
06:08: Applying controller manifest ... DONE
07:08: Finalize controller configuration ... DONE
08:08: Waiting for service activation ... DONE
Configuration was applied
Please complete any out of service commissioning steps with system
commands and unlock controller to proceed.
After the config_controller bootstrap configuration, the REST API, CLI, and
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions use the CLI.
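For example, you can now reach the CLI over the OAM network instead of the
local console, a sketch assuming SSH access as the wrsroot user (substitute
the OAM IP address you configured during bootstrap):

::

   # <oam-ip-address> is a placeholder for the configured OAM IP
   $ ssh wrsroot@<oam-ip-address>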
--------------------------------
Provisioning the controller host
--------------------------------
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
*********************************************
Configuring provider networks at installation
*********************************************
Set up one provider network of the vlan type, named providernet-a:
::
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
*****************************************
Providing data interfaces on controller-0
*****************************************
List all interfaces:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-list -a controller-0
+--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
| uuid | name | class |...| vlan | ports | uses | used by | attributes |..
| | | |...| id | | i/f | i/f | |..
+--------------------------------------+----------+---------+...+------+--------------+------+---------+------------+..
| 49fd8938-e76f-49f1-879e-83c431a9f1af | enp0s3 | platform |...| None | [u'enp0s3'] | [] | [] | MTU=1500 |..
| 8957bb2c-fec3-4e5d-b4ed-78071f9f781c | eth1000 | None |...| None | [u'eth1000'] | [] | [] | MTU=1500 |..
| bf6f4cad-1022-4dd7-962b-4d7c47d16d54 | eth1001 | None |...| None | [u'eth1001'] | [] | [] | MTU=1500 |..
| f59b9469-7702-4b46-bad5-683b95f0a1cb | enp0s8 | platform |...| None | [u'enp0s8'] | [] | [] | MTU=1500 |..
+--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
Configure the data interfaces:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -c data controller-0 eth1000 -p providernet-a
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| ifname | eth1000 |
| iftype | ethernet |
| ports | [u'eth1000'] |
| providernetworks | providernet-a |
| imac | 08:00:27:c4:ad:3e |
| imtu | 1500 |
| ifclass | data |
| aemode | None |
| schedpolicy | None |
| txhashpolicy | None |
| uuid | 8957bb2c-fec3-4e5d-b4ed-78071f9f781c |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| vlan_id | None |
| uses | [] |
| used_by | [] |
| created_at | 2018-08-28T12:50:51.820151+00:00 |
| updated_at | 2018-08-28T14:46:18.333109+00:00 |
| sriov_numvfs | 0 |
| ipv4_mode | disabled |
| ipv6_mode | disabled |
| accelerated | [True] |
+------------------+--------------------------------------+
*************************************
Configuring Cinder on controller disk
*************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+---------+------------+...
| uuid | device_no | device_ | device_ | size_mi | available_ |...
| | de | num | type | b | mib |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
| 6b42c9dc-f7c0-42f1-a410-6576f5f069f1 | /dev/sda | 2048 | HDD | 600000 | 434072 |...
| | | | | | |...
| | | | | | |...
| 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 | /dev/sdb | 2064 | HDD | 16240 | 16237 |...
| | | | | | |...
| | | | | | |...
| 146195b2-f3d7-42f9-935d-057a53736929 | /dev/sdc | 2080 | HDD | 16240 | 16237 |...
| | | | | | |...
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'cinder-volumes' local volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
+-----------------+--------------------------------------+
| lvm_vg_name | cinder-volumes |
| vg_state | adding |
| uuid | 61cb5cd2-171e-4ef7-8228-915d3560cdc3 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-28T13:45:20.218905+00:00 |
| updated_at | None |
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 16237 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part1 |
| device_node | /dev/sdb1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 16237 |
| uuid | 0494615f-bd79-4490-84b9-dcebbe5f377a |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| idisk_uuid | 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 |
| ipv_uuid | None |
| status | Creating |
| created_at | 2018-08-28T13:45:48.512226+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 534352d8-fec2-4ca5-bda7-0e0abe5a8e17
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| uuid |...| device_nod |...| type_name | size_mib | status |
| |...| e |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| 0494615f-bd79-4490-84b9-dcebbe5f377a |...| /dev/sdb1 |...| LVM Physical Volume | 16237 | Ready |
| |...| |...| | | |
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 0494615f-bd79-4490-84b9-dcebbe5f377a
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 9a0ad568-0ace-4d57-9e03-e7a63f609cf2 |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 0494615f-bd79-4490-84b9-dcebbe5f377a |
| disk_or_part_device_node | /dev/sdb1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part1 |
| lvm_pv_name | /dev/sdb1 |
| lvm_vg_name | cinder-volumes |
| lvm_pv_uuid | None |
| lvm_pv_size | 0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| created_at | 2018-08-28T13:47:39.450763+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
*********************************************
Adding an LVM storage backend at installation
*********************************************
Ensure requirements are met to add LVM storage:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder
WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.
By confirming this operation, the LVM backend will be created.
Please refer to the system admin guide for minimum spec for LVM
storage. Set the 'confirmed' field to execute this operation
for the lvm backend.
Add the LVM storage backend:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed
System configuration has changed.
Please follow the administrator guide to complete configuring the system.
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
| uuid | name | backend | state |...| services | capabilities |
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
| 6d750a68-115a-4c26-adf4-58d6e358a00d | file-store | file | configured |...| glance | {} |
| e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store | lvm | configuring |...| cinder | {} |
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
Wait for the LVM storage backend to be configured (i.e. state=configured):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+------+----------+--------------+
| uuid | name | backend | state | task | services | capabilities |
+--------------------------------------+------------+---------+------------+------+----------+--------------+
| 6d750a68-115a-4c26-adf4-58d6e358a00d | file-store | file | configured | None | glance | {} |
| e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store | lvm | configured | None | cinder | {} |
+--------------------------------------+------------+---------+------------+------+----------+--------------+
***********************************************
Configuring VM local storage on controller disk
***********************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+---------+------------+...
| uuid | device_no | device_ | device_ | size_mi | available_ |...
| | de | num | type | b | mib |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
| 6b42c9dc-f7c0-42f1-a410-6576f5f069f1 | /dev/sda | 2048 | HDD | 600000 | 434072 |...
| | | | | | |...
| | | | | | |...
| 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 | /dev/sdb | 2064 | HDD | 16240 | 0 |...
| | | | | | |...
| | | | | | |...
| 146195b2-f3d7-42f9-935d-057a53736929 | /dev/sdc | 2080 | HDD | 16240 | 16237 |...
| | | | | | |...
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'nova-local' volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 517d313e-8aa0-4b4d-92e6-774b9085f336 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-28T14:02:58.486716+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Create a disk partition to add to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 146195b2-f3d7-42f9-935d-057a53736929 16237 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1 |
| device_node | /dev/sdc1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 16237 |
| uuid | 009ce3b1-ed07-46e9-9560-9d2371676748 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| idisk_uuid | 146195b2-f3d7-42f9-935d-057a53736929 |
| ipv_uuid | None |
| status | Creating |
| created_at | 2018-08-28T14:04:29.714030+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 146195b2-f3d7-42f9-935d-057a53736929
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| uuid |...| device_nod |...| type_name | size_mib | status |
| |...| e |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| 009ce3b1-ed07-46e9-9560-9d2371676748 |...| /dev/sdc1 |...| LVM Physical Volume | 16237 | Ready |
| |...| |...| | | |
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 nova-local 009ce3b1-ed07-46e9-9560-9d2371676748
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 830c9dc8-c71a-4cb2-83be-c4d955ef4f6b |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 009ce3b1-ed07-46e9-9560-9d2371676748 |
| disk_or_part_device_node | /dev/sdc1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1 |
| lvm_pv_name | /dev/sdc1 |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size | 0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| created_at | 2018-08-28T14:06:05.705546+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
**********************
Unlocking controller-0
**********************
You must unlock controller-0 before you can use it. Use the
system host-unlock command:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is
unavailable, and any SSH connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
****************************************
Verifying the controller-0 configuration
****************************************
On controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Verify that the controller-0 services are running:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id | service_name | hostname | state |
+-----+-------------------------------+--------------+----------------+
...
| 1 | oam-ip | controller-0 | enabled-active |
| 2 | management-ip | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 has controller and compute subfunctions:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show 1 | grep subfunctions
| subfunctions | controller,compute |
Verify that controller-0 is unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
*****************
System alarm list
*****************
When all nodes are unlocked, enabled, and available, check 'fm alarm-list' for
issues.
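For example, with Keystone administrative privileges acquired as in the
earlier steps:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ fm alarm-list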
Your StarlingX deployment is now up and running with one controller with Cinder
storage and all OpenStack services enabled. You can now proceed with standard
OpenStack APIs, CLIs, and/or Horizon to load Glance images, configure Nova
flavors, configure Neutron networks, and launch Nova virtual machines.
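As an illustrative sketch only (the image file, flavor sizing, network name,
and server name below are placeholders rather than values defined in this
guide), a typical OpenStack CLI sequence looks like:

::

   # placeholders: cirros.qcow2, m1.tiny sizing, <tenant-network>, vm-test
   [wrsroot@controller-0 ~(keystone_admin)]$ openstack image create --disk-format qcow2 --container-format bare --file cirros.qcow2 cirros
   [wrsroot@controller-0 ~(keystone_admin)]$ openstack flavor create --ram 512 --vcpus 1 --disk 1 m1.tiny
   [wrsroot@controller-0 ~(keystone_admin)]$ openstack server create --image cirros --flavor m1.tiny --network <tenant-network> vm-test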
----------------------
Deployment terminology
----------------------
.. include:: deployment_terminology.rst
:start-after: incl-simplex-deployment-terminology:
:end-before: incl-simplex-deployment-terminology-end:
.. include:: deployment_terminology.rst
:start-after: incl-standard-controller-deployment-terminology:
:end-before: incl-standard-controller-deployment-terminology-end:
.. include:: deployment_terminology.rst
:start-after: incl-common-deployment-terminology:
:end-before: incl-common-deployment-terminology-end: