\section{Preparation}
To prepare the workstation and the virtual environment to start an inception cloud, the tasks listed below, and
explained in more detail in subsequent sections, must first be performed.
All of these should be done on the workstation or from the workstation browser via the OpenStack dashboard or the
Nova client CLI.
\begin{enumerate}
\item Install software
\item Set environment variables
\item Create keys
\item Start small boot-up VM
\item Create Images
\item Add floating IP to the VM
\item Start \verb!sshuttle!
\end{enumerate}
\subsection{Install Software}
Some or all of the required software might already be installed.
Verify that the correct versions of each of the software packages are available on the workstation
and take steps to upgrade or load the missing packages as needed.
The flavour of Linux installed on your workstation will dictate the exact commands (e.g. apt-get or zypper)
that are needed to load and/or upgrade Python, sshuttle, and pip.
Pip can then be used to install Nova and Oslo.
Examples of each of the commands that might be needed to manage the required software packages are presented
in Appendix A.
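As a quick sketch for an apt based distribution (the exact package names are assumptions and will vary by release):
\small\begin{verbatim}
sudo apt-get install python python-pip sshuttle   # or zypper on SUSE
sudo pip install python-novaclient oslo.config
\end{verbatim}\normalsize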
\subsubsection{Inception source}
The source for inception is available from github.
The command below will fetch the inception source and place it in a directory under the current working directory:
\small\begin{verbatim}
git clone https://github.com/stackforge/inception.git
\end{verbatim}\normalsize
Following the execution of the \verb!git! command, switch to the inception directory and verify that the
directory was populated.
Inception may be installed, or used from this directory.
If the decision is made to install inception, the following command should be used:
\small\begin{verbatim}
python setup.py install
\end{verbatim}\normalsize
\subsection{Set Environment Variables}
Ensure that the environment variables which define the authorisation URL and credentials for OpenStack are set and exported.
OpenStack credentials can be obtained using the OpenStack dashboard interface and following these steps:
\label{set_env_sect}
%
\begin{enumerate}
\item Log into the dashboard
\item Open the Access \& Security / API Access page
\item Download the OpenStack RC file and save it to disk
\end{enumerate}
%&uindent
%
Once the file has been saved to disk you can source the file (the assumption is made that the
shell being used is bash compatible).
Sourcing the file should prompt for a password, and then export the following variables to the environment:
\dlbeg{0.85in}
\dlitem{OS\_AUTH\_URL}{ The authorisation (keystone) URL for the environment.}
\dlitem{OS\_PASSWORD}{ The password entered when the file was sourced.}
\dlitem{OS\_TENANT\_ID}{ The ID of the tenant (project).}
\dlitem{OS\_TENANT\_NAME}{ The name of the tenant (project).}
\dlitem{OS\_USERNAME}{ Your user name.}
\dlend
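For example, if the RC file was saved as \verb!demo-openrc.sh! (the file name and location are assumptions),
sourcing it and verifying the result might look like:
\small\begin{verbatim}
. ~/downloads/demo-openrc.sh   # prompts for the OpenStack password
env | grep OS_                 # confirm the variables were exported
\end{verbatim}\normalsize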
\subsection{Create Keys}
Create (if needed) a public/private key pair and register it with OpenStack. If you do not have a key pair, generate one using
nova. (I prefer to name these with the OpenStack cluster/environment name, and then the key name with an extension that
indicates private key: agave.scooter.pk.)
\small\begin{verbatim}
touch agave.scooter.pk
chmod 600 agave.scooter.pk
nova keypair-add scooter > agave.scooter.pk   # key name is illustrative
\end{verbatim}\normalsize
The commands above will create the key, write it to disk and register the public key with OpenStack.
Executing the \verb!touch! and \verb!chmod! commands prior to generating the key adds a bit of security, preventing the exposure of the key
that results if the permissions on the key file are changed after it is generated.
Regardless of when the permissions are changed, they will need to be changed in order for the file to be recognised and used by ssh.
If you already have a private key (one that several users might share) then a public key can be generated from the private key and registered with OpenStack:
\small\begin{verbatim}
ssh-keygen -y -f agave.shared.pk > agave.shared.pub   # file names are illustrative
nova keypair-add --pub_key agave.shared.pub shared
\end{verbatim}\normalsize
If you have both a public and private key file, then the \verb!ssh-keygen! command can be skipped; it is only necessary to supply the existing public key to
OpenStack using the nova command line.
\subsection{Start Boot-up VM}
Create and initialise a tiny\footnote{If you are going to need to create O/S images during setup, use a medium VM.}
VM that will act as the initial gateway to the virtual environment for processes running on the workstation.
For the examples used in the remainder of this document, the boot-up VM is given the name \verb!scooter_bv!.
The VM should be started with the key that was registered with OpenStack.
\small\begin{verbatim}
nova boot --image centos --flavor m1.tiny --key_name shared \
--security_groups default scooter_bv
\end{verbatim}\normalsize
\noindent
The VM should boot quickly and once it is active you may continue.
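Status can be checked from the workstation with nova; a quick sketch using the VM name from the boot command above:
\small\begin{verbatim}
nova list     # wait until scooter_bv shows a status of ACTIVE
\end{verbatim}\normalsize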
\subsection{Create O/S Images}
If suitable inception cloud images are not already available in the current virtual environment, they must be created.
The following lists the steps necessary to create the O/S images:
\begin{enumerate}
\item Ssh to the boot-up VM.
\item Copy inception utility scripts from the source installed on the workstation.
\item Run the script \verb!pre_switch_kernel.sh! to convert from a virtual kernel to a generic kernel.
\item Wait for the VM to reboot and log in again.
\item Run the script \verb!pre_install_ovs.sh! to install Open vSwitch.
\item Using nova, or the dashboard, take a snapshot of the VM giving it an image name of XXX-gv (see the sketch after this list).
\item Run the script \verb!pre_install_chefserver.sh! to install the Chef software.
\item Create another image naming it XXX-gvc.
\end{enumerate}
Image names (shown as XXX in the above list) can be anything that aligns with local policy; they are used
with the \verb!--image! and \verb!--chefserver_image! options on the \verb!orchestrator! command line.
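The snapshot in step 6 can be taken with nova; a sketch, assuming the boot-up VM from earlier and an image prefix of scooter:
\small\begin{verbatim}
nova image-create scooter_bv scooter-gv
nova image-list               # verify the new image becomes ACTIVE
\end{verbatim}\normalsize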
\subsection{Add A Floating IP Address}
Add a floating (public) IP address to the new VM so that it can be reached from the "real world."
(xxx.xxx.xxx.xxx is one of the public floating IP addresses that are available; use \verb!nova floating-ip-list!
to get a list of available addresses.)
\small\begin{verbatim}
nova add-floating-ip scooter_bv xxx.xxx.xxx.xxx
\end{verbatim}\normalsize
\subsection{Start sshuttle}
The sshuttle programme creates a tunnel through ssh allowing programmes on the workstation to access VMs created on the
same internal network as the boot-up VM without having to assign each VM a public address.
The sshuttle command is given the private key portion of the key pair that was used to start the boot-up VM; this is
necessary to allow sshuttle to start an ssh session through which the tunnel is created.
The user name (ubuntu in the example below) is any user on the boot-up VM that is available and allows access via
the key (the assumption is that OpenStack created this user, or it was a part of the saved image, and the public key was
inserted into the \verb!authorized_keys! file in the user's \verb!.ssh! directory.)
\small\begin{verbatim}
sshuttle -e ssh -A -i ~/.vmkeys/agave.shared.pk -v \
   -r ubuntu@xxx.xxx.xxx.xxx 192.168.254.0/24
\end{verbatim}\normalsize
\noindent
The \verb!-v! option causes sshuttle to be more verbose with messages to the standard error device.
Ultimately, redirecting the output of sshuttle to \verb!/dev/null! and running the process asynchronously is probably
wise, but initially seeing the verbose messages scroll by in the window is a nice confirmation that the tunnel is active and data
is being transferred.
The \verb!xxx! IP address is the public floating IP address assigned earlier. The second address (192.168.254.0 in the example)
is the address of the virtual network that is created by OpenStack.
If it is not known, the following command (with a suitable path for the public key) might provide the address:
\small\begin{verbatim}
ssh -i ~/.vmkeys/agave.shared.pk ubuntu@xxx.xxx.xxx.xxx \
   ifconfig eth0     # key path and interface are illustrative
\end{verbatim}\normalsize
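Once the tunnel has been verified, sshuttle can be restarted in the background with its output discarded; a minimal
sketch, reusing the key and addresses from above:
\small\begin{verbatim}
sshuttle -e ssh -A -i ~/.vmkeys/agave.shared.pk \
   -r ubuntu@xxx.xxx.xxx.xxx 192.168.254.0/24 \
   >/dev/null 2>&1 &
\end{verbatim}\normalsize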
\section{Running Orchestrator}
The \verb!orchestrator! command, located in the bin directory under the source that was fetched from github,
is used to start and stop an inception cloud.
The inception cloud environment consists of at least four Inception Control VMs (ICVMs):
%&indent
\begin{itemize}
\item A gateway machine that will be given a public IP address and function much in the same way as the
boot-up VM being used to start the cloud.
\item A controller machine used to run the OpenStack software and to provide the OpenStack Dashboard interface.
\item A chef machine used to manage Chef installation and configuration scripts.
\item One or more worker machines used to host the inception virtual machines (iVMs).
\end{itemize}
\noindent
The following sections describe how orchestrator is used.
\subsection{Starting The Inception Cloud}
The \verb!orchestrator! command is located in the \verb!bin! directory within the source cloned from github.
The bin directory can be added to the path, or the command can be executed with a fully qualified path.
It will probably be necessary to add the top level inception directory to the \verb!PYTHONPATH! environment variable
if inception was not installed.
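For example, when running orchestrator directly from the cloned source tree (the paths are illustrative):
\small\begin{verbatim}
cd inception
export PYTHONPATH=$PWD:$PYTHONPATH
bin/orchestrator --help
\end{verbatim}\normalsize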
%.sp
Using the \verb!--help! option will cause all of the possible command line options to be written to the tty device.
For the most part, at least for the first time user, only a few are needed and are described
below. %\footnote{A complete list of orchestrator command line options are presented in an appendix}.
\dlbeg{0.85in}
\dlitem{-p prefix}{
This command line flag is required and supplies the prefix string that is used when defining the ICVM names.
}
\vspace{5pt}
\dlitem{-n n}{
Specifies the number of worker ICVMs that are created. The iVMs which are created in the inception cloud are hosted
on the worker VMs, thus the number needed is directly related to the number of iVMs that will be created in the inception cloud.
}
\vspace{5pt}
\dlitem{-~-image=}{
Supplies the image name to be used for all ICVMs. If not supplied, a base image of Ubuntu 12.04 (64 bit) is
created and used for each ICVM.
}
\vspace{5pt}
\dlitem{-~-ssh\_keyfile=}{
Provides the name of the private key that is to be injected as the user key for each of the
control VMs that are created.
}
\vspace{5pt}
\dlitem{-~-user=}{
The user name created on each node with sudo capabilities. If not given, ubuntu is used.
}
\dlend
The following illustrates the command to start an inception cloud with one worker:
\small\begin{verbatim}
orchestrator -p scooter0 -n 1 \
   --ssh_keyfile=~/.vmkeys/agave.shared.pk   # key path is illustrative
\end{verbatim}\normalsize
\noindent
The creation and initialisation of the ICVMs takes about 20 minutes, during which time a fair few messages
are written to standard error.
When orchestrator has finished, a set of messages should be written to stdout indicating success and which give the IP
addresses and URLs for various things in the newly created environment.
The following is a sample of these messages (date, time, and system identification information has been excluded for brevity):
\small\begin{verbatim}
...
\end{verbatim}\normalsize
The inception cloud can be stopped manually by halting all of the ICVMs, or orchestrator can be rerun
giving it the \verb!--cleanup! command line flag which causes it to terminate all of the ICVMs.
\small\begin{verbatim}
orchestrator -p scooter0 --cleanup
\end{verbatim}\normalsize
\subsection{Finalisation}
It will take approximately 20 minutes for orchestrator to start the inception cloud.
Once orchestrator reports that the inception cloud is ready, a small amount of housekeeping should be done.
These tasks include:
\begin{itemize}
\item Repoint sshuttle at the gateway ICVM
\item Add the controller host name to the workstation's \verb!/etc/hosts! file
\item Set credentials in the environment
\end{itemize}
\subsubsection{Repointing Sshuttle}
Once the ICVMs are running, sshuttle should be "pointed" at the gateway ICVM so that the boot-up VM can be stopped.
Sshuttle must also be set to tunnel requests for the private control network that is used by the
ICVMs, as this is the network on which the nova authorisation and dashboard processes listen.
The following commands illustrate how this can be done:
\small\begin{verbatim}
nova list | grep scooter0-gateway
ssh ubuntu@yyy.yyy.yyy.yyy ifconfig eth1
sshuttle -e ssh -A -i ~/.vmkeys/agave.shared.pk -v \
-r ubuntu@yyy.yyy.yyy.yyy \
192.168.254.0/24 zzz.zzz.zzz.0/24
\end{verbatim}\normalsize
Where:
\dlbeg{0.85in}
\dlitem{yyy.yyy.yyy.yyy}{ Is the public IP address for the gateway.}
\vspace{5pt}
\dlitem{zzz.zzz.zzz.0} {
Is the network address of the control network. The netmask also must be checked to determine if /24 is
the appropriate number of bits being used to represent a host id; if not it must be changed to match the netmask.
}
\vspace{5pt}
\dlitem{ubuntu}{ Is the user name that was injected onto each of the ICVMs. }
\dlend
After these commands are executed sshuttle will again be running on the workstation and managing a tunnel between the
workstation and both the virtual network and the inception cloud's private control network in the OpenStack environment.
\subsubsection{Modifying /etc/hosts}
In order to use the inception cloud OpenStack dashboard (URL given in the last set of messages generated by orchestrator),
the workstation must be able to resolve the
controller host name (e.g. \verb!scooter0-controller! using the earlier example prefix).
The easiest way to do this is to have sshuttle forward all DNS requests to the inception cloud environment for resolution.
This is done with the addition of a command line flag on the sshuttle command; however, shuffling all of the workstation's DNS
traffic into the VM environment is probably not a very wise choice.
Instead, the host name of the controller, and its control network IP address (zzz.zzz.zzz.hhh), should be added to the
\verb!/etc/hosts! file on the workstation.
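The added entry might look like the following, using the placeholder address and the example prefix:
\small\begin{verbatim}
zzz.zzz.zzz.hhh   scooter0-controller
\end{verbatim}\normalsize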
\subsubsection{Setting Credentials}
Credentials must be set in the environment to allow nova to be used on the workstation to control the iVMs in the
inception cloud.
The following is a list of variables that must be exported and their approximate values (the IP address supplied for the
authorisation URL will be different, as might the username).
The password was given to the user \emph{demo} via the dashboard using the \emph{admin} user ID.
\small\begin{verbatim}
export OS_AUTH_URL=http://zzz.zzz.zzz.hhh:5000/v2.0/
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
\end{verbatim}\normalsize
The network address given is that of the private control network.
The dashboard can be used to set up any users and/or projects (tenants) that are needed in the inception cloud.
The admin user ID and password are admin/admin by default.