Import docs-cloud baseline
|
@ -1,12 +0,0 @@
|
|||
.. _about:
|
||||
|
||||
About OpenStack Charms
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
charms.rst
|
||||
juju.rst
|
||||
maas.rst
|
||||
lxd.rst
|
|
@ -1,9 +0,0 @@
|
|||
.. _next-steps:
|
||||
|
||||
Next steps
|
||||
~~~~~~~~~~
|
||||
|
||||
Your OpenStack environment now includes the charms service.
|
||||
|
||||
To add additional services, see
|
||||
https://docs.openstack.org/project-install-guide/ocata/.
|
|
@ -1,11 +0,0 @@
|
|||
.. _architecture:
|
||||
|
||||
Deployment architecture
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
service_arch.rst
|
||||
network_arch.rst
|
||||
storage_arch.rst
|
|
@ -1,6 +0,0 @@
|
|||
.. _charms:
|
||||
|
||||
Charms
|
||||
~~~~~~
|
||||
|
||||
TODO: general overview of charms in the context of this project.
|
|
@ -0,0 +1,469 @@
|
|||
Configure OpenStack
|
||||
===================
|
||||
|
||||
Now that we've used `Juju <./install-juju.html>`__ and `MAAS <./install-maas.html>`__
|
||||
to deploy `OpenStack <./install-openstack.html>`__, it's time to configure
|
||||
OpenStack for use within a typical production environment.
|
||||
|
||||
We'll cover first principles: setting up the environment variables, adding a
project, configuring virtual network access and deploying an Ubuntu cloud
image, to create a strong OpenStack foundation that can easily be expanded upon.
|
||||
|
||||
Environment variables
|
||||
---------------------
|
||||
|
||||
When accessing OpenStack from the command line, specific environment variables
|
||||
need to be set. We've put these in a file called ``nova.rc`` which can easily be
|
||||
*sourced* (made active) whenever needed.
|
||||
|
||||
The file contains the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
export OS_AUTH_URL=http://192.168.100.95:5000/v2.0/
|
||||
export OS_USERNAME=admin
|
||||
export OS_PASSWORD=openstack
|
||||
export OS_TENANT_NAME=admin
|
||||
|
||||
The ``OS_AUTH_URL`` is the address of the `OpenStack
|
||||
Keystone <./install-openstack.html#keystone>`__ node for authentication. This
|
||||
can be retrieved by Juju with the following command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju status --format=yaml keystone/0 | grep public-address | awk '{print $2}'
|
||||
|
||||
The environment variables can be enabled/sourced with the following command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
source nova.rc
|
||||
|
||||
You can check the variables have been set correctly by seeing if your OpenStack
|
||||
endpoints are visible with the ``openstack endpoint list`` command. The output
|
||||
will look something like this:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+----------------------------------+-----------+--------------+--------------+
|
||||
| ID | Region | Service Name | Service Type |
|
||||
+----------------------------------+-----------+--------------+--------------+
|
||||
| 060d704e582b4f9cb432e9ecbf3f679e | RegionOne | cinderv2 | volumev2 |
|
||||
| 269fe0ad800741c8b229a0b305d3ee23 | RegionOne | neutron | network |
|
||||
| 3ee5114e04bb45d99f512216f15f9454 | RegionOne | swift | object-store |
|
||||
| 68bc78eb83a94ac48e5b79893d0d8870 | RegionOne | nova | compute |
|
||||
| 59c83d8484d54b358f3e4f75a21dda01 | RegionOne | s3 | s3 |
|
||||
| bebd70c3f4e84d439aa05600b539095e | RegionOne | keystone | identity |
|
||||
| 1eb95d4141c6416c8e0d9d7a2eed534f | RegionOne | glance | image |
|
||||
| 8bd7f4472ced40b39a5b0ecce29df3a0 | RegionOne | cinder | volume |
|
||||
+----------------------------------+-----------+--------------+--------------+
|
||||
|
||||
If the endpoints aren't visible, it's likely your environment variables aren't
|
||||
configured correctly.
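
As a quick sanity check, you can confirm the variables are present in your
current shell before running any client commands (``env`` and ``grep`` are
standard tools; nothing OpenStack-specific is assumed here):

.. code:: bash

    env | grep OS_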
|
||||
|
||||
As with both MAAS and Juju, most OpenStack operations can be accomplished using
|
||||
either the command line or a web UI. In the following examples, we'll use the
|
||||
command line for brevity. But keep in mind that the web UI is always a potential
|
||||
alternative and a good way of seeing immediate feedback from any changes you
|
||||
apply.
|
||||
|
||||
Define an external network
|
||||
--------------------------
|
||||
|
||||
We'll start by defining a network called ``Pub_Net`` that will use a subnet
|
||||
within the range of addresses we put aside in MAAS and Juju:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack network create Pub_Net --share --external
|
||||
|
||||
The output from this, as with the output from many OpenStack commands, will show
|
||||
the various fields and values for the chosen configuration option. Typing
|
||||
``openstack network list`` will show the new network ID alongside its name:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+--------------------------------------+---------+---------+
|
||||
| ID | Name | Subnets |
|
||||
+--------------------------------------+---------+---------+
|
||||
| fc171d22-d1b0-467d-b6fa-109dfb77787b | Pub_Net | |
|
||||
+--------------------------------------+---------+---------+
|
||||
|
||||
We now need a subnet for the network. The following command will create this
|
||||
subnet using the various addresses from our MAAS and Juju configuration
|
||||
(``192.168.100.3`` is the IP address of the MAAS server):
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack subnet create Pub_Subnet --allocation-pool \
|
||||
start=192.168.100.150,end=192.168.100.199 --subnet-range 192.168.100.0/24 \
|
||||
--no-dhcp --gateway 192.168.100.1 --dns-nameserver 192.168.100.3 \
|
||||
--dns-nameserver 8.8.8.8 --network Pub_Net
|
||||
|
||||
The output from the previous command provides a comprehensive overview of the
|
||||
new subnet's configuration:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+-------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------------------------------+
|
||||
| allocation_pools | 192.168.100.150-192.168.100.199 |
|
||||
| cidr | 192.168.100.0/24 |
|
||||
| created_at | 2017-04-21T13:43:48 |
|
||||
| description | |
|
||||
| dns_nameservers | 192.168.100.3, 8.8.8.8 |
|
||||
| enable_dhcp | False |
|
||||
| gateway_ip | 192.168.100.1 |
|
||||
| host_routes | |
|
||||
| id | 563ecd06-bbc3-4c98-b93e |
|
||||
| ip_version | 4 |
|
||||
| ipv6_address_mode | None |
|
||||
| ipv6_ra_mode | None |
|
||||
| name | Pub_Subnet |
|
||||
| network_id | fc171d22-d1b0-467d-b6fa-109dfb77787b |
|
||||
| project_id | 4068710688184af997c1907137d67c76 |
|
||||
| revision_number | None |
|
||||
| segment_id | None |
|
||||
| service_types | None |
|
||||
| subnetpool_id | None |
|
||||
| updated_at | 2017-04-21T13:43:48 |
|
||||
| use_default_subnet_pool | None |
|
||||
+-------------------------+--------------------------------------+
|
||||
|
||||
.. note:: OpenStack has
   `deprecated <https://docs.openstack.org/developer/python-neutronclient/devref/transition_to_osc.html>`__
   the use of the ``neutron`` command for network configuration, migrating most of
   its functionality into the Python OpenStack client. Version 2.4.0 or later of
   this client is needed for the ``subnet create`` command.
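
If you're unsure which version of the client is installed, it can report its
own version; the exact output will vary with your installation:

.. code:: bash

    openstack --version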
|
||||
|
||||
Cloud images
|
||||
------------
|
||||
|
||||
To add an Ubuntu image to Glance, we need to first download an image locally.
|
||||
Canonical's Ubuntu cloud images can be found here:
|
||||
|
||||
`https://cloud-images.ubuntu.com <https://cloud-images.ubuntu.com/>`__
|
||||
|
||||
You could use ``wget`` to download the image of Ubuntu 16.04 LTS (Xenial):
|
||||
|
||||
.. code:: bash
|
||||
|
||||
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
|
||||
|
||||
The following command will add this image to Glance:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack image create --public --min-disk 3 --container-format bare \
|
||||
--disk-format qcow2 --property architecture=x86_64 \
|
||||
--property hw_disk_bus=virtio --property hw_vif_model=virtio \
|
||||
--file xenial-server-cloudimg-amd64-disk1.img \
|
||||
"xenial x86_64"
|
||||
|
||||
To make sure the image was successfully imported, type ``openstack image list``.
|
||||
This will output the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+--------------------------------------+---------------+--------+
|
||||
| ID | Name | Status |
|
||||
+--------------------------------------+---------------+--------+
|
||||
| d4244007-5864-4a2d-9cfd-f008ade72df4 | xenial x86_64 | active |
|
||||
+--------------------------------------+---------------+--------+
|
||||
|
||||
The 'Compute>Images' page of OpenStack's Horizon web UI lists many more details
|
||||
about imported images. In particular, note their size as this will limit the
|
||||
minimum root storage size of any OpenStack flavours used to deploy them.
|
||||
|
||||
.. figure:: ./media/config-openstack_images.png
|
||||
:alt: Horizon image details
|
||||
|
||||
Horizon image details
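
To compare an image's size against the root disk sizes of the available
flavours from the command line, you can list them; this is an optional check
rather than a required step:

.. code:: bash

    openstack flavor list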
|
||||
|
||||
Working with projects
|
||||
---------------------
|
||||
|
||||
Projects, users and roles are a vital part of OpenStack operations. We'll create
|
||||
a single project and single user for our new deployment, starting with the
|
||||
project:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack project create --enable --description 'First Project' P01
|
||||
|
||||
To add a user and assign that user to the project:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack user create --project P01 --password openstack --enable p01user
|
||||
|
||||
The output to the previous command will be similar to the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+------------+----------------------------------+
|
||||
| Field | Value |
|
||||
+------------+----------------------------------+
|
||||
| email | None |
|
||||
| enabled | True |
|
||||
| id | a1c55e45ec374dacb151a8aa3ecb3571 |
|
||||
| name | p01user |
|
||||
| project_id | 1992e606b51b404c9151f8cb464aa420 |
|
||||
| username | p01user |
|
||||
+------------+----------------------------------+
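
Depending on how Keystone is configured, the new user may also need a role
assigned within the project before it can do useful work. The following sketch
assumes a role named ``Member`` exists in your deployment; adjust the role name
to match your own configuration:

.. code:: bash

    openstack role add --project P01 --user p01user Member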
|
||||
|
||||
In the same way we used ``nova.rc`` to hold the OpenStack environment variables
|
||||
for the ``admin`` account, we can create a similar file to hold the details on
|
||||
the new project and user:
|
||||
|
||||
Create the following ``project.rc`` file:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
export OS_AUTH_URL=http://192.168.100.95:5000/v2.0/
|
||||
export OS_USERNAME=p01user
|
||||
export OS_PASSWORD=openstack
|
||||
export OS_TENANT_NAME=P01
|
||||
|
||||
Source this file's contents to effectively switch users:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
source project.rc
|
||||
|
||||
Every subsequent action will now be performed by the ``p01user`` user within the
|
||||
new ``P01`` project.
|
||||
|
||||
Create a virtual network
|
||||
------------------------
|
||||
|
||||
We need a fixed IP address to access any instances we deploy from OpenStack. In
|
||||
order to assign a fixed IP, we need a project-specific network with a private
|
||||
subnet, and a router to link this network to the ``Pub_Net`` we created earlier.
|
||||
|
||||
To create the new network, enter the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack network create P01_Network
|
||||
|
||||
Create a private subnet with the following parameters:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack subnet create P01_Subnet --allocation-pool \
|
||||
start=10.0.0.10,end=10.0.0.99 --subnet-range 10.0.0.0/24 \
|
||||
--gateway 10.0.0.1 --dns-nameserver 192.168.100.3 \
|
||||
--dns-nameserver 8.8.8.8 --network P01_Network
|
||||
|
||||
You'll see verbose output similar to the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+-------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------------------------------+
|
||||
| allocation_pools | 10.0.0.10-10.0.0.99 |
|
||||
| cidr | 10.0.0.0/24 |
|
||||
| created_at | 2017-04-21T16:46:35 |
|
||||
| description | |
|
||||
| dns_nameservers | 192.168.100.3, 8.8.8.8 |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 10.0.0.1 |
|
||||
| host_routes | |
|
||||
| id | a91a604a-70d6-4688-915e-ed14c7db7ebd |
|
||||
| ip_version | 4 |
|
||||
| ipv6_address_mode | None |
|
||||
| ipv6_ra_mode | None |
|
||||
| name | P01_Subnet |
|
||||
| network_id | 8b0baa43-cb25-4a70-bf41-d4136cbfe16e |
|
||||
| project_id | 1992e606b51b404c9151f8cb464aa420 |
|
||||
| revision_number | None |
|
||||
| segment_id | None |
|
||||
| service_types | None |
|
||||
| subnetpool_id | None |
|
||||
| updated_at | 2017-04-21T16:46:35 |
|
||||
| use_default_subnet_pool | None |
|
||||
+-------------------------+--------------------------------------+
|
||||
|
||||
The following commands will add the router, connecting this new network to the
|
||||
``Pub_Net``:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack router create P01_Public_Router
|
||||
openstack router set P01_Public_Router --external-gateway Pub_Net
|
||||
openstack router add subnet P01_Public_Router P01_Subnet
|
||||
|
||||
Use ``openstack router show P01_Public_Router`` to verify all parameters have
|
||||
been set correctly.
|
||||
|
||||
Finally, we can add a floating IP address to our project's new network:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack floating ip create Pub_Net
|
||||
|
||||
Details on the address will be shown in the output:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+---------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+--------------------------------------+
|
||||
| created_at | None |
|
||||
| description | |
|
||||
| fixed_ip_address | None |
|
||||
| floating_ip_address | 192.168.100.152 |
|
||||
| floating_network_id | fc171d22-d1b0-467d-b6fa-109dfb77787b |
|
||||
| id | f9b4193d-4385-4b25-83ed-89ed3358668e |
|
||||
| name | 192.168.100.152 |
|
||||
| port_id | None |
|
||||
| project_id | 1992e606b51b404c9151f8cb464aa420 |
|
||||
| revision_number | None |
|
||||
| router_id | None |
|
||||
| status | DOWN |
|
||||
| updated_at | None |
|
||||
+---------------------+--------------------------------------+
|
||||
|
||||
This address will be added to the pool of available floating IP addresses that
|
||||
can be assigned to any new instances we deploy.
|
||||
|
||||
SSH access
|
||||
----------
|
||||
|
||||
To create an OpenStack SSH keypair for accessing deployments with SSH, use the
|
||||
following command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack keypair create P01-keypair > ~/.ssh/p01-keypair.pem
|
||||
|
||||
With SSH, it's imperative that the file has the correct permissions:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
chmod 600 ~/.ssh/p01-keypair.pem
|
||||
|
||||
Alternatively, you can import your pre-existing keypair with the following
|
||||
command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack keypair create --public-key ~/.ssh/id_rsa.pub my-keypair
|
||||
|
||||
You can view which keypairs have been added to OpenStack using the
|
||||
``openstack keypair list`` command, which generates output similar to the
|
||||
following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+-------------------+-------------------------------------------------+
|
||||
| Name | Fingerprint |
|
||||
+-------------------+-------------------------------------------------+
|
||||
| my-keypair | 1d:35:52:08:55:d5:54:04:a3:e0:23:f0:20:c4:b0:eb |
|
||||
| P01-keypair | 1f:1a:74:a5:cb:87:e1:f3:2e:08:9e:40:dd:dd:7c:c4 |
|
||||
+-------------------+-------------------------------------------------+
|
||||
|
||||
To permit SSH traffic access to our deployments, we need to define a security
|
||||
group and a corresponding network rule:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack security group create --description 'Allow SSH' P01_Allow_SSH
|
||||
|
||||
The following rule will open TCP port 22 and apply it to the above security
|
||||
group:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack security group rule create --proto tcp --dst-port 22 P01_Allow_SSH
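
If you'd also like instances in this group to respond to ``ping``, an optional
ICMP rule can be added in the same way:

.. code:: bash

    openstack security group rule create --proto icmp P01_Allow_SSH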
|
||||
|
||||
Create a cloud instance
|
||||
-----------------------
|
||||
|
||||
Before launching our first cloud instance, we'll need the network ID for the
|
||||
``P01_Network``. This can be retrieved from the first column of output from the
|
||||
``openstack network list`` command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+--------------------------------------+-------------+------------------------+
|
||||
| ID | Name | Subnets |
|
||||
+--------------------------------------+-------------+------------------------+
|
||||
| fc171d22-d1b0-467d-b6fa-109dfb77787b | Pub_Net |563ecd06-bbc3-4c98-b93e |
|
||||
| 8b0baa43-cb25-4a70-bf41-d4136cbfe16e | P01_Network |a91a604a-70d6-4688-915e |
|
||||
+--------------------------------------+-------------+------------------------+
|
||||
|
||||
Use the network ID to replace the example in the following ``server create``
|
||||
command to deploy a new instance:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack server create Server_01 --availability-zone nova \
|
||||
--image 'xenial x86_64' --flavor m1.small \
|
||||
--key-name P01-keypair --security-group \
|
||||
P01_Allow_SSH --nic net-id=8b0baa43-cb25-4a70-bf41-d4136cbfe16e
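
If you'd rather not copy the network ID by hand, a small sketch like the
following can look it up for you, assuming a client recent enough to support
the ``--name`` filter and the ``-f``/``-c`` output options:

.. code:: bash

    # Look up the ID of the project network created in the steps above
    NET_ID=$(openstack network list --name P01_Network -f value -c ID)
    openstack server create Server_01 --availability-zone nova \
        --image 'xenial x86_64' --flavor m1.small \
        --key-name P01-keypair --security-group P01_Allow_SSH \
        --nic net-id=$NET_ID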
|
||||
|
||||
You can monitor progress with the ``openstack server list`` command by waiting
|
||||
for the server to show a status of ``ACTIVE``:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+--------------------+-----------+--------+----------------------+---------------+
|
||||
| ID | Name | Status | Networks | Image Name |
|
||||
+--------------------+-----------+--------+----------------------+---------------+
|
||||
| 4a61f2ad-5d89-43a6 | Server_01 | ACTIVE |P01_Network=10.0.0.11 | xenial x86_64 |
|
||||
+--------------------+-----------+--------+----------------------+---------------+
|
||||
|
||||
All that's left to do is assign a floating IP to the new server and connect with
|
||||
SSH.
|
||||
|
||||
Typing ``openstack floating ip list`` will show the floating IP address we
|
||||
liberated from ``Pub_Net`` earlier.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
+----------+---------------------+------------------+------+--------------------+---------+
|
||||
| ID | Floating IP Address | Fixed IP Address | Port | Floating Network | Project |
|
||||
+----------+---------------------+------------------+------+--------------------+---------+
|
||||
| f9b4193d | 192.168.100.152 | None | None | fc171d22-d1b0-467d | 1992e65 |
|
||||
+----------+---------------------+------------------+------+--------------------+---------+
|
||||
|
||||
The above output shows that the floating IP address is yet to be assigned. Use
|
||||
the following command to assign the IP address to our new instance:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
openstack server add floating ip Server_01 192.168.100.152
|
||||
|
||||
You will now be able to connect to your new cloud server using SSH:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
ssh -i ~/.ssh/p01-keypair.pem ubuntu@192.168.100.152
|
||||
|
||||
Next steps
|
||||
----------
|
||||
|
||||
Congratulations! You have now built and successfully deployed a new cloud
|
||||
instance running on OpenStack, taking full advantage of both Juju and MAAS.
|
||||
|
||||
This is a strong foundation to build upon. You could install Juju `on top of
|
||||
OpenStack <https://jujucharms.com/docs/stable/help-openstack>`__, for example,
|
||||
giving your OpenStack deployment the same powerful application modelling
|
||||
capabilities we used to deploy OpenStack. You might also want to look into using
|
||||
Juju to deploy `Landscape <https://landscape.canonical.com/>`__, Canonical's
|
||||
leading management tool.
|
||||
|
||||
Whatever you choose to do, MAAS and Juju will scale to manage your needs, while
|
||||
making your deployments easier to design, maintain and manage.
|
||||
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
.. _configure_deployment:
|
||||
|
||||
Configure the deployment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Service Placement
|
||||
+++++++++++++++++
|
||||
|
||||
TODO: Reference bundle for this documentation
|
||||
|
||||
TODO: matrix of services and compatibility for placement
|
||||
|
||||
Service Storage Configuration
|
||||
+++++++++++++++++++++++++++++
|
||||
|
||||
Service Networking Configuration
|
||||
++++++++++++++++++++++++++++++++
|
||||
|
||||
High Availability VIPs
|
||||
----------------------
|
||||
|
||||
Network Space Bindings
|
||||
----------------------
|
||||
|
||||
OpenStack Networking Configuration
|
||||
++++++++++++++++++++++++++++++++++
|
||||
|
||||
Overlay network configuration
|
||||
-----------------------------
|
||||
|
||||
External network configuration
|
||||
------------------------------
|
|
@ -1,4 +0,0 @@
|
|||
.. _deploy:
|
||||
|
||||
Deploy the bundle
|
||||
~~~~~~~~~~~~~~~~~
|
|
@ -1,14 +0,0 @@
|
|||
.. _deployment_workflow:
|
||||
|
||||
Deployment workflow
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following diagram shows the high level workflow to deploy an
|
||||
OpenStack Cloud using MAAS, Juju and the OpenStack Charms.
|
||||
|
||||
TODO: Diagram
|
||||
|
||||
Prepare deployment environment
|
||||
Configure deployment bundle
|
||||
Deploy bundle
|
||||
Verify deployment
|
|
@ -1,16 +1,32 @@
|
|||
=================================
|
||||
OpenStack Charms Deployment Guide
|
||||
=================================
|
||||
.. OpenStack documentation master file, created by
|
||||
sphinx-quickstart on Fri Jun 30 11:14:11 2017.
|
||||
You can adapt this file completely to your liking, but it should at least
|
||||
contain the root `toctree` directive.
|
||||
|
||||
This guide provides instructions for performing a deployment of
|
||||
OpenStack using the OpenStack Charms with Juju and MAAS.
|
||||
OpenStack documentation
|
||||
=====================================
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
:caption: Contents:
|
||||
|
||||
overview.rst
|
||||
prepare_env.rst
|
||||
configure_deployment.rst
|
||||
deploy.rst
|
||||
verify.rst
|
||||
appendices.rst
|
||||
Installation / Configuration
|
||||
++++++++++++++++++++++++++++
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
installation/index.rst
|
||||
install-maas.rst
|
||||
install-juju.rst
|
||||
install-openstack.rst
|
||||
install-openstack-bundle.rst
|
||||
config-openstack.rst
|
||||
|
||||
|
||||
Indices and tables
|
||||
==================
|
||||
|
||||
* :ref:`genindex`
|
||||
* :ref:`modindex`
|
||||
* :ref:`search`
|
||||
|
|
|
@ -0,0 +1,149 @@
|
|||
Install Juju
|
||||
============
|
||||
|
||||
`Juju <https://jujucharms.com/about>`__ is an open source application modelling
|
||||
tool that allows you to deploy, configure, scale and operate your software on
|
||||
public and private clouds.
|
||||
|
||||
In the `previous step <./install-maas.html>`__, we installed, deployed and
|
||||
configured `MAAS <https://maas.io/>`__ to use as a foundation for Juju to deploy
|
||||
a fully fledged OpenStack cloud.
|
||||
|
||||
We're now going to install and configure the following two core components of
|
||||
Juju to use our MAAS deployment:
|
||||
|
||||
- The *controller* is the management node for a cloud environment. We'll be
|
||||
using the MAAS node we tagged with ``juju`` to host the Juju controller.
|
||||
- The *client* is used by the operator to talk to one or more controllers,
|
||||
managing one or more different cloud environments. As long as it can access
|
||||
the controller, almost any machine and operating system can run the Juju
|
||||
client.
|
||||
|
||||
Package installation
|
||||
--------------------
|
||||
|
||||
We're going to start by installing the Juju client on a machine running `Ubuntu
|
||||
16.04 <http://releases.ubuntu.com/16.04/>`__ LTS (Xenial) with network access to
|
||||
the MAAS deployment. For other installation options, see `Getting started with
|
||||
Juju <https://jujucharms.com/docs/stable/getting-started>`__.
|
||||
|
||||
To install Juju, enter the following in the terminal:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
sudo add-apt-repository -u ppa:juju/stable
|
||||
sudo apt install juju
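
You can confirm the client installed correctly by asking it for its version;
the exact output will depend on the release pulled from the PPA:

.. code:: bash

    juju version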
|
||||
|
||||
Client configuration
|
||||
--------------------
|
||||
|
||||
The Juju client needs two pieces of information before it can control our MAAS
|
||||
deployment.
|
||||
|
||||
1. A cloud definition for the MAAS deployment. This definition will include
|
||||
where MAAS can be found and how Juju can authenticate itself with it.
|
||||
2. A separate credentials definition that's used when accessing MAAS. This links
|
||||
the authentication details to the cloud definition.
|
||||
|
||||
To create the cloud definition, type ``juju add-cloud mymaas`` to add a cloud
|
||||
called ``mymaas``. This will produce output similar to the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
Cloud Types
|
||||
maas
|
||||
manual
|
||||
openstack
|
||||
vsphere
|
||||
|
||||
Select cloud type:
|
||||
|
||||
Enter ``maas`` as the cloud type and you will be asked for the API endpoint URL.
|
||||
This URL is the same as the URL used to access the MAAS web UI in the previous
|
||||
step: ``http://<your.maas.ip>:5240/MAAS/``.
|
||||
|
||||
With the endpoint added, Juju will inform you that ``mymaas`` was successfully
|
||||
added. The next step is to add credentials. This is initiated by typing
|
||||
``juju add-credential mymaas``. Enter ``admin`` when asked for a credential
|
||||
name.
|
||||
|
||||
Juju will output the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
Enter credential name: admin
|
||||
|
||||
Using auth-type "oauth1".
|
||||
|
||||
Enter maas-oauth:
|
||||
|
||||
The ``oauth1`` credential value is the MAAS API key for the ``admin`` user. To
|
||||
retrieve this, login to the MAAS web UI and click on the ``admin`` username near
|
||||
the top right. This will show the user preferences page. The top field will hold
|
||||
your MAAS keys:
|
||||
|
||||
.. figure:: ./media/install-juju_maaskey.png
|
||||
:alt: MAAS API key
|
||||
|
||||
MAAS API key
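
If you have shell access to the MAAS server, the same key can usually be
retrieved on the command line rather than through the web UI; this is a
convenience, not a requirement:

.. code:: bash

    sudo maas apikey --username admin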
|
||||
|
||||
Copy and paste this key into the terminal and press return. You will be informed
|
||||
that credentials have been added for cloud ``mymaas``.
|
||||
|
||||
You can check the cloud definition has been added with the ``juju clouds``
|
||||
command, and you can list credentials with the ``juju credentials`` command.
|
||||
|
||||
Testing the environment
|
||||
-----------------------
|
||||
|
||||
The Juju client now has everything it needs to instruct MAAS to deploy a Juju
|
||||
controller.
|
||||
|
||||
But before we move on to deploying OpenStack, it's worth checking that
|
||||
everything is working first. To do this, we'll simply ask Juju to create a new
|
||||
controller for our cloud:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju bootstrap --constraints tags=juju mymaas maas-controller
|
||||
|
||||
The constraint in the above command will ask MAAS to use any nodes tagged with
|
||||
``juju`` to host the controller for the Juju client. We tagged this node within
|
||||
MAAS in the `previous step <./install-maas.html#commission-nodes>`__.
|
||||
|
||||
The output to a successful bootstrap will look similar to the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
Creating Juju controller "maas-controller" on mymaas
|
||||
Looking for packaged Juju agent version 2.2-alpha1 for amd64
|
||||
Launching controller instance(s) on mymaas...
|
||||
- 7cm8tm (arch=amd64 mem=2G cores=2)
|
||||
Fetching Juju GUI 2.4.4
|
||||
Waiting for address
|
||||
Attempting to connect to 192.168.100.106:22
|
||||
Bootstrap agent now started
|
||||
Contacting Juju controller at 192.168.100.106 to verify accessibility...
|
||||
Bootstrap complete, "maas-controller" controller now available.
|
||||
Controller machines are in the "controller" model.
|
||||
Initial model "default" added.
|
||||
|
||||
If you're monitoring the nodes view of the MAAS web UI, you will notice that the
|
||||
node we tagged with ``juju`` starts deploying Ubuntu 16.04 LTS automatically,
|
||||
which will be used to host the Juju controller.
|
||||
|
||||
Next steps
|
||||
----------
|
||||
|
||||
We've now installed the Juju client and given it enough details to control our
|
||||
MAAS deployment, which we've tested by bootstrapping a new Juju controller. The
|
||||
next step will be to use Juju to deploy and link the various components required
|
||||
by OpenStack.
|
||||
|
||||
|
|
@ -0,0 +1,332 @@
|
|||
Install MAAS
|
||||
============
|
||||
|
||||
`MAAS <https://maas.io/>`__, *Metal As A Service*, brings cloud convenience to
|
||||
your hardware, enabling pools of physical servers to be controlled just like
|
||||
virtual machines in the cloud.
|
||||
|
||||
On its own, MAAS can perform zero-touch deployments of `Windows, Ubuntu, CentOS,
|
||||
RHEL and SUSE <https://maas.io/#pricing>`__. But in combination with
|
||||
`Juju <https://jujucharms.com/about>`__, complex environments can be modelled
|
||||
and deployed, pulled down and redeployed again, easily and entirely abstracted
|
||||
from your underlying infrastructure.
|
||||
|
||||
We're going to use MAAS as the foundation for Juju to deploy a fully fledged
|
||||
OpenStack cloud.
|
||||
|
||||
The following is what you'll find in a typical MAAS environment and we'll use
|
||||
this as the framework for our own deployment:
|
||||
|
||||
- A **Region controller** interacts with and controls the wider environment for
|
||||
a region
|
||||
- One or more **Rack controllers** manage locally grouped hardware, usually
|
||||
within a data centre rack
|
||||
- Multiple **Nodes** are individual machines managed by the Rack controller,
|
||||
and ultimately, the Region controller
|
||||
- Complex **Networking** topologies can be modelled and implemented by MAAS,
|
||||
from a single fabric to multiple zones and many overlapping spaces
|
||||
|
||||
What you'll need
|
||||
----------------
|
||||
|
||||
MAAS can work at any scale, from a test deployment using nothing but `LXD on a
|
||||
single machine <http://conjure-up.io/>`__ to thousands of machines deployed
|
||||
across multiple regions.
|
||||
|
||||
It's our intention to build a useful minimal deployment of OpenStack, capable of
|
||||
both performing some real work and scaling to fit more ambitious projects.
|
||||
|
||||
To make this minimal configuration as accessible as possible, we'll be using
|
||||
single nodes for multiple services, reducing the total number of machines
|
||||
required. The four cloud nodes, for instance, will co-host Ceph, Glance and
|
||||
Swift, as well as the other services required by OpenStack.
|
||||
|
||||
The hardware we'll be using is based on the following specifications:
|
||||
|
||||
- 1 x MAAS Rack with Region controller: 8GB RAM, 2 CPUs, 1 NIC, 40GB storage
|
||||
- 1 x Juju node: 4GB RAM, 2 CPUs, 1 NIC, 40GB storage
|
||||
- 4 x OpenStack cloud nodes: 8GB RAM, 2 CPUs, 2 NICs, 80GB storage
|
||||
|
||||
To get a better idea of the minimum requirements for the machines that run MAAS,
|
||||
take a look at the `MAAS
|
||||
documentation <https://docs.ubuntu.com/maas/2.2/en/#minimum-requirements>`__.
|
||||
|
||||
As with the hardware, our network topology is also going to be as simple as
|
||||
possible whilst remaining both scalable and functional. It contains a single
|
||||
zone for the four cloud nodes, with the machine hosting the MAAS region and rack
|
||||
controllers connected to both the external network and the single zone. It's
|
||||
recommended that MAAS is the sole provider of DHCP and DNS for the network
|
||||
hosting the nodes MAAS is going to manage, but we'll cover this in an imminent
|
||||
step.
|
||||
|
||||
Your hardware could differ considerably from the above and both MAAS and Juju
|
||||
will easily adapt. The Juju node could operate perfectly adequately with half
|
||||
the RAM (this would need to be defined as a bootstrap constraint) and adding
|
||||
more nodes will obviously improve performance.
|
||||
|
||||
.. note:: We'll be using the web UI whenever possible, but it's worth noting
|
||||
that everything (and more) we do with MAAS can also be done from the
|
||||
`CLI <https://docs.ubuntu.com/maas/2.2/en/manage-cli>`__ and the
|
||||
`API <https://docs.ubuntu.com/maas/2.2/en/api>`__.
|
||||
|
||||
Package installation
|
||||
--------------------
|
||||
|
||||
The first step is to install `Ubuntu Server 16.04
|
||||
LTS <https://www.ubuntu.com/download/server>`__ on the machine that's going to
|
||||
host both the MAAS Rack and Region controllers. The Ubuntu Server install menu
|
||||
includes the option to `Install and configure both
|
||||
controllers <https://docs.ubuntu.com/maas/2.1/en/installconfig-iso-install>`__,
|
||||
but to cover more use cases, we will assume that you have a fresh install of
|
||||
Ubuntu Server.
|
||||
|
||||
The network configuration for your new server will depend on your own
|
||||
infrastructure. In our example, the MAAS server network interface connects to
|
||||
the wider network through ``192.168.100.0/24``. These options can be configured
|
||||
during installation. See the Ubuntu Server `Network Configuration
|
||||
documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`__
|
||||
for further details on modifying your network configuration.
|
||||
|
||||
To update the package database and install MAAS, issue the following commands:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
sudo apt update
|
||||
sudo apt install maas
|
||||
|
||||
At this point, MAAS is now running, albeit without a meaningful configuration.
|
||||
You can check this by pointing a web browser at
|
||||
``http://<your.maas.ip>:5240/MAAS/``. You will see a page complaining that no
|
||||
admin user has been created yet.
|
||||
|
||||
A MAAS admin account is needed before we can start configuring MAAS. This needs
|
||||
to be done on the command line by typing the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
sudo maas createadmin
|
||||
|
||||
You'll be asked for a username, a password and an email address. The following
|
||||
text will assume ``admin`` was used as the username.
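
If you're scripting the installation, the account can usually be created
non-interactively instead. This sketch assumes the ``--username``,
``--password`` and ``--email`` options available in MAAS 2.x, and the values
shown are examples only:

.. code:: bash

    # Example values; substitute your own username, password and email
    sudo maas createadmin --username admin --password openstack \
        --email admin@example.com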
|
||||
|
||||
.. note:: MAAS does not currently make use of the email address.
|
||||
|
||||
A final question will ask whether you want to import SSH keys. MAAS uses the
|
||||
public SSH key of a user to manage and secure access to deployed nodes, just as
|
||||
you might with managed servers or remote machines. Press ``Enter`` to skip this
|
||||
as we'll do this from the web UI in the next step.
|
||||
|
||||
On-boarding
|
||||
-----------
|
||||
|
||||
Now that we've created an admin account, the web interface will update to ask for
|
||||
login credentials. With credentials successfully accepted, the web interface
|
||||
will launch the 'Welcome to MAAS' on-boarding page:
|
||||
|
||||
.. figure:: ./media/install-maas_welcome.png
|
||||
:alt: welcome to maas
|
||||
|
||||
welcome to maas
|
||||
|
||||
This is the first page of two that will step through the final steps necessary
|
||||
for MAAS to get up and running. Unless you have specific requirements, most of
|
||||
these options can be left at their default values:
|
||||
|
||||
- **Connectivity**: important services that default to being outside of your
|
||||
network. These include package archives and the DNS forwarder.
|
||||
|
||||
- **Ubuntu**: this section refers to the versions and architectures of the
|
||||
Ubuntu images MAAS will import and use on deployed nodes. Select
|
||||
``14.04 LTS`` alongside ``16.04 LTS`` to add an additional image.
|
||||
|
||||
.. figure:: ./media/install-maas_images.png
|
||||
:alt: Ubuntu images
|
||||
|
||||
Ubuntu images
|
||||
|
||||
- **Keys**: You can conveniently import your public SSH key(s) from both
|
||||
Launchpad and Github by entering your user id for these services. To add a
|
||||
local public key file, usually ``~/.ssh/id_rsa.pub``, select ``Upload`` and
|
||||
paste file contents into the box that appears. Click ``Import`` to fix the
|
||||
setting.
|
||||
|
||||
.. figure:: ./media/install-maas_sshkeys.png
|
||||
:alt: SSH key import
|
||||
|
||||
SSH key import
|
||||
|
||||
If you need to generate a local SSH public/private key pair, type
|
||||
``ssh-keygen -t rsa`` from the Linux account you'll control MAAS from, and when
|
||||
asked, leave the passphrase blank.
|
||||
|
||||
Adding SSH keys completes this initial MAAS configuration. Click
|
||||
``Go to the dashboard`` to move to the MAAS dashboard and the device discovery
|
||||
process.
|
||||
|
||||
Networking
|
||||
----------
|
||||
|
||||
By default, MAAS will monitor local network traffic and report any devices it
|
||||
discovers on the 'Device discovery' page of the web UI. This page also functions
|
||||
as the landing page for the dashboard and will be the first one you see
|
||||
progressing from the installation on-boarding.
|
||||
|
||||
.. figure:: ./media/install-maas_discovery.png
|
||||
:alt: Device discovery
|
||||
|
||||
Device discovery
|
||||
|
||||
Before taking the configuration further, we need to tell MAAS about our network
|
||||
and how we'd like connections to be configured.
|
||||
|
||||
These options are managed from the ``Subnets`` page of the web UI. The subnets
|
||||
page defaults to listing connections by fabric and MAAS creates one fabric per
|
||||
physical NIC on the MAAS server. As we're configuring a machine with a single
|
||||
NIC, a single fabric will be listed, linked to the external subnet.
|
||||
|
||||
We need to add DHCP to the subnet that's going to manage the nodes. To do this,
|
||||
select the ``untagged`` VLAN listed to the right of ``fabric-0``.
|
||||
|
||||
The page that appears will be labelled something similar to
|
||||
``Default VLAN in fabric-0``. From here, click on the ``Take action`` button in
|
||||
the top right and select ``Provide DHCP``. A new pane will appear that allows
|
||||
you to specify the start and end IP addresses for the DHCP range. Select
|
||||
``Provide DHCP`` to accept the default values. The VLAN summary should now show
|
||||
DHCP as ``Enabled``.
|
||||
|
||||
.. figure:: ./media/install-maas_dhcp.png
|
||||
:alt: Provide DHCP
|
||||
|
||||
Provide DHCP
|
||||
|
||||
.. note:: See `Concepts and
|
||||
Terms <https://docs.ubuntu.com/maas/2.1/en/intro-concepts>`__ in the MAAS
|
||||
documentation for clarification on the terminology used within MAAS.
|
||||
|
||||
Images
|
||||
------
|
||||
|
||||
We have already downloaded the images we need as part of the on-boarding
|
||||
process, but it's worth checking that both the images we requested are
|
||||
available. To do this, select the 'Images' page from the top menu of the web UI.
|
||||
|
||||
The ``Images`` page allows you to download new images, use a custom source for
|
||||
images, and check on the status of any images currently downloaded. These appear
|
||||
at the bottom, and both 16.04 LTS and 14.04 LTS should be listed with a status
|
||||
of ``Synced``.
|
||||
|
||||
.. figure:: ./media/install-maas_imagestatus.png
|
||||
:alt: Image status
|
||||
|
||||
Image status
|
||||
|
||||
Adding nodes
|
||||
------------
|
||||
|
||||
MAAS is now ready to accept new nodes. To do this, first ensure your four cloud
|
||||
nodes and single Juju node are set to boot from a PXE image. Now simply power
|
||||
them on. MAAS will add these new nodes automatically by taking the following
|
||||
steps:
|
||||
|
||||
- Detect each new node on the network
|
||||
- Probe and log each node's hardware (using an ephemeral boot image)
|
||||
- Add each node to the ``Nodes`` page with a status of ``New``
|
||||
|
||||
Though less satisfying, we'd recommend powering up each node one at a time, as
|
||||
it can be difficult to know which is which at this stage.
|
||||
|
||||
In order to fully manage a deployment, MAAS needs to be able to power cycle each
|
||||
node. This is why MAAS will attempt to power each node off during the discovery
|
||||
phase. If your hardware does not power off, it's likely that it's not using an
|
||||
IPMI based BMC and you will need to edit a node's power configuration to enable
|
||||
MAAS to control its power. See the `MAAS
|
||||
documentation <https://docs.ubuntu.com/maas/2.2/en/installconfig-nodes-power-types>`__
|
||||
for more information on power types, including a
|
||||
`table <https://docs.ubuntu.com/maas/2.2/en/installconfig-nodes-power-types#bmc-driver-support>`__
|
||||
showing a feature comparison for the supported BMC drivers.
|
||||
|
||||
To edit a node's power configuration, click on the arbitrary name your machine
|
||||
has been given in the ``Nodes`` page. This will open the configuration page for
|
||||
that specific machine. ``Power`` is the second section from the top.
|
||||
|
||||
Use the drop-down ``Power type`` menu to open the configuration options for your
|
||||
node's specific power configuration and enter any further details that the
|
||||
configuration may require.
|
||||
|
||||
.. figure:: ./media/install-maas_power.png
|
||||
:alt: Power configuration
|
||||
|
||||
Power configuration
|
||||
|
||||
Click ``Save changes`` when finished. You should now be able to power off the
|
||||
machine using the ``Take action`` menu in the top right.
|
||||
|
||||
Commission nodes
|
||||
----------------
|
||||
|
||||
From the ``Nodes`` page, select all the check boxes for all the machines in a
|
||||
``New`` state and use the ``Take action`` menu to select ``Commission``. After a
|
||||
few minutes, successfully commissioned nodes will change their status to
|
||||
``Ready``. The CPU cores, RAM, number of drives and storage fields should now
|
||||
correctly reflect the hardware on each node.
|
||||
|
||||
For more information on the different states and actions for a node, see `Node
|
||||
actions <https://docs.ubuntu.com/maas/2.1/en/intro-concepts#node-actions>`__ in
|
||||
the MAAS documentation.
|
||||
|
||||
We're now almost at the stage where we can let Juju do its thing. But before we
|
||||
take that next step, we're going to rename and ``tag`` the newly added nodes so
|
||||
that we can instruct Juju which machines to use for which purpose.
|
||||
|
||||
To change the name of a node, select it from the ``Nodes`` page and use the
|
||||
editable name field in the top right. All nodes will automatically be suffixed
|
||||
with ``.maas``. Click on ``Save`` to fix the change.
|
||||
|
||||
Tags are normally used to identify nodes with specific hardware, such as GPUs for
|
||||
GPU-accelerated CUDA processing. This allows Juju to target these capabilities
|
||||
when deploying applications that may use them. But they can also be used for
|
||||
organisational and management purposes. This is how we're going to use them, by
|
||||
adding a ``compute`` tag to the four cloud nodes and a ``juju`` tag to the node
|
||||
that will act as the Juju controller.
|
||||
|
||||
Tags are added from the ``Machine summary`` section of the same individual node
|
||||
page we used to rename a node. Click ``Edit`` on this section and look for
|
||||
``Tags``. A tag is added by entering a name for the tag in the empty field and
|
||||
clicking ``Save changes``.
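
Tags can also be managed from the MAAS CLI. As a rough sketch, assuming you
have logged in to the CLI with ``maas login`` using a profile called ``admin``,
and substituting a node's real system ID, the equivalent commands look
something like this:

.. code:: bash

    # Create the tag, then attach it to a node by its system ID
    maas admin tags create name=compute
    maas admin tag update-nodes compute add=<system-id>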
|
||||
|
||||
.. figure:: ./media/install-maas_tags.png
|
||||
:alt: Adding tags
|
||||
|
||||
Adding tags
|
||||
|
||||
Here's a summary of the status of each node we've now added to MAAS, showing
|
||||
their names and tags alongside each node's hardware configuration (RAM and storage in GB):
|
||||
|
||||
+---------------------+-----------+--------+-------+----------+-----------+
|
||||
| Node name | Tag(s) | CPUs | RAM | Drives | Storage |
|
||||
+=====================+===========+========+=======+==========+===========+
|
||||
| os-compute01.maas | compute | 2 | 6.0 | 3 | 85.9 |
|
||||
+---------------------+-----------+--------+-------+----------+-----------+
|
||||
| os-compute02.maas | compute | 2 | 6.0 | 3 | 85.9 |
|
||||
+---------------------+-----------+--------+-------+----------+-----------+
|
||||
| os-compute03.maas | compute | 2 | 6.0 | 3 | 85.9 |
|
||||
+---------------------+-----------+--------+-------+----------+-----------+
|
||||
| os-compute04.maas | compute | 2 | 6.0 | 3 | 85.9 |
|
||||
+---------------------+-----------+--------+-------+----------+-----------+
|
||||
| os-juju01.maas | juju | 2 | 4.0 | 1 | 42.9 |
|
||||
+---------------------+-----------+--------+-------+----------+-----------+
|
||||
|
||||
Next steps
|
||||
----------
|
||||
|
||||
Everything is now configured and ready for our next step. This will involve
|
||||
deploying the Juju controller onto its own node. From there, we'll be using Juju
|
||||
and MAAS together to deploy OpenStack into the four remaining cloud nodes.
|
||||
|
||||
|
|
@ -0,0 +1,37 @@
|
|||
Install OpenStack from a bundle
|
||||
===============================
|
||||
|
||||
`Stepping through the deployment <./install-openstack.html>`__ of each OpenStack
|
||||
application is the best way to understand how OpenStack and Juju operate, and
|
||||
how each application relates to one another. But it's a labour intensive
|
||||
process.
|
||||
|
||||
A bundle allows you to accomplish the same deployment with a single command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju deploy openstack.bundle
|
||||
|
||||
A `bundle <https://jujucharms.com/docs/stable/charms-bundles>`__, as used above,
|
||||
encapsulates the entire deployment process, including all applications, their
|
||||
configuration parameters and any relations that need to be made. Generally, you
|
||||
can use a local file, as above, or deploy a curated bundle from the `charm
|
||||
store <./install-openstack-bundle.html>`__.
|
||||
|
||||
For our project, download the OpenStack bundle and deploy it using the above
command.
|
||||
|
||||
The speed of the deployment depends on your hardware, but may take some time.
|
||||
Monitor the output of ``juju status`` to see when everything is ready.
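
If you'd prefer not to re-run the command by hand, something like ``watch`` can
poll it for you; this is purely a convenience:

.. code:: bash

    watch -n 10 juju status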
|
||||
|
||||
Next steps
|
||||
----------
|
||||
|
||||
See the `Install OpenStack <./install-openstack.html#test-openstack>`__
|
||||
documentation for details on testing your OpenStack deployment, or jump directly
|
||||
to `Configure OpenStack <./config-openstack.html>`__ to start using OpenStack
|
||||
productively as quickly as possible.
|
||||
|
||||
|
|
@ -0,0 +1,553 @@
|
|||
Install OpenStack
|
||||
=================
|
||||
|
||||
Now that we've installed and configured `MAAS <./install-maas.html>`__ and
|
||||
successfully deployed a `Juju <./install-juju.html>`__ controller, it's time to
|
||||
do some real work; use Juju to deploy
|
||||
`OpenStack <https://www.openstack.org/>`__, the leading open cloud platform.
|
||||
|
||||
We have two options when installing OpenStack.
|
||||
|
||||
1. Install and configure each OpenStack component separately. Adding Ceph,
|
||||
Compute, Swift, RabbitMQ, Keystone and Neutron in this way allows you to see
|
||||
exactly what Juju and MAAS are doing, and consequently, gives you a better
|
||||
understanding of the underlying OpenStack deployment.
|
||||
2. Use a `bundle <https://jujucharms.com/docs/stable/charms-bundles>`__ to
|
||||
deploy OpenStack with a single command. A bundle is an encapsulation of a
|
||||
working deployment, including all configuration, resources and references. It
|
||||
allows you to effortlessly recreate a deployment with a single command or
|
||||
share that deployment with other Juju users.
|
||||
|
||||
If this is your first foray into MAAS, Juju and OpenStack territory, we'd
|
||||
recommend starting with the first option. This will give you a stronger
|
||||
foundation for maintaining and expanding the default deployment. Our
|
||||
instructions for this option continue below.
|
||||
|
||||
Alternatively, jump to `Deploying OpenStack as a
|
||||
bundle <./install-openstack-bundle.html>`__ to learn about deploying as a
|
||||
bundle.
|
||||
|
||||
Deploy the Juju controller
|
||||
--------------------------
|
||||
|
||||
`Previously <./install-juju.html>`__, we tested our MAAS and Juju configuration
|
||||
by deploying a new Juju controller called ``maas-controller``. You can check
|
||||
this controller is still operational by typing ``juju status``. With the Juju
|
||||
controller running, the output will look similar to the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
Model Controller Cloud/Region Version
|
||||
default  maas-controller  mymaas        2.2-alpha1
|
||||
|
||||
App Version Status Scale Charm Store Rev OS Notes
|
||||
|
||||
Unit Workload Agent Machine Public address Ports Message
|
||||
|
||||
Machine State DNS Inst id Series AZ
|
||||
|
||||
If you need to remove and redeploy the controller, use the following two
|
||||
commands:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju kill-controller maas-controller
|
||||
juju bootstrap --constraints tags=juju mymaas maas-controller
|
||||
|
||||
During the bootstrap process, Juju will create a model called ``default``, as
|
||||
shown in the output from ``juju status`` above.
|
||||
`Models <https://jujucharms.com/docs/stable/models>`__ act as containers for
|
||||
applications, and Juju's default model is great for experimentation.
|
||||
|
||||
We're going to create a new model called ``uos`` to hold our OpenStack
|
||||
deployment exclusively, making the entire deployment easier to manage and
|
||||
maintain.
|
||||
|
||||
To create a model called ``uos`` (and switch to it), simply type the following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju add-model uos
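
You can verify that the new model exists and is currently selected with
``juju models``; the active model is marked with an asterisk in the output:

.. code:: bash

    juju models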
|
||||
|
||||
Deploy OpenStack
|
||||
----------------
|
||||
|
||||
We're now going to step through adding each of the various OpenStack components
|
||||
to the new model. Each application will be installed from the `Charm
|
||||
store <https://jujucharms.com>`__. We'll be providing the configuration for many
|
||||
of the charms as a ``yaml`` file which we include as we deploy them.
|
||||
|
||||
`Ceph-OSD <https://jujucharms.com/ceph-osd>`__
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
We're starting with the Ceph object storage daemon and we want to configure Ceph
|
||||
to use the second drive of a cloud node, ``/dev/sdb``. Change or ignore this to
|
||||
match your own configuration. The configuration is held within the following
|
||||
file we've called ``ceph-osd.yaml``:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
ceph-osd:
|
||||
osd-devices: /dev/sdb
|
||||
osd-reformat: "yes"
|
||||
|
||||
We're going to deploy Ceph-OSD to each of the four cloud nodes we've already
|
||||
tagged with ``compute``. The following command will import the settings above
|
||||
and deploy Ceph-OSD to each of the four nodes:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju deploy --constraints tags=compute --config ceph-osd.yaml -n 4 ceph-osd
|
||||
|
||||
In the background, Juju will ask MAAS to commission the nodes, powering them on
|
||||
and installing Ubuntu. Juju then takes over and installs the necessary packages
|
||||
for the required application.
|
||||
|
||||
Remember, you can check on the status of a deployment using the ``juju status``
|
||||
command. To see the status of a single charm or application, append the charm
|
||||
name:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju status ceph-osd
|
||||
|
||||
In this early stage of deployment, the output will look similar to the
|
||||
following:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
Model Controller Cloud/Region Version
|
||||
uos      maas-controller  mymaas        2.2-beta1
|
||||
|
||||
App Version Status Scale Charm Store Rev OS Notes
|
||||
ceph-osd 10.2.6 blocked 4 ceph-osd jujucharms 241 ubuntu
|
||||
|
||||
Unit Workload Agent Machine Public address Ports Message
|
||||
ceph-osd/0 blocked idle 0 192.168.100.113 Missing relation: monitor
|
||||
ceph-osd/1* blocked idle 1 192.168.100.114 Missing relation: monitor
|
||||
ceph-osd/2 blocked idle 2 192.168.100.115 Missing relation: monitor
|
||||
ceph-osd/3 blocked idle 3 192.168.100.112 Missing relation: monitor
|
||||
|
||||
Machine State DNS Inst id Series AZ Message
|
||||
0 started 192.168.100.113 fr36gt xenial default Deployed
|
||||
1 started 192.168.100.114 nnpab4 xenial default Deployed
|
||||
2 started 192.168.100.115 a83gcy xenial default Deployed
|
||||
3 started 192.168.100.112 7gan3t xenial default Deployed
|
||||
|
||||
Don't worry about the 'Missing relation' messages. We'll add the required
|
||||
relations in a later step. You also don't have to wait for a deployment to
|
||||
finish before adding further applications to Juju. Errors will resolve
|
||||
themselves as applications are deployed and dependencies are met.
|
||||
|
||||
`Nova Compute <https://jujucharms.com/nova-compute/>`__
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
We're going to use three machines to host the OpenStack Nova Compute application.
|
||||
The first will use the following configuration file, ``compute.yaml``, while
|
||||
we'll use the second and third to scale out the same application to two other
|
||||
machines.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
nova-compute:
|
||||
enable-live-migration: True
|
||||
enable-resize: True
|
||||
migration-auth-type: ssh
|
||||
virt-type: qemu
|
||||
|
||||
Type the following to deploy ``nova-compute`` to machine number 1:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju deploy --to 1 --config compute.yaml nova-compute
|
||||
|
||||
And use the following commands to scale out Nova Compute to machines 2 and 3:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju add-unit --to 2 nova-compute
|
||||
juju add-unit --to 3 nova-compute
|
||||
|
||||
As before, it's worth checking ``juju status nova-compute`` output to make sure
|
||||
``nova-compute`` has been deployed to three machines. Look for lines similar to
|
||||
these:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
Machine State DNS Inst id Series AZ Message
|
||||
1 started 192.168.100.117 7gan3t xenial default Deployed
|
||||
2 started 192.168.100.118 fr36gt xenial default Deployed
|
||||
3 started 192.168.100.119 nnpab4 xenial default Deployed
|
||||
|
||||
`Swift storage <https://jujucharms.com/swift-storage/>`__
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Swift-storage application is going to be deployed to the first machine
|
||||
(``machine 0``), and scaled across the other three with the following
|
||||
configuration file:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
swift-storage:
|
||||
block-device: sdc
|
||||
overwrite: "true"
|
||||
|
||||
Here are the four commands to deploy and scale the application across the four machines:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju deploy --to 0 --config swift-storage.yaml swift-storage
|
||||
juju add-unit --to 1 swift-storage
|
||||
juju add-unit --to 2 swift-storage
|
||||
juju add-unit --to 3 swift-storage
|
||||
|
||||
`Neutron networking <https://jujucharms.com/neutron-api/>`__
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Next comes Neutron for OpenStack networking. We have just a couple of
|
||||
configuration options that need to be placed within ``neutron.yaml`` and we're
|
||||
going to use this for two applications, ``neutron-gateway`` and ``neutron-api``:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
neutron-gateway:
|
||||
ext-port: 'eth1'
|
||||
neutron-api:
|
||||
neutron-security-groups: True
|
||||
|
||||
First, deploy the gateway:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju deploy --to 0 --config neutron.yaml neutron-gateway
|
||||
|
||||
We're going to colocate the Neutron API on machine 1 by using an
|
||||
`LXD <https://www.ubuntu.com/containers/lxd>`__ container. This is a great
|
||||
solution for both local deployment and for managing cloud instances.
|
||||
|
||||
We'll also deploy Neutron OpenvSwitch:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju deploy --to lxd:1 --config neutron.yaml neutron-api
|
||||
juju deploy neutron-openvswitch
|
||||
|
||||
We've got to a stage where we can start to connect applications together. Juju's
|
||||
ability to add these links, known as a relation in Juju, is one of its best
|
||||
features.
|
||||
|
||||
See `Managing
|
||||
relationships <https://jujucharms.com/docs/stable/charms-relations>`__ in the
|
||||
Juju documentation for more information on relations.
|
||||
|
||||
Add the network relations with the following commands:

.. code:: bash

    juju add-relation neutron-api neutron-gateway
    juju add-relation neutron-api neutron-openvswitch
    juju add-relation neutron-openvswitch nova-compute

There are still 'Missing relations' messages in the status output, leaving some
applications in a ``blocked`` state. This is because many more relations still
need to be added, but they'll resolve themselves automatically as we add them.

`Percona cluster <https://jujucharms.com/percona-cluster/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Percona XtraDB cluster application comes next, and like Neutron API above,
we're going to use LXD.

The following ``mysql.yaml`` is the only configuration we need:

.. code:: yaml

    mysql:
      max-connections: 20000

To deploy the ``percona-cluster`` charm under the application name ``mysql``:

.. code:: bash

    juju deploy --to lxd:0 --config mysql.yaml percona-cluster mysql

And there's just a single new relation to add:

.. code:: bash

    juju add-relation neutron-api mysql

`Keystone <https://jujucharms.com/keystone/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As Keystone handles OpenStack identity management and access, we're going to
use the following contents of ``keystone.yaml`` to set an admin password for
OpenStack:

.. code:: yaml

    keystone:
      admin-password: openstack

We'll use an LXD container on machine 3 to help balance the load a little. To
deploy the application, use the following command:

.. code:: bash

    juju deploy --to lxd:3 --config keystone.yaml keystone

Then add these relations:

.. code:: bash

    juju add-relation keystone mysql
    juju add-relation neutron-api keystone

`RabbitMQ <https://jujucharms.com/rabbitmq-server/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We're using RabbitMQ as the messaging server. Deployment requires no
configuration beyond running the following command:

.. code:: bash

    juju deploy --to lxd:0 rabbitmq-server

This brings along four new relations that need to be made:

.. code:: bash

    juju add-relation neutron-api rabbitmq-server
    juju add-relation neutron-openvswitch rabbitmq-server
    juju add-relation nova-compute:amqp rabbitmq-server
    juju add-relation neutron-gateway:amqp rabbitmq-server:amqp

`Nova Cloud Controller <https://jujucharms.com/nova-cloud-controller/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the controller service for OpenStack, and includes the nova-scheduler,
nova-api and nova-conductor services.

The following simple ``controller.yaml`` configuration file will be used:

.. code:: yaml

    nova-cloud-controller:
      network-manager: "Neutron"

To add the controller to your deployment, enter the following:

.. code:: bash

    juju deploy --to lxd:2 --config controller.yaml nova-cloud-controller

Followed by these ``add-relation`` commands:

.. code:: bash

    juju add-relation nova-cloud-controller mysql
    juju add-relation nova-cloud-controller keystone
    juju add-relation nova-cloud-controller rabbitmq-server
    juju add-relation nova-cloud-controller neutron-gateway
    juju add-relation neutron-api nova-cloud-controller
    juju add-relation nova-compute nova-cloud-controller

`OpenStack Dashboard <https://jujucharms.com/openstack-dashboard/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We'll deploy the dashboard to another LXD container with a single command:

.. code:: bash

    juju deploy --to lxd:3 openstack-dashboard

And a single relation:

.. code:: bash

    juju add-relation openstack-dashboard keystone

`Glance <https://jujucharms.com/glance/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the Glance image service, deploy as follows:

.. code:: bash

    juju deploy --to lxd:2 glance

Relations:

.. code:: bash

    juju add-relation nova-cloud-controller glance
    juju add-relation nova-compute glance
    juju add-relation glance mysql
    juju add-relation glance keystone
    juju add-relation glance rabbitmq-server

`Ceph monitor <https://jujucharms.com/ceph/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ceph, the distributed storage system, needs a couple of extra parameters.

The first is a UUID, ensuring each cluster has a unique identifier. This is
simply generated by running the ``uuid`` command (``apt install uuid``, if it's
not already installed). We'll use this value as the ``fsid`` in the following
``ceph-mon.yaml`` configuration file.

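For example, generating the identifier looks something like this (the UUID
shown is purely illustrative; use the value produced by your own run):

.. code:: bash

    sudo apt install uuid   # only needed if the command isn't already installed
    uuid
    # example output - yours will differ:
    # a1ee9afe-194c-11e7-bf0f-53d662bc4339
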
The second parameter is a ``monitor-secret`` for the configuration file. This is
generated on the MAAS machine by first installing the ``ceph-common`` package
and then typing the following:

.. code:: bash

    ceph-authtool /dev/stdout --name=mon. --gen-key

The output will be similar to the following:

.. code:: bash

    [mon.]
        key = AQAARuRYD1p/AhAAKvtuJtim255+E1sBJNUkcg==

This is what the configuration file looks like with the required parameters:

.. code:: yaml

    ceph-mon:
      fsid: "a1ee9afe-194c-11e7-bf0f-53d6"
      monitor-secret: AQAARuRYD1p/AhAAKvtuJtim255+E1sBJNUkcg==

Finally, deploy and scale the application as follows:

.. code:: bash

    juju deploy --to lxd:1 --config ceph-mon.yaml ceph-mon
    juju add-unit --to lxd:2 ceph-mon
    juju add-unit --to lxd:3 ceph-mon

With these additional relations:

.. code:: bash

    juju add-relation ceph-osd ceph-mon
    juju add-relation nova-compute ceph-mon
    juju add-relation glance ceph-mon

`Cinder <https://jujucharms.com/cinder/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For Cinder block storage, use the following ``cinder.yaml`` file:

.. code:: yaml

    cinder:
      glance-api-version: 2
      block-device: None

And deploy with this:

.. code:: bash

    juju deploy --to lxd:1 --config cinder.yaml cinder

Relations:

.. code:: bash

    juju add-relation nova-cloud-controller cinder
    juju add-relation cinder mysql
    juju add-relation cinder keystone
    juju add-relation cinder rabbitmq-server
    juju add-relation cinder:image-service glance:image-service
    juju add-relation cinder ceph-mon

`Swift proxy <https://jujucharms.com/swift-proxy/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Swift also needs a unique identifier, best generated with the ``uuid`` command
used previously. The output UUID is used for the ``swift-hash`` value in the
``swift-proxy.yaml`` configuration file:

.. code:: yaml

    swift-proxy:
      zone-assignment: auto
      swift-hash: "a1ee9afe-194c-11e7-bf0f-53d662bc4339"

Use the following command to deploy:

.. code:: bash

    juju deploy --to lxd:0 --config swift-proxy.yaml swift-proxy

These are its two relations:

.. code:: bash

    juju add-relation swift-proxy swift-storage
    juju add-relation swift-proxy keystone

`NTP <https://jujucharms.com/ntp/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The final component we need to deploy is a Network Time Protocol client, to
keep time synchronised across every machine. This is added with the following
simple command:

.. code:: bash

    juju deploy ntp

These last few ``add-relation`` commands finish all the connections we need to
make:

.. code:: bash

    juju add-relation neutron-gateway ntp
    juju add-relation nova-compute ntp
    juju add-relation ceph-osd ntp

All that's left to do now is wait for the output of ``juju status`` to show
that everything is ready (everything turns green, if your terminal supports
colour).

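Rather than re-running the command by hand, you can poll it. This is a minimal
sketch, assuming the standard ``watch`` utility is installed and that your Juju
client accepts a ``--color`` flag:

.. code:: bash

    # Re-run juju status every 10 seconds, preserving colour output
    # (the -c and --color flags are assumptions; drop them if unsupported).
    watch -c -n 10 juju status --color
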
Test OpenStack
--------------

After everything has deployed and the output of ``juju status`` has settled,
you can check that OpenStack is working by logging into the Horizon dashboard.

||||
The quickest way to get the IP address for the dashboard is with the following
|
||||
command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
juju status --format=yaml openstack-dashboard | grep public-address | awk '{print $2}'
|
||||
|
||||
The URL will be ``http://<IP ADDRESS>/horizon``. When you enter this into your
browser, you can log in with ``admin`` and ``openstack``, unless you changed the
password in the configuration file.

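If you'd like to confirm the dashboard is responding before opening a browser,
a quick check with ``curl`` also works. Substitute the address returned by the
command above; expecting a redirect to the login page is an assumption about
the default Horizon setup:

.. code:: bash

    # A 200 or 302 response indicates Horizon is serving its login page
    curl -sI http://<IP ADDRESS>/horizon | head -n 1
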
If everything works, you will see something similar to the following:

.. figure:: ./media/install-openstack_horizon.png
   :alt: Horizon dashboard

   Horizon dashboard

Next steps
----------

Congratulations, you've successfully deployed a working OpenStack environment
using both Juju and MAAS. The next step is to `configure
OpenStack <./config-openstack.html>`__ for use within a production environment.

.. _juju:

Juju
~~~~

TODO: general overview of juju in the context of this project.

.. _lxd:

LXD
~~~

TODO: general overview of LXD in the context of this project.

.. _maas:

MAAS
~~~~

TODO: general overview of MAAS in the context of this project.

.. _network_arch:

Network Architecture
~~~~~~~~~~~~~~~~~~~~

TODO:

.. _overview:

Overview
~~~~~~~~

.. toctree::
   :maxdepth: 2

   about.rst
   architecture.rst
   deployment_workflow.rst

.. _prepare_env.rst:

Prepare the deployment environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install MAAS
++++++++++++

Enlist Servers
++++++++++++++

Commission Servers
++++++++++++++++++

Configure Servers
+++++++++++++++++

Storage Configuration
---------------------

TODO: Bcache storage, LVM storage, recommendations

Network Configuration
---------------------

TODO: bonding, vlans, recommendations

Kernel Configuration
--------------------

TODO: hugepages, iommu, cpu isolation

Configure Juju
++++++++++++++

TODO: General instructions on configuring Juju to use MAAS.

Test Deployment
+++++++++++++++

TODO: Deploy both physical servers and LXD containers using magpie.

.. _service_arch:

Service Architecture
~~~~~~~~~~~~~~~~~~~~

TODO:

.. _storage_arch:

Storage Architecture
~~~~~~~~~~~~~~~~~~~~

TODO:

.. _verify:

Verify the deployment
~~~~~~~~~~~~~~~~~~~~~