Editorial review and update of first app doc

Change-Id: Id4a59a39e70ab083f90fcc17b0f2d40463af76a4
Closes-Bug: #1516850
Diane Fleming 2015-11-16 21:57:14 -06:00 committed by Andreas Jaeger
parent 80609b6428
commit 82e9181236
11 changed files with 552 additions and 561 deletions

@ -2,129 +2,120 @@
Advice for developers new to operations
=======================================
In this section, we will introduce some operational concepts and tasks
which may be new to developers who have not written cloud applications
before.
This section introduces some operational concepts and tasks to
developers who have not written cloud applications before.
Monitoring
~~~~~~~~~~
Monitoring is essential for cloud applications, especially if the
application is to be 'scalable'. You must know how many requests are
coming in, and what impact that has on the various services -- in
other words, enough information to determine whether you should start
another worker or API service as we did in :doc:`/scaling_out`.
Monitoring is essential for 'scalable' cloud applications. You must
know how many requests are coming in and the impact that these
requests have on various services. You must have enough information to
determine whether to start another worker or API service as you
did in :doc:`/scaling_out`.
.. todo:: explain how to achieve this kind of monitoring. Ceilometer?
.. todo:: explain how to achieve this kind of monitoring. Ceilometer?
(STOP LAUGHING.)
Aside from this kind of monitoring, you should consider availability
monitoring. Does your application care about a worker going down?
Maybe not. Does it care about a failed database server? Probably yes.
In addition to this kind of monitoring, you should consider
availability monitoring. Although your application might not care
about a failed worker, it should care about a failed database server.
One great pattern to add this to your application is the `Health
Endpoint Monitoring Pattern
<https://msdn.microsoft.com/en-us/library/dn589789.aspx>`_, where a
special API endpoint is introduced to your application for a basic
health check.
Use the
`Health Endpoint Monitoring Pattern <https://msdn.microsoft.com/en-us/library/dn589789.aspx>`_
to implement functional checks within your application that external
tools can access through exposed endpoints at regular intervals.
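The pattern can be sketched in a few lines of plain Python. This is a
hypothetical helper, not part of the Fractal app; a real deployment
would expose the result through an API endpoint in whatever web
framework the application already uses:

```python
def run_health_checks(checks):
    """Run each named check; a check passes if it returns without raising."""
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = "ok"
        except Exception as exc:
            results[name] = "failed: %s" % exc
    # The endpoint reports "degraded" if any dependency check failed.
    results["status"] = ("ok" if all(v == "ok" for k, v in results.items()
                                     if k != "status") else "degraded")
    return results


def check_database():
    # In a real deployment, open a connection to the database server here.
    pass


def check_queue():
    # ... and ping the message queue here.
    pass


if __name__ == "__main__":
    print(run_health_checks({"database": check_database,
                             "queue": check_queue}))
```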
Backups
~~~~~~~
Where instances store information that is not reproducible (such as a
database server, a file server, or even log files for an application),
it is important to back up this information as you would a normal
non-cloud server. It sounds simple, but just because it is 'in the
cloud' does not mean it has any additional robustness or resilience
when it comes to failure of the underlying hardware or systems.
Just as you back up information on a non-cloud server, you must back
up non-reproducible information, such as information on a database
server, file server, or in application log files. Just because
something is 'in the cloud' does not mean that the underlying hardware
or systems cannot fail.
OpenStack provides a couple of tools that make it easier to perform
backups. If your provider runs OpenStack Object Storage, this is
normally extremely robust and has several handy API calls and CLI
tools for working with archive files.
OpenStack provides a couple of tools that make it easy to back up
data. If your provider runs OpenStack Object Storage, you can use its
API calls and CLI tools to work with archive files.
It is also possible to create snapshots of running instances and persistent
volumes using the OpenStack API. Refer to the documentation of your SDK for
more.
You can also use the OpenStack API to create snapshots of running
instances and persistent volumes. For more information, see your SDK
documentation.
.. todo:: Link to appropriate documentation, or better yet, link and
also include the commands here.
While the technical action to perform backups can be straightforward,
you should also think about your policies regarding what is backed up
and how long each item should be retained.
In addition to configuring backups, review your policies about what
you back up and how long to retain each backed up item.
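A retention policy can itself be expressed as code. The sketch below
uses an assumed, simple daily-plus-weekly scheme (not an OpenStack API)
to compute which dated backups to prune:

```python
import datetime


def backups_to_prune(backup_dates, keep_daily=7, keep_weekly=4, today=None):
    """Keep every backup from the last `keep_daily` days, plus one backup
    per week for the `keep_weekly` weeks before that; return the dates
    that can be deleted."""
    today = today or datetime.date.today()
    keep = set()
    for d in sorted(backup_dates, reverse=True):  # newest first
        age = (today - d).days
        if age < keep_daily:
            keep.add(d)
        elif age < keep_daily + 7 * keep_weekly:
            week = age // 7
            # Keep only the newest backup in each week-sized bucket.
            if not any((today - k).days // 7 == week for k in keep):
                keep.add(d)
    return sorted(set(backup_dates) - keep)
```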
Phoenix Servers
Phoenix servers
~~~~~~~~~~~~~~~
Application developers and operators who employ
`Phoenix Servers <http://martinfowler.com/bliki/PhoenixServer.html>`_
have built systems that start from a known baseline (sometimes just a specific
version of an operating system) and have built tooling that will automatically
build, install, and configure a system with no manual intervention.
`Phoenix Servers <http://martinfowler.com/bliki/PhoenixServer.html>`_,
named for the mythical bird that is consumed by fire and rises from
the ashes to live again, make it easy to start over with new
instances.
Phoenix Servers, named for the mythological bird that would live its life,
be consumed by fire, then rise from the ashes to live again, make it possible
to easily "start over" with new instances.
Application developers and operators who use phoenix servers have
access to systems that are built from a known baseline, such as a
specific operating system version, and to tooling that automatically
builds, installs, and configures a system.
If your application is automatically deployed on a regular basis,
resolving outages and security updates are not special operations that
require manual intervention. If you suffer an outage, provision more
resources in another region. If you have to patch security holes,
provision more compute nodes that will be built with the
updated/patched software, then terminate vulnerable nodes, with
traffic automatically failing over to the new instances.
If you deploy your application on a regular basis, you can resolve
outages and make security updates without manual intervention. If an
outage occurs, you can provision more resources in another region. If
you must patch security holes, you can provision additional compute
nodes that are built with the updated software. Then, you can
terminate vulnerable nodes and automatically fail over traffic to the
new instances.
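The replace-then-terminate flow can be sketched as a small driver
function. Here, :code:`provision`, :code:`is_healthy`, and
:code:`terminate` are stand-ins for whatever your tooling actually
calls (for example, SDK operations):

```python
def rolling_replace(old_nodes, provision, is_healthy, terminate):
    """Replace each node with a freshly built one, terminating the old
    node only after its replacement passes a health check."""
    new_nodes = []
    for old in old_nodes:
        new = provision()
        if not is_healthy(new):
            terminate(new)  # do not tear down existing capacity on failure
            raise RuntimeError("replacement %s failed health check" % new)
        new_nodes.append(new)
        terminate(old)      # traffic fails over to the new node
    return new_nodes
```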
Security
~~~~~~~~
Security-wise, one thing to keep in mind is that if one instance of an
application is compromised, all instances with the same image and
configuration are likely to suffer the same vulnerability. In this
case, it is safer to rebuild all of your instances (a task made easier
by configuration management - see below).
If one application instance is compromised, all instances with the
same image and configuration will likely suffer the same
vulnerability. The safest path is to use configuration management to
rebuild all instances.
Configuration management
~~~~~~~~~~~~~~~~~~~~~~~~
Tools such as Ansible, Chef, and Puppet allow you to describe exactly
what should be installed on an instance and how it should be
configured. Using these descriptions, the tool implements any changes
required to get to the desired state.
Configuration management tools, such as Ansible, Chef, and Puppet,
enable you to describe exactly what to install and configure on an
instance. Using these descriptions, these tools implement the changes
that are required to get to the desired state.
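At their core, these tools compute the difference between desired and
current state and then apply it. A toy sketch of the "plan" step
follows; this is illustrative only, as real tools also handle
ordering, dependencies, and many more resource types:

```python
def plan_changes(desired, current):
    """Compute the actions needed to converge `current` package state
    to `desired` (both are dicts of package name -> version)."""
    actions = []
    for pkg, version in sorted(desired.items()):
        if pkg not in current:
            actions.append(("install", pkg, version))
        elif current[pkg] != version:
            actions.append(("upgrade", pkg, version))
    # Anything present but not desired gets removed.
    for pkg in sorted(set(current) - set(desired)):
        actions.append(("remove", pkg, None))
    return actions
```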
These tools vastly reduce the amount of effort it takes to work with
large numbers of servers, and also improves the ability to recreate,
update, move, or distribute applications.
These tools vastly reduce the effort it takes to work with large
numbers of servers, and also improve the ability to recreate, update,
move, and distribute applications.
Application deployment
~~~~~~~~~~~~~~~~~~~~~~
Related to configuration management is the question of how you deploy
your application.
How do you deploy your application? For example, do you pull the
latest code from a source control repository? Do you make packaged
releases that update infrequently? Do you perform haphazard tests in a
development environment and deploy only after major changes?
For example, do you:
One of the latest trends in scalable cloud application deployment is
`continuous integration <http://en.wikipedia.org/wiki/Continuous_integration>`_
and `continuous deployment <http://en.wikipedia.org/wiki/Continuous_delivery>`_
(CI/CD).
* pull the latest code from a source control repository?
* make packaged releases that update infrequently?
* big-bang test in a development environment and deploy only after
major changes?
One of the latest trends in deploying scalable cloud applications is
`continuous integration
<http://en.wikipedia.org/wiki/Continuous_integration>`_ / `continuous
deployment <http://en.wikipedia.org/wiki/Continuous_delivery>`_
(CI/CD). Working in a CI/CD fashion means you are always testing your
application and making frequent deployments to production.
CI/CD means that you always test your application and make frequent
deployments to production.
In this tutorial, we have downloaded the latest version of our
application from source and installed it on a standard image. Our
magic install script also updates the standard image to have the
latest dependencies we need to run the application.
magic installation script also updates the standard image to have the
latest dependencies that you need to run the application.
Another approach to this is to create a 'gold' image - one that has your
application and dependencies pre-installed. This means faster boot times and
a higher degree of control over what is on the instance, however a process is
needed to ensure that 'gold' images do not fall behind on security updates.
Another approach is to create a 'gold' image, which pre-installs your
application and its dependencies. A 'gold' image enables faster boot
times and more control over what is on the instance. However, if you
use 'gold' images, you must have a process in place to ensure that
these images do not fall behind on security updates.
Fail fast
~~~~~~~~~

@ -2,13 +2,13 @@
Appendix
========
Bootstrapping your network
~~~~~~~~~~~~~~~~~~~~~~~~~~
Bootstrap your network
~~~~~~~~~~~~~~~~~~~~~~
Most cloud providers will provision all of the required network
objects necessary to boot an instance. An easy way to see if these
have been created for you is to access the Network Topology section of
the OpenStack dashboard.
Most cloud providers provision all network objects that are required
to boot an instance. To determine whether these objects were created
for you, access the Network Topology section of the OpenStack
dashboard.
.. figure:: images/network-topology.png
:width: 920px

@ -5,35 +5,38 @@ Block Storage
.. todo:: (For nick: Restructure the introduction to this chapter to
provide context of what we're actually going to do.)
By default, data in OpenStack instances is stored on 'ephemeral' disks. These
disks stay with the instance throughout its lifetime, but when the instance is
terminated, that storage and all the data stored on it disappears. Ephemeral
storage is allocated to a single instance and cannot be moved to another
instance.
By default, data in OpenStack instances is stored on 'ephemeral'
disks. These disks remain with the instance throughout its lifetime.
When you terminate the instance, that storage and all the data stored
on it disappears. Ephemeral storage is allocated to a single instance
and cannot be moved to another instance.
This section introduces block storage, also known as volume storage, which
provides access to persistent storage devices. You interact with block storage
by attaching volumes to running instances just as you might attach a USB drive
to a physical server. You can detach volumes from one instance and reattach
them to another instance and the data remains intact. The OpenStack Block
Storage (cinder) project implements block storage.
This section introduces block storage, also known as volume storage,
which provides access to persistent storage devices. You interact with
block storage by attaching volumes to running instances just as you
might attach a USB drive to a physical server. You can detach volumes
from one instance and reattach them to another instance and the data
remains intact. The OpenStack Block Storage (cinder) project
implements block storage.
Though you might have configured Object Storage to store images, the Fractal
application needs a database to track the location of and parameters that were
used to create images in Object Storage. This database server cannot fail.
Though you might have configured Object Storage to store images, the
Fractal application needs a database to track the location of, and
parameters that were used to create, images in Object Storage. This
database server cannot fail.
If you are an advanced user, consider how you might remove the database from
the architecture and replace it with Object Storage metadata (then contribute
these steps to :doc:`craziness`). Other users can continue reading to learn
how to work with block storage and move the Fractal application database
server to use it.
If you are an advanced user, think about how you might remove the
database from the architecture and replace it with Object Storage
metadata, and then contribute these steps to :doc:`craziness`.
Otherwise, continue reading to learn how to work with, and move the
Fractal application database server to use, block storage.
Basics
~~~~~~
Later on, we'll use a Block Storage volume to provide persistent storage for
the database server for the Fractal application. But first, learn how to
create and attach a Block Storage device.
Later on, you will use a Block Storage volume to provide persistent
storage for the database server for the Fractal application. But
first, learn how to create and attach a Block Storage device.
.. only:: dotnet
@ -61,7 +64,7 @@ create and attach a Block Storage device.
PHP-OpenCloud SDK.
As always, connect to the API endpoint:
Connect to the API endpoint:
.. only:: libcloud
@ -113,7 +116,7 @@ To try it out, make a 1GB volume called :code:`test`.
.. note:: The parameter :code:`size` is in gigabytes.
List all volumes to see if it was successful:
To see if the volume creation was successful, list all volumes:
.. only:: libcloud
@ -164,9 +167,9 @@ MySQL, port 3306) from the network:
:start-after: step-4
:end-before: step-5
Create a volume object by using the unique identifier (UUID) for the volume.
Then, use the server object from the previous code snippet to attach the
volume to it at :code:`/dev/vdb`:
Create a volume object by using the unique identifier (UUID) for the
volume. Then, use the server object from the previous code snippet to
attach the volume to it at :code:`/dev/vdb`:
.. only:: libcloud
@ -275,8 +278,7 @@ To detach and delete a volume:
volume name and not the volume object?
For information about these and other calls, see
`libcloud documentation
<http://ci.apache.org/projects/libcloud/docs/compute/drivers/openstack.html>`_.
`libcloud documentation <http://ci.apache.org/projects/libcloud/docs/compute/drivers/openstack.html>`_.
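Putting these calls together, the whole volume lifecycle looks roughly
like the sketch below. The method names follow libcloud's OpenStack
compute driver, but treat this as an untested outline and check your
SDK's volume documentation for the exact signatures:

```python
def volume_lifecycle(conn, node, name="test", size_gb=1, device="/dev/vdb"):
    """Create a volume, attach it to `node`, then detach and delete it.
    `conn` is assumed to be a libcloud OpenStack compute connection."""
    volume = conn.create_volume(size_gb, name)   # size is in gigabytes
    conn.attach_volume(node, volume, device)
    # ... use the volume from inside the instance ...
    conn.detach_volume(volume)
    conn.destroy_volume(volume)
    return volume
```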
Work with the OpenStack Database service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -292,12 +294,12 @@ component provides Database as a Service (DBaaS).
SDKs do not generally support the service yet, but you can use the
'trove' command-line client to work with it instead.
Install the trove command-line client by following this guide:
http://docs.openstack.org/cli-reference/content/install_clients.html
To install the 'trove' command-line client, see
`Install the OpenStack command-line clients <http://docs.openstack.org/cli-reference/content/install_clients.html>`_.
Then, set up necessary variables for your cloud in an :file:`openrc.sh` file
by using this guide:
http://docs.openstack.org/cli-reference/content/cli_openrc.html
To set up environment variables for your cloud in an :file:`openrc.sh`
file, see
`Set environment variables using the OpenStack RC file <http://docs.openstack.org/cli-reference/content/cli_openrc.html>`_.
Ensure you have an :file:`openrc.sh` file, source it, and validate that
your trove client works: ::
@ -314,17 +316,17 @@ your trove client works: ::
$ trove --version
1.0.9
For information about supported features and how to work with an existing
database service installation, see these
`slides <http://www.slideshare.net/hastexo/hands-on-trove-database-as-a-service-in-openstack-33588994>`_.
For information about supported features and how to work with an
existing database service installation, see
`Database as a Service in OpenStack <http://www.slideshare.net/hastexo/hands-on-trove-database-as-a-service-in-openstack-33588994>`_.
Next steps
~~~~~~~~~~
You should now be fairly confident working with Block Storage volumes. For
information about other calls, see the volume documentation for your SDK or
try one of these tutorial steps:
You should now be fairly confident working with Block Storage volumes.
For information about other calls, see the volume documentation for
your SDK. Or, try one of these tutorial steps:
* :doc:`/orchestration`: to automatically orchestrate the application
* :doc:`/networking`: to learn about more complex networking
* :doc:`/advice`: for advice for developers new to operations
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.

@ -63,7 +63,7 @@ project = u'FirstApp'
bug_tag = u'firstapp'
copyright = u'2015, OpenStack contributors'
# The version info for the project you're documenting, acts as replacement for
# The version info for the project you are documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
@ -124,23 +124,23 @@ pygments_style = 'sphinx'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [openstackdocstheme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
@ -148,7 +148,7 @@ html_theme_path = [openstackdocstheme.get_html_theme_path()]
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
@ -199,7 +199,7 @@ html_show_sourcelink = False
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

@ -8,13 +8,13 @@ Regions and geographic diversity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note:: For more information about multi-site clouds, see the
`Multi-Site chapter
<http://docs.openstack.org/arch-design/content/multi_site.html>`_
`Multi-Site chapter <http://docs.openstack.org/arch-design/content/multi_site.html>`_
in the Architecture Design Guide.
OpenStack supports 'regions', which are geographically-separated installations
that are connected to a single service catalog. This section explains how to
expand the Fractal application to use multiple regions for high availability.
OpenStack supports 'regions', which are geographically-separated
installations that are connected to a single service catalog. This
section explains how to expand the Fractal application to use multiple
regions for high availability.
.. note:: This section is incomplete. Please help us finish it!
@ -26,8 +26,9 @@ Multiple clouds
<http://docs.openstack.org/arch-design/content/hybrid.html>`_
in the Architecture Design Guide.
You might want to use multiple clouds such as a private cloud inside your
organization and a public cloud. This section attempts to do exactly that.
You might want to use multiple clouds, such as a private cloud inside
your organization and a public cloud. This section attempts to do
exactly that.
.. note:: This section is incomplete. Please help us finish it!
@ -43,31 +44,34 @@ conf.d, etc.d
Use conf.d and etc.d.
In earlier sections, the Fractal application used an installation script into
which the metadata API passed parameters to bootstrap the cluster. `Etcd
<https://github.com/coreos/etcd>`_ is "a distributed, consistent key-value
store for shared configuration and service discovery" that you can use to
store configurations. You can write updated versions of the Fractal worker
component to connect to Etcd or use `Confd
<https://github.com/kelseyhightower/confd>`_ to poll for changes from Etcd and
write changes to a configuration file on the local file system, which the
Fractal worker can use for configuration.
In earlier sections, the Fractal application used an installation
script into which the metadata API passed parameters to bootstrap the
cluster. `Etcd <https://github.com/coreos/etcd>`_ is "a distributed,
consistent key-value store for shared configuration and service
discovery" that you can use to store configurations. You can write
updated versions of the Fractal worker component to connect to Etcd or
use `Confd <https://github.com/kelseyhightower/confd>`_ to poll for
changes from Etcd and write changes to a configuration file on the
local file system, which the Fractal worker can use for configuration.
Using Object Storage instead of a database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use Object Storage instead of a database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We have not quite figured out how to do this yet, but the general steps involve
changing the fractal upload code to store metadata with the object in swift,
then changing the API code such as "list fractals" to query swift to get the
metadata. If you do this, you should be able to stop using a database.
We have not quite figured out how to stop using a database, but the
general steps are:
* Change the Fractal upload code to store metadata with the object in
Object Storage.
* Change the API code, such as "list fractals," to query Object Storage
to get the metadata.
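For the first step, Object Storage lets you attach metadata to an
object through ``X-Object-Meta-*`` headers. A sketch of flattening
fractal parameters into that form follows; the field names are
illustrative, not the app's actual schema:

```python
def fractal_metadata(params):
    """Flatten a fractal's generation parameters into the string
    key/value form that Object Storage metadata headers require."""
    return {"x-object-meta-%s" % key: str(value)
            for key, value in params.items()}
```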
.. note:: This section is incomplete. Please help us finish it!
Next steps
~~~~~~~~~~
Wow! If you have made it through this section, you know more than the authors of
this guide know about working with OpenStack clouds.
Wow! If you have made it through this section, you know more than the
authors of this guide know about working with OpenStack clouds.
Perhaps you can `contribute
<https://wiki.openstack.org/wiki/Documentation/HowTo>`_?
Perhaps you can `contribute <https://wiki.openstack.org/wiki/Documentation/HowTo>`_?

@ -11,11 +11,11 @@ Making it durable
.. todo:: Large object support in Swift
http://docs.openstack.org/developer/swift/overview_large_objects.html
This section introduces object storage. `OpenStack Object Storage
This section introduces object storage. `OpenStack Object Storage
<http://www.openstack.org/software/openstack-storage/>`_ (code-named
swift) is open source software for creating redundant, scalable data
storage using clusters of standardized servers to store petabytes of
accessible data. It is a long-term storage system for large amounts
accessible data. It is a long-term storage system for large amounts
of static data that can be retrieved, leveraged, and updated. Access
is via an API, not through a file-system like more traditional
storage.
@ -28,7 +28,7 @@ API. The Object Storage API is organized around two types of entities:
Similar to the Unix programming model, an object is a "bag of bytes"
that contains data, such as documents and images. Containers are used
to group objects. You can make many objects inside a container, and
to group objects. You can make many objects inside a container, and
have many containers inside your account.
If you think about how you traditionally make what you store durable,
@ -50,16 +50,16 @@ to keep them safe.
Using Object Storage to store fractals
--------------------------------------
The Fractals app currently uses the local filesystem on the instance
The Fractals app currently uses the local file system on the instance
to store the images it generates. This is not scalable or durable, for
a number of reasons.
Because the local filesystem is ephemeral storage, if the instance is
terminated, the fractal images will be lost along with the
instance. Block based storage, which we'll discuss in
:doc:`/block_storage`, avoids that problem, but like local filesystems, it
requires administration to ensure that it does not fill up, and
immediate attention if disks fail.
Because the local file system is ephemeral storage, if the instance is
terminated, the fractal images will be lost along with the instance.
Block based storage, which we will discuss in :doc:`/block_storage`,
avoids that problem, but like local file systems, it requires
administration to ensure that it does not fill up, and immediate
attention if disks fail.
The Object Storage service manages many of these tasks that normally
would require the application owner to manage them, and presents a
@ -96,14 +96,14 @@ First, let's learn how to connect to the Object Storage endpoint:
Libcloud 0.16 and 0.17 are afflicted with a bug that means
authentication to a swift endpoint can fail with `a Python
exception
<https://issues.apache.org/jira/browse/LIBCLOUD-635>`_. If
<https://issues.apache.org/jira/browse/LIBCLOUD-635>`_. If
you encounter this, you can upgrade your libcloud version, or
apply a simple `2-line patch
<https://github.com/fifieldt/libcloud/commit/ec58868c3344a9bfe7a0166fc31c0548ed22ea87>`_.
.. note:: Libcloud uses a different connector for Object Storage
than for other OpenStack services, so a conn object from
previous sections won't work here and we have to create
previous sections will not work here and we have to create
a new one named :code:`swift`.
.. only:: pkgcloud
@ -151,8 +151,8 @@ all containers in your account:
[<Container: name=fractals, provider=OpenStack Swift>]
The next logical step is to upload an object. Find a photo of a goat
online, name it :code:`goat.jpg` and upload it to your container
:code:`fractals`:
online, name it :code:`goat.jpg`, and upload it to your
:code:`fractals` container:
.. only:: libcloud
@ -160,9 +160,9 @@ online, name it :code:`goat.jpg` and upload it to your container
:start-after: step-4
:end-before: step-5
List objects in your container :code:`fractals` to see if the upload
was successful, then download the file to verify the md5sum is the
same:
List objects in your :code:`fractals` container to see if the upload
was successful. Then, download the file to verify that the md5sum is
the same:
.. only:: libcloud
@ -245,14 +245,17 @@ swift container. A simple for loop takes care of that:
.. note:: Replace :code:`IP_API_1` with the IP address of the API instance.
.. note:: The example code uses the awesome `Requests library <http://docs.python-requests.org/en/latest/>`_. Ensure that it is installed on your system before trying to run the script above.
.. note:: The example code uses the awesome
`Requests library <http://docs.python-requests.org/en/latest/>`_.
Before you try to run the previous script, make sure that
it is installed on your system.
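The md5sum comparison mentioned earlier can be done with a short
standard-library helper that reads both files in chunks to bound
memory use:

```python
import hashlib


def md5_matches(uploaded_path, downloaded_path):
    """Return True if the source file and the downloaded copy have the
    same md5 checksum."""
    digests = []
    for path in (uploaded_path, downloaded_path):
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            # Read in 64 KiB chunks rather than loading the whole file.
            for chunk in iter(lambda: f.read(65536), b""):
                md5.update(chunk)
        digests.append(md5.hexdigest())
    return digests[0] == digests[1]
```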
Configure the Fractals app to use Object Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. warning:: Currently it is not possible to directly store generated
images on the OpenStack Object Storage. Please revisit
.. warning:: Currently, you cannot directly store generated
images in OpenStack Object Storage. Please revisit
this section in the future.
Extra features
@ -261,9 +264,9 @@ Extra features
Delete containers
~~~~~~~~~~~~~~~~~
One call we didn't cover above that you probably need to know is how
to delete a container. Ensure that you have removed all objects from
the container before running this, otherwise it will fail:
One call we did not cover, and that you probably need to know, is how
to delete a container. Ensure that you have removed all objects from
the container before running this script. Otherwise, the script fails:
.. only:: libcloud
@ -276,13 +279,13 @@ the container before running this, otherwise it will fail:
Add metadata to objects
~~~~~~~~~~~~~~~~~~~~~~~
You can also do advanced things like uploading an object with metadata, such
as in this below example, but for further information we'll refer you to the
documentation for your SDK. This option also uses a bit stream to upload the
file - iterating bit by bit over the file and passing those bits to swift as
they come, compared to loading the entire file in memory and then sending it.
This is more efficient, especially for larger files.
You can also do advanced things like uploading an object with
metadata, such as in the following example. For more information, see the
documentation for your SDK. This option also uses a bit stream to
upload the file, iterating bit by bit over the file and passing those
bits to Object Storage as they come. Compared to loading the entire
file in memory and then sending it, this method is more efficient,
especially for larger files.
.. only:: libcloud
@ -302,9 +305,9 @@ For efficiency, most Object Storage installations treat large objects
If you are working with large objects, use the
:code:`ex_multipart_upload_object` call instead of the simpler
:code:`upload_object` call. How the upload works behind-the-scenes
is by splitting the large object into chunks, and creating a
special manifest so they can be recombined on download. Alter the
:code:`upload_object` call. Behind the scenes, the call splits the
large object into chunks and creates a special manifest so that
the chunks can be recombined on download. Alter the
:code:`chunk_size` parameter (in bytes) according to what your
cloud can accept.
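The chunk-and-manifest idea is simple to illustrate with plain Python.
This only mimics the behavior locally; :code:`ex_multipart_upload_object`
does the real work against the cluster:

```python
def split_into_chunks(data, chunk_size):
    """Split a byte string into fixed-size chunks and build a manifest
    (an ordered list of chunk names) so the chunks can be recombined
    on download."""
    chunks = {}
    manifest = []
    for i in range(0, len(data), chunk_size):
        name = "segment-%08d" % (i // chunk_size)
        chunks[name] = data[i:i + chunk_size]
        manifest.append(name)
    return chunks, manifest


def recombine(chunks, manifest):
    """Reassemble the original bytes in manifest order."""
    return b"".join(chunks[name] for name in manifest)
```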
@ -316,17 +319,18 @@ For efficiency, most Object Storage installations treat large objects
Next steps
----------
You should now be fairly confident working with Object Storage.
You can find more about the Object Storage SDK calls at:
You should now be fairly confident working with Object Storage. You
can find more information about the Object Storage SDK calls at:
.. only:: libcloud
https://libcloud.readthedocs.org/en/latest/storage/api.html
Or try a different step in the tutorial, including:
Or, try one of these steps in the tutorial:
* :doc:`/block_storage`: to migrate the database to block storage, or use
the database-as-as-service component
* :doc:`/orchestration`: to automatically orchestrate the application
* :doc:`/networking`: to learn about more complex networking
* :doc:`/advice`: for advice for developers new to operations
* :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component.
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)

@ -5,22 +5,20 @@ Getting started
Who should read this guide
~~~~~~~~~~~~~~~~~~~~~~~~~~
This guide is for software developers who want to deploy applications to
OpenStack clouds.
This guide is for experienced software developers who want to deploy
applications to OpenStack clouds.
We assume that you're an experienced programmer who has not created a cloud
application in general or an OpenStack application in particular.
If you're familiar with OpenStack, this section teaches you how to program
with its components.
If you are familiar with OpenStack but have not created a cloud
application in general or an OpenStack application in particular, this
section teaches you how to program with OpenStack components.
What you will learn
~~~~~~~~~~~~~~~~~~~
Deploying applications in a cloud environment can be very different from
deploying them in a traditional IT environment. This guide teaches you how to
deploy applications on OpenStack and some best practices for cloud application
development.
Deploying applications in a cloud environment can be very different
from deploying them in a traditional IT environment. This guide
teaches you how to deploy applications on OpenStack and some best
practices for cloud application development.
A general overview
~~~~~~~~~~~~~~~~~~
@ -54,16 +52,16 @@ and toolkits with the OpenStack cloud:
Language Name Description URL
============== ============= ================================================================= ====================================================
Python Libcloud A Python-based library managed by the Apache Foundation.
This library enables you to work with multiple types of clouds. https://libcloud.apache.org
Python OpenStack SDK A Python-based library specifically developed for OpenStack. http://git.openstack.org/cgit/openstack/python-openstacksdk
Python Shade A Python-based library developed by OpenStack Infra team to http://git.openstack.org/cgit/openstack-infra/shade
This library enables you to work with multiple types of clouds. https://libcloud.apache.org
Python OpenStack SDK A Python-based library specifically developed for OpenStack. http://git.openstack.org/cgit/openstack/python-openstacksdk
Python Shade A Python-based library developed by OpenStack Infra team to http://git.openstack.org/cgit/openstack-infra/shade
operate multiple OpenStack clouds.
Java jClouds A Java-based library. Like Libcloud, it's also managed by the https://jclouds.apache.org
Java jClouds A Java-based library. Like Libcloud, it is also managed by the https://jclouds.apache.org
Apache Foundation and works with multiple types of clouds.
Ruby fog A Ruby-based SDK for multiple clouds. https://github.com/fog/fog/blob/master/lib/fog/openstack/docs/getting_started.md
node.js pkgcloud A Node.js-based SDK for multiple clouds. https://github.com/pkgcloud/pkgcloud
PHP php-opencloud A library for developers using PHP to work with OpenStack clouds. http://php-opencloud.com/
.NET Framework OpenStack SDK A .NET-based library enables you to write C++ or C# code for https://www.nuget.org/packages/openstack.net
Ruby fog A Ruby-based SDK for multiple clouds. https://github.com/fog/fog/blob/master/lib/fog/openstack/docs/getting_started.md
node.js pkgcloud A Node.js-based SDK for multiple clouds. https://github.com/pkgcloud/pkgcloud
PHP php-opencloud A library for developers using PHP to work with OpenStack clouds. http://php-opencloud.com/
.NET Framework OpenStack SDK A .NET-based library enables you to write C++ or C# code for https://www.nuget.org/packages/openstack.net
for Microsoft Microsoft applications.
.NET
============== ============= ================================================================= ====================================================
@ -71,7 +69,7 @@ PHP php-opencloud A library for developers using PHP to work with Ope
For a list of available SDKs, see `Software Development Kits <https://wiki.openstack.org/wiki/SDKs>`_.
Other versions of this guide show you how to use the other SDKs and
languages to complete these tasks. If you're a developer for another toolkit
languages to complete these tasks. If you are a developer for another toolkit
that you would like this guide to include, feel free to submit code snippets.
You can contact `OpenStack Documentation team <https://wiki.openstack.org/Documentation>`_
members for more information.
@ -176,7 +174,7 @@ The actual auth URL is:
http://controller:5000
How you'll interact with OpenStack
How you interact with OpenStack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this tutorial, you interact with your OpenStack cloud through the SDK that
@ -530,7 +528,7 @@ upload one depending on the policy settings of your cloud. For information about
how to upload images, see
`obtaining images <http://docs.openstack.org/image-guide/content/ch_obtaining_images.html>`_.
Set the image and size variables to appropriate values for your cloud. We'll
Set the image and size variables to appropriate values for your cloud. We will
use these variables in later sections.
First, tell the connection to get a specified image by using the ID of the
@ -726,7 +724,7 @@ Next, tell the script which flavor you want to use:
openstack.compute.v2.flavor.Flavor(attrs={u'name': u'm1.small', u'links': [{u'href': u'http://controller:8774/v2/96ff6aa79e60423d9848b70d5475c415/flavors/2', u'rel': u'self'}, {u'href': u'http://controller:8774/96ff6aa79e60423d9848b70d5475c415/flavors/2', u'rel': u'bookmark'}], u'ram': 2048, u'OS-FLV-DISABLED:disabled': False, u'vcpus': 1, u'swap': u'', u'os-flavor-access:is_public': True, u'rxtx_factor': 1.0, u'OS-FLV-EXT-DATA:ephemeral': 0, u'disk': 20, 'id': u'2'}, loaded=True)
Now, you're ready to launch the instance.
Now, you can launch the instance.
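The two choices above boil down to a handful of keyword arguments for the create call. The following openstacksdk-flavored sketch assumes a `conn` connection object from earlier in this section; the image UUID and flavor ID are placeholders that you must replace with values from your cloud, and the `FRACTALS_RUN_CLOUD_DEMO` guard is a hypothetical variable that keeps the sketch inert until you supply them.

```python
import os

# Collect the request parameters for a create_server call.
def server_request(name, image_id, flavor_id):
    return {"name": name, "image_id": image_id, "flavor_id": flavor_id}

request = server_request("testing", "<image-uuid>", "2")

if os.environ.get("FRACTALS_RUN_CLOUD_DEMO"):
    # `conn` is the openstacksdk connection created earlier.
    server = conn.compute.create_server(**request)
    # Block until the instance finishes building.
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)
```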
Launch an instance
~~~~~~~~~~~~~~~~~~
@ -958,8 +956,8 @@ Before you continue, you must do one more thing.
Destroy an instance
~~~~~~~~~~~~~~~~~~~
Cloud resources such as running instances that you no longer use can cost
money. Destroy cloud resources to avoid unexpected expenses.
Cloud resources, such as running instances that you no longer use, can
cost money. To avoid unexpected expenses, destroy cloud resources.
.. only:: fog
@ -1010,15 +1008,15 @@ Deploy the application to a new instance
Now that you know how to create and delete instances, you can deploy the
sample application. The instance that you create for the application is
similar to the first instance that you created, but this time, we'll briefly
similar to the first instance that you created, but this time, we will briefly
introduce a few extra concepts.
.. note:: Internet connectivity from your cloud instance is required
to download the application.
When you create an instance for the application, you'll want to give it a bit
When you create an instance for the application, you want to give it a bit
more information than you supplied to the bare instance that you just created
and deleted. We'll go into more detail in later sections, but for now,
and deleted. We will go into more detail in later sections, but for now,
simply create the following resources so that you can feed them to the
instance:
@ -1027,10 +1025,10 @@ instance:
instance. Typically, your public key is written to :code:`.ssh/id_rsa.pub`. If
you do not have an SSH public key file, follow
`these instructions <https://help.github.com/articles/generating-ssh-keys/>`_ first.
We'll cover these instructions in depth in :doc:`/introduction`.
We will cover these instructions in depth in :doc:`/introduction`.
In the following example, :code:`pub_key_file` should be set to the location
of your public SSH key file.
In the following example, set :code:`pub_key_file` to the location of
your public SSH key file.
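For instance, a Libcloud-style sketch of registering that key might look like this. The key pair name "demokey" is an arbitrary choice, `conn` is assumed to be the connection created earlier, and the `FRACTALS_RUN_CLOUD_DEMO` guard is a stand-in so nothing runs against a cloud by accident.

```python
import os

# Location of the public key, as described above; adjust the path if
# yours lives elsewhere.
pub_key_file = os.path.expanduser("~/.ssh/id_rsa.pub")

if os.environ.get("FRACTALS_RUN_CLOUD_DEMO"):
    # Upload the key only if a key pair with this name does not
    # already exist.
    names = [key.name for key in conn.list_key_pairs()]
    if "demokey" not in names:
        conn.import_key_pair_from_file("demokey", pub_key_file)
```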
.. only:: fog
@ -1074,7 +1072,7 @@ of your public SSH key file.
* Network access. By default, OpenStack filters all traffic. You must create
a security group and apply it to your instance. The security group allows HTTP
and SSH access. We'll go into more detail in :doc:`/introduction`.
and SSH access. We will go into more detail in :doc:`/introduction`.
.. only:: fog
@ -1110,7 +1108,7 @@ of your public SSH key file.
* Userdata. During instance creation, you can provide userdata to OpenStack to
configure instances after they boot. The cloud-init service applies the
user data to an instance. You must pre-install the cloud-init service on your
chosen image. We'll go into more detail in :doc:`/introduction`.
chosen image. We will go into more detail in :doc:`/introduction`.
.. only:: fog
@ -1139,7 +1137,7 @@ of your public SSH key file.
.. only:: openstacksdk
.. note:: User data in openstacksdk must be encoded to base64
.. note:: User data in openstacksdk must be encoded to Base64
.. literalinclude:: ../samples/openstacksdk/getting_started.py
:start-after: step-11
@ -1150,8 +1148,8 @@ Now, you can boot and configure the instance.
Boot and configure an instance
------------------------------
Use the image, flavor, key pair, and userdata to create an instance. After you
request the instance, wait for it to build.
Use the image, flavor, key pair, and userdata to create an instance.
After you request the instance, wait for it to build.
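Put together, the boot step might be sketched as follows with Libcloud. The `conn`, `image`, `size`, and security group variables are assumed to come from earlier steps, the install script URL follows this guide's sample application, and the `FRACTALS_RUN_CLOUD_DEMO` guard keeps the sketch from touching a real cloud until you opt in.

```python
import os

# cloud-init script handed to the instance at boot; it installs the
# guide's sample Fractals application in all-in-one mode.
userdata = """#!/usr/bin/env bash
curl -L -s https://git.openstack.org/cgit/openstack/faafo/plain/contrib/install.sh | bash -s -- \
    -i faafo -i messaging -r api -r worker -r demo
"""

if os.environ.get("FRACTALS_RUN_CLOUD_DEMO"):
    instance = conn.create_node(
        name="all-in-one", image=image, size=size,
        ex_keyname="demokey", ex_userdata=userdata,
        ex_security_groups=[all_in_one_security_group])
    # Wait for the instance to finish building before you use it.
    conn.wait_until_running([instance])
```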
.. only:: fog
@ -1192,15 +1190,16 @@ instance to deploy the Fractals application.
Associate a floating IP for external connectivity
-------------------------------------------------
We'll cover networking in detail in :doc:`/networking`.
We cover networking in detail in :doc:`/networking`.
To see the application running, you must know where to look for it. By
default, your instance has outbound network access. To make your instance
reachable from the Internet, you need an IP address. By default in some cases,
your instance is provisioned with a publicly rout-able IP address. In this
case, you'll see an IP address listed under `public_ips` or `private_ips` when
you list the instances. If not, you must create and attach a floating IP
address to your instance.
default, your instance has outbound network access. To make your
instance reachable from the Internet, you need an IP address. In some
clouds, your instance is provisioned with a publicly routable IP
address by default. In this case, you see an IP address listed
under `public_ips` or `private_ips` when you list the instances. If
not, you must create and attach a floating IP address to your
instance.
.. only:: fog
@ -1209,8 +1208,7 @@ address to your instance.
:start-after: step-13
:end-before: step-14
This will get an ip address that you can assign to your instance
with:
This gets an IP address that you can assign to your instance:
.. literalinclude:: ../samples/fog/getting_started.rb
:language: ruby
@ -1219,7 +1217,7 @@ address to your instance.
.. only:: libcloud
To see if there is a private IP address assigned to your instance:
To see whether a private IP address is assigned to your instance:
.. literalinclude:: ../samples/libcloud/getting_started.py
:start-after: step-13
@ -1239,7 +1237,6 @@ address to your instance.
To create a floating IP address to use with your instance:
Use :code:`ex_list_floating_ip_pools()` and select the first floating IP
address pool. Allocate this pool to your project and use it to get a
floating IP address.
@ -1263,7 +1260,7 @@ address to your instance.
Use :code:`getFloatingIps` to check for unused addresses. Select the first
available address. Otherwise, use :code:`allocateNewFloatingIp` to
allocate a new floating IP to your project from the default address pool.
allocate a floating IP to your project from the default address pool.
.. literalinclude:: ../samples/pkgcloud/getting_started.js
:start-after: step-13
@ -1289,12 +1286,11 @@ address to your instance.
.. only:: openstacksdk
.. note:: For this example we take Floating IP pool from network
which is called 'public'. This should be your external
network.
.. note:: For this example, we take a floating IP pool from the 'public'
network, which is your external network.
List all available Floating IPs for this project and select the first free
one. Allocate new Floating IP if none is available.
List all available floating IPs for this project and select the first free
one. Allocate a new floating IP if none is available.
.. literalinclude:: ../samples/openstacksdk/getting_started.py
:start-after: step-13
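The selection logic itself is simple enough to sketch outside the sample. In the following illustration, `conn` and `instance` are assumed to exist from earlier steps, 'public' is the assumed name of the external network, and the `FRACTALS_RUN_CLOUD_DEMO` guard is a placeholder so the cloud calls stay inert.

```python
import os

# Return the first floating IP that is not attached to a port, or
# None if every address is in use.
def first_free(ips):
    return next((ip for ip in ips if ip.port_id is None), None)

if os.environ.get("FRACTALS_RUN_CLOUD_DEMO"):
    unused = first_free(conn.network.ips())
    if unused is None:
        # No free address: allocate a new one from the external pool.
        public_net = conn.network.find_network("public")
        unused = conn.network.create_ip(floating_network_id=public_net.id)
    conn.compute.add_floating_ip_to_server(
        instance, unused.floating_ip_address)
```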
@ -1362,16 +1358,16 @@ interface at the following link.
Next steps
~~~~~~~~~~
Don't worry if these concepts are not yet completely clear. In
Do not worry if these concepts are not yet completely clear. In
:doc:`/introduction`, we explore these concepts in more detail.
* :doc:`/scaling_out`: Learn how to scale your application
* :doc:`/durability`: Learn how to use Object Storage to make your application durable
* :doc:`/scaling_out`: Learn how to scale your application.
* :doc:`/durability`: Learn how to use Object Storage to make your application durable.
* :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component
* :doc:`/orchestration`: Automatically orchestrate your application
* :doc:`/networking`: Learn about complex networking
* :doc:`/advice`: Get advice about operations
the database-as-a-service component.
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)
.. todo:: List the next sections here or simply reference introduction.


@ -2,9 +2,10 @@
Introduction to the fractals application architecture
=====================================================
This section introduces the application architecture and explains how it was
designed to take advantage of cloud features in general and OpenStack in
particular. It also describes some commands in the previous section.
This section introduces the application architecture and explains how
it was designed to take advantage of cloud features in general and
OpenStack in particular. It also describes some commands in the
previous section.
.. todo:: (for Nick) Improve the architecture discussion.
@ -68,11 +69,10 @@ Fault tolerance
In cloud programming, there is a well-known analogy known as "cattle vs
pets". If you have not heard it before, it goes like this:
When you are dealing with pets, you name them and care for them and if
they get sick, you nurse them back to health. Nursing pets back to
health can be difficult and very time consuming. When you are dealing
with cattle, you attach a numbered tag to their ear and if they get
sick you put them down and move on.
When you deal with pets, you name and care for them. If they get sick,
you nurse them back to health, which can be difficult and very time
consuming. When you deal with cattle, you attach a numbered tag to
their ear. If they get sick, you put them down and move on.
That, as it happens, is the new reality of programming. Applications
and systems used to be created on large, expensive servers, cared for
@ -82,24 +82,23 @@ whatever it took to make it right again and save the server and the
application.
In cloud programming, it is very different. Rather than large,
expensive servers, you are dealing with virtual machines that are
literally disposable; if something goes wrong, you shut it down and
spin up a new one. There is still operations staff, but rather than
nursing individual servers back to health, their job is to monitor the
health of the overall system.
expensive servers, you have virtual machines that are disposable; if
something goes wrong, you shut the server down and spin up a new one.
There is still operations staff, but rather than nursing individual
servers back to health, their job is to monitor the health of the
overall system.
There are definite advantages to this architecture. It is easy to get a
"new" server, without any of the issues that inevitably arise when a
server has been up and running for months, or even years.
As with classical infrastructure, failures of the underpinning cloud
infrastructure (hardware, networks, and software) are
unavoidable. When you are designing for the cloud, it is crucial that
your application is designed for an environment where failures can
happen at any moment. This may sound like a liability, but it is not;
by designing your application with a high degree of fault tolerance,
you are also making it resilient in the face of change, and therefore
more adaptable.
infrastructure (hardware, networks, and software) are unavoidable.
When you design for the cloud, it is crucial that your application is
designed for an environment where failures can happen at any moment.
This may sound like a liability, but it is not; by designing your
application with a high degree of fault tolerance, you also make it
resilient, and more adaptable, in the face of change.
Fault tolerance is essential to the cloud-based application.
@ -424,7 +423,7 @@ To see which security groups apply to an instance, you can:
.. todo:: print() ?
Once you have configured permissions, you will need to know where to
Once you have configured permissions, you must know where to
access the application.
Introduction to Floating IPs
@ -611,9 +610,8 @@ Parameter Description Values
:start-after: step-11
:end-before: step-12
Note that this time, when you create a security group, you are
including a rule that only applies for instances that are part of the
worker_group.
Note that this time, when you create a security group, you include a
rule that applies only to instances that are part of the worker group.
Next, start a second instance, which will be the worker instance:
@ -647,11 +645,11 @@ Next, start a second instance, which will be the worker instance:
Notice that you have added this instance to the worker_group, so it can
access the controller.
As you can see from the parameters passed to the installation script, you are
specifying that this is the worker instance, but you are also passing the
address of the API instance and the message queue so the worker can pick up
requests. The Fractals application installation script can take several
parameters.
As you can see from the parameters passed to the installation script,
you define this instance as the worker instance. But, you also pass
the address of the API instance and the message queue so the worker
can pick up requests. The Fractals application installation script
accepts several parameters.
========== ==================================================== ====================================
Parameter Description Example
@ -708,7 +706,7 @@ Now you can SSH into the instance:
worker instance and USERNAME to the appropriate user name.
Once you have logged in, check to see whether the worker service process
is running as expected. You can find the logs of the worker service
is running as expected. You can find the logs of the worker service
in the directory :code:`/var/log/supervisor/`.
::
@ -781,9 +779,9 @@ with :code:`faafo get --help`, :code:`faafo list --help`, and
:code:`faafo delete --help`.
.. note:: The application stores the generated fractal images directly
in the database used by the API service instance. Storing
in the database used by the API service instance. Storing
image files in a database is not good practice. We are doing it
here as an example only as an easy way to allow multiple
here as an example only as an easy way to enable multiple
instances to have access to the data. For best practice, we
recommend storing objects in Object Storage, which is
covered in :doc:`durability`.
@ -799,27 +797,26 @@ instances to run it. These are the basic steps for requesting and
using compute resources in order to run your application on an
OpenStack cloud.
From here, you should go to :doc:`/scaling_out` to learn how to scale your
application further. Alternatively, you may jump to any of these
sections:
From here, go to :doc:`/scaling_out` to learn how to further scale
your application. Or, try one of these steps in the tutorial:
* :doc:`/durability`: Learn how to use Object Storage to make your application more durable
* :doc:`/durability`: Learn how to use Object Storage to make your application more durable.
* :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component
* :doc:`/orchestration`: Automatically orchestrate the application
* :doc:`/networking`: Learn about more complex networking
* :doc:`/advice`: Get advice about operations
the database-as-a-service component.
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)
Complete code sample
~~~~~~~~~~~~~~~~~~~~
The following file contains all of the code from this section of the tutorial.
This comprehensive code sample lets you view and run the code as a single script.
The following file contains all of the code from this section of the
tutorial. This comprehensive code sample lets you view and run the
code as a single script.
Before you run this script, confirm that you have set your authentication
information, the flavor ID, and image ID.
Before you run this script, confirm that you have set your
authentication information, the flavor ID, and image ID.
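A quick sanity check along these lines can save a confusing failure later. The snippet below only inspects the environment variables that an openrc file normally exports (at minimum; your cloud may require others, such as a project name).

```python
import os

# Report which of the core openrc variables are missing from `env`.
def missing_auth(env):
    required = ("OS_AUTH_URL", "OS_USERNAME", "OS_PASSWORD")
    return [name for name in required if name not in env]

missing = missing_auth(os.environ)
if missing:
    print("Set these variables before running the script:",
          ", ".join(missing))
```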
.. only:: shade


@ -12,7 +12,7 @@ This chapter introduces the Networking API. This will enable us to build
networking topologies that separate public traffic accessing the application
from traffic between the API and the worker components. We also introduce
load balancing for resilience, and create a secure back-end network for
communication between the database, webserver, file storage, and worker
communication between the database, web server, file storage, and worker
components.
.. warning:: This section assumes that your cloud provider has implemented the
@ -90,10 +90,10 @@ neutron client works: ::
Networking segmentation
~~~~~~~~~~~~~~~~~~~~~~~
In traditional data centers, network segments are dedicated to specific types
of network traffic.
In traditional data centers, network segments are dedicated to
specific types of network traffic.
The fractal application we are building contains three types of
The fractal application we are building contains these types of
network traffic:
* public-facing web traffic
@ -135,18 +135,18 @@ that was previously created by your cloud provider or by yourself, following
the instructions in the appendix.
Many of the network concepts that are discussed in this section are
already present in the diagram above. A tenant router provides
routing and external access for the worker nodes, and floating IP addresses
are associated with each node in the Fractal application cluster
to facilitate external access.
already present in the diagram above. A tenant router provides routing
and external access for the worker nodes, and floating IP addresses
are associated with each node in the Fractal application cluster to
facilitate external access.
At the end of this section, we will be making some slight changes to
the networking topology by using the OpenStack Networking API to
create a network to which the worker nodes will attach
(10.0.1.0/24). We will use the API network (10.0.3.0/24) to attach the
Fractal API servers. Webserver instances have their own network
(10.0.2.0/24) and will be accessible by fractal aficionados
worldwide, by allocating floating IPs from the public network.
At the end of this section, you make some slight changes to the
networking topology by using the OpenStack Networking API to create
the 10.0.1.0/24 network to which the worker nodes attach. You use the
10.0.3.0/24 API network to attach the Fractal API servers. Web server
instances have their own 10.0.2.0/24 network, which is accessible by
fractal aficionados worldwide, by allocating floating IPs from the
public network.
.. nwdiag::
@ -196,16 +196,17 @@ worker back end is as follows:
Create networks
~~~~~~~~~~~~~~~
Most cloud providers will make a public network accessible to you.
We will attach a router to this public network to grant Internet access
to our instances. After also attaching this router to our internal networks, we
will allocate floating IPs from the public network for instances which need to
be accessed from the Internet.
Most cloud providers make a public network accessible to you. We will
attach a router to this public network to grant Internet access to our
instances. After also attaching this router to our internal networks,
we will allocate floating IPs from the public network for instances
that must be reachable from the Internet.
Let's just confirm that we have a public network by listing the networks our
tenant has access to. The public network doesn't have to be named public -
it could be 'external', 'net04_ext' or something else - the important thing
is it exists and can be used to reach the Internet.
Let's just confirm that we have a public network by listing the
networks our tenant has access to. The public network does not have to
be named public - it could be 'external', 'net04_ext' or something
else - the important thing is it exists and can be used to reach the
Internet.
::
@ -371,17 +372,18 @@ the name of the public/external network offered by your cloud provider.
| tenant_id | 0cb06b70ef67424b8add447415449722 |
+---------------------+--------------------------------------+
.. note:: The world is running out of IPv4 addresses. If you get an error like
"No more IP addresses available on network", contact your cloud
administrator. You may also want to ask about IPv6 :)
.. note:: The world is running out of IPv4 addresses. If you get the
"No more IP addresses available on network" error,
contact your cloud administrator. You may also want to ask
about IPv6 :)
Connecting to the Internet
~~~~~~~~~~~~~~~~~~~~~~~~~~
Most instances will need access to the Internet. The instances in our Fractals
App are no exception! We'll add routers to pass traffic between the various
networks we are using.
Most instances require access to the Internet. The instances in your
Fractals app are no exception! Add routers to pass traffic between the
various networks that you use.
::
@ -399,8 +401,8 @@ networks we are using.
| tenant_id | f77bf3369741408e89d8f6fe090d29d2 |
+-----------------------+--------------------------------------+
We tell OpenStack which network should be used for Internet access by
specifying an external gateway for our router.
Specify an external gateway for your router to tell OpenStack which
network to use for Internet access.
::
@ -422,7 +424,7 @@ specifying an external gateway for our router.
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Now, attach our router to the worker, api, and webserver subnets.
Now, attach your router to the worker, API, and web server subnets.
::
@ -483,8 +485,8 @@ already.
Load balancing
~~~~~~~~~~~~~~
After separating the Fractal worker nodes into their own network, the
next logical step is to move the Fractal API service onto a load
After separating the Fractal worker nodes into their own networks, the
next logical step is to move the Fractal API service to a load
balancer, so that multiple API workers can handle requests. By using a
load balancer, the API service can be scaled out in a similar fashion
to the worker nodes.
@ -500,11 +502,11 @@ Neutron LbaaS API
this section needs rewriting to use the libcloud API
The OpenStack Networking API provides support for creating
loadbalancers, which can be used to scale the Fractal app web
service. In the following example, we create two compute instances via
the Compute API, then instantiate a load balancer that will use a
virtual IP (VIP) for accessing the web service offered by the two
compute nodes. The end result will be the following network topology:
load balancers, which can be used to scale the Fractal app web service.
In the following example, we create two compute instances via the
Compute API, then instantiate a load balancer that will use a virtual
IP (VIP) for accessing the web service offered by the two compute
nodes. The end result will be the following network topology:
.. nwdiag::
@ -539,7 +541,7 @@ Let's start by looking at what's already in place.
| 7ad1ce2b-4b8c-4036-a77b-90332d7f4dbe | public | 47fd3ff1-ead6-4d23-9ce6-2e66a3dae425 203.0.113.0/24 |
+--------------------------------------+-------------------+-----------------------------------------------------+
Now let's go ahead and create 2 instances.
Go ahead and create two instances.
::
@ -587,7 +589,7 @@ Confirm that they were added:
| 8fadf892-b6e9-44f4-b132-47c6762ffa2c | test-2 | ACTIVE | - | Running | private=10.0.2.3 |
+--------------------------------------+--------+--------+------------+-------------+------------------+
Now let's look at what ports are available:
Look at which ports are available:
::
@ -601,8 +603,8 @@ Now let's look at what ports are available:
| 7451d01f-bc3b-46a6-9ae3-af260d678a63 | | fa:16:3e:c6:d4:9c | {"subnet_id": "3eada497-36dd-485b-9ba4-90c5e3340a53", "ip_address": "10.0.2.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
Next create additional floating IPs by specifying the fixed IP
addresses they should point to and the ports they should use:
Next, create additional floating IPs. Specify the fixed IP addresses
they should point to and the ports that they should use:
::
@ -635,8 +637,8 @@ addresses they should point to and the ports they should use:
| tenant_id | 0cb06b70ef67424b8add447415449722 |
+---------------------+--------------------------------------+
All right, now you're ready to go ahead and create members for the
load balancer pool, referencing the floating IPs:
You are ready to create members for the load balancer pool, which
reference the floating IPs:
::
@ -738,7 +740,7 @@ the various members of the pool:
| tenant_id | 0cb06b70ef67424b8add447415449722 |
+---------------------+--------------------------------------+
And confirm it's in place:
And confirm it is in place:
::
@ -793,11 +795,9 @@ topology now reflects the modular nature of the application itself.
Next steps
~~~~~~~~~~
You should now be fairly confident working with the Network API. There
are several calls we did not cover. To see these and more, refer to
the volume documentation of your SDK, or try a different step in the
tutorial, including:
You should now be fairly confident working with the Network API. To
see calls that we did not cover, see the volume documentation of your
SDK, or try one of these tutorial steps:
* :doc:`/advice`: for advice for developers new to operations
* :doc:`/craziness`: to see all the crazy things we think ordinary
folks won't want to do ;)
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)


@ -2,18 +2,20 @@
Orchestration
=============
This guide describes the importance of durability and scalability for your
cloud-based applications. In most cases, you must automate tasks, such as
scaling and other operational tasks, to achieve these goals.
This chapter explains the importance of durability and scalability for
your cloud-based applications. In most cases, achieving these
qualities means automating tasks such as scaling and other
operational work.
The Orchestration module provides a template-based way to describe a cloud
application, then coordinates running the needed OpenStack API calls to run
cloud applications. The templates enable you to create most OpenStack resource
types, such as instances, networking information, volumes, security groups,
and even users. It also provides more advanced functionality, such as
instance high availability, instance auto-scaling, and nested stacks.
The Orchestration module provides a template-based way to describe a
cloud application, then coordinates running the OpenStack API calls
that are needed to deploy that application. The templates enable you to create
most OpenStack resource types, such as instances, networking
information, volumes, security groups, and even users. It also provides
more advanced functionality, such as instance high availability,
instance auto-scaling, and nested stacks.
The OpenStack Orchestration API contains these constructs:
* Stacks
* Resources
This section introduces the
`HOT templating language <http://docs.openstack.org/developer/heat/template_guide/hot_guide.html>`_,
and takes you through some common OpenStack Orchestration calls.
In previous sections of this guide, you used your SDK to
programmatically interact with OpenStack. In this section, you work
from the command line to use the Orchestration API directly through
template files.
Install the 'heat' command-line client by following this guide:
http://docs.openstack.org/cli-reference/content/install_clients.html
Then, use this guide to set up the necessary variables for your cloud in an 'openrc' file:
http://docs.openstack.org/cli-reference/content/cli_openrc.html
.. note:: PHP-opencloud supports OpenStack Orchestration, but this section is not written yet.
HOT templating language
-----------------------
To learn about the template syntax for OpenStack Orchestration, how to
create basic templates, and their inputs and outputs, see
`Heat Orchestration Template (HOT) Guide <http://docs.openstack.org/developer/heat/template_guide/hot_guide.html>`_.
Work with stacks: Basics
------------------------
Continue to the next section to learn more.
Work with stacks: Advanced
--------------------------
.. todo:: needs more explanatory material
Next steps
----------
You should now be fairly confident working with the Orchestration
service. To see the calls that we did not cover and more, see the
volume documentation of your SDK. Or, try one of these steps in the
tutorial:
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)
Scaling out
===========
the fractals app that simply returns the CPU load on the
local server. Then add to this section a simple loop that
checks to see if any servers are overloaded and adds a new
one if they are. (Or do this through SSH and w)
An often-cited reason for designing applications by using cloud
patterns is the ability to **scale out**. That is: to add additional
resources, as required. Contrast this strategy to the previous one of
increasing capacity by scaling up the size of existing resources. To
scale out, you must:
* Architect your application to make use of additional resources.
* Make it possible to add new resources to your application.
.. todo:: nickchase needs to restate the second point
The :doc:`/introduction` section describes how to build in a modular
fashion, create an API, and other aspects of the application
architecture. Now you will see why those strategies are so important.
By creating a modular application with decoupled services, you can
identify components that cause application performance bottlenecks and
scale them out. Just as importantly, you can also remove resources
when they are no longer necessary. It is very difficult to overstate
the cost savings that this feature can bring, as compared to
traditional infrastructure.
Of course, having access to additional resources is only part of the
game plan; while you can manually add or delete resources, you get
more value and more responsiveness if the application automatically
requests additional resources when it needs them.
This section continues to illustrate the separation of services onto
multiple instances and highlights some of the choices that we have
made that facilitate scalability in the application architecture.
You will progressively ramp up to use up to six instances, so make sure
that your cloud account has the appropriate quota.
The previous section uses two virtual machines - one 'control' service
and one 'worker'. The speed at which your application can generate
fractals depends on the number of workers. With just one worker, you
can produce only one fractal at a time. Before long, you will need more
resources.
.. note:: If you do not have a working application, follow the steps in
:doc:`introduction` to create one.
Generate load
~~~~~~~~~~~~~
To test what happens when the Fractals application is under load, you
can:
* Load the worker: Create a lot of tasks to max out the CPU of existing
worker instances
* Load the API: Create a lot of API service requests
Create a greater number of tasks
--------------------------------
Use SSH with the existing SSH keypair to log in to the
:code:`app-controller` controller instance.
::
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_CONTROLLER
.. note:: Replace :code:`IP_CONTROLLER` with the IP address of the
controller instance and USERNAME with the appropriate
user name.
Call the :code:`faafo` command-line interface to request the
generation of five large fractals.
::
$ faafo create --height 9999 --width 9999 --tasks 5
If you check the load on the worker, you can see that the instance is
not doing well. On the single CPU flavor instance, a load average
greater than 1 means that the server is at capacity.
::
@ -98,29 +94,28 @@ of more than 1 means we are at capacity.
10:37:39 up 1:44, 2 users, load average: 1.24, 1.40, 1.36
.. note:: Replace :code:`IP_WORKER` with the IP address of the worker
instance and USERNAME with the appropriate user name.
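This load-average check is easy to script. Here is a minimal sketch,
assuming a single-CPU flavor as used in this tutorial; the threshold
of 1.0 mirrors the rule of thumb above, and the function and parameter
names are illustrative, not part of the Fractals application:

```python
import os

def at_capacity(load_1min=None, cpu_count=1, threshold=1.0):
    """Return True when the one-minute load average saturates the CPUs."""
    if load_1min is None:
        load_1min = os.getloadavg()[0]  # current one-minute load average
    return load_1min / cpu_count >= threshold
```

With the sample output above, ``at_capacity(1.24)`` returns True,
which is the signal that another worker is needed.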
Create more API service requests
--------------------------------
API load is a slightly different problem from the previous
worker-capacity one. You can simulate many requests to the API, as follows:
Use SSH with the existing SSH keypair to log in to the
:code:`app-controller` controller instance.
::
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_CONTROLLER
.. note:: Replace :code:`IP_CONTROLLER` with the IP address of the
controller instance and USERNAME with the appropriate
user name.
Use a for loop to call the :code:`faafo` command-line interface to
request a random set of fractals 500 times:
::
.. note:: Replace :code:`IP_CONTROLLER` with the IP address of the
controller instance.
If you check the load on the :code:`app-controller` API service
instance, you see that the instance is not doing well. On your single
CPU flavor instance, a load average greater than 1 means that the server is
at capacity.
::
$ uptime
10:37:39 up 1:44, 2 users, load average: 1.24, 1.40, 1.36
The sheer number of requests means that some requests for fractals
might not make it to the message queue for processing. To ensure that
you can cope with demand, you must also scale out the API capability
of the Fractals application.
Scaling out
~~~~~~~~~~~
Remove the existing app
-----------------------
Go ahead and delete the existing instances and security groups that
you created in previous sections. Remember, when instances in the
cloud are no longer working, remove them and create new ones.
.. only:: shade
Extra security groups
---------------------
As you change the topology of your applications, you must update or
create security groups. Here, you re-create the required security
groups.
.. only:: shade
@ -199,12 +193,12 @@ required security groups.
:start-after: step-2
:end-before: step-3
A floating IP helper function
-----------------------------
Define a short function that locates an unused floating IP or
allocates a new one. This saves a few lines of code and prevents you
from reaching your floating IP quota too quickly.
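The shape of such a helper can be sketched against a hypothetical
connection object; the method and attribute names below are
illustrative, not any specific SDK's API:

```python
def get_floating_ip(conn):
    """Return an unattached floating IP; allocate a new one only as a last resort."""
    for ip in conn.list_floating_ips():
        if ip.attached_to is None:       # free: reuse instead of allocating
            return ip
    return conn.allocate_floating_ip()   # nothing free: this consumes quota
```

Reusing free addresses first is what keeps you under quota; a new
address is allocated only when every existing floating IP is already
attached.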
.. only:: shade
@ -224,14 +218,13 @@ reaching your Floating IP quota too quickly.
:start-after: step-3
:end-before: step-4
Split the database and message queue
------------------------------------
Before you scale out your application services, such as the API service
or the workers, you must add a central database and messaging instance,
called :code:`app-services`. The database and message queue track the
state of the fractals and coordinate communication between the services.
.. only:: shade
@ -251,15 +244,15 @@ between the services.
:start-after: step-4
:end-before: step-5
Scale the API service
---------------------
With multiple workers producing fractals as fast as they can, the
system must be able to receive the requests for fractals as quickly as
possible. If your application becomes popular, many thousands of users
might connect to your API to generate fractals.
Armed with a security group, image, and flavor size, you can add
multiple API services:
.. only:: shade
@ -280,27 +273,27 @@ multiple API services:
:start-after: step-5
:end-before: step-6
These services are client-facing, so unlike the workers they do not
use a message queue to distribute tasks. Instead, you must introduce
some kind of load balancing mechanism to share incoming requests
between the different API services.
A simple solution is to give half of your friends one address and half
the other, but that solution is not sustainable. Instead, you can use
a `DNS round robin <http://en.wikipedia.org/wiki/Round-robin_DNS>`_
to do that automatically. However, OpenStack networking can provide
Load Balancing as a Service, which :doc:`/networking` explains.
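The round-robin idea itself is tiny; a sketch in Python, where the
endpoint addresses are placeholders for your two API instances:

```python
import itertools

# Placeholder addresses for the two API instances.
API_ENDPOINTS = ["http://IP_API_1", "http://IP_API_2"]
_rotation = itertools.cycle(API_ENDPOINTS)

def next_endpoint():
    """Hand out API addresses in rotation, spreading requests evenly."""
    return next(_rotation)
```

DNS round robin performs the same rotation at name-resolution time; a
load balancer additionally stops sending traffic to failed instances.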
.. todo:: Add a note that we demonstrate this by using the first API
instance for the workers and the second API instance for the
load simulation.
Scale the workers
-----------------
To increase the overall capacity, add three workers:
.. only:: shade
@ -320,43 +313,45 @@ To increase the overall capacity, we will now add 3 workers:
:start-after: step-6
:end-before: step-7
Adding this capacity enables you to deal with a higher number of
requests for fractals. As soon as these worker instances start, they
begin checking the message queue for requests, reducing the overall
backlog like a new register opening in the supermarket.
This process was obviously a very manual one. Figuring out that we
needed more workers and then starting new ones required some effort.
Ideally the system would do this itself. If you build your application
to detect these situations, you can have it automatically request and
remove resources, which saves you the effort of doing this work
yourself. Instead, the OpenStack Orchestration service can monitor
load and start instances, as appropriate. To find out how to set that
up, see :doc:`orchestration`.
Verify that we have had an impact
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the previous steps, you split out several services and expanded
capacity. To see the new features of the Fractals application, SSH to
one of the app instances and create a few fractals.
::
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_API_1
.. note:: Replace :code:`IP_API_1` with the IP address of the first
API instance and USERNAME with the appropriate user name.
Use the :code:`faafo create` command to generate fractals.
Use the :code:`faafo list` command to watch the progress of fractal
generation.
Use the :code:`faafo UUID` command to examine some of the fractals.
The `generated_by` field shows the worker that created the fractal.
Because multiple worker instances share the work, fractals are
generated more quickly and users might not even notice when a worker
fails.
::
| generated_by | app-worker-1 |
+--------------+------------------------------------------------------------------+
The fractals are now available from any of the app-api hosts. To
verify, visit http://IP_API_1/fractal/FRACTAL_UUID and
http://IP_API_2/fractal/FRACTAL_UUID. You now have multiple redundant
web services. If one fails, you can use the others.
.. note:: Replace :code:`IP_API_1` and :code:`IP_API_2` with the
corresponding floating IPs. Replace FRACTAL_UUID with the UUID
of an existing fractal.
Go ahead and test the fault tolerance. Start deleting workers and API
instances. As long as you have one of each, your application is fine.
However, be aware of one weak point. The database contains the
fractals and fractal metadata. If you lose that instance, the
application stops. Future sections will explain how to address this
weak point.
If you had a load balancer, you could distribute this load between the
two different API services. You have several options. The
:doc:`networking` section shows you one option.
In theory, you could use a simple script to monitor the load on your
workers and API services and trigger the creation of instances, which
you already know how to do. Congratulations! You are ready to create
scalable cloud applications.
Of course, creating a monitoring system for a single application might
not make sense. To learn how to use the OpenStack Orchestration
monitoring and auto-scaling capabilities to automate these steps, see
:doc:`orchestration`.
Next steps
~~~~~~~~~~
You should be fairly confident about starting instances and
distributing services from an application among these instances.
As mentioned in :doc:`/introduction`, the generated fractal images are
saved on the local file system of the API service instances. Because
you have multiple API instances up and running, the fractal images are
spread across multiple API services, which causes a number of
:code:`IOError: [Errno 2] No such file or directory` exceptions when
trying to download a fractal image from an API service instance that
does not have the fractal image on its local file system.
Go to :doc:`/durability` to learn how to use Object Storage to solve
this problem in an elegant way. Or, you can proceed to one of these
sections:
* :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component.
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)
Complete code sample
~~~~~~~~~~~~~~~~~~~~
This file contains all the code from this tutorial section. This
comprehensive code sample lets you view and run the code as a single
script.
Before you run this script, confirm that you have set your authentication
information, the flavor ID, and image ID.
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb