Remove firstapp guides

These guides are dead; remove them completely. This will also remove
the documents from the developer website, so remove all links.

Change-Id: Ifd9a4d8a187962498d6f1133226a77308bc2f5ff
Author: Andreas Jaeger, 2019-07-16 16:43:33 +02:00 (committed by Andreas Jaeger)
parent 7d46f8b42d
commit d2e1f23d91
65 changed files with 4 additions and 29141 deletions

.gitignore

@@ -7,8 +7,6 @@ target/
 /build-*.log.gz
 /generated
 /api-quick-start/build/
-/firstapp/build*/
-swagger/

 # Packages
 *.egg


@@ -22,7 +22,6 @@ which includes these pages:
 In addition to these documents, this repository contains:
 * Landing page for developer.openstack.org: ``www``
-* Writing your first OpenStack application tutorial (in progress): ``firstapp``
 To complete code reviews in this repository, use the standard
 OpenStack Gerrit `workflow <https://review.opendev.org>`_.
@@ -49,28 +48,6 @@ To build an individual document, such as the API Guide::
 The locally-built output files are found in a ``publish-docs`` directory.
-"Writing your First OpenStack Application" tutorial
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To build the "Writing your first OpenStack application" tutorial, you must
-install `Graphviz <http://www.graphviz.org/>`_.
-
-To install Graphviz on Ubuntu 12.04 or later, or Debian 7 ("wheezy") or later::
-
-    apt-get install graphviz
-
-On Fedora 22 and later::
-
-    dnf install graphviz
-
-On openSUSE::
-
-    zypper install graphviz
-
-On Mac OS X with Homebrew installed::
-
-    brew install graphviz
-
 Build and update API docs
 =========================


@@ -4,12 +4,12 @@ declare -A DIRECTORIES=(
 # books to be built
 declare -A BOOKS=(
-["de"]="api-quick-start firstapp"
+["de"]="api-quick-start"
 ["eo"]="api-quick-start"
-["id"]="api-quick-start firstapp"
+["id"]="api-quick-start"
 ["ja"]="api-quick-start"
 ["ko_KR"]="api-quick-start"
-["tr_TR"]="api-quick-start firstapp"
+["tr_TR"]="api-quick-start"
 ["zh_CN"]="api-quick-start"
 )
@@ -22,7 +22,6 @@ DOC_DIR="./"
 declare -A SPECIAL_BOOKS
 SPECIAL_BOOKS=(
 ["api-quick-start"]="RST"
-["firstapp"]="RST"
 # These are translated in openstack-manuals
 ["common"]="skip"
 # Obsolete
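For context, the ``BOOKS`` table removed above is a plain bash associative array mapping a language code to a whitespace-separated list of books. A minimal sketch of how such a map is typically consumed (the loop body and echo text are hypothetical, not the actual tool script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: iterate a per-language book map like the one above.
declare -A BOOKS=(
    ["de"]="api-quick-start"
    ["id"]="api-quick-start"
)

# "${!BOOKS[@]}" expands to the keys; word-splitting the unquoted
# value yields one book name per inner iteration.
for lang in "${!BOOKS[@]}"; do
    for book in ${BOOKS[$lang]}; do
        echo "building ${book} for ${lang}"
    done
done
```

Removing ``firstapp`` from each value is all that is needed to stop building it, which is why the diff touches only the array literals.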

@@ -1,47 +0,0 @@
========================================
Writing Your First OpenStack Application
========================================
This directory contains the "Writing Your First OpenStack Application"
tutorial.
The tutorials work with an application that can be found in the
`openstack/faafo <https://opendev.org/openstack/faafo>`_
repository.
Prerequisites
-------------
To build the documentation, you must install the Graphviz package.
/source
~~~~~~~
The :code:`/source` directory contains the tutorial documentation as
`reStructuredText <http://docutils.sourceforge.net/rst.html>`_ (RST).
To build the documentation, you must install `Sphinx <http://sphinx-doc.org/>`_ and the
`OpenStack docs.openstack.org Sphinx theme (openstackdocstheme) <https://pypi.org/project/openstackdocstheme/>`_. When
you invoke tox, these dependencies are automatically pulled in from the
top-level :code:`test-requirements.txt`.
You must also install `Graphviz <http://www.graphviz.org/>`_ on your build system.
The RST source includes conditional output logic. The following command
invokes :code:`sphinx-build` with :code:`-t libcloud`::
tox -e firstapp-libcloud
Only the sections marked :code:`.. only:: libcloud` in the RST are built.
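As an illustration, a conditional section in the tutorial source would look something like this minimal RST sketch (the body text is hypothetical; the ``libcloud`` tag comes from the command above, while ``fog`` is an assumed second tag):

```rst
.. only:: libcloud

   Create the instance with the Python libcloud SDK.

.. only:: fog

   Create the instance with the Ruby fog SDK.
```

Building with ``-t libcloud`` emits only the first block; the ``fog`` block is dropped from the output.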
/samples
~~~~~~~~
The code samples in this guide are located in this directory. The code samples
for each SDK are located in separate subdirectories.
/build-libcloud
~~~~~~~~~~~~~~~
The HTML documentation is built in this directory. The project's
:code:`.gitignore` file excludes this directory from version control.

@@ -1,112 +0,0 @@
using System;
using System.Collections.Generic;
using net.openstack.Core.Domain;
using net.openstack.Core.Providers;
using net.openstack.Providers.Rackspace;
namespace openstack
{
class MainClass
{
public static void Main (string[] args)
{
// step-1
var username = "your_auth_username";
var password = "your_auth_password";
var project_name = "your_project_name";
var project_id = "your_project_id";
var auth_url = "http://controller:5000/v2.0";
var region = "your_region_name";
var networkid = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
var identity = new CloudIdentityWithProject () {
Username = username,
Password = password,
ProjectId = new ProjectId(project_id),
ProjectName = project_name
};
var identityProvider = new OpenStackIdentityProvider (
new Uri (auth_url));
var conn = new CloudServersProvider (identity, identityProvider);
// step-2
var images = conn.ListImages (region: region);
foreach (var image in images) {
Console.WriteLine (string.Format(
"Image Id: {0} - Image Name: {1}",
image.Id,
image.Name));
}
// step-3
var flavors = conn.ListFlavors (region: region);
foreach (var flavor in flavors) {
Console.WriteLine (string.Format(
"Flavor Id: {0} - Flavor Name: {1}",
flavor.Id,
flavor.Name));
}
// step-4
var image_id = "97f55846-6ea5-4e9d-b437-bda97586bd0c";
var _image = conn.GetImage(image_id, region:region);
Console.WriteLine (string.Format(
"Image Id: {0} - Image Name: {1}",
_image.Id,
_image.Name));
// step-5
var flavor_id = "2";
var _flavor = conn.GetFlavor (flavor_id, region: region);
Console.WriteLine (string.Format(
"Flavor Id: {0} - Flavor Name: {1}",
_flavor.Id,
_flavor.Name));
// step-6
var instance_name = "testing";
var testing_instance = conn.CreateServer (instance_name,
_image.Id,
_flavor.Id,
region: region,
networks: new List<String> () { networkid });
Console.WriteLine (string.Format(
"Instance Id: {0} at {1}",
testing_instance.Id,
testing_instance.Links
));
// step-7
var instances = conn.ListServers(region:region);
foreach (var instance in instances) {
Console.WriteLine (string.Format(
"Instance Id: {0} at {1}",
instance.Id,
instance.Links));
}
// step-8
conn.DeleteServer(testing_instance.Id, region:region);
// step-9
// step-10
// step-11
// step-12
// step-13
// step-14
// step-15
Console.Read ();
}
}
}

@@ -1,56 +0,0 @@
#!/usr/bin/env ruby
require 'fog/openstack'
# step-1
auth_username = "your_auth_username"
auth_password = "your_auth_password"
auth_url = "http://controller:5000"
project_name = "your_project_name_or_id"
conn = Fog::Compute::OpenStack.new openstack_auth_url: auth_url + "/v3/auth/tokens",
openstack_domain_id: "default",
openstack_username: auth_username,
openstack_api_key: auth_password,
openstack_project_name: project_name
# step-2
volume = conn.volumes.create name: "test",
description: "",
size: 1
p volume
# step-3
p conn.volumes.summary
# step-4
db_group = conn.security_groups.create name: "database",
description: "for database service"
conn.security_group_rules.create parent_group_id: db_group.id,
ip_protocol: "tcp",
from_port: 3306,
to_port: 3306
instance = conn.servers.create name: "app-database",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: key_pair.name,
security_groups: db_group
instance.wait_for { ready? }
# step-5
volume = conn.volumes.get "755ab026-b5f2-4f53-b34a-6d082fb36689"
instance.attach_volume volume.id, "/dev/vdb"
# step-6
instance.detach_volume volume.id
volume.destroy
# step-7
conn.snapshots.create volume_id: volume.id,
name: "test_backup_1",
description: "test"
# step-8

@@ -1,108 +0,0 @@
#!/usr/bin/env ruby
require 'fog/openstack'
require 'digest/md5'
require 'net/http'
require 'json'
require 'open-uri'
# step-1
auth_username = "your_auth_username"
auth_password = "your_auth_password"
auth_url = "http://controller:5000"
project_name = "your_project_name_or_id"
swift = Fog::Storage::OpenStack.new openstack_auth_url: auth_url + "/v3/auth/tokens",
openstack_domain_id: "default",
openstack_username: auth_username,
openstack_api_key: auth_password,
openstack_project_name: project_name
# step-2
container_name = "fractals"
container = swift.directories.create key: container_name
p container
# step-3
p swift.directories.all
# step-4
file_path = "goat.jpg"
object_name = "an amazing goat"
container = swift.directories.get container_name
object = container.files.create body: File.read(File.expand_path(file_path)),
key: object_name
# step-5
p container.files.all
# step-6
p container.files.get object_name
# step-7
puts Digest::MD5.hexdigest(File.read(File.expand_path(file_path)))
# step-8
object.destroy
# step-9
p container.files.all
# step-10
container_name = 'fractals'
container = swift.directories.get container_name
# step-11
endpoint = "http://IP_API_1"
uri = URI("#{endpoint}/v1/fractal")
uri.query = URI.encode_www_form results_per_page: -1
data = JSON.parse(Net::HTTP.get_response(uri).body)
data["objects"].each do |fractal|
body = open("#{endpoint}/fractal/#{fractal["uuid"]}") {|f| f.read}
object = container.files.create body: body, key: fractal["uuid"]
end
p container.files.all
# step-12
container.files.each do |file|
file.destroy
end
container.destroy
# step-13
object_name = "backup_goat.jpg"
file_path = "backup_goat.jpg"
extra = {
description: "a funny goat",
created: "2015-06-02"
}
object = container.files.create body: File.read(File.expand_path(file_path)),
key: object_name,
metadata: extra
# step-14
def chunked_file_upload(swift, container_name, object_name, file_path)
chunk_size = 4096
offset = 0
hash = Digest::MD5.hexdigest(File.read(File.expand_path(file_path)))
object = swift.put_object(container_name, object_name, nil) do
chunk = File.read(file_path, chunk_size, offset)
offset += chunk_size
chunk ? chunk : ''
end
unless hash == object.data[:headers]["etag"]
swift.delete_object container_name, object_name
raise "Checksums do not match. Please retry."
end
container = swift.directories.get container_name
container.files.get object_name
end
object_name = "very_large_file"
file_path = "very_large_file"
object = chunked_file_upload(swift, container_name, object_name, file_path)
# step-15

@@ -1,139 +0,0 @@
#!/usr/bin/env ruby
require 'fog/openstack'
# step-1
auth_username = "your_auth_username"
auth_password = "your_auth_password"
auth_url = "http://controller:5000"
project_name = "your_project_name_or_id"
conn = Fog::Compute::OpenStack.new openstack_auth_url: auth_url + "/v3/auth/tokens",
openstack_domain_id: "default",
openstack_username: auth_username,
openstack_api_key: auth_password,
openstack_project_name: project_name
# step-2
p conn.images.summary
# step-3
p conn.flavors.summary
# step-4
image = conn.images.get "2cccbea0-cea9-4f86-a3ed-065c652adda5"
p image
# step-5
flavor = conn.flavors.get "2"
p flavor
# step-6
instance_name = "testing"
testing_instance = conn.servers.create name: instance_name,
image_ref: image.id,
flavor_ref: flavor.id
testing_instance.wait_for { ready? }
p testing_instance
# step-7
p conn.servers.summary
# step-8
testing_instance.destroy
# step-9
puts "Checking for existing SSH key pair..."
key_pair_name = "demokey"
pub_key_file_path = "~/.ssh/id_rsa.pub"
if key_pair = conn.key_pairs.get(key_pair_name)
puts "Keypair #{key_pair_name} already exists. Skipping import."
else
puts "adding keypair..."
key_pair = conn.key_pairs.create name: key_pair_name,
public_key: File.read(File.expand_path(pub_key_file_path))
end
p conn.key_pairs.all
# step-10
puts "Checking for existing security group..."
security_group_name = "all-in-one"
all_in_one_security_group = conn.security_groups.find do |security_group|
security_group.name == security_group_name
end
if all_in_one_security_group
puts "Security Group #{security_group_name} already exists. Skipping creation."
else
all_in_one_security_group = conn.security_groups.create name: security_group_name,
description: "network access for all-in-one application."
conn.security_group_rules.create parent_group_id: all_in_one_security_group.id,
ip_protocol: "tcp",
from_port: 80,
to_port: 80
conn.security_group_rules.create parent_group_id: all_in_one_security_group.id,
ip_protocol: "tcp",
from_port: 22,
to_port: 22
end
p conn.security_groups.all
# step-11
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
END
# step-12
puts "Checking for existing instance..."
instance_name = "all-in-one"
if testing_instance = conn.servers.find {|instance| instance.name == instance_name}
puts "Instance #{instance_name} already exists. Skipping creation."
else
testing_instance = conn.servers.create name: instance_name,
image_ref: image.id,
flavor_ref: flavor.id,
key_name: key_pair.name,
user_data: user_data,
security_groups: all_in_one_security_group
testing_instance.wait_for { ready? }
end
testing_instance.reload
p conn.servers.summary
# step-13
private_ip_address = testing_instance.private_ip_address
puts "Private IP found: #{private_ip_address}" if private_ip_address
# step-14
floating_ip_address = testing_instance.floating_ip_address
puts "Public IP found: #{floating_ip_address}" if floating_ip_address
# step-15
puts "Checking for unused Floating IP..."
unless unused_floating_ip_address = conn.addresses.find {|address| address.instance_id.nil?}
pool_name = conn.addresses.get_address_pools[0]["name"]
puts "Allocating new Floating IP from pool: #{pool_name}"
unused_floating_ip_address = conn.addresses.create pool: pool_name
end
# step-16
if floating_ip_address
puts "Instance #{testing_instance.name} already has a public ip. Skipping attachment."
elsif unused_floating_ip_address
unused_floating_ip_address.server = testing_instance
end
# step-17
actual_ip_address = floating_ip_address || unused_floating_ip_address.ip || private_ip_address
puts "The Fractals app will be deployed to http://#{actual_ip_address}"

@@ -1,146 +0,0 @@
# step-1
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
END
instance_name = "all-in-one"
testing_instance = conn.servers.create name: instance_name,
image_ref: image.id,
flavor_ref: flavor.id,
key_name: key_pair.name,
user_data: user_data,
security_groups: all_in_one_security_group
testing_instance.wait_for { ready? }
# step-2
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i messaging -i faafo -r api -r worker -r demo
END
# step-3
all_in_one_security_group = conn.security_groups.create name: "all-in-one",
description: "network access for all-in-one application."
conn.security_group_rules.create parent_group_id: all_in_one_security_group.id,
ip_protocol: "tcp",
from_port: 80,
to_port: 80
conn.security_group_rules.create parent_group_id: all_in_one_security_group.id,
ip_protocol: "tcp",
from_port: 22,
to_port: 22
# step-4
conn.security_groups.all
# step-5
rule.destroy
security_group.destroy
# step-6
testing_instance.security_groups
# step-7
unused_floating_ip_address = conn.addresses.find {|address| address.instance_id.nil?}
puts "Found an unused Floating IP: #{unused_floating_ip_address.ip}" if unused_floating_ip_address
# step-8
pool_name = conn.addresses.get_address_pools[0]["name"]
# step-9
unused_floating_ip_address = conn.addresses.create pool: pool_name
# step-10
unused_floating_ip_address.server = testing_instance
# step-11
worker_group = conn.security_groups.create name: "worker",
description: "for services that run on a worker node"
conn.security_group_rules.create parent_group_id: worker_group.id,
ip_protocol: "tcp",
from_port: 22,
to_port: 22
controller_group = conn.security_groups.create name: "control",
description: "for services that run on a control node"
conn.security_group_rules.create parent_group_id: controller_group.id,
ip_protocol: "tcp",
from_port: 22,
to_port: 22
conn.security_group_rules.create parent_group_id: controller_group.id,
ip_protocol: "tcp",
from_port: 80,
to_port: 80
conn.security_group_rules.create parent_group_id: controller_group.id,
ip_protocol: "tcp",
from_port: 5672,
to_port: 5672,
group: worker_group.id
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i messaging -i faafo -r api
END
instance_controller_1 = conn.servers.create name: "app-controller",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: controller_group
instance_controller_1.wait_for { ready? }
puts "Checking for unused Floating IP..."
unless unused_floating_ip_address = conn.addresses.find {|address| address.instance_id.nil?}
pool_name = conn.addresses.get_address_pools[0]["name"]
puts "Allocating new Floating IP from pool: #{pool_name}"
unused_floating_ip_address = conn.addresses.create pool: pool_name
end
unused_floating_ip_address.server = instance_controller_1
puts "Application will be deployed to http://#{unused_floating_ip_address.ip}"
# step-12
instance_controller_1 = conn.servers.get(instance_controller_1.id)
ip_controller = instance_controller_1.private_ip_address || instance_controller_1.floating_ip_address
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r worker -e "http://#{ip_controller}" -m "amqp://guest:guest@#{ip_controller}:5672/"
END
instance_worker_1 = conn.servers.create name: "app-worker-1",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: worker_group
instance_worker_1.wait_for { ready? }
puts "Checking for unused Floating IP..."
unless unused_floating_ip_address = conn.addresses.find {|address| address.instance_id.nil?}
pool_name = conn.addresses.get_address_pools[0]["name"]
puts "Allocating new Floating IP from pool: #{pool_name}"
unused_floating_ip_address = conn.addresses.create pool: pool_name
end
unused_floating_ip_address.server = instance_worker_1
puts "The worker will be available for SSH at #{unused_floating_ip_address.ip}"
# step-13
puts instance_worker_1.private_ip_address
# step-14

@@ -1,163 +0,0 @@
# step-1
instance_names = ["all-in-one","app-worker-1", "app-worker-2", "app-controller"]
conn.servers.select {|instance| instance_names.include?(instance.name)}.each do |instance|
puts "Destroying Instance: #{instance.name}"
instance.destroy
end
security_group_names = ["control", "worker", "api", "services"]
conn.security_groups.select {|security_group| security_group_names.include?(security_group.name)}.each do |security_group|
puts "Deleting security group: #{security_group.name}"
security_group.destroy
end
# step-2
api_group = conn.security_groups.create name: "api",
description: "for API services only"
worker_group = conn.security_groups.create name: "worker",
description: "for services that run on a worker node"
services_group = conn.security_groups.create name: "services",
description: "for DB and AMQP services only"
rules = [
{
parent_group_id: api_group.id,
ip_protocol: "tcp",
from_port: 80,
to_port: 80
},
{
parent_group_id: api_group.id,
ip_protocol: "tcp",
from_port: 22,
to_port: 22
},
{
parent_group_id: worker_group.id,
ip_protocol: "tcp",
from_port: 22,
to_port: 22
},
{
parent_group_id: services_group.id,
ip_protocol: "tcp",
from_port: 22,
to_port: 22
},
{
parent_group_id: services_group.id,
ip_protocol: "tcp",
from_port: 3306,
to_port: 3306,
group: api_group.id
},
{
parent_group_id: services_group.id,
ip_protocol: "tcp",
from_port: 5672,
to_port: 5672,
group: worker_group.id
},
{
parent_group_id: services_group.id,
ip_protocol: "tcp",
from_port: 5672,
to_port: 5672,
group: api_group.id
}
]
rules.each {|rule| conn.security_group_rules.create rule }
# step-3
def get_floating_ip_address(conn)
unless unused_floating_ip_address = conn.addresses.find {|address| address.instance_id.nil?}
pool_name = conn.addresses.get_address_pools[0]["name"]
puts "Allocating new Floating IP from pool: #{pool_name}"
unused_floating_ip_address = conn.addresses.create pool: pool_name
end
unused_floating_ip_address
end
# step-4
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i database -i messaging
END
instance_services = conn.servers.create name: "app-services",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: services_group
instance_services.wait_for { ready? }
services_ip_address = instance_services.private_ip_address
# step-5
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r api -m "amqp://guest:guest@#{services_ip_address}:5672/" -d "mysql+pymysql://faafo:password@#{services_ip_address}:3306/faafo"
END
instance_api_1 = conn.servers.create name: "app-api-1",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: api_group
instance_api_2 = conn.servers.create name: "app-api-2",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: api_group
instance_api_1.wait_for { ready? }
api_1_ip_address = instance_api_1.private_ip_address
instance_api_2.wait_for { ready? }
api_2_ip_address = instance_api_2.private_ip_address
[instance_api_1, instance_api_2].each do |instance|
floating_ip_address = get_floating_ip_address(conn)
floating_ip_address.server = instance
puts "allocated #{floating_ip_address.ip} to #{instance.name}"
end
# step-6
user_data = <<END
#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r worker -e "http://#{api_1_ip_address}" -m "amqp://guest:guest@#{services_ip_address}:5672/"
END
instance_worker_1 = conn.servers.create name: "app-worker-1",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: worker_group
instance_worker_2 = conn.servers.create name: "app-worker-2",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: worker_group
instance_worker_3 = conn.servers.create name: "app-worker-3",
image_ref: image.id,
flavor_ref: flavor.id,
key_name: "demokey",
user_data: user_data,
security_groups: worker_group
# step-7

@@ -1,135 +0,0 @@
package main
import (
"fmt"
"os"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack"
"github.com/gophercloud/gophercloud/openstack/blockstorage/v1/volumes"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/secgroups"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/volumeattach"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
"github.com/gophercloud/gophercloud/pagination"
)
func main() {
// step-1
authOpts, err := openstack.AuthOptionsFromEnv()
if err != nil {
fmt.Println(err)
return
}
provider, err := openstack.AuthenticatedClient(authOpts)
if err != nil {
fmt.Println(err)
return
}
var regionName = os.Getenv("OS_REGION_NAME")
volumeClient, err := openstack.NewBlockStorageV1(provider, gophercloud.EndpointOpts{
Region: regionName,
})
if err != nil {
fmt.Println(err)
return
}
// step-2
volumeOpts := volumes.CreateOpts{Size: 1, Name: "test"}
volume, err := volumes.Create(volumeClient, volumeOpts).Extract()
if err != nil {
fmt.Println(err)
return
}
// step-3
_ = volumes.List(volumeClient, nil).EachPage(func(page pagination.Page) (bool, error) {
volumeList, _ := volumes.ExtractVolumes(page)
for _, vol := range volumeList {
fmt.Println(vol)
}
return true, nil
})
// step-4
computeClient, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{
Region: regionName,
Type: "computev21",
})
if err != nil {
fmt.Println(err)
return
}
securityGroupName := "database"
databaseSecurityGroup, _ := secgroups.Create(computeClient, secgroups.CreateOpts{
Name: securityGroupName,
Description: "for database service",
}).Extract()
secgroups.CreateRule(computeClient, secgroups.CreateRuleOpts{
ParentGroupID: databaseSecurityGroup.ID,
FromPort: 22,
ToPort: 22,
IPProtocol: "TCP",
CIDR: "0.0.0.0/0",
}).Extract()
secgroups.CreateRule(computeClient, secgroups.CreateRuleOpts{
ParentGroupID: databaseSecurityGroup.ID,
FromPort: 3306,
ToPort: 3306,
IPProtocol: "TCP",
CIDR: "0.0.0.0/0",
}).Extract()
userData := `#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo`
imageID := "41ba40fd-e801-4639-a842-e3a2e5a2ebdd"
flavorID := "3"
networkID := "aba7a6f8-6ec9-4666-8c42-ac2d00707010"
serverOpts := servers.CreateOpts{
Name: "app-database",
ImageRef: imageID,
FlavorRef: flavorID,
Networks: []servers.Network{servers.Network{UUID: networkID}},
SecurityGroups: []string{securityGroupName},
UserData: []byte(userData),
}
server, err := servers.Create(computeClient, keypairs.CreateOptsExt{
CreateOptsBuilder: serverOpts,
KeyName: "demokey",
}).Extract()
if err != nil {
fmt.Println(err)
return
}
servers.WaitForStatus(computeClient, server.ID, "ACTIVE", 300)
// step-5
volumeAttachOptions := volumeattach.CreateOpts{
VolumeID: volume.ID,
}
volumeAttach, err := volumeattach.Create(computeClient, server.ID, volumeAttachOptions).Extract()
if err != nil {
fmt.Println(err)
return
}
volumes.WaitForStatus(volumeClient, volume.ID, "in-use", 60)
// step-6
err = volumeattach.Delete(computeClient, server.ID, volume.ID).ExtractErr()
if err != nil {
fmt.Println(err)
return
}
volumes.WaitForStatus(volumeClient, volume.ID, "available", 60)
volumes.Delete(volumeClient, volume.ID)
}

@@ -1,158 +0,0 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack"
"github.com/gophercloud/gophercloud/openstack/objectstorage/v1/containers"
"github.com/gophercloud/gophercloud/openstack/objectstorage/v1/objects"
"github.com/gophercloud/gophercloud/pagination"
)
func main() {
// step-1
authOpts, err := openstack.AuthOptionsFromEnv()
if err != nil {
fmt.Println(err)
return
}
provider, err := openstack.AuthenticatedClient(authOpts)
if err != nil {
fmt.Println(err)
return
}
var regionName = os.Getenv("OS_REGION_NAME")
objectClient, err := openstack.NewObjectStorageV1(provider, gophercloud.EndpointOpts{
Region: regionName,
})
if err != nil {
fmt.Println(err)
return
}
// step-2
containerName := "fractals"
containers.Create(objectClient, containerName, nil)
// step-3
containers.List(objectClient, &containers.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
containerList, _ := containers.ExtractNames(page)
for _, name := range containerList {
fmt.Printf("Container name [%s] \n", name)
}
return true, nil
})
// step-4
filePath := "goat.jpg"
objectName := "an amazing goat"
f, _ := os.Open(filePath)
defer f.Close()
reader := bufio.NewReader(f)
options := objects.CreateOpts{
Content: reader,
}
objects.Create(objectClient, containerName, objectName, options)
// step-5
objects.List(objectClient, containerName, &objects.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
objectList, _ := objects.ExtractNames(page)
for _, name := range objectList {
fmt.Printf("Object name [%s] \n", name)
}
return true, nil
})
// step-6
// step-7
// step-8
objects.Delete(objectClient, containerName, objectName, nil)
// step-9
// step-10
containerName = "fractals"
containers.Create(objectClient, containerName, nil)
// step-11
endpoint := "http://IP_API_1"
resp, _ := http.Get(endpoint + "/v1/fractal")
defer resp.Body.Close()
body, _ := ioutil.ReadAll(resp.Body)
type Fractal struct {
UUID string `json:"uuid"`
}
type Data struct {
Results int `json:"num_results"`
Objects []Fractal `json:"objects"`
Page int `json:"page"`
TotalPages int `json:"total_pages"`
}
var data Data
json.Unmarshal([]byte(body), &data)
for _, fractal := range data.Objects {
r, _ := http.Get(endpoint + "/fractal/" + fractal.UUID)
defer r.Body.Close()
image := fractal.UUID + ".png"
out, _ := os.Create(image)
defer out.Close()
io.Copy(out, r.Body)
f, _ := os.Open(image)
defer f.Close()
reader := bufio.NewReader(f)
options := objects.CreateOpts{
Content: reader,
}
objectName = fractal.UUID
fmt.Printf("Uploading object [%s] in container [%s]... \n", objectName, containerName)
objects.Create(objectClient, containerName, objectName, options)
}
objects.List(objectClient, containerName, &objects.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
objectList, _ := objects.ExtractNames(page)
for _, name := range objectList {
fmt.Printf("Object [%s] in container [%s] \n", name, containerName)
}
return true, nil
})
// step-12
objects.List(objectClient, containerName, &objects.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
objectList, _ := objects.ExtractNames(page)
for _, name := range objectList {
fmt.Printf("Deleting object [%s] in container [%s]... \n", name, containerName)
objects.Delete(objectClient, containerName, name, nil)
}
return true, nil
})
fmt.Printf("Deleting container [%s] \n", containerName)
containers.Delete(objectClient, containerName)
// step-13
objects.Update(objectClient, containerName, objectName, &objects.UpdateOpts{Metadata: map[string]string{"foo": "bar"}})
// step-14
}

@@ -1,310 +0,0 @@
package main
import (
"fmt"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/secgroups"
"github.com/gophercloud/gophercloud/openstack/compute/v2/flavors"
"github.com/gophercloud/gophercloud/openstack/compute/v2/images"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/external"
"github.com/gophercloud/gophercloud/openstack/networking/v2/networks"
"io/ioutil"
"os"
)
func main() {
// step-1
authOpts, err := openstack.AuthOptionsFromEnv()
if err != nil {
fmt.Println(err)
return
}
provider, err := openstack.AuthenticatedClient(authOpts)
if err != nil {
fmt.Println(err)
return
}
var regionName = os.Getenv("OS_REGION_NAME")
client, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{
Region: regionName,
Type: "computev21",
})
if err != nil {
fmt.Println(err)
return
}
// step-2
pager := images.ListDetail(client, images.ListOpts{})
page, _ := pager.AllPages()
imageList, _ := images.ExtractImages(page)
fmt.Println(imageList)
// step-3
pager = flavors.ListDetail(client, flavors.ListOpts{})
page, _ = pager.AllPages()
flavorList, _ := flavors.ExtractFlavors(page)
fmt.Println(flavorList)
// step-4
imageID := "74e6d1ec-9a08-444c-8518-4f232446386d"
image, _ := images.Get(client, imageID).Extract()
fmt.Println(image)
// step-5
flavorID := "1"
flavor, _ := flavors.Get(client, flavorID).Extract()
fmt.Println(flavor)
// step-6
instanceName := "testing"
testingInstance, err := servers.Create(client, servers.CreateOpts{
Name: instanceName,
ImageRef: imageID,
FlavorRef: flavorID,
}).Extract()
if err != nil {
fmt.Println(err)
return
}
fmt.Println(testingInstance)
// step-7
pager = servers.List(client, servers.ListOpts{})
page, _ = pager.AllPages()
serverList, _ := servers.ExtractServers(page)
fmt.Println(serverList)
// step-8
servers.Delete(client, testingInstance.ID)
// step-9
fmt.Println("Checking for existing SSH key pair...")
keyPairName := "demokey"
pubKeyFile := os.Getenv("HOME") + "/.ssh/id_rsa.pub" // note: ioutil.ReadFile does not expand "~", so build the path explicitly
keyPairExists := false
pager = keypairs.List(client)
page, _ = pager.AllPages()
keypairList, _ := keypairs.ExtractKeyPairs(page)
for _, k := range keypairList {
if k.Name == keyPairName {
keyPairExists = true
break
}
}
if keyPairExists {
fmt.Println("Keypair " + keyPairName + " already exists. Skipping import.")
} else {
fmt.Println("adding keypair...")
bs, _ := ioutil.ReadFile(pubKeyFile)
keypairs.Create(client, keypairs.CreateOpts{
Name: keyPairName,
PublicKey: string(bs),
}).Extract()
}
pager = keypairs.List(client)
page, _ = pager.AllPages()
keypairList, _ = keypairs.ExtractKeyPairs(page)
fmt.Println(keypairList)
// step-10
fmt.Println("Checking for existing security group...")
var allInOneSecurityGroup secgroups.SecurityGroup
securityGroupName := "all-in-one"
securityGroupExists := false
pager = secgroups.List(client)
page, _ = pager.AllPages()
secgroupList, _ := secgroups.ExtractSecurityGroups(page)
for _, secGroup := range secgroupList {
if secGroup.Name == securityGroupName {
allInOneSecurityGroup = secGroup
securityGroupExists = true
break
}
}
if securityGroupExists {
fmt.Println("Security Group " + allInOneSecurityGroup.Name + " already exists. Skipping creation.")
} else {
allInOneSecurityGroup, _ := secgroups.Create(client, secgroups.CreateOpts{
Name: securityGroupName,
Description: "network access for all-in-one application.",
}).Extract()
secgroups.CreateRule(client, secgroups.CreateRuleOpts{
ParentGroupID: allInOneSecurityGroup.ID,
FromPort: 80,
ToPort: 80,
IPProtocol: "TCP",
CIDR: "0.0.0.0/0",
}).Extract()
secgroups.CreateRule(client, secgroups.CreateRuleOpts{
ParentGroupID: allInOneSecurityGroup.ID,
FromPort: 22,
ToPort: 22,
IPProtocol: "TCP",
CIDR: "0.0.0.0/0",
}).Extract()
}
pager = secgroups.List(client)
page, _ = pager.AllPages()
secgroupList, _ = secgroups.ExtractSecurityGroups(page)
fmt.Println(secgroupList)
// step-11
userData := `#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo`
// step-12
fmt.Println("Checking for existing instance...")
instanceName = "all-in-one"
instanceExists := false
pager = servers.List(client, servers.ListOpts{})
page, _ = pager.AllPages()
serverList, _ = servers.ExtractServers(page)
for _, s := range serverList {
if s.Name == instanceName {
s := s // copy the loop variable before taking its address
testingInstance = &s
instanceExists = true
break
}
}
if instanceExists {
fmt.Println("Instance " + testingInstance.Name + " already exists. Skipping creation.")
} else {
opts := servers.CreateOpts{
Name: instanceName,
ImageRef: image.ID,
FlavorRef: flavor.ID,
SecurityGroups: []string{securityGroupName},
UserData: []byte(userData),
}
testingInstance, err = servers.Create(client, keypairs.CreateOptsExt{
CreateOptsBuilder: opts,
KeyName: keyPairName,
}).Extract()
if err != nil {
fmt.Println(err)
return
}
}
servers.WaitForStatus(client, testingInstance.ID, "ACTIVE", 300)
pager = servers.List(client, servers.ListOpts{})
page, _ = pager.AllPages()
serverList, _ = servers.ExtractServers(page)
fmt.Println(serverList)
// step-13
var privateIP string
for t, addrs := range testingInstance.Addresses {
if t != "private" || len(privateIP) != 0 {
continue
}
addrs, ok := addrs.([]interface{})
if !ok {
continue
}
for _, addr := range addrs {
a, ok := addr.(map[string]interface{})
if !ok || a["version"].(float64) != 4 {
continue
}
ip, ok := a["addr"].(string)
if ok && len(ip) != 0 {
privateIP = ip
fmt.Println("Private IP found: " + privateIP)
break
}
}
}
// step-14
var publicIP string
for t, addrs := range testingInstance.Addresses {
if t != "public" || len(publicIP) != 0 {
continue
}
addrs, ok := addrs.([]interface{})
if !ok {
continue
}
for _, addr := range addrs {
a, ok := addr.(map[string]interface{})
if !ok || a["version"].(float64) != 4 {
continue
}
ip, ok := a["addr"].(string)
if ok && len(ip) != 0 {
publicIP = ip
fmt.Println("Public IP found: " + publicIP)
break
}
}
}
// step-15
fmt.Println("Checking for unused Floating IP...")
var unusedFloatingIP string
pager = floatingips.List(client)
page, _ = pager.AllPages()
floatingIPList, _ := floatingips.ExtractFloatingIPs(page)
for _, ip := range floatingIPList {
if ip.InstanceID == "" {
unusedFloatingIP = ip.IP
break
}
}
networkClient, _ := openstack.NewNetworkV2(provider, gophercloud.EndpointOpts{
Region: regionName,
})
pager = networks.List(networkClient, networks.ListOpts{})
page, _ = pager.AllPages()
poolList, _ := external.ExtractList(page)
for _, pool := range poolList {
if len(unusedFloatingIP) != 0 || !pool.External {
continue
}
fmt.Println("Allocating new Floating IP from pool: " + pool.Name)
f, _ := floatingips.Create(client, floatingips.CreateOpts{Pool: pool.Name}).Extract()
unusedFloatingIP = f.IP
}
// step-16
if len(publicIP) != 0 {
fmt.Println("Instance " + testingInstance.Name + " already has a public ip. Skipping attachment.")
} else {
opts := floatingips.AssociateOpts{
FloatingIP: unusedFloatingIP,
}
floatingips.AssociateInstance(client, testingInstance.ID, opts)
}
// step-17
var actualIPAddress string
if len(publicIP) != 0 {
actualIPAddress = publicIP
} else if len(unusedFloatingIP) != 0 {
actualIPAddress = unusedFloatingIP
} else {
actualIPAddress = privateIP
}
fmt.Println("The Fractals app will be deployed to http://" + actualIPAddress)
}
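The address-parsing loops in step-13 and step-14 above walk the untyped `Addresses` map that gophercloud returns for a server. The pattern can be exercised in isolation; a minimal sketch, using hypothetical sample data shaped like the JSON the Compute API returns:

```go
package main

import "fmt"

// extractIPv4 mirrors the step-13/step-14 loops: it scans the server's
// Addresses map and returns the first IPv4 address on the named network,
// or "" if none is found.
func extractIPv4(addresses map[string]interface{}, network string) string {
	for t, addrs := range addresses {
		if t != network {
			continue
		}
		list, ok := addrs.([]interface{})
		if !ok {
			continue
		}
		for _, addr := range list {
			a, ok := addr.(map[string]interface{})
			if !ok {
				continue
			}
			// version arrives as a JSON number, hence float64
			if v, ok := a["version"].(float64); !ok || v != 4 {
				continue
			}
			if ip, ok := a["addr"].(string); ok && len(ip) != 0 {
				return ip
			}
		}
	}
	return ""
}

func main() {
	// Hypothetical sample matching the wire format of server.Addresses.
	sample := map[string]interface{}{
		"private": []interface{}{
			map[string]interface{}{"version": float64(4), "addr": "10.0.0.3"},
		},
	}
	fmt.Println(extractIPv4(sample, "private")) // prints "10.0.0.3"
}
```

Extracting the loop into a helper like this also avoids duplicating the same type-assertion dance for the private and public networks.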


@ -1,278 +0,0 @@
heat_template_version: 2014-10-16
description: |
A template that starts the faafo application with auto-scaling workers
parameters:
key_name:
type: string
description: Name of an existing keypair to enable SSH access to the instances
default: id_rsa
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
flavor:
type: string
description: The flavor that the application uses
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
description: The ID of the image to use to create the instance
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
period:
type: number
description: The period to use to calculate the ceilometer statistics (in seconds)
default: 60
faafo_source:
type: string
description: The location of the faafo application install script on the Internet
# allows you to clone and play with the faafo code if you like
default: https://opendev.org/openstack/faafo/raw/contrib/install.sh
resources:
api:
type: OS::Neutron::SecurityGroup
properties:
description: "For ssh and http on an api node"
rules: [
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 22,
port_range_max: 22},
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 80,
port_range_max: 80},]
worker:
type: OS::Neutron::SecurityGroup
properties:
description: "For ssh on a worker node"
rules: [
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 22,
port_range_max: 22},]
services:
type: OS::Neutron::SecurityGroup
properties:
description: "For ssh, DB and AMQP on the services node"
rules: [
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 80,
port_range_max: 80},
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 22,
port_range_max: 22},
{protocol: tcp,
port_range_min: 5672,
port_range_max: 5672,
remote_mode: remote_group_id,
remote_group_id: { get_resource: worker } },
{protocol: tcp,
port_range_min: 5672,
port_range_max: 5672,
remote_mode: remote_group_id,
remote_group_id: { get_resource: api } },
{protocol: tcp,
port_range_min: 3306,
port_range_max: 3306,
remote_mode: remote_group_id,
remote_group_id: { get_resource: api } },
]
app_services:
# The database and AMQP services run on this instance.
type: OS::Nova::Server
properties:
image: { get_param: image_id }
flavor: { get_param: flavor }
key_name: { get_param: key_name }
name: services
security_groups:
- {get_resource: services}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/usr/bin/env bash
curl -L -s faafo_installer | bash -s -- \
-i database -i messaging
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
faafo_installer: { get_param: faafo_source }
api_instance:
# The web interface runs on this instance
type: OS::Nova::Server
properties:
image: { get_param: image_id }
flavor: { get_param: flavor }
key_name: { get_param: key_name }
name: api
security_groups:
- {get_resource: api}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/usr/bin/env bash
curl -L -s faafo_installer | bash -s -- \
-i faafo -r api -m 'amqp://guest:guest@services_ip:5672/' \
-d 'mysql+pymysql://faafo:password@services_ip:3306/faafo'
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
services_ip: { get_attr: [app_services, first_address] }
faafo_installer: { get_param: faafo_source }
worker_auto_scaling_group:
# The worker instances are managed by this auto-scaling group
type: OS::Heat::AutoScalingGroup
properties:
resource:
type: OS::Nova::Server
properties:
key_name: { get_param: key_name }
image: { get_param: image_id }
flavor: { get_param: flavor }
# The metadata used for ceilometer monitoring
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
name: worker
security_groups:
- {get_resource: worker}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/usr/bin/env bash
curl -L -s faafo_installer | bash -s -- \
-i faafo -r worker -e 'http://api_1_ip' -m 'amqp://guest:guest@services_ip:5672/'
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
api_1_ip: { get_attr: [api_instance, first_address] }
services_ip: { get_attr: [app_services, first_address] }
faafo_installer: { get_param: faafo_source }
min_size: 1
desired_capacity: 1
max_size: 3
wait_handle:
type: OS::Heat::WaitConditionHandle
wait_condition:
type: OS::Heat::WaitCondition
depends_on: [ app_services, api_instance, worker_auto_scaling_group ]
properties:
handle: { get_resource: wait_handle }
# All three initial servers clock in when they finish installing their software
count: 3
# 10 minute limit for installation
timeout: 600
scale_up_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: {get_resource: worker_auto_scaling_group}
cooldown: { get_param: period }
scaling_adjustment: 1
scale_down_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: {get_resource: worker_auto_scaling_group}
cooldown: { get_param: period }
scaling_adjustment: '-1'
cpu_alarm_high:
type: OS::Ceilometer::Alarm
properties:
description: Scale-up if the average CPU > 90% for period seconds
meter_name: cpu_util
statistic: avg
period: { get_param: period }
evaluation_periods: 1
threshold: 90
alarm_actions:
- {get_attr: [scale_up_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
comparison_operator: gt
cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
description: Scale-down if the average CPU < 15% for period seconds
meter_name: cpu_util
statistic: avg
period: { get_param: period }
evaluation_periods: 1
threshold: 15
alarm_actions:
- {get_attr: [scale_down_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
comparison_operator: lt
outputs:
api_url:
description: The URL for api server
value:
list_join: ['', ['http://', get_attr: [api_instance, first_address]]]
scale_workers_up_url:
description: >
HTTP POST to this URL webhook to scale up the worker group.
Does not accept request headers or body. Place quotes around the URL.
value: {get_attr: [scale_up_policy, alarm_url]}
scale_workers_down_url:
description: >
HTTP POST to this URL webhook to scale down the worker group.
Does not accept request headers or body. Place quotes around the URL.
value: {get_attr: [scale_down_policy, alarm_url]}
ceilometer_statistics_query:
value:
str_replace:
template: >
ceilometer statistics -m cpu_util -q metadata.user_metadata.stack=stackval -p period -a avg
params:
stackval: { get_param: "OS::stack_id" }
period: { get_param: period }
description: >
This query shows the cpu_util sample statistics of the worker group in this stack.
These statistics trigger the alarms.
ceilometer_sample_query:
value:
str_replace:
template: >
ceilometer sample-list -m cpu_util -q metadata.user_metadata.stack=stackval
params:
stackval: { get_param: "OS::stack_id" }
description: >
This query shows the cpu_util meter samples of the worker group in this stack.
These samples are used to calculate the statistics.


@ -1,90 +0,0 @@
heat_template_version: 2014-10-16
description: |
A template to bring up the faafo application as an all in one install
parameters:
key_name:
type: string
description: Name of an existing KeyPair to enable SSH access to the instances
default: id_rsa
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
flavor:
type: string
description: The flavor that the application uses
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
description: ID of the image to use to create the instance
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
faafo_source:
type: string
description: The http location of the faafo application install script
default: https://opendev.org/openstack/faafo/raw/contrib/install.sh
resources:
security_group:
type: OS::Neutron::SecurityGroup
properties:
description: "SSH and HTTP for the all in one server"
rules: [
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 22,
port_range_max: 22},
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 80,
port_range_max: 80},]
server:
type: OS::Nova::Server
properties:
image: { get_param: image_id }
flavor: { get_param: flavor }
key_name: { get_param: key_name }
security_groups:
- {get_resource: security_group}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/usr/bin/env bash
curl -L -s faafo_installer | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
faafo_installer: { get_param: faafo_source }
wait_handle:
type: OS::Heat::WaitConditionHandle
wait_condition:
type: OS::Heat::WaitCondition
depends_on: server
properties:
handle: { get_resource: wait_handle }
count: 1
# we'll give it 10 minutes
timeout: 600
outputs:
faafo_ip:
description: The faafo url
value:
list_join: ['', ['Faafo can be found at: http://', get_attr: [server, first_address]]]


@ -1,243 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import com.google.common.io.Closeables;
import org.jclouds.ContextBuilder;
import org.jclouds.net.domain.IpProtocol;
import org.jclouds.openstack.cinder.v1.CinderApi;
import org.jclouds.openstack.cinder.v1.domain.Snapshot;
import org.jclouds.openstack.cinder.v1.domain.Volume;
import org.jclouds.openstack.cinder.v1.features.SnapshotApi;
import org.jclouds.openstack.cinder.v1.features.VolumeApi;
import org.jclouds.openstack.cinder.v1.options.CreateSnapshotOptions;
import org.jclouds.openstack.cinder.v1.options.CreateVolumeOptions;
import org.jclouds.openstack.cinder.v1.predicates.SnapshotPredicates;
import org.jclouds.openstack.cinder.v1.predicates.VolumePredicates;
import org.jclouds.openstack.nova.v2_0.NovaApi;
import org.jclouds.openstack.nova.v2_0.domain.Ingress;
import org.jclouds.openstack.nova.v2_0.domain.SecurityGroup;
import org.jclouds.openstack.nova.v2_0.domain.ServerCreated;
import org.jclouds.openstack.nova.v2_0.extensions.SecurityGroupApi;
import org.jclouds.openstack.nova.v2_0.features.ServerApi;
import org.jclouds.openstack.nova.v2_0.options.CreateServerOptions;
import org.jclouds.openstack.nova.v2_0.predicates.ServerPredicates;
import java.io.Closeable;
import java.io.IOException;
import java.util.Optional;
import java.util.Scanner;
import java.util.concurrent.TimeoutException;
import static java.lang.System.out;
public class BlockStorage implements Closeable {
// Set the following to match the values for your cloud
private static final String IDENTITY = "your_project_name:your_auth_username"; // note: projectName:userName
private static final String KEY_PAIR_NAME = "your_key_pair_name";
private static final String AUTH_URL = "http://controller:5000";
private static final String AVAILABILITY_ZONE = "your_availability_zone";
private static final String IMAGE_ID = "your_desired_image_id";
private static final String FLAVOR_ID = "your_desired_flavor_id";
private static final String DATABASE_SECURITY_GROUP_NAME = "database";
private final CinderApi cinderApi;
private final VolumeApi volumeApi;
private final NovaApi novaApi;
private final ServerApi serverApi;
private final String region;
// step-1
public BlockStorage(final String password) {
cinderApi = ContextBuilder.newBuilder("openstack-cinder")
.endpoint(AUTH_URL)
.credentials(IDENTITY, password)
.buildApi(CinderApi.class);
region = cinderApi.getConfiguredRegions().iterator().next();
out.println("Running in region: " + region);
volumeApi = cinderApi.getVolumeApi(region);
novaApi = ContextBuilder.newBuilder("openstack-nova")
.endpoint(AUTH_URL)
.credentials(IDENTITY, password)
.buildApi(NovaApi.class);
serverApi = novaApi.getServerApi(region);
}
// step-2
private Volume createVolume() throws TimeoutException {
String volumeName = "Test";
CreateVolumeOptions options = CreateVolumeOptions.Builder
.name(volumeName)
.availabilityZone(AVAILABILITY_ZONE);
out.println("Creating 1 Gig volume named '" + volumeName + "'");
Volume volume = volumeApi.create(1, options);
// Wait for the volume to become available
if (!VolumePredicates.awaitAvailable(volumeApi).apply(volume)) {
throw new TimeoutException("Timeout on volume create");
}
return volume;
}
// step-3
private void listVolumes() {
out.println("Listing volumes");
cinderApi.getConfiguredRegions().forEach((region) -> {
out.println(" In region: " + region);
cinderApi.getVolumeApi(region).list().forEach((volume) -> {
out.println(" " + volume.getName());
});
});
}
// step-4
private boolean isSecurityGroup(String securityGroupName, SecurityGroupApi securityGroupApi) {
for (SecurityGroup securityGroup : securityGroupApi.list()) {
if (securityGroup.getName().equals(securityGroupName)) {
return true;
}
}
return false;
}
// A utility method to convert a google optional into a Java 8 optional
private <T> Optional<T> optional(com.google.common.base.Optional<T> target) {
return target.isPresent() ? Optional.of(target.get()) : Optional.empty();
}
private void createSecurityGroup(String securityGroupName) {
optional(novaApi.getSecurityGroupApi(region)).ifPresent(securityGroupApi -> {
if (isSecurityGroup(securityGroupName, securityGroupApi)) {
out.println("Security group " + securityGroupName + " already exists");
} else {
out.println("Creating security group " + securityGroupName + "...");
SecurityGroup securityGroup =
securityGroupApi.createWithDescription(securityGroupName,
"For database service");
securityGroupApi.createRuleAllowingCidrBlock(
securityGroup.getId(), Ingress
.builder()
.ipProtocol(IpProtocol.TCP)
.fromPort(3306)
.toPort(3306)
.build(), "0.0.0.0/0");
}
});
}
private String createDbInstance() throws TimeoutException {
String instanceName = "app-database";
out.println("Creating instance " + instanceName);
CreateServerOptions allInOneOptions = CreateServerOptions.Builder
.keyPairName(KEY_PAIR_NAME)
.availabilityZone(AVAILABILITY_ZONE)
.securityGroupNames(DATABASE_SECURITY_GROUP_NAME);
ServerCreated server = serverApi.create(instanceName, IMAGE_ID, FLAVOR_ID, allInOneOptions);
String id = server.getId();
// Wait for the server to become available
if (!ServerPredicates.awaitActive(serverApi).apply(id)) {
throw new TimeoutException("Timeout on server create");
}
return id;
}
// step-5
private void attachVolume(Volume volume, String instanceId) throws TimeoutException {
out.format("Attaching volume %s to instance %s%n", volume.getId(), instanceId);
optional(novaApi.getVolumeAttachmentApi(region)).ifPresent(volumeAttachmentApi -> {
volumeAttachmentApi.attachVolumeToServerAsDevice(volume.getId(), instanceId, "/dev/vdb");
}
);
// Wait for the volume to be attached
if (!VolumePredicates.awaitInUse(volumeApi).apply(volume)) {
throw new TimeoutException("Timeout on volume attach");
}
}
// step-6
private void detachVolume(Volume volume, String instanceId) throws TimeoutException {
out.format("Detach volume %s from instance %s%n", volume.getId(), instanceId);
optional(novaApi.getVolumeAttachmentApi(region)).ifPresent(volumeAttachmentApi -> {
volumeAttachmentApi.detachVolumeFromServer(volume.getId(), instanceId);
});
// Wait for the volume to be detached
if (!VolumePredicates.awaitAvailable(volumeApi).apply(Volume.forId(volume.getId()))) {
throw new TimeoutException("Timeout on volume detach");
}
}
private void destroyVolume(Volume volume) throws TimeoutException {
out.println("Destroy volume " + volume.getName());
volumeApi.delete(volume.getId());
// Wait for the volume to be deleted
if (!VolumePredicates.awaitDeleted(volumeApi).apply(volume)) {
throw new TimeoutException("Timeout on volume delete");
}
}
// step-7
private Snapshot createVolumeSnapshot(Volume volume) throws TimeoutException {
out.println("Create snapshot of volume " + volume.getName());
SnapshotApi snapshotApi = cinderApi.getSnapshotApi(region);
CreateSnapshotOptions options = CreateSnapshotOptions.Builder
.name(volume.getName() + " snapshot")
.description("Snapshot of " + volume.getId());
Snapshot snapshot = snapshotApi.create(volume.getId(), options);
// Wait for the snapshot to become available
if (!SnapshotPredicates.awaitAvailable(snapshotApi).apply(snapshot)) {
throw new TimeoutException("Timeout on volume snapshot");
}
return snapshot;
}
private void deleteVolumeSnapshot(Snapshot snapshot) throws TimeoutException {
out.println("Delete volume snapshot " + snapshot.getName());
SnapshotApi snapshotApi = cinderApi.getSnapshotApi(region);
snapshotApi.delete(snapshot.getId());
// Wait for the snapshot to be deleted
if (!SnapshotPredicates.awaitDeleted(snapshotApi).apply(snapshot)) {
throw new TimeoutException("Timeout on snapshot delete");
}
}
// step-8
@Override
public void close() throws IOException {
Closeables.close(novaApi, true);
Closeables.close(cinderApi, true);
}
public static void main(String... args) throws TimeoutException, IOException {
try (Scanner scanner = new Scanner(System.in, "UTF-8")) {
out.println("Please enter your API password: ");
String password = scanner.next();
try (BlockStorage storage = new BlockStorage(password)) {
Volume volume = storage.createVolume();
storage.listVolumes();
storage.createSecurityGroup(DATABASE_SECURITY_GROUP_NAME);
String dbInstanceId = storage.createDbInstance();
storage.attachVolume(volume, dbInstanceId);
storage.detachVolume(volume, dbInstanceId);
Snapshot snapshot = storage.createVolumeSnapshot(volume);
// have to delete the snapshot before we can delete the volume...
storage.deleteVolumeSnapshot(snapshot);
storage.destroyVolume(volume);
}
}
}
}


@ -1,296 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import com.google.common.collect.ImmutableMap;
import com.google.common.io.ByteSource;
import com.google.common.io.Closeables;
import com.google.common.io.Files;
import com.google.gson.Gson;
import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.domain.Location;
import org.jclouds.io.Payload;
import org.jclouds.io.Payloads;
import org.jclouds.openstack.swift.v1.SwiftApi;
import org.jclouds.openstack.swift.v1.blobstore.RegionScopedBlobStoreContext;
import org.jclouds.openstack.swift.v1.domain.Container;
import org.jclouds.openstack.swift.v1.domain.SwiftObject;
import org.jclouds.openstack.swift.v1.features.ContainerApi;
import org.jclouds.openstack.swift.v1.features.ObjectApi;
import org.jclouds.openstack.swift.v1.options.CreateContainerOptions;
import org.jclouds.openstack.swift.v1.options.PutOptions;
import java.io.*;
import java.net.URL;
import java.net.URLConnection;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import static com.google.common.collect.Iterables.getOnlyElement;
import static java.lang.System.out;
import static org.jclouds.io.Payloads.newFilePayload;
import static org.jclouds.io.Payloads.newInputStreamPayload;
public class Durability implements Closeable {
// step-1
private final SwiftApi swiftApi;
private final String region;
private static final String PROVIDER = "openstack-swift";
private static final String OS_AUTH_URL = "http://controller:5000/v2.0/";
// format for identity is projectName:userName
private static final String IDENTITY = "your_project_name:your_auth_username";
private static final String PASSWORD = "your_auth_password";
public Durability() {
swiftApi = ContextBuilder.newBuilder(PROVIDER)
.endpoint(OS_AUTH_URL)
.credentials(IDENTITY, PASSWORD)
.buildApi(SwiftApi.class);
region = swiftApi.getConfiguredRegions().iterator().next();
out.println("Running in region: " + region);
}
// step-2
private void createContainer(String containerName) {
ContainerApi containerApi = swiftApi.getContainerApi(region);
if (containerApi.create(containerName)) {
out.println("Created container: " + containerName);
} else {
out.println("Container already exists: " + containerName);
}
}
// step-3
private void listContainers() {
out.println("Containers:");
ContainerApi containerApi = swiftApi.getContainerApi(region);
containerApi.list().forEach(container -> out.println(" " + container));
}
// step-4
private String uploadObject(String containerName, String objectName, String filePath) {
Payload payload = newFilePayload(new File(filePath));
ObjectApi objectApi = swiftApi.getObjectApi(region, containerName);
String eTag = objectApi.put(objectName, payload);
out.println(String.format("Uploaded %s as \"%s\" eTag = %s", filePath, objectName, eTag));
return eTag;
}
// step-5
private void listObjectsInContainer(String containerName) {
out.println("Objects in " + containerName + ":");
ObjectApi objectApi = swiftApi.getObjectApi(region, containerName);
objectApi.list().forEach(object -> out.println(" " + object));
}
// step-6
private SwiftObject getObjectFromContainer(String containerName, String objectName) {
ObjectApi objectApi = swiftApi.getObjectApi(region, containerName);
SwiftObject object = objectApi.get(objectName);
out.println("Fetched: " + object.getName());
return object;
}
// step-7
private void calculateMd5ForFile(String filePath) {
try (FileInputStream fis = new FileInputStream(new File(filePath))) {
MessageDigest md5Digest = MessageDigest.getInstance("MD5");
byte[] byteArray = new byte[1024];
int bytesCount;
while ((bytesCount = fis.read(byteArray)) != -1) {
md5Digest.update(byteArray, 0, bytesCount);
}
byte[] digest = md5Digest.digest();
// Convert decimal number to hex string
StringBuilder sb = new StringBuilder();
for (byte aByte : digest) {
sb.append(Integer.toString((aByte & 0xff) + 0x100, 16).substring(1));
}
out.println("MD5 for file " + filePath + ": " + sb.toString());
} catch (IOException | NoSuchAlgorithmException e) {
out.println("Could not calculate md5: " + e.getMessage());
}
}
// step-8
private void deleteObject(String containerName, String objectName) {
ObjectApi objectApi = swiftApi.getObjectApi(region, containerName);
objectApi.delete(objectName);
out.println("Deleted: " + objectName);
}
// step-10
private Container getContainer(String containerName) {
ContainerApi containerApi = swiftApi.getContainerApi(region);
// ensure container exists
containerApi.create(containerName);
return containerApi.get(containerName);
}
// step-11
static class Fractal {
// only included elements we want to work with
String uuid;
}
static class Fractals {
// only included elements we want to work with
List<Fractal> objects;
}
private void backupFractals(String containerName, String fractalsIp) {
// ensure the backup container exists
getContainer(containerName);
try {
String response = "";
String endpoint = "http://" + fractalsIp + "/v1/fractal";
URLConnection connection = new URL(endpoint).openConnection();
connection.setRequestProperty("results_per_page", "-1");
try (BufferedReader in = new BufferedReader(new InputStreamReader(
connection.getInputStream()))) {
String inputLine;
while ((inputLine = in.readLine()) != null) {
response = response + inputLine;
}
}
Gson gson = new Gson();
Fractals fractals = gson.fromJson(response, Fractals.class);
ObjectApi objectApi = swiftApi.getObjectApi(region, containerName);
fractals.objects.forEach(fractal -> {
try {
String fractalEndpoint = "http://" + fractalsIp + "/fractal/" + fractal.uuid;
URLConnection conn = new URL(fractalEndpoint).openConnection();
try (InputStream inputStream = conn.getInputStream()) {
Payload payload = newInputStreamPayload(inputStream);
String eTag = objectApi.put(fractal.uuid, payload);
out.println(String.format("Backed up %s eTag = %s", fractal.uuid, eTag));
}
} catch (IOException e) {
out.println("Could not backup " + fractal.uuid + "! Cause: " + e.getMessage());
}
});
out.println("Backed up:");
objectApi.list().forEach(object -> out.println(" " + object));
} catch (IOException e) {
out.println("Could not backup fractals! Cause: " + e.getMessage());
}
}
// step-12
private boolean deleteContainer(String containerName) {
ObjectApi objectApi = swiftApi.getObjectApi(region, containerName);
objectApi.list().forEach(object -> objectApi.delete(object.getName()));
ContainerApi containerApi = swiftApi.getContainerApi(region);
return containerApi.deleteIfEmpty(containerName);
}
// step-13
private void createWithMetadata(String containerName, String objectName, String filePath) {
ContainerApi containerApi = swiftApi.getContainerApi(region);
CreateContainerOptions options = CreateContainerOptions.Builder
.metadata(ImmutableMap.of("photos", "of fractals"));
if (containerApi.create(containerName, options)) {
out.println("Uploading: " + objectName);
ObjectApi objectApi = swiftApi.getObjectApi(region, containerName);
Payload payload = newFilePayload(new File(filePath));
PutOptions putOptions = PutOptions.Builder
.metadata(ImmutableMap.of(
"description", "a funny goat",
"created", "2015-06-02"));
String eTag = objectApi.put(objectName, payload, putOptions);
out.println(
String.format("Uploaded %s as \"%s\" eTag = %s", filePath, objectName, eTag));
} else {
out.println("Could not upload " + objectName);
}
}
// step-14
private void uploadLargeFile(String containerName, String pathNameOfLargeFile) {
// Only works with jclouds V2 (in beta at the time of writing). See:
// https://issues.apache.org/jira/browse/JCLOUDS-894
try {
RegionScopedBlobStoreContext context = ContextBuilder.newBuilder(PROVIDER)
.credentials(IDENTITY, PASSWORD)
.endpoint(OS_AUTH_URL)
.buildView(RegionScopedBlobStoreContext.class);
String region = context.getConfiguredRegions().iterator().next();
out.println("Running in region: " + region);
BlobStore blobStore = context.getBlobStore(region);
// create the container if it doesn't exist...
Location location = getOnlyElement(blobStore.listAssignableLocations());
blobStore.createContainerInLocation(location, containerName);
File largeFile = new File(pathNameOfLargeFile);
ByteSource source = Files.asByteSource(largeFile);
Payload payload = Payloads.newByteSourcePayload(source);
payload.getContentMetadata().setContentLength(largeFile.length());
out.println("Uploading file. This may take some time!");
Blob blob = blobStore.blobBuilder(largeFile.getName())
.payload(payload)
.build();
org.jclouds.blobstore.options.PutOptions putOptions =
new org.jclouds.blobstore.options.PutOptions();
String eTag = blobStore.putBlob(containerName, blob, putOptions.multipart());
out.println(String.format("Uploaded %s eTag=%s", largeFile.getName(), eTag));
} catch (UnsupportedOperationException e) {
out.println("Sorry: large file uploads only work in jclouds V2...");
}
}
// step-15
@Override
public void close() throws IOException {
Closeables.close(swiftApi, true);
}
public static void main(String[] args) throws IOException {
try (Durability tester = new Durability()) {
String containerName = "fractals";
String objectName = "an amazing goat";
String goatImageFilePath = "goat.jpg";
String fractalsIp = "IP_API_1";
String pathNameOfLargeFile = "big.img";
tester.createContainer(containerName);
tester.listContainers();
tester.uploadObject(containerName, objectName, goatImageFilePath);
tester.listObjectsInContainer(containerName);
tester.getObjectFromContainer(containerName, objectName);
tester.calculateMd5ForFile(goatImageFilePath);
tester.deleteObject(containerName, objectName);
tester.getContainer(containerName);
tester.backupFractals(containerName, fractalsIp);
tester.deleteContainer(containerName);
tester.createWithMetadata(containerName, objectName, goatImageFilePath);
tester.listContainers();
tester.uploadLargeFile("largeObject", pathNameOfLargeFile);
}
}
}
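For a non-segmented object, Swift's eTag is the hex MD5 digest of the object's content, which is why the tester above calls calculateMd5ForFile after an upload. The digest can be computed with the JDK alone; a minimal sketch (the class and method names here are illustrative, not part of the tutorial code or the jclouds API):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Example {
    // Hex MD5 digest of a byte array, in the same form Swift reports as the eTag.
    static String md5Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // MD5 is guaranteed to be available in every JDK
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Compare this against the eTag returned by objectApi.put(...)
        System.out.println(md5Hex("abc".getBytes()));
    }
}
```

Comparing this local digest with the eTag returned by `objectApi.put` confirms the object arrived intact.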

@@ -1,266 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import com.google.common.base.Optional;
import org.jclouds.ContextBuilder;
import org.jclouds.net.domain.IpProtocol;
import org.jclouds.openstack.nova.v2_0.NovaApi;
import org.jclouds.openstack.nova.v2_0.domain.*;
import org.jclouds.openstack.nova.v2_0.extensions.FloatingIPApi;
import org.jclouds.openstack.nova.v2_0.extensions.KeyPairApi;
import org.jclouds.openstack.nova.v2_0.extensions.SecurityGroupApi;
import org.jclouds.openstack.nova.v2_0.features.FlavorApi;
import org.jclouds.openstack.nova.v2_0.features.ImageApi;
import org.jclouds.openstack.nova.v2_0.features.ServerApi;
import org.jclouds.openstack.nova.v2_0.options.CreateServerOptions;
import org.jclouds.openstack.nova.v2_0.predicates.ServerPredicates;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import static java.lang.System.out;
class GettingStarted {
public static void main(String[] args) throws IOException {
out.println("=============================");
// # step-1
String provider = "openstack-nova";
String identity = "your_project_name_or_id:your_auth_username";
// NB: Do not check this file into source control with a real password in it!
String credential = "your_auth_password";
String authUrl = "http://controller:5000/v2.0/";
NovaApi conn = ContextBuilder.newBuilder(provider)
.endpoint(authUrl)
.credentials(identity, credential)
.buildApi(NovaApi.class);
String region = conn.getConfiguredRegions().iterator().next();
out.println("Running in region: " + region);
// # step-2
ImageApi imageApi = conn.getImageApi(region);
out.println("Images in region:");
imageApi.list().concat().forEach(image -> out.println(" " + image.getName()));
// # step-3
FlavorApi flavorApi = conn.getFlavorApi(region);
out.println("Flavors in region:");
flavorApi.list().concat().forEach(flavor -> out.println(" " + flavor.getName()));
// # step-4
String imageId = "778e7b2e-4e67-44eb-9565-9c920e236dfd";
Image retrievedImage = conn.getImageApi(region).get(imageId);
out.println(retrievedImage.toString());
// # step-5
String flavorId = "639b8b2a-a5a6-4aa2-8592-ca765ee7af63";
Flavor flavor = conn.getFlavorApi(region).get(flavorId);
out.println(flavor.toString());
// # step-6
String testingInstance = "testingInstance";
ServerCreated testInstance = conn.getServerApi(region).create(testingInstance, imageId, flavorId);
out.println("Server created. ID: " + testInstance.getId());
// # step-7
ServerApi serverApi = conn.getServerApi(region);
out.println("Instances in region:");
serverApi.list().concat().forEach(instance -> out.println(" " + instance));
// # step-8
if (serverApi.delete(testInstance.getId())) {
out.println("Server " + testInstance.getId() + " being deleted, please wait.");
ServerPredicates.awaitStatus(serverApi, Server.Status.DELETED, 600, 5).apply(testInstance.getId());
serverApi.list().concat().forEach(instance -> out.println(" " + instance));
} else {
out.println("Server not deleted.");
}
// # step-9
String pub_key_file = "id_rsa";
// Java does not expand "~" in paths; resolve the home directory explicitly.
String privateKeyFile = System.getProperty("user.home") + "/.ssh/" + pub_key_file;
Optional<? extends KeyPairApi> keyPairApiExtension = conn.getKeyPairApi(region);
if (keyPairApiExtension.isPresent()) {
out.println("Checking for existing SSH keypair...");
KeyPairApi keyPairApi = keyPairApiExtension.get();
boolean keyPairFound = keyPairApi.get(pub_key_file) != null;
if (keyPairFound) {
out.println("Keypair " + pub_key_file + " already exists.");
} else {
out.println("Creating keypair.");
KeyPair keyPair = keyPairApi.create(pub_key_file);
try {
Files.write(Paths.get(privateKeyFile), keyPair.getPrivateKey().getBytes());
out.println("Wrote " + privateKeyFile + ".");
// set file permissions to 600
Set<PosixFilePermission> permissions = new HashSet<>();
permissions.add(PosixFilePermission.OWNER_READ);
permissions.add(PosixFilePermission.OWNER_WRITE);
Files.setPosixFilePermissions(Paths.get(privateKeyFile), permissions);
} catch (IOException e) {
e.printStackTrace();
}
}
out.println("Existing keypairs:");
keyPairApi.list().forEach(keyPair -> out.println(" " + keyPair));
} else {
out.println("No keypair extension present; skipping keypair checks.");
}
// # step-10
String securityGroupName = "all-in-one";
Optional<? extends SecurityGroupApi> securityGroupApiExtension = conn.getSecurityGroupApi(region);
if (securityGroupApiExtension.isPresent()) {
out.println("Checking security groups.");
SecurityGroupApi securityGroupApi = securityGroupApiExtension.get();
boolean securityGroupFound = false;
for (SecurityGroup securityGroup : securityGroupApi.list()) {
securityGroupFound = securityGroupFound || securityGroup.getName().equals(securityGroupName);
}
if (securityGroupFound) {
out.println("Security group " + securityGroupName + " already exists.");
} else {
out.println("Creating " + securityGroupName + "...");
SecurityGroup securityGroup = securityGroupApi.createWithDescription(securityGroupName,
securityGroupName + " network access for all-in-one application.");
Ingress sshIngress = Ingress.builder().fromPort(22).ipProtocol(IpProtocol.TCP).toPort(22).build();
Ingress webIngress = Ingress.builder().fromPort(80).ipProtocol(IpProtocol.TCP).toPort(80).build();
securityGroupApi.createRuleAllowingCidrBlock(securityGroup.getId(), sshIngress, "0.0.0.0/0");
securityGroupApi.createRuleAllowingCidrBlock(securityGroup.getId(), webIngress, "0.0.0.0/0");
}
out.println("Existing Security Groups: ");
for (SecurityGroup thisSecurityGroup : securityGroupApi.list()) {
out.println(" " + thisSecurityGroup);
thisSecurityGroup.getRules().forEach(rule -> out.println(" " + rule));
}
} else {
out.println("No security group extension present; skipping security group checks.");
}
// # step-11
String ex_userdata = "#!/usr/bin/env bash\n" +
" curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \\\n" +
" -i faafo -i messaging -r api -r worker -r demo\n";
// # step-12
out.println("Checking for existing instance...");
String instanceName = "all-in-one";
Server allInOneInstance = null;
for (Server thisInstance : serverApi.listInDetail().concat()) {
if (thisInstance.getName().equals(instanceName)) {
allInOneInstance = thisInstance;
}
}
if (allInOneInstance != null) {
out.println("Instance " + instanceName + " already exists. Skipping creation.");
} else {
out.println("Creating instance...");
CreateServerOptions allInOneOptions = CreateServerOptions.Builder
.keyPairName(pub_key_file)
.securityGroupNames(securityGroupName)
// If not running in a single-tenant network, this is where you add your network...
// .networks("79e8f822-99e1-436f-a62c-66e8d3706940")
.userData(ex_userdata.getBytes());
ServerCreated allInOneInstanceCreated = serverApi.create(instanceName, imageId, flavorId, allInOneOptions);
ServerPredicates.awaitActive(serverApi).apply(allInOneInstanceCreated.getId());
allInOneInstance = serverApi.get(allInOneInstanceCreated.getId());
out.println("Instance created: " + allInOneInstance.getId());
}
out.println("Existing instances:");
serverApi.listInDetail().concat().forEach(instance -> out.println(" " + instance.getName()));
// # step-13
out.println("Checking for unused floating IPs...");
FloatingIP unusedFloatingIP = null;
if (conn.getFloatingIPApi(region).isPresent()) {
FloatingIPApi floatingIPApi = conn.getFloatingIPApi(region).get();
List<FloatingIP> freeIP = floatingIPApi.list().toList().stream().filter(
floatingIp -> floatingIp.getInstanceId() == null).collect(Collectors.toList());
if (freeIP.size() > 0) {
out.println("The following IPs are available:");
freeIP.forEach(floatingIP -> out.println(" " + floatingIP.getIp()));
unusedFloatingIP = freeIP.get(0);
} else {
out.println("Creating new floating IP.... ");
unusedFloatingIP = floatingIPApi.create();
}
if (unusedFloatingIP != null) {
out.println("Using: " + unusedFloatingIP.getIp());
}
} else {
out.println("No floating ip extension present; skipping floating ip creation.");
}
// # step-14
out.println(allInOneInstance.getAddresses());
if (allInOneInstance.getAccessIPv4() != null) {
out.println("Public IP already assigned. Skipping attachment.");
} else if (unusedFloatingIP != null) {
out.println("Attaching new IP, please wait...");
// api must be present if we have managed to allocate a floating IP
conn.getFloatingIPApi(region).get().addToServer(unusedFloatingIP.getIp(), allInOneInstance.getId());
// This operation takes some indeterminate amount of time; don't move on until it's done.
while (allInOneInstance.getAccessIPv4() == null) {
// Server objects are not updated "live", so keep re-fetching until the IP appears
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException ex) {
out.println("Awakened prematurely.");
}
allInOneInstance = serverApi.get(allInOneInstance.getId());
}
}
// # step-15
out.print("Be patient: all going well, the Fractals app will soon be available at http://" + allInOneInstance.getAccessIPv4());
// # step-16
}
}
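The step-1 comment above warns against committing a real password; one common remedy is to read credentials from the environment instead of hardcoding them. A minimal sketch, assuming the operator exports OS_IDENTITY and OS_PASSWORD before running (the variable names and fallback behavior are illustrative, not part of jclouds or the tutorial):

```java
public class EnvCredentials {
    // Return the value of an environment variable, or a placeholder when unset.
    static String fromEnv(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // e.g. "export OS_PASSWORD=..." in the shell, instead of editing the source
        String identity = fromEnv("OS_IDENTITY", "your_project_name_or_id:your_auth_username");
        String credential = fromEnv("OS_PASSWORD", "your_auth_password");
        System.out.println("identity = " + identity);
    }
}
```

The values returned here would be passed to `ContextBuilder.credentials(identity, credential)` exactly as the hardcoded strings are above.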

@@ -1,428 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableSet;
import com.google.common.io.Closeables;
import com.google.inject.Module;
import org.jclouds.ContextBuilder;
import org.jclouds.logging.slf4j.config.SLF4JLoggingModule;
import org.jclouds.net.domain.IpProtocol;
import org.jclouds.openstack.nova.v2_0.NovaApi;
import org.jclouds.openstack.nova.v2_0.domain.*;
import org.jclouds.openstack.nova.v2_0.extensions.FloatingIPApi;
import org.jclouds.openstack.nova.v2_0.extensions.FloatingIPPoolApi;
import org.jclouds.openstack.nova.v2_0.extensions.SecurityGroupApi;
import org.jclouds.openstack.nova.v2_0.extensions.ServerWithSecurityGroupsApi;
import org.jclouds.openstack.nova.v2_0.features.ServerApi;
import org.jclouds.openstack.nova.v2_0.options.CreateServerOptions;
import org.jclouds.openstack.nova.v2_0.predicates.ServerPredicates;
import java.io.Closeable;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Scanner;
import java.util.Set;
import java.util.stream.Collectors;
import static java.lang.System.out;
public class Introduction implements Closeable {
private static final String PROVIDER = "openstack-nova";
private static final String OS_AUTH_URL = "http://controller:5000/v2.0";
// format for identity is tenantName:userName
private static final String IDENTITY = "your_project_name_or_id:your_auth_username";
private static final String IMAGE_ID = "2cccbea0-cea9-4f86-a3ed-065c652adda5";
private static final String FLAVOR_ID = "2";
private static final String ALL_IN_ONE_SECURITY_GROUP_NAME = "all-in-one";
public static final String ALL_IN_ONE_INSTANCE_NAME = "all-in-one";
private static final String WORKER_SECURITY_GROUP_NAME = "worker";
private static final String CONTROLLER_SECURITY_GROUP_NAME = "control";
private final NovaApi novaApi;
private final String region;
private final ServerApi serverApi;
private final String ex_keyname = "demokey";
public Introduction(final String password) {
Iterable<Module> modules = ImmutableSet.<Module>of(new SLF4JLoggingModule());
novaApi = ContextBuilder.newBuilder(PROVIDER)
.endpoint(OS_AUTH_URL)
.credentials(IDENTITY, password)
.modules(modules)
.buildApi(NovaApi.class);
region = novaApi.getConfiguredRegions().iterator().next();
serverApi = novaApi.getServerApi(region);
out.println("Running in region: " + region);
}
// step-3
private Ingress getIngress(int port) {
return Ingress
.builder()
.ipProtocol(IpProtocol.TCP)
.fromPort(port)
.toPort(port)
.build();
}
private void createAllInOneSecurityGroup() {
Optional<? extends SecurityGroupApi> optional = novaApi.getSecurityGroupApi(region);
if (optional.isPresent()) {
SecurityGroupApi securityGroupApi = optional.get();
if (isSecurityGroup(ALL_IN_ONE_SECURITY_GROUP_NAME, securityGroupApi)) {
out.println("Security Group " + ALL_IN_ONE_SECURITY_GROUP_NAME + " already exists");
} else {
out.println("Creating Security Group " + ALL_IN_ONE_SECURITY_GROUP_NAME + "...");
SecurityGroup securityGroup =
securityGroupApi.createWithDescription(ALL_IN_ONE_SECURITY_GROUP_NAME,
"Network access for all-in-one application.");
securityGroupApi.createRuleAllowingCidrBlock(
securityGroup.getId(), getIngress(22), "0.0.0.0/0");
securityGroupApi.createRuleAllowingCidrBlock(
securityGroup.getId(), getIngress(80), "0.0.0.0/0");
}
} else {
out.println("No Security Group extension present; skipping security group demo.");
}
}
// step-3-end
private SecurityGroup getSecurityGroup(String securityGroupName, SecurityGroupApi securityGroupApi) {
for (SecurityGroup securityGroup : securityGroupApi.list()) {
if (securityGroup.getName().equals(securityGroupName)) {
return securityGroup;
}
}
return null;
}
private boolean isSecurityGroup(String securityGroupName, SecurityGroupApi securityGroupApi) {
return getSecurityGroup(securityGroupName, securityGroupApi) != null;
}
// step-4
private void listAllSecurityGroups() {
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
out.println("Existing Security Groups:");
for (SecurityGroup securityGroup : securityGroupApi.list()) {
out.println(" " + securityGroup.getName());
}
} else {
out.println("No Security Group extension present; skipping listing of security groups.");
}
}
// step-4-end
// step-5
private void deleteSecurityGroupRule(SecurityGroupRule rule) {
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
out.println("Deleting Security Group Rule " + rule.getIpProtocol());
securityGroupApi.deleteRule(rule.getId());
} else {
out.println("No Security Group extension present; can't delete Rule.");
}
}
private void deleteSecurityGroup(SecurityGroup securityGroup) {
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
out.println("Deleting Security Group " + securityGroup.getName());
securityGroupApi.delete(securityGroup.getId());
} else {
out.println("No Security Group extension present; can't delete Security Group.");
}
}
// step-5-end
private void deleteSecurityGroups(String... groups) {
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
securityGroupApi.list().forEach(securityGroup -> {
if (Arrays.asList(groups).contains(securityGroup.getName())) {
deleteSecurityGroup(securityGroup);
}
});
} else {
out.println("No Security Group extension present; can't delete Security Groups.");
}
}
private void deleteSecurityGroupRules(String securityGroupName) {
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
for (SecurityGroup thisSecurityGroup : securityGroupApi.list()) {
if (thisSecurityGroup.getName().equals(securityGroupName)) {
out.println("Deleting Rules for Security Group " + securityGroupName);
Set<SecurityGroupRule> rules = thisSecurityGroup.getRules();
if (rules != null) {
rules.forEach(this::deleteSecurityGroupRule);
}
}
}
} else {
out.println("No Security Group extension present; skipping deleting of Rules.");
}
}
// step-2
final String ex_userdata = "#!/usr/bin/env bash\n" +
" curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh" +
" | bash -s -- \\\n" +
" -i faafo -i messaging -r api -r worker -r demo\n";
// step-2-end
// step-1
/**
* A helper function to create an instance
*
* @param name The name of the instance that is to be created
* @param options Keypairs, security groups etc...
* @return the id of the newly created instance.
*/
private String create_node(String name, CreateServerOptions... options) {
out.println("Creating Instance " + name);
ServerCreated serverCreated = serverApi.create(name, IMAGE_ID, FLAVOR_ID, options);
String id = serverCreated.getId();
ServerPredicates.awaitActive(serverApi).apply(id);
return id;
}
private String createAllInOneInstance() {
CreateServerOptions allInOneOptions = CreateServerOptions.Builder
.keyPairName(ex_keyname)
.securityGroupNames(ALL_IN_ONE_SECURITY_GROUP_NAME)
.userData(ex_userdata.getBytes());
return create_node(ALL_IN_ONE_INSTANCE_NAME, allInOneOptions);
}
// step-1-end
// step-6
private void listSecurityGroupsForServer(String serverId) {
if (novaApi.getServerWithSecurityGroupsApi(region).isPresent()) {
out.println("Listing Security Groups for Instance with id " + serverId);
ServerWithSecurityGroupsApi serverWithSecurityGroupsApi =
novaApi.getServerWithSecurityGroupsApi(region).get();
ServerWithSecurityGroups serverWithSecurityGroups =
serverWithSecurityGroupsApi.get(serverId);
Set<String> securityGroupNames = serverWithSecurityGroups.getSecurityGroupNames();
securityGroupNames.forEach(name -> out.println(" " + name));
} else {
out.println("No Server With Security Groups API found; can't list Security Groups for Instance.");
}
}
// step-6-end
private void deleteInstance(String instanceName) {
serverApi.listInDetail().concat().forEach(instance -> {
if (instance.getName().equals(instanceName)) {
out.println("Deleting Instance: " + instance.getName());
serverApi.delete(instance.getId());
ServerPredicates.awaitStatus(serverApi, Server.Status.DELETED, 600, 5).apply(instance.getId());
}
});
}
// step-7
private FloatingIP getFreeFloatingIp() {
FloatingIP unusedFloatingIP = null;
if (novaApi.getFloatingIPApi(region).isPresent()) {
out.println("Checking for unused Floating IPs...");
FloatingIPApi floatingIPApi = novaApi.getFloatingIPApi(region).get();
List<FloatingIP> freeIP = floatingIPApi.list().toList().stream().filter(
floatingIp -> floatingIp.getInstanceId() == null).collect(Collectors.toList());
if (freeIP.size() > 0) {
unusedFloatingIP = freeIP.get(0);
}
} else {
out.println("No Floating IP extension present; could not fetch Floating IP.");
}
return unusedFloatingIP;
}
// step-7-end
// step-8
private String getFirstFloatingIpPoolName() {
String floatingIpPoolName = null;
if (novaApi.getFloatingIPPoolApi(region).isPresent()) {
out.println("Getting Floating IP Pool.");
FloatingIPPoolApi poolApi = novaApi.getFloatingIPPoolApi(region).get();
if (poolApi.list().first().isPresent()) {
FloatingIPPool floatingIPPool = poolApi.list().first().get();
floatingIpPoolName = floatingIPPool.getName();
} else {
out.println("There is no Floating IP Pool");
}
} else {
out.println("No Floating IP Pool API present; could not fetch Pool.");
}
return floatingIpPoolName;
}
// step-8-end
// step-9
private FloatingIP allocateFloatingIpFromPool(String poolName) {
FloatingIP unusedFloatingIP = null;
if (novaApi.getFloatingIPApi(region).isPresent()) {
out.println("Allocating IP from Pool " + poolName);
FloatingIPApi floatingIPApi = novaApi.getFloatingIPApi(region).get();
unusedFloatingIP = floatingIPApi.allocateFromPool(poolName);
} else {
out.println("No Floating IP extension present; could not allocate IP from Pool.");
}
return unusedFloatingIP;
}
// step-9-end
// step-10
private void attachFloatingIpToInstance(FloatingIP unusedFloatingIP, String targetInstanceId) {
if (novaApi.getFloatingIPApi(region).isPresent()) {
out.println("Attaching new IP, please wait...");
FloatingIPApi floatingIPApi = novaApi.getFloatingIPApi(region).get();
floatingIPApi.addToServer(unusedFloatingIP.getIp(), targetInstanceId);
} else {
out.println("No Floating IP extension present; cannot attach IP to Instance.");
}
}
// step-10-end
private void attachFloatingIp(String allInOneInstanceId) {
FloatingIP freeFloatingIp = getFreeFloatingIp();
if (freeFloatingIp == null) {
String poolName = getFirstFloatingIpPoolName();
if (poolName != null) {
freeFloatingIp = allocateFloatingIpFromPool(poolName);
if (freeFloatingIp != null) {
attachFloatingIpToInstance(freeFloatingIp, allInOneInstanceId);
}
}
}
}
// step-11
private void createApiAndWorkerSecurityGroups() {
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
SecurityGroup workerGroup =
getSecurityGroup(WORKER_SECURITY_GROUP_NAME, securityGroupApi);
if (workerGroup == null) {
out.println("Creating Security Group " + WORKER_SECURITY_GROUP_NAME + "...");
workerGroup = securityGroupApi.createWithDescription(WORKER_SECURITY_GROUP_NAME,
"For services that run on a worker node.");
securityGroupApi.createRuleAllowingCidrBlock(
workerGroup.getId(), getIngress(22), "0.0.0.0/0");
}
SecurityGroup apiGroup =
getSecurityGroup(CONTROLLER_SECURITY_GROUP_NAME, securityGroupApi);
if (apiGroup == null) {
apiGroup = securityGroupApi.createWithDescription(CONTROLLER_SECURITY_GROUP_NAME,
"For services that run on a control node");
securityGroupApi.createRuleAllowingCidrBlock(
apiGroup.getId(), getIngress(80), "0.0.0.0/0");
securityGroupApi.createRuleAllowingCidrBlock(
apiGroup.getId(), getIngress(22), "0.0.0.0/0");
securityGroupApi.createRuleAllowingSecurityGroupId(
apiGroup.getId(), getIngress(5672), workerGroup.getId());
}
} else {
out.println("No Security Group extension present; skipping Security Group create.");
}
}
private String createApiInstance() {
String userData = "#!/usr/bin/env bash\n" +
"curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh" +
" | bash -s -- \\\n" +
" -i messaging -i faafo -r api\n";
String instanceName = "app-controller";
CreateServerOptions allInOneOptions = CreateServerOptions.Builder
.keyPairName(ex_keyname)
.securityGroupNames(CONTROLLER_SECURITY_GROUP_NAME)
.userData(userData.getBytes());
return create_node(instanceName, allInOneOptions);
}
// step-11-end
// step-12
private String createWorkerInstance(String apiAccessIP) {
String userData = "#!/usr/bin/env bash\n" +
"curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh" +
" | bash -s -- \\\n" +
" -i faafo -r worker -e 'http://%1$s' -m 'amqp://guest:guest@%1$s:5672/'";
userData = String.format(userData, apiAccessIP);
CreateServerOptions options = CreateServerOptions.Builder
.keyPairName(ex_keyname)
.securityGroupNames(WORKER_SECURITY_GROUP_NAME)
.userData(userData.getBytes());
return create_node("app-worker-1", options);
}
// step-12-end
private void createApiAndWorkerInstances() {
createApiAndWorkerSecurityGroups();
String apiInstanceId = createApiInstance();
attachFloatingIp(apiInstanceId);
String apiAccessIP = serverApi.get(apiInstanceId).getAccessIPv4();
out.println("Controller is deployed to http://" + apiAccessIP);
String workerInstanceId = createWorkerInstance(apiAccessIP);
attachFloatingIp(workerInstanceId);
// step-13
String workerAccessIP = serverApi.get(workerInstanceId).getAccessIPv4();
out.println("Worker is deployed to " + workerAccessIP);
// step-13-end
}
private void setupIntroduction() {
createAllInOneSecurityGroup();
String allInOneInstanceId = createAllInOneInstance();
listAllSecurityGroups();
listSecurityGroupsForServer(allInOneInstanceId);
attachFloatingIp(allInOneInstanceId);
deleteInstance(ALL_IN_ONE_INSTANCE_NAME);
deleteSecurityGroupRules(ALL_IN_ONE_SECURITY_GROUP_NAME);
deleteSecurityGroups(ALL_IN_ONE_SECURITY_GROUP_NAME);
createApiAndWorkerInstances();
}
@Override
public void close() throws IOException {
Closeables.close(novaApi, true);
}
public static void main(String[] args) throws IOException {
try (Scanner scanner = new Scanner(System.in)) {
System.out.println("Please enter your password: ");
String password = scanner.next();
try (Introduction gs = new Introduction(password)) {
gs.setupIntroduction();
}
}
}
}
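ServerPredicates.awaitActive and awaitStatus, used throughout this class, poll the compute API at a fixed interval until a condition holds or the attempts run out. Stripped of the jclouds specifics, the pattern is the following (names here are illustrative, not a jclouds API):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class PollUntil {
    // Re-check a condition at a fixed interval until it holds or attempts run out.
    static boolean poll(BooleanSupplier condition, int maxAttempts, long intervalSeconds) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                TimeUnit.SECONDS.sleep(intervalSeconds);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // the condition becomes true on the third check
        boolean ok = poll(() -> ++calls[0] >= 3, 5, 0);
        System.out.println(ok + " after " + calls[0] + " checks");
    }
}
```

In awaitStatus above, the condition is "the server has reached Status.DELETED", the attempt budget is derived from the 600-second timeout, and the interval is 5 seconds.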

@@ -1,300 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import com.google.common.collect.ImmutableSet;
import com.google.common.io.Closeables;
import com.google.inject.Module;
import org.jclouds.ContextBuilder;
import org.jclouds.logging.slf4j.config.SLF4JLoggingModule;
import org.jclouds.net.domain.IpProtocol;
import org.jclouds.openstack.nova.v2_0.NovaApi;
import org.jclouds.openstack.nova.v2_0.domain.FloatingIP;
import org.jclouds.openstack.nova.v2_0.domain.Ingress;
import org.jclouds.openstack.nova.v2_0.domain.SecurityGroup;
import org.jclouds.openstack.nova.v2_0.domain.ServerCreated;
import org.jclouds.openstack.nova.v2_0.extensions.FloatingIPApi;
import org.jclouds.openstack.nova.v2_0.extensions.SecurityGroupApi;
import org.jclouds.openstack.nova.v2_0.features.ServerApi;
import org.jclouds.openstack.nova.v2_0.options.CreateServerOptions;
import org.jclouds.openstack.nova.v2_0.predicates.ServerPredicates;
import java.io.Closeable;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.Scanner;
import java.util.stream.Collectors;
import static java.lang.System.out;
/**
* A class that shows the jclouds implementation for the scaling out chapter of the
* "Writing your first OpenStack application" book
* (http://developer.openstack.org/firstapp-libcloud/scaling_out.html)
*/
public class ScalingOut implements Closeable {
private final NovaApi novaApi;
private final String region;
private final ServerApi serverApi;
// change the following to fit your OpenStack installation
private static final String KEY_PAIR_NAME = "demokey";
private static final String PROVIDER = "openstack-nova";
private static final String OS_AUTH_URL = "http://controller:5000/v2.0";
// format for identity is tenantName:userName
private static final String IDENTITY = "your_project_name_or_id:your_auth_username";
private static final String IMAGE_ID = "2cccbea0-cea9-4f86-a3ed-065c652adda5";
private static final String FLAVOR_ID = "2";
public ScalingOut(final String password) {
Iterable<Module> modules = ImmutableSet.<Module>of(new SLF4JLoggingModule());
novaApi = ContextBuilder.newBuilder(PROVIDER)
.endpoint(OS_AUTH_URL)
.credentials(IDENTITY, password)
.modules(modules)
.buildApi(NovaApi.class);
region = novaApi.getConfiguredRegions().iterator().next();
serverApi = novaApi.getServerApi(region);
out.println("Running in region: " + region);
}
// step-1
private void deleteInstances() {
List&lt;String&gt; instances = Arrays.asList(
"all-in-one", "app-worker-1", "app-worker-2", "app-controller");
serverApi.listInDetail().concat().forEach(instance -> {
if (instances.contains(instance.getName())) {
out.println("Destroying Instance: " + instance.getName());
serverApi.delete(instance.getId());
}
});
}
private void deleteSecurityGroups() {
List&lt;String&gt; securityGroups = Arrays.asList(
"all-in-one", "control", "worker", "api", "services");
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
securityGroupApi.list().forEach(securityGroup -> {
if (securityGroups.contains(securityGroup.getName())) {
out.println("Deleting Security Group: " + securityGroup.getName());
securityGroupApi.delete(securityGroup.getId());
}
});
} else {
out.println("No security group extension present; skipping security group delete.");
}
}
// step-2
private Ingress getIngress(int port) {
return Ingress
.builder()
.ipProtocol(IpProtocol.TCP)
.fromPort(port)
.toPort(port)
.build();
}
private void createSecurityGroups() {
if (novaApi.getSecurityGroupApi(region).isPresent()) {
SecurityGroupApi securityGroupApi = novaApi.getSecurityGroupApi(region).get();
SecurityGroup apiGroup = securityGroupApi.createWithDescription("api",
"for API services only");
ImmutableSet.of(22, 80).forEach(port ->
securityGroupApi.createRuleAllowingCidrBlock(
apiGroup.getId(), getIngress(port), "0.0.0.0/0"));
SecurityGroup workerGroup = securityGroupApi.createWithDescription("worker",
"for services that run on a worker node");
securityGroupApi.createRuleAllowingCidrBlock(
workerGroup.getId(), getIngress(22), "0.0.0.0/0");
SecurityGroup controllerGroup = securityGroupApi.createWithDescription("control",
"for services that run on a control node");
ImmutableSet.of(22, 80).forEach(port ->
securityGroupApi.createRuleAllowingCidrBlock(
controllerGroup.getId(), getIngress(port), "0.0.0.0/0"));
securityGroupApi.createRuleAllowingSecurityGroupId(
controllerGroup.getId(), getIngress(5672), workerGroup.getId());
SecurityGroup servicesGroup = securityGroupApi.createWithDescription("services",
"for DB and AMQP services only");
securityGroupApi.createRuleAllowingCidrBlock(
servicesGroup.getId(), getIngress(22), "0.0.0.0/0");
securityGroupApi.createRuleAllowingSecurityGroupId(
servicesGroup.getId(), getIngress(3306), apiGroup.getId());
securityGroupApi.createRuleAllowingSecurityGroupId(
servicesGroup.getId(), getIngress(5672), workerGroup.getId());
securityGroupApi.createRuleAllowingSecurityGroupId(
servicesGroup.getId(), getIngress(5672), apiGroup.getId());
} else {
out.println("No security group extension present; skipping security group create.");
}
}
// step-3
private Optional<FloatingIP> getOrCreateFloatingIP() {
FloatingIP unusedFloatingIP = null;
if (novaApi.getFloatingIPApi(region).isPresent()) {
FloatingIPApi floatingIPApi = novaApi.getFloatingIPApi(region).get();
List<FloatingIP> freeIP = floatingIPApi.list().toList().stream()
.filter(floatingIP1 -> floatingIP1.getInstanceId() == null)
.collect(Collectors.toList());
unusedFloatingIP = freeIP.size() > 0 ? freeIP.get(0) : floatingIPApi.create();
} else {
out.println("No floating ip extension present; skipping floating ip creation.");
}
return Optional.ofNullable(unusedFloatingIP);
}
// step-4
/**
* A helper function to create an instance
*
* @param name The name of the instance that is to be created
* @param options Keypairs, security groups etc...
* @return the id of the newly created instance.
*/
private String createInstance(String name, CreateServerOptions... options) {
out.println("Creating server " + name);
ServerCreated serverCreated = serverApi.create(name, IMAGE_ID, FLAVOR_ID, options);
String id = serverCreated.getId();
ServerPredicates.awaitActive(serverApi).apply(id);
return id;
}
/**
* @return the id of the newly created instance.
*/
private String createAppServicesInstance() {
String userData = "#!/usr/bin/env bash\n" +
"curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \\\n" +
"-i database -i messaging\n";
CreateServerOptions options = CreateServerOptions.Builder
.keyPairName(KEY_PAIR_NAME)
.securityGroupNames("services")
.userData(userData.getBytes());
return createInstance("app-services", options);
}
// step-5
/**
* @return the id of the newly created instance.
*/
private String createApiInstance(String name, String servicesIp) {
String userData = String.format("#!/usr/bin/env bash\n" +
"curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \\\n" +
" -i faafo -r api -m 'amqp://guest:guest@%1$s:5672/' \\\n" +
" -d 'mysql+pymysql://faafo:password@%1$s:3306/faafo'", servicesIp);
CreateServerOptions options = CreateServerOptions.Builder
.keyPairName(KEY_PAIR_NAME)
.securityGroupNames("api")
.userData(userData.getBytes());
return createInstance(name, options);
}
/**
* @return the id's of the newly created instances.
*/
private String[] createApiInstances(String servicesIp) {
return new String[]{
createApiInstance("app-api-1", servicesIp),
createApiInstance("app-api-2", servicesIp)
};
}
// step-6
/**
* @return the id of the newly created instance.
*/
private String createWorkerInstance(String name, String apiIp, String servicesIp) {
String userData = String.format("#!/usr/bin/env bash\n" +
"curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \\\n" +
" -i faafo -r worker -e 'http://%s' -m 'amqp://guest:guest@%s:5672/'",
apiIp, servicesIp);
CreateServerOptions options = CreateServerOptions.Builder
.keyPairName(KEY_PAIR_NAME)
.securityGroupNames("worker")
.userData(userData.getBytes());
return createInstance(name, options);
}
private void createWorkerInstances(String apiIp, String servicesIp) {
createWorkerInstance("app-worker-1", apiIp, servicesIp);
createWorkerInstance("app-worker-2", apiIp, servicesIp);
createWorkerInstance("app-worker-3", apiIp, servicesIp);
}
// step-7
private String getPublicIp(String serverId) {
String publicIP = serverApi.get(serverId).getAccessIPv4();
if (publicIP == null) {
Optional<FloatingIP> optionalFloatingIP = getOrCreateFloatingIP();
if (optionalFloatingIP.isPresent()) {
publicIP = optionalFloatingIP.get().getIp();
novaApi.getFloatingIPApi(region).get().addToServer(publicIP, serverId);
}
}
return publicIP;
}
private String getPublicOrPrivateIP(String serverId) {
String result = serverApi.get(serverId).getAccessIPv4();
if (result == null) {
// then there must be a private one...
result = serverApi.get(serverId).getAddresses().values().iterator().next().getAddr();
}
return result;
}
private void setupFaafo() {
deleteInstances();
deleteSecurityGroups();
createSecurityGroups();
String serviceId = createAppServicesInstance();
String servicesIp = getPublicOrPrivateIP(serviceId);
String[] apiIds = createApiInstances(servicesIp);
String apiIp = getPublicIp(apiIds[0]);
createWorkerInstances(apiIp, servicesIp);
out.println("The Fractals app will be deployed to http://" + apiIp);
}
@Override
public void close() throws IOException {
Closeables.close(novaApi, true);
}
public static void main(String... args) throws IOException {
try (Scanner scanner = new Scanner(System.in)) {
System.out.println("Please enter your password: ");
String password = scanner.next();
try (ScalingOut gs = new ScalingOut(password)) {
gs.setupFaafo();
}
}
}
}


@@ -1,48 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>openstack.demo.app</groupId>
<artifactId>faafo_infrastructure</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<jclouds.version>1.9.2</jclouds.version>
</properties>
<dependencies>
<!-- the jclouds code -->
<dependency>
<groupId>org.apache.jclouds</groupId>
<artifactId>jclouds-all</artifactId>
<version>${jclouds.version}</version>
</dependency>
<!-- Some of the examples introduce the logging module -->
<dependency>
<groupId>org.apache.jclouds.driver</groupId>
<artifactId>jclouds-slf4j</artifactId>
<version>${jclouds.version}</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.0.13</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.3</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>


@@ -1,49 +0,0 @@
# step-1
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
auth_username = 'your_auth_username'
auth_password = 'your_auth_password'
auth_url = 'http://controller:5000'
project_name = 'your_project_name_or_id'
region_name = 'your_region_name'
provider = get_driver(Provider.OPENSTACK)
connection = provider(auth_username,
auth_password,
ex_force_auth_url=auth_url,
ex_force_auth_version='2.0_password',
ex_tenant_name=project_name,
ex_force_service_region=region_name)
# step-2
volume = connection.create_volume(1, 'test')
print(volume)
# step-3
volumes = connection.list_volumes()
print(volumes)
# step-4
db_group = connection.ex_create_security_group('database', 'for database service')
connection.ex_create_security_group_rule(db_group, 'TCP', 3306, 3306)
instance = connection.create_node(name='app-database',
image=image,
size=flavor,
ex_keyname=keypair_name,
ex_security_groups=[db_group])
# step-5
volume = connection.ex_get_volume('755ab026-b5f2-4f53-b34a-6d082fb36689')
connection.attach_volume(instance, volume, '/dev/vdb')
# step-6
connection.detach_volume(volume)
connection.destroy_volume(volume)
# step-7
snapshot_name = 'test_backup_1'
connection.create_volume_snapshot('test', name=snapshot_name)
# step-8


@@ -1,92 +0,0 @@
# step-1
from __future__ import print_function
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver
auth_username = 'your_auth_username'
auth_password = 'your_auth_password'
auth_url = 'http://controller:5000'
project_name = 'your_project_name_or_id'
region_name = 'your_region_name'
provider = get_driver(Provider.OPENSTACK_SWIFT)
swift = provider(auth_username,
auth_password,
ex_force_auth_url=auth_url,
ex_force_auth_version='2.0_password',
ex_tenant_name=project_name,
ex_force_service_region=region_name)
# step-2
container_name = 'fractals'
container = swift.create_container(container_name=container_name)
print(container)
# step-3
print(swift.list_containers())
# step-4
file_path = 'goat.jpg'
object_name = 'an amazing goat'
container = swift.get_container(container_name=container_name)
object = container.upload_object(file_path=file_path, object_name=object_name)
# step-5
objects = container.list_objects()
print(objects)
# step-6
object = swift.get_object(container_name, object_name)
print(object)
# step-7
import hashlib
with open('goat.jpg', 'rb') as f:
    print(hashlib.md5(f.read()).hexdigest())
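Reading the whole object into memory is fine for a small `goat.jpg`, but for larger objects a chunked digest is safer. A minimal sketch (the helper name is ours, not part of the guide):

```python
import hashlib

def md5_hexdigest(path, chunk_size=65536):
    """Compute an MD5 hex digest without reading the whole file into memory."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        # iter() with a sentinel yields fixed-size chunks until EOF (b'')
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()
```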
# step-8
swift.delete_object(object)
# step-9
objects = container.list_objects()
print(objects)
# step-10
container_name = 'fractals'
container = swift.get_container(container_name)
# step-11
import json
import requests
endpoint = 'http://IP_API_1'
params = { 'results_per_page': '-1' }
response = requests.get('%s/v1/fractal' % endpoint, params=params)
data = json.loads(response.text)
for fractal in data['objects']:
response = requests.get('%s/fractal/%s' % (endpoint, fractal['uuid']), stream=True)
container.upload_object_via_stream(response.iter_content(), object_name=fractal['uuid'])
for object in container.list_objects():
print(object)
# step-12
for object in container.list_objects():
container.delete_object(object)
swift.delete_container(container)
# step-13
file_path = 'goat.jpg'
object_name = 'backup_goat.jpg'
extra = {'meta_data': {'description': 'a funny goat', 'created': '2015-06-02'}}
with open(file_path, 'rb') as iterator:
object = swift.upload_object_via_stream(iterator=iterator,
container=container,
object_name=object_name,
extra=extra)
# step-14
swift.ex_multipart_upload_object(file_path, container, object_name,
chunk_size=33554432)
# step-15


@@ -1,158 +0,0 @@
# step-1
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
auth_username = 'your_auth_username'
auth_password = 'your_auth_password'
auth_url = 'http://controller:5000'
project_name = 'your_project_name_or_id'
region_name = 'your_region_name'
provider = get_driver(Provider.OPENSTACK)
conn = provider(auth_username,
auth_password,
ex_force_auth_url=auth_url,
ex_force_auth_version='2.0_password',
ex_tenant_name=project_name,
ex_force_service_region=region_name)
# step-2
images = conn.list_images()
for image in images:
print(image)
# step-3
flavors = conn.list_sizes()
for flavor in flavors:
print(flavor)
# step-4
image_id = '2cccbea0-cea9-4f86-a3ed-065c652adda5'
image = conn.get_image(image_id)
print(image)
# step-5
flavor_id = '2'
flavor = conn.ex_get_size(flavor_id)
print(flavor)
# step-6
instance_name = 'testing'
testing_instance = conn.create_node(name=instance_name, image=image, size=flavor)
print(testing_instance)
# step-7
instances = conn.list_nodes()
for instance in instances:
print(instance)
# step-8
conn.destroy_node(testing_instance)
# step-9
print('Checking for existing SSH key pair...')
keypair_name = 'demokey'
pub_key_file = '~/.ssh/id_rsa.pub'
keypair_exists = False
for keypair in conn.list_key_pairs():
if keypair.name == keypair_name:
keypair_exists = True
if keypair_exists:
print('Keypair ' + keypair_name + ' already exists. Skipping import.')
else:
print('adding keypair...')
conn.import_key_pair_from_file(keypair_name, pub_key_file)
for keypair in conn.list_key_pairs():
print(keypair)
# step-10
print('Checking for existing security group...')
security_group_name = 'all-in-one'
security_group_exists = False
for security_group in conn.ex_list_security_groups():
if security_group.name == security_group_name:
all_in_one_security_group = security_group
security_group_exists = True
if security_group_exists:
print('Security Group ' + all_in_one_security_group.name + ' already exists. Skipping creation.')
else:
all_in_one_security_group = conn.ex_create_security_group(security_group_name, 'network access for all-in-one application.')
conn.ex_create_security_group_rule(all_in_one_security_group, 'TCP', 80, 80)
conn.ex_create_security_group_rule(all_in_one_security_group, 'TCP', 22, 22)
for security_group in conn.ex_list_security_groups():
print(security_group)
# step-11
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
# step-12
print('Checking for existing instance...')
instance_name = 'all-in-one'
instance_exists = False
for instance in conn.list_nodes():
if instance.name == instance_name:
testing_instance = instance
instance_exists = True
if instance_exists:
print('Instance ' + testing_instance.name + ' already exists. Skipping creation.')
else:
testing_instance = conn.create_node(name=instance_name,
image=image,
size=flavor,
ex_keyname=keypair_name,
ex_userdata=userdata,
ex_security_groups=[all_in_one_security_group])
conn.wait_until_running([testing_instance])
for instance in conn.list_nodes():
print(instance)
# step-13
private_ip = None
if len(testing_instance.private_ips):
private_ip = testing_instance.private_ips[0]
print('Private IP found: {}'.format(private_ip))
# step-14
public_ip = None
if len(testing_instance.public_ips):
public_ip = testing_instance.public_ips[0]
print('Public IP found: {}'.format(public_ip))
# step-15
print('Checking for unused Floating IP...')
unused_floating_ip = None
for floating_ip in conn.ex_list_floating_ips():
if not floating_ip.node_id:
unused_floating_ip = floating_ip
break
if not unused_floating_ip and len(conn.ex_list_floating_ip_pools()):
pool = conn.ex_list_floating_ip_pools()[0]
print('Allocating new Floating IP from pool: {}'.format(pool))
unused_floating_ip = pool.create_floating_ip()
# step-16
if public_ip:
print('Instance ' + testing_instance.name + ' already has a public ip. Skipping attachment.')
elif unused_floating_ip:
conn.ex_attach_floating_ip_to_node(testing_instance, unused_floating_ip)
# step-17
actual_ip_address = None
if public_ip:
actual_ip_address = public_ip
elif unused_floating_ip:
actual_ip_address = unused_floating_ip.ip_address
elif private_ip:
actual_ip_address = private_ip
print('The Fractals app will be deployed to http://{}'.format(actual_ip_address))
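The public/floating/private fall-through at steps 13-17 is the part most worth getting right; extracted as a pure function it can be checked in isolation (the function name, and the idea of factoring it out, are ours rather than the guide's):

```python
def pick_ip(public_ip, floating_ip_address, private_ip):
    """Prefer the instance's own public IP, then an attached floating IP,
    then fall back to the private IP (reachable only inside the cloud)."""
    return public_ip or floating_ip_address or private_ip
```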


@@ -1,133 +0,0 @@
# step-1
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
instance_name = 'all-in-one'
testing_instance = conn.create_node(name=instance_name,
image=image,
size=flavor,
ex_keyname=keypair_name,
ex_userdata=userdata,
ex_security_groups=[all_in_one_security_group])
# step-2
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i messaging -i faafo -r api -r worker -r demo
'''
# step-3
all_in_one_security_group = conn.ex_create_security_group('all-in-one', 'network access for all-in-one application.')
conn.ex_create_security_group_rule(all_in_one_security_group, 'TCP', 80, 80)
conn.ex_create_security_group_rule(all_in_one_security_group, 'TCP', 22, 22)
# step-4
conn.ex_list_security_groups()
# step-5
conn.ex_delete_security_group_rule(rule)
conn.ex_delete_security_group(security_group)
# step-6
conn.ex_get_node_security_groups(testing_instance)
# step-7
unused_floating_ip = None
for floating_ip in conn.ex_list_floating_ips():
if not floating_ip.node_id:
unused_floating_ip = floating_ip
print("Found an unused Floating IP: %s" % floating_ip)
break
# step-8
pool = conn.ex_list_floating_ip_pools()[0]
# step-9
unused_floating_ip = pool.create_floating_ip()
# step-10
conn.ex_attach_floating_ip_to_node(instance, unused_floating_ip)
# step-11
worker_group = conn.ex_create_security_group('worker', 'for services that run on a worker node')
conn.ex_create_security_group_rule(worker_group, 'TCP', 22, 22)
controller_group = conn.ex_create_security_group('control', 'for services that run on a control node')
conn.ex_create_security_group_rule(controller_group, 'TCP', 22, 22)
conn.ex_create_security_group_rule(controller_group, 'TCP', 80, 80)
conn.ex_create_security_group_rule(controller_group, 'TCP', 5672, 5672, source_security_group=worker_group)
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i messaging -i faafo -r api
'''
instance_controller_1 = conn.create_node(name='app-controller',
image=image,
size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[controller_group])
conn.wait_until_running([instance_controller_1])
print('Checking for unused Floating IP...')
unused_floating_ip = None
for floating_ip in conn.ex_list_floating_ips():
if not floating_ip.node_id:
unused_floating_ip = floating_ip
break
if not unused_floating_ip:
pool = conn.ex_list_floating_ip_pools()[0]
print('Allocating new Floating IP from pool: {}'.format(pool))
unused_floating_ip = pool.create_floating_ip()
conn.ex_attach_floating_ip_to_node(instance_controller_1, unused_floating_ip)
print('Application will be deployed to http://%s' % unused_floating_ip.ip_address)
# step-12
instance_controller_1 = conn.ex_get_node_details(instance_controller_1.id)
if instance_controller_1.private_ips:
    ip_controller = instance_controller_1.private_ips[0]
else:
    ip_controller = instance_controller_1.public_ips[0]
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r worker -e 'http://%(ip_controller)s' -m 'amqp://guest:guest@%(ip_controller)s:5672/'
''' % {'ip_controller': ip_controller}
instance_worker_1 = conn.create_node(name='app-worker-1',
image=image,
size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[worker_group])
conn.wait_until_running([instance_worker_1])
print('Checking for unused Floating IP...')
unused_floating_ip = None
for floating_ip in conn.ex_list_floating_ips():
if not floating_ip.node_id:
unused_floating_ip = floating_ip
break
if not unused_floating_ip:
pool = conn.ex_list_floating_ip_pools()[0]
print('Allocating new Floating IP from pool: {}'.format(pool))
unused_floating_ip = pool.create_floating_ip()
conn.ex_attach_floating_ip_to_node(instance_worker_1, unused_floating_ip)
print('The worker will be available for SSH at %s' % unused_floating_ip.ip_address)
# step-13
ip_instance_worker_1 = instance_worker_1.private_ips[0]
print(ip_instance_worker_1)
# step-14


@@ -1,110 +0,0 @@
# step-1
for instance in conn.list_nodes():
if instance.name in ['all-in-one','app-worker-1', 'app-worker-2', 'app-controller']:
print('Destroying Instance: %s' % instance.name)
conn.destroy_node(instance)
for group in conn.ex_list_security_groups():
if group.name in ['control', 'worker', 'api', 'services']:
print('Deleting security group: %s' % group.name)
conn.ex_delete_security_group(group)
# step-2
api_group = conn.ex_create_security_group('api', 'for API services only')
conn.ex_create_security_group_rule(api_group, 'TCP', 80, 80)
conn.ex_create_security_group_rule(api_group, 'TCP', 22, 22)
worker_group = conn.ex_create_security_group('worker', 'for services that run on a worker node')
conn.ex_create_security_group_rule(worker_group, 'TCP', 22, 22)
controller_group = conn.ex_create_security_group('control', 'for services that run on a control node')
conn.ex_create_security_group_rule(controller_group, 'TCP', 22, 22)
conn.ex_create_security_group_rule(controller_group, 'TCP', 80, 80)
conn.ex_create_security_group_rule(controller_group, 'TCP', 5672, 5672, source_security_group=worker_group)
services_group = conn.ex_create_security_group('services', 'for DB and AMQP services only')
conn.ex_create_security_group_rule(services_group, 'TCP', 22, 22)
conn.ex_create_security_group_rule(services_group, 'TCP', 3306, 3306, source_security_group=api_group)
conn.ex_create_security_group_rule(services_group, 'TCP', 5672, 5672, source_security_group=worker_group)
conn.ex_create_security_group_rule(services_group, 'TCP', 5672, 5672, source_security_group=api_group)
# step-3
def get_floating_ip(conn):
'''A helper function to re-use available Floating IPs'''
unused_floating_ip = None
for floating_ip in conn.ex_list_floating_ips():
if not floating_ip.node_id:
unused_floating_ip = floating_ip
break
if not unused_floating_ip:
pool = conn.ex_list_floating_ip_pools()[0]
unused_floating_ip = pool.create_floating_ip()
return unused_floating_ip
# step-4
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i database -i messaging
'''
instance_services = conn.create_node(name='app-services',
image=image,
size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[services_group])
instance_services = conn.wait_until_running([instance_services])[0][0]
services_ip = instance_services.private_ips[0]
# step-5
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r api -m 'amqp://guest:guest@%(services_ip)s:5672/' \
-d 'mysql+pymysql://faafo:password@%(services_ip)s:3306/faafo'
''' % { 'services_ip': services_ip }
instance_api_1 = conn.create_node(name='app-api-1',
image=image,
size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[api_group])
instance_api_2 = conn.create_node(name='app-api-2',
image=image,
size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[api_group])
instance_api_1 = conn.wait_until_running([instance_api_1])[0][0]
api_1_ip = instance_api_1.private_ips[0]
instance_api_2 = conn.wait_until_running([instance_api_2])[0][0]
api_2_ip = instance_api_2.private_ips[0]
for instance in [instance_api_1, instance_api_2]:
floating_ip = get_floating_ip(conn)
conn.ex_attach_floating_ip_to_node(instance, floating_ip)
print('allocated %(ip)s to %(host)s' % {'ip': floating_ip.ip_address, 'host': instance.name})
# step-6
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r worker -e 'http://%(api_1_ip)s' -m 'amqp://guest:guest@%(services_ip)s:5672/'
''' % {'api_1_ip': api_1_ip, 'services_ip': services_ip}
instance_worker_1 = conn.create_node(name='app-worker-1',
image=image, size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[worker_group])
instance_worker_2 = conn.create_node(name='app-worker-2',
image=image, size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[worker_group])
instance_worker_3 = conn.create_node(name='app-worker-3',
image=image, size=flavor,
ex_keyname='demokey',
ex_userdata=userdata,
ex_security_groups=[worker_group])
# step-7


@@ -1,188 +0,0 @@
# step-1
import base64
from os.path import expanduser
from openstack import connection
from openstack import exceptions
auth_username = 'your_auth_username'
auth_password = 'your_auth_password'
auth_url = 'http://controller:5000/v2.0'
project_name = 'your_project_name_or_id'
region_name = 'your_region_name'
conn = connection.Connection(auth_url=auth_url,
project_name=project_name,
username=auth_username,
password=auth_password,
region=region_name)
# step-2
images = conn.image.images()
for image in images:
print(image)
# step-3
flavors = conn.compute.flavors()
for flavor in flavors:
print(flavor)
# step-4
image_id = 'cb6b7936-d2c5-4901-8678-c88b3a6ed84c'
image = conn.compute.get_image(image_id)
print(image)
# step-5
flavor_id = '2'
flavor = conn.compute.get_flavor(flavor_id)
print(flavor)
# step-6
instance_name = 'testing'
server_args = {
    'name': instance_name,
    'imageRef': image,
    'flavorRef': flavor
}
testing_instance = conn.compute.create_server(**server_args)
print(testing_instance)
# step-7
instances = conn.compute.servers()
for instance in instances:
print(instance)
# step-8
conn.compute.delete_server(testing_instance)
# step-9
print('Checking for existing SSH key pair...')
keypair_name = 'demokey'
keypair_exists = False
for keypair in conn.compute.keypairs():
if keypair.name == keypair_name:
keypair_exists = True
if keypair_exists:
print('Keypair ' + keypair_name + ' already exists. Skipping import.')
else:
print('adding keypair...')
    with open(expanduser('~/.ssh/id_rsa.pub')) as pub_key:
        pub_key_file = pub_key.read()
keypair_args = {
"name": keypair_name,
"public_key": pub_key_file
}
conn.compute.create_keypair(**keypair_args)
for keypair in conn.compute.keypairs():
print(keypair)
# step-10
print('Checking for existing security group...')
security_group_name = 'all-in-one'
security_group_exists = False
for security_group in conn.network.security_groups():
if security_group.name == security_group_name:
all_in_one_security_group = security_group
security_group_exists = True
if security_group_exists:
print('Security Group ' + all_in_one_security_group.name + ' already exists. Skipping creation.')
else:
security_group_args = {
'name' : security_group_name,
'description': 'network access for all-in-one application.'
}
all_in_one_security_group = conn.network.create_security_group(**security_group_args)
security_rule_args = {
'security_group_id': all_in_one_security_group,
'direction': 'ingress',
'protocol': 'tcp',
'port_range_min': '80',
'port_range_max': '80'
}
conn.network.create_security_group_rule(**security_rule_args)
security_rule_args['port_range_min'] = '22'
security_rule_args['port_range_max'] = '22'
conn.network.create_security_group_rule(**security_rule_args)
for security_group in conn.network.security_groups():
print(security_group)
# step-11
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
# b64encode requires bytes under Python 3
userdata_b64str = base64.b64encode(userdata.encode('utf-8')).decode('utf-8')
# step-12
print('Checking for existing instance...')
instance_name = 'all-in-one'
instance_exists = False
for instance in conn.compute.servers():
if instance.name == instance_name:
testing_instance = instance
instance_exists = True
if instance_exists:
print('Instance ' + testing_instance.name + ' already exists. Skipping creation.')
else:
testing_instance_args = {
'name': instance_name,
'imageRef': image,
'flavorRef': flavor,
'key_name': keypair_name,
'user_data': userdata_b64str,
'security_groups': [{'name': all_in_one_security_group.name}]
}
testing_instance = conn.compute.create_server(**testing_instance_args)
conn.compute.wait_for_server(testing_instance)
for instance in conn.compute.servers():
print(instance)
# step-13
print('Checking if Floating IP is already assigned to testing_instance...')
testing_instance_floating_ip = None
for values in testing_instance.addresses.values():
for address in values:
if address['OS-EXT-IPS:type'] == 'floating':
testing_instance_floating_ip = conn.network.find_ip(address['addr'])
unused_floating_ip = None
if not testing_instance_floating_ip:
print('Checking for unused Floating IP...')
for floating_ip in conn.network.ips():
if not floating_ip.fixed_ip_address:
unused_floating_ip = floating_ip
break
if not testing_instance_floating_ip and not unused_floating_ip:
print('No free unused Floating IPs. Allocating new Floating IP...')
public_network_id = conn.network.find_network('public').id
try:
unused_floating_ip = conn.network.create_ip(floating_network_id=public_network_id)
unused_floating_ip = conn.network.get_ip(unused_floating_ip)
print(unused_floating_ip)
except exceptions.HttpException as e:
print(e)
# step-14
if testing_instance_floating_ip:
print('Instance ' + testing_instance.name + ' already has a public ip. Skipping attachment.')
else:
for port in conn.network.ports():
if port.device_id == testing_instance.id:
testing_instance_port = port
testing_instance_floating_ip = unused_floating_ip
conn.network.add_ip_to_port(testing_instance_port, testing_instance_floating_ip)
# step-15
print('The Fractals app will be deployed to http://%s' % testing_instance_floating_ip.floating_ip_address)
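One portability note on the user_data handling above: `base64.b64encode` operates on bytes under Python 3, so an encoding round-trip that works on both Python 2 and 3 looks like this (using the same installer call the guide passes as userdata):

```python
import base64

userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
    -i faafo -i messaging -r api -r worker -r demo
'''
# b64encode needs bytes; decode back to str for the create_server payload
userdata_b64str = base64.b64encode(userdata.encode('utf-8')).decode('utf-8')
```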


@@ -1,228 +0,0 @@
# step-1
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
# b64encode requires bytes under Python 3
userdata_b64str = base64.b64encode(userdata.encode('utf-8')).decode('utf-8')
instance_name = 'all-in-one'
testing_instance_args = {
'name': instance_name,
'imageRef': image,
'flavorRef': flavor,
'key_name': keypair_name,
'user_data': userdata_b64str,
'security_groups': [{'name': all_in_one_security_group.name}]
}
testing_instance = conn.compute.create_server(**testing_instance_args)
# step-2
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
# b64encode requires bytes under Python 3
userdata_b64str = base64.b64encode(userdata.encode('utf-8')).decode('utf-8')
# step-3
security_group_args = {
'name' : 'all-in-one',
'description': 'network access for all-in-one application.'
}
all_in_one_security_group = conn.network.create_security_group(**security_group_args)
# HTTP access
security_rule_args = {
'security_group_id': all_in_one_security_group,
'direction': 'ingress',
'protocol': 'tcp',
'port_range_min': '80',
'port_range_max': '80'
}
conn.network.create_security_group_rule(**security_rule_args)
# SSH access
security_rule_args['port_range_min'] = '22'
security_rule_args['port_range_max'] = '22'
conn.network.create_security_group_rule(**security_rule_args)
# step-4
for security_group in conn.network.security_groups():
print(security_group)
# step-5
conn.network.delete_security_group_rule(rule)
conn.network.delete_security_group(security_group)
# step-6
testing_instance['security_groups']
# step-7
unused_floating_ip = None
for floating_ip in conn.network.ips():
if not floating_ip.fixed_ip_address:
unused_floating_ip = floating_ip
print("Found an unused Floating IP: %s" % floating_ip)
break
# step-8
public_network_id = conn.network.find_network('public').id
# step-9
unused_floating_ip = conn.network.create_ip(floating_network_id=public_network_id)
unused_floating_ip = conn.network.get_ip(unused_floating_ip)
# step-10
for port in conn.network.ports():
if port.device_id == testing_instance.id:
testing_instance_port = port
break
testing_instance_floating_ip = unused_floating_ip
conn.network.add_ip_to_port(testing_instance_port, testing_instance_floating_ip)
# step-11
security_group_args = {
'name' : 'worker',
'description': 'for services that run on a worker node'
}
worker_group = conn.network.create_security_group(**security_group_args)
security_rule_args = {
'security_group_id': worker_group,
'direction': 'ingress',
'protocol': 'tcp',
'port_range_min': '22',
'port_range_max': '22'
}
conn.network.create_security_group_rule(**security_rule_args)
security_group_args = {
'name' : 'control',
'description': 'for services that run on a control node'
}
controller_group = conn.network.create_security_group(**security_group_args)
# Switch to controller_group and re-add the SSH access rule
security_rule_args['security_group_id'] = controller_group.id
conn.network.create_security_group_rule(**security_rule_args)
# Add HTTP access rule
security_rule_args['port_range_min'] = '80'
security_rule_args['port_range_max'] = '80'
conn.network.create_security_group_rule(**security_rule_args)
# Add RabbitMQ access rule for all instances with
# 'worker' security group
security_rule_args['port_range_min'] = '5672'
security_rule_args['port_range_max'] = '5672'
security_rule_args['remote_group_id'] = worker_group
conn.network.create_security_group_rule(**security_rule_args)
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i messaging -i faafo -r api
'''
userdata_b64str = base64.b64encode(userdata.encode('utf-8')).decode('utf-8')
instance_controller_1_args = {
'name': 'app-controller',
'imageRef': image,
'flavorRef': flavor,
'key_name': 'demokey',
'user_data': userdata_b64str,
'security_groups': [{'name': controller_group.name}]
}
instance_controller_1 = conn.compute.create_server(**instance_controller_1_args)
conn.compute.wait_for_server(instance_controller_1)
print('Checking for unused Floating IP...')
unused_floating_ip = None
for floating_ip in conn.network.ips():
if not floating_ip.fixed_ip_address:
unused_floating_ip = floating_ip
print("Found an unused Floating IP: %s" % floating_ip)
break
if not unused_floating_ip:
print('No free unused Floating IPs. Allocating new Floating IP...')
public_network_id = conn.network.find_network('public').id
unused_floating_ip = conn.network.create_ip(floating_network_id=public_network_id)
unused_floating_ip = conn.network.get_ip(unused_floating_ip)
for port in conn.network.ports():
if port.device_id == instance_controller_1.id:
controller_instance_port = port
break
controller_instance_floating_ip = unused_floating_ip
conn.network.add_ip_to_port(controller_instance_port, controller_instance_floating_ip)
# Retrieve all information about 'instance_controller_1'
instance_controller_1 = conn.compute.get_server(instance_controller_1)
print('Application will be deployed to http://%s' % controller_instance_floating_ip.floating_ip_address)
# step-12
for values in instance_controller_1.addresses.values():
for address in values:
if address['OS-EXT-IPS:type'] == 'fixed':
ip_controller = address['addr']
break
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r worker -e 'http://%(ip_controller)s' -m 'amqp://guest:guest@%(ip_controller)s:5672/'
''' % {'ip_controller': ip_controller}
userdata_b64str = base64.b64encode(userdata.encode('utf-8')).decode('utf-8')
instance_worker_1_args = {
'name': 'app-worker-1',
'imageRef': image,
'flavorRef': flavor,
'key_name': 'demokey',
'user_data': userdata_b64str,
'security_groups': [{'name': worker_group.name}]
}
instance_worker_1 = conn.compute.create_server(**instance_worker_1_args)
conn.compute.wait_for_server(instance_worker_1)
print('Checking for unused Floating IP...')
unused_floating_ip = None
for floating_ip in conn.network.ips():
if not floating_ip.fixed_ip_address:
unused_floating_ip = floating_ip
print("Found an unused Floating IP: %s" % floating_ip)
break
if not unused_floating_ip:
print('No free unused Floating IPs. Allocating new Floating IP...')
public_network_id = conn.network.find_network('public').id
unused_floating_ip = conn.network.create_ip(floating_network_id=public_network_id)
unused_floating_ip = conn.network.get_ip(unused_floating_ip)
for port in conn.network.ports():
if port.device_id == instance_worker_1.id:
worker_instance_port = port
break
worker_instance_floating_ip = unused_floating_ip
conn.network.add_ip_to_port(worker_instance_port, worker_instance_floating_ip)
# Retrieve all information about 'instance_worker_1'
instance_worker_1 = conn.compute.get_server(instance_worker_1)
print('The worker will be available for SSH at %s' % worker_instance_floating_ip.floating_ip_address)
# step-13
for values in instance_worker_1.addresses.values():
for address in values:
if address['OS-EXT-IPS:type'] == 'floating':
ip_instance_worker_1 = address['addr']
break
print(ip_instance_worker_1)
# step-14

@ -1,182 +0,0 @@
// step-1
auth_username = 'your_auth_username';
auth_password = 'your_auth_password';
auth_url = 'http://controller:5000';
project_name = 'your_project_name_or_id';
region_name = 'your_region_name';
var conn = require('pkgcloud').compute.createClient({
provider: 'openstack',
username: auth_username,
password: auth_password,
authUrl:auth_url,
region: region_name
});
// step-2
conn.getImages(function(err, images) {
for (i =0; i<images.length; i++) {
console.log("id: " + images[i].id);
console.log("name: " + images[i].name);
console.log("created: " + images[i].created);
console.log("updated: " + images[i].updated);
console.log("status: " + images[i].status + "\n");
}});
// step-3
conn.getFlavors(function(err, flavors) {
for (i=0; i<flavors.length; i++) {
console.log("id: " + flavors[i].id);
console.log("name: " + flavors[i].name);
console.log("ram: " + flavors[i].ram);
console.log("disk: " + flavors[i].disk);
console.log("vcpus: " + flavors[i].vcpus + "\n");
}});
// step-4
image_id = '2cccbea0-cea9-4f86-a3ed-065c652adda5';
conn.getImage(image_id, function(err, image) {
console.log("id: " + image.id);
console.log("name: " + image.name);
console.log("created: " + image.created);
console.log("updated: " + image.updated);
console.log("status: " + image.status + "\n");
});
// step-5
flavor_id = 'cba9ea52-8e90-468b-b8c2-777a94d81ed3';
conn.getFlavor(flavor_id, function(err, flavor) {
console.log("id: " + flavor.id);
console.log("name: " + flavor.name);
console.log("ram: " + flavor.ram);
console.log("disk: " + flavor.disk);
console.log("vcpus: " + flavor.vcpus + "\n");
});
// step-6
instance_name = 'testing';
conn.createServer({
name: instance_name,
flavor: flavor_id,
image: image_id
}, function(err, server) {
console.log(server.id)
});
// step-7
conn.getServers(console.log);
// TODO - make a decision about printing this nicely or not
// step-8
test_instance_id = '0d7968dc-4bf4-4e01-b822-43c9c1080d77';
conn.destroyServer(test_instance_id, console.log);
// TODO - make a decision about printing this nicely or not
// step-9
console.log('Checking for existing SSH key pair...');
keypair_name = 'demokey';
pub_key_file = '/home/user/.ssh/id_rsa.pub';
pub_key_string = '';
keypair_exists = false;
conn.listKeys(function (err, keys) {
    for (i = 0; i < keys.length; i++) {
        if (keys[i].keypair.name == keypair_name) {
            keypair_exists = true;
        }}
    // Decide inside the callback: listKeys is asynchronous.
    if (keypair_exists) {
        console.log('Keypair already exists. Skipping import.');
    } else {
        console.log('Adding keypair...');
        fs = require('fs');
        fs.readFile(pub_key_file, 'utf8', function (err, data) {
            conn.addKey({name: keypair_name, public_key: data}, console.log);
        });
    }
});
conn.listKeys(function (err, keys) {
for (i=0; i<keys.length; i++){
console.log(keys[i].keypair.name)
console.log(keys[i].keypair.fingerprint)
}});
// step-10
security_group_name = 'all-in-one';
security_group_exists = false;
all_in_one_security_group = false;
conn.listGroups(function (err, groups) {
    for (i = 0; i < groups.length; i++) {
        if (groups[i].name == security_group_name) {
            security_group_exists = true;
        }}
    // Decide inside the callback: listGroups is asynchronous.
    if (security_group_exists) {
        console.log('Security Group already exists. Skipping creation.');
    } else {
        conn.addGroup({ name: security_group_name,
                        description: 'network access for all-in-one application.'
        }, function (err, group) {
            all_in_one_security_group = group.id;
            conn.addRule({ groupId: group.id,
                           ipProtocol: 'TCP',
                           fromPort: 80,
                           toPort: 80}, console.log);
            conn.addRule({ groupId: group.id,
                           ipProtocol: 'TCP',
                           fromPort: 22,
                           toPort: 22}, console.log);
        });
    }
});
// step-11
userdata = "#!/usr/bin/env bash\n" +
"curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh" +
" | bash -s -- -i faafo -i messaging -r api -r worker -r demo";
userdata = Buffer.from(userdata).toString('base64')
// step-12
instance_name = 'all-in-one'
conn.createServer({ name: instance_name,
image: image_id,
flavor: flavor_id,
keyname: keypair_name,
cloudConfig: userdata,
securityGroups: all_in_one_security_group},
function(err, server) {
server.setWait({ status: server.STATUS.running }, 5000, console.log)
});
// step-13
console.log('Checking for unused Floating IP...')
var unused_floating_ip = null;
conn.getFloatingIps(function (err, ips) {
    for (i = 0; i < ips.length; i++) {
        // An address without a node_id is not attached to an instance.
        if (!ips[i].node_id) {
            unused_floating_ip = ips[i];
            break;
        }}
    // Allocate a new address only after the listing completes.
    if (!unused_floating_ip) {
        conn.allocateNewFloatingIp(function (err, ip) {
            unused_floating_ip = ip;
        });
    }
    console.log(unused_floating_ip);
});
// step-14
conn.addFloatingIp(testing_instance, unused_floating_ip, console.log)
// step-15
console.log('The Fractals app will be deployed to http://' + unused_floating_ip.ip)

@ -1,37 +0,0 @@
# step-1
import shade
conn = shade.openstack_cloud(cloud='myfavoriteopenstack')
# step-2
volume = conn.create_volume(size=1, display_name='test')
# step-3
volumes = conn.list_volumes()
for vol in volumes:
print(vol)
# step-4
db_group = conn.create_security_group('database', 'for database service')
conn.create_security_group_rule(db_group['name'], 22, 22, 'TCP')
conn.create_security_group_rule(db_group['name'], 3306, 3306, 'TCP')
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i database -i messaging
'''
instance = conn.create_server(wait=True, auto_ip=False,
name='app-database',
image=image_id,
flavor=flavor_id,
key_name='demokey',
security_groups=[db_group['name']],
userdata=userdata)
# step-5
conn.attach_volume(instance, volume, '/dev/vdb')
# step-6
conn.detach_volume(instance, volume)
conn.delete_volume(volume['id'])

@ -1,8 +0,0 @@
clouds:
myfavoriteopenstack:
auth:
auth_url: https://controller:5000/
username: $YOUR_USERNAME
password: $YOUR_PASSWORD
project_name: $YOUR_PROJECT
region_name: $REGION

@ -1,74 +0,0 @@
# step-1
from __future__ import print_function
import hashlib
import shade
conn = shade.openstack_cloud(cloud='myfavoriteopenstack')
# step-2
container_name = 'fractals'
container = conn.create_container(container_name)
print(container)
# step-3
print(conn.list_containers())
# step-4
file_path = 'goat.jpg'
object_name = 'an amazing goat'
container = conn.get_container(container_name)
object = conn.create_object(container=container_name, name=object_name, filename=file_path)
# step-5
print(conn.list_objects(container_name))
# step-6
object = conn.get_object(container_name, object_name)
print(object)
# step-7
print(hashlib.md5(open('goat.jpg', 'rb').read()).hexdigest())
# step-8
conn.delete_object(container_name, object_name)
# step-9
print(conn.list_objects(container_name))
# step-10
container_name = 'fractals'
print(conn.get_container(container_name))
# step-11
import base64
import json
import requests
endpoint = 'http://IP_API_1'
params = { 'results_per_page': '-1' }
response = requests.get('%s/v1/fractal' % endpoint, params=params)
data = json.loads(response.text)
for fractal in data['objects']:
r = requests.get('%s/fractal/%s' % (endpoint, fractal['uuid']), stream=True)
with open(fractal['uuid'], 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
    conn.create_object(container=container_name, name=fractal['uuid'],
                       filename=fractal['uuid'])
for object in conn.list_objects(container_name):
print(object)
# step-12
for object in conn.list_objects(container_name):
conn.delete_object(container_name, object['name'])
conn.delete_container(container_name)
# step-13
metadata = {'foo': 'bar'}
conn.create_object(container=container_name, name=fractal['uuid'],
                   filename=fractal['uuid'], metadata=metadata
                   )
# step-14

@ -1,94 +0,0 @@
#step-1
import shade
shade.simple_logging(debug=True)
conn = shade.openstack_cloud(cloud='myfavoriteopenstack')
#step-2
images = conn.list_images()
for image in images:
print(image)
#step-3
flavors = conn.list_flavors()
for flavor in flavors:
print(flavor)
#step-4
image_id = 'c55094e9-699c-4da9-95b4-2e2e75f4c66e'
image = conn.get_image(image_id)
print(image)
#step-5
flavor_id = '100'
flavor = conn.get_flavor(flavor_id)
print(flavor)
#step-6
instance_name = 'testing'
testing_instance = conn.create_server(wait=True, auto_ip=True,
name=instance_name,
image=image_id,
flavor=flavor_id)
print(testing_instance)
#step-7
instances = conn.list_servers()
for instance in instances:
print(instance)
#step-8
conn.delete_server(name_or_id=instance_name)
#step-9
print('Checking for existing SSH keypair...')
keypair_name = 'demokey'
pub_key_file = '/home/username/.ssh/id_rsa.pub'
if conn.search_keypairs(keypair_name):
print('Keypair already exists. Skipping import.')
else:
print('Adding keypair...')
conn.create_keypair(keypair_name, open(pub_key_file, 'r').read().strip())
for keypair in conn.list_keypairs():
print(keypair)
#step-10
print('Checking for existing security groups...')
sec_group_name = 'all-in-one'
if conn.search_security_groups(sec_group_name):
print('Security group already exists. Skipping creation.')
else:
print('Creating security group.')
conn.create_security_group(sec_group_name, 'network access for all-in-one application.')
conn.create_security_group_rule(sec_group_name, 80, 80, 'TCP')
conn.create_security_group_rule(sec_group_name, 22, 22, 'TCP')
conn.search_security_groups(sec_group_name)
#step-11
ex_userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
#step-12
instance_name = 'all-in-one'
testing_instance = conn.create_server(wait=True, auto_ip=False,
name=instance_name,
image=image_id,
flavor=flavor_id,
key_name=keypair_name,
security_groups=[sec_group_name],
userdata=ex_userdata)
#step-13
f_ip = conn.available_floating_ip()
#step-14
conn.add_ip_list(testing_instance, [f_ip['floating_ip_address']])
#step-15
print('The Fractals app will be deployed to http://%s' % f_ip['floating_ip_address'])

@ -1,116 +0,0 @@
# step-1
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
instance_name = 'all-in-one'
testing_instance = conn.create_server(wait=True, auto_ip=False,
name=instance_name,
image=image_id,
flavor=flavor_id,
key_name=keypair_name,
security_groups=[sec_group_name],
userdata=userdata)
# step-2
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -i messaging -r api -r worker -r demo
'''
# step-3
sec_group_name = 'all-in-one'
conn.create_security_group(sec_group_name, 'network access for all-in-one application.')
conn.create_security_group_rule(sec_group_name, 80, 80, 'TCP')
conn.create_security_group_rule(sec_group_name, 22, 22, 'TCP')
# step-4
sec_groups = conn.list_security_groups()
for sec_group in sec_groups:
print(sec_group)
# step-5
conn.delete_security_group_rule(rule_id)
conn.delete_security_group(sec_group_name)
# step-6
conn.get_openstack_vars(testing_instance)['security_groups']
# step-7
unused_floating_ip = conn.available_floating_ip()
# step-8
# step-9
# step-10
conn.add_ip_list(testing_instance, [unused_floating_ip['floating_ip_address']])
# step-11
worker_group_name = 'worker'
if conn.search_security_groups(worker_group_name):
print('Security group \'%s\' already exists. Skipping creation.' % worker_group_name)
else:
worker_group = conn.create_security_group(worker_group_name, 'for services that run on a worker node')
conn.create_security_group_rule(worker_group['name'], 22, 22, 'TCP')
controller_group_name = 'control'
if conn.search_security_groups(controller_group_name):
print('Security group \'%s\' already exists. Skipping creation.' % controller_group_name)
else:
controller_group = conn.create_security_group(controller_group_name, 'for services that run on a control node')
conn.create_security_group_rule(controller_group['name'], 22, 22, 'TCP')
conn.create_security_group_rule(controller_group['name'], 80, 80, 'TCP')
conn.create_security_group_rule(controller_group['name'], 5672, 5672, 'TCP', remote_group_id=worker_group['id'])
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i messaging -i faafo -r api
'''
instance_controller_1 = conn.create_server(wait=True, auto_ip=False,
name='app-controller',
image=image_id,
flavor=flavor_id,
key_name=keypair_name,
security_groups=[controller_group_name],
userdata=userdata)
unused_floating_ip = conn.available_floating_ip()
conn.add_ip_list(instance_controller_1, [unused_floating_ip['floating_ip_address']])
print('Application will be deployed to http://%s' % unused_floating_ip['floating_ip_address'])
# step-12
instance_controller_1 = conn.get_server(instance_controller_1['id'])
if conn.get_server_public_ip(instance_controller_1):
ip_controller = conn.get_server_public_ip(instance_controller_1)
else:
ip_controller = conn.get_server_private_ip(instance_controller_1)
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r worker -e 'http://%(ip_controller)s' -m 'amqp://guest:guest@%(ip_controller)s:5672/'
''' % {'ip_controller': ip_controller}
instance_worker_1 = conn.create_server(wait=True, auto_ip=False,
name='app-worker-1',
image=image_id,
flavor=flavor_id,
key_name=keypair_name,
security_groups=[worker_group_name],
userdata=userdata)
unused_floating_ip = conn.available_floating_ip()
conn.add_ip_list(instance_worker_1, [unused_floating_ip['floating_ip_address']])
print('The worker will be available for SSH at %s' % unused_floating_ip['floating_ip_address'])
# step-13
instance_worker_1 = conn.get_server(instance_worker_1['name'])
ip_instance_worker_1 = conn.get_server_public_ip(instance_worker_1)
print(ip_instance_worker_1)

@ -1,105 +0,0 @@
# step-1
for instance in conn.list_servers():
if instance.name in ['all-in-one','app-worker-1', 'app-worker-2', 'app-controller']:
print('Destroying Instance: %s' % instance.name)
conn.delete_server(instance.id, wait=True)
for group in conn.list_security_groups():
if group['name'] in ['control', 'worker', 'api', 'services']:
print('Deleting security group: %s' % group['name'])
conn.delete_security_group(group['name'])
# step-2
api_group = conn.create_security_group('api', 'for API services only')
conn.create_security_group_rule(api_group['name'], 80, 80, 'TCP')
conn.create_security_group_rule(api_group['name'], 22, 22, 'TCP')
worker_group = conn.create_security_group('worker', 'for services that run on a worker node')
conn.create_security_group_rule(worker_group['name'], 22, 22, 'TCP')
services_group = conn.create_security_group('services', 'for DB and AMQP services only')
conn.create_security_group_rule(services_group['name'], 22, 22, 'TCP')
conn.create_security_group_rule(services_group['name'], 3306, 3306, 'TCP', remote_group_id=api_group['id'])
conn.create_security_group_rule(services_group['name'], 5672, 5672, 'TCP', remote_group_id=worker_group['id'])
conn.create_security_group_rule(services_group['name'], 5672, 5672, 'TCP', remote_group_id=api_group['id'])
# step-3
def get_floating_ip(conn):
'''A helper function to re-use available Floating IPs'''
return conn.available_floating_ip()
# step-4
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i database -i messaging
'''
instance_services = conn.create_server(wait=True, auto_ip=False,
name='app-services',
image=image_id,
flavor=flavor_id,
key_name='demokey',
security_groups=[services_group['name']],
userdata=userdata)
services_ip = conn.get_server_private_ip(instance_services)
# step-5
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r api -m 'amqp://guest:guest@%(services_ip)s:5672/' \
-d 'mysql+pymysql://faafo:password@%(services_ip)s:3306/faafo'
''' % { 'services_ip': services_ip }
instance_api_1 = conn.create_server(wait=True, auto_ip=False,
name='app-api-1',
image=image_id,
flavor=flavor_id,
key_name='demokey',
security_groups=[api_group['name']],
userdata=userdata)
instance_api_2 = conn.create_server(wait=True, auto_ip=False,
name='app-api-2',
image=image_id,
flavor=flavor_id,
key_name='demokey',
security_groups=[api_group['name']],
userdata=userdata)
api_1_ip = conn.get_server_private_ip(instance_api_1)
api_2_ip = conn.get_server_private_ip(instance_api_2)
for instance in [instance_api_1, instance_api_2]:
floating_ip = get_floating_ip(conn)
conn.add_ip_list(instance, [floating_ip['floating_ip_address']])
print('allocated %(ip)s to %(host)s' % {'ip': floating_ip['floating_ip_address'], 'host': instance['name']})
# step-6
userdata = '''#!/usr/bin/env bash
curl -L -s https://opendev.org/openstack/faafo/raw/contrib/install.sh | bash -s -- \
-i faafo -r worker -e 'http://%(api_1_ip)s' -m 'amqp://guest:guest@%(services_ip)s:5672/'
''' % {'api_1_ip': api_1_ip, 'services_ip': services_ip}
instance_worker_1 = conn.create_server(wait=True, auto_ip=False,
name='app-worker-1',
image=image_id,
flavor=flavor_id,
key_name='demokey',
security_groups=[worker_group['name']],
userdata=userdata)
instance_worker_2 = conn.create_server(wait=True, auto_ip=False,
name='app-worker-2',
image=image_id,
flavor=flavor_id,
key_name='demokey',
security_groups=[worker_group['name']],
userdata=userdata)
instance_worker_3 = conn.create_server(wait=True, auto_ip=False,
name='app-worker-3',
image=image_id,
flavor=flavor_id,
key_name='demokey',
security_groups=[worker_group['name']],
userdata=userdata)

@ -1,7 +0,0 @@
my-provider:
profile: $PROVIDER_NAME
auth:
username: $YOUR_USERNAME
password: $YOUR_PASSWORD
project_name: $YOUR_PROJECT
region_name: $REGION

@ -1,22 +0,0 @@
[metadata]
name = OpenStack First Application
summary = OpenStack First Application
description-file =
README.rst
author = OpenStack Documentation
author-email = openstack-docs@lists.openstack.org
home-page = https://docs.openstack.org/
classifier =
Intended Audience :: Developers
License :: OSI Approved :: Apache Software License
[build_sphinx]
all_files = 1
build-dir = build
source-dir = source
[pbr]
warnerrors = True
[wheel]
universal = 1

@ -1,6 +0,0 @@
#!/usr/bin/env python
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

@ -1,123 +0,0 @@
=======================================
Advice for developers new to operations
=======================================
This section introduces some operational concepts and tasks to
developers who have not written cloud applications before.
Monitoring
~~~~~~~~~~
Monitoring is essential for 'scalable' cloud applications. You must
know how many requests are coming in and the impact that these
requests have on various services. You must have enough information to
determine whether to start another worker or API service as you
did in :doc:`/scaling_out`.
.. todo:: explain how to achieve this kind of monitoring. Ceilometer?
(STOP LAUGHING.)
In addition to this kind of monitoring, you should consider
availability monitoring. Although your application might not care
about a failed worker, it should care about a failed database server.
Use the
`Health Endpoint Monitoring Pattern <https://msdn.microsoft.com/en-us/library/dn589789.aspx>`_
to implement functional checks within your application that external
tools can access through exposed endpoints at regular intervals.
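As a minimal sketch of that pattern in Python's standard library (the handler, the placeholder database check, and the port are illustrative assumptions, not part of the Fractal application):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_database():
    # Placeholder functional check; a real application would probe
    # its database connection here (assumption).
    return True

def health_status():
    """Aggregate all functional checks into (healthy, details)."""
    checks = {'database': check_database()}
    return all(checks.values()), checks

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        healthy, checks = health_status()
        body = json.dumps(checks).encode('utf-8')
        # 200 when every check passes, 503 otherwise, so external
        # monitoring tools can poll the endpoint at regular intervals.
        self.send_response(200 if healthy else 503)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

# To expose the endpoint:
# HTTPServer(('', 8080), HealthHandler).serve_forever()
```

A load balancer or monitoring tool can then treat any non-200 response as a failed instance.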
Backups
~~~~~~~
Just as you back up information on a non-cloud server, you must back
up non-reproducible information, such as information on a database
server, file server, or in application log files. Just because
something is 'in the cloud' does not mean that the underlying hardware
or systems cannot fail.
OpenStack provides a couple of tools that make it easy to back up
data. If your provider runs OpenStack Object Storage, you can use its
API calls and CLI tools to work with archive files.
You can also use the OpenStack API to create snapshots of running
instances and persistent volumes. For more information, see your SDK
documentation.
.. todo:: Link to appropriate documentation, or better yet, link and
also include the commands here.
In addition to configuring backups, review your policies about what
you back up and how long to retain each backed-up item.
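The retention side of such a policy is easy to script; here is a hypothetical sketch (the 30-day window and keep-last count are assumptions, not guidance from this tutorial):

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, now, retain_days=30, keep_last=3):
    """Name the snapshots that fall outside the retention window,
    always preserving the keep_last most recent ones.

    snapshots maps snapshot name -> datetime it was taken.
    """
    ordered = sorted(snapshots.items(), key=lambda kv: kv[1], reverse=True)
    protected = {name for name, _ in ordered[:keep_last]}
    cutoff = now - timedelta(days=retain_days)
    return sorted(name for name, taken in snapshots.items()
                  if taken < cutoff and name not in protected)
```

Running this against your snapshot listing on a schedule keeps storage costs bounded while guaranteeing a minimum number of restore points.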
Phoenix servers
~~~~~~~~~~~~~~~
`Phoenix Servers <http://martinfowler.com/bliki/PhoenixServer.html>`_,
named for the mythical bird that is consumed by fire and rises from
the ashes to live again, make it easy to start over with new
instances.
Application developers and operators who use phoenix servers have
access to systems that are built from a known baseline, such as a
specific operating system version, and to tooling that automatically
builds, installs, and configures a system.
If you deploy your application on a regular basis, you can resolve
outages and make security updates without manual intervention. If an
outage occurs, you can provision more resources in another region. If
you must patch security holes, you can provision additional compute
nodes that are built with the updated software. Then, you can
terminate vulnerable nodes and automatically fail-over traffic to the
new instances.
Security
~~~~~~~~
If one application instance is compromised, all instances with the
same image and configuration will likely suffer the same
vulnerability. The safest path is to use configuration management to
rebuild all instances.
Configuration management
~~~~~~~~~~~~~~~~~~~~~~~~
Configuration management tools, such as Ansible, Chef, and Puppet,
enable you to describe exactly what to install and configure on an
instance. Using these descriptions, these tools implement the changes
that are required to get to the desired state.
These tools vastly reduce the effort it takes to work with large
numbers of servers, and also improve the ability to recreate, update,
move, and distribute applications.
Application deployment
~~~~~~~~~~~~~~~~~~~~~~
How do you deploy your application? For example, do you pull the
latest code from a source control repository? Do you make packaged
releases that update infrequently? Do you perform haphazard tests in a
development environment and deploy only after major changes?
One of the latest trends in scalable cloud application deployment is
`continuous integration <http://en.wikipedia.org/wiki/Continuous_integration>`_
and `continuous deployment <http://en.wikipedia.org/wiki/Continuous_delivery>`_
(CI/CD).
CI/CD means that you always test your application and make frequent
deployments to production.
In this tutorial, we have downloaded the latest version of our
application from source and installed it on a standard image. Our
magic installation script also updates the standard image to have the
latest dependencies that you need to run the application.
Another approach is to create a 'gold' image, which pre-installs your
application and its dependencies. A 'gold' image enables faster boot
times and more control over what is on the instance. However, if you
use 'gold' images, you must have a process in place to ensure that
these images do not fall behind on security updates.
Fail fast
~~~~~~~~~
.. todo:: Section needs to be written.

@ -1,56 +0,0 @@
========
Appendix
========
Bootstrap your network
~~~~~~~~~~~~~~~~~~~~~~
Most cloud providers provision all network objects that are required
to boot an instance. To determine whether these objects were created
for you, access the Network Topology section of the OpenStack
dashboard.
.. figure:: images/network-topology.png
:width: 920px
:align: center
:height: 622px
:alt: network topology view
:figclass: align-center
Specify a network during instance build
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. todo:: code for creating a networking using code
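A hypothetical sketch of that bootstrap with shade, assuming ``conn`` is a shade connection as created in earlier chapters (the names, the CIDR, and the 'public' external network are assumptions about your cloud):

```python
def bootstrap_network(conn, name='fractal-net', cidr='10.0.1.0/24'):
    """Create a private network, a subnet, and a router uplinked to
    the external 'public' network, then wire them together."""
    network = conn.create_network(name)
    subnet = conn.create_subnet(network['id'], cidr=cidr,
                                subnet_name=name + '-subnet')
    router = conn.create_router(
        name + '-router',
        ext_gateway_net_id=conn.get_network('public')['id'])
    # Attach the subnet to the router so instances get outbound access.
    conn.add_router_interface(router, subnet_id=subnet['id'])
    return network
```

Instances booted with ``network=bootstrap_network(conn)['id']`` would then land on this private network.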
.. only:: shade
Add the ``network`` parameter and pass the name or ID of the network to attach the instance to:
.. code-block:: python
testing_instance = conn.create_server(wait=True, auto_ip=True,
name=instance_name,
image=image_id,
flavor=flavor_id,
network=network_id)
.. only:: gophercloud
Add the ``Networks`` option and pass the ID of the network to attach the instance to:
.. code-block:: go
opts := servers.CreateOpts {
Name: instanceName,
ImageRef: image.ID,
FlavorRef: flavor.ID,
SecurityGroups: []string{securityGroupName},
UserData: []byte(userData),
Networks: []servers.Network{servers.Network{UUID: networkID}},
}
testingInstance, _ = servers.Create(client, keypairs.CreateOptsExt {
CreateOptsBuilder: opts,
KeyName: keyPairName,
}).Extract()

@ -1,434 +0,0 @@
=============
Block Storage
=============
.. todo:: (For nick: Restructure the introduction to this chapter to
provide context of what we're actually going to do.)
By default, data in OpenStack instances is stored on 'ephemeral'
disks. These disks remain with the instance throughout its lifetime.
When you terminate the instance, that storage and all the data stored
on it disappears. Ephemeral storage is allocated to a single instance
and cannot be moved to another instance.
This section introduces block storage, also known as volume storage,
which provides access to persistent storage devices. You interact with
block storage by attaching volumes to running instances just as you
might attach a USB drive to a physical server. You can detach volumes
from one instance and reattach them to another instance and the data
remains intact. The OpenStack Block Storage (cinder) project
implements block storage.
Though you might have configured Object Storage to store images, the
Fractal application needs a database to track the location of, and
parameters that were used to create, images in Object Storage. This
database server cannot fail.
If you are an advanced user, think about how you might remove the
database from the architecture and replace it with Object Storage
metadata, and then contribute these steps to :doc:`craziness`.
Otherwise, continue reading to learn how to work with, and move the
Fractal application database server to use, block storage.
Basics
~~~~~~
Later on, you will use a Block Storage volume to provide persistent
storage for the database server for the Fractal application. But
first, learn how to create and attach a Block Storage device.
.. only:: dotnet
.. warning:: This section has not yet been completed for the .NET SDK.
.. only:: fog
.. warning:: This section has not yet been completed for the fog SDK.
.. only:: pkgcloud
.. warning:: This section has not yet been completed for the pkgcloud SDK.
.. only:: openstacksdk
.. warning:: This section has not yet been completed for the OpenStack SDK.
.. only:: phpopencloud
.. warning:: This section has not yet been completed for the
PHP-OpenCloud SDK.
Connect to the API endpoint:
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/block_storage.py
:language: python
:start-after: step-1
:end-before: step-2
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
:start-after: step-1
:end-before: step-2
.. only:: shade
.. literalinclude:: ../samples/shade/block_storage.py
:language: python
:start-after: step-1
:end-before: step-2
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/block_storage.go
:language: go
:start-after: step-1
:end-before: step-2
To try it out, make a 1GB volume called 'test'.
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/block_storage.py
:language: python
:start-after: step-2
:end-before: step-3
::
<StorageVolume id=755ab026-b5f2-4f53-b34a-6d082fb36689 size=1 driver=OpenStack>
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
:start-after: step-2
:end-before: step-3
.. only:: shade
.. literalinclude:: ../samples/shade/block_storage.py
:language: python
:start-after: step-2
:end-before: step-3
.. note:: The parameter :code:`size` is in gigabytes.
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/block_storage.go
:language: go
:start-after: step-2
:end-before: step-3
.. note:: The parameter :code:`Size` is in gigabytes.
To see if the volume creation was successful, list all volumes:
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/block_storage.py
:language: python
:start-after: step-3
:end-before: step-4
::
[<StorageVolume id=755ab026-b5f2-4f53-b34a-6d082fb36689 size=1 driver=OpenStack>]
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
:start-after: step-3
:end-before: step-4
.. only:: shade
.. literalinclude:: ../samples/shade/block_storage.py
:language: python
:start-after: step-3
:end-before: step-4
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/block_storage.go
:language: go
:start-after: step-3
:end-before: step-4
Use Block Storage for the Fractal database server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You need a server for the dedicated database. Use the image, flavor, and
keypair that you used in :doc:`/getting_started` to launch an
:code:`app-database` instance.
You also need a security group to permit access to the database server (for
MySQL, port 3306) from the network:
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/block_storage.py
:language: python
:start-after: step-4
:end-before: step-5
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
:start-after: step-4
:end-before: step-5
.. only:: shade
.. literalinclude:: ../samples/shade/block_storage.py
:language: python
:start-after: step-4
:end-before: step-5
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/block_storage.go
:language: go
:start-after: step-4
:end-before: step-5
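Whichever SDK you use, the rule being added boils down to the same few
values. A minimal sketch of those values (the field names mirror the
classic nova security-group rule parameters and are illustrative, not
taken from the sample code):

```python
# Values that define the MySQL ingress rule for the database
# security group. Only TCP port 3306 is opened.
mysql_rule = {
    "ip_protocol": "tcp",   # protocol to allow
    "from_port": 3306,      # start of the allowed port range
    "to_port": 3306,        # end of the range (a single port here)
    "cidr": "0.0.0.0/0",    # source network; narrow this in production
}
```

In a real deployment you would restrict :code:`cidr` to the network the
API and worker instances live on, rather than the whole Internet.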
Create a volume object by using the unique identifier (UUID) for the
volume. Then, use the server object from the previous code snippet to
attach the volume to it at :code:`/dev/vdb`:
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/block_storage.py
:language: python
:start-after: step-5
:end-before: step-6
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
:start-after: step-5
:end-before: step-6
.. only:: shade
.. literalinclude:: ../samples/shade/block_storage.py
:language: python
:start-after: step-5
:end-before: step-6
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/block_storage.go
:language: go
:start-after: step-5
:end-before: step-6
Log in to the server to run the following steps.
.. note:: Replace :code:`IP_DATABASE` with the IP address of the
          database instance and :code:`USERNAME` with the appropriate
          user name.
Now prepare the empty block device.
.. code-block:: console
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_DATABASE
# fdisk -l
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
# mke2fs /dev/vdb
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
# mkdir /mnt/database
# mount /dev/vdb /mnt/database
Stop the running MySQL database service and move the database files from
:file:`/var/lib/mysql` to the new volume, which is temporarily mounted at
:file:`/mnt/database`.
.. code-block:: console
# systemctl stop mariadb
# mv /var/lib/mysql/* /mnt/database
Sync the file systems and mount the block device that contains the database
files to :file:`/var/lib/mysql`.
.. code-block:: console
# sync
# umount /mnt/database
# rm -rf /mnt/database
# echo "/dev/vdb /var/lib/mysql ext4 defaults 1 2" >> /etc/fstab
# mount /var/lib/mysql
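The :file:`/etc/fstab` entry written above has the standard six fields.
A small sketch that splits such a line, just to make the mapping
explicit (the helper is illustrative, not part of the Fractal code):

```python
def parse_fstab_line(line):
    """Split an fstab entry into its six standard fields."""
    device, mountpoint, fstype, options, dump, fsck_pass = line.split()
    return {
        "device": device,          # block device to mount
        "mountpoint": mountpoint,  # where it is mounted
        "fstype": fstype,          # filesystem type
        "options": options,        # mount options
        "dump": int(dump),         # dump(8) backup flag
        "pass": int(fsck_pass),    # fsck order (2 = after the root fs)
    }

entry = parse_fstab_line("/dev/vdb /var/lib/mysql ext4 defaults 1 2")
```

The last field of :code:`2` tells :code:`fsck` to check this volume
after the root file system.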
Finally, start the stopped MySQL database service and validate that everything
works as expected.
.. code-block:: console
# systemctl start mariadb
# mysql -ufaafo -ppassword -h localhost faafo -e 'show tables;'
Extras
~~~~~~
You can detach the volume and reattach it elsewhere, or use the following
steps to delete the volume.
.. warning::
The following operations are destructive and result in data loss.
To detach and delete a volume:
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/block_storage.py
:start-after: step-6
:end-before: step-7
::
True
.. note:: :code:`detach_volume` and :code:`destroy_volume` take a
volume object, not a name.
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
:start-after: step-6
:end-before: step-7
.. only:: shade
.. literalinclude:: ../samples/shade/block_storage.py
:language: python
:start-after: step-6
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/block_storage.go
:language: go
:start-after: step-6
.. only:: libcloud
Other features, such as creating volume snapshots, are useful for backups:
.. literalinclude:: ../samples/libcloud/block_storage.py
:language: python
:start-after: step-7
:end-before: step-8
.. todo:: Do we need a note here to mention that 'test' is the
volume name and not the volume object?
For information about these and other calls, see
`libcloud documentation <http://ci.apache.org/projects/libcloud/docs/compute/drivers/openstack.html>`_.
.. only:: jclouds
Other features, such as creating volume snapshots, are useful for backups:
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
:start-after: step-7
:end-before: step-8
The following file contains all of the code from this section of the
tutorial. This comprehensive code sample lets you view and run the code
as a single file.
.. literalinclude:: ../samples/jclouds/BlockStorage.java
:language: java
Work with the OpenStack Database service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously, you manually created the database, which is useful for a single
database that you rarely update. However, the OpenStack :code:`trove`
component provides Database as a Service (DBaaS).
.. note:: This OpenStack Database service is not installed in many
clouds right now, but if your cloud supports it, it can
make your life a lot easier when working with databases.
SDKs do not generally support the service yet, but you can use the
'trove' command-line client to work with it instead.
To install the 'trove' command-line client, see
`Install the OpenStack command-line clients
<https://docs.openstack.org/cli-reference/common/cli_install_openstack_command_line_clients.html#install-the-clients>`_.
To set up environment variables for your cloud in an :file:`openrc.sh`
file, see
`Set environment variables using the OpenStack RC file <https://docs.openstack.org/cli-reference/common/cli_set_environment_variables_using_openstack_rc.html>`_.
Ensure you have an :file:`openrc.sh` file, source it, and validate that
your trove client works:
.. code-block:: console
$ cat openrc.sh
export OS_USERNAME=your_auth_username
export OS_PASSWORD=your_auth_password
export OS_TENANT_NAME=your_project_name
export OS_AUTH_URL=http://controller:5000/v2.0
export OS_REGION_NAME=your_region_name
$ source openrc.sh
$ trove --version
1.0.9
For information about supported features and how to work with an
existing database service installation, see
`Database as a Service in OpenStack <http://www.slideshare.net/hastexo/hands-on-trove-database-as-a-service-in-openstack-33588994>`_.
Next steps
~~~~~~~~~~
You should now be fairly confident working with Block Storage volumes.
For information about other calls, see the volume documentation for
your SDK. Or, try one of these tutorial steps:
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import subprocess
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.todo',
    'sphinx.ext.ifconfig',
    'sphinxcontrib.nwdiag',
    'sphinx.ext.graphviz',
    'openstackdocstheme'
]
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
repository_name = 'openstack/api-site'
project = u'FirstApp'
bug_project = 'openstack-api-site'
bug_tag = u'firstapp'
copyright = u'2015-2017, OpenStack contributors'
# The version info for the project you are documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = [openstackdocstheme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# So that we can enable "log-a-bug" links from each output HTML page, this
# variable must be set to a format that includes year, month, day, hours and
# minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'FirstAppdoc'
# If true, publish source files
html_copy_source = False
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'FirstApp.tex', u'FirstApp Documentation',
u'OpenStack Doc Team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'firstapp', u'FirstApp Documentation',
[u'OpenStack Doc Team'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'FirstApp', u'FirstApp Documentation',
u'OpenStack Doc Team', 'FirstApp', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# Set to True to enable printing of the TODO sections
todo_include_todos = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# -- Options for PDF output --------------------------------------------------
pdf_documents = [
('index', u'FirstApp', u'FirstApp Documentation',
u'OpenStack contributors')
]

===========
Going crazy
===========
This section explores options for expanding the sample application.
Regions and geographic diversity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note:: For more information about multi-site clouds, see the
`Multi-Site chapter <https://docs.openstack.org/arch-design/multi-site.html>`_
in the Architecture Design Guide.
OpenStack supports 'regions', which are geographically separated
installations that are connected to a single service catalog. This
section explains how to expand the Fractal application to use multiple
regions for high availability.
.. note:: This section is incomplete. Please help us finish it!
Multiple clouds
~~~~~~~~~~~~~~~
.. note:: For more information about hybrid clouds, see the `Hybrid
Cloud chapter
<https://docs.openstack.org/arch-design/hybrid.html>`_
in the Architecture Design Guide.
You might want to use multiple clouds, such as a private cloud inside
your organization and a public cloud. This section attempts to do
exactly that.
.. note:: This section is incomplete. Please help us finish it!
High availability
~~~~~~~~~~~~~~~~~
Use Pacemaker to monitor the API.
.. note:: This section is incomplete. Please help us finish it!
conf.d, etc.d
~~~~~~~~~~~~~
Use Confd and Etcd.
In earlier sections, the Fractal application used an installation
script into which the metadata API passed parameters to bootstrap the
cluster. `Etcd <https://github.com/coreos/etcd>`_ is "a distributed,
consistent key-value store for shared configuration and service
discovery" that you can use to store configurations. You can write
updated versions of the Fractal worker component to connect to Etcd or
use `Confd <https://github.com/kelseyhightower/confd>`_ to poll for
changes from Etcd and write changes to a configuration file on the
local file system, which the Fractal worker can use for configuration.
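As a rough illustration of that pattern (the file name and keys here
are hypothetical, not part of the Fractal code), a worker could poll
the configuration file that Confd renders and reread it only when its
modification time changes:

```python
import json
import os


def load_if_changed(path, last_mtime):
    """Return (config, mtime); reread the file only when it changed.

    Returns (None, last_mtime) when the file is unchanged, so the
    caller keeps using its current configuration.
    """
    mtime = os.stat(path).st_mtime
    if last_mtime is not None and mtime == last_mtime:
        return None, last_mtime  # unchanged, keep current config
    with open(path) as f:
        return json.load(f), mtime
```

A worker loop would call this between work items, sleeping briefly
between polls, instead of restarting whenever the configuration moves.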
Use Object Storage instead of a database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We have not quite figured out how to stop using a database, but the
general steps are:
* Change the Fractal upload code to store metadata with the object in
Object Storage.
* Change the API code, such as "list fractals," to query Object Storage
to get the metadata.
.. note:: This section is incomplete. Please help us finish it!
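Swift carries per-object metadata in ``X-Object-Meta-*`` request
headers, which is what the first step above would rely on. A sketch of
mapping fractal attributes to those headers (the helper name and the
attribute names are illustrative):

```python
def metadata_headers(meta):
    """Map fractal attributes to Swift X-Object-Meta-* headers."""
    return {
        "X-Object-Meta-" + key: str(value)
        for key, value in meta.items()
    }

# Hypothetical fractal attributes; Swift stores header values as strings.
headers = metadata_headers({"width": 1024, "height": 768, "iterations": 512})
```

The "list fractals" API call would then read these headers back from a
HEAD request on each object instead of querying the database.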
Next steps
~~~~~~~~~~
Wow! If you have made it through this section, you know more than the
authors of this guide know about working with OpenStack clouds.
Perhaps you can `contribute <https://docs.openstack.org/doc-contrib-guide/index.html>`_?

===============
Make it durable
===============
.. todo:: https://github.com/apache/libcloud/pull/492
.. todo:: For later versions of the guide: Extend the Fractals app to use Swift directly, and show the actual code from there.
.. todo:: Explain how to get objects back out again.
.. todo:: Large object support in Swift
https://docs.openstack.org/swift/latest/overview_large_objects.html
This section introduces object storage.
`OpenStack Object Storage <https://www.openstack.org/software/openstack-storage/>`_
(code-named swift) is open-source software that enables you to create
redundant, scalable data storage by using clusters of standardized servers to
store petabytes of accessible data. It is a long-term storage system for large
amounts of static data that you can retrieve, leverage, and update. Unlike
more traditional storage systems that you access through a file system, you
access Object Storage through an API.
The Object Storage API is organized around objects and containers.
Similar to the UNIX programming model, an object, such as a document or an
image, is a "bag of bytes" that contains data. You use containers to group
objects. You can place many objects inside a container, and your account can
have many containers.
If you think about how you traditionally make what you store durable, you
quickly conclude that keeping multiple copies of your objects on separate
systems is a good strategy. However, keeping track of those multiple
copies is difficult, and building that into an app requires complicated logic.
OpenStack Object Storage automatically replicates each object at least twice
before returning 'write success' to your API call. A good strategy is to keep
three copies of objects, by default, at all times, replicating them across the
system in case of hardware failure, maintenance, network outage, or another
kind of breakage. This strategy is very convenient for app creation. You can
just dump objects into object storage and not worry about the additional work
that it takes to keep them safe.
Use Object Storage to store fractals
------------------------------------
The Fractals app currently uses the local file system on the instance to store
the images that it generates. For a number of reasons, this approach is not
scalable or durable.
Because the local file system is ephemeral storage, the fractal images are
lost along with the instance when the instance is terminated. Block-based
storage, which the :doc:`/block_storage` section discusses, avoids that
problem, but like local file systems, it requires administration to ensure
that it does not fill up, and immediate attention if disks fail.
The Object Storage service manages many of the tasks normally managed by the
application owner. The Object Storage service provides a scalable and durable
API that you can use for the fractals app, eliminating the need to be aware of
the low level details of how objects are stored and replicated, and how to
grow the storage pool. Object Storage handles replication for you. It stores
multiple copies of each object. You can use the Object Storage API to return
an object, on demand.
First, learn how to connect to the Object Storage endpoint:
.. only:: dotnet
.. warning:: This section has not yet been completed for the .NET SDK.
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-1
:end-before: step-2
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-1
:end-before: step-2
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-1
:end-before: step-2
.. warning::
Libcloud 0.16 and 0.17 are afflicted with a bug that means
authentication to a swift endpoint can fail with `a Python
exception
<https://issues.apache.org/jira/browse/LIBCLOUD-635>`_. If
you encounter this, you can upgrade your libcloud version, or
apply a simple `2-line patch
<https://github.com/fifieldt/libcloud/commit/ec58868c3344a9bfe7a0166fc31c0548ed22ea87>`_.
.. note:: Libcloud uses a different connector for Object Storage
          than for all other OpenStack services, so a :code:`conn`
          object from previous sections will not work here and we have
          to create a new one named :code:`swift`.
.. only:: pkgcloud
.. warning:: This section has not yet been completed for the pkgcloud SDK.
.. only:: openstacksdk
.. warning:: This section has not yet been completed for the OpenStack SDK.
.. only:: phpopencloud
.. warning:: This section has not yet been completed for the
PHP-OpenCloud SDK.
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-1
:end-before: step-2
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-1
:end-before: step-2
To begin storing objects, we must first create a container.
Call yours :code:`fractals`:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-2
:end-before: step-3
You should see output such as:
.. code-block:: ruby
TBC
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-2
:end-before: step-3
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-2
:end-before: step-3
You should see output such as:
.. code-block:: python
<Container: name=fractals, provider=OpenStack Swift>
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-2
:end-before: step-3
You should see output such as:
.. code-block:: python
Munch({u'content-length': u'0', u'x-container-object-count': u'0',
u'accept-ranges': u'bytes', u'x-container-bytes-used': u'0',
u'x-timestamp': u'1463950178.11674', u'x-trans-id':
u'txc6262b9c2bc1445b9dfe3-00574277ff', u'date': u'Mon, 23 May 2016
03:24:47 GMT', u'content-type': u'text/plain; charset=utf-8'})
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-2
:end-before: step-3
You should now be able to see this container appear in a listing of
all containers in your account:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-3
:end-before: step-4
You should see output such as:
.. code-block:: ruby
TBC
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-3
:end-before: step-4
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-3
:end-before: step-4
You should see output such as:
.. code-block:: python
[<Container: name=fractals, provider=OpenStack Swift>]
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-3
:end-before: step-4
.. code-block:: python
[Munch({u'count': 0, u'bytes': 0, u'name': u'fractals'}),
Munch({u'count': 0, u'bytes': 0, u'name': u'fractals_segments'})]
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-3
:end-before: step-4
The next logical step is to upload an object. Find a photo of a goat
online, name it :code:`goat.jpg`, and upload it to your
:code:`fractals` container:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-4
:end-before: step-5
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-4
:end-before: step-5
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-4
:end-before: step-5
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-4
:end-before: step-5
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-4
:end-before: step-5
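Object names may contain spaces, but they are percent-encoded when they
appear in the object's URL (which is why a name like ``an amazing
goat`` shows up as ``an%20amazing%20goat`` in an object URI). The
encoding is plain URL quoting, which Python's standard library
provides:

```python
from urllib.parse import quote, unquote

object_name = "an amazing goat"
# quote() percent-encodes characters that are unsafe in a URL path,
# turning the space into %20.
encoded = quote(object_name)
# unquote() reverses the encoding to recover the stored object name.
original = unquote(encoded)
```

SDKs do this encoding for you; it only matters when you build object
URLs by hand, for example with curl.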
List objects in your :code:`fractals` container to see if the upload
was successful. Then, download the file to verify that the md5sum is
the same:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-5
:end-before: step-6
::
TBC
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-6
:end-before: step-7
::
TBC
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-7
:end-before: step-8
::
7513986d3aeb22659079d1bf3dc2468b
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-5
:end-before: step-6
::
Objects in fractals:
SwiftObject{name=an amazing goat,
uri=https://swift.some.org:8888/v1/AUTH_8997868/fractals/an%20amazing%20goat,
etag=439884df9c1c15c59d2cf43008180048,
lastModified=Wed Nov 25 15:09:34 AEDT 2015, metadata={}}
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-6
:end-before: step-7
::
Fetched: an amazing goat
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-7
:end-before: step-8
::
MD5 for file goat.jpg: 439884df9c1c15c59d2cf43008180048
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-5
:end-before: step-6
::
[<Object: name=an amazing goat, size=191874, hash=439884df9c1c15c59d2cf43008180048, provider=OpenStack Swift ...>]
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-6
:end-before: step-7
::
<Object: name=an amazing goat, size=954465, hash=7513986d3aeb22659079d1bf3dc2468b, provider=OpenStack Swift ...>
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-7
:end-before: step-8
::
7513986d3aeb22659079d1bf3dc2468b
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-5
:end-before: step-6
::
[Munch({u'hash': u'd1408b5bf6510426db6e2bafc2f90854', u'last_modified':
u'2016-05-23T03:34:59.353480', u'bytes': 63654, u'name': u'an amazing
goat', u'content_type': u'application/octet-stream'})]
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-6
:end-before: step-7
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-7
:end-before: step-8
::
d1408b5bf6510426db6e2bafc2f90854
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-5
:end-before: step-6
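Whichever SDK you use, the checksum comparison itself is independent of
the cloud: Swift reports an object's MD5 digest as its hash (ETag). A
minimal sketch of computing the same digest locally with Python's
:code:`hashlib`:

```python
import hashlib


def md5_hex(data):
    """Return the hex MD5 digest of a byte string, as Swift reports it."""
    return hashlib.md5(data).hexdigest()

# Compare this value with the object's hash from the container listing;
# the bytes here stand in for the downloaded object contents.
digest = md5_hex(b"downloaded object contents")
```

If the local digest matches the hash in the listing output, the
download is intact.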
Finally, clean up by deleting the test object:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-8
:end-before: step-9
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-8
:end-before: step-10
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-8
:end-before: step-9
.. note:: You must pass in objects and not object names to the delete commands.
Now, no more objects are available in the :code:`fractals` container.
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-9
:end-before: step-10
::
[]
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-8
:end-before: step-9
::
Munch({u'content-length': u'0', u'x-container-object-count': u'0',
u'accept-ranges': u'bytes', u'x-container-bytes-used': u'0',
u'x-timestamp': u'1463950178.11674', u'x-trans-id':
u'tx46c83fa41030422493110-0057427af3', u'date': u'Mon, 23 May 2016
03:37:23 GMT', u'content-type': u'text/plain; charset=utf-8'})
Now, no more objects are available in the :code:`fractals` container.
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-9
:end-before: step-10
::
[]
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-8
:end-before: step-9
Back up the fractals from the database to Object Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Back up the Fractals app images, which are currently stored inside the
database, to Object Storage.
Place the images in the :code:`fractals` container:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-10
:end-before: step-11
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-10
:end-before: step-11
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-10
:end-before: step-11
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-10
:end-before: step-11
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-10
:end-before: step-11
Next, back up all existing fractals from the database to the Swift
container. A simple loop takes care of that:
.. note:: Replace :code:`IP_API_1` with the IP address of the API instance.
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-11
:end-before: step-12
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-11
:end-before: step-12
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-11
:end-before: step-12
::
<Object: name=025fd8a0-6abe-4ffa-9686-bcbf853b71dc, size=61597, hash=b7a8a26e3c0ce9f80a1bf4f64792cd0c, provider=OpenStack Swift ...>
<Object: name=26ca9b38-25c8-4f1e-9e6a-a0132a7a2643, size=136298, hash=9f9b4cac16893854dd9e79dc682da0ff, provider=OpenStack Swift ...>
<Object: name=3f68c538-783e-42bc-8384-8396c8b0545d, size=27202, hash=e6ee0cd541578981c294cebc56bc4c35, provider=OpenStack Swift ...>
.. note:: The example code uses the awesome
`Requests library <http://docs.python-requests.org/en/latest/>`_.
Before you try to run the previous script, make sure that
it is installed on your system.
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-11
:end-before: step-12
.. note:: The example code uses the awesome
`Requests library <http://docs.python-requests.org/en/latest/>`_.
Before you try to run the previous script, make sure that
it is installed on your system.
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-11
:end-before: step-12
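Whatever the SDK, the backup loop has the same shape: list the fractals through the API, fetch each image, and store it under its UUID. A runnable sketch against in-memory stand-ins (the :code:`uuid` field name mirrors the Faafo API, but the helpers here are assumptions, not SDK calls):

```python
def backup_fractals(list_fractals, fetch_image, store_object):
    """Copy every fractal image from the API into an object container."""
    stored = []
    for fractal in list_fractals():
        store_object(fractal['uuid'], fetch_image(fractal['uuid']))
        stored.append(fractal['uuid'])
    return stored

# In-memory stand-ins for the API and the Swift container:
api_fractals = [{'uuid': 'abc'}, {'uuid': 'def'}]
images = {'abc': b'PNG-bytes-1', 'def': b'PNG-bytes-2'}
container = {}

backed_up = backup_fractals(lambda: api_fractals,
                            images.__getitem__,
                            container.__setitem__)
print(backed_up)
```

In the real application, :code:`fetch_image` would be an HTTP GET against the API and :code:`store_object` an SDK upload call.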
Configure the Fractals app to use Object Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. warning:: Currently, you cannot directly store generated
images in OpenStack Object Storage. Please revisit
this section again in the future.
Extra features
--------------
Delete containers
~~~~~~~~~~~~~~~~~
To delete a container, you must first remove all objects from the container.
Otherwise, the delete operation fails:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-12
:end-before: step-13
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-12
:end-before: step-13
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-12
:end-before: step-13
.. only:: shade
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-12
:end-before: step-13
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-12
:end-before: step-13
.. warning:: It is not possible to restore deleted objects. Be careful.
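The empty-before-delete rule can be illustrated without a cloud; this toy model (not an SDK API) mimics the failure and the required cleanup:

```python
def delete_container(store, name):
    """Deleting a container fails while it still holds objects."""
    if store[name]:
        raise ValueError('container not empty: ' + name)
    del store[name]

store = {'fractals': {'goat.png': b'...'}}
try:
    delete_container(store, 'fractals')       # fails: container not empty
except ValueError:
    store['fractals'].clear()                 # remove all objects first
    delete_container(store, 'fractals')       # now succeeds
print(store)
```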
Add metadata to objects
~~~~~~~~~~~~~~~~~~~~~~~
You can complete advanced tasks such as uploading an object with metadata, as
shown in following example. For more information, see the documentation for
your SDK.
.. only:: fog
This option also streams the file during upload, iterating over it
chunk by chunk and passing those chunks to Object Storage as they are
read. Compared to loading the entire file into memory and then sending
it, this method is more efficient, especially for larger files.
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-13
:end-before: step-14
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-13
:end-before: step-14
.. only:: libcloud
This option also streams the file during upload, iterating over it
chunk by chunk and passing those chunks to Object Storage as they are
read. Compared to loading the entire file into memory and then sending
it, this method is more efficient, especially for larger files.
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-13
:end-before: step-14
.. todo:: It would be nice to have a pointer here to section 9.
.. only:: shade
This adds a "foo" key to the metadata that has a value of "bar".
.. Note::
Swift metadata keys are prepended with "x-object-meta-" so when you get
the object with get_object(), in order to get the value of the metadata
your key will be "x-object-meta-foo".
.. literalinclude:: ../samples/shade/durability.py
:start-after: step-13
:end-before: step-14
.. only:: gophercloud
.. literalinclude:: ../samples/gophercloud/durability.go
:language: go
:start-after: step-13
:end-before: step-14
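The :code:`x-object-meta-` prefix mentioned above is just a header-name convention; a small sketch of the mapping between user metadata and Swift response headers makes it concrete:

```python
META_PREFIX = 'x-object-meta-'

def to_headers(metadata):
    """Translate user metadata keys into Swift header names."""
    return {META_PREFIX + key: value for key, value in metadata.items()}

def from_headers(headers):
    """Recover user metadata from response headers."""
    return {key[len(META_PREFIX):]: value
            for key, value in headers.items()
            if key.startswith(META_PREFIX)}

headers = to_headers({'foo': 'bar'})
print(headers)
```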
Large objects
~~~~~~~~~~~~~
For efficiency, most Object Storage installations treat large objects,
:code:`> 5GB`, differently than smaller objects.
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-14
:end-before: step-15
.. only:: jclouds
If you work with large objects, use the :code:`RegionScopedBlobStoreContext`
class family instead of the ones used so far.
.. note:: Large file uploads that use the :code:`openstack-swift` provider
are supported in only jclouds V2, currently in beta. Also, the
default chunk size is 64 MB. Consider changing this as homework.
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
:start-after: step-14
:end-before: step-15
.. only:: libcloud
If you work with large objects, use the :code:`ex_multipart_upload_object`
call instead of the simpler :code:`upload_object` call. The call splits
the large object into chunks and creates a manifest so that the chunks can
be recombined on download. Change the :code:`chunk_size` parameter, in
bytes, to a value that your cloud can accept.
.. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-14
:end-before: step-15
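The chunk-and-manifest approach can be illustrated without a cloud: split the payload into fixed-size segments that could be uploaded individually, with a manifest recording the order in which they recombine. A sketch:

```python
def split_chunks(data, chunk_size):
    """Yield successive fixed-size segments of a byte payload."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

payload = b'x' * (10 * 1024 * 1024)                 # 10 MiB test payload
segments = list(split_chunks(payload, 4 * 1024 * 1024))
# A manifest object would record the segments so that downloads
# recombine them in order.
print([len(segment) for segment in segments])
```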
.. only:: jclouds
Complete code sample
~~~~~~~~~~~~~~~~~~~~
This file contains all the code from this tutorial section. This
class lets you view and run the code.
Before you run this class, confirm that you have configured it for
your cloud and the instance running the Fractals application.
.. literalinclude:: ../samples/jclouds/Durability.java
:language: java
.. only:: shade
Shade's :code:`create_object` function has a :code:`use_slo` parameter
(which defaults to true) that breaks your object into smaller segments
for upload and rejoins them if needed.
Next steps
----------
You should now be fairly confident working with Object Storage. You
can find more information about the Object Storage SDK calls at:
.. only:: fog
https://github.com/fog/fog/blob/master/lib/fog/openstack/docs/storage.md
.. only:: libcloud
https://libcloud.readthedocs.org/en/latest/storage/api.html
Or, try one of these tutorial steps:
* :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component.
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)

digraph {
API -> Database [color=green];
API -> Database [color=orange];
Database -> API [color=red];
API -> Webinterface [color=red];
API -> "Queue Service" [color=orange];
"Queue Service" -> Worker [color=orange];
Worker -> API [color=green];
}

digraph {
rankdir=LR;
Queue [shape="doublecircle"];
API -> Queue;
Queue -> "Worker 1";
Queue -> "Worker 2";
}

========================================
Writing your first OpenStack application
========================================
Contents
~~~~~~~~
.. toctree::
:maxdepth: 2
getting_started
introduction
scaling_out
durability
block_storage
orchestration
networking
advice
craziness
appendix

=====================================================
Introduction to the fractals application architecture
=====================================================
This section introduces the application architecture and explains how
it was designed to take advantage of cloud features in general and
OpenStack in particular. It also explains some of the commands that
were used in the previous section.
.. todo:: (for Nick) Improve the architecture discussion.
.. only:: dotnet
.. warning:: This section has not yet been completed for the .NET SDK.
.. only:: fog
.. highlight:: ruby
.. only:: pkgcloud
.. warning:: This section has not yet been completed for the pkgcloud SDK.
.. only:: phpopencloud
.. warning:: This section has not yet been completed for the
PHP-OpenCloud SDK.
Cloud application architecture principles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Cloud applications typically share several design principles.
These principles influenced the design of the Fractals application.
.. todo:: Do you want to state the core design principles or assume
the reader can follow below.
Modularity and micro-services
-----------------------------
`Micro-services <http://en.wikipedia.org/wiki/Microservices>`_ are an
important design pattern that helps achieve application modularity. Separating
logical application functions into independent services simplifies maintenance
and re-use. Decoupling components also makes it easier to selectively scale
individual components, as required. Further, application modularity is a
required feature of applications that scale out well and are fault tolerant.
Scalability
-----------
Cloud applications often use many small instances rather than a few large
instances. Provided that an application is sufficiently modular, you can
easily distribute micro-services across as many instances as required. This
architecture enables an application to grow past the limit imposed by the
maximum size of an instance. It is like trying to move a large number of people
from one place to another; there are only so many people you can put on the
largest bus, but you can use an unlimited number of buses or small cars, which
provide just the capacity you need - and no more.
Fault tolerance
---------------
In cloud programming, there is a well-known analogy known as "cattle vs
pets". If you have not heard it before, it goes like this:
When you deal with pets, you name and care for them. If they get sick,
you nurse them back to health, which can be difficult and very time
consuming. When you deal with cattle, you attach a numbered tag to
their ear. If they get sick, you put them down and move on.
That, as it happens, is the new reality of programming. Applications
and systems used to be created on large, expensive servers, cared for
by operations staff dedicated to keeping them healthy. If something
went wrong with one of those servers, the staff's job was to do
whatever it took to make it right again and save the server and the
application.
In cloud programming, it is very different. Rather than large,
expensive servers, you have virtual machines that are disposable; if
something goes wrong, you shut the server down and spin up a new one.
There is still operations staff, but rather than nursing individual
servers back to health, their job is to monitor the health of the
overall system.
There are definite advantages to this architecture. It is easy to get a
"new" server, without any of the issues that inevitably arise when a
server has been up and running for months, or even years.
As with classical infrastructure, failures of the underpinning cloud
infrastructure (hardware, networks, and software) are unavoidable.
When you design for the cloud, it is crucial that your application is
designed for an environment where failures can happen at any moment.
This may sound like a liability, but it is not; by designing your
application with a high degree of fault tolerance, you also make it
resilient, and more adaptable, in the face of change.
Fault tolerance is essential to the cloud-based application.
Automation
----------
If an application is meant to automatically scale up and down to meet
demand, it is not feasible to have any manual steps in the process of
deploying any component of the application. Automation also decreases
the time to recovery for your application in the event of component
failures, increasing fault tolerance and resilience.
Programmatic interfaces (APIs)
------------------------------
Like many cloud applications, the Fractals application has a
`RESTful API <http://en.wikipedia.org/wiki/Representational_state_transfer>`_.
You can connect to it directly and generate fractals, or you can integrate it
as a component of a larger application. Any time a standard interface such as
an API is available, automated testing becomes much more feasible, increasing
software quality.
Fractals application architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Fractals application was designed with the principles of the previous
subsection in mind. You will note that in :doc:`getting_started`, we deployed the
application in an all-in-one style, on a single virtual machine. This is not
a good practice, but because the application uses micro-services to decouple
logical application functions, we can change this easily.
.. graphviz:: images/architecture.dot
Message queues are used to facilitate communication between the
Fractal application services. The Fractal application uses a `work queue
<https://www.rabbitmq.com/tutorials/tutorial-two-python.html>`_ (or
task queue) to distribute tasks to the worker services.
Message queues work in a way similar to a queue (or a line, for those
of us on the other side of the ocean) in a bank being served by
multiple clerks. The message queue in our application provides a feed
of work requests that can be taken one-at-a-time by worker services,
whether there is a single worker service or hundreds of them.
This is a `useful pattern <https://msdn.microsoft.com/en-us/library/dn568101.aspx>`_
for many cloud applications that have long lists of requests coming in and a
pool of resources from which to service them. This also means that a
worker may crash and the tasks will be processed by other workers.
.. note:: The `RabbitMQ getting started tutorial
<https://www.rabbitmq.com/getstarted.html>`_ provides a
great introduction to message queues.
.. graphviz:: images/work_queue.dot
The worker service consumes messages from the work queue and then processes
them to create the corresponding fractal image file.
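Python's standard library can simulate the pattern: any number of workers can drain the same queue, one task at a time. This is a toy model, not the Fractals application's actual RabbitMQ setup:

```python
import queue

tasks = queue.Queue()
for params in ({'width': 100}, {'width': 200}, {'width': 300}):
    tasks.put(params)            # the API publishes fractal requests

results = []

def run_worker(name):
    """Consume tasks until the queue is empty; workers share one queue."""
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return
        results.append((name, task['width']))

run_worker('worker-1')           # with threads, these would run concurrently
run_worker('worker-2')
print(results)
```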
Of course, there is also a web interface that offers a more
human-friendly way of accessing the API to view the created fractal
images, and a simple command line interface.
.. figure:: images/screenshot_webinterface.png
:width: 800px
:align: center
:height: 600px
:alt: screenshot of the webinterface
:figclass: align-center
There are also multiple storage back ends (to store the generated
fractal images) and a database component (to store the state of
tasks), but we will talk about those in :doc:`/durability` and
:doc:`/block_storage` respectively.
How the Fractals application interacts with OpenStack
-----------------------------------------------------
.. todo:: Description of the components of OpenStack and how they
relate to the Fractals application and how it runs on the cloud.
TF notes this is already covered in the guide, just split
across each section. Adding it here forces the
introduction of block storage, object storage, orchestration
and neutron networking too early, which could seriously
confuse users who do not have these services in their
cloud. Therefore, this should not be done here.
The magic revisited
~~~~~~~~~~~~~~~~~~~
So what exactly was that request doing at the end of the previous section?
Let us look at it again. In this subsection, we are just explaining what you
have already done in the previous section; you do not need to run these
commands again.
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-1
:end-before: step-2
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-1
:end-before: step-2
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-1
:end-before: step-2
.. only:: jclouds
Note that we will be showing the commands in a more idiomatic Java way:
as methods on a class.
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-1
:end-before: step-1-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-1
:end-before: step-2
We explained image and flavor in :doc:`getting_started`, so in the following
sections, we will explain the other parameters in detail, including
:code:`ex_userdata` (cloud-init) and :code:`ex_keyname` (key pairs).
.. only:: openstacksdk
.. note:: In openstacksdk, the :code:`ex_userdata` parameter is called
:code:`user_data` and the :code:`ex_keyname` parameter is called :code:`key_name`.
Introduction to cloud-init
--------------------------
`cloud-init <https://cloudinit.readthedocs.org/en/latest/>`_ is a tool
that performs instance configuration tasks during the boot of a cloud
instance, and comes installed on most cloud
images. :code:`ex_userdata`, which was passed to :code:`create_node`,
is the configuration data passed to cloud-init.
In this case, we are presenting a shell script as the `userdata
<https://cloudinit.readthedocs.org/en/latest/topics/format.html#user-data-script>`_.
When :code:`create_node` creates the instance, :code:`cloud-init`
executes the shell script in the :code:`userdata` variable.
When an SSH public key is provided during instance creation,
cloud-init installs this key on a user account. (The user name
varies between cloud images.) See the `Obtaining Images <https://docs.openstack.org/image-guide/obtain-images.html>`_
section of the image guide for guidance about which user name you
should use when SSHing. If you still have problems logging in, ask
your cloud provider to confirm the user name.
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-2
:end-before: step-3
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-2
:end-before: step-3
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-2
:end-before: step-3
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-2
:end-before: step-2-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-2
:end-before: step-3
.. note:: User data in openstacksdk must be encoded to Base64.
After the instance is created, cloud-init downloads and runs a script called
:code:`install.sh`. This script installs the Fractals application. Cloud-init
can consume bash scripts and a number of different types of data. You
can even provide multiple types of data. You can find more information
about cloud-init in the `official documentation <https://cloudinit.readthedocs.org/en/latest/>`_.
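As a concrete illustration of the Base64 encoding that some SDKs (such as openstacksdk) require for user data, here is how a shell-script payload might be prepared. The script body is illustrative, not the actual :code:`install.sh`:

```python
import base64

userdata = """#!/usr/bin/env bash
echo "hello from cloud-init" > /tmp/hello.txt
"""

# Some SDKs expect user_data as a Base64-encoded ASCII string.
encoded = base64.b64encode(userdata.encode('utf-8')).decode('ascii')
print(encoded)
```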
Introduction to key pairs
-------------------------
Security is important when it comes to your instances; you cannot have
just anyone accessing them. To enable logging in to an instance, you must provide
the public key of an SSH key pair during instance creation. In section one,
you created and uploaded a key pair to OpenStack, and cloud-init installed it
for the user account.
Even with a key in place, however, you must have the appropriate
security group rules in place to access your instance.
Introduction to security groups
-------------------------------
Security groups are sets of network access rules that are applied to
an instance's networking. By default, only egress (outbound) traffic
is allowed. You must explicitly enable ingress (inbound) network
access by creating a security group rule.
.. warning:: Removing the egress rule created by OpenStack will cause
your instance networking to break.
Start by creating a security group for the all-in-one instance and
adding the appropriate rules, such as HTTP (TCP port 80) and SSH (TCP
port 22):
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-3
:end-before: step-4
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-3
:end-before: step-4
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-3
:end-before: step-4
.. note:: :code:`ex_create_security_group_rule()` takes ranges of
ports as input. This is why ports 80 and 22 are passed
twice.
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-3
:end-before: step-3-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-3
:end-before: step-4
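Conceptually, each ingress rule names a protocol, a port range, and a source prefix, and traffic is admitted only when some rule matches. A toy matcher (not an SDK call) makes the semantics concrete:

```python
# Ingress rules for the all-in-one group: SSH (22) and HTTP (80).
rules = [
    {'protocol': 'tcp', 'port_min': 22, 'port_max': 22, 'cidr': '0.0.0.0/0'},
    {'protocol': 'tcp', 'port_min': 80, 'port_max': 80, 'cidr': '0.0.0.0/0'},
]

def admits(rules, protocol, port):
    """Return True if any ingress rule covers this protocol and port."""
    return any(rule['protocol'] == protocol and
               rule['port_min'] <= port <= rule['port_max']
               for rule in rules)

print(admits(rules, 'tcp', 22), admits(rules, 'tcp', 443))
```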
You can list available security groups with:
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-4
:end-before: step-5
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-4
:end-before: step-5
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-4
:end-before: step-5
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-4
:end-before: step-4-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-4
:end-before: step-5
Once you have created a rule or group, you can also delete it:
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-5
:end-before: step-6
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-5
:end-before: step-6
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-5
:end-before: step-6
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-5
:end-before: step-5-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-5
:end-before: step-6
To see which security groups apply to an instance:
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-6
:end-before: step-7
.. code-block:: none
name: 'all-in-one',
description: 'network access for all-in-one application.',
security_group_rules:
- direction: 'ingress',
protocol: 'tcp',
remote_ip_prefix: '0.0.0.0/0',
port_range_max: 22,
security_group_id: '83aa1bf9-564a-47da-bb46-60cd1c63cc84',
port_range_min: 22,
ethertype: 'IPv4',
id: '5ff0008f-a02d-4b40-9719-f52c77dfdab0',
- direction: 'ingress',
protocol: 'tcp',
remote_ip_prefix: '0.0.0.0/0',
port_range_max: 80,
security_group_id: '83aa1bf9-564a-47da-bb46-60cd1c63cc84',
port_range_min: 80,
ethertype: 'IPv4',
id: 'c2539e49-b110-4657-bf0a-7a221f5e9e6f',
id: '83aa1bf9-564a-47da-bb46-60cd1c63cc84'
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-6
:end-before: step-7
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-6
:end-before: step-7
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-6
:end-before: step-6-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-6
:end-before: step-7
.. todo:: print() ?
Once you have configured permissions, you must know where to
access the application.
Introduction to Floating IPs
----------------------------
As in traditional IT, cloud instances are accessed through IP addresses that
OpenStack assigns. How this is actually done depends on the networking setup
for your cloud. In some cases, you will simply get an Internet-routable IP
address assigned directly to your instance.
The most common way for OpenStack clouds to allocate Internet-routable
IP addresses to instances, however, is through the use of floating
IPs. A floating IP is an address that exists as an entity unto
itself, and can be associated with a specific instance network
interface. When a floating IP address is associated with an instance
network interface, OpenStack redirects traffic bound for that address
to the instance's internal network interface address. Your cloud
provider will generally offer pools of floating IPs for your use.
To use a floating IP, you must first allocate an IP to your project,
then associate it to your instance's network interface.
.. note::
Allocating a floating IP address to an instance does not change
the IP address of the instance, it causes OpenStack to establish
the network translation rules to allow an *additional* IP address.
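The note above can be pictured as a simple translation table: the floating IP is its own entity, and association installs a mapping without touching the instance's fixed address. A toy model (the addresses are illustrative):

```python
# NAT table mapping floating IPs onto instances' internal addresses.
nat_table = {}

def associate(floating_ip, fixed_ip):
    """Install the translation rule; the fixed address is unchanged."""
    nat_table[floating_ip] = fixed_ip

fixed_ip = '10.0.0.5'
associate('203.0.113.21', fixed_ip)
print(nat_table)
```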
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-7
:end-before: step-8
If you have no free floating IPs that have been previously allocated
for your project, first select a floating IP pool offered by your
provider. In this example, we have selected the first one and assume
that it has available IP addresses.
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-8
:end-before: step-9
Now request that an address from this pool be allocated to your project.
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-9
:end-before: step-10
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-7
:end-before: step-8
If you have no free floating IPs that have been previously allocated
for your project, first select a floating IP pool offered by your
provider. In this example, we have selected the first one and assume
that it has available IP addresses.
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-8
:end-before: step-9
Now request that an address from this pool be allocated to your project.
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-9
:end-before: step-10
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-7
:end-before: step-8
.. only:: jclouds
First check for an unused floating IP.
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-7
:end-before: step-7-end
If you have no free floating IPs that have been previously allocated
for your project, then select a floating IP pool offered by your
provider. In this example, we have selected the first one and assume
that it has available IP addresses.
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-8
:end-before: step-8-end
Then request an IP number be allocated from the pool.
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-9
:end-before: step-9-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-7
:end-before: step-8
If you have no free floating IPs that have been allocated for
your project, first select a network that offers allocation of
floating IPs. In this example, we use the network named
:code:`public`.
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-8
:end-before: step-9
Now request an address from this network to be allocated to your project.
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-9
:end-before: step-10
Now that you have an unused floating IP address allocated to your
project, attach it to an instance.
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-10
:end-before: step-11
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-10
:end-before: step-11
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-10
:end-before: step-11
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-10
:end-before: step-10-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-10
:end-before: step-11
That brings us to where we ended up at the end of
:doc:`/getting_started`. But where do we go from here?
Splitting services across multiple instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We have talked about separating functions into different micro-services,
and how that enables us to make use of the cloud architecture. Now
let us see that in action.
The rest of this tutorial will not reference the all-in-one instance you
created in section one. Take a moment to delete this instance.
It is easy to split out services into multiple instances. We will
create a controller instance called :code:`app-controller`, which
hosts the API, database, and messaging services. We will also create a
worker instance called :code:`app-worker-1`, which just generates
fractals.
The first step is to start the controller instance. The instance has
the API service, the database, and the messaging service, as you can
see from the parameters passed to the installation script.
========== ====================== =============================
Parameter Description Values
========== ====================== =============================
:code:`-i` Install a service :code:`messaging` (install RabbitMQ) and :code:`faafo` (install the Faafo app).
:code:`-r` Enable/start something :code:`api` (enable and start the API service), :code:`worker` (enable and start the worker service), and :code:`demo` (run the demo mode to request random fractals).
========== ====================== =============================
.. todo:: https://bugs.launchpad.net/openstack-manuals/+bug/1439918
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-11
:end-before: step-12
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-11
:end-before: step-12
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-11
:end-before: step-12
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-11
:end-before: step-11-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-11
:end-before: step-12
Note that this time, when you create a security group, you include a
rule that applies only to instances that are part of the worker group.
Next, start a second instance, which will be the worker instance:
.. todo :: more text necessary here...
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-12
:end-before: step-13
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-12
:end-before: step-13
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-12
:end-before: step-13
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-12
:end-before: step-12-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-12
:end-before: step-13
Notice that you have added this instance to the worker_group, so it can
access the controller.
As you can see from the parameters passed to the installation script,
you define this instance as the worker instance. You also pass the
address of the API instance and the message queue so that the worker
can pick up requests. The Fractals application installation script
accepts several parameters.
========== ==================================================== ====================================
Parameter Description Example
========== ==================================================== ====================================
:code:`-e` The endpoint URL of the API service. http://localhost/
:code:`-m` The transport URL of the messaging service. amqp://guest:guest@localhost:5672/
:code:`-d` The connection URL for the database (not used here). sqlite:////tmp/sqlite.db
========== ==================================================== ====================================
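The :code:`-m` transport URL follows standard URL syntax, so you can pick it
apart with Python's :code:`urllib.parse`. The sketch below uses the tutorial
default shown in the table (RabbitMQ's default :code:`guest`/:code:`guest`
credentials), not values from your own cloud:

```python
from urllib.parse import urlsplit

# The -m argument from the table above, using the example value.
transport_url = "amqp://guest:guest@localhost:5672/"

# urlsplit understands the netloc parts for any scheme, including amqp://.
parts = urlsplit(transport_url)
print(parts.scheme)    # amqp
print(parts.username)  # guest
print(parts.hostname)  # localhost
print(parts.port)      # 5672
```

When you deploy for real, replace :code:`localhost` with the address of the
controller instance that runs the message queue.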
Now if you make a request for a new fractal, you connect to the
controller instance, :code:`app-controller`, but the work will
actually be performed by a separate worker instance -
:code:`app-worker-1`.
Log in with SSH and use the Fractal app
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Log in to the worker instance, :code:`app-worker-1`, with SSH, using
the previously added SSH key pair "demokey". Start by getting the IP
address of the worker:
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
:start-after: step-13
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:start-after: step-13
:end-before: step-14
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:start-after: step-13
:end-before: step-14
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
:start-after: step-13
:end-before: step-13-end
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:start-after: step-13
:end-before: step-14
Now you can SSH into the instance:
.. code-block:: console
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_WORKER_1
.. note:: Replace :code:`IP_WORKER_1` with the IP address of the
worker instance and USERNAME with the appropriate user name.
Once you have logged in, check to see whether the worker service process
is running as expected. You can find the logs of the worker service
in the directory :code:`/var/log/supervisor/`.
.. code-block:: console
worker # ps ax | grep faafo-worker
17210 ? R 7:09 /usr/bin/python /usr/local/bin/faafo-worker
Open :code:`top` to monitor the CPU usage of the :code:`faafo-worker` process.
Now log into the controller instance, :code:`app-controller`, also
with SSH, using the previously added SSH key pair "demokey".
.. code-block:: console
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_CONTROLLER
.. note:: Replace :code:`IP_CONTROLLER` with the IP address of the
controller instance and USERNAME with the appropriate user name.
Check to see whether the API service process is running as
expected. You can find the logs for the API service in the directory
:file:`/var/log/supervisor/`.
.. code-block:: console
controller # ps ax | grep faafo-api
17209 ? Sl 0:19 /usr/bin/python /usr/local/bin/faafo-api
Now call the Fractal application's command line interface (:code:`faafo`) to
request a few new fractals. The following command requests a few
fractals with random parameters:
.. code-block:: console
controller # faafo --endpoint-url http://localhost --verbose create
2015-04-02 03:55:02.708 19029 INFO faafo.client [-] generating 6 task(s)
Watch :code:`top` on the worker instance. Right after calling
:code:`faafo` the :code:`faafo-worker` process should start consuming
a lot of CPU cycles.
.. code-block:: console
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17210 root 20 0 157216 39312 5716 R 98.8 3.9 12:02.15 faafo-worker
To show the details of a specific fractal, use the :code:`show`
subcommand of the Faafo CLI.
.. code-block:: console
controller # faafo show 154c7b41-108e-4696-a059-1bde9bf03d0a
+------------+------------------------------------------------------------------+
| Parameter | Value |
+------------+------------------------------------------------------------------+
| uuid | 154c7b41-108e-4696-a059-1bde9bf03d0a |
| duration | 4.163147 seconds |
| dimensions | 649 x 869 pixels |
| iterations | 362 |
| xa | -1.77488588389 |
| xb | 3.08249829401 |
| ya | -1.31213919301 |
| yb | 1.95281690897 |
| size | 71585 bytes |
| checksum | 103c056f709b86f5487a24dd977d3ab88fe093791f4f6b6d1c8924d122031902 |
+------------+------------------------------------------------------------------+
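The :code:`checksum` value above is a 64-character hex digest, which is
consistent with SHA-256. Assuming that is the hash in use, a client could
verify a downloaded fractal image like this (the payload below is a
stand-in, not a real fractal):

```python
import hashlib

def checksum_matches(image_bytes, expected_hex):
    """Return True if the SHA-256 digest of image_bytes equals expected_hex."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_hex

# Hypothetical payload; a real check would hash the downloaded image bytes.
data = b"not-a-real-fractal"
digest = hashlib.sha256(data).hexdigest()

print(len(digest))                     # 64 hex characters, like the table above
print(checksum_matches(data, digest))  # True
```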
There are more commands available; find out more details about them
with :code:`faafo get --help`, :code:`faafo list --help`, and
:code:`faafo delete --help`.
.. note:: The application stores the generated fractal images directly
in the database used by the API service instance. Storing
image files in a database is not good practice, but we do it
here only as an easy way to let multiple
instances access the data. For best practice, we
recommend storing objects in Object Storage, which is
covered in :doc:`durability`.
Next steps
~~~~~~~~~~
You should now have a basic understanding of the architecture of
cloud-based applications. In addition, you have had practice
starting new instances, automatically configuring them at boot, and
even modularizing an application so that you may use multiple
instances to run it. These are the basic steps for requesting and
using compute resources in order to run your application on an
OpenStack cloud.
From here, go to :doc:`/scaling_out` to learn how to further scale
your application. Or, try one of these steps in the tutorial:
* :doc:`/durability`: Learn how to use Object Storage to make your application more durable.
* :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component.
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)
Complete code sample
~~~~~~~~~~~~~~~~~~~~
The following file contains all of the code from this section of the
tutorial. This comprehensive code sample lets you view and run the
code as a single script.
Before you run this script, confirm that you have set your
authentication information, the flavor ID, and image ID.
.. only:: shade
.. literalinclude:: ../samples/shade/introduction.py
:language: python
.. only:: fog
.. literalinclude:: ../samples/fog/introduction.rb
:language: ruby
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/introduction.py
:language: python
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/Introduction.java
:language: java
.. only:: openstacksdk
.. literalinclude:: ../samples/openstacksdk/introduction.py
:language: python
==========
Networking
==========
.. todo:: Latter part of the chapter (LBaaS) needs to use Fractals app
entities for the examples.
In previous chapters, all nodes that comprise the fractal application were
attached to the same network.
This chapter introduces the Networking API. This will enable us to build
networking topologies that separate public traffic accessing the application
from traffic between the API and the worker components. We also introduce
load balancing for resilience, and create a secure back-end network for
communication between the database, web server, file storage, and worker
components.
.. warning:: This section assumes that your cloud provider has implemented the
OpenStack Networking API (neutron). Users of clouds which have
implemented legacy networking (nova-network) will have access to
networking via the Compute API. Log in to the Horizon dashboard
and navigate to :guilabel:`Project->Access & Security->API Access`.
If you see a service endpoint for the Network API, your cloud
is most likely running the Networking API. If you are still in
doubt, ask your cloud provider for more information.
.. only:: dotnet
.. warning:: This section has not yet been completed for the .NET SDK
.. only:: fog
.. warning:: fog `supports
<http://www.rubydoc.info/gems/fog/1.8.0/Fog/Network/OpenStack>`_
the OpenStack Networking API, but this section has
not yet been completed.
.. only:: jclouds
.. warning:: jClouds supports the OpenStack Networking API, but this
section has not yet been completed. Please see `this
gist <https://gist.github.com/everett-toews/8701756>`_ in
the meantime.
.. only:: libcloud
.. warning:: Libcloud does not support the OpenStack Networking API.
.. only:: pkgcloud
.. warning:: Pkgcloud supports the OpenStack Networking API, but
this section has not been completed.
.. only:: openstacksdk
.. warning:: This section has not yet been completed for the OpenStack SDK.
.. only:: phpopencloud
.. warning:: PHP-OpenCloud supports the OpenStack Networking API,
but this section has not been completed.
Work with the CLI
~~~~~~~~~~~~~~~~~
Because the SDKs do not fully support the OpenStack Networking API, this
section uses the command-line clients.
Use this guide to install the 'openstack' command-line client:
https://docs.openstack.org/cli-reference/common/cli_install_openstack_command_line_clients.html#install-the-clients
Use this guide to set up the necessary variables for your cloud in an
'openrc' file:
https://docs.openstack.org/cli-reference/common/cli_set_environment_variables_using_openstack_rc.html
Ensure you have an openrc.sh file, source it, and then check that your
openstack client works: ::
$ cat openrc.sh
export OS_USERNAME=your_auth_username
export OS_PASSWORD=your_auth_password
export OS_TENANT_NAME=your_project_name
export OS_AUTH_URL=http://controller:5000/v2.0
export OS_REGION_NAME=your_region_name
$ source openrc.sh
$ openstack --version
3.3.0
Networking segmentation
~~~~~~~~~~~~~~~~~~~~~~~
In traditional data centers, network segments are dedicated to
specific types of network traffic.
The fractal application we are building contains these types of
network traffic:
* public-facing web traffic
* API traffic
* internal worker traffic
For performance reasons, it makes sense to have a network for each
tier, so that traffic from one tier does not "crowd out" other types
of traffic and cause the application to fail. In addition, having
separate networks makes controlling access to parts of the application
easier to manage, improving the overall security of the application.
Prior to this section, the network layout for the Fractal application
would be similar to the following diagram:
.. nwdiag::
nwdiag {
network public {
address = "203.0.113.0/24"
tenant_router [ address = "203.0.113.20" ];
}
network tenant_network {
address = "10.0.0.0/24"
tenant_router [ address = "10.0.0.1" ];
api [ address = "203.0.113.20, 10.0.0.3" ];
webserver1 [ address = "203.0.113.21, 10.0.0.4" ];
webserver2 [ address = "203.0.113.22, 10.0.0.5" ];
worker1 [ address = "203.0.113.23, 10.0.0.6" ];
worker2 [ address = "203.0.113.24, 10.0.0.7" ];
}
}
In this network layout, we assume that the OpenStack cloud in which
you have been building your application has a public network and tenant router
that were previously created by your cloud provider or by yourself, following
the instructions in the appendix.
Many of the network concepts that are discussed in this section are
already present in the diagram above. A tenant router provides routing
and external access for the worker nodes, and floating IP addresses
are associated with each node in the Fractal application cluster to
facilitate external access.
At the end of this section, you make some slight changes to the
networking topology by using the OpenStack Networking API to create
the 10.0.1.0/24 network to which the worker nodes attach. You use the
10.0.3.0/24 API network to attach the Fractal API servers. Web server
instances have their own 10.0.2.0/24 network, which is accessible by
fractal aficionados worldwide, by allocating floating IPs from the
public network.
.. nwdiag::
nwdiag {
network public {
address = "203.0.113.0/24"
tenant_router [ address = "203.0.113.60"];
}
network webserver_network {
address = "10.0.2.0/24"
tenant_router [ address = "10.0.2.1"];
webserver1 [ address = "203.0.113.21, 10.0.2.3"];
webserver2 [ address = "203.0.113.22, 10.0.2.4"];
}
network api_network {
address = "10.0.3.0/24"
tenant_router [ address = "10.0.3.1" ];
api1 [ address = "10.0.3.3" ];
api2 [ address = "10.0.3.4" ];
}
network worker_network {
address = "10.0.1.0/24"
tenant_router [ address = "10.0.1.1" ];
worker1 [ address = "10.0.1.5" ];
worker2 [ address = "10.0.1.6" ];
}
}
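Before creating these networks, you can sanity-check the addressing plan.
A quick sketch with Python's :code:`ipaddress` module confirms that the four
segments in the diagram above do not overlap:

```python
import ipaddress

# The four networks from the target topology above.
public = ipaddress.ip_network("203.0.113.0/24")
webserver = ipaddress.ip_network("10.0.2.0/24")
api = ipaddress.ip_network("10.0.3.0/24")
worker = ipaddress.ip_network("10.0.1.0/24")

nets = [public, webserver, api, worker]

# No two segments may overlap, or routing between them becomes ambiguous.
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

print([str(n) for n in nets])
```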
Introduction to tenant networking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the OpenStack Networking API, the workflow for creating a network
topology that separates the public-facing Fractals app API from the
worker back end is as follows:
* Create a network and subnet for the web server nodes.
* Create a network and subnet for the worker nodes. This is the private data network.
* Create a router for the private data network.
* Allocate floating IPs and assign them to the web server nodes.
Create networks
~~~~~~~~~~~~~~~
Most cloud providers make a public network accessible to you. We will
attach a router to this public network to grant Internet access to our
instances. After also attaching this router to our internal networks,
we will allocate floating IPs from the public network for instances
which need to be accessed from the Internet.
Confirm that we have a public network by listing the
networks our tenant has access to. The public network does not have to
be named public; it could be 'external', 'net04_ext', or something
else. The important thing is that it exists and can be used to reach
the Internet.
::
$ openstack network list
+--------------------------------------+------------------+--------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------------------+--------------------------------------------------+
| 27e6fa33-fd39-475e-b048-6ac924972a03 | public | b12293c9-a1f4-49e3-952f-136a5dd24980 |
+--------------------------------------+------------------+--------------------------------------------------+
Next, create a network and subnet for the workers.
::
$ openstack network create worker_network
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-11-06T22:28:45Z |
| description | |
| headers | |
| id | 4d25ff64-eec3-4ab6-9029-f6d4b5a3e127 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1450 |
| name | worker_network |
| port_security_enabled | True |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 54 |
| revision_number | 3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2016-11-06T22:28:45Z |
+---------------------------+--------------------------------------+
$ openstack subnet create worker_subnet --network worker_network --subnet-range 10.0.1.0/24
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 10.0.1.2-10.0.1.254 |
| cidr | 10.0.1.0/24 |
| created_at | 2016-11-06T22:34:47Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.1.1 |
| headers | |
| host_routes | |
| id | 383309b3-184d-4060-a151-a73dcb0606db |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | worker_subnet |
| network_id | 4d25ff64-eec3-4ab6-9029-f6d4b5a3e127 |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| revision_number | 2 |
| service_types | |
| subnetpool_id | None |
| updated_at | 2016-11-06T22:34:47Z |
+-------------------+--------------------------------------+
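The :code:`allocation_pools` and :code:`gateway_ip` values in the output
follow directly from the CIDR: Neutron defaults the gateway to the first
usable address and hands out the rest. Python's :code:`ipaddress` module
reproduces the arithmetic:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/24")
hosts = list(subnet.hosts())          # usable addresses .1 through .254

gateway = hosts[0]                    # Neutron's default gateway_ip
pool_start, pool_end = hosts[1], hosts[-1]

print(gateway)               # 10.0.1.1
print(pool_start, pool_end)  # 10.0.1.2 10.0.1.254
```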
Now, create a network and subnet for the web servers.
::
$ openstack network create webserver_network
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-11-06T22:36:19Z |
| description | |
| headers | |
| id | 2410c262-6c27-4e99-8c31-045b60499a01 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1450 |
| name | webserver_network |
| port_security_enabled | True |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 96 |
| revision_number | 3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2016-11-06T22:36:19Z |
+---------------------------+--------------------------------------+
$ openstack subnet create webserver_subnet --network webserver_network --subnet-range 10.0.2.0/24
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 10.0.2.2-10.0.2.254 |
| cidr | 10.0.2.0/24 |
| created_at | 2016-11-06T22:37:47Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.2.1 |
| headers | |
| host_routes | |
| id | 5878afa5-8f1d-4de5-8018-530044a49934 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | webserver_subnet |
| network_id | 2410c262-6c27-4e99-8c31-045b60499a01 |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| revision_number | 2 |
| service_types | |
| subnetpool_id | None |
| updated_at | 2016-11-06T22:37:47Z |
+-------------------+--------------------------------------+
Next, create a network and subnet for the API servers.
::
$ openstack network create api_network
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-11-06T22:38:51Z |
| description | |
| headers | |
| id | 8657f3a3-6e7d-40a1-a979-1a8c54d5e434 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1450 |
| name | api_network |
| port_security_enabled | True |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 64 |
| revision_number | 3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2016-11-06T22:38:51Z |
+---------------------------+--------------------------------------+
$ openstack subnet create api_subnet --network api_network --subnet-range 10.0.3.0/24
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 10.0.3.2-10.0.3.254 |
| cidr | 10.0.3.0/24 |
| created_at | 2016-11-06T22:40:15Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.3.1 |
| headers | |
| host_routes | |
| id | 614e7801-eb35-45c6-8e49-da5bdc9161f5 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | api_subnet |
| network_id | 8657f3a3-6e7d-40a1-a979-1a8c54d5e434 |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| revision_number | 2 |
| service_types | |
| subnetpool_id | None |
| updated_at | 2016-11-06T22:40:15Z |
+-------------------+--------------------------------------+
Now that you have created the networks, create two floating IPs for
the web servers. Ensure that you replace 'public' with
the name of the public/external network offered by your cloud provider.
::
$ openstack floating ip create public
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2016-11-06T22:47:30Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 172.24.4.2 |
| floating_network_id | 27e6fa33-fd39-475e-b048-6ac924972a03 |
| headers | |
| id | 820385df-36a7-415d-955c-6ff662fdb796 |
| port_id | None |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2016-11-06T22:47:30Z |
+---------------------+--------------------------------------+
$ openstack floating ip create public
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2016-11-06T22:48:45Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 172.24.4.12 |
| floating_network_id | 27e6fa33-fd39-475e-b048-6ac924972a03 |
| headers | |
| id | 3d9f1591-a31e-4684-8346-f4bb33a176b0 |
| port_id | None |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2016-11-06T22:48:45Z |
+---------------------+--------------------------------------+
.. note:: The world is running out of IPv4 addresses. If you get the
"No more IP addresses available on network" error,
contact your cloud administrator. You may also want to ask
about IPv6 :)
Connecting to the Internet
~~~~~~~~~~~~~~~~~~~~~~~~~~
Most instances require access to the Internet. The instances in your
Fractals app are no exception! Add routers to pass traffic between the
various networks that you use.
::
$ openstack router create project_router
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-11-06T22:49:59Z |
| description | |
| distributed | False |
| external_gateway_info | null |
| flavor_id | None |
| ha | False |
| headers | |
| id | e11eba23-961c-43d7-8da0-561abdad880c |
| name | project_router |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| revision_number | 2 |
| routes | |
| status | ACTIVE |
| updated_at | 2016-11-06T22:49:59Z |
+-------------------------+--------------------------------------+
Specify an external gateway for your router to tell OpenStack which
network to use for Internet access.
::
$ openstack router set project_router --external-gateway public
Set gateway for router project_router
$ openstack router show project_router
+-------------------------+-------------------------------------------------------------------------+
| Field | Value |
+-------------------------+-------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2016-11-06T22:49:59Z |
| description | |
| distributed | False |
| external_gateway_info | {"network_id": "27e6fa33-fd39-475e-b048-6ac924972a03", "enable_snat": |
| | true, "external_fixed_ips": [{"subnet_id": |
| | "d02006a5-3d10-41f1-a349-6024af41cda0", "ip_address": "172.24.4.13"}, |
| | {"subnet_id": "b12293c9-a1f4-49e3-952f-136a5dd24980", "ip_address": |
| | "2001:db8::9"}]} |
| flavor_id | None |
| ha | False |
| id | e11eba23-961c-43d7-8da0-561abdad880c |
| name | project_router |
| project_id | a59a543373bc4b12b74f07355ad1cabe |
| revision_number | 5 |
| routes | |
| status | ACTIVE |
| updated_at | 2016-11-06T22:53:04Z |
+-------------------------+-------------------------------------------------------------------------+
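The :code:`external_gateway_info` field is a JSON document. A short sketch
shows how to pull the router's external fixed IPs out of the value printed
above:

```python
import json

# The external_gateway_info value from the `openstack router show` output above.
gateway_info = json.loads(
    '{"network_id": "27e6fa33-fd39-475e-b048-6ac924972a03",'
    ' "enable_snat": true,'
    ' "external_fixed_ips": ['
    '  {"subnet_id": "d02006a5-3d10-41f1-a349-6024af41cda0",'
    '   "ip_address": "172.24.4.13"},'
    '  {"subnet_id": "b12293c9-a1f4-49e3-952f-136a5dd24980",'
    '   "ip_address": "2001:db8::9"}]}'
)

# Collect the router's external addresses (one IPv4, one IPv6 in this example).
external_ips = [fip["ip_address"] for fip in gateway_info["external_fixed_ips"]]
print(external_ips)                 # ['172.24.4.13', '2001:db8::9']
print(gateway_info["enable_snat"])  # True
```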
Now, attach your router to the worker, API, and web server subnets.
::
$ openstack router add subnet project_router worker_subnet
$ openstack router add subnet project_router api_subnet
$ openstack router add subnet project_router webserver_subnet
Booting a worker
----------------
Now that you have prepared the networking infrastructure, you can go
ahead and boot an instance on it. Ensure you use appropriate flavor
and image values for your cloud - see :doc:`getting_started` if you have not
already.
.. todo:: Show how to create an instance in libcloud using the network
we just created. - libcloud does not yet support this.
::
$ nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64-disk --nic net-id=953224c6-c510-45c5-8a29-37deffd3d78e worker1
+--------------------------------------+-----------------------------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 9vU8KSY4oDht |
| config_drive | |
| created | 2015-03-30T05:26:04Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 9e188a47-a246-463e-b445-027d6e2966e0 |
| image | cirros-0.3.3-x86_64-disk (ad605ff9-4593-4048-900b-846d6401c193) |
| key_name | - |
| metadata | {} |
| name | worker1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | f77bf3369741408e89d8f6fe090d29d2 |
| updated | 2015-03-30T05:26:04Z |
| user_id | a61292a5691d4c6c831b7a8f07921261 |
+--------------------------------------+-----------------------------------------------------------------+
Load balancing
~~~~~~~~~~~~~~
After separating the Fractal worker nodes into their own networks, the
next logical step is to move the Fractal API service to a load
balancer, so that multiple API workers can handle requests. By using a
load balancer, the API service can be scaled out in a similar fashion
to the worker nodes.
Neutron LbaaS API
-----------------
.. note:: This section is based on the Neutron LBaaS API version 1.0
https://docs.openstack.org/admin-guide/networking_adv-features.html#basic-load-balancer-as-a-service-operations
.. todo:: libcloud support added 0.14:
https://developer.rackspace.com/blog/libcloud-0-dot-14-released/ -
this section needs rewriting to use the libcloud API
The OpenStack Networking API provides support for creating
load balancers, which can be used to scale the Fractal app web service.
In the following example, we create two compute instances via the
Compute API, then instantiate a load balancer that will use a virtual
IP (VIP) for accessing the web service offered by the two compute
nodes. The end result will be the following network topology:
.. nwdiag::
nwdiag {
network public {
address = "203.0.113.0/24"
tenant_router [ address = "203.0.113.60" ];
loadbalancer [ address = "203.0.113.63" ];
}
network webserver_network {
address = "10.0.2.0/24"
tenant_router [ address = "10.0.2.1"];
webserver1 [ address = "203.0.113.21, 10.0.2.3"];
webserver2 [ address = "203.0.113.22, 10.0.2.4"];
}
}
Start by looking at what is already in place.
::
$ openstack network list
+--------------------------------------+-------------------+---------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-------------------+---------------------------------------+
| 2410c262-6c27-4e99-8c31-045b60499a01 | webserver_network | 5878afa5-8f1d-4de5-8018-530044a49934 |
| 27e6fa33-fd39-475e-b048-6ac924972a03 | public | b12293c9-a1f4-49e3-952f-136a5dd24980, |
| | | d02006a5-3d10-41f1-a349-6024af41cda0 |
+--------------------------------------+-------------------+---------------------------------------+
Go ahead and create two instances.
::
$ nova boot --flavor 1 --image 53ff0943-99ba-42d2-a10d-f66656372f87 --min-count 2 test
+--------------------------------------+-----------------------------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | z84zWFCcpppH |
| config_drive | |
| created | 2015-04-02T02:45:09Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 8d579f4a-116d-46b9-8db3-aa55b76f76d8 |
| image | cirros-0.3.3-x86_64-disk (53ff0943-99ba-42d2-a10d-f66656372f87) |
| key_name | - |
| metadata | {} |
| name | test-1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 0cb06b70ef67424b8add447415449722 |
| updated | 2015-04-02T02:45:09Z |
| user_id | d95381d331034e049727e2413efde39f |
+--------------------------------------+-----------------------------------------------------------------+
Confirm that they were added:
::
$ nova list
+--------------------------------------+--------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------+--------+------------+-------------+------------------+
| 8d579f4a-116d-46b9-8db3-aa55b76f76d8 | test-1 | ACTIVE | - | Running | private=10.0.2.4 |
| 8fadf892-b6e9-44f4-b132-47c6762ffa2c | test-2 | ACTIVE | - | Running | private=10.0.2.3 |
+--------------------------------------+--------+--------+------------+-------------+------------------+
Look at which ports are available:
::
$ openstack port list
+--------------------------------------+------+-------------------+--------------------------------------------+
| ID | Name | MAC Address | Fixed IP Addresses |
+--------------------------------------+------+-------------------+--------------------------------------------+
| 11b38c90-f55e-41a7-b68b-0d434d66bfa2 | | fa:16:3e:21:95:a1 | ip_address='10.0.0.1', subnet_id='e7f75523 |
| | | | -ae4b-4611-85a3-07efa2e1ba0f' |
| 523331cf-5636-4298-a14c-f545bb32abcf | | fa:16:3e:f8:a1:81 | ip_address='10.0.0.2', subnet_id='e7f75523 |
| | | | -ae4b-4611-85a3-07efa2e1ba0f' |
| | | | ip_address='2001:db8:8000:0:f816:3eff:fef8 |
| | | | :a181', subnet_id='f8628fd8-8d61-43e2-9dc8 |
| | | | -a03d25443b7d' |
| cbba0f37-c1a0-4fc8-8722-68e42de7df16 | | fa:16:3e:39:a6:18 | ip_address='2001:db8:8000::1', subnet_id=' |
| | | | f8628fd8-8d61-43e2-9dc8-a03d25443b7d' |
+--------------------------------------+------+-------------------+--------------------------------------------+
Next, create additional floating IPs. Specify the fixed IP addresses
they should point to and the ports that they should use:
::
$ openstack floating ip create public --fixed-ip-address 10.0.0.2 --port 523331cf-5636-4298-a14c-f545bb32abcf
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2016-11-06T23:23:29Z |
| description | |
| fixed_ip_address | 10.0.0.2 |
| floating_ip_address | 172.24.4.2 |
| floating_network_id | 27e6fa33-fd39-475e-b048-6ac924972a03 |
| headers | |
| id | 0ed15644-4290-4adf-91d4-5713eea895e5 |
| port_id | 523331cf-5636-4298-a14c-f545bb32abcf |
| project_id | 3d2db0593c8045a392fd18385b401b5b |
| revision_number | 1 |
| router_id | 309d1402-a373-4022-9ab8-6824aad1a415 |
| status | DOWN |
| updated_at | 2016-11-06T23:23:29Z |
+---------------------+--------------------------------------+
$ openstack floating ip create public --fixed-ip-address 10.0.0.1 --port 11b38c90-f55e-41a7-b68b-0d434d66bfa2
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2016-11-06T23:25:26Z |
| description | |
| fixed_ip_address | 10.0.0.1 |
| floating_ip_address | 172.24.4.8 |
| floating_network_id | 27e6fa33-fd39-475e-b048-6ac924972a03 |
| headers | |
| id | 68082405-82f2-4072-b5c3-7047df527a8a |
| port_id | 11b38c90-f55e-41a7-b68b-0d434d66bfa2 |
| project_id | 3d2db0593c8045a392fd18385b401b5b |
| revision_number | 1 |
| router_id | 309d1402-a373-4022-9ab8-6824aad1a415 |
| status | DOWN |
| updated_at | 2016-11-06T23:25:26Z |
+---------------------+--------------------------------------+
You are ready to create members for the load balancer pool, which
reference the floating IPs:
::
$ neutron lb-member-create --address 203.0.113.21 --protocol-port 80 mypool
Created a new member:
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| address | 203.0.113.21 |
| admin_state_up | True |
| id | 679966a9-f719-4df0-86cf-3a24d0433b38 |
| pool_id | 600496f0-196c-431c-ae35-a0af9bb01d32 |
| protocol_port | 80 |
| status | PENDING_CREATE |
| status_description | |
| tenant_id | 0cb06b70ef67424b8add447415449722 |
| weight | 1 |
+--------------------+--------------------------------------+
$ neutron lb-member-create --address 203.0.113.22 --protocol-port 80 mypool
Created a new member:
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| address | 203.0.113.22 |
| admin_state_up | True |
| id | f3ba0605-4926-4498-b86d-51002892e93a |
| pool_id | 600496f0-196c-431c-ae35-a0af9bb01d32 |
| protocol_port | 80 |
| status | PENDING_CREATE |
| status_description | |
| tenant_id | 0cb06b70ef67424b8add447415449722 |
| weight | 1 |
+--------------------+--------------------------------------+
You should be able to see them in the member list:
::
$ neutron lb-member-list
+--------------------------------------+--------------+---------------+--------+----------------+--------+
| id | address | protocol_port | weight | admin_state_up | status |
+--------------------------------------+--------------+---------------+--------+----------------+--------+
| 679966a9-f719-4df0-86cf-3a24d0433b38 | 203.0.113.21 | 80 | 1 | True | ACTIVE |
| f3ba0605-4926-4498-b86d-51002892e93a | 203.0.113.22 | 80 | 1 | True | ACTIVE |
+--------------------------------------+--------------+---------------+--------+----------------+--------+
Now, create a health monitor that will ensure that members of the
load balancer pool are active and able to respond to requests. If a
member in the pool dies or is unresponsive, the member is removed from
the pool so that client requests are routed to another active member.
::
$ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Created a new health_monitor:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| admin_state_up | True |
| delay | 3 |
| expected_codes | 200 |
| http_method | GET |
| id | 663345e6-2853-43b2-9ccb-a623d5912345 |
| max_retries | 3 |
| pools | |
| tenant_id | 0cb06b70ef67424b8add447415449722 |
| timeout | 3 |
| type | HTTP |
| url_path | / |
+----------------+--------------------------------------+
$ neutron lb-healthmonitor-associate 663345e6-2853-43b2-9ccb-a623d5912345 mypool
Associated health monitor 663345e6-2853-43b2-9ccb-a623d5912345
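The monitor's removal rule can be sketched in a few lines of Python (a
hypothetical illustration of the semantics only, not Neutron's
implementation; the :code:`max_retries` value matches the monitor
created above):

```python
# Hypothetical sketch of the health-monitor rule configured above:
# a member that fails `max_retries` consecutive checks is taken out
# of rotation; a passing check resets its failure count.
def apply_health_checks(members, check, max_retries=3):
    """Return the members that remain in rotation.

    `members` maps an address to its consecutive-failure count;
    `check` is a callable that returns True when the member responds.
    """
    healthy = {}
    for address, failures in members.items():
        if check(address):
            healthy[address] = 0             # reset on success
        elif failures + 1 < max_retries:
            healthy[address] = failures + 1  # still in rotation, on notice
        # else: the member is removed from the pool
    return healthy

members = {"203.0.113.21": 0, "203.0.113.22": 2}
members = apply_health_checks(members, check=lambda addr: addr.endswith(".21"))
# 203.0.113.22 reaches three consecutive failures and is removed
```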
Now create a virtual IP that will be used to direct traffic between
the various members of the pool:
::
$ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id 47fd3ff1-ead6-4d23-9ce6-2e66a3dae425 mypool
Created a new vip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| address | 203.0.113.63 |
| admin_state_up | True |
| connection_limit | -1 |
| description | |
| id | f0bcb66e-5eeb-447b-985e-faeb67540c2f |
| name | myvip |
| pool_id | 600496f0-196c-431c-ae35-a0af9bb01d32 |
| port_id | bc732f81-2640-4622-b624-993a5ae185c5 |
| protocol | HTTP |
| protocol_port | 80 |
| session_persistence | |
| status | PENDING_CREATE |
| status_description | |
| subnet_id | 47fd3ff1-ead6-4d23-9ce6-2e66a3dae425 |
| tenant_id | 0cb06b70ef67424b8add447415449722 |
+---------------------+--------------------------------------+
And confirm it is in place:
::
$ neutron lb-vip-list
+--------------------------------------+-------+--------------+----------+----------------+--------+
| id | name | address | protocol | admin_state_up | status |
+--------------------------------------+-------+--------------+----------+----------------+--------+
| f0bcb66e-5eeb-447b-985e-faeb67540c2f | myvip | 203.0.113.63 | HTTP | True | ACTIVE |
+--------------------------------------+-------+--------------+----------+----------------+--------+
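Conceptually, the VIP hands each incoming request to the next pool
member in turn. A simplified round-robin sketch (member weights, all 1
in this pool, are ignored here):

```python
from itertools import cycle

# Simplified round-robin dispatch, as the VIP does across pool members.
# The addresses are the two pool members created above.
members = ["203.0.113.21:80", "203.0.113.22:80"]
next_member = cycle(members)

# Each request that arrives at the VIP goes to the next member in turn.
first_four = [next(next_member) for _ in range(4)]
```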
Now, look at the big picture.
Final result
~~~~~~~~~~~~
With the addition of the load balancer, the Fractal app's networking
topology now reflects the modular nature of the application itself.
.. nwdiag::
nwdiag {
network public {
address = "203.0.113.0/24"
tenant_router [ address = "203.0.113.60"];
loadbalancer [ address = "203.0.113.63" ];
}
network webserver_network{
address = "10.0.2.0/24"
tenant_router [ address = "10.0.2.1"];
webserver1 [ address = "203.0.113.21, 10.0.2.3"];
webserver2 [ address = "203.0.113.22, 10.0.2.4"];
}
network api_network {
address = "10.0.3.0/24"
tenant_router [ address = "10.0.3.1" ];
api1 [ address = "10.0.3.3" ];
api2 [ address = "10.0.3.4" ];
}
network worker_network {
address = "10.0.1.0/24"
tenant_router [ address = "10.0.1.1" ];
worker1 [ address = "10.0.1.5" ];
worker2 [ address = "10.0.1.6" ];
}
}
Next steps
~~~~~~~~~~
You should now be fairly confident working with the Network API. To
see calls that we did not cover, see the networking documentation of your
SDK, or try one of these tutorial steps:
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)

=============
Orchestration
=============
This chapter explains the importance of durability and scalability for
your cloud-based applications. In most cases, achieving these qualities
in practice means automating tasks such as scaling and other
operational work.
The Orchestration service provides a template-based way to describe a
cloud application, then coordinates running the needed OpenStack API
calls to run cloud applications. The templates enable you to create
most OpenStack resource types, such as instances, networking
information, volumes, security groups, and even users. It also provides
more advanced functionality, such as instance high availability,
instance auto-scaling, and nested stacks.
The OpenStack Orchestration API uses three constructs: stacks, resources,
and templates.
You create stacks from templates, which contain resources. Resources are an
abstraction in the HOT (Heat Orchestration Template) template language, which
enables you to define different cloud resources by setting the :code:`type`
attribute.
For example, you might use the Orchestration API to create two compute
instances by creating a stack and by passing a template to the Orchestration
API. That template contains two resources with the :code:`type` attribute set
to :code:`OS::Nova::Server`.
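Such a template might look like the following minimal sketch (parameter
names here are illustrative placeholders, not values from a real cloud):

```yaml
heat_template_version: 2015-04-30

description: Minimal sketch - two compute instances from one template

parameters:
  flavor:
    type: string
    default: m1.small
  image_id:
    type: string
  key_name:
    type: string

resources:
  server_one:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      image: { get_param: image_id }
      key_name: { get_param: key_name }
  server_two:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      image: { get_param: image_id }
      key_name: { get_param: key_name }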
That example is simplistic, of course, but the flexibility of the resource
object enables the creation of templates that contain all the required cloud
infrastructure to run an application, such as load balancers, block storage
volumes, compute instances, networking topology, and security policies.
.. note:: The Orchestration service is not deployed by default in every cloud.
If these commands do not work, it means the Orchestration API is not
available; ask your support team for assistance.
This section introduces the
`HOT templating language <https://docs.openstack.org/heat/latest/template_guide/hot_guide.html>`_,
and takes you through some common OpenStack Orchestration calls.
In previous sections, you used your SDK to programmatically interact with
OpenStack. In this section, you use the 'heat' command-line client to access
the Orchestration API directly through template files.
Install the 'heat' command-line client by following this guide:
https://docs.openstack.org/cli-reference/common/cli_install_openstack_command_line_clients.html#install-the-clients
Use this guide to set up the necessary variables for your cloud in an
'openrc' file:
https://docs.openstack.org/cli-reference/common/cli_set_environment_variables_using_openstack_rc.html
.. only:: dotnet
.. warning:: the .NET SDK does not currently support OpenStack Orchestration.
.. only:: fog
.. note:: fog `does support OpenStack Orchestration
<https://github.com/fog/fog-openstack/tree/master/lib/fog/openstack/models/orchestration>`_.
.. only:: jclouds
.. warning:: Jclouds does not currently support OpenStack Orchestration.
See this `bug report <https://issues.apache.org/jira/browse/JCLOUDS-693>`_.
.. only:: libcloud
.. warning:: libcloud does not currently support OpenStack Orchestration.
.. only:: pkgcloud
.. note:: Pkgcloud supports OpenStack Orchestration, but this section
is `not written yet <https://github.com/pkgcloud/pkgcloud/blob/master/docs/providers/openstack/orchestration.md>`_.
.. only:: openstacksdk
.. warning:: The OpenStack SDK does not currently support OpenStack Orchestration.
.. only:: phpopencloud
.. note:: PHP-opencloud supports OpenStack Orchestration, but this section is not written yet.
HOT templating language
-----------------------
To learn about the template syntax for OpenStack Orchestration, how to
create basic templates, and their inputs and outputs, see
`Heat Orchestration Template (HOT) Guide <https://docs.openstack.org/heat/latest/template_guide/hot_guide.html>`_.
Work with stacks: Basics
------------------------
**Stack create**
The
`hello_faafo <https://opendev.org/openstack/api-site/raw/firstapp/samples/heat/hello_faafo.yaml>`_ Hot template demonstrates
how to create a compute instance that builds and runs the Fractal application
as an all-in-one installation.
You pass in these configuration settings as parameters:
- The flavor
- Your ssh key name
- The unique identifier (UUID) of the image
.. code-block:: console
$ wget https://opendev.org/openstack/api-site/raw/firstapp/samples/heat/hello_faafo.yaml
$ openstack stack create -t hello_faafo.yaml \
--parameter flavor=m1.small\;key_name=test\;image_id=5bbe4073-90c0-4ec9-833c-092459cc4539 hello_faafo
+---------------------+-----------------------------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------------------------+
| id | 0db2c026-fb9a-4849-b51d-b1df244096cd |
| stack_name | hello_faafo |
| description | A template to bring up the faafo application as an all in one install |
| | |
| creation_time | 2015-04-01T03:20:25 |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | |
+---------------------+-----------------------------------------------------------------------+
The stack automatically creates a Nova instance, as follows:
.. code-block:: console
$ nova list
+--------------------------------------+---------------------------------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------------------------+--------+------------+-------------+------------------+
| 9bdf0e2f-415e-43a0-90ea-63a5faf86cf9 | hello_faafo-server-dwmwhzfxgoor | ACTIVE | - | Running | private=10.0.0.3 |
+--------------------------------------+---------------------------------+--------+------------+-------------+------------------+
Verify that the stack was successfully created:
.. code-block:: console
$ openstack stack list
+--------------------------------------+-------------+-----------------+---------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+-------------+-----------------+---------------------+--------------+
| 0db2c026-fb9a-4849-b51d-b1df244096cd | hello_faafo | CREATE_COMPLETE | 2015-04-01T03:20:25 | None |
+--------------------------------------+-------------+-----------------+---------------------+--------------+
The stack reports an initial :code:`CREATE_IN_PROGRESS` status. When all
software is installed, the status changes to :code:`CREATE_COMPLETE`.
You might have to run the :command:`openstack stack list` command a few
times before the stack creation is complete.
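This wait-and-retry step is easy to automate. A hedged sketch, in which
:code:`get_status` is a hypothetical callable standing in for an
:command:`openstack stack show` invocation:

```python
import time

def wait_for_stack(get_status, target="CREATE_COMPLETE",
                   interval=10, max_polls=60):
    """Poll `get_status` until the stack reaches `target`.

    `get_status` stands in for a real status lookup, for example
    shelling out to `openstack stack show` or using an SDK client.
    """
    for _ in range(max_polls):
        status = get_status()
        if status == target:
            return True
        if status.endswith("_FAILED"):
            raise RuntimeError("stack entered %s" % status)
        time.sleep(interval)
    return False

# Example with a canned status sequence instead of a live cloud:
statuses = iter(["CREATE_IN_PROGRESS", "CREATE_IN_PROGRESS",
                 "CREATE_COMPLETE"])
assert wait_for_stack(lambda: next(statuses), interval=0)
```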
**Show information about the stack**
Get more information about the stack:
.. code-block:: console
$ openstack stack show hello_faafo
The `outputs` property shows the URL through which you can access the Fractal
application. You can SSH into the instance.
**Remove the stack**
.. code-block:: console
$ openstack stack delete hello_faafo
Are you sure you want to delete this stack(s) [y/N]?
Verify the nova instance was deleted when the stack was removed:
.. code-block:: console
$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
While this stack starts a single instance that builds and runs the Fractal
application as an all-in-one installation, you can make very complicated
templates that impact dozens of instances or that add and remove instances on
demand. Continue to the next section to learn more.
Work with stacks: Advanced
--------------------------
With the Orchestration API, the Fractal application can create an auto-scaling
group for all parts of the application, to dynamically provision more compute
resources during periods of heavy utilization, and also terminate compute
instances to scale down, as demand decreases.
To learn about auto-scaling with the Orchestration API, read these articles:
* https://superuser.openstack.org/articles/simple-auto-scaling-environment-with-heat
* https://superuser.openstack.org/articles/understanding-openstack-heat-auto-scaling
Initially, the focus is on scaling the workers because they consume the most
resources.
The example template depends on the ceilometer project, which is part of the
`Telemetry service <https://wiki.openstack.org/wiki/Telemetry>`_.
.. note:: The Telemetry service is not deployed by default in every cloud.
If the ceilometer commands do not work, this example does not work;
ask your support team for assistance.
To better understand how the template works, use this guide to install the
'ceilometer' command-line client:
* https://docs.openstack.org/cli-reference/common/cli_install_openstack_command_line_clients.html#install-the-clients
To set up the necessary variables for your cloud in an 'openrc' file, use this
guide:
* https://docs.openstack.org/cli-reference/common/cli_set_environment_variables_using_openstack_rc.html
The Telemetry service uses meters to measure a given aspect of a resource's
usage. The meter that we are interested in is the :code:`cpu_util` meter.
The value of a meter is regularly sampled and saved with a timestamp.
These saved samples are aggregated to produce a statistic. The statistic that
we are interested in is **avg**: the average of the samples over a given period.
Statistics are useful because the Telemetry service supports alarms: an
alarm fires when the average statistic breaches a configured threshold.
When the alarm fires, an associated action is performed.
The stack we will be building uses the firing of alarms to control the
addition or removal of worker instances.
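The evaluation logic can be sketched as follows (a simplified
illustration only; the real thresholds come from the alarms that the
stack creates, shown later as :code:`cpu_util > 90.0` and
:code:`cpu_util < 15.0`):

```python
# Simplified sketch of the alarm evaluation the stack configures:
# average cpu_util over one period, compared against the same
# thresholds as the cpu_alarm_high / cpu_alarm_low alarms.
def evaluate_alarms(samples, low=15.0, high=90.0):
    """Return 'scale_up', 'scale_down', or None for one period."""
    if not samples:
        return None  # ceilometer would report 'insufficient data'
    avg = sum(samples) / len(samples)
    if avg > high:
        return "scale_up"    # cpu_alarm_high fires
    if avg < low:
        return "scale_down"  # cpu_alarm_low fires
    return None

busy_period = [82.5, 100.8]  # workers rendering fractals
idle_period = [0.45, 0.47]
# evaluate_alarms(busy_period) -> 'scale_up'
# evaluate_alarms(idle_period) -> 'scale_down'
```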
To verify that ceilometer is installed, list the known meters:
.. code-block:: console
$ ceilometer meter-list
This command returns a very long list of meters. Once a meter is created, it
is never thrown away!
Launch the stack with auto-scaling workers:
.. code-block:: console
$ wget https://opendev.org/openstack/api-site/raw/firstapp/samples/heat/faafo_autoscaling_workers.yaml
$ openstack stack create -t faafo_autoscaling_workers.yaml \
--parameter flavor=m1.small\;key_name=test\;image_id=5bbe4073-90c0-4ec9-833c-092459cc4539 \
faafo_autoscaling_workers
+---------------------+-----------------------------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------------------------+
| id | 0db2c026-fb9a-4849-b51d-b1df244096cd |
| stack_name | faafo_autoscaling_workers |
| description | A template to bring up the faafo application as an all in one install |
| | |
| creation_time | 2015-11-17T05:12:06 |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | |
+---------------------+-----------------------------------------------------------------------+
As before, pass in configuration settings as parameters.
And as before, the stack takes a few minutes to build!
Wait for it to reach the :code:`CREATE_COMPLETE` status:
.. code-block:: console
$ openstack stack list
+--------------------------------------+---------------------------+-----------------+---------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+---------------------------+-----------------+---------------------+--------------+
| 0db2c026-fb9a-4849-b51d-b1df244096cd | faafo_autoscaling_workers | CREATE_COMPLETE | 2015-11-17T05:12:06 | None |
+--------------------------------------+---------------------------+-----------------+---------------------+--------------+
Run the :code:`nova list` command. This template created three instances:
.. code-block:: console
$ nova list
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| 0de89b0a-5bfd-497b-bfa2-c13f6ef7a67e | api | ACTIVE | - | Running | public=115.146.89.75 |
| a6b9b334-e8ba-4c56-ab53-cacfc6f3ad43 | services | ACTIVE | - | Running | public=115.146.89.74 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | worker | ACTIVE | - | Running | public=115.146.89.80 |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
Note that the worker instance is part of an :code:`OS::Heat::AutoScalingGroup`.
Confirm that the stack created two alarms:
.. code-block:: console
$ ceilometer alarm-list
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| Alarm ID | Name | State | Severity | Enabled | Continuous | Alarm condition | Time constraints |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| 2bc8433f-9f8a-4c2c-be88-d841d9de1506 | testFaafo-cpu_alarm_low-torkcwquons4 | ok | low | True | True | cpu_util < 15.0 during 1 x 60s | None |
| 7755cc9a-26f3-4e2b-a9af-a285ec8524da | testFaafo-cpu_alarm_high-qqtbvk36l6nq | ok | low | True | True | cpu_util > 90.0 during 1 x 60s | None |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
.. note:: If either alarm reports the :code:`insufficient data` state, the
default sampling period of the stack is probably too low for your
cloud; ask your support team for assistance. You can set the
period through the :code:`period` parameter of the stack to match your
cloud's requirements.
Use the stack ID to get more information about the stack:
.. code-block:: console
$ openstack stack show 0db2c026-fb9a-4849-b51d-b1df244096cd
The outputs section of the stack contains two ceilometer command-line queries:
* :code:`ceilometer_sample_query`: shows the samples used to build the statistics.
* :code:`ceilometer_statistics_query`: shows the statistics used to trigger the alarms.
These queries provide a view into the behavior of the stack.
In a new Terminal window, SSH into the 'api' API instance. Use the key pair
name that you passed in as a parameter.
.. code-block:: console
$ ssh -i ~/.ssh/test USERNAME@IP_API
In your SSH session, confirm that no fractals were generated:
.. code-block:: console
$ faafo list
2015-11-18 11:07:20.464 8079 INFO faafo.client [-] listing all fractals
+------+------------+----------+
| UUID | Dimensions | Filesize |
+------+------------+----------+
+------+------------+----------+
Then, create a pair of large fractals:
.. code-block:: console
$ faafo create --height 9999 --width 9999 --tasks 2
In the Terminal window where you run ceilometer, run
:code:`ceilometer_sample_query` to see the samples.
.. code-block:: console
$ ceilometer sample-list -m cpu_util -q metadata.user_metadata.stack=0db2c026-fb9a-4849-b51d-b1df244096cd
+--------------------------------------+----------+-------+----------------+------+---------------------+
| Resource ID | Name | Type | Volume | Unit | Timestamp |
+--------------------------------------+----------+-------+----------------+------+---------------------+
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 100.847457627 | % | 2015-11-18T00:15:50 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 82.4754098361 | % | 2015-11-18T00:14:51 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 0.45 | % | 2015-11-18T00:13:50 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 0.466666666667 | % | 2015-11-18T00:12:50 |
+--------------------------------------+----------+-------+----------------+------+---------------------+
The CPU utilization across workers increases as workers start to create the fractals.
Run the :code:`ceilometer_statistics_query` command to see the derived statistics.
.. code-block:: console
$ ceilometer statistics -m cpu_util -q metadata.user_metadata.stack=0db2c026-fb9a-4849-b51d-b1df244096cd -p 60 -a avg
+--------+---------------------+---------------------+----------------+----------+---------------------+---------------------+
| Period | Period Start | Period End | Avg | Duration | Duration Start | Duration End |
+--------+---------------------+---------------------+----------------+----------+---------------------+---------------------+
| 60 | 2015-11-18T00:12:45 | 2015-11-18T00:13:45 | 0.466666666667 | 0.0 | 2015-11-18T00:12:50 | 2015-11-18T00:12:50 |
| 60 | 2015-11-18T00:13:45 | 2015-11-18T00:14:45 | 0.45 | 0.0 | 2015-11-18T00:13:50 | 2015-11-18T00:13:50 |
| 60 | 2015-11-18T00:14:45 | 2015-11-18T00:15:45 | 82.4754098361 | 0.0 | 2015-11-18T00:14:51 | 2015-11-18T00:14:51 |
| 60 | 2015-11-18T00:15:45 | 2015-11-18T00:16:45 | 100.847457627 | 0.0 | 2015-11-18T00:15:50 | 2015-11-18T00:15:50 |
+--------+---------------------+---------------------+----------------+----------+---------------------+---------------------+
.. note:: The samples and the statistics are listed in opposite time order!
See the state of the alarms set up by the template:
.. code-block:: console
$ ceilometer alarm-list
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| Alarm ID | Name | State | Severity | Enabled | Continuous | Alarm condition | Time constraints |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| 56c3022e-f23c-49ad-bf59-16a6875f3bdf | testFaafo-cpu_alarm_low-miw5tmomewot | ok | low | True | True | cpu_util < 15.0 during 1 x 60s | None |
| 70ff7b00-d56d-4a43-bbb2-e18952ae6605 | testFaafo-cpu_alarm_high-ffhsmylfzx43 | alarm | low | True | True | cpu_util > 90.0 during 1 x 60s | None |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
Run the :code:`nova list` command to confirm that the
:code:`OS::Heat::AutoScalingGroup` has created more instances:
.. code-block:: console
$ nova list
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| 0de89b0a-5bfd-497b-bfa2-c13f6ef7a67e | api | ACTIVE | - | Running | public=115.146.89.96 |
| a6b9b334-e8ba-4c56-ab53-cacfc6f3ad43 | services | ACTIVE | - | Running | public=115.146.89.95 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | worker | ACTIVE | - | Running | public=115.146.89.97 |
| 31e7c020-c37c-4311-816b-be8afcaef8fa | worker | ACTIVE | - | Running | public=115.146.89.99 |
| 3fff2489-488c-4458-99f1-0cc50363ae33 | worker | ACTIVE | - | Running | public=115.146.89.98 |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
Now, wait until all the fractals are generated and the instances have idled
for some time.
Run the :code:`nova list` command to confirm that the
:code:`OS::Heat::AutoScalingGroup` removed the unneeded instances:
.. code-block:: console
$ nova list
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| 0de89b0a-5bfd-497b-bfa2-c13f6ef7a67e | api | ACTIVE | - | Running | public=115.146.89.96 |
| a6b9b334-e8ba-4c56-ab53-cacfc6f3ad43 | services | ACTIVE | - | Running | public=115.146.89.95 |
| 3fff2489-488c-4458-99f1-0cc50363ae33 | worker | ACTIVE | - | Running | public=115.146.89.98 |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
.. note:: The :code:`OS::Heat::AutoScalingGroup` removes instances in creation order.
So the worker instance that was created first is the first instance
to be removed.
In the outputs section of the stack, you can run these web API calls:
* :code:`scale_workers_up_url`: A POST to this URL adds worker instances.
* :code:`scale_workers_down_url`: A POST to this URL removes worker instances.
These demonstrate how the Ceilometer alarms add and remove instances.
To use them:
.. code-block:: console
$ curl -X POST "Put the very long url from the template outputs section between these quotes"
To recap:
The auto-scaling stack sets up an API instance, a services instance, and an
auto-scaling group with a single worker instance. It also sets up ceilometer
alarms that add worker instances to the auto-scaling group when it is under
load, and removes instances when the group is idling. To do this, the alarms
post to URLs.
In this template, the alarms use metadata that is attached to each worker
instance. The metadata is in the :code:`metering.stack=stack_id` format.
The prefix is always :code:`metering.`; for example, :code:`metering.some_name`.
.. code-block:: console
$ nova show <instance_id>
...
| metadata | {"metering.some_name": "some_value"} |
...
You can aggregate samples and calculate statistics across all instances with
the `metering.some_name` metadata that has `some_value` by using a query of
the form:
.. code-block:: console
-q metadata.user_metadata.some_name=some_value
For example:
.. code-block:: console
$ ceilometer sample-list -m cpu_util -q metadata.user_metadata.some_name=some_value
$ ceilometer statistics -m cpu_util -q metadata.user_metadata.some_name=some_value -p 6
The alarms have the form:
.. code-block:: console
matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
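The matching can be pictured as a simple filter over samples (a
hypothetical sketch only; the real aggregation happens inside the
Telemetry service):

```python
# Hypothetical sketch of metadata-based sample matching: keep only
# samples whose resource carries the expected user-metadata pair,
# then feed those into the statistics calculation.
def select_samples(samples, name, value):
    key = "user_metadata.%s" % name
    return [s for s in samples if s["metadata"].get(key) == value]

samples = [
    {"volume": 82.5, "metadata": {"user_metadata.stack": "stack-a"}},
    {"volume": 0.45, "metadata": {"user_metadata.stack": "stack-b"}},
]
mine = select_samples(samples, "stack", "stack-a")
# only the first sample matches; its values feed this stack's alarms
```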
Spend some time playing with the stack and the Fractal app to see how it works.
.. note:: The message queue can take a while to notice that worker instances have died.
Next steps
----------
You should now be fairly confident working with the Orchestration
service. To see the calls that we did not cover and more, see the
orchestration documentation of your SDK. Or, try one of these steps in the
tutorial:
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)

===========
Scaling out
===========
.. todo:: For later versions of this guide: implement a service within
the fractals app that simply returns the CPU load on the
local server. Then add to this section a simple loop that
checks to see if any servers are overloaded and adds a new
one if they are. (Or do this through SSH and w)
An often-cited reason for designing applications by using cloud
patterns is the ability to **scale out**. That is: to add additional
resources, as required. Contrast this strategy to the previous one of
increasing capacity by scaling up the size of existing resources. To
scale out, you must:
* Architect your application to make use of additional resources.
* Make it possible to add new resources to your application.
.. todo:: nickchase needs to restate the second point
The :doc:`/introduction` section describes how to build in a modular
fashion, create an API, and other aspects of the application
architecture. Now you will see why those strategies are so important.
By creating a modular application with decoupled services, you can
identify components that cause application performance bottlenecks and
scale them out. Just as importantly, you can also remove resources
when they are no longer necessary. It is very difficult to overstate
the cost savings that this feature can bring, as compared to
traditional infrastructure.
Of course, having access to additional resources is only part of the
game plan; while you can manually add or delete resources, you get
more value and more responsiveness if the application automatically
requests additional resources when it needs them.
This section continues to illustrate the separation of services onto
multiple instances and highlights some of the choices that we have
made that facilitate scalability in the application architecture.
You will progressively ramp up to six instances, so make sure that your
cloud account has the appropriate quota.
The previous section uses two virtual machines - one 'control' service
and one 'worker'. The speed at which your application can generate
fractals depends on the number of workers. With just one worker, you
can produce only one fractal at a time. Before long, you will need more
resources.
.. note:: If you do not have a working application, follow the steps in
:doc:`introduction` to create one.
.. todo:: Ensure we have the controller_ip even if this is a new
python session.
Generate load
~~~~~~~~~~~~~
To test what happens when the Fractals application is under load, you
can:
* Load the worker: Create a lot of tasks to max out the CPU of existing
worker instances
* Load the API: Create a lot of API service requests
Create more tasks
-----------------
Use SSH with the existing SSH keypair to log in to the
:code:`app-controller` controller instance.
.. code-block:: console
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_CONTROLLER
.. note:: Replace :code:`IP_CONTROLLER` with the IP address of the
controller instance and USERNAME with the appropriate
user name.
Call the :code:`faafo` command-line interface to request the
generation of five large fractals.
.. code-block:: console
$ faafo create --height 9999 --width 9999 --tasks 5
If you check the load on the worker, you can see that the instance is
not doing well. On the single CPU flavor instance, a load average
greater than 1 means that the server is at capacity.
.. code-block:: console
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_WORKER uptime
10:37:39 up 1:44, 2 users, load average: 1.24, 1.40, 1.36
.. note:: Replace :code:`IP_WORKER` with the IP address of the worker
instance and USERNAME with the appropriate user name.
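The capacity rule of thumb above can be captured in a short helper,
should you want to check workers from a script. Parsing the exact
:code:`uptime` output format is an assumption; adjust it for your
distribution.

```python
def parse_load_averages(uptime_output):
    """Extract the 1-, 5-, and 15-minute load averages from `uptime` output.

    Example input:
     10:37:39 up 1:44,  2 users,  load average: 1.24, 1.40, 1.36
    """
    _, _, averages = uptime_output.partition("load average:")
    return [float(value) for value in averages.split(",")]


def at_capacity(uptime_output, cpu_count=1):
    """Return True if the 1-minute load average exceeds the CPU count."""
    return parse_load_averages(uptime_output)[0] > cpu_count


output = " 10:37:39 up 1:44,  2 users,  load average: 1.24, 1.40, 1.36"
print(at_capacity(output))  # prints True: the single-CPU worker is saturated
```

You could feed this helper the output of :code:`ssh ... uptime` to decide
from a monitoring script whether more workers are needed.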
Create more API service requests
--------------------------------
API load is a slightly different problem from worker capacity: instead of
a few heavy tasks, the API must handle many small requests. You can
simulate many requests to the API as follows:
Use SSH with the existing SSH keypair to log in to the
:code:`app-controller` controller instance.
.. code-block:: console
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_CONTROLLER
.. note:: Replace :code:`IP_CONTROLLER` with the IP address of the
controller instance and USERNAME with the appropriate
user name.
Use a for loop to call the :code:`faafo` command-line interface to
request a random set of fractals 500 times:
.. code-block:: console
$ for i in $(seq 1 500); do faafo --endpoint-url http://IP_CONTROLLER create & done
.. note:: Replace :code:`IP_CONTROLLER` with the IP address of the
controller instance.
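The shell loop above launches 500 background requests. The same pattern in
Python, with a thread pool to bound concurrency; the :code:`create_fractal`
function here is a stand-in for a real client call such as a :code:`faafo
create` request:

```python
from concurrent.futures import ThreadPoolExecutor


def flood(request_fn, total=500, concurrency=25):
    """Issue `total` calls to `request_fn`, at most `concurrency` at a time."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(request_fn) for _ in range(total)]
        # Collect results, propagating any exception raised in a worker thread.
        return [future.result() for future in futures]


# Stand-in for a real API call; replace with an HTTP request to the
# Fractals API endpoint in a real test.
def create_fractal():
    return "queued"


results = flood(create_fractal, total=50, concurrency=5)
print(len(results))  # prints 50
```

Bounding the concurrency keeps the load generator itself from exhausting
local resources while still saturating the API service.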
If you check the load on the :code:`app-controller` API service
instance, you see that the instance is struggling. On your single-CPU
flavor instance, a load average greater than 1 means that the server is
at capacity.
.. code-block:: console
$ uptime
10:37:39 up 1:44, 2 users, load average: 1.24, 1.40, 1.36
The sheer number of requests means that some requests for fractals
might not make it to the message queue for processing. To ensure that
you can cope with demand, you must also scale out the API capability
of the Fractals application.
Scaling out
~~~~~~~~~~~
Remove the existing app
-----------------------
Go ahead and delete the existing instances and security groups that
you created in previous sections. Remember, when instances in the
cloud are no longer working, delete them and create new ones.
.. only:: shade
.. literalinclude:: ../samples/shade/scaling_out.py
:language: python
:start-after: step-1
:end-before: step-2
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb
:language: ruby
:start-after: step-1
:end-before: step-2
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/scaling_out.py
:start-after: step-1
:end-before: step-2
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/ScalingOut.java
:language: java
:start-after: step-1
:end-before: step-2
Extra security groups
---------------------
As you change the topology of your applications, you must update or
create security groups. Here, you re-create the required security
groups.
.. only:: shade
.. literalinclude:: ../samples/shade/scaling_out.py
:language: python
:start-after: step-2
:end-before: step-3
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb
:language: ruby
:start-after: step-2
:end-before: step-3
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/scaling_out.py
:start-after: step-2
:end-before: step-3
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/ScalingOut.java
:language: java
:start-after: step-2
:end-before: step-3
A floating IP helper function
-----------------------------
Define a short function that locates an unused floating IP or allocates
a new one. This saves a few lines of code and prevents you from reaching
your floating IP quota too quickly.
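The logic of such a helper is simple: reuse an address that is not
attached to anything, and allocate a new one only as a last resort. Below
is a minimal, SDK-agnostic sketch of that decision, with the cloud calls
injected as arguments; the real samples implement it with each library's
own calls.

```python
def get_floating_ip(existing_ips, allocate_new):
    """Return an unattached floating IP, allocating one only if none is free.

    `existing_ips` is a list of dicts with an 'attached' flag, standing in
    for whatever objects your SDK returns; `allocate_new` is a callable
    that requests a fresh address from the cloud (and counts against
    your quota).
    """
    for ip in existing_ips:
        if not ip["attached"]:
            return ip
    return allocate_new()


# Placeholder addresses from the TEST-NET-3 documentation range.
pool = [{"address": "203.0.113.10", "attached": True},
        {"address": "203.0.113.11", "attached": False}]
ip = get_floating_ip(pool,
                     allocate_new=lambda: {"address": "203.0.113.12",
                                           "attached": False})
print(ip["address"])  # prints 203.0.113.11
```

Checking the existing pool first is what keeps repeated runs of the
script from eating into your floating IP quota.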
.. only:: shade
.. literalinclude:: ../samples/shade/scaling_out.py
:language: python
:start-after: step-3
:end-before: step-4
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb
:language: ruby
:start-after: step-3
:end-before: step-4
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/scaling_out.py
:start-after: step-3
:end-before: step-4
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/ScalingOut.java
:language: java
:start-after: step-3
:end-before: step-4
Split the database and message queue
------------------------------------
Before you scale out your application services, like the API service or the
workers, you must add a central database and an :code:`app-services` messaging
instance. The database and messaging queue will be used to track the state of
fractals and to coordinate the communication between the services.
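Every API and worker instance must now point at the shared
:code:`app-services` instance instead of a local database and queue. As an
illustration, here is how the connection URLs might be built; the MySQL
and RabbitMQ URL schemes and the placeholder credentials are assumptions,
not the application's actual configuration format.

```python
def service_urls(services_ip, db_user="faafo", db_pass="password",
                 queue_user="guest", queue_pass="guest"):
    """Build connection URLs for a shared database and message queue.

    Assumes MySQL and RabbitMQ running on the `app-services` instance;
    all credentials here are placeholders.
    """
    return {
        "database_url": f"mysql://{db_user}:{db_pass}@{services_ip}/faafo",
        "transport_url": f"amqp://{queue_user}:{queue_pass}@{services_ip}:5672/",
    }


urls = service_urls("10.0.0.12")
print(urls["database_url"])  # prints mysql://faafo:password@10.0.0.12/faafo
```

The point is that the services IP appears in every other instance's
configuration, which is why this instance is created first.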
.. only:: shade
.. literalinclude:: ../samples/shade/scaling_out.py
:language: python
:start-after: step-4
:end-before: step-5
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb
:language: ruby
:start-after: step-4
:end-before: step-5
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/scaling_out.py
:start-after: step-4
:end-before: step-5
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/ScalingOut.java
:language: java
:start-after: step-4
:end-before: step-5
Scale the API service
---------------------
With multiple workers producing fractals as fast as they can, the
system must be able to receive the requests for fractals as quickly as
possible. If our application becomes popular, many thousands of users
might connect to our API to generate fractals.
Armed with a security group, image, and flavor size, you can add
multiple API services:
.. only:: shade
.. literalinclude:: ../samples/shade/scaling_out.py
:language: python
:start-after: step-5
:end-before: step-6
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb
:language: ruby
:start-after: step-5
:end-before: step-6
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/scaling_out.py
:start-after: step-5
:end-before: step-6
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/ScalingOut.java
:language: java
:start-after: step-5
:end-before: step-6
These services are client-facing, so unlike the workers they do not
use a message queue to distribute tasks. Instead, you must introduce
some kind of load balancing mechanism to share incoming requests
between the different API services.
A simple solution is to give half of your friends one address and half
the other, but that solution is not sustainable. Instead, you can use
`DNS round robin <http://en.wikipedia.org/wiki/Round-robin_DNS>`_
to distribute requests automatically. Alternatively, OpenStack Networking
can provide Load Balancing as a Service, which :doc:`/networking` explains.
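DNS round robin and a simple load balancer both amount to the same
behaviour: successive requests go to successive endpoints. The rotation
itself is a one-liner with Python's :code:`itertools.cycle`; the endpoint
addresses below are placeholders.

```python
import itertools

# Placeholder addresses; substitute the floating IPs of your API instances.
api_endpoints = ["http://IP_API_1", "http://IP_API_2"]
next_endpoint = itertools.cycle(api_endpoints)

# Each request is sent to the next API service in turn.
first_four = [next(next_endpoint) for _ in range(4)]
print(first_four)  # alternates between the two services
```

A real load balancer adds health checks on top of this rotation, so that
a failed API instance is taken out of the cycle automatically.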
.. todo:: Add a note that we demonstrate this by using the first API
instance for the workers and the second API instance for the
load simulation.
Scale the workers
-----------------
To increase the overall capacity, add three workers:
.. only:: shade
.. literalinclude:: ../samples/shade/scaling_out.py
:language: python
:start-after: step-6
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb
:language: ruby
:start-after: step-6
:end-before: step-7
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/scaling_out.py
:start-after: step-6
:end-before: step-7
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/ScalingOut.java
:language: java
:start-after: step-6
:end-before: step-7
Adding this capacity enables you to deal with a higher number of
requests for fractals. As soon as these worker instances start, they
begin checking the message queue for requests, reducing the overall
backlog like a new register opening in the supermarket.
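The supermarket-register effect can be seen in miniature with a
thread-safe queue: three consumers drain one backlog, each taking
whatever task is next. This is only an analogy for the message queue that
the real workers consume; no cloud resources are involved.

```python
import queue
import threading

# A shared backlog of 30 pending tasks, standing in for the message queue.
backlog = queue.Queue()
for task in range(30):
    backlog.put(task)

completed = []
lock = threading.Lock()


def worker():
    """Consume tasks until the backlog is empty, like a fractal worker."""
    while True:
        try:
            task = backlog.get_nowait()
        except queue.Empty:
            return
        with lock:
            completed.append(task)


# Three "registers" open at once and share the same line of customers.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(completed))  # prints 30
```

Because the workers pull from the queue rather than being assigned tasks,
adding a worker requires no change to the rest of the system.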
This process was obviously a manual one. Figuring out that you
needed more workers and then starting them required some effort.
Ideally, the system would detect these situations itself and
automatically request and remove resources, which saves you the
effort of doing this work yourself. Alternatively, the OpenStack
Orchestration service can monitor load and start instances, as
appropriate. To find out how to set that up, see :doc:`orchestration`.
Verify that we have had an impact
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the previous steps, you split out several services and expanded
capacity. To see the new features of the Fractals application, SSH to
one of the app instances and create a few fractals.
.. code-block:: console
$ ssh -i ~/.ssh/id_rsa USERNAME@IP_API_1
.. note:: Replace :code:`IP_API_1` with the IP address of the first
API instance and USERNAME with the appropriate user name.
Use the :code:`faafo create` command to generate fractals.
Use the :code:`faafo list` command to watch the progress of fractal
generation.
Use the :code:`faafo show UUID` command to examine some of the fractals.
The :code:`generated_by` field shows the worker that created the fractal.
Because multiple worker instances share the work, fractals are
generated more quickly and users might not even notice when a worker
fails.
.. code-block:: console
root@app-api-1:# faafo list
+--------------------------------------+------------------+-------------+
| UUID | Dimensions | Filesize |
+--------------------------------------+------------------+-------------+
| 410bca6e-baa7-4d82-9ec0-78e409db7ade | 295 x 738 pixels | 26283 bytes |
| 66054419-f721-492f-8964-a5c9291d0524 | 904 x 860 pixels | 78666 bytes |
| d123e9c1-3934-4ffd-8b09-0032ca2b6564 | 952 x 382 pixels | 34239 bytes |
| f51af10a-084d-4314-876a-6d0b9ea9e735 | 877 x 708 pixels | 93679 bytes |
+--------------------------------------+------------------+-------------+
root@app-api-1:# faafo show d123e9c1-3934-4ffd-8b09-0032ca2b6564
+--------------+------------------------------------------------------------------+
| Parameter | Value |
+--------------+------------------------------------------------------------------+
| uuid | d123e9c1-3934-4ffd-8b09-0032ca2b6564 |
| duration | 1.671410 seconds |
| dimensions | 952 x 382 pixels |
| iterations | 168 |
| xa | -2.61217 |
| xb | 3.98459 |
| ya | -1.89725 |
| yb | 2.36849 |
| size | 34239 bytes |
| checksum | d2025a9cf60faca1aada854d4cac900041c6fa762460f86ab39f42ccfe305ffe |
| generated_by | app-worker-2 |
+--------------+------------------------------------------------------------------+
root@app-api-1:# faafo show 66054419-f721-492f-8964-a5c9291d0524
+--------------+------------------------------------------------------------------+
| Parameter | Value |
+--------------+------------------------------------------------------------------+
| uuid | 66054419-f721-492f-8964-a5c9291d0524 |
| duration | 5.293870 seconds |
| dimensions | 904 x 860 pixels |
| iterations | 348 |
| xa | -2.74108 |
| xb | 1.85912 |
| ya | -2.36827 |
| yb | 2.7832 |
| size | 78666 bytes |
| checksum | 1f313aaa36b0f616b5c91bdf5a9dc54f81ff32488ce3999f87a39a3b23cf1b14 |
| generated_by | app-worker-1 |
+--------------+------------------------------------------------------------------+
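To confirm that the work really is being shared, you can tally the
:code:`generated_by` field over a batch of fractal records. The records
below are hard-coded stand-ins for what the API returns for each fractal.

```python
from collections import Counter

# Stand-in fractal records; `generated_by` names the worker instance.
fractals = [
    {"generated_by": "app-worker-2"},
    {"generated_by": "app-worker-1"},
    {"generated_by": "app-worker-1"},
    {"generated_by": "app-worker-3"},
]

by_worker = Counter(f["generated_by"] for f in fractals)
print(by_worker.most_common())
```

A roughly even spread across workers indicates that the message queue is
distributing tasks as intended.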
The fractals are now available from any of the app-api hosts. To
verify, visit http://IP_API_1/fractal/FRACTAL_UUID and
http://IP_API_2/fractal/FRACTAL_UUID. You now have multiple redundant
web services. If one fails, you can use the others.
.. note:: Replace :code:`IP_API_1` and :code:`IP_API_2` with the
corresponding floating IPs. Replace FRACTAL_UUID with the UUID
of an existing fractal.
Go ahead and test the fault tolerance. Start deleting workers and API
instances. As long as you have one of each, your application is fine.
However, be aware of one weak point. The database contains the
fractals and fractal metadata. If you lose that instance, the
application stops. Future sections will explain how to address this
weak point.
If you had a load balancer, you could distribute this load between the
two API services. You have several options for adding one; the
:doc:`networking` section shows you one of them.
In theory, you could use a simple script to monitor the load on your
workers and API services and trigger the creation of instances, which
you already know how to do. Congratulations! You are ready to create
scalable cloud applications.
Of course, creating a monitoring system for a single application might
not make sense. To learn how to use the OpenStack Orchestration
monitoring and auto-scaling capabilities to automate these steps, see
:doc:`orchestration`.
Next steps
~~~~~~~~~~
You should now be fairly confident about starting instances and
distributing an application's services among them.
As mentioned in :doc:`/introduction`, the generated fractal images are
saved on the local file system of the API service instances. Because
you have multiple API instances up and running, the fractal images are
spread across multiple API services. This causes a number of
:code:`IOError: [Errno 2] No such file or directory` exceptions when
you try to download a fractal image from an API service instance that
does not have that image on its local file system.
Go to :doc:`/durability` to learn how to use Object Storage to solve
this problem in an elegant way. Or, you can proceed to one of these
sections:
* :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component.
* :doc:`/orchestration`: Automatically orchestrate your application.
* :doc:`/networking`: Learn about complex networking.
* :doc:`/advice`: Get advice about operations.
* :doc:`/craziness`: Learn some crazy things that you might not think to do ;)
Complete code sample
~~~~~~~~~~~~~~~~~~~~
This file contains all the code from this tutorial section. This
comprehensive code sample lets you view and run the code as a single
script.
Before you run this script, confirm that you have set your
authentication information, the flavor ID, and image ID.
.. only:: fog
.. literalinclude:: ../samples/fog/scaling_out.rb
:language: ruby
.. only:: shade
.. literalinclude:: ../samples/shade/scaling_out.py
:language: python
.. only:: libcloud
.. literalinclude:: ../samples/libcloud/scaling_out.py
:language: python
.. only:: jclouds
.. literalinclude:: ../samples/jclouds/ScalingOut.java
:language: java