Remove networking-related code from the Trio2o

1. What is the problem?
Networking-related code is still present in the repository of
the Trio2o project.

2. What is the solution to the problem?
According to the blueprint for the Trio2o cleaning:
https://blueprints.launchpad.net/trio2o/+spec/trio2o-code-cleaning
Networking-related code which was forked from the Tricircle
repository should be removed from the Trio2o repository.
After the cleaning, the Trio2o should be able to run independently.
There are lots of things to clean and update, and they have to be done
in one huge patch, otherwise the code in the Trio2o could not run and
be tested properly:
1). Remove networking operations from the server controller
2). Update the devstack script
3). Update the installation guide
4). Update the README
5). Remove the network folder and network related unit tests
6). Rename Tricircle to Trio2o in all source code (a sketch of this
    rename follows the list)
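For illustration, the item-6 bulk rename could be approximated with shell
commands like the following sketch (hypothetical; the actual patch was
prepared and reviewed by hand):

    grep -rl tricircle --exclude-dir=.git . | xargs sed -i \
        -e 's/tricircle/trio2o/g' \
        -e 's/Tricircle/Trio2o/g' \
        -e 's/TRICIRCLE/TRIO2O/g'
    git mv tricircle trio2o    # move the top-level Python package too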

THE MEANING OF FILE OPERATIONS:
D: delete a file
R: rename a file to another name
A: add a new file
C: copy a file
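For example, entries in this commit's file list read like the following
(the paths shown here are illustrative):

    D  tricircle/network/plugin.py
    R  tricircle/common/config.py -> trio2o/common/config.py
    A  devstack/local.conf.node.sample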

3. What features need to be implemented in the Trio2o to realize
the solution?
No new features.

Change-Id: I0b48ee38280e25ba6294ca3d5b7a0673cb368ed4
Signed-off-by: joehuang <joehuang@huawei.com>
joehuang 2016-10-18 04:54:56 -04:00
parent e7278406c3
commit 180f68eac8
178 changed files with 1776 additions and 8331 deletions

View File

@@ -1,7 +1,7 @@
[run]
branch = True
source = tricircle
omit = tricircle/tests/*, tricircle/tempestplugin/*
source = trio2o
omit = trio2o/tests/*, trio2o/tempestplugin/*
[report]
ignore_errors = True

View File

@@ -1,4 +1,4 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/tricircle.git
project=openstack/trio2o.git

View File

@@ -2,6 +2,6 @@
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover $TRICIRCLE_TEST_DIRECTORY $LISTOPT $IDOPTION
${PYTHON:-python} -m subunit.run discover $TRIO2O_TEST_DIRECTORY $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

View File

@ -14,4 +14,4 @@ Any pull requests submitted through GitHub will be ignored.
Any bug should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/tricircle
https://bugs.launchpad.net/trio2o

View File

@@ -1,5 +1,5 @@
================================
The Tricircle Style Commandments
The Trio2o Style Commandments
================================
Please read the OpenStack Style Commandments

View File

@@ -1,37 +1,34 @@
=========
Tricircle
Trio2o
=========
The Tricircle provides an OpenStack API gateway and networking automation
funtionality to allow multiple OpenStack instances, spanning in one site or
multiple sites or in hybrid cloud, to be managed as a single OpenStack cloud.
The Trio2o provides an OpenStack API gateway to allow multiple OpenStack
instances, spanning one site, multiple sites or a hybrid cloud, to
be managed as a single OpenStack cloud.
The Tricircle and these managed OpenStack instances will use shared KeyStone
The Trio2o and these managed OpenStack instances will use shared KeyStone
(with centralized or distributed deployment) or federated KeyStones for
identity management.
The Tricircle presents one big region to the end user in KeyStone. And each
OpenStack instance called a pod is a sub-region of the Tricircle in
The Trio2o presents one big region to the end user in KeyStone. Each
OpenStack instance, called a pod, is a sub-region of the Trio2o in
KeyStone, and is usually not visible to the end user directly.
The Tricircle acts as OpenStack API gateway, can handle OpenStack API calls,
The Trio2o acts as an OpenStack API gateway: it can handle OpenStack API calls,
schedule a proper OpenStack instance if needed during the API call handling,
forward the API calls to the appropriate OpenStack instance, and deal with
tenant level L2/L3 networking across OpenStack instances automatically. So it
doesn't matter on which bottom OpenStack instance the VMs for the tenant are
running, they can communicate with each other via L2 or L3.
forward the API calls to the appropriate OpenStack instance.
The end user can see availability zone (AZ) and use AZ to provision
VM, Volume, even Network through the Tricircle. One AZ can include many
OpenStack instances, the Tricircle can schedule and bind OpenStack instance
for the tenant inside one AZ. A tenant's resources could be bound to multiple
specific bottom OpenStack instances in one or multiple AZs automatically.
VM and Volume through the Trio2o. One AZ can include many OpenStack instances;
the Trio2o can schedule and bind an OpenStack instance for the tenant inside one
AZ. A tenant's resources could be bound to multiple specific bottom OpenStack
instances in one or multiple AZs automatically.
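For example, booting a VM into an AZ goes through the Trio2o Nova API
gateway like a normal Nova call (the flavor, image and network IDs are
placeholders)::

    nova boot --flavor 1 --image $image_id --nic net-id=$net_id \
        --availability-zone az1 vm1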
* Free software: Apache license
* Design documentation: `Tricircle Design Blueprint <https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/>`_
* Wiki: https://wiki.openstack.org/wiki/tricircle
* Installation with DevStack: https://github.com/openstack/tricircle/blob/master/doc/source/installation.rst
* Tricircle Admin API documentation: https://github.com/openstack/tricircle/blob/master/doc/source/api_v1.rst
* Source: https://github.com/openstack/tricircle
* Bugs: http://bugs.launchpad.net/tricircle
* Blueprints: https://launchpad.net/tricircle
* Design documentation: `Trio2o Design Blueprint <https://docs.google.com/document/d/1cmIUsClw964hJxuwj3ild87rcHL8JLC-c7T-DUQzd4k/>`_
* Wiki: https://wiki.openstack.org/wiki/trio2o
* Installation with DevStack: https://github.com/openstack/trio2o/blob/master/doc/source/
* Trio2o Admin API documentation: https://github.com/openstack/trio2o/blob/master/doc/source/api_v1.rst
* Source: https://github.com/openstack/trio2o
* Bugs: http://bugs.launchpad.net/trio2o
* Blueprints: https://launchpad.net/trio2o

View File

@@ -23,11 +23,11 @@ from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import wsgi
from tricircle.api import app
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.common import restapp
from trio2o.api import app
from trio2o.common import config
from trio2o.common.i18n import _LI
from trio2o.common.i18n import _LW
from trio2o.common import restapp
CONF = cfg.CONF
@@ -49,7 +49,7 @@ def main():
LOG.info(_LI("Admin API on http://%(host)s:%(port)s with %(workers)s"),
{'host': host, 'port': port, 'workers': workers})
service = wsgi.Server(CONF, 'Tricircle Admin_API', application, host, port)
service = wsgi.Server(CONF, 'Trio2o Admin_API', application, host, port)
restapp.serve(service, CONF, workers)
LOG.info(_LI("Configuration:"))

View File

@@ -23,12 +23,12 @@ from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import wsgi
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.common import restapp
from trio2o.common import config
from trio2o.common.i18n import _LI
from trio2o.common.i18n import _LW
from trio2o.common import restapp
from tricircle.cinder_apigw import app
from trio2o.cinder_apigw import app
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
@@ -49,7 +49,7 @@ def main():
LOG.info(_LI("Cinder_APIGW on http://%(host)s:%(port)s with %(workers)s"),
{'host': host, 'port': port, 'workers': workers})
service = wsgi.Server(CONF, 'Tricircle Cinder_APIGW',
service = wsgi.Server(CONF, 'Trio2o Cinder_APIGW',
application, host, port)
restapp.serve(service, CONF, workers)

View File

@@ -18,14 +18,14 @@ import sys
from oslo_config import cfg
from tricircle.db import core
from tricircle.db import migration_helpers
from trio2o.db import core
from trio2o.db import migration_helpers
def main(argv=None, config_files=None):
core.initialize()
cfg.CONF(args=argv[2:],
project='tricircle',
project='trio2o',
default_config_files=config_files)
migration_helpers.find_migrate_repo()
migration_helpers.sync_repo(2)
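After the rename, this db sync entry point is invoked with the Trio2o API
config file, as the DevStack plugin later in this patch does:

    python cmd/manage.py /etc/trio2o/api.conf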

View File

@@ -28,12 +28,12 @@ from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import wsgi
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.common import restapp
from trio2o.common import config
from trio2o.common.i18n import _LI
from trio2o.common.i18n import _LW
from trio2o.common import restapp
from tricircle.nova_apigw import app
from trio2o.nova_apigw import app
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
@@ -54,7 +54,7 @@ def main():
LOG.info(_LI("Nova_APIGW on http://%(host)s:%(port)s with %(workers)s"),
{'host': host, 'port': port, 'workers': workers})
service = wsgi.Server(CONF, 'Tricircle Nova_APIGW',
service = wsgi.Server(CONF, 'Trio2o Nova_APIGW',
application, host, port)
restapp.serve(service, CONF, workers)

View File

@@ -27,11 +27,11 @@ import sys
from oslo_config import cfg
from oslo_log import log as logging
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from trio2o.common import config
from trio2o.common.i18n import _LI
from trio2o.common.i18n import _LW
from tricircle.xjob import xservice
from trio2o.xjob import xservice
CONF = cfg.CONF
LOG = logging.getLogger(__name__)

View File

@@ -1,70 +0,0 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Tricircle DevStack
# multi-node environment. As this file configures, DevStack will setup two
# regions, one top region running Tricircle services, Keystone, Glance, Nova
# API gateway, Cinder API gateway and Neutron with Tricircle plugin; and one
# bottom region running original Nova, Cinder and Neutron.
#
# This file works with local.conf.node_2.sample to help you build a two-node
# three-region Tricircle environment. Keystone and Glance in top region are
# shared by services in all the regions.
#
# Some options needs to be change to adapt to your environment, see README.md
# for detail.
#
[[local|localrc]]
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
FIXED_RANGE=10.0.0.0/24
NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
FLOATING_RANGE=10.100.100.160/24
Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_USE_PROVIDERNET_FOR_PUBLIC=True
HOST_IP=10.250.201.24
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000)
OVS_BRIDGE_MAPPINGS=bridge:br-bridge
Q_ENABLE_TRICIRCLE=True
enable_plugin tricircle https://github.com/openstack/tricircle/
# Tricircle Services
enable_service t-api
enable_service t-ngw
enable_service t-cgw
enable_service t-job
# Use Neutron instead of nova-network
disable_service n-net
enable_service q-svc
enable_service q-svc1
enable_service q-dhcp
enable_service q-agt
enable_service q-l3
enable_service c-api
enable_service c-vol
enable_service c-sch
disable_service n-obj
disable_service c-bak
disable_service tempest
disable_service horizon

View File

@@ -1,10 +1,11 @@
#
# Sample DevStack local.conf.
# Sample DevStack local.conf.sample
#
# This sample file is intended to be used for your typical Tricircle DevStack
# environment that's running all of OpenStack on a single host.
# This sample file is intended to be used for your typical Trio2o DevStack
# environment that's running Trio2o and one bottom OpenStack Pod1 on a
# single host.
#
# No changes to this sample configuration are required for this to work.
# Changes to HOST_IP in this sample configuration are required.
#
[[local|localrc]]
@@ -18,33 +19,27 @@ LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
HOST_IP=127.0.0.1
FIXED_RANGE=10.0.0.0/24
NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
FLOATING_RANGE=10.100.100.160/24
Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
NEUTRON_CREATE_INITIAL_NETWORKS=False
PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
Q_ENABLE_TRICIRCLE=True
enable_plugin tricircle https://github.com/openstack/tricircle/
# Enable Trio2o
Q_ENABLE_TRIO2O=True
enable_plugin trio2o https://github.com/openstack/trio2o/
# Tricircle Services
enable_service t-api
enable_service t-ngw
enable_service t-cgw
enable_service t-job
# Change the HOST_IP address to the host's IP address where
# the Trio2o is running
HOST_IP=162.3.124.203
# Use Neutron instead of nova-network
disable_service n-net
enable_service q-svc
enable_service q-svc1
enable_service q-dhcp
enable_service q-agt
@@ -56,5 +51,5 @@ enable_service c-api
enable_service c-vol
enable_service c-sch
disable_service c-bak
# disable_service tempest
disable_service tempest
disable_service horizon

View File

@@ -1,20 +1,22 @@
#
# Sample DevStack local.conf.
# Sample DevStack local.conf.sample2
#
# This sample file is intended to be used for your typical Tricircle DevStack
# This sample file is intended to be used for your typical Trio2o DevStack
# multi-node environment. As this file configures, DevStack will setup one
# bottom region running original Nova, Cinder and Neutron.
# one more bottom OpenStack Pod2 running original Nova, Cinder and Neutron.
#
# This file works with local.conf.node_1.sample to help you build a two-node
# three-region Tricircle environment. Keystone and Glance in top region are
# shared by services in all the regions.
# This file works with local.conf.node.sample to help you build a two-node
# three-region Trio2o environment. Keystone, Neutron and Glance in top region
# are shared by services in all the regions.
#
# Some options needs to be change to adapt to your environment, see README.md
# for detail.
# Some options need to be changed to adapt to your environment, read
# installation.rst for detail.
#
[[local|localrc]]
RECLONE=no
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
@@ -29,24 +31,31 @@ NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
FLOATING_RANGE=10.100.100.160/24
Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_USE_PROVIDERNET_FOR_PUBLIC=True
HOST_IP=10.250.201.25
# the region name of this OpenStack instance, and it's also
# the pod name in Trio2o
REGION_NAME=Pod2
KEYSTONE_REGION_NAME=RegionOne
SERVICE_HOST=$HOST_IP
KEYSTONE_SERVICE_HOST=10.250.201.24
KEYSTONE_AUTH_HOST=10.250.201.24
GLANCE_SERVICE_HOST=10.250.201.24
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext
# Change the HOST_IP, SERVICE_HOST and GLANCE_SERVICE_HOST to
# the host's IP address where the Pod2 is running
HOST_IP=162.3.124.204
SERVICE_HOST=162.3.124.204
# Use the KeyStone which is located in RegionOne, where the Trio2o is
# installed; change the KEYSTONE_SERVICE_HOST and KEYSTONE_AUTH_HOST to
# the host's IP address where the KeyStone is served.
KEYSTONE_REGION_NAME=RegionOne
KEYSTONE_SERVICE_HOST=162.3.124.203
KEYSTONE_AUTH_HOST=162.3.124.203
# Use the Glance which is located in RegionOne, where the Trio2o is
# installed
GLANCE_SERVICE_HOST=162.3.124.203
# Use Neutron instead of nova-network
disable_service n-net

View File

@@ -1,30 +1,30 @@
# Devstack extras script to install Tricircle
# Devstack extras script to install Trio2o
# Test if any tricircle services are enabled
# is_tricircle_enabled
function is_tricircle_enabled {
# Test if any trio2o services are enabled
# is_trio2o_enabled
function is_trio2o_enabled {
[[ ,${ENABLED_SERVICES} =~ ,"t-api" ]] && return 0
return 1
}
# create_tricircle_accounts() - Set up common required tricircle
# create_trio2o_accounts() - Set up common required trio2o
# service accounts in keystone
# Project User Roles
# -------------------------------------------------------------------------
# $SERVICE_TENANT_NAME tricircle service
# $SERVICE_TENANT_NAME trio2o service
function create_tricircle_accounts {
function create_trio2o_accounts {
if [[ "$ENABLED_SERVICES" =~ "t-api" ]]; then
create_service_user "tricircle"
create_service_user "trio2o"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local tricircle_api=$(get_or_create_service "tricircle" \
local trio2o_api=$(get_or_create_service "trio2o" \
"Cascading" "OpenStack Cascading Service")
get_or_create_endpoint $tricircle_api \
get_or_create_endpoint $trio2o_api \
"$REGION_NAME" \
"$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0" \
"$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0" \
"$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0"
"$SERVICE_PROTOCOL://$TRIO2O_API_HOST:$TRIO2O_API_PORT/v1.0" \
"$SERVICE_PROTOCOL://$TRIO2O_API_HOST:$TRIO2O_API_PORT/v1.0" \
"$SERVICE_PROTOCOL://$TRIO2O_API_HOST:$TRIO2O_API_PORT/v1.0"
fi
fi
}
@@ -41,16 +41,16 @@ function create_nova_apigw_accounts {
create_service_user "nova_apigw"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local tricircle_nova_apigw=$(get_or_create_service "nova" \
local trio2o_nova_apigw=$(get_or_create_service "nova" \
"compute" "Nova Compute Service")
remove_old_endpoint_conf $tricircle_nova_apigw
remove_old_endpoint_conf $trio2o_nova_apigw
get_or_create_endpoint $tricircle_nova_apigw \
get_or_create_endpoint $trio2o_nova_apigw \
"$REGION_NAME" \
"$SERVICE_PROTOCOL://$TRICIRCLE_NOVA_APIGW_HOST:$TRICIRCLE_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_NOVA_APIGW_HOST:$TRICIRCLE_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_NOVA_APIGW_HOST:$TRICIRCLE_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s'
"$SERVICE_PROTOCOL://$TRIO2O_NOVA_APIGW_HOST:$TRIO2O_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRIO2O_NOVA_APIGW_HOST:$TRIO2O_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRIO2O_NOVA_APIGW_HOST:$TRIO2O_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s'
fi
fi
}
@@ -67,22 +67,22 @@ function create_cinder_apigw_accounts {
create_service_user "cinder_apigw"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local tricircle_cinder_apigw=$(get_or_create_service "cinder" \
local trio2o_cinder_apigw=$(get_or_create_service "cinder" \
"volumev2" "Cinder Volume Service")
remove_old_endpoint_conf $tricircle_cinder_apigw
remove_old_endpoint_conf $trio2o_cinder_apigw
get_or_create_endpoint $tricircle_cinder_apigw \
get_or_create_endpoint $trio2o_cinder_apigw \
"$REGION_NAME" \
"$SERVICE_PROTOCOL://$TRICIRCLE_CINDER_APIGW_HOST:$TRICIRCLE_CINDER_APIGW_PORT/v2/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_CINDER_APIGW_HOST:$TRICIRCLE_CINDER_APIGW_PORT/v2/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_CINDER_APIGW_HOST:$TRICIRCLE_CINDER_APIGW_PORT/v2/"'$(tenant_id)s'
"$SERVICE_PROTOCOL://$TRIO2O_CINDER_APIGW_HOST:$TRIO2O_CINDER_APIGW_PORT/v2/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRIO2O_CINDER_APIGW_HOST:$TRIO2O_CINDER_APIGW_PORT/v2/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRIO2O_CINDER_APIGW_HOST:$TRIO2O_CINDER_APIGW_PORT/v2/"'$(tenant_id)s'
fi
fi
}
# common config-file configuration for tricircle services
# common config-file configuration for trio2o services
function remove_old_endpoint_conf {
local service=$1
@@ -102,24 +102,24 @@ function remove_old_endpoint_conf {
}
# create_tricircle_cache_dir() - Set up cache dir for tricircle
function create_tricircle_cache_dir {
# create_trio2o_cache_dir() - Set up cache dir for trio2o
function create_trio2o_cache_dir {
# Delete existing dir
sudo rm -rf $TRICIRCLE_AUTH_CACHE_DIR
sudo mkdir -p $TRICIRCLE_AUTH_CACHE_DIR
sudo chown `whoami` $TRICIRCLE_AUTH_CACHE_DIR
sudo rm -rf $TRIO2O_AUTH_CACHE_DIR
sudo mkdir -p $TRIO2O_AUTH_CACHE_DIR
sudo chown `whoami` $TRIO2O_AUTH_CACHE_DIR
}
# common config-file configuration for tricircle services
function init_common_tricircle_conf {
# common config-file configuration for trio2o services
function init_common_trio2o_conf {
local conf_file=$1
touch $conf_file
iniset $conf_file DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $conf_file DEFAULT verbose True
iniset $conf_file DEFAULT use_syslog $SYSLOG
iniset $conf_file DEFAULT tricircle_db_connection `database_connection_url tricircle`
iniset $conf_file DEFAULT trio2o_db_connection `database_connection_url trio2o`
iniset $conf_file client admin_username admin
iniset $conf_file client admin_password $ADMIN_PASSWORD
@@ -127,181 +127,154 @@ function init_common_tricircle_conf {
iniset $conf_file client auto_refresh_endpoint True
iniset $conf_file client top_pod_name $REGION_NAME
iniset $conf_file oslo_concurrency lock_path $TRICIRCLE_STATE_PATH/lock
iniset $conf_file oslo_concurrency lock_path $TRIO2O_STATE_PATH/lock
}
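With the DevStack defaults above, the generated conf files (api.conf and
friends) end up roughly like the sketch below; the exact db connection
string depends on DevStack's database_connection_url helper:

    [DEFAULT]
    verbose = True
    trio2o_db_connection = mysql+pymysql://root:password@127.0.0.1/trio2o?charset=utf8

    [client]
    admin_username = admin
    admin_password = password
    auto_refresh_endpoint = True
    top_pod_name = RegionOne

    [oslo_concurrency]
    lock_path = /var/lib/trio2o/lock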
function configure_tricircle_api {
function configure_trio2o_api {
if is_service_enabled t-api ; then
echo "Configuring Tricircle API"
echo "Configuring Trio2o API"
init_common_tricircle_conf $TRICIRCLE_API_CONF
init_common_trio2o_conf $TRIO2O_API_CONF
setup_colorized_logging $TRICIRCLE_API_CONF DEFAULT tenant_name
setup_colorized_logging $TRIO2O_API_CONF DEFAULT tenant_name
if is_service_enabled keystone; then
create_tricircle_cache_dir
create_trio2o_cache_dir
# Configure auth token middleware
configure_auth_token_middleware $TRICIRCLE_API_CONF tricircle \
$TRICIRCLE_AUTH_CACHE_DIR
configure_auth_token_middleware $TRIO2O_API_CONF trio2o \
$TRIO2O_AUTH_CACHE_DIR
else
iniset $TRICIRCLE_API_CONF DEFAULT auth_strategy noauth
iniset $TRIO2O_API_CONF DEFAULT auth_strategy noauth
fi
fi
}
function configure_tricircle_nova_apigw {
function configure_trio2o_nova_apigw {
if is_service_enabled t-ngw ; then
echo "Configuring Tricircle Nova APIGW"
echo "Configuring Trio2o Nova APIGW"
init_common_tricircle_conf $TRICIRCLE_NOVA_APIGW_CONF
init_common_trio2o_conf $TRIO2O_NOVA_APIGW_CONF
setup_colorized_logging $TRICIRCLE_NOVA_APIGW_CONF DEFAULT tenant_name
setup_colorized_logging $TRIO2O_NOVA_APIGW_CONF DEFAULT tenant_name
if is_service_enabled keystone; then
create_tricircle_cache_dir
create_trio2o_cache_dir
# Configure auth token middleware
configure_auth_token_middleware $TRICIRCLE_NOVA_APIGW_CONF tricircle \
$TRICIRCLE_AUTH_CACHE_DIR
configure_auth_token_middleware $TRIO2O_NOVA_APIGW_CONF trio2o \
$TRIO2O_AUTH_CACHE_DIR
else
iniset $TRICIRCLE_NOVA_APIGW_CONF DEFAULT auth_strategy noauth
iniset $TRIO2O_NOVA_APIGW_CONF DEFAULT auth_strategy noauth
fi
fi
}
function configure_tricircle_cinder_apigw {
function configure_trio2o_cinder_apigw {
if is_service_enabled t-cgw ; then
echo "Configuring Tricircle Cinder APIGW"
echo "Configuring Trio2o Cinder APIGW"
init_common_tricircle_conf $TRICIRCLE_CINDER_APIGW_CONF
init_common_trio2o_conf $TRIO2O_CINDER_APIGW_CONF
setup_colorized_logging $TRICIRCLE_CINDER_APIGW_CONF DEFAULT tenant_name
setup_colorized_logging $TRIO2O_CINDER_APIGW_CONF DEFAULT tenant_name
if is_service_enabled keystone; then
create_tricircle_cache_dir
create_trio2o_cache_dir
# Configure auth token middleware
configure_auth_token_middleware $TRICIRCLE_CINDER_APIGW_CONF tricircle \
$TRICIRCLE_AUTH_CACHE_DIR
configure_auth_token_middleware $TRIO2O_CINDER_APIGW_CONF trio2o \
$TRIO2O_AUTH_CACHE_DIR
else
iniset $TRICIRCLE_CINDER_APIGW_CONF DEFAULT auth_strategy noauth
iniset $TRIO2O_CINDER_APIGW_CONF DEFAULT auth_strategy noauth
fi
fi
}
function configure_tricircle_xjob {
function configure_trio2o_xjob {
if is_service_enabled t-job ; then
echo "Configuring Tricircle xjob"
echo "Configuring Trio2o xjob"
init_common_tricircle_conf $TRICIRCLE_XJOB_CONF
init_common_trio2o_conf $TRIO2O_XJOB_CONF
setup_colorized_logging $TRICIRCLE_XJOB_CONF DEFAULT
setup_colorized_logging $TRIO2O_XJOB_CONF DEFAULT
fi
}
function start_new_neutron_server {
local server_index=$1
local region_name=$2
local q_port=$3
function move_neutron_server {
local region_name=$1
remove_old_endpoint_conf "neutron"
get_or_create_service "neutron" "network" "Neutron Service"
get_or_create_endpoint "network" \
"$region_name" \
"$Q_PROTOCOL://$SERVICE_HOST:$q_port/" \
"$Q_PROTOCOL://$SERVICE_HOST:$q_port/" \
"$Q_PROTOCOL://$SERVICE_HOST:$q_port/"
"$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/" \
"$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/" \
"$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/"
cp $NEUTRON_CONF $NEUTRON_CONF.$server_index
iniset $NEUTRON_CONF.$server_index database connection `database_connection_url $Q_DB_NAME$server_index`
iniset $NEUTRON_CONF.$server_index nova region_name $region_name
iniset $NEUTRON_CONF.$server_index DEFAULT bind_port $q_port
iniset $NEUTRON_CONF nova region_name $region_name
recreate_database $Q_DB_NAME$server_index
$NEUTRON_BIN_DIR/neutron-db-manage --config-file $NEUTRON_CONF.$server_index --config-file /$Q_PLUGIN_CONF_FILE upgrade head
run_process q-svc$server_index "$NEUTRON_BIN_DIR/neutron-server --config-file $NEUTRON_CONF.$server_index --config-file /$Q_PLUGIN_CONF_FILE"
stop_process q-svc
# remove previous failure flag file since we are going to restart service
rm -f "$SERVICE_DIR/$SCREEN_NAME"/q-svc.failure
sleep 20
run_process q-svc "$NEUTRON_BIN_DIR/neutron-server --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
}
if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
if [[ "$Q_ENABLE_TRIO2O" == "True" ]]; then
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
echo summary "Tricircle pre-install"
echo summary "Trio2o pre-install"
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing Tricircle"
echo_summary "Installing Trio2o"
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring Tricircle"
echo_summary "Configuring Trio2o"
sudo install -d -o $STACK_USER -m 755 $TRICIRCLE_CONF_DIR
sudo install -d -o $STACK_USER -m 755 $TRIO2O_CONF_DIR
configure_tricircle_api
configure_tricircle_nova_apigw
configure_tricircle_cinder_apigw
configure_tricircle_xjob
enable_service t-api t-job t-ngw t-cgw
echo export PYTHONPATH=\$PYTHONPATH:$TRICIRCLE_DIR >> $RC_DIR/.localrc.auto
configure_trio2o_api
configure_trio2o_nova_apigw
configure_trio2o_cinder_apigw
configure_trio2o_xjob
setup_package $TRICIRCLE_DIR -e
echo export PYTHONPATH=\$PYTHONPATH:$TRIO2O_DIR >> $RC_DIR/.localrc.auto
recreate_database tricircle
python "$TRICIRCLE_DIR/cmd/manage.py" "$TRICIRCLE_API_CONF"
setup_package $TRIO2O_DIR -e
if is_service_enabled q-svc ; then
start_new_neutron_server 1 $POD_REGION_NAME $TRICIRCLE_NEUTRON_PORT
# reconfigure neutron server to use our own plugin
echo "Configuring Neutron plugin for Tricircle"
Q_PLUGIN_CLASS="tricircle.network.plugin.TricirclePlugin"
iniset $NEUTRON_CONF DEFAULT core_plugin "$Q_PLUGIN_CLASS"
iniset $NEUTRON_CONF DEFAULT service_plugins ""
iniset $NEUTRON_CONF DEFAULT tricircle_db_connection `database_connection_url tricircle`
iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_data_changes False
iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_status_changes False
iniset $NEUTRON_CONF client admin_username admin
iniset $NEUTRON_CONF client admin_password $ADMIN_PASSWORD
iniset $NEUTRON_CONF client admin_tenant demo
iniset $NEUTRON_CONF client auto_refresh_endpoint True
iniset $NEUTRON_CONF client top_pod_name $REGION_NAME
if [ "$Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS" != "" ]; then
iniset $NEUTRON_CONF tricircle type_drivers local,shared_vlan
iniset $NEUTRON_CONF tricircle tenant_network_types local,shared_vlan
iniset $NEUTRON_CONF tricircle network_vlan_ranges `echo $Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS | awk -F= '{print $2}'`
iniset $NEUTRON_CONF tricircle bridge_network_type shared_vlan
fi
fi
recreate_database trio2o
python "$TRIO2O_DIR/cmd/manage.py" "$TRIO2O_API_CONF"
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing Tricircle Service"
echo_summary "Initializing Trio2o Service"
if is_service_enabled t-api; then
create_tricircle_accounts
create_trio2o_accounts
run_process t-api "python $TRICIRCLE_API --config-file $TRICIRCLE_API_CONF"
run_process t-api "python $TRIO2O_API --config-file $TRIO2O_API_CONF"
fi
if is_service_enabled t-ngw; then
create_nova_apigw_accounts
run_process t-ngw "python $TRICIRCLE_NOVA_APIGW --config-file $TRICIRCLE_NOVA_APIGW_CONF"
run_process t-ngw "python $TRIO2O_NOVA_APIGW --config-file $TRIO2O_NOVA_APIGW_CONF"
# Nova services are running, but we need to re-configure them to
# move them to bottom region
iniset $NOVA_CONF neutron region_name $POD_REGION_NAME
iniset $NOVA_CONF neutron url "$Q_PROTOCOL://$SERVICE_HOST:$TRICIRCLE_NEUTRON_PORT"
iniset $NOVA_CONF neutron url "$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT"
iniset $NOVA_CONF cinder os_region_name $POD_REGION_NAME
get_or_create_endpoint "compute" \
@@ -320,11 +293,15 @@ if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF" $LIBVIRT_GROUP
fi
if is_service_enabled q-svc; then
move_neutron_server $POD_REGION_NAME
fi
if is_service_enabled t-cgw; then
create_cinder_apigw_accounts
run_process t-cgw "python $TRICIRCLE_CINDER_APIGW --config-file $TRICIRCLE_CINDER_APIGW_CONF"
run_process t-cgw "python $TRIO2O_CINDER_APIGW --config-file $TRIO2O_CINDER_APIGW_CONF"
get_or_create_endpoint "volumev2" \
"$POD_REGION_NAME" \
@@ -335,7 +312,7 @@ if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
if is_service_enabled t-job; then
run_process t-job "python $TRICIRCLE_XJOB --config-file $TRICIRCLE_XJOB_CONF"
run_process t-job "python $TRIO2O_XJOB --config-file $TRIO2O_XJOB_CONF"
fi
fi
@@ -356,9 +333,5 @@ if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
if is_service_enabled t-job; then
stop_process t-job
fi
if is_service_enabled q-svc1; then
stop_process q-svc1
fi
fi
fi

View File

@@ -1,45 +1,44 @@
# Git information
TRICIRCLE_REPO=${TRICIRCLE_REPO:-https://git.openstack.org/cgit/openstack/tricircle/}
TRICIRCLE_DIR=$DEST/tricircle
TRICIRCLE_BRANCH=${TRICIRCLE_BRANCH:-master}
TRIO2O_REPO=${TRIO2O_REPO:-https://git.openstack.org/cgit/openstack/trio2o/}
TRIO2O_DIR=$DEST/trio2o
TRIO2O_BRANCH=${TRIO2O_BRANCH:-master}
# common variables
POD_REGION_NAME=${POD_REGION_NAME:-Pod1}
TRICIRCLE_NEUTRON_PORT=${TRICIRCLE_NEUTRON_PORT:-20001}
TRICIRCLE_CONF_DIR=${TRICIRCLE_CONF_DIR:-/etc/tricircle}
TRICIRCLE_STATE_PATH=${TRICIRCLE_STATE_PATH:-/var/lib/tricircle}
TRIO2O_CONF_DIR=${TRIO2O_CONF_DIR:-/etc/trio2o}
TRIO2O_STATE_PATH=${TRIO2O_STATE_PATH:-/var/lib/trio2o}
# tricircle rest admin api
TRICIRCLE_API=$TRICIRCLE_DIR/cmd/api.py
TRICIRCLE_API_CONF=$TRICIRCLE_CONF_DIR/api.conf
# trio2o rest admin api
TRIO2O_API=$TRIO2O_DIR/cmd/api.py
TRIO2O_API_CONF=$TRIO2O_CONF_DIR/api.conf
TRICIRCLE_API_LISTEN_ADDRESS=${TRICIRCLE_API_LISTEN_ADDRESS:-0.0.0.0}
TRICIRCLE_API_HOST=${TRICIRCLE_API_HOST:-$SERVICE_HOST}
TRICIRCLE_API_PORT=${TRICIRCLE_API_PORT:-19999}
TRICIRCLE_API_PROTOCOL=${TRICIRCLE_API_PROTOCOL:-$SERVICE_PROTOCOL}
TRIO2O_API_LISTEN_ADDRESS=${TRIO2O_API_LISTEN_ADDRESS:-0.0.0.0}
TRIO2O_API_HOST=${TRIO2O_API_HOST:-$SERVICE_HOST}
TRIO2O_API_PORT=${TRIO2O_API_PORT:-19999}
TRIO2O_API_PROTOCOL=${TRIO2O_API_PROTOCOL:-$SERVICE_PROTOCOL}
# tricircle nova_apigw
TRICIRCLE_NOVA_APIGW=$TRICIRCLE_DIR/cmd/nova_apigw.py
TRICIRCLE_NOVA_APIGW_CONF=$TRICIRCLE_CONF_DIR/nova_apigw.conf
# trio2o nova_apigw
TRIO2O_NOVA_APIGW=$TRIO2O_DIR/cmd/nova_apigw.py
TRIO2O_NOVA_APIGW_CONF=$TRIO2O_CONF_DIR/nova_apigw.conf
TRICIRCLE_NOVA_APIGW_LISTEN_ADDRESS=${TRICIRCLE_NOVA_APIGW_LISTEN_ADDRESS:-0.0.0.0}
TRICIRCLE_NOVA_APIGW_HOST=${TRICIRCLE_NOVA_APIGW_HOST:-$SERVICE_HOST}
TRICIRCLE_NOVA_APIGW_PORT=${TRICIRCLE_NOVA_APIGW_PORT:-19998}
TRICIRCLE_NOVA_APIGW_PROTOCOL=${TRICIRCLE_NOVA_APIGW_PROTOCOL:-$SERVICE_PROTOCOL}
TRIO2O_NOVA_APIGW_LISTEN_ADDRESS=${TRIO2O_NOVA_APIGW_LISTEN_ADDRESS:-0.0.0.0}
TRIO2O_NOVA_APIGW_HOST=${TRIO2O_NOVA_APIGW_HOST:-$SERVICE_HOST}
TRIO2O_NOVA_APIGW_PORT=${TRIO2O_NOVA_APIGW_PORT:-19998}
TRIO2O_NOVA_APIGW_PROTOCOL=${TRIO2O_NOVA_APIGW_PROTOCOL:-$SERVICE_PROTOCOL}
# tricircle cinder_apigw
TRICIRCLE_CINDER_APIGW=$TRICIRCLE_DIR/cmd/cinder_apigw.py
TRICIRCLE_CINDER_APIGW_CONF=$TRICIRCLE_CONF_DIR/cinder_apigw.conf
# trio2o cinder_apigw
TRIO2O_CINDER_APIGW=$TRIO2O_DIR/cmd/cinder_apigw.py
TRIO2O_CINDER_APIGW_CONF=$TRIO2O_CONF_DIR/cinder_apigw.conf
TRICIRCLE_CINDER_APIGW_LISTEN_ADDRESS=${TRICIRCLE_CINDER_APIGW_LISTEN_ADDRESS:-0.0.0.0}
TRICIRCLE_CINDER_APIGW_HOST=${TRICIRCLE_CINDER_APIGW_HOST:-$SERVICE_HOST}
TRICIRCLE_CINDER_APIGW_PORT=${TRICIRCLE_CINDER_APIGW_PORT:-19997}
TRICIRCLE_CINDER_APIGW_PROTOCOL=${TRICIRCLE_CINDER_APIGW_PROTOCOL:-$SERVICE_PROTOCOL}
TRIO2O_CINDER_APIGW_LISTEN_ADDRESS=${TRIO2O_CINDER_APIGW_LISTEN_ADDRESS:-0.0.0.0}
TRIO2O_CINDER_APIGW_HOST=${TRIO2O_CINDER_APIGW_HOST:-$SERVICE_HOST}
TRIO2O_CINDER_APIGW_PORT=${TRIO2O_CINDER_APIGW_PORT:-19997}
TRIO2O_CINDER_APIGW_PROTOCOL=${TRIO2O_CINDER_APIGW_PROTOCOL:-$SERVICE_PROTOCOL}
# tricircle xjob
TRICIRCLE_XJOB=$TRICIRCLE_DIR/cmd/xjob.py
TRICIRCLE_XJOB_CONF=$TRICIRCLE_CONF_DIR/xjob.conf
# trio2o xjob
TRIO2O_XJOB=$TRIO2O_DIR/cmd/xjob.py
TRIO2O_XJOB_CONF=$TRIO2O_CONF_DIR/xjob.conf
TRICIRCLE_AUTH_CACHE_DIR=${TRICIRCLE_AUTH_CACHE_DIR:-/var/cache/tricircle}
TRIO2O_AUTH_CACHE_DIR=${TRIO2O_AUTH_CACHE_DIR:-/var/cache/trio2o}
export PYTHONPATH=$PYTHONPATH:$TRICIRCLE_DIR
export PYTHONPATH=$PYTHONPATH:$TRIO2O_DIR
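Most of the variables above use the ${VAR:-default} pattern, so they can be
overridden from local.conf before the plugin is sourced, for example (values
are illustrative):

    TRIO2O_API_PORT=19999
    TRIO2O_AUTH_CACHE_DIR=/var/cache/trio2o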

View File

@@ -1,143 +0,0 @@
#!/bin/bash
#
# Script name: verify_cross_pod_install.sh
# This script is to verify the installation of Tricircle in cross pod L3 networking.
# It verify both east-west and north-south networks.
#
# In this script, there are some parameters you need to consider before running it.
#
# 1, Post URL whether is 127.0.0.1 or something else,
# 2, This script create 2 subnets 10.0.1.0/24 and 10.0.2.0/24, Change these if needed.
# 3, This script create external subnet ext-net 10.50.11.0/26, Change it according
# your own environment.
# 4, The floating ip attached to the VM with ip 10.0.2.3, created by the script
# "verify_cross_pod_install.sh", modify it to your own environment.
#
# Change the parameters according to your own environment.
# Finally, execute "verify_cross_pod_install.sh" in the Node1.
#
# Author: Pengfei Shi <shipengfei92@gmail.com>
#
set -o xtrace
TEST_DIR=$(pwd)
echo "Test work directory is $TEST_DIR."
if [ ! -r admin-openrc.sh ];then
set -o xtrace
echo "Your work directory doesn't have admin-openrc.sh,"
echo "Please check whether you are in tricircle/devstack/ or not and run this script."
exit 1
fi
echo "Begining the verify testing..."
echo "Import client environment variables:"
source $TEST_DIR/admin-openrc.sh
echo "******************************"
echo "* Verify Endpoint *"
echo "******************************"
echo "List openstack endpoint:"
openstack --debug endpoint list
token=$(openstack token issue | awk 'NR==5 {print $4}')
echo $token
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod2", "az_name": "az2"}}'
echo "******************************"
echo "* Verify Nova *"
echo "******************************"
echo "Show nova aggregate:"
nova aggregate-list
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" \
-d '{"network": {"name": "net1", "admin_state_up": true, "availability_zone_hints": ["az1"]}}'
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" \
-d '{"network": {"name": "net2", "admin_state_up": true, "availability_zone_hints": ["az2"]}}'
echo "Create external network ext-net by curl:"
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" \
-d '{"network": {"name": "ext-net", "admin_state_up": true, "router:external": true, "provider:network_type": "vlan", "provider:physical_network": "extern", "availability_zone_hints": ["Pod2"]}}'
echo "Create test flavor:"
nova flavor-create test 1 1024 10 1
echo "******************************"
echo "* Verify Neutron *"
echo "******************************"
echo "Create external subnet with floating ips:"
neutron subnet-create --name ext-subnet --disable-dhcp ext-net 10.50.11.0/26 --allocation-pool start=10.50.11.30,end=10.50.11.50 --gateway 10.50.11.1
echo "Create router for subnets:"
neutron router-create router
echo "Set router external gateway:"
neutron router-gateway-set router ext-net
echo "Create net1 in Node1:"
neutron subnet-create net1 10.0.1.0/24
echo "Create net2 in Node2:"
neutron subnet-create net2 10.0.2.0/24
net1_id=$(neutron net-list |grep net1 | awk '{print $2}')
net2_id=$(neutron net-list |grep net2 | awk '{print $2}')
image_id=$(glance image-list |awk 'NR==4 {print $2}')
echo "Boot vm1 in az1:"
nova boot --flavor 1 --image $image_id --nic net-id=$net1_id --availability-zone az1 vm1
echo "Boot vm2 in az2:"
nova boot --flavor 1 --image $image_id --nic net-id=$net2_id --availability-zone az2 vm2
subnet1_id=$(neutron net-list |grep net1 |awk '{print $6}')
subnet2_id=$(neutron net-list |grep net2 |awk '{print $6}')
echo "Add interface of subnet1:"
neutron router-interface-add router $subnet1_id
echo "Add interface of subnet2:"
neutron router-interface-add router $subnet2_id
echo "******************************"
echo "* Verify VNC connection *"
echo "******************************"
echo "Get the VNC url of vm1:"
nova --os-region-name Pod1 get-vnc-console vm1 novnc
echo "Get the VNC url of vm2:"
nova --os-region-name Pod2 get-vnc-console vm2 novnc
echo "**************************************"
echo "* Verify External network *"
echo "**************************************"
echo "Create floating ip:"
neutron floatingip-create ext-net
echo "Show floating ips:"
neutron floatingip-list
echo "Show neutron ports:"
neutron port-list
floatingip_id=$(neutron floatingip-list | awk 'NR==4 {print $2}')
port_id=$(neutron port-list |grep 10.0.2.3 |awk '{print $2}')
echo "Associate floating ip:"
neutron floatingip-associate $floatingip_id $port_id

View File

@@ -1,94 +0,0 @@
#!/bin/bash
#
# Script name: verify_top_install.sh
# This script is to verify the installation of Tricircle in Top OpenStack.
#
# In this script, there are some parameters you need to consider before running it.
#
# 1, Post URL whether is 127.0.0.1 or something else,
# 2, This script create a subnet called net1 10.0.0.0/24, Change these if needed.
#
# Change the parameters according to your own environment.
# Execute "verify_top_install.sh" in the top OpenStack
#
# Author: Pengfei Shi <shipengfei92@gmail.com>
#
set -o xtrace
TEST_DIR=$(pwd)
echo "Test work directory is $TEST_DIR."
if [ ! -r admin-openrc.sh ];then
set -o xtrace
echo "Your work directory doesn't have admin-openrc.sh,"
echo "Please check whether you are in tricircle/devstack/ or not and run this script."
exit 1
fi
echo "Begining the verify testing..."
echo "Import client environment variables:"
source $TEST_DIR/admin-openrc.sh
echo "******************************"
echo "* Verify Endpoint *"
echo "******************************"
echo "List openstack endpoint:"
openstack --debug endpoint list
token=$(openstack token issue | awk 'NR==5 {print $4}')
echo $token
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'
echo "******************************"
echo "* Verify Nova *"
echo "******************************"
echo "Show nova aggregate:"
nova --debug aggregate-list
echo "Create test flavor:"
nova --debug flavor-create test 1 1024 10 1
echo "******************************"
echo "* Verify Neutron *"
echo "******************************"
echo "Create net1:"
neutron --debug net-create net1
echo "Create subnet of net1:"
neutron --debug subnet-create net1 10.0.0.0/24
image_id=$(glance image-list |awk 'NR==4 {print $2}')
net_id=$(neutron net-list|grep net1 |awk '{print $2}')
echo "Boot vm1 in az1:"
nova --debug boot --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az1 vm1
echo "******************************"
echo "* Verify Cinder *"
echo "******************************"
echo "Create a volume in az1:"
cinder --debug create --availability-zone=az1 1
echo "Show volume list:"
cinder --debug list
volume_id=$(cinder list |grep lvmdriver-1 | awk '{print $2}')
echo "Show detailed volume info:"
cinder --debug show $volume_id
echo "Delete test volume:"
cinder --debug delete $volume_id
cinder --debug list

View File

@@ -1,13 +1,13 @@
=======================
The Tricircle Admin API
The Trio2o Admin API
=======================
This Admin API describes the ways of interacting with the Tricircle service
This Admin API describes the ways of interacting with the Trio2o service
via HTTP protocol using Representational State Transfer (ReST).
API Versions
============
In order to bring new features to users over time, versioning is supported
by the Tricircle. The latest version of the Tricircle is v1.0.
by the Trio2o. The latest version of the Trio2o is v1.0.
The Version APIs work the same as other APIs as they still require
authentication.
@@ -22,20 +22,20 @@ Service URLs
============
All API calls through the rest of this document require authentication with
the OpenStack Identity service. They also require a base service url that can
be got from the OpenStack Tricircle endpoint. This will be the root url that
be got from the OpenStack Trio2o endpoint. This will be the root url that
every call below will be added to build a full path.
For instance, if the Tricircle service url is http://127.0.0.1:19999/v1.0 then
For instance, if the Trio2o service url is http://127.0.0.1:19999/v1.0 then
the full API call for /pods is http://127.0.0.1:19999/v1.0/pods.
As such, for the rest of this document we will leave out the root url where
GET /pods really means GET {tricircle_service_url}/pods.
GET /pods really means GET {trio2o_service_url}/pods.
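For instance, listing pods against the example service url above looks like
this (a sketch; the token comes from the Identity service)::

    curl -H "X-Auth-Token: $token" http://127.0.0.1:19999/v1.0/pods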
Pod
===
A pod represents a region in Keystone. When operating a pod, the Tricircle
A pod represents a region in Keystone. When operating a pod, the Trio2o
decides the correct endpoints to send request based on the region of the pod.
Considering the 2-layers architecture of the Tricircle, we also have two kinds
Considering the 2-layers architecture of the Trio2o, we also have two kinds
of pods: top pod and bottom pod.
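The two kinds differ in the request body used to register them: a top pod
carries only a pod_name, while a bottom pod also carries an az_name, as in
the DevStack installation guide::

    curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'
    curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'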
@@ -59,7 +59,7 @@ following table.
+-----------+-------+---------------+-----------------------------------------------------+
|pod_name |body | string |pod_name is specified by user but must match the |
| | | |region name registered in Keystone. When creating a |
| | | |bottom pod, the Tricircle automatically creates a |
| | | |bottom pod, the Trio2o automatically creates a |
| | | |host aggregation and assigns the new availability |
| | | |zone id to it. |
+-----------+-------+---------------+-----------------------------------------------------+
@@ -142,7 +142,7 @@ means a bottom pod. All of its attributes are described in the following table.
+-----------+-------+---------------+-----------------------------------------------------+
|pod_name |body | string |pod_name is specified by user but must match the |
| | | |region name registered in Keystone. When creating a |
| | | |bottom pod, the Tricircle automatically creates a |
| | | |bottom pod, the Trio2o automatically creates a |
| | | |host aggregation and assigns the new availability |
| | | |zone id to it. |
+-----------+-------+---------------+-----------------------------------------------------+
@@ -198,7 +198,7 @@ in the following table.
+===========+=======+===============+=====================================================+
|pod_name |body | string |pod_name is specified by user but must match the |
| | | |region name registered in Keystone. When creating a |
| | | |bottom pod, the Tricircle automatically creates a |
| | | |bottom pod, the Trio2o automatically creates a |
| | | |host aggregation and assigns the new availability |
| | | |zone id to it. |
+-----------+-------+---------------+-----------------------------------------------------+
@@ -232,7 +232,7 @@ are listed below.
+-----------+-------+---------------+-----------------------------------------------------+
|pod_name |body | string |pod_name is specified by user but must match the |
| | | |region name registered in Keystone. When creating a |
| | | |bottom pod, the Tricircle automatically creates a |
| | | |bottom pod, the Trio2o automatically creates a |
| | | |host aggregation and assigns the new availability |
| | | |zone id to it. |
+-----------+-------+---------------+-----------------------------------------------------+

View File

@@ -37,7 +37,7 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'tricircle'
project = u'trio2o'
copyright = u'2015, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.

View File

@@ -1,9 +1,9 @@
.. tricircle documentation master file, created by
.. trio2o documentation master file, created by
sphinx-quickstart on Wed Dec 2 17:00:36 2015.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to tricircle's documentation!
Welcome to trio2o's documentation!
========================================================
Contents:

View File

@@ -1,467 +1,270 @@
=====================
Installation with pip
=====================
==================================
Trio2o installation with DevStack
==================================
At the command line::
Now the Trio2o can be played with an all-in-one single node DevStack. For
the resource requirements to set up a single node DevStack, please refer
to `All-In-One Single Machine <http://docs.openstack.org/developer/devstack/guides/single-machine.html>`_ for
installing DevStack in a physical machine
or `All-In-One Single VM <http://docs.openstack.org/developer/devstack/guides/single-vm.html>`_ for
installing DevStack in a virtual machine.
$ pip install tricircle
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv tricircle
$ pip install tricircle
======================================
Single node installation with DevStack
======================================
Now the Tricircle can be played with DevStack.
- 1 Install DevStack. Please refer to
http://docs.openstack.org/developer/devstack/
- 1 Install DevStack. Please refer to the `DevStack document
<http://docs.openstack.org/developer/devstack/>`_
on how to install DevStack into a single VM or physical machine
- 2 In DevStack folder, create a file local.conf, and copy the content of
https://github.com/openstack/tricircle/blob/master/devstack/local.conf.sample
https://github.com/openstack/trio2o/blob/master/devstack/local.conf.sample
to local.conf, change password in the file if needed.
- 3 Run DevStack. In DevStack folder, run::
- 3 In local.conf, change HOST_IP to the host's IP address where the Trio2o
will be installed, for example::
HOST_IP=162.3.124.203
- 4 Run DevStack. In DevStack folder, run::
./stack.sh
- 4 In DevStack folder, create a file adminrc, and copy the content of
https://github.com/openstack/tricircle/blob/master/devstack/admin-openrc.sh
to the adminrc, change the password in the file if needed.
And run the following command to set the environment variables::
- 5 After DevStack successfully starts, we need to create environment variables
for the user (the admin user as the example in this document). In DevStack folder::
source adminrc
source openrc admin admin
- 5 After DevStack successfully starts, check if services have been correctly
registered. Run "openstack endpoint list" and you should get output look
like as following::
- 6 Unset the region name environment variable, so that commands can be issued
to the specified region as needed in the following steps::
unset OS_REGION_NAME
- 7 Check if services have been correctly registered. Run::
openstack --os-region-name=RegionOne endpoint list
you should get output that looks like the following::
+----------------------------------+-----------+--------------+----------------+
| ID | Region | Service Name | Service Type |
+----------------------------------+-----------+--------------+----------------+
| 230059e8533e4d389e034fd68257034b | RegionOne | glance | image |
| 25180a0a08cb41f69de52a7773452b28 | RegionOne | nova | compute |
| bd1ed1d6f0cc42398688a77bcc3bda91 | Pod1 | neutron | network |
| 673736f54ec147b79e97c395afe832f9 | RegionOne | ec2 | ec2 |
| fd7f188e2ba04ebd856d582828cdc50c | RegionOne | neutron | network |
| ffb56fd8b24a4a27bf6a707a7f78157f | RegionOne | keystone | identity |
| 88da40693bfa43b9b02e1478b1fa0bc6 | Pod1 | nova | compute |
| f35d64c2ddc44c16a4f9dfcd76e23d9f | RegionOne | nova_legacy | compute_legacy |
| 8759b2941fe7469e9651de3f6a123998 | RegionOne | tricircle | Cascading |
| e8a1f1a333334106909e05037db3fbf6 | Pod1 | neutron | network |
| 72c02a11856a4814a84b60ff72e0028d | Pod1 | cinderv2 | volumev2 |
| a26cff63563a480eaba334185a7f2cec | Pod1 | nova | compute |
| f90d97f8959948088ab58bc143ecb011 | RegionOne | cinderv3 | volumev3 |
| ed1af45af0d8459ea409e5c0dd0aadba | RegionOne | cinder | volume |
| ae6024a582534c21aee0c6d7fa5b90fb | RegionOne | nova | compute |
| c75ab09edc874bb781b0d376cec74623 | RegionOne | cinderv2 | volumev2 |
| 80ce6a2d12aa43fab693f4e619670d97 | RegionOne | trio2o | Cascading |
| 11a4b451da1a4db6ae14b0aa282f9ba6 | RegionOne | nova_legacy | compute_legacy |
| 546a8abf29244223bc9d5dd4960553a7 | RegionOne | glance | image |
| 0e9c9343b50e4b7080b25f4e297f79d3 | RegionOne | keystone | identity |
+----------------------------------+-----------+--------------+----------------+
"RegionOne" is the region where the Trio2o Admin API(ID is
80ce6a2d12aa43fab693f4e619670d97 in the above list), Nova API gateway(
ID is ae6024a582534c21aee0c6d7fa5b90fb) and Cinder API gateway( ID is
c75ab09edc874bb781b0d376cec74623) are running in. "Pod1" is the normal
bottom OpenStack region which includes Nova, Cinder, Neutron.
"RegionOne" is the region you set in local.conf via REGION_NAME, whose default
value is "RegionOne", we use it as the region for the Tricircle instance;
"Pod1" is the region set via "POD_REGION_NAME", new configuration option
introduced by the Tricircle, we use it as the bottom OpenStack instance.
- 6 Create pod instances for Tricircle and bottom OpenStack::
- 8 Get token for the later commands. Run::
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'
openstack --os-region-name=RegionOne token issue
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'
- 9 Create pod instances for the Trio2o to manage the mapping between
availability zones and OpenStack instances; the "$token" is obtained in
step 8 above::
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'
Pay attention to "pod_name" parameter we specify when creating pod. Pod name
should exactly match the region name registered in Keystone since it is used
by the Tricircle to route API request. In the above commands, we create pods
named "RegionOne" and "Pod1" for the Tricircle instance and bottom OpenStack
instance. The Tricircle API service will automatically create an aggregate
when user creates a bottom pod, so command "nova aggregate-list" will show
the following result::
should exactly match the region name registered in Keystone. In the above
commands, we create pods named "RegionOne" and "Pod1".
+----+----------+-------------------+
| Id | Name | Availability Zone |
+----+----------+-------------------+
| 1 | ag_Pod1 | az1 |
+----+----------+-------------------+
- 7 Create necessary resources to boot a virtual machine::
nova flavor-create test 1 1024 10 1
neutron net-create net1
neutron subnet-create net1 10.0.0.0/24
glance image-list
Note that flavor mapping has not been implemented yet so the created flavor
is just record saved in database as metadata. Actual flavor is saved in
bottom OpenStack instance.
- 8 Boot a virtual machine::
nova boot --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az1 vm1
- 9 Create, list, show and delete volume::
cinder --debug create --availability-zone=az1 1
cinder --debug list
cinder --debug show $volume_id
cinder --debug delete $volume_id
cinder --debug list
Verification with script
^^^^^^^^^^^^^^^^^^^^^^^^
A sample of admin-openrc.sh and an installation verification script can be found
in devstack/ in the Tricircle root folder. 'admin-openrc.sh' is used to create
environment variables for the admin user as the following::
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password #change password as you set in your own environment
export OS_AUTH_URL=http://127.0.0.1:5000
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_REGION_NAME=RegionOne
The command to use the admin-openrc.sh is::
source tricircle/devstack/admin-openrc.sh
'verify_top_install.sh' script is to quickly verify the installation of
the Tricircle in Top OpenStack as the step 5-9 above and save the output
to logs.
Before verifying the installation, you should modify the script based on your
own environment.
- 10 Create necessary resources in local Neutron server::
- 1 The default post URL is 127.0.0.1, change it if needed,
- 2 The default create net1's networ address is 10.0.0.0/24, change it if
needed.
neutron --os-region-name=Pod1 net-create net1
neutron --os-region-name=Pod1 subnet-create net1 10.0.0.0/24
Then you do the following steps to verify::
Please note that the net1 ID will be used in a later step to boot the VM.
cd tricircle/devstack/
./verify_top_install.sh 2>&1 | tee logs
- 11 Get image ID and flavor ID which will be used in VM booting::
glance --os-region-name=RegionOne image-list
nova --os-region-name=RegionOne flavor-create test 1 1024 10 1
nova --os-region-name=RegionOne flavor-list
- 12 Boot a virtual machine::

    nova --os-region-name=RegionOne boot --flavor 1 --image $image_id --nic net-id=$net_id vm1

- 13 Verify the VM is connected to the net1::

    neutron --os-region-name=Pod1 port-list
    nova --os-region-name=RegionOne list

- 14 Create, list, show and delete volume::

    cinder --os-region-name=RegionOne create --availability-zone=az1 1
    cinder --os-region-name=RegionOne list
    cinder --os-region-name=RegionOne show $volume_id
    cinder --os-region-name=RegionOne delete $volume_id
    cinder --os-region-name=RegionOne list

- 15 Using --debug to make sure the commands are issued to Nova API gateway
  or Cinder API gateway::

    nova --debug --os-region-name=RegionOne list
    cinder --debug --os-region-name=RegionOne list

  The nova command should be sent to http://162.3.124.203:19998/ and the cinder
  command to http://162.3.124.203:19997/
========================================
Add another pod to Trio2o with DevStack
========================================

- 1 Prepare another node(suppose it's node-2), be sure the node is ping-able
  from the node(suppose it's node-1) where the Trio2o is installed and running.
  For the resource requirement to setup another node DevStack, please refer
  to `All-In-One Single Machine <http://docs.openstack.org/developer/devstack/guides/single-machine.html>`_ for
  installing DevStack in physical machine
  or `All-In-One Single VM <http://docs.openstack.org/developer/devstack/guides/single-vm.html>`_ for
  installing DevStack in virtual machine.
- 2 Install DevStack in node-2. Please refer to the `DevStack document
  <http://docs.openstack.org/developer/devstack/>`_
  on how to install DevStack into a single VM or physical machine.

- 3 In node-2 DevStack folder, create a file local.conf, and copy the
  content of https://github.com/openstack/trio2o/blob/master/devstack/local.conf.sample2
  to local.conf, change password in the file if needed.

- 4 In node-2 local.conf, change the REGION_NAME, which is used as the
  region name of this pod, if needed::

    REGION_NAME=Pod2

- 5 In node-2 local.conf, change following IP to the host's IP address of node-2,
  for example, if node-2's management interface IP address is 162.3.124.204::

    HOST_IP=162.3.124.204
    SERVICE_HOST=162.3.124.204

- 6 In node-2, the OpenStack will use the KeyStone which is running in
  node-1, so change the KEYSTONE_REGION_NAME and KeyStone host IP address
  to node-1 IP address accordingly::

    KEYSTONE_REGION_NAME=RegionOne
    KEYSTONE_SERVICE_HOST=162.3.124.203
    KEYSTONE_AUTH_HOST=162.3.124.203

- 7 In node-2, the OpenStack will use the Glance which is running in
  node-1, so change the GLANCE_SERVICE_HOST IP address to node-1 IP
  address accordingly::

    GLANCE_SERVICE_HOST=162.3.124.203

- 8 Run DevStack. In DevStack folder, run::

    ./stack.sh
- 9 After node-2 DevStack successfully starts, return to node-1. In
  node-1 DevStack folder::

    source openrc admin admin

- 10 Unset the region name environment variable in node-1, so that the command
  can be issued to specified region in following commands as needed::

    unset OS_REGION_NAME
- 11 Check if services in node-1 and node-2 have been correctly registered.
  Run::

    openstack --os-region-name=RegionOne endpoint list

  you should get output looks like as following::

    +----------------------------------+-----------+--------------+----------------+
    | ID                               | Region    | Service Name | Service Type   |
    +----------------------------------+-----------+--------------+----------------+
    | e09ca9acfa6341aa8f2671571c73db28 | RegionOne | glance       | image          |
    | 2730fbf212604687ada1f20b203fa0d7 | Pod2      | nova_legacy  | compute_legacy |
    | 7edd2273b0ae4bc68bbf714f561c2958 | Pod2      | cinder       | volume         |
    | b39c6e4d1be143d694f620b53b4a6015 | Pod2      | cinderv2     | volumev2       |
    | 9612c10655bb4fc994f3db4af72bfdac | Pod2      | nova         | compute        |
    | 6c28b4a76fa148578a12423362a5ade1 | RegionOne | trio2o       | Cascading      |
    | a1f439e8933d48e9891d238ad8e18bd5 | RegionOne | keystone     | identity       |
    | 452b249592d04f0b903ee24fa0dbb573 | RegionOne | nova         | compute        |
    | 30e7efc5e8f841f192cbea4da31ae5d5 | RegionOne | cinderv3     | volumev3       |
    | 63b88f4023cc44b59cfca53ad9606b85 | RegionOne | cinderv2     | volumev2       |
    | 653693d607934da7b7724c0cd1c49fb0 | Pod2      | neutron      | network        |
    | 3e3ccb71b8424958ad5def048077ddf8 | Pod1      | nova         | compute        |
    | d4615bce839f43f2a8856f3795df6833 | Pod1      | neutron      | network        |
    | fd2004b26b6847df87d1036c2363ed22 | RegionOne | cinder       | volume         |
    | 04ae8677ec704b779a1c00fa0eca2636 | Pod1      | cinderv2     | volumev2       |
    | e11be9f233d1434bbf8c4b8edf6a2f50 | RegionOne | nova_legacy  | compute_legacy |
    | d50e2dfbb87b43e98a5899eae4fd4d72 | Pod2      | cinderv3     | volumev3       |
    +----------------------------------+-----------+--------------+----------------+

  "RegionOne" is the region where the Trio2o Admin API(ID is
  6c28b4a76fa148578a12423362a5ade1 in the above list), Nova API gateway(
  ID is 452b249592d04f0b903ee24fa0dbb573) and Cinder API gateway(ID is
  63b88f4023cc44b59cfca53ad9606b85) are running in. "Pod1" is the normal
  bottom OpenStack region which includes Nova, Cinder, Neutron in node-1.
  "Pod2" is the normal bottom OpenStack region which includes Nova, Cinder,
  Neutron in node-2.
- 12 Get token for the later commands. Run::

    openstack --os-region-name=RegionOne token issue

- 13 Create Pod2 instances for the Trio2o to manage the mapping between
  availability zone and OpenStack instances, the "$token" is obtained in
  step 12::

    curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod2", "az_name": "az2"}}'

  Pay attention to the "pod_name" parameter we specify when creating the pod. Pod name
  should exactly match the region name registered in KeyStone. In the above
  commands, we create the pod named "Pod2" in "az2".
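
  If you prefer to script this step, the same call can be made from Python.
  The following is a minimal sketch assuming the python-requests library and
  the Admin API address used in this guide; the helper name create_pod is
  purely illustrative, not part of the Trio2o codebase::

    # Minimal Python equivalent of the curl call above; endpoint, port and
    # payload come straight from this guide, everything else is illustrative.
    import json

    import requests

    TRIO2O_ADMIN_API = 'http://127.0.0.1:19999/v1.0'


    def create_pod(token, pod_name, az_name=None):
        """Register one bottom OpenStack region as a pod in the Trio2o."""
        pod = {'pod_name': pod_name}
        if az_name:
            pod['az_name'] = az_name
        resp = requests.post(
            TRIO2O_ADMIN_API + '/pods',
            headers={'Content-Type': 'application/json',
                     'X-Auth-Token': token},
            data=json.dumps({'pod': pod}))
        resp.raise_for_status()
        return resp.json()


    # Same effect as the curl command in step 13:
    # create_pod(token, 'Pod2', 'az2')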
- 14 Create necessary resources in local Neutron server::

    neutron --os-region-name=Pod2 net-create net2
    neutron --os-region-name=Pod2 subnet-create net2 10.0.0.0/24

  Please note that the net2 ID will be used in a later step to boot VM.

- 15 Get image ID and flavor ID which will be used in VM booting, the flavor
  should have been created in the node-1 installation, if not, please create
  one::

    glance --os-region-name=RegionOne image-list
    nova --os-region-name=RegionOne flavor-create test 1 1024 10 1
    nova --os-region-name=RegionOne flavor-list

- 16 Boot a virtual machine in net2, replace $net_id with net2's ID::

    nova --os-region-name=RegionOne boot --availability-zone az2 --flavor 1 --image $image_id --nic net-id=$net_id vm2

- 17 Verify the VM is connected to the net2::

    neutron --os-region-name=Pod2 port-list
    nova --os-region-name=RegionOne list
- 18 Create, list, show and delete volume::

    cinder --os-region-name=RegionOne create --availability-zone=az2 1
    cinder --os-region-name=RegionOne list
    cinder --os-region-name=RegionOne show $volume_id
    cinder --os-region-name=RegionOne delete $volume_id
    cinder --os-region-name=RegionOne list

- 19 Using --debug to make sure the commands are issued to Nova API gateway
  or Cinder API gateway::

    nova --debug --os-region-name=RegionOne list
    cinder --debug --os-region-name=RegionOne list

  The nova command should be sent to http://127.0.0.1:19998/ and the cinder
  command to http://127.0.0.1:19997/

======================================================================
Two nodes installation with DevStack for Cross-OpenStack L3 networking
======================================================================

Introduction
^^^^^^^^^^^^

Now the Tricircle supports cross-pod l3 networking.

To achieve cross-pod l3 networking, Tricircle utilizes a shared provider VLAN
network at the first phase. We are considering later using a DCI controller to
create a multi-segment VLAN network, or a VxLAN network, for L3 networking
purposes. When a subnet is attached to a router in the top pod, Tricircle not
only creates the corresponding subnet and router in the bottom pod, but also
creates a VLAN type "bridge" network. Both tenant network and "bridge" network
are attached to the bottom router. Each tenant will have one allocated VLAN,
which is shared by the tenant's "bridge" networks across bottom pods. The
CIDRs of "bridge" networks for one tenant are also the same, so the router
interfaces in "bridge" networks across different bottom pods can communicate
with each other via the provider VLAN network. By adding an extra route as
following::

    destination: CIDR of tenant network in another bottom pod
    nexthop: "bridge" network interface ip in another bottom pod

when a server sends a packet whose receiver is in another network and in
another bottom pod, the packet first goes to the router namespace, then is
forwarded to the router namespace in the other bottom pod according to the
extra route, and at last the packet is sent to the target server. This
configuration job is triggered when a user attaches a subnet to a router in
the top pod and is finished asynchronously.

Currently cross-pod L2 networking is not supported yet, so tenant networks
cannot cross pods, that is to say, one network in the top pod can only locate
in one bottom pod; a tenant network is bound to a bottom pod. Otherwise we
cannot correctly configure the extra route since for one destination CIDR, we
would have more than one possible nexthop address.

*When cross-pod L2 networking is introduced, L2GW will be used to connect L2
networks in different pods. No extra route is required to connect L2 networks.
All L3 traffic will be forwarded to the local L2 network, then go to the
server in another pod via the L2GW.*

We use the "availability_zone_hints" attribute for the user to specify the
bottom pod in which he wants to create the bottom network. Currently we do not
support attaching a network to a router without setting the
"availability_zone_hints" attribute of the network.
Prerequisite
^^^^^^^^^^^^

To play cross-pod l3 networking, two nodes are needed. One runs the Tricircle
and one bottom pod, the other one runs another bottom pod. Both nodes have
two network interfaces, for management and provider VLAN network. For the VLAN
network, the physical network infrastructure should support VLAN tagging. If
you would like to try north-south networking, too, you should prepare one more
network interface in the second node for the external network. In this guide,
the external network is also vlan type, so the local.conf sample is based on
vlan type external network setup.

Setup
^^^^^

In node1,

- 1 Git clone DevStack.
- 2 Git clone Tricircle, or just download devstack/local.conf.node_1.sample.
- 3 Copy devstack/local.conf.node_1.sample to DevStack folder and rename it to
  local.conf, change password in the file if needed.
- 4 Change the following options according to your environment::

    HOST_IP=10.250.201.24

  change to your management interface ip::

    Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000)

  the format is (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>),
  you can change physical network name, but remember to adapt your change
  to the commands showed in this guide; also, change min vlan and max vlan
  to adapt the vlan range your physical network supports::

    OVS_BRIDGE_MAPPINGS=bridge:br-bridge

  the format is <physical network name>:<ovs bridge name>, you can change
  these names, but remember to adapt your change to the commands showed in
  this guide::

    Q_USE_PROVIDERNET_FOR_PUBLIC=True

  use this option if you would like to try L3 north-south networking.

- 5 Create OVS bridge and attach the VLAN network interface to it::

    sudo ovs-vsctl add-br br-bridge
    sudo ovs-vsctl add-port br-bridge eth1

  br-bridge is the OVS bridge name you configure on OVS_PHYSICAL_BRIDGE, eth1
  is the device name of your VLAN network interface.

- 6 Run DevStack.
- 7 After DevStack successfully starts, begin to setup node2.
In node2,

- 1 Git clone DevStack.
- 2 Git clone Tricircle, or just download devstack/local.conf.node_2.sample.
- 3 Copy devstack/local.conf.node_2.sample to DevStack folder and rename it to
  local.conf, change password in the file if needed.
- 4 Change the following options according to your environment::

    HOST_IP=10.250.201.25

  change to your management interface ip::

    KEYSTONE_SERVICE_HOST=10.250.201.24

  change to management interface ip of node1::

    KEYSTONE_AUTH_HOST=10.250.201.24

  change to management interface ip of node1::

    GLANCE_SERVICE_HOST=10.250.201.24

  change to management interface ip of node1::

    Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)

  the format is (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>),
  you can change physical network name, but remember to adapt your change
  to the commands showed in this guide; also, change min vlan and max vlan
  to adapt the vlan range your physical network supports::

    OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext

  the format is <physical network name>:<ovs bridge name>, you can change
  these names, but remember to adapt your change to the commands showed in
  this guide::

    Q_USE_PROVIDERNET_FOR_PUBLIC=True

  use this option if you would like to try L3 north-south networking.

  In this guide, we define two physical networks in node2, one is "bridge" for
  the bridge network, the other one is "extern" for the external network. If
  you do not want to try L3 north-south networking, you can simply remove the
  "extern" part. The external network type we use in the guide is vlan, if you
  want to use another network type like flat, please refer to the `DevStack
  document <http://docs.openstack.org/developer/devstack/>`_.

- 5 Create OVS bridge and attach the VLAN network interface to it::

    sudo ovs-vsctl add-br br-bridge
    sudo ovs-vsctl add-port br-bridge eth1

  br-bridge is the OVS bridge name you configure on OVS_PHYSICAL_BRIDGE, eth1
  is the device name of your VLAN network interface.

- 6 Run DevStack.
- 7 After DevStack successfully starts, the setup is finished.

How to play
^^^^^^^^^^^

All the following operations are performed in node1

- 1 Check if services have been correctly registered. Run "openstack endpoint
  list" and you should get similar output as following::

    +----------------------------------+-----------+--------------+----------------+
    | ID                               | Region    | Service Name | Service Type   |
    +----------------------------------+-----------+--------------+----------------+
    | 1fadbddef9074f81b986131569c3741e | RegionOne | tricircle    | Cascading      |
    | a5c5c37613244cbab96230d9051af1a5 | RegionOne | ec2          | ec2            |
    | 809a3f7282f94c8e86f051e15988e6f5 | Pod2      | neutron      | network        |
    | e6ad9acc51074f1290fc9d128d236bca | Pod1      | neutron      | network        |
    | aee8a185fa6944b6860415a438c42c32 | RegionOne | keystone     | identity       |
    | 280ebc45bf9842b4b4156eb5f8f9eaa4 | RegionOne | glance       | image          |
    | aa54df57d7b942a1a327ed0722dba96e | Pod2      | nova_legacy  | compute_legacy |
    | aa25ae2a3f5a4e4d8bc0cae2f5fbb603 | Pod2      | nova         | compute        |
    | 932550311ae84539987bfe9eb874dea3 | RegionOne | nova_legacy  | compute_legacy |
    | f89fbeffd7e446d0a552e2a6cf7be2ec | Pod1      | nova         | compute        |
    | e2e19c164060456f8a1e75f8d3331f47 | Pod2      | ec2          | ec2            |
    | de698ad5c6794edd91e69f0e57113e97 | RegionOne | nova         | compute        |
    | 8a4b2332d2a4460ca3f740875236a967 | Pod2      | keystone     | identity       |
    | b3ad80035f8742f29d12df67bdc2f70c | RegionOne | neutron      | network        |
    +----------------------------------+-----------+--------------+----------------+

  "RegionOne" is the region you set in local.conf via REGION_NAME in node1,
  whose default value is "RegionOne", we use it as the region for Tricircle;
  "Pod1" is the region set via POD_REGION_NAME, a new configuration option
  introduced by Tricircle, we use it as the bottom OpenStack; "Pod2" is the
  region you set via REGION_NAME in node2, we use it as another bottom
  OpenStack. In node2, you also need to set KEYSTONE_REGION_NAME the same as
  REGION_NAME in node1, which is "RegionOne" in this example, so services in
  node2 can interact with the Keystone service in RegionOne.
- 2 Create pod instances for Tricircle and bottom OpenStack::

    curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'

    curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'

    curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod2", "az_name": "az2"}}'

- 3 Create network with AZ scheduler hints specified::

    curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" \
        -d '{"network": {"name": "net1", "admin_state_up": true, "availability_zone_hints": ["az1"]}}'

    curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" \
        -d '{"network": {"name": "net2", "admin_state_up": true, "availability_zone_hints": ["az2"]}}'

  Here we create two networks separately bound to Pod1 and Pod2.

- 4 Create necessary resources to boot virtual machines::

    nova flavor-create test 1 1024 10 1
    neutron subnet-create net1 10.0.1.0/24
    neutron subnet-create net2 10.0.2.0/24
    glance image-list

- 5 Boot virtual machines::

    nova boot --flavor 1 --image $image_id --nic net-id=$net1_id --availability-zone az1 vm1
    nova boot --flavor 1 --image $image_id --nic net-id=$net2_id --availability-zone az2 vm2

- 6 Create router and attach interface::

    neutron router-create router
    neutron router-interface-add router $subnet1_id
    neutron router-interface-add router $subnet2_id

- 7 Launch VNC console and check connectivity

  By now, the two networks are connected by the router, and the two virtual
  machines should be able to communicate with each other; we can launch a VNC
  console to check. Currently Tricircle doesn't support VNC proxy, so we need
  to go to the bottom OpenStack to obtain a VNC console::

    nova --os-region-name Pod1 get-vnc-console vm1 novnc
    nova --os-region-name Pod2 get-vnc-console vm2 novnc

  Login to one virtual machine via VNC and you should find it can "ping" the
  other virtual machine. The default security group is applied, so there is no
  need to configure security group rules.

North-South Networking
^^^^^^^^^^^^^^^^^^^^^^

Before running DevStack in node2, you need to create another ovs bridge for
the external network and then attach a port::

    sudo ovs-vsctl add-br br-ext
    sudo ovs-vsctl add-port br-ext eth2

Below are the operations related to north-south networking.

- 1 Create external network::

    curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
        -H "X-Auth-Token: $token" \
        -d '{"network": {"name": "ext-net", "admin_state_up": true, "router:external": true, "provider:network_type": "vlan", "provider:physical_network": "extern", "availability_zone_hints": ["Pod2"]}}'

  Pay attention that when creating an external network, we still need to pass
  the "availability_zone_hints" parameter, but the value we pass is the name
  of a pod, not the name of an availability zone.

  *Currently the external network needs to be created before attaching a
  subnet to the router, because the plugin needs to utilize external network
  information to setup the bridge network when handling the interface adding
  operation. This limitation will be removed later.*

- 2 Create external subnet::

    neutron subnet-create --name ext-subnet --disable-dhcp ext-net 163.3.124.0/24

- 3 Set router external gateway::

    neutron router-gateway-set router ext-net

  Now a virtual machine in the subnet attached to the router should be able
  to "ping" machines in the external network. In our test, we use a hypervisor
  tool to directly start a virtual machine in the external network to check
  the network connectivity.

- 4 Create floating ip::

    neutron floatingip-create ext-net

- 5 Associate floating ip::

    neutron floatingip-list
    neutron port-list
    neutron floatingip-associate $floatingip_id $port_id

  Now you should be able to access the virtual machine with the floating ip
  bound from the external network.

Verification with script
^^^^^^^^^^^^^^^^^^^^^^^^

A sample of admin-openrc.sh and an installation verification script can be
found in the devstack/ directory. And a demo blog with virtualbox can be found
at https://wiki.openstack.org/wiki/Play_tricircle_with_virtualbox

Script 'verify_cross_pod_install.sh' is to quickly verify the installation of
the Tricircle in Cross Pod OpenStack as the contents above and save the output
to logs.

Before verifying the installation, some parameters should be modified to your
own environment:

- 1 The default URL is 127.0.0.1, change it if needed,
- 2 This script creates an external network 10.50.11.0/26 according to the
  work environment, change it if needed.
- 3 This script creates 2 subnets 10.0.1.0/24 and 10.0.2.0/24, change these if
  needed.
- 4 The default created floating-ip is attached to the VM with port 10.0.2.3
  created by the subnets, modify it according to your environment.

Then do the following steps in Node1 OpenStack to verify network functions::

    cd tricircle/devstack/
    ./verify_cross_pod_install.sh 2>&1 | tee logs
View File
@ -2,6 +2,6 @@
Usage
======
To use tricircle in a project::
To use trio2o in a project::
import tricircle
import trio2o
View File
@ -1,9 +1,9 @@
[DEFAULT]
output_file = etc/api.conf.sample
wrap_width = 79
namespace = tricircle.api
namespace = tricircle.common
namespace = tricircle.db
namespace = trio2o.api
namespace = trio2o.common
namespace = trio2o.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
View File
@ -1,9 +1,9 @@
[DEFAULT]
output_file = etc/cinder_apigw.conf.sample
wrap_width = 79
namespace = tricircle.cinder_apigw
namespace = tricircle.common
namespace = tricircle.db
namespace = trio2o.cinder_apigw
namespace = trio2o.common
namespace = trio2o.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
View File
@ -1,9 +1,9 @@
[DEFAULT]
output_file = etc/nova_apigw.conf.sample
wrap_width = 79
namespace = tricircle.nova_apigw
namespace = tricircle.common
namespace = tricircle.db
namespace = trio2o.nova_apigw
namespace = trio2o.common
namespace = trio2o.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
View File
@ -1,4 +0,0 @@
[DEFAULT]
output_file = etc/tricircle_plugin.conf.sample
wrap_width = 79
namespace = tricircle.network
View File
@ -1,8 +1,8 @@
[DEFAULT]
output_file = etc/xjob.conf.sample
wrap_width = 79
namespace = tricircle.xjob
namespace = tricircle.common
namespace = trio2o.xjob
namespace = trio2o.common
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
View File
@ -55,7 +55,7 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'The Tricircle Release Notes'
project = u'The Trio2o Release Notes'
copyright = u'2016, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
View File
@ -1,7 +1,7 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
pbr>=1.8 # Apache-2.0
Babel>=2.3.4 # BSD
Paste # MIT
@ -10,38 +10,38 @@ Routes!=2.0,!=2.1,!=2.3.0,>=1.12.3;python_version=='2.7' # MIT
Routes!=2.0,!=2.3.0,>=1.12.3;python_version!='2.7' # MIT
debtcollector>=1.2.0 # Apache-2.0
eventlet!=0.18.3,>=0.18.2 # MIT
pecan!=1.0.2,!=1.0.3,!=1.0.4,>=1.0.0 # BSD
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
greenlet>=0.3.2 # MIT
httplib2>=0.7.5 # MIT
requests>=2.10.0 # Apache-2.0
Jinja2>=2.8 # BSD License (3 clause)
keystonemiddleware!=4.1.0,!=4.5.0,>=4.0.0 # Apache-2.0
netaddr!=0.7.16,>=0.7.12 # BSD
keystonemiddleware!=4.5.0,>=4.2.0 # Apache-2.0
netaddr!=0.7.16,>=0.7.13 # BSD
netifaces>=0.10.4 # MIT
neutron-lib>=0.4.0 # Apache-2.0
neutron-lib>=1.0.0 # Apache-2.0
retrying!=1.3.0,>=1.2.3 # Apache-2.0
SQLAlchemy<1.1.0,>=1.0.10 # MIT
WebOb>=1.2.3 # MIT
WebOb>=1.6.0 # MIT
python-cinderclient!=1.7.0,!=1.7.1,>=1.6.0 # Apache-2.0
python-glanceclient!=2.4.0,>=2.3.0 # Apache-2.0
python-keystoneclient!=2.1.0,>=2.0.0 # Apache-2.0
python-glanceclient>=2.5.0 # Apache-2.0
python-keystoneclient>=3.6.0 # Apache-2.0
python-neutronclient>=5.1.0 # Apache-2.0
python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
alembic>=0.8.4 # MIT
six>=1.9.0 # MIT
stevedore>=1.16.0 # Apache-2.0
stevedore>=1.17.1 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.config>=3.14.0 # Apache-2.0
oslo.config!=3.18.0,>=3.14.0 # Apache-2.0
oslo.context>=2.9.0 # Apache-2.0
oslo.db>=4.10.0 # Apache-2.0
oslo.db!=4.13.1,!=4.13.2,>=4.11.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
oslo.log>=3.11.0 # Apache-2.0
oslo.messaging>=5.2.0 # Apache-2.0
oslo.middleware>=3.0.0 # Apache-2.0
oslo.policy>=1.9.0 # Apache-2.0
oslo.policy>=1.15.0 # Apache-2.0
oslo.rootwrap>=5.0.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
oslo.utils>=3.18.0 # Apache-2.0
oslo.versionedobjects>=1.13.0 # Apache-2.0
sqlalchemy-migrate>=0.9.6 # Apache-2.0
View File
@ -1,10 +1,10 @@
[metadata]
name = tricircle
summary = the Tricircle provides an OpenStack API gateway and networking automation to allow multiple OpenStack instances, spanning in one site or multiple sites or in hybrid cloud, to be managed as a single OpenStack cloud
name = trio2o
summary = the Trio2o provides an OpenStack API gateway to allow multiple OpenStack instances, spanning in one site or multiple sites or in hybrid cloud, to be managed as a single OpenStack cloud
description-file = README.rst
author = OpenStack
author = OpenStack Trio2o
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
home-page = wiki.openstack.org/wiki/Trio2o
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
@ -20,7 +20,7 @@ classifier =
[files]
packages =
tricircle
trio2o
[build_sphinx]
source-dir = doc/source
@ -31,34 +31,29 @@ all_files = 1
upload-dir = doc/build/html
[compile_catalog]
directory = tricircle/locale
domain = tricircle
directory = trio2o/locale
domain = trio2o
[update_catalog]
domain = tricircle
output_dir = tricircle/locale
input_file = tricircle/locale/tricircle.pot
domain = trio2o
output_dir = trio2o/locale
input_file = trio2o/locale/trio2o.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = tricircle/locale/tricircle.pot
output_file = trio2o/locale/trio2o.pot
[entry_points]
oslo.config.opts =
tricircle.api = tricircle.api.opts:list_opts
tricircle.common = tricircle.common.opts:list_opts
tricircle.db = tricircle.db.opts:list_opts
tricircle.network = tricircle.network.opts:list_opts
trio2o.api = trio2o.api.opts:list_opts
trio2o.common = trio2o.common.opts:list_opts
trio2o.db = trio2o.db.opts:list_opts
tricircle.nova_apigw = tricircle.nova_apigw.opts:list_opts
tricircle.cinder_apigw = tricircle.cinder_apigw.opts:list_opts
tricircle.xjob = tricircle.xjob.opts:list_opts
trio2o.nova_apigw = trio2o.nova_apigw.opts:list_opts
trio2o.cinder_apigw = trio2o.cinder_apigw.opts:list_opts
trio2o.xjob = trio2o.xjob.opts:list_opts
tempest.test_plugins =
tricircle_tests = tricircle.tempestplugin.plugin:TricircleTempestPlugin
tricircle.network.type_drivers =
local = tricircle.network.drivers.type_local:LocalTypeDriver
shared_vlan = tricircle.network.drivers.type_shared_vlan:SharedVLANTypeDriver
trio2o_tests = trio2o.tempestplugin.plugin:Trio2oTempestPlugin
View File
@ -0,0 +1,236 @@
=================================
Dynamic Pod Binding in Trio2o
=================================
Background
===========
Most public cloud infrastructure is built with Availability Zones (AZs).
Each AZ consists of one or more discrete data centers, each with high
bandwidth and low latency network connection, separate power and facilities.
These AZs offer cloud tenants the ability to operate production
applications and databases; applications deployed into multiple AZs are
more highly available, fault tolerant and scalable than in a single data
center.
In production clouds, each AZ is built of modularized OpenStack instances,
and each OpenStack instance is one pod. Moreover, one AZ can include multiple
pods. The pods are classified into different categories. For example, servers
in one pod are only for general purposes, while other pods may be built
for heavy-load CAD modeling with GPU. So pods in one AZ could be divided
into different groups, with different pod groups for different purposes, and
the VMs' cost and performance also differ between groups.
The concept "pod" is created for the Trio2o to facilitate managing
OpenStack instances among AZs, which therefore is transparent to cloud
tenants. The Trio2o maintains and manages a pod binding table which
records the mapping relationship between a cloud tenant and pods. When the
cloud tenant creates a VM or a volume, the Trio2o tries to assign a pod
based on the pod binding table.
Motivation
===========
In the resource allocation scenario, a tenant may create a VM in one pod and
a new volume in another pod. If the tenant attempts to attach the volume to
the VM, the operation will fail. In other words, the volume should be in the
same pod where the VM is, otherwise the volume and VM would not be able to
finish the attachment. Hence, the Trio2o needs to ensure the pod binding so
as to guarantee that VM and volume are created in one pod.
In the capacity expansion scenario, when resources in one pod are exhausted,
a new pod of the same type should be added into the AZ. Therefore, new
resources of this type should be provisioned in the newly added pod, which
requires a dynamic change of pod binding. The pod binding could be changed
dynamically by the Trio2o, or by the admin through the admin API for
maintenance purposes. For example, during a maintenance (upgrade, repair)
window, all new provision requests should be forwarded to the running pod,
not the one under maintenance.
Solution: dynamic pod binding
==============================
Capacity expansion inside one pod is quite a headache: you have to estimate,
calculate, monitor, simulate, test, and do online grey expansion for
controller nodes and network nodes whenever you add new machines to the pod.
The challenge grows as more and more resources are added to one pod, and
eventually you will reach the limits of a single OpenStack instance. If this
pod's resources are exhausted or reach the limit for new resource
provisioning, the Trio2o needs to bind the tenant to a new pod instead of
expanding the current pod unlimitedly. The Trio2o needs to select a proper
pod and keep the binding for a duration, during which VM and volume will be
created for one tenant in the same pod.
For example, suppose we have two groups of pods, and each group has 3 pods,
i.e.,
GroupA(Pod1, Pod2, Pod3) for general purpose VM,
GroupB(Pod4, Pod5, Pod6) for CAD modeling.
Tenant1 is bound to Pod1 and Pod4 during the first phase for several months.
In the first phase, we can just add a weight to each pod, for example, Pod1
with weight 1 and Pod2 with weight 2. This could be done by adding one new
field in the pod table, or no field at all, just linking the pods by the
order they were created in the Trio2o. In this case, we use the pod creation
time as the weight.
If the tenant wants to allocate VM/volume for general VM, Pod1 should be
selected. It can be implemented with flavor or volume type metadata. For
general VM/Volume, there is no special tag in flavor or volume type metadata.
If the tenant wants to allocate VM/volume for CAD modeling VM, Pod4 should be
selected. For CAD modeling VM/Volume, a special tag "resource: CAD Modeling"
in flavor or volume type metadata determines the binding.
When it is detected that there are no more resources in Pod1 and Pod4, the
Trio2o queries the pod table, based on the resource_affinity_tag, for
available pods which provision a specific type of resources. The field
resource_affinity is a key-value pair; pods are selected when there is a
matching key-value in the flavor extra-spec or volume extra-spec. A tenant
will be bound to one pod in one group of pods with the same
resource_affinity_tag. In this case, the Trio2o obtains Pod2 and Pod3 for
general purpose, as well as Pod5 and Pod6 for CAD purpose. The Trio2o then
needs to change the binding; for example, tenant1 needs to be bound to Pod2
and Pod5.
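
A minimal sketch of such a binding record with a validity duration follows;
the duration value, dict layout and helper name are illustrative assumptions,
not defined anywhere in this spec::

    # Illustrative pod binding record with a validity duration.
    from datetime import datetime, timedelta

    BINDING_DURATION = timedelta(days=90)  # example value, not a Trio2o default


    def bind_tenant(bindings, tenant_id, pod_name, now=None):
        """Bind a tenant to a pod; VM and volume requests within the
        duration are all forwarded to this pod."""
        now = now or datetime.utcnow()
        bindings[tenant_id] = {'pod': pod_name,
                               'start': now,
                               'end': now + BINDING_DURATION}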
Implementation
===============
Measurement
-------------
To get the information of resource utilization of pods, the Trio2o needs to
conduct some measurements on the pods. The statistics task should be done in
the bottom pod.
For resource usage, the current Cells implementation provides an interface to
retrieve the usage for cells [1]: OpenStack provides details of the capacity
of a cell, including disk and ram, via the API for showing cell capacities [1].
If OpenStack is not running in cells mode, we can ask Nova to provide
an interface to show the usage details per AZ. Moreover, an API for usage
query at host level is provided for admins [3], through which we can obtain
details of a host, including cpu, memory, disk, and so on.
Cinder also provides an interface to retrieve the backend pool usage,
including updated time, total capacity, free capacity and so on [2].
The Trio2o needs one task to collect the usage in the bottom on a daily
basis, to evaluate whether the threshold is reached or not. A threshold or
headroom could be configured for each pod so as not to reach 100% exhaustion
of resources.
On top there should be no heavy processing, so summing up the info from the
bottom can be done in the Trio2o. After collecting the details, the Trio2o
can judge whether a pod has reached its limit.
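
As a rough illustration of this daily collection task, the sketch below
aggregates per-pod usage and compares it against a configured headroom. The
client call get_pod_usage and the usage dict layout are hypothetical
placeholders, not an existing Trio2o or OpenStack API::

    DEFAULT_THRESHOLD = 0.8  # example: stop binding above 80% utilization


    def pod_reaches_limit(usage, threshold=DEFAULT_THRESHOLD):
        """Judge whether a pod has reached its configured headroom.

        usage: {'total_capacity_gb': ..., 'free_capacity_gb': ...}, in the
        spirit of the pool statistics Cinder exposes [2].
        """
        total = usage['total_capacity_gb']
        used = total - usage['free_capacity_gb']
        return total > 0 and float(used) / total >= threshold


    def collect_usages(pods, client):
        """Daily task: fetch usage for every pod and record which are full."""
        exhausted = []
        for pod in pods:
            usage = client.get_pod_usage(pod['pod_name'])  # hypothetical call
            if pod_reaches_limit(usage):
                exhausted.append(pod['pod_name'])
        return exhausted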
Trio2o
----------
The Trio2o needs a framework to support different binding policies (filters).
Each pod is one OpenStack instance, including controller nodes and compute
nodes. E.g.,
::
                           +-> controller(s) - pod1 <--> compute nodes <---+
                           |                                               |
    The trio2o           --+-> controller(s) - pod2 <--> compute nodes <---+ resource migration, if necessary
    (resource controller)  |   ....                                        |
                           +-> controller(s) - pod{N} <--> compute nodes <-+
The Trio2o selects a pod and decides to which pod's controllers the requests
should be forwarded. Then the controllers in the selected pod will do their
own scheduling.
The simplest binding filter is as follows: line up all available pods in a
list and always select the first one. When all the resources in the first pod
have been allocated, remove it from the list. This is quite like how a
production cloud is built: at first, only a few pods are in the list, and
then more and more pods are added when there are not enough resources in the
current cloud. For example,

List1 for general pool: Pod1 <- Pod2 <- Pod3
List2 for CAD modeling pool: Pod4 <- Pod5 <- Pod6

If Pod1's resources are exhausted, Pod1 is removed from List1. List1 is
changed to: Pod2 <- Pod3.
If Pod4's resources are exhausted, Pod4 is removed from List2. List2 is
changed to: Pod5 <- Pod6.
If the tenant wants to allocate resources for a general VM, the Trio2o
selects Pod2. If the tenant wants to allocate resources for a CAD modeling
VM, the Trio2o selects Pod5.
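
A minimal sketch of this list-based policy follows; the names, structures and
the is_exhausted callable are illustrative assumptions, not the Trio2o's
actual code::

    # Illustrative sketch of the list-based binding policy described above.

    pools = {
        'general': ['Pod1', 'Pod2', 'Pod3'],
        'cad': ['Pod4', 'Pod5', 'Pod6'],
    }


    def select_pod(pool_name, is_exhausted):
        """Always pick the first pod; drop pods whose resources ran out.

        is_exhausted: callable deciding whether a pod is full, e.g. backed
        by the daily usage collection task described earlier.
        """
        pods = pools[pool_name]
        while pods and is_exhausted(pods[0]):
            pods.pop(0)  # remove the exhausted pod from the head of the list
        return pods[0] if pods else None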
Filtering
-------------
For the strategy of selecting pods, we need a series of filters. Before
implementing dynamic pod binding, the binding criteria were hard coded to
select the first pod in the AZ. Hence, we need to design a series of filter
algorithms. Firstly, we plan to design an ALLPodsFilter which does no
filtering and passes all the available pods. Secondly, we plan to design an
AvailabilityZoneFilter which passes the pods matching the specified
availability zone. Thirdly, we plan to design a ResourceAffinityFilter which
passes the pods matching the specified resource type. Based on the
resource_affinity_tag, the Trio2o can be aware of which type of resource the
tenant wants to provision. In the future, we can add more filters, which
requires adding more information to the pod table.
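
To make the filter chain concrete, here is a minimal sketch of what such
filters could look like; the class names follow the spec's wording, but the
method signatures and dict fields are illustrative assumptions::

    # Illustrative filter chain for dynamic pod binding; not actual Trio2o code.


    class ALLPodsFilter(object):
        """Pass every available pod, i.e. no filtering."""

        def filter_pods(self, pods, request):
            return list(pods)


    class AvailabilityZoneFilter(object):
        """Pass pods located in the requested availability zone."""

        def filter_pods(self, pods, request):
            return [pod for pod in pods
                    if pod.get('az_name') == request.get('az_name')]


    class ResourceAffinityFilter(object):
        """Pass pods whose resource_affinity_tag matches the flavor or
        volume type extra-spec, e.g. {'resource': 'CAD Modeling'}."""

        def filter_pods(self, pods, request):
            wanted = request.get('resource_affinity_tag', {})
            return [pod for pod in pods
                    if all(pod.get('resource_affinity_tag', {}).get(k) == v
                           for k, v in wanted.items())]


    def apply_filters(pods, request, filters):
        for f in filters:
            pods = f.filter_pods(pods, request)
        return pods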
Weighting
-------------
After filtering all the pods, the Trio2o obtains the available pods for a
tenant. The Trio2o needs to select the most suitable pod for the tenant.
Hence, we need to define a weight function to calculate the corresponding
weight of each pod. Based on the weights, the Trio2o selects the pod which
has the maximum weight value. When calculating the weight of a pod, we need
to design a series of weighers. We first take the pod creation time into
consideration when designing the weight function. The second one is the idle
capacity, to select the pod which has the most idle capacity. Other metrics
will be added in the future, e.g., cost.
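
The two weighers named above could be sketched as follows; the field names
(created_at as an epoch timestamp, free_capacity_gb) and the way weights are
combined are illustrative assumptions::

    # Illustrative weighers for pod selection; names and fields are examples.


    def creation_time_weigher(pod):
        """Older pods win, mirroring the first-phase ordering by creation."""
        return -pod['created_at']  # earlier timestamp -> larger weight


    def idle_capacity_weigher(pod):
        """Pods with more idle capacity win."""
        return pod['free_capacity_gb']


    def pick_pod(pods, weighers=(creation_time_weigher,
                                 idle_capacity_weigher)):
        """Select the pod with the maximum combined weight."""
        return max(pods, key=lambda pod: sum(w(pod) for w in weighers))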
Data Model Impact
==================
Firstly, we need to add a column "resource_affinity_tag" to the pod table,
which is used to store the key-value pair to be matched against the flavor
extra-spec and volume extra-spec.
Secondly, in the pod binding table, we need to add fields for the start
binding time and end binding time, so the history of the binding relationship
can be stored.
Thirdly, we need a table to store the usage of each pod for Cinder/Nova.
We plan to use a JSON object to store the usage information. Hence, even if
the usage structure is changed, we don't need to update the table. And if
the usage value is null, that means the usage has not been initialized yet.
As mentioned above, the usage could be refreshed on a daily basis. If it's
not initialized yet, it means there are still lots of resources available,
and the pod can be scheduled as if it had not reached the usage threshold.
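
A rough SQLAlchemy sketch of the usage table described above (the table and
column names are illustrative, not an actual Trio2o migration)::

    # Illustrative schema addition for per-pod usage; not real Trio2o code.
    from datetime import datetime

    import sqlalchemy as sa

    metadata = sa.MetaData()

    pod_usage = sa.Table(
        'pod_usage', metadata,
        sa.Column('pod_id', sa.String(36), primary_key=True),
        # JSON blob so the usage layout can evolve without schema changes;
        # null means the usage has not been initialized yet.
        sa.Column('usage', sa.Text, nullable=True),
        sa.Column('updated_at', sa.DateTime, default=datetime.utcnow))

    # The other changes the spec describes are plain new columns:
    # pod.resource_affinity_tag -- key-value pair matched against extra-specs
    # pod_binding.start_time / pod_binding.end_time -- binding history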
Dependencies
=============
None
Testing
========
None
Documentation Impact
=====================
None
Reference
==========
[1] http://developer.openstack.org/api-ref-compute-v2.1.html#showCellCapacities
[2] http://developer.openstack.org/api-ref-blockstorage-v2.html#os-vol-pool-v2
[3] http://developer.openstack.org/api-ref-compute-v2.1.html#showinfo
View File
@ -3,21 +3,21 @@
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.2
cliff!=1.16.0,!=1.17.0,>=1.15.0 # Apache-2.0
coverage>=3.6 # Apache-2.0
cliff>=2.2.0 # Apache-2.0
coverage>=4.0 # Apache-2.0
fixtures>=3.0.0 # Apache-2.0/BSD
mock>=2.0 # BSD
python-subunit>=0.0.18 # Apache-2.0/BSD
requests-mock>=1.0 # Apache-2.0
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
requests-mock>=1.1 # Apache-2.0
sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
oslosphinx>=4.7.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
testresources>=0.2.4 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
WebTest>=2.0 # MIT
oslotest>=1.10.0 # Apache-2.0
os-testr>=0.7.0 # Apache-2.0
os-testr>=0.8.0 # Apache-2.0
tempest-lib>=0.14.0 # Apache-2.0
ddt>=1.0.1 # MIT
pylint==1.4.5 # GPLv2
View File
@ -10,10 +10,9 @@ install_command = pip install -U --force-reinstall {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
PYTHONWARNINGS=default::DeprecationWarning
TRICIRCLE_TEST_DIRECTORY=tricircle/tests
TRIO2O_TEST_DIRECTORY=trio2o/tests
deps =
-r{toxinidir}/test-requirements.txt
-egit+https://git.openstack.org/openstack/neutron@master#egg=neutron
commands = python setup.py testr --slowest --testr-args='{posargs}'
whitelist_externals = rm
@ -31,7 +30,6 @@ commands = oslo-config-generator --config-file=etc/api-cfg-gen.conf
oslo-config-generator --config-file=etc/nova_apigw-cfg-gen.conf
oslo-config-generator --config-file=etc/cinder_apigw-cfg-gen.conf
oslo-config-generator --config-file=etc/xjob-cfg-gen.conf
oslo-config-generator --config-file=etc/tricircle_plugin-cfg-gen.conf
[testenv:docs]
commands = python setup.py build_sphinx
View File
@ -1,44 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.plugins.ml2 import driver_api
from tricircle.common import constants
class LocalTypeDriver(driver_api.TypeDriver):
def get_type(self):
return constants.NT_LOCAL
def initialize(self):
pass
def is_partial_segment(self, segment):
return False
def validate_provider_segment(self, segment):
pass
def reserve_provider_segment(self, session, segment):
return segment
def allocate_tenant_segment(self, session):
return {driver_api.NETWORK_TYPE: constants.NT_LOCAL}
def release_segment(self, session, segment):
pass
def get_mtu(self, physical):
pass
View File
@ -1,62 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from oslo_config import cfg
from oslo_log import log
from neutron.plugins.common import utils as plugin_utils
from neutron.plugins.ml2 import driver_api
from neutron.plugins.ml2.drivers import type_vlan
from tricircle.common import constants
from tricircle.common.i18n import _LE
from tricircle.common.i18n import _LI
LOG = log.getLogger(__name__)
class SharedVLANTypeDriver(type_vlan.VlanTypeDriver):
def __init__(self):
super(SharedVLANTypeDriver, self).__init__()
def _parse_network_vlan_ranges(self):
try:
self.network_vlan_ranges = plugin_utils.parse_network_vlan_ranges(
cfg.CONF.tricircle.network_vlan_ranges)
except Exception:
LOG.exception(_LE('Failed to parse network_vlan_ranges. '
'Service terminated!'))
sys.exit(1)
LOG.info(_LI('Network VLAN ranges: %s'), self.network_vlan_ranges)
def get_type(self):
return constants.NT_SHARED_VLAN
def reserve_provider_segment(self, session, segment):
res = super(SharedVLANTypeDriver,
self).reserve_provider_segment(session, segment)
res[driver_api.NETWORK_TYPE] = constants.NT_SHARED_VLAN
return res
def allocate_tenant_segment(self, session):
res = super(SharedVLANTypeDriver,
self).allocate_tenant_segment(session)
res[driver_api.NETWORK_TYPE] = constants.NT_SHARED_VLAN
return res
def get_mtu(self, physical):
pass
View File
@ -1,30 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron_lib import exceptions
from tricircle.common.i18n import _
class RemoteGroupNotSupported(exceptions.InvalidInput):
message = _('Remote group not supported by Tricircle plugin')
class DefaultGroupUpdateNotSupported(exceptions.InvalidInput):
message = _('Default group update not supported by Tricircle plugin')
class BottomPodOperationFailure(exceptions.NeutronException):
message = _('Operation for %(resource)s on bottom pod %(pod_name)s fails')
View File
@ -1,555 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
from neutron_lib import constants
import neutronclient.common.exceptions as q_cli_exceptions
from tricircle.common import client
import tricircle.common.constants as t_constants
import tricircle.common.context as t_context
import tricircle.common.lock_handle as t_lock
from tricircle.common import utils
import tricircle.db.api as db_api
import tricircle.network.exceptions as t_network_exc
# manually define these constants to avoid depending on neutron repos
# neutron.extensions.availability_zone.AZ_HINTS
AZ_HINTS = 'availability_zone_hints'
EXTERNAL = 'router:external' # neutron.extensions.external_net.EXTERNAL
TYPE_VLAN = 'vlan' # neutron.plugins.common.constants.TYPE_VLAN
class NetworkHelper(object):
def __init__(self, call_obj=None):
self.clients = {}
self.call_obj = call_obj
@staticmethod
def _transfer_network_type(network_type):
network_type_map = {t_constants.NT_SHARED_VLAN: TYPE_VLAN}
return network_type_map.get(network_type, network_type)
def _get_client(self, pod_name=None):
if not pod_name:
if t_constants.TOP not in self.clients:
self.clients[t_constants.TOP] = client.Client()
return self.clients[t_constants.TOP]
if pod_name not in self.clients:
self.clients[pod_name] = client.Client(pod_name)
return self.clients[pod_name]
# operate top resource
def _prepare_top_element_by_call(self, t_ctx, q_ctx,
project_id, pod, ele, _type, body):
def list_resources(t_ctx_, q_ctx_, pod_, ele_, _type_):
return getattr(super(self.call_obj.__class__, self.call_obj),
'get_%ss' % _type_)(q_ctx_,
filters={'name': [ele_['id']]})
def create_resources(t_ctx_, q_ctx_, pod_, body_, _type_):
if _type_ == t_constants.RT_NETWORK:
# for network, we call TricirclePlugin's own create_network to
# handle network segment
return self.call_obj.create_network(q_ctx_, body_)
else:
return getattr(super(self.call_obj.__class__, self.call_obj),
'create_%s' % _type_)(q_ctx_, body_)
return t_lock.get_or_create_element(
t_ctx, q_ctx,
project_id, pod, ele, _type, body,
list_resources, create_resources)
def _prepare_top_element_by_client(self, t_ctx, q_ctx,
project_id, pod, ele, _type, body):
def list_resources(t_ctx_, q_ctx_, pod_, ele_, _type_):
client = self._get_client()
return client.list_resources(_type_, t_ctx_,
[{'key': 'name', 'comparator': 'eq',
'value': ele_['id']}])
def create_resources(t_ctx_, q_ctx_, pod_, body_, _type_):
client = self._get_client()
return client.create_resources(_type_, t_ctx_, body_)
assert _type == 'port'
# currently only top port is possible to be created via client, other
# top resources should be created directly by plugin
return t_lock.get_or_create_element(
t_ctx, q_ctx,
project_id, pod, ele, _type, body,
list_resources, create_resources)
def prepare_top_element(self, t_ctx, q_ctx,
project_id, pod, ele, _type, body):
"""Get or create shared top networking resource
:param t_ctx: tricircle context
:param q_ctx: neutron context
:param project_id: project id
:param pod: dict of top pod
:param ele: dict with "id" as key and distinctive identifier as value
:param _type: type of the resource
:param body: request body to create resource
:return: boolean value indicating whether the resource is newly
created or already exists and id of the resource
"""
if self.call_obj:
return self._prepare_top_element_by_call(
t_ctx, q_ctx, project_id, pod, ele, _type, body)
else:
return self._prepare_top_element_by_client(
t_ctx, q_ctx, project_id, pod, ele, _type, body)
def get_bridge_interface(self, t_ctx, q_ctx, project_id, pod,
t_net_id, b_router_id, b_port_id, is_ew):
"""Get or create top bridge interface
:param t_ctx: tricircle context
:param q_ctx: neutron context
:param project_id: project id
:param pod: dict of top pod
:param t_net_id: top bridge network id
:param b_router_id: bottom router id
:param b_port_id: needed when creating bridge interface for south-
north network, id of the internal port bound to floating ip
:param is_ew: create the bridge interface for east-west network or
south-north network
:return: bridge interface id
"""
if is_ew:
port_name = t_constants.ew_bridge_port_name % (project_id,
b_router_id)
else:
port_name = t_constants.ns_bridge_port_name % (project_id,
b_router_id,
b_port_id)
port_ele = {'id': port_name}
port_body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': port_name,
'network_id': t_net_id,
'device_id': '',
'device_owner': ''
}
}
if self.call_obj:
port_body['port'].update(
{'mac_address': constants.ATTR_NOT_SPECIFIED,
'fixed_ips': constants.ATTR_NOT_SPECIFIED})
_, port_id = self.prepare_top_element(
t_ctx, q_ctx, project_id, pod, port_ele, 'port', port_body)
return port_id
# operate bottom resource
def prepare_bottom_element(self, t_ctx,
project_id, pod, ele, _type, body):
"""Get or create bottom networking resource based on top resource
:param t_ctx: tricircle context
:param project_id: project id
:param pod: dict of bottom pod
:param ele: dict of top resource
:param _type: type of the resource
:param body: request body to create resource
:return: boolean value indicating whether the resource is newly
created or already exists and id of the resource
"""
def list_resources(t_ctx_, q_ctx, pod_, ele_, _type_):
client = self._get_client(pod_['pod_name'])
if _type_ == t_constants.RT_NETWORK:
value = utils.get_bottom_network_name(ele_)
else:
value = ele_['id']
return client.list_resources(_type_, t_ctx_,
[{'key': 'name', 'comparator': 'eq',
'value': value}])
def create_resources(t_ctx_, q_ctx, pod_, body_, _type_):
client = self._get_client(pod_['pod_name'])
return client.create_resources(_type_, t_ctx_, body_)
return t_lock.get_or_create_element(
t_ctx, None, # we don't need neutron context, so pass None
project_id, pod, ele, _type, body,
list_resources, create_resources)
@staticmethod
def get_create_network_body(project_id, network):
"""Get request body to create bottom network
:param project_id: project id
:param network: top network dict
:return: request body to create bottom network
"""
body = {
'network': {
'tenant_id': project_id,
'name': utils.get_bottom_network_name(network),
'admin_state_up': True
}
}
network_type = network.get('provider:network_type')
if network_type == t_constants.NT_SHARED_VLAN:
body['network']['provider:network_type'] = 'vlan'
body['network']['provider:physical_network'] = network[
'provider:physical_network']
body['network']['provider:segmentation_id'] = network[
'provider:segmentation_id']
return body
@staticmethod
def get_create_subnet_body(project_id, t_subnet, b_net_id, gateway_ip):
"""Get request body to create bottom subnet
:param project_id: project id
:param t_subnet: top subnet dict
:param b_net_id: bottom network id
:param gateway_ip: bottom gateway ip
:return: request body to create bottom subnet
"""
pools = t_subnet['allocation_pools']
new_pools = []
g_ip = netaddr.IPAddress(gateway_ip)
ip_found = False
for pool in pools:
if ip_found:
new_pools.append({'start': pool['start'],
'end': pool['end']})
continue
ip_range = netaddr.IPRange(pool['start'], pool['end'])
ip_num = len(ip_range)
for i, ip in enumerate(ip_range):
if g_ip == ip:
ip_found = True
if i > 0:
new_pools.append({'start': ip_range[0].format(),
'end': ip_range[i - 1].format()})
if i < ip_num - 1:
new_pools.append(
{'start': ip_range[i + 1].format(),
'end': ip_range[ip_num - 1].format()})
body = {
'subnet': {
'network_id': b_net_id,
'name': t_subnet['id'],
'ip_version': t_subnet['ip_version'],
'cidr': t_subnet['cidr'],
'gateway_ip': gateway_ip,
'allocation_pools': new_pools,
'enable_dhcp': False,
'tenant_id': project_id
}
}
return body
@staticmethod
def get_create_port_body(project_id, t_port, subnet_map, b_net_id,
b_security_group_ids=None):
"""Get request body to create bottom port
:param project_id: project id
:param t_port: top port dict
:param subnet_map: dict with top subnet id as key and bottom subnet
id as value
:param b_net_id: bottom network id
:param security_group_ids: list of bottom security group id
:return: request body to create bottom port
"""
b_fixed_ips = []
for ip in t_port['fixed_ips']:
b_ip = {'subnet_id': subnet_map[ip['subnet_id']],
'ip_address': ip['ip_address']}
b_fixed_ips.append(b_ip)
body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': t_port['id'],
'network_id': b_net_id,
'mac_address': t_port['mac_address'],
'fixed_ips': b_fixed_ips
}
}
if b_security_group_ids:
body['port']['security_groups'] = b_security_group_ids
return body
def get_create_interface_body(self, project_id, t_net_id, b_pod_id,
t_subnet_id):
"""Get request body to create top interface
:param project_id: project id
:param t_net_id: top network id
:param b_pod_id: bottom pod id
:param t_subnet_id: top subnet id
:return:
"""
t_interface_name = t_constants.interface_port_name % (b_pod_id,
t_subnet_id)
t_interface_body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': t_interface_name,
'network_id': t_net_id,
'device_id': '',
'device_owner': 'network:router_interface',
}
}
if self.call_obj:
t_interface_body['port'].update(
{'mac_address': constants.ATTR_NOT_SPECIFIED,
'fixed_ips': constants.ATTR_NOT_SPECIFIED})
return t_interface_body
def prepare_bottom_network_subnets(self, t_ctx, q_ctx, project_id, pod,
t_net, t_subnets):
"""Get or create bottom network, subnet and dhcp port
:param t_ctx: tricircle context
:param q_ctx: neutron context
:param project_id: project id
:param pod: dict of bottom pod
:param t_net: dict of top network
:param t_subnets: list of top subnet dict
:return: bottom network id and a dict with top subnet id as key,
bottom subnet id as value
"""
# network
net_body = self.get_create_network_body(project_id, t_net)
if net_body['network'].get('provider:network_type'):
# if network type specified, we need to switch to admin account
admin_context = t_context.get_admin_context()
_, b_net_id = self.prepare_bottom_element(
admin_context, project_id, pod, t_net, t_constants.RT_NETWORK,
net_body)
else:
_, b_net_id = self.prepare_bottom_element(
t_ctx, project_id, pod, t_net, t_constants.RT_NETWORK,
net_body)
# subnet
subnet_map = {}
subnet_dhcp_map = {}
for subnet in t_subnets:
# gateway
t_interface_name = t_constants.interface_port_name % (
pod['pod_id'], subnet['id'])
t_interface_body = self.get_create_interface_body(
project_id, t_net['id'], pod['pod_id'], subnet['id'])
_, t_interface_id = self.prepare_top_element(
t_ctx, q_ctx, project_id, pod, {'id': t_interface_name},
t_constants.RT_PORT, t_interface_body)
t_interface = self._get_top_element(
t_ctx, q_ctx, t_constants.RT_PORT, t_interface_id)
gateway_ip = t_interface['fixed_ips'][0]['ip_address']
subnet_body = self.get_create_subnet_body(
project_id, subnet, b_net_id, gateway_ip)
_, b_subnet_id = self.prepare_bottom_element(
t_ctx, project_id, pod, subnet, t_constants.RT_SUBNET,
subnet_body)
subnet_map[subnet['id']] = b_subnet_id
subnet_dhcp_map[subnet['id']] = subnet['enable_dhcp']
# dhcp port
for t_subnet_id, b_subnet_id in subnet_map.iteritems():
if not subnet_dhcp_map[t_subnet_id]:
continue
self.prepare_dhcp_port(t_ctx, project_id, pod, t_net['id'],
t_subnet_id, b_net_id, b_subnet_id)
b_client = self._get_client(pod['pod_name'])
b_client.update_subnets(t_ctx, b_subnet_id,
{'subnet': {'enable_dhcp': True}})
return b_net_id, subnet_map
def get_bottom_bridge_elements(self, t_ctx, project_id,
pod, t_net, is_external, t_subnet, t_port):
"""Get or create bottom bridge port
:param t_ctx: tricircle context
:param project_id: project id
:param pod: dict of bottom pod
:param t_net: dict of top bridge network
:param is_external: whether the bottom network should be created as
an external network, this is True for south-north case
:param t_subnet: dict of top bridge subnet
:param t_port: dict of top bridge port
:return: tuple (boolean value indicating whether the resource is newly
created or already exists, bottom port id, bottom subnet id,
bottom network id)
"""
net_body = {'network': {
'tenant_id': project_id,
'name': t_net['id'],
'provider:network_type': self._transfer_network_type(
t_net['provider:network_type']),
'provider:physical_network': t_net['provider:physical_network'],
'provider:segmentation_id': t_net['provider:segmentation_id'],
'admin_state_up': True}}
if is_external:
net_body['network'][EXTERNAL] = True
_, b_net_id = self.prepare_bottom_element(
t_ctx, project_id, pod, t_net, 'network', net_body)
subnet_body = {'subnet': {'network_id': b_net_id,
'name': t_subnet['id'],
'ip_version': 4,
'cidr': t_subnet['cidr'],
'enable_dhcp': False,
'tenant_id': project_id}}
# In the pod hosting external network, where ns bridge network is used
# as an internal network, need to allocate ip address from .3 because
# .2 is used by the router gateway port in the pod hosting servers,
# where ns bridge network is used as an external network.
# if t_subnet['name'].startswith('ns_bridge_') and not is_external:
# prefix = t_subnet['cidr'][:t_subnet['cidr'].rindex('.')]
# subnet_body['subnet']['allocation_pools'] = [
# {'start': prefix + '.3', 'end': prefix + '.254'}]
_, b_subnet_id = self.prepare_bottom_element(
t_ctx, project_id, pod, t_subnet, 'subnet', subnet_body)
if t_port:
port_body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': t_port['id'],
'network_id': b_net_id,
'fixed_ips': [
{'subnet_id': b_subnet_id,
'ip_address': t_port['fixed_ips'][0]['ip_address']}]
}
}
is_new, b_port_id = self.prepare_bottom_element(
t_ctx, project_id, pod, t_port, 'port', port_body)
return is_new, b_port_id, b_subnet_id, b_net_id
else:
return None, None, b_subnet_id, b_net_id
@staticmethod
def _get_create_dhcp_port_body(project_id, port, b_subnet_id,
b_net_id):
body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': port['id'],
'network_id': b_net_id,
'fixed_ips': [
{'subnet_id': b_subnet_id,
'ip_address': port['fixed_ips'][0]['ip_address']}
],
'mac_address': port['mac_address'],
'binding:profile': {},
'device_id': 'reserved_dhcp_port',
'device_owner': 'network:dhcp',
}
}
return body
def prepare_dhcp_port(self, ctx, project_id, b_pod, t_net_id, t_subnet_id,
b_net_id, b_subnet_id):
"""Create top dhcp port and map it to bottom dhcp port
:param ctx: tricircle context
:param project_id: project id
:param b_pod: dict of bottom pod
:param t_net_id: top network id
:param t_subnet_id: top subnet id
:param b_net_id: bottom network id
:param b_subnet_id: bottom subnet id
:return: None
"""
t_client = self._get_client()
t_dhcp_name = t_constants.dhcp_port_name % t_subnet_id
t_dhcp_port_body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'network_id': t_net_id,
'name': t_dhcp_name,
'binding:profile': {},
'device_id': 'reserved_dhcp_port',
'device_owner': 'network:dhcp',
}
}
if self.call_obj:
t_dhcp_port_body['port'].update(
{'mac_address': constants.ATTR_NOT_SPECIFIED,
'fixed_ips': constants.ATTR_NOT_SPECIFIED})
# NOTE(zhiyuan) for one subnet in different pods, we just create
# one dhcp port. though dhcp port in different pods will have
# the same IP, each dnsmasq daemon only takes care of VM IPs in
# its own pod, VM will not receive incorrect dhcp response
_, t_dhcp_port_id = self.prepare_top_element(
ctx, None, project_id, db_api.get_top_pod(ctx),
{'id': t_dhcp_name}, t_constants.RT_PORT, t_dhcp_port_body)
t_dhcp_port = t_client.get_ports(ctx, t_dhcp_port_id)
dhcp_port_body = self._get_create_dhcp_port_body(
project_id, t_dhcp_port, b_subnet_id, b_net_id)
self.prepare_bottom_element(ctx, project_id, b_pod, t_dhcp_port,
t_constants.RT_PORT, dhcp_port_body)
@staticmethod
def _safe_create_bottom_floatingip(t_ctx, pod, client, fip_net_id,
fip_address, port_id):
try:
client.create_floatingips(
t_ctx, {'floatingip': {'floating_network_id': fip_net_id,
'floating_ip_address': fip_address,
'port_id': port_id}})
except q_cli_exceptions.IpAddressInUseClient:
fips = client.list_floatingips(t_ctx,
[{'key': 'floating_ip_address',
'comparator': 'eq',
'value': fip_address}])
if not fips:
                # this is a rare case: we got an IpAddressInUseClient
                # exception a second ago, but now the floating ip is missing
raise t_network_exc.BottomPodOperationFailure(
resource='floating ip', pod_name=pod['pod_name'])
associated_port_id = fips[0].get('port_id')
if associated_port_id == port_id:
# the internal port associated with the existing fip is what
# we expect, just ignore this exception
pass
elif not associated_port_id:
# the existing fip is not associated with any internal port,
# update the fip to add association
client.update_floatingips(t_ctx, fips[0]['id'],
{'floatingip': {'port_id': port_id}})
else:
raise
def _get_top_element(self, t_ctx, q_ctx, _type, _id):
if self.call_obj:
return getattr(self.call_obj, 'get_%s' % _type)(q_ctx, _id)
else:
return getattr(self._get_client(), 'get_%ss' % _type)(t_ctx, _id)
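The floating-IP helper above follows a create-then-reconcile pattern: attempt the create, and on an address-in-use conflict look the address up, accept a matching association, adopt an unassociated floating IP, or re-raise. Below is a minimal, self-contained sketch of that pattern; the stub client and all names are illustrative stand-ins, not the removed module's API.

class AddressInUse(Exception):
    """Stand-in for neutronclient's IpAddressInUseClient."""


class StubClient(object):
    """In-memory stand-in for the real client; maps fip address -> port."""

    def __init__(self):
        self.fips = {}

    def create_floatingip(self, address, port_id):
        if address in self.fips:
            raise AddressInUse(address)
        self.fips[address] = port_id

    def update_floatingip(self, address, port_id):
        self.fips[address] = port_id


def safe_create_floatingip(client, address, port_id):
    try:
        client.create_floatingip(address, port_id)
    except AddressInUse:
        associated = client.fips.get(address)
        if associated == port_id:
            # the existing fip already has the association we wanted
            return
        if associated is None:
            # the existing fip is not associated with any port, adopt it
            client.update_floatingip(address, port_id)
            return
        # associated with a different port: a real conflict
        raise


client = StubClient()
safe_create_floatingip(client, '203.0.113.5', 'port-a')
safe_create_floatingip(client, '203.0.113.5', 'port-a')  # idempotent retry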

View File

@ -1,108 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log
from neutron.api.v2 import attributes
from neutron.extensions import external_net
from neutron.plugins.ml2 import managers
from tricircle.common.i18n import _LE
from tricircle.common.i18n import _LI
LOG = log.getLogger(__name__)
class TricircleTypeManager(managers.TypeManager):
def __init__(self):
self.drivers = {}
        # NOTE(zhiyuan) here we call __init__ of the superclass's superclass,
        # which is NamedExtensionManager's __init__, to bypass the
        # initialization process of the ml2 type manager
super(managers.TypeManager, self).__init__(
'tricircle.network.type_drivers',
cfg.CONF.tricircle.type_drivers,
invoke_on_load=True)
LOG.info(_LI('Loaded type driver names: %s'), self.names())
self._register_types()
self._check_tenant_network_types(
cfg.CONF.tricircle.tenant_network_types)
self._check_bridge_network_type(
cfg.CONF.tricircle.bridge_network_type)
def _check_bridge_network_type(self, bridge_network_type):
if not bridge_network_type:
return
if bridge_network_type == 'local':
LOG.error(_LE("Local is not a valid bridge network type. "
"Service terminated!"), bridge_network_type)
raise SystemExit(1)
type_set = set(self.tenant_network_types)
if bridge_network_type not in type_set:
LOG.error(_LE("Bridge network type %s is not registered. "
"Service terminated!"), bridge_network_type)
raise SystemExit(1)
def _register_types(self):
for ext in self:
network_type = ext.obj.get_type()
if network_type not in self.drivers:
self.drivers[network_type] = ext
@staticmethod
def _is_external_network(network):
external = network.get(external_net.EXTERNAL)
external_set = attributes.is_attr_set(external)
        return bool(external_set and external)
def create_network_segments(self, context, network, tenant_id):
# NOTE(zhiyuan) before we figure out how to deal with external network
# segment allocation, skip segment creation for external network
if self._is_external_network(network):
return
segments = self._process_provider_create(network)
session = context.session
with session.begin(subtransactions=True):
network_id = network['id']
if segments:
for segment_index, segment in enumerate(segments):
segment = self.reserve_provider_segment(
session, segment)
self._add_network_segment(context, network_id, segment,
segment_index)
else:
segment = self._allocate_tenant_net_segment(session)
self._add_network_segment(context, network_id, segment)
def extend_networks_dict_provider(self, context, networks):
internal_networks = []
for network in networks:
# NOTE(zhiyuan) before we figure out how to deal with external
# network segment allocation, skip external network since it does
# not have segment information
if not self._is_external_network(network):
internal_networks.append(network)
if internal_networks:
super(TricircleTypeManager,
self).extend_networks_dict_provider(context,
internal_networks)
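The external-network check above depends on neutron's attribute-set convention: an attribute counts as set only when it is present and not the not-specified sentinel. A self-contained sketch of that convention, with a plain object standing in for neutron's real sentinel:

# Stand-in for neutron's ATTR_NOT_SPECIFIED sentinel.
ATTR_NOT_SPECIFIED = object()


def is_attr_set(value):
    # An attribute is "set" only when present and not the sentinel.
    return value is not None and value is not ATTR_NOT_SPECIFIED


def is_external_network(network):
    external = network.get('router:external')
    return bool(is_attr_set(external) and external)


assert is_external_network({'router:external': True})
assert not is_external_network({'router:external': ATTR_NOT_SPECIFIED})
assert not is_external_network({'router:external': False})
assert not is_external_network({})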

File diff suppressed because it is too large

View File

@ -1,108 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.db import securitygroups_db
import neutronclient.common.exceptions as q_exceptions
from tricircle.common import constants
from tricircle.common import context
import tricircle.db.api as db_api
import tricircle.network.exceptions as n_exceptions
class TricircleSecurityGroupMixin(securitygroups_db.SecurityGroupDbMixin):
@staticmethod
def _safe_create_security_group_rule(t_context, client, body):
try:
client.create_security_group_rules(t_context, body)
except q_exceptions.Conflict:
return
@staticmethod
def _safe_delete_security_group_rule(t_context, client, _id):
try:
client.delete_security_group_rules(t_context, _id)
except q_exceptions.NotFound:
return
@staticmethod
def _compare_rule(rule1, rule2):
for key in ('direction', 'remote_ip_prefix', 'protocol', 'ethertype',
'port_range_max', 'port_range_min'):
if rule1[key] != rule2[key]:
return False
return True
def create_security_group_rule(self, q_context, security_group_rule):
rule = security_group_rule['security_group_rule']
if rule['remote_group_id']:
raise n_exceptions.RemoteGroupNotSupported()
sg_id = rule['security_group_id']
sg = self.get_security_group(q_context, sg_id)
if sg['name'] == 'default':
raise n_exceptions.DefaultGroupUpdateNotSupported()
new_rule = super(TricircleSecurityGroupMixin,
self).create_security_group_rule(q_context,
security_group_rule)
t_context = context.get_context_from_neutron_context(q_context)
mappings = db_api.get_bottom_mappings_by_top_id(
t_context, sg_id, constants.RT_SG)
try:
for pod, b_sg_id in mappings:
client = self._get_client(pod['pod_name'])
rule['security_group_id'] = b_sg_id
self._safe_create_security_group_rule(
t_context, client, {'security_group_rule': rule})
except Exception:
super(TricircleSecurityGroupMixin,
self).delete_security_group_rule(q_context, new_rule['id'])
raise n_exceptions.BottomPodOperationFailure(
resource='security group rule', pod_name=pod['pod_name'])
return new_rule
def delete_security_group_rule(self, q_context, _id):
rule = self.get_security_group_rule(q_context, _id)
if rule['remote_group_id']:
raise n_exceptions.RemoteGroupNotSupported()
sg_id = rule['security_group_id']
sg = self.get_security_group(q_context, sg_id)
if sg['name'] == 'default':
raise n_exceptions.DefaultGroupUpdateNotSupported()
t_context = context.get_context_from_neutron_context(q_context)
mappings = db_api.get_bottom_mappings_by_top_id(
t_context, sg_id, constants.RT_SG)
try:
for pod, b_sg_id in mappings:
client = self._get_client(pod['pod_name'])
rule['security_group_id'] = b_sg_id
b_sg = client.get_security_groups(t_context, b_sg_id)
for b_rule in b_sg['security_group_rules']:
if not self._compare_rule(b_rule, rule):
continue
self._safe_delete_security_group_rule(t_context, client,
b_rule['id'])
break
except Exception:
raise n_exceptions.BottomPodOperationFailure(
resource='security group rule', pod_name=pod['pod_name'])
super(TricircleSecurityGroupMixin,
self).delete_security_group_rule(q_context, _id)
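Since top and bottom security group rules carry different ids, the mixin above finds the bottom counterpart of a rule by comparing a fixed set of content fields. A minimal, self-contained sketch of that matching step, with plain dicts standing in for the real rule objects:

COMPARE_KEYS = ('direction', 'remote_ip_prefix', 'protocol', 'ethertype',
                'port_range_max', 'port_range_min')


def rules_match(rule1, rule2):
    # ids differ between top and bottom pods, so compare contents only
    return all(rule1[key] == rule2[key] for key in COMPARE_KEYS)


top_rule = {'direction': 'ingress', 'remote_ip_prefix': '0.0.0.0/0',
            'protocol': 'tcp', 'ethertype': 'IPv4',
            'port_range_max': 22, 'port_range_min': 22}
bottom_rules = [
    dict(top_rule, id='b-rule-1'),
    dict(top_rule, id='b-rule-2', port_range_max=80, port_range_min=80),
]
matches = [r['id'] for r in bottom_rules if rules_match(r, top_rule)]
print(matches)  # ['b-rule-1']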

View File

@ -1,679 +0,0 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
import pecan
from pecan import expose
from pecan import rest
import six
import oslo_log.log as logging
import neutronclient.common.exceptions as q_exceptions
from tricircle.common import az_ag
import tricircle.common.client as t_client
from tricircle.common import constants
import tricircle.common.context as t_context
import tricircle.common.exceptions as t_exceptions
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
import tricircle.common.lock_handle as t_lock
from tricircle.common.quota import QUOTAS
from tricircle.common import utils
from tricircle.common import xrpcapi
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
from tricircle.network import helper
LOG = logging.getLogger(__name__)
MAX_METADATA_KEY_LENGTH = 255
MAX_METADATA_VALUE_LENGTH = 255
class ServerController(rest.RestController):
def __init__(self, project_id):
self.project_id = project_id
self.clients = {constants.TOP: t_client.Client()}
self.helper = helper.NetworkHelper()
self.xjob_handler = xrpcapi.XJobAPI()
def _get_client(self, pod_name=constants.TOP):
if pod_name not in self.clients:
self.clients[pod_name] = t_client.Client(pod_name)
return self.clients[pod_name]
def _get_all(self, context, params):
        filters = [{'key': key,
                    'comparator': 'eq',
                    'value': value} for key, value in six.iteritems(params)]
ret = []
pods = db_api.list_pods(context)
for pod in pods:
if not pod['az_name']:
continue
client = self._get_client(pod['pod_name'])
servers = client.list_servers(context, filters=filters)
self._remove_fip_info(servers)
ret.extend(servers)
return ret
@staticmethod
def _construct_brief_server_entry(server):
return {'id': server['id'],
'name': server.get('name'),
'links': server.get('links')}
@staticmethod
def _transform_network_name(server):
if 'addresses' not in server:
return
        keys = list(server['addresses'])
for key in keys:
value = server['addresses'].pop(key)
network_name = key.split('#')[1]
server['addresses'][network_name] = value
return server
@expose(generic=True, template='json')
def get_one(self, _id, **kwargs):
context = t_context.extract_context_from_environ()
if _id == 'detail':
return {'servers': [self._transform_network_name(
server) for server in self._get_all(context, kwargs)]}
mappings = db_api.get_bottom_mappings_by_top_id(
context, _id, constants.RT_SERVER)
if not mappings:
return utils.format_nova_error(
404, _('Instance %s could not be found.') % _id)
pod, bottom_id = mappings[0]
client = self._get_client(pod['pod_name'])
server = client.get_servers(context, bottom_id)
if not server:
return utils.format_nova_error(
404, _('Instance %s could not be found.') % _id)
else:
self._transform_network_name(server)
return {'server': server}
@expose(generic=True, template='json')
def get_all(self, **kwargs):
context = t_context.extract_context_from_environ()
return {'servers': [self._construct_brief_server_entry(
server) for server in self._get_all(context, kwargs)]}
@expose(generic=True, template='json')
def post(self, **kw):
context = t_context.extract_context_from_environ()
if 'server' not in kw:
return utils.format_nova_error(
400, _('server is not set'))
az = kw['server'].get('availability_zone', '')
pod, b_az = az_ag.get_pod_by_az_tenant(
context, az, self.project_id)
if not pod:
return utils.format_nova_error(
500, _('Pod not configured or scheduling failure'))
t_server_dict = kw['server']
self._process_metadata_quota(context, t_server_dict)
self._process_injected_file_quota(context, t_server_dict)
server_body = self._get_create_server_body(kw['server'], b_az)
top_client = self._get_client()
sg_filters = [{'key': 'tenant_id', 'comparator': 'eq',
'value': self.project_id}]
top_sgs = top_client.list_security_groups(context, sg_filters)
top_sg_map = dict((sg['name'], sg) for sg in top_sgs)
if 'security_groups' not in kw['server']:
security_groups = ['default']
else:
security_groups = []
for sg in kw['server']['security_groups']:
if 'name' not in sg:
return utils.format_nova_error(
400, _('Invalid input for field/attribute'))
if sg['name'] not in top_sg_map:
return utils.format_nova_error(
400, _('Unable to find security_group with name or id '
'%s') % sg['name'])
security_groups.append(sg['name'])
t_sg_ids, b_sg_ids, is_news = self._handle_security_group(
context, pod, top_sg_map, security_groups)
server_body['networks'] = []
if 'networks' in kw['server']:
for net_info in kw['server']['networks']:
if 'uuid' in net_info:
network = top_client.get_networks(context,
net_info['uuid'])
if not network:
return utils.format_nova_error(
400, _('Network %s could not be '
'found') % net_info['uuid'])
if not self._check_network_server_az_match(
context, network,
kw['server']['availability_zone']):
return utils.format_nova_error(
400, _('Network and server not in the same '
'availability zone'))
subnets = top_client.list_subnets(
context, [{'key': 'network_id',
'comparator': 'eq',
'value': network['id']}])
if not subnets:
return utils.format_nova_error(
400, _('Network does not contain any subnets'))
t_port_id, b_port_id = self._handle_network(
context, pod, network, subnets,
top_sg_ids=t_sg_ids, bottom_sg_ids=b_sg_ids)
elif 'port' in net_info:
port = top_client.get_ports(context, net_info['port'])
if not port:
return utils.format_nova_error(
400, _('Port %s could not be '
'found') % net_info['port'])
t_port_id, b_port_id = self._handle_port(
context, pod, port)
server_body['networks'].append({'port': b_port_id})
        # only for a security group first created in a pod do we invoke
        # _handle_sg_rule_for_new_group to initialize the rules in that group;
        # this method removes all the rules in the new group, then adds the
        # new rules
top_sg_id_map = dict((sg['id'], sg) for sg in top_sgs)
new_top_sgs = []
new_bottom_sg_ids = []
default_sg = None
for t_id, b_id, is_new in zip(t_sg_ids, b_sg_ids, is_news):
sg_name = top_sg_id_map[t_id]['name']
if sg_name == 'default':
default_sg = top_sg_id_map[t_id]
continue
if not is_new:
continue
new_top_sgs.append(top_sg_id_map[t_id])
new_bottom_sg_ids.append(b_id)
self._handle_sg_rule_for_new_group(context, pod, new_top_sgs,
new_bottom_sg_ids)
if default_sg:
self._handle_sg_rule_for_default_group(
context, pod, default_sg, self.project_id)
client = self._get_client(pod['pod_name'])
nics = [
{'port-id': _port['port']} for _port in server_body['networks']]
server = client.create_servers(context,
name=server_body['name'],
image=server_body['imageRef'],
flavor=server_body['flavorRef'],
nics=nics,
security_groups=b_sg_ids)
with context.session.begin():
core.create_resource(context, models.ResourceRouting,
{'top_id': server['id'],
'bottom_id': server['id'],
'pod_id': pod['pod_id'],
'project_id': self.project_id,
'resource_type': constants.RT_SERVER})
pecan.response.status = 202
return {'server': server}
@expose(generic=True, template='json')
def delete(self, _id):
context = t_context.extract_context_from_environ()
mappings = db_api.get_bottom_mappings_by_top_id(context, _id,
constants.RT_SERVER)
if not mappings:
pecan.response.status = 404
return {'Error': {'message': _('Server not found'), 'code': 404}}
pod, bottom_id = mappings[0]
client = self._get_client(pod['pod_name'])
top_client = self._get_client()
try:
server_ports = top_client.list_ports(
context, filters=[{'key': 'device_id', 'comparator': 'eq',
'value': _id}])
ret = client.delete_servers(context, bottom_id)
            # a None return value indicates the server was not found
if ret is None:
self._remove_stale_mapping(context, _id)
pecan.response.status = 404
return {'Error': {'message': _('Server not found'),
'code': 404}}
for server_port in server_ports:
self.xjob_handler.delete_server_port(context,
server_port['id'])
except Exception as e:
code = 500
            message = _('Delete server %(server_id)s failed') % {
                'server_id': _id}
if hasattr(e, 'code'):
code = e.code
ex_message = str(e)
if ex_message:
message = ex_message
LOG.error(message)
pecan.response.status = code
return {'Error': {'message': message, 'code': code}}
        # NOTE(zhiyuan) Security group rules for the default security group
        # are also kept until the subnet is deleted.
pecan.response.status = 204
return pecan.response
def _get_or_create_route(self, context, pod, _id, _type):
def list_resources(t_ctx, q_ctx, pod_, ele, _type_):
client = self._get_client(pod_['pod_name'])
return client.list_resources(_type_, t_ctx, [{'key': 'name',
'comparator': 'eq',
'value': ele['id']}])
return t_lock.get_or_create_route(context, None,
self.project_id, pod, {'id': _id},
_type, list_resources)
def _handle_router(self, context, pod, net):
top_client = self._get_client()
interfaces = top_client.list_ports(
context, filters=[{'key': 'network_id',
'comparator': 'eq',
'value': net['id']},
{'key': 'device_owner',
'comparator': 'eq',
'value': 'network:router_interface'}])
interfaces = [inf for inf in interfaces if inf['device_id']]
if not interfaces:
return
# TODO(zhiyuan) change xjob invoking from "cast" to "call" to guarantee
# the job can be successfully registered
self.xjob_handler.setup_bottom_router(
context, net['id'], interfaces[0]['device_id'], pod['pod_id'])
def _handle_network(self, context, pod, net, subnets, port=None,
top_sg_ids=None, bottom_sg_ids=None):
(bottom_net_id,
subnet_map) = self.helper.prepare_bottom_network_subnets(
context, None, self.project_id, pod, net, subnets)
top_client = self._get_client()
top_port_body = {'port': {'network_id': net['id'],
'admin_state_up': True}}
if top_sg_ids:
top_port_body['port']['security_groups'] = top_sg_ids
# port
if not port:
port = top_client.create_ports(context, top_port_body)
port_body = self.helper.get_create_port_body(
self.project_id, port, subnet_map, bottom_net_id,
bottom_sg_ids)
else:
port_body = self.helper.get_create_port_body(
self.project_id, port, subnet_map, bottom_net_id)
_, bottom_port_id = self.helper.prepare_bottom_element(
context, self.project_id, pod, port, constants.RT_PORT, port_body)
self._handle_router(context, pod, net)
return port['id'], bottom_port_id
def _handle_port(self, context, pod, port):
top_client = self._get_client()
        # NOTE(zhiyuan) at this moment, it is possible that the bottom port
        # has already been created. if a user creates a port and associates it
        # with a floating ip before booting a vm, the tricircle plugin will
        # create the bottom port first in order to set up the floating ip in
        # the bottom pod. but it is still safe for us to use the network id
        # and subnet id in the returned port dict, since the tricircle plugin
        # does id mapping and guarantees that the ids in the dict are top ids.
net = top_client.get_networks(context, port['network_id'])
subnets = []
for fixed_ip in port['fixed_ips']:
subnets.append(top_client.get_subnets(context,
fixed_ip['subnet_id']))
return self._handle_network(context, pod, net, subnets, port=port)
@staticmethod
def _safe_create_security_group_rule(context, client, body):
try:
client.create_security_group_rules(context, body)
except q_exceptions.Conflict:
return
@staticmethod
def _safe_delete_security_group_rule(context, client, _id):
try:
client.delete_security_group_rules(context, _id)
except q_exceptions.NotFound:
return
def _handle_security_group(self, context, pod, top_sg_map,
security_groups):
t_sg_ids = []
b_sg_ids = []
is_news = []
for sg_name in security_groups:
t_sg = top_sg_map[sg_name]
sg_body = {
'security_group': {
'name': t_sg['id'],
'description': t_sg['description']}}
is_new, b_sg_id = self.helper.prepare_bottom_element(
context, self.project_id, pod, t_sg, constants.RT_SG, sg_body)
t_sg_ids.append(t_sg['id'])
is_news.append(is_new)
b_sg_ids.append(b_sg_id)
return t_sg_ids, b_sg_ids, is_news
@staticmethod
def _construct_bottom_rule(rule, sg_id, ip=None):
ip = ip or rule['remote_ip_prefix']
        # if ip is passed, this is an extended rule for a remote group
return {'remote_group_id': None,
'direction': rule['direction'],
'remote_ip_prefix': ip,
'protocol': rule.get('protocol'),
'ethertype': rule['ethertype'],
'port_range_max': rule.get('port_range_max'),
'port_range_min': rule.get('port_range_min'),
'security_group_id': sg_id}
@staticmethod
def _compare_rule(rule1, rule2):
for key in ('direction', 'remote_ip_prefix', 'protocol', 'ethertype',
'port_range_max', 'port_range_min'):
if rule1[key] != rule2[key]:
return False
return True
def _handle_sg_rule_for_default_group(self, context, pod, default_sg,
project_id):
top_client = self._get_client()
new_b_rules = []
for t_rule in default_sg['security_group_rules']:
if not t_rule['remote_group_id']:
# leave sg_id empty here
new_b_rules.append(
self._construct_bottom_rule(t_rule, ''))
continue
if t_rule['ethertype'] != 'IPv4':
continue
subnets = top_client.list_subnets(
context, [{'key': 'tenant_id', 'comparator': 'eq',
'value': project_id}])
bridge_ip_net = netaddr.IPNetwork('100.0.0.0/8')
for subnet in subnets:
ip_net = netaddr.IPNetwork(subnet['cidr'])
if ip_net in bridge_ip_net:
continue
# leave sg_id empty here
new_b_rules.append(
self._construct_bottom_rule(t_rule, '',
subnet['cidr']))
mappings = db_api.get_bottom_mappings_by_top_id(
context, default_sg['id'], constants.RT_SG)
for pod, b_sg_id in mappings:
client = self._get_client(pod['pod_name'])
b_sg = client.get_security_groups(context, b_sg_id)
add_rules = []
del_rules = []
match_index = set()
for b_rule in b_sg['security_group_rules']:
match = False
for i, rule in enumerate(new_b_rules):
if self._compare_rule(b_rule, rule):
match = True
match_index.add(i)
break
if not match:
del_rules.append(b_rule)
for i, rule in enumerate(new_b_rules):
if i not in match_index:
add_rules.append(rule)
for del_rule in del_rules:
self._safe_delete_security_group_rule(
context, client, del_rule['id'])
if add_rules:
rule_body = {'security_group_rules': []}
for add_rule in add_rules:
add_rule['security_group_id'] = b_sg_id
rule_body['security_group_rules'].append(add_rule)
self._safe_create_security_group_rule(context,
client, rule_body)
def _handle_sg_rule_for_new_group(self, context, pod, top_sgs,
bottom_sg_ids):
client = self._get_client(pod['pod_name'])
for i, t_sg in enumerate(top_sgs):
b_sg_id = bottom_sg_ids[i]
new_b_rules = []
for t_rule in t_sg['security_group_rules']:
if t_rule['remote_group_id']:
                    # we do not handle remote group rules for non-default
                    # security groups; the tricircle plugin in neutron will
                    # reject such rules anyway. the default security group is
                    # not passed in top_sgs, so t_rule will not belong to the
                    # default security group
continue
new_b_rules.append(
self._construct_bottom_rule(t_rule, b_sg_id))
try:
b_sg = client.get_security_groups(context, b_sg_id)
for b_rule in b_sg['security_group_rules']:
self._safe_delete_security_group_rule(
context, client, b_rule['id'])
if new_b_rules:
rule_body = {'security_group_rules': new_b_rules}
self._safe_create_security_group_rule(context, client,
rule_body)
except Exception:
                # if we fail when operating on bottom security group rules, we
                # update the security group mapping to set bottom_id to None
                # and expire the mapping, so that the security group rule
                # operations can be redone next time
with context.session.begin():
routes = core.query_resource(
context, models.ResourceRouting,
[{'key': 'top_id', 'comparator': 'eq',
'value': t_sg['id']},
{'key': 'bottom_id', 'comparator': 'eq',
'value': b_sg_id}], [])
update_dict = {'bottom_id': None,
'created_at': constants.expire_time,
'updated_at': constants.expire_time}
core.update_resource(context, models.ResourceRouting,
routes[0]['id'], update_dict)
raise
@staticmethod
def _get_create_server_body(origin, bottom_az):
body = {}
copy_fields = ['name', 'imageRef', 'flavorRef',
'max_count', 'min_count']
if bottom_az:
body['availability_zone'] = bottom_az
for field in copy_fields:
if field in origin:
body[field] = origin[field]
return body
@staticmethod
def _remove_fip_info(servers):
for server in servers:
if 'addresses' not in server:
continue
for addresses in server['addresses'].values():
remove_index = -1
for i, address in enumerate(addresses):
if address.get('OS-EXT-IPS:type') == 'floating':
remove_index = i
break
if remove_index >= 0:
del addresses[remove_index]
@staticmethod
def _remove_stale_mapping(context, server_id):
filters = [{'key': 'top_id', 'comparator': 'eq', 'value': server_id},
{'key': 'resource_type',
'comparator': 'eq',
'value': constants.RT_SERVER}]
with context.session.begin():
core.delete_resources(context,
models.ResourceRouting,
filters)
@staticmethod
def _check_network_server_az_match(context, network, server_az):
az_hints = 'availability_zone_hints'
network_type = 'provider:network_type'
# for local type network, we make sure it's created in only one az
# NOTE(zhiyuan) race condition exists when creating vms in the same
# local type network but different azs at the same time
if network.get(network_type) == constants.NT_LOCAL:
mappings = db_api.get_bottom_mappings_by_top_id(
context, network['id'], constants.RT_NETWORK)
if mappings:
pod, _ = mappings[0]
if pod['az_name'] != server_az:
return False
# if neutron az not assigned, server az is used
if not network.get(az_hints):
return True
        return server_az in network[az_hints]
def _process_injected_file_quota(self, context, t_server_dict):
try:
ctx = context.elevated()
injected_files = t_server_dict.get('injected_files', None)
self._check_injected_file_quota(ctx, injected_files)
except (t_exceptions.OnsetFileLimitExceeded,
t_exceptions.OnsetFilePathLimitExceeded,
t_exceptions.OnsetFileContentLimitExceeded) as e:
msg = str(e)
LOG.exception(_LE('Quota exceeded %(msg)s'),
{'msg': msg})
return utils.format_nova_error(400, _('Quota exceeded %s') % msg)
def _check_injected_file_quota(self, context, injected_files):
"""Enforce quota limits on injected files.
Raises a QuotaError if any limit is exceeded.
"""
if injected_files is None:
return
# Check number of files first
try:
QUOTAS.limit_check(context,
injected_files=len(injected_files))
except t_exceptions.OverQuota:
raise t_exceptions.OnsetFileLimitExceeded()
# OK, now count path and content lengths; we're looking for
# the max...
max_path = 0
max_content = 0
for path, content in injected_files:
max_path = max(max_path, len(path))
max_content = max(max_content, len(content))
try:
QUOTAS.limit_check(context,
injected_file_path_bytes=max_path,
injected_file_content_bytes=max_content)
except t_exceptions.OverQuota as exc:
# Favor path limit over content limit for reporting
# purposes
if 'injected_file_path_bytes' in exc.kwargs['overs']:
raise t_exceptions.OnsetFilePathLimitExceeded()
else:
raise t_exceptions.OnsetFileContentLimitExceeded()
def _process_metadata_quota(self, context, t_server_dict):
try:
ctx = context.elevated()
metadata = t_server_dict.get('metadata', None)
self._check_metadata_properties_quota(ctx, metadata)
except t_exceptions.InvalidMetadata as e1:
LOG.exception(_LE('Invalid metadata %(exception)s'),
{'exception': str(e1)})
return utils.format_nova_error(400, _('Invalid metadata'))
except t_exceptions.InvalidMetadataSize as e2:
LOG.exception(_LE('Invalid metadata size %(exception)s'),
{'exception': str(e2)})
return utils.format_nova_error(400, _('Invalid metadata size'))
except t_exceptions.MetadataLimitExceeded as e3:
LOG.exception(_LE('Quota exceeded %(exception)s'),
{'exception': str(e3)})
return utils.format_nova_error(400,
_('Quota exceeded in metadata'))
def _check_metadata_properties_quota(self, context, metadata=None):
"""Enforce quota limits on metadata properties."""
if not metadata:
metadata = {}
if not isinstance(metadata, dict):
msg = (_("Metadata type should be dict."))
raise t_exceptions.InvalidMetadata(reason=msg)
num_metadata = len(metadata)
try:
QUOTAS.limit_check(context, metadata_items=num_metadata)
except t_exceptions.OverQuota as exc:
quota_metadata = exc.kwargs['quotas']['metadata_items']
raise t_exceptions.MetadataLimitExceeded(allowed=quota_metadata)
# Because metadata is processed in the bottom pod, we just do
# parameter validation here to ensure quota management
for k, v in six.iteritems(metadata):
try:
utils.check_string_length(v)
utils.check_string_length(k, min_len=1)
except t_exceptions.InvalidInput as e:
raise t_exceptions.InvalidMetadata(reason=str(e))
if len(k) > MAX_METADATA_KEY_LENGTH:
msg = _("Metadata property key greater than 255 characters")
raise t_exceptions.InvalidMetadataSize(reason=msg)
if len(v) > MAX_METADATA_VALUE_LENGTH:
msg = _("Metadata property value greater than 255 characters")
raise t_exceptions.InvalidMetadataSize(reason=msg)
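The metadata checks above reduce to a type check, an item-count quota, and 255-character limits on keys and values. A self-contained sketch of the same validation, with plain ValueError and an assumed quota value in place of the tricircle exception types and QUOTAS:

MAX_METADATA_KEY_LENGTH = 255
MAX_METADATA_VALUE_LENGTH = 255
METADATA_ITEMS_QUOTA = 128  # assumed quota value for this sketch


def check_metadata(metadata=None):
    metadata = metadata or {}
    if not isinstance(metadata, dict):
        raise ValueError("Metadata type should be dict.")
    if len(metadata) > METADATA_ITEMS_QUOTA:
        raise ValueError("Quota exceeded in metadata")
    for k, v in metadata.items():
        if not 1 <= len(k) <= MAX_METADATA_KEY_LENGTH:
            raise ValueError(
                "Metadata property key empty or greater than 255 characters")
        if len(v) > MAX_METADATA_VALUE_LENGTH:
            raise ValueError(
                "Metadata property value greater than 255 characters")


check_metadata({'role': 'web'})       # passes
try:
    check_metadata({'k': 'v' * 300})  # value too long
except ValueError as exc:
    print(exc)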

View File

@ -1,6 +0,0 @@
===============================================
Tempest Integration of Tricircle
===============================================
This directory contains Tempest tests to cover the Tricircle project.

View File

@ -1,275 +0,0 @@
# tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[id-9c80f04d-11f3-44a4-8738-ed2f879b0ff4]
# tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agents_non_admin[id-e335be47-b9a1-46fd-be30-0874c0b751e6]
# tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_show_agent[id-869bc8e8-0fda-4a30-9b71-f8a7cf58ca9f]
# tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_update_agent_description[id-68a94a14-1243-46e6-83bf-157627e31556]
# tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_update_agent_status[id-371dfc5b-55b9-4cb5-ac82-c40eadaac941]
# tempest.api.network.admin.test_dhcp_agent_scheduler.DHCPAgentSchedulersTestJSON.test_add_remove_network_from_dhcp_agent[id-a0856713-6549-470c-a656-e97c8df9a14d]
# tempest.api.network.admin.test_dhcp_agent_scheduler.DHCPAgentSchedulersTestJSON.test_list_dhcp_agent_hosting_network[id-5032b1fe-eb42-4a64-8f3b-6e189d8b5c7d]
# tempest.api.network.admin.test_dhcp_agent_scheduler.DHCPAgentSchedulersTestJSON.test_list_networks_hosted_by_one_dhcp[id-30c48f98-e45d-4ffb-841c-b8aad57c7587]
# tempest.api.network.admin.test_external_network_extension.ExternalNetworksTestJSON.test_create_external_network[id-462be770-b310-4df9-9c42-773217e4c8b1]
# tempest.api.network.admin.test_external_network_extension.ExternalNetworksTestJSON.test_delete_external_networks_with_floating_ip[id-82068503-2cf2-4ed4-b3be-ecb89432e4bb]
# tempest.api.network.admin.test_external_network_extension.ExternalNetworksTestJSON.test_list_external_networks[id-39be4c9b-a57e-4ff9-b7c7-b218e209dfcc]
# tempest.api.network.admin.test_external_network_extension.ExternalNetworksTestJSON.test_show_external_networks_attribute[id-2ac50ab2-7ebd-4e27-b3ce-a9e399faaea2]
# tempest.api.network.admin.test_external_network_extension.ExternalNetworksTestJSON.test_update_external_network[id-4db5417a-e11c-474d-a361-af00ebef57c5]
# tempest.api.network.admin.test_external_networks_negative.ExternalNetworksAdminNegativeTestJSON.test_create_port_with_precreated_floatingip_as_fixed_ip[id-d402ae6c-0be0-4d8e-833b-a738895d98d0,negative]
# tempest.api.network.admin.test_floating_ips_admin_actions.FloatingIPAdminTestJSON.test_create_list_show_floating_ip_with_tenant_id_by_admin[id-32727cc3-abe2-4485-a16e-48f2d54c14f2]
# tempest.api.network.admin.test_floating_ips_admin_actions.FloatingIPAdminTestJSON.test_list_floating_ips_from_admin_and_nonadmin[id-64f2100b-5471-4ded-b46c-ddeeeb4f231b]
# tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestJSON.test_add_list_remove_router_on_l3_agent[id-9464e5e7-8625-49c3-8fd1-89c52be59d66]
# tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestJSON.test_list_routers_on_l3_agent[id-b7ce6e89-e837-4ded-9b78-9ed3c9c6a45a]
# tempest.api.network.admin.test_negative_quotas.QuotasNegativeTest.test_network_quota_exceeding[id-644f4e1b-1bf9-4af0-9fd8-eb56ac0f51cf]
# tempest.api.network.admin.test_quotas.QuotasTest.test_quotas[id-2390f766-836d-40ef-9aeb-e810d78207fb]
# tempest.api.network.admin.test_routers_dvr.RoutersTestDVR.test_centralized_router_creation[id-8a0a72b4-7290-4677-afeb-b4ffe37bc352]
# tempest.api.network.admin.test_routers_dvr.RoutersTestDVR.test_centralized_router_update_to_dvr[id-acd43596-c1fb-439d-ada8-31ad48ae3c2e]
# tempest.api.network.admin.test_routers_dvr.RoutersTestDVR.test_distributed_router_creation[id-08a2a0a8-f1e4-4b34-8e30-e522e836c44e]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairIpV6TestJSON.test_create_list_port_with_address_pair[id-86c3529b-1231-40de-803c-00e40882f043]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairIpV6TestJSON.test_update_port_with_address_pair[id-9599b337-272c-47fd-b3cf-509414414ac4]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairIpV6TestJSON.test_update_port_with_cidr_address_pair[id-4d6d178f-34f6-4bff-a01c-0a2f8fe909e4]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairIpV6TestJSON.test_update_port_with_multiple_ip_mac_address_pair[id-b3f20091-6cd5-472b-8487-3516137df933]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairTestJSON.test_create_list_port_with_address_pair[id-86c3529b-1231-40de-803c-00e40882f043]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairTestJSON.test_update_port_with_address_pair[id-9599b337-272c-47fd-b3cf-509414414ac4]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairTestJSON.test_update_port_with_cidr_address_pair[id-4d6d178f-34f6-4bff-a01c-0a2f8fe909e4]
# tempest.api.network.test_allowed_address_pair.AllowedAddressPairTestJSON.test_update_port_with_multiple_ip_mac_address_pair[id-b3f20091-6cd5-472b-8487-3516137df933]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful[id-4ab211a0-276f-4552-9070-51e27f58fecf]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful_fixedips[id-51a5e97f-f02e-4e4e-9a17-a69811d300e3]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful_fixedips_duplicate[id-57b8302b-cba9-4fbb-8835-9168df029051]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful_fixedips_outrange[id-98244d88-d990-4570-91d4-6b25d70d08af]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful_router[id-e98f65db-68f4-4330-9fea-abd8c5192d4d]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_64_subnets[id-4256c61d-c538-41ea-9147-3c450c36669e]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_invalid_options[id-81f18ef6-95b5-4584-9966-10d480b7496a]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_eui64[id-e5517e62-6f16-430d-a672-f80875493d4c]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra[id-ae2f4a5d-03ff-4c42-a3b0-ce2fcb7ea832]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra_no_dhcp[id-21635b6f-165a-4d42-bf49-7d195e47342f]
# tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_two_subnets[id-4544adf7-bb5f-4bdc-b769-b3e77026cef2]
# tempest.api.network.test_extensions.ExtensionsTestJSON.test_list_show_extensions[id-ef28c7e6-e646-4979-9d67-deb207bc5564,smoke]
# tempest.api.network.test_extra_dhcp_options.ExtraDHCPOptionsIpV6TestJSON.test_create_list_port_with_extra_dhcp_options[id-d2c17063-3767-4a24-be4f-a23dbfa133c9]
# tempest.api.network.test_extra_dhcp_options.ExtraDHCPOptionsIpV6TestJSON.test_update_show_port_with_extra_dhcp_options[id-9a6aebf4-86ee-4f47-b07a-7f7232c55607]
# tempest.api.network.test_extra_dhcp_options.ExtraDHCPOptionsTestJSON.test_create_list_port_with_extra_dhcp_options[id-d2c17063-3767-4a24-be4f-a23dbfa133c9]
# tempest.api.network.test_extra_dhcp_options.ExtraDHCPOptionsTestJSON.test_update_show_port_with_extra_dhcp_options[id-9a6aebf4-86ee-4f47-b07a-7f7232c55607]
# tempest.api.network.test_floating_ips.FloatingIPTestJSON.test_create_floating_ip_specifying_a_fixed_ip_address[id-36de4bd0-f09c-43e3-a8e1-1decc1ffd3a5,smoke]
# tempest.api.network.test_floating_ips.FloatingIPTestJSON.test_create_list_show_update_delete_floating_ip[id-62595970-ab1c-4b7f-8fcc-fddfe55e8718,smoke]
# tempest.api.network.test_floating_ips.FloatingIPTestJSON.test_create_update_floatingip_with_port_multiple_ip_address[id-45c4c683-ea97-41ef-9c51-5e9802f2f3d7]
# tempest.api.network.test_floating_ips.FloatingIPTestJSON.test_floating_ip_delete_port[id-e1f6bffd-442f-4668-b30e-df13f2705e77]
# tempest.api.network.test_floating_ips.FloatingIPTestJSON.test_floating_ip_update_different_router[id-1bb2f731-fe5a-4b8c-8409-799ade1bed4d]
# tempest.api.network.test_floating_ips_negative.FloatingIPNegativeTestJSON.test_associate_floatingip_port_ext_net_unreachable[id-6b3b8797-6d43-4191-985c-c48b773eb429,negative]
# tempest.api.network.test_floating_ips_negative.FloatingIPNegativeTestJSON.test_create_floatingip_in_private_network[id-50b9aeb4-9f0b-48ee-aa31-fa955a48ff54,negative]
# tempest.api.network.test_floating_ips_negative.FloatingIPNegativeTestJSON.test_create_floatingip_with_port_ext_net_unreachable[id-22996ea8-4a81-4b27-b6e1-fa5df92fa5e8,negative]
# tempest.api.network.test_metering_extensions.MeteringIpV6TestJSON.test_create_delete_metering_label_rule_with_filters[id-f4d547cd-3aee-408f-bf36-454f8825e045]
# tempest.api.network.test_metering_extensions.MeteringIpV6TestJSON.test_create_delete_metering_label_with_filters[id-ec8e15ff-95d0-433b-b8a6-b466bddb1e50]
# tempest.api.network.test_metering_extensions.MeteringIpV6TestJSON.test_list_metering_label_rules[id-cc832399-6681-493b-9d79-0202831a1281]
# tempest.api.network.test_metering_extensions.MeteringIpV6TestJSON.test_list_metering_labels[id-e2fb2f8c-45bf-429a-9f17-171c70444612]
# tempest.api.network.test_metering_extensions.MeteringIpV6TestJSON.test_show_metering_label[id-30abb445-0eea-472e-bd02-8649f54a5968]
# tempest.api.network.test_metering_extensions.MeteringIpV6TestJSON.test_show_metering_label_rule[id-b7354489-96ea-41f3-9452-bace120fb4a7]
# tempest.api.network.test_metering_extensions.MeteringTestJSON.test_create_delete_metering_label_rule_with_filters[id-f4d547cd-3aee-408f-bf36-454f8825e045]
# tempest.api.network.test_metering_extensions.MeteringTestJSON.test_create_delete_metering_label_with_filters[id-ec8e15ff-95d0-433b-b8a6-b466bddb1e50]
# tempest.api.network.test_metering_extensions.MeteringTestJSON.test_list_metering_label_rules[id-cc832399-6681-493b-9d79-0202831a1281]
# tempest.api.network.test_metering_extensions.MeteringTestJSON.test_list_metering_labels[id-e2fb2f8c-45bf-429a-9f17-171c70444612]
# tempest.api.network.test_metering_extensions.MeteringTestJSON.test_show_metering_label[id-30abb445-0eea-472e-bd02-8649f54a5968]
# tempest.api.network.test_metering_extensions.MeteringTestJSON.test_show_metering_label_rule[id-b7354489-96ea-41f3-9452-bace120fb4a7]
# tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_network[id-d4f9024d-1e28-4fc1-a6b1-25dbc6fa11e2,smoke]
# tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_port[id-48037ff2-e889-4c3b-b86a-8e3f34d2d060,smoke]
# tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_subnet[id-8936533b-c0aa-4f29-8e53-6cc873aec489,smoke]
# tempest.api.network.test_networks.BulkNetworkOpsTest.test_bulk_create_delete_network[id-d4f9024d-1e28-4fc1-a6b1-25dbc6fa11e2,smoke]
# tempest.api.network.test_networks.BulkNetworkOpsTest.test_bulk_create_delete_port[id-48037ff2-e889-4c3b-b86a-8e3f34d2d060,smoke]
# tempest.api.network.test_networks.BulkNetworkOpsTest.test_bulk_create_delete_subnet[id-8936533b-c0aa-4f29-8e53-6cc873aec489,smoke]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_all_attributes[id-a4d9ec4c-0306-4111-a75c-db01a709030b]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_with_allocation_pools[id-bec949c4-3147-4ba6-af5f-cd2306118404]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_with_default_gw[id-ebb4fd95-524f-46af-83c1-0305b239338f]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_with_dhcp_enabled[id-94ce038d-ff0a-4a4c-a56b-09da3ca0b55d]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_with_gw[id-e41a4888-65a6-418c-a095-f7c2ef4ad59a]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_with_gw_and_allocation_pools[id-8217a149-0c6c-4cfb-93db-0486f707d13f]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_with_host_routes_and_dns_nameservers[id-d830de0a-be47-468f-8f02-1fd996118289]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_delete_subnet_without_gateway[id-d2d596e2-8e76-47a9-ac51-d4648009f4d3]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_list_subnet_with_no_gw64_one_network[id-a9653883-b2a4-469b-8c3c-4518430a7e55]
# tempest.api.network.test_networks.NetworksIpV6Test.test_create_update_delete_network_subnet[id-0e269138-0da6-4efc-a46d-578161e7b221,smoke]
# tempest.api.network.test_networks.NetworksIpV6Test.test_delete_network_with_subnet[id-f04f61a9-b7f3-4194-90b2-9bcf660d1bfe]
# tempest.api.network.test_networks.NetworksIpV6Test.test_external_network_visibility[id-af774677-42a9-4e4b-bb58-16fe6a5bc1ec,smoke]
# tempest.api.network.test_networks.NetworksIpV6Test.test_list_networks[id-f7ffdeda-e200-4a7a-bcbe-05716e86bf43,smoke]
# tempest.api.network.test_networks.NetworksIpV6Test.test_list_networks_fields[id-6ae6d24f-9194-4869-9c85-c313cb20e080]
# tempest.api.network.test_networks.NetworksIpV6Test.test_list_subnets[id-db68ba48-f4ea-49e9-81d1-e367f6d0b20a,smoke]
# tempest.api.network.test_networks.NetworksIpV6Test.test_list_subnets_fields[id-842589e3-9663-46b0-85e4-7f01273b0412]
# tempest.api.network.test_networks.NetworksIpV6Test.test_show_network[id-2bf13842-c93f-4a69-83ed-717d2ec3b44e,smoke]
# tempest.api.network.test_networks.NetworksIpV6Test.test_show_network_fields[id-867819bb-c4b6-45f7-acf9-90edcf70aa5e]
# tempest.api.network.test_networks.NetworksIpV6Test.test_show_subnet[id-bd635d81-6030-4dd1-b3b9-31ba0cfdf6cc,smoke]
# tempest.api.network.test_networks.NetworksIpV6Test.test_show_subnet_fields[id-270fff0b-8bfc-411f-a184-1e8fd35286f0]
# tempest.api.network.test_networks.NetworksIpV6Test.test_update_subnet_gw_dns_host_routes_dhcp[id-3d3852eb-3009-49ec-97ac-5ce83b73010a]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_slaac_subnet_with_ports[id-88554555-ebf8-41ef-9300-4926d45e06e9]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_stateless_subnet_with_ports[id-2de6ab5a-fcf0-4144-9813-f91a940291f1]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_all_attributes[id-a4d9ec4c-0306-4111-a75c-db01a709030b]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_allocation_pools[id-bec949c4-3147-4ba6-af5f-cd2306118404]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_default_gw[id-ebb4fd95-524f-46af-83c1-0305b239338f]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_dhcp_enabled[id-94ce038d-ff0a-4a4c-a56b-09da3ca0b55d]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_gw[id-e41a4888-65a6-418c-a095-f7c2ef4ad59a]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_gw_and_allocation_pools[id-8217a149-0c6c-4cfb-93db-0486f707d13f]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_host_routes_and_dns_nameservers[id-d830de0a-be47-468f-8f02-1fd996118289]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_v6_attributes_slaac[id-176b030f-a923-4040-a755-9dc94329e60c]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_v6_attributes_stateful[id-da40cd1b-a833-4354-9a85-cd9b8a3b74ca]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_with_v6_attributes_stateless[id-7d410310-8c86-4902-adf9-865d08e31adb]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_subnet_without_gateway[id-d2d596e2-8e76-47a9-ac51-d4648009f4d3]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_list_subnet_with_no_gw64_one_network[id-a9653883-b2a4-469b-8c3c-4518430a7e55]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_update_delete_network_subnet[id-0e269138-0da6-4efc-a46d-578161e7b221,smoke]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_delete_network_with_subnet[id-f04f61a9-b7f3-4194-90b2-9bcf660d1bfe]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_external_network_visibility[id-af774677-42a9-4e4b-bb58-16fe6a5bc1ec,smoke]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_list_networks[id-f7ffdeda-e200-4a7a-bcbe-05716e86bf43,smoke]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_list_networks_fields[id-6ae6d24f-9194-4869-9c85-c313cb20e080]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_list_subnets[id-db68ba48-f4ea-49e9-81d1-e367f6d0b20a,smoke]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_list_subnets_fields[id-842589e3-9663-46b0-85e4-7f01273b0412]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_show_network[id-2bf13842-c93f-4a69-83ed-717d2ec3b44e,smoke]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_show_network_fields[id-867819bb-c4b6-45f7-acf9-90edcf70aa5e]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_show_subnet[id-bd635d81-6030-4dd1-b3b9-31ba0cfdf6cc,smoke]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_show_subnet_fields[id-270fff0b-8bfc-411f-a184-1e8fd35286f0]
# tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_update_subnet_gw_dns_host_routes_dhcp[id-3d3852eb-3009-49ec-97ac-5ce83b73010a]
# tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_all_attributes[id-a4d9ec4c-0306-4111-a75c-db01a709030b]
# tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_allocation_pools[id-bec949c4-3147-4ba6-af5f-cd2306118404]
# tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_dhcp_enabled[id-94ce038d-ff0a-4a4c-a56b-09da3ca0b55d]
# tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw[id-9393b468-186d-496d-aa36-732348cd76e7]
# tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw_and_allocation_pools[id-8217a149-0c6c-4cfb-93db-0486f707d13f]
# tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_host_routes_and_dns_nameservers[id-d830de0a-be47-468f-8f02-1fd996118289]
# tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_without_gateway[id-d2d596e2-8e76-47a9-ac51-d4648009f4d3]
# tempest.api.network.test_networks.NetworksTest.test_create_update_delete_network_subnet[id-0e269138-0da6-4efc-a46d-578161e7b221,smoke]
# tempest.api.network.test_networks.NetworksTest.test_delete_network_with_subnet[id-f04f61a9-b7f3-4194-90b2-9bcf660d1bfe]
# tempest.api.network.test_networks.NetworksTest.test_external_network_visibility[id-af774677-42a9-4e4b-bb58-16fe6a5bc1ec,smoke]
# tempest.api.network.test_networks.NetworksTest.test_list_networks[id-f7ffdeda-e200-4a7a-bcbe-05716e86bf43,smoke]
# tempest.api.network.test_networks.NetworksTest.test_list_networks_fields[id-6ae6d24f-9194-4869-9c85-c313cb20e080]
# tempest.api.network.test_networks.NetworksTest.test_list_subnets[id-db68ba48-f4ea-49e9-81d1-e367f6d0b20a,smoke]
# tempest.api.network.test_networks.NetworksTest.test_list_subnets_fields[id-842589e3-9663-46b0-85e4-7f01273b0412]
# tempest.api.network.test_networks.NetworksTest.test_show_network[id-2bf13842-c93f-4a69-83ed-717d2ec3b44e,smoke]
# tempest.api.network.test_networks.NetworksTest.test_show_network_fields[id-867819bb-c4b6-45f7-acf9-90edcf70aa5e]
# tempest.api.network.test_networks.NetworksTest.test_show_subnet[id-bd635d81-6030-4dd1-b3b9-31ba0cfdf6cc,smoke]
# tempest.api.network.test_networks.NetworksTest.test_show_subnet_fields[id-270fff0b-8bfc-411f-a184-1e8fd35286f0]
# tempest.api.network.test_networks.NetworksTest.test_update_subnet_gw_dns_host_routes_dhcp[id-3d3852eb-3009-49ec-97ac-5ce83b73010a]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_create_port_on_non_existent_network[id-13d3b106-47e6-4b9b-8d53-dae947f092fe,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_delete_non_existent_network[id-03795047-4a94-4120-a0a1-bd376e36fd4e,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_delete_non_existent_port[id-49ec2bbd-ac2e-46fd-8054-798e679ff894,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_delete_non_existent_subnet[id-a176c859-99fb-42ec-a208-8a85b552a239,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_show_non_existent_network[id-9293e937-824d-42d2-8d5b-e985ea67002a,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_show_non_existent_port[id-a954861d-cbfd-44e8-b0a9-7fab111f235d,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_show_non_existent_subnet[id-d746b40c-5e09-4043-99f7-cba1be8b70df,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_update_non_existent_network[id-98bfe4e3-574e-4012-8b17-b2647063de87,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_update_non_existent_port[id-cf8eef21-4351-4f53-adcd-cc5cb1e76b92,negative]
# tempest.api.network.test_networks_negative.NetworksNegativeTestJSON.test_update_non_existent_subnet[id-1cc47884-ac52-4415-a31c-e7ce5474a868,negative]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestJSON.test_create_port_binding_ext_attr[id-8e8569c1-9ac7-44db-8bc1-f5fb2814f29b]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestJSON.test_list_ports_binding_ext_attr[id-1c82a44a-6c6e-48ff-89e1-abe7eaf8f9f8]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestJSON.test_show_port_binding_ext_attr[id-b54ac0ff-35fc-4c79-9ca3-c7dbd4ea4f13]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestJSON.test_update_port_binding_ext_attr[id-6f6c412c-711f-444d-8502-0ac30fbf5dd5]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsTestJSON.test_create_port_binding_ext_attr[id-8e8569c1-9ac7-44db-8bc1-f5fb2814f29b]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsTestJSON.test_list_ports_binding_ext_attr[id-1c82a44a-6c6e-48ff-89e1-abe7eaf8f9f8]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsTestJSON.test_show_port_binding_ext_attr[id-b54ac0ff-35fc-4c79-9ca3-c7dbd4ea4f13]
# tempest.api.network.test_ports.PortsAdminExtendedAttrsTestJSON.test_update_port_binding_ext_attr[id-6f6c412c-711f-444d-8502-0ac30fbf5dd5]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_bulk_port[id-67f1b811-f8db-43e2-86bd-72c074d4a42c]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_in_allowed_allocation_pools[id-0435f278-40ae-48cb-a404-b8a087bc09b1,smoke]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_with_no_securitygroups[id-4179dcb9-1382-4ced-84fe-1b91c54f5735,smoke]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_show_delete_port_user_defined_mac[id-13e95171-6cbd-489c-9d7c-3f9c58215c18]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_update_delete_port[id-c72c1c0c-2193-4aca-aaa4-b1442640f51c,smoke]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_update_port_with_second_ip[id-63aeadd4-3b49-427f-a3b1-19ca81f06270]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_list_ports[id-cf95b358-3e92-4a29-a148-52445e1ac50e,smoke]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_list_ports_fields[id-ff7f117f-f034-4e0e-abff-ccef05c454b4]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_port_list_filter_by_ip[id-e7fe260b-1e79-4dd3-86d9-bec6a7959fc5]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_port_list_filter_by_router_id[id-5ad01ed0-0e6e-4c5d-8194-232801b15c72]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_show_port[id-c9a685bd-e83f-499c-939f-9f7863ca259f,smoke]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_show_port_fields[id-45fcdaf2-dab0-4c13-ac6c-fcddfb579dbd]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_update_port_with_security_group_and_extra_attributes[id-58091b66-4ff4-4cc1-a549-05d60c7acd1a]
# tempest.api.network.test_ports.PortsIpV6TestJSON.test_update_port_with_two_security_groups_and_extra_attributes[id-edf6766d-3d40-4621-bc6e-2521a44c257d]
# tempest.api.network.test_ports.PortsTestJSON.test_create_bulk_port[id-67f1b811-f8db-43e2-86bd-72c074d4a42c]
# tempest.api.network.test_ports.PortsTestJSON.test_create_port_in_allowed_allocation_pools[id-0435f278-40ae-48cb-a404-b8a087bc09b1,smoke]
# tempest.api.network.test_ports.PortsTestJSON.test_create_port_with_no_securitygroups[id-4179dcb9-1382-4ced-84fe-1b91c54f5735,smoke]
# tempest.api.network.test_ports.PortsTestJSON.test_create_show_delete_port_user_defined_mac[id-13e95171-6cbd-489c-9d7c-3f9c58215c18]
# tempest.api.network.test_ports.PortsTestJSON.test_create_update_delete_port[id-c72c1c0c-2193-4aca-aaa4-b1442640f51c,smoke]
# tempest.api.network.test_ports.PortsTestJSON.test_create_update_port_with_second_ip[id-63aeadd4-3b49-427f-a3b1-19ca81f06270]
# tempest.api.network.test_ports.PortsTestJSON.test_list_ports[id-cf95b358-3e92-4a29-a148-52445e1ac50e,smoke]
# tempest.api.network.test_ports.PortsTestJSON.test_list_ports_fields[id-ff7f117f-f034-4e0e-abff-ccef05c454b4]
# tempest.api.network.test_ports.PortsTestJSON.test_port_list_filter_by_ip[id-e7fe260b-1e79-4dd3-86d9-bec6a7959fc5]
# tempest.api.network.test_ports.PortsTestJSON.test_port_list_filter_by_router_id[id-5ad01ed0-0e6e-4c5d-8194-232801b15c72]
# tempest.api.network.test_ports.PortsTestJSON.test_show_port[id-c9a685bd-e83f-499c-939f-9f7863ca259f,smoke]
# tempest.api.network.test_ports.PortsTestJSON.test_show_port_fields[id-45fcdaf2-dab0-4c13-ac6c-fcddfb579dbd]
# tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_security_group_and_extra_attributes[id-58091b66-4ff4-4cc1-a549-05d60c7acd1a]
# tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_two_security_groups_and_extra_attributes[id-edf6766d-3d40-4621-bc6e-2521a44c257d]
# tempest.api.network.test_routers.DvrRoutersTest.test_convert_centralized_router[id-644d7a4a-01a1-4b68-bb8d-0c0042cb1729]
# tempest.api.network.test_routers.DvrRoutersTest.test_create_distributed_router[id-141297aa-3424-455d-aa8d-f2d95731e00a]
# tempest.api.network.test_routers.RoutersIpV6Test.test_add_multiple_router_interfaces[id-802c73c9-c937-4cef-824b-2191e24a6aab,smoke]
# tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_port_id[id-2b7d2f37-6748-4d78-92e5-1d590234f0d5,smoke]
# tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_subnet_id[id-b42e6e39-2e37-49cc-a6f4-8467e940900a,smoke]
# tempest.api.network.test_routers.RoutersIpV6Test.test_create_router_setting_project_id[id-e54dd3a3-4352-4921-b09d-44369ae17397]
# tempest.api.network.test_routers.RoutersIpV6Test.test_create_router_with_default_snat_value[id-847257cc-6afd-4154-b8fb-af49f5670ce8]
# tempest.api.network.test_routers.RoutersIpV6Test.test_create_router_with_snat_explicit[id-ea74068d-09e9-4fd7-8995-9b6a1ace920f]
# tempest.api.network.test_routers.RoutersIpV6Test.test_create_show_list_update_delete_router[id-f64403e2-8483-4b34-8ccd-b09a87bcc68c,smoke]
# tempest.api.network.test_routers.RoutersIpV6Test.test_router_interface_port_update_with_fixed_ip[id-96522edf-b4b5-45d9-8443-fa11c26e6eff]
# tempest.api.network.test_routers.RoutersIpV6Test.test_update_delete_extra_route[id-c86ac3a8-50bd-4b00-a6b8-62af84a0765c]
# tempest.api.network.test_routers.RoutersIpV6Test.test_update_router_admin_state[id-a8902683-c788-4246-95c7-ad9c6d63a4d9]
# tempest.api.network.test_routers.RoutersIpV6Test.test_update_router_reset_gateway_without_snat[id-f2faf994-97f4-410b-a831-9bc977b64374]
# tempest.api.network.test_routers.RoutersIpV6Test.test_update_router_set_gateway[id-6cc285d8-46bf-4f36-9b1a-783e3008ba79]
# tempest.api.network.test_routers.RoutersIpV6Test.test_update_router_set_gateway_with_snat_explicit[id-b386c111-3b21-466d-880c-5e72b01e1a33]
# tempest.api.network.test_routers.RoutersIpV6Test.test_update_router_set_gateway_without_snat[id-96536bc7-8262-4fb2-9967-5c46940fa279]
# tempest.api.network.test_routers.RoutersIpV6Test.test_update_router_unset_gateway[id-ad81b7ee-4f81-407b-a19c-17e623f763e8]
# tempest.api.network.test_routers.RoutersTest.test_add_multiple_router_interfaces[id-802c73c9-c937-4cef-824b-2191e24a6aab,smoke]
# tempest.api.network.test_routers.RoutersTest.test_add_remove_router_interface_with_port_id[id-2b7d2f37-6748-4d78-92e5-1d590234f0d5,smoke]
# tempest.api.network.test_routers.RoutersTest.test_add_remove_router_interface_with_subnet_id[id-b42e6e39-2e37-49cc-a6f4-8467e940900a,smoke]
# tempest.api.network.test_routers.RoutersTest.test_create_router_setting_project_id[id-e54dd3a3-4352-4921-b09d-44369ae17397]
# tempest.api.network.test_routers.RoutersTest.test_create_router_with_default_snat_value[id-847257cc-6afd-4154-b8fb-af49f5670ce8]
# tempest.api.network.test_routers.RoutersTest.test_create_router_with_snat_explicit[id-ea74068d-09e9-4fd7-8995-9b6a1ace920f]
# tempest.api.network.test_routers.RoutersTest.test_create_show_list_update_delete_router[id-f64403e2-8483-4b34-8ccd-b09a87bcc68c,smoke]
# tempest.api.network.test_routers.RoutersTest.test_router_interface_port_update_with_fixed_ip[id-96522edf-b4b5-45d9-8443-fa11c26e6eff]
# tempest.api.network.test_routers.RoutersTest.test_update_delete_extra_route[id-c86ac3a8-50bd-4b00-a6b8-62af84a0765c]
# tempest.api.network.test_routers.RoutersTest.test_update_router_admin_state[id-a8902683-c788-4246-95c7-ad9c6d63a4d9]
# tempest.api.network.test_routers.RoutersTest.test_update_router_reset_gateway_without_snat[id-f2faf994-97f4-410b-a831-9bc977b64374]
# tempest.api.network.test_routers.RoutersTest.test_update_router_set_gateway[id-6cc285d8-46bf-4f36-9b1a-783e3008ba79]
# tempest.api.network.test_routers.RoutersTest.test_update_router_set_gateway_with_snat_explicit[id-b386c111-3b21-466d-880c-5e72b01e1a33]
# tempest.api.network.test_routers.RoutersTest.test_update_router_set_gateway_without_snat[id-96536bc7-8262-4fb2-9967-5c46940fa279]
# tempest.api.network.test_routers.RoutersTest.test_update_router_unset_gateway[id-ad81b7ee-4f81-407b-a19c-17e623f763e8]
# tempest.api.network.test_routers_negative.DvrRoutersNegativeTest.test_router_create_tenant_distributed_returns_forbidden[id-4990b055-8fc7-48ab-bba7-aa28beaad0b9,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test.test_add_router_interfaces_on_overlapping_subnets_returns_400[id-957751a3-3c68-4fa2-93b6-eb52ea10db6e,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test.test_delete_non_existent_router_returns_404[id-c7edc5ad-d09d-41e6-a344-5c0c31e2e3e4,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test.test_router_add_gateway_invalid_network_returns_404[id-37a94fc0-a834-45b9-bd23-9a81d2fd1e22,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test.test_router_add_gateway_net_not_external_returns_400[id-11836a18-0b15-4327-a50b-f0d9dc66bddd,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test.test_router_remove_interface_in_use_returns_409[id-04df80f9-224d-47f5-837a-bf23e33d1c20,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test.test_show_non_existent_router_returns_404[id-c2a70d72-8826-43a7-8208-0209e6360c47,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeIpV6Test.test_update_non_existent_router_returns_404[id-b23d1569-8b0c-4169-8d4b-6abd34fad5c7,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeTest.test_add_router_interfaces_on_overlapping_subnets_returns_400[id-957751a3-3c68-4fa2-93b6-eb52ea10db6e,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeTest.test_delete_non_existent_router_returns_404[id-c7edc5ad-d09d-41e6-a344-5c0c31e2e3e4,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeTest.test_router_add_gateway_invalid_network_returns_404[id-37a94fc0-a834-45b9-bd23-9a81d2fd1e22,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeTest.test_router_add_gateway_net_not_external_returns_400[id-11836a18-0b15-4327-a50b-f0d9dc66bddd,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeTest.test_router_remove_interface_in_use_returns_409[id-04df80f9-224d-47f5-837a-bf23e33d1c20,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeTest.test_show_non_existent_router_returns_404[id-c2a70d72-8826-43a7-8208-0209e6360c47,negative]
# tempest.api.network.test_routers_negative.RoutersNegativeTest.test_update_non_existent_router_returns_404[id-b23d1569-8b0c-4169-8d4b-6abd34fad5c7,negative]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_list_update_show_delete_security_group[id-bfd128e5-3c92-44b6-9d66-7fe29d22c802,smoke]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_security_group_rule_with_additional_args[id-87dfbcf9-1849-43ea-b1e4-efa3eeae9f71]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_security_group_rule_with_icmp_type_code[id-c9463db8-b44d-4f52-b6c0-8dbda99f26ce]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_security_group_rule_with_protocol_integer_value[id-0a307599-6655-4220-bebc-fd70c64f2290]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_security_group_rule_with_remote_group_id[id-c2ed2deb-7a0c-44d8-8b4c-a5825b5c310b]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_security_group_rule_with_remote_ip_prefix[id-16459776-5da2-4634-bce4-4b55ee3ec188]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_show_delete_security_group_rule[id-cfb99e0e-7410-4a3d-8a0c-959a63ee77e9,smoke]
# tempest.api.network.test_security_groups.SecGroupIPv6Test.test_list_security_groups[id-e30abd17-fef9-4739-8617-dc26da88e686,smoke]
# tempest.api.network.test_security_groups.SecGroupTest.test_create_list_update_show_delete_security_group[id-bfd128e5-3c92-44b6-9d66-7fe29d22c802,smoke]
# tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_additional_args[id-87dfbcf9-1849-43ea-b1e4-efa3eeae9f71]
# tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_icmp_type_code[id-c9463db8-b44d-4f52-b6c0-8dbda99f26ce]
# tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_protocol_integer_value[id-0a307599-6655-4220-bebc-fd70c64f2290]
# tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_group_id[id-c2ed2deb-7a0c-44d8-8b4c-a5825b5c310b]
# tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_ip_prefix[id-16459776-5da2-4634-bce4-4b55ee3ec188]
# tempest.api.network.test_security_groups.SecGroupTest.test_create_show_delete_security_group_rule[id-cfb99e0e-7410-4a3d-8a0c-959a63ee77e9,smoke]
# tempest.api.network.test_security_groups.SecGroupTest.test_list_security_groups[id-e30abd17-fef9-4739-8617-dc26da88e686,smoke]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_additional_default_security_group_fails[id-2323061e-9fbf-4eb0-b547-7e8fafc90849,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_duplicate_security_group_rule_fails[id-8fde898f-ce88-493b-adc9-4e4692879fc5,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_with_bad_ethertype[id-5666968c-fff3-40d6-9efc-df1c8bd01abb,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_with_bad_protocol[id-981bdc22-ce48-41ed-900a-73148b583958,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_with_bad_remote_ip_prefix[id-5f8daf69-3c5f-4aaa-88c9-db1d66f68679,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_with_invalid_ports[id-0d9c7791-f2ad-4e2f-ac73-abf2373b0d2d,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_with_non_existent_remote_groupid[id-4bf786fd-2f02-443c-9716-5b98e159a49a,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_with_non_existent_security_group[id-be308db6-a7cf-4d5c-9baf-71bafd73f35e,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_with_remote_ip_and_group[id-b5c4b247-6b02-435b-b088-d10d45650881,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_create_security_group_rule_wrong_ip_prefix_version[id-7607439c-af73-499e-bf64-f687fd12a842,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_delete_non_existent_security_group[id-1f1bb89d-5664-4956-9fcd-83ee0fa603df,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_show_non_existent_security_group[id-424fd5c3-9ddc-486a-b45f-39bf0c820fc6,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupIPv6Test.test_show_non_existent_security_group_rule[id-4c094c09-000b-4e41-8100-9617600c02a6,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_additional_default_security_group_fails[id-2323061e-9fbf-4eb0-b547-7e8fafc90849,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_duplicate_security_group_rule_fails[id-8fde898f-ce88-493b-adc9-4e4692879fc5,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_ethertype[id-5666968c-fff3-40d6-9efc-df1c8bd01abb,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_protocol[id-981bdc22-ce48-41ed-900a-73148b583958,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_remote_ip_prefix[id-5f8daf69-3c5f-4aaa-88c9-db1d66f68679,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_invalid_ports[id-0d9c7791-f2ad-4e2f-ac73-abf2373b0d2d,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_remote_groupid[id-4bf786fd-2f02-443c-9716-5b98e159a49a,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_security_group[id-be308db6-a7cf-4d5c-9baf-71bafd73f35e,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_remote_ip_and_group[id-b5c4b247-6b02-435b-b088-d10d45650881,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_delete_non_existent_security_group[id-1f1bb89d-5664-4956-9fcd-83ee0fa603df,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group[id-424fd5c3-9ddc-486a-b45f-39bf0c820fc6,negative]
# tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group_rule[id-4c094c09-000b-4e41-8100-9617600c02a6,negative]
# tempest.api.network.test_service_type_management.ServiceTypeManagementTestJSON.test_service_provider_list[id-2cbbeea9-f010-40f6-8df5-4eaa0c918ea6]
# tempest.api.network.test_subnetpools_extensions.SubnetPoolsTestJSON.test_create_list_show_update_delete_subnetpools[id-62595970-ab1c-4b7f-8fcc-fddfe55e9811,smoke]

View File

@ -1,64 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
from oslo_utils import uuidutils
from tricircle.network import helper
class HelperTest(unittest.TestCase):
def setUp(self):
self.helper = helper.NetworkHelper()
def test_get_create_subnet_body(self):
t_net_id = uuidutils.generate_uuid()
t_subnet_id = uuidutils.generate_uuid()
b_net_id = uuidutils.generate_uuid()
project_id = uuidutils.generate_uuid()
t_subnet = {
'network_id': t_net_id,
'id': t_subnet_id,
'ip_version': 4,
'cidr': '10.0.1.0/24',
'gateway_ip': '10.0.1.1',
'allocation_pools': [{'start': '10.0.1.2', 'end': '10.0.1.254'}],
'enable_dhcp': True,
'tenant_id': project_id
}
body = self.helper.get_create_subnet_body(project_id, t_subnet,
b_net_id, '10.0.1.2')
self.assertItemsEqual([{'start': '10.0.1.3', 'end': '10.0.1.254'}],
body['subnet']['allocation_pools'])
self.assertEqual('10.0.1.2', body['subnet']['gateway_ip'])
body = self.helper.get_create_subnet_body(project_id, t_subnet,
b_net_id, '10.0.1.254')
self.assertItemsEqual([{'start': '10.0.1.2', 'end': '10.0.1.253'}],
body['subnet']['allocation_pools'])
self.assertEqual('10.0.1.254', body['subnet']['gateway_ip'])
t_subnet['allocation_pools'] = [
{'start': '10.0.1.2', 'end': '10.0.1.10'},
{'start': '10.0.1.20', 'end': '10.0.1.254'}]
body = self.helper.get_create_subnet_body(project_id, t_subnet,
b_net_id, '10.0.1.5')
self.assertItemsEqual([{'start': '10.0.1.2', 'end': '10.0.1.4'},
{'start': '10.0.1.6', 'end': '10.0.1.10'},
{'start': '10.0.1.20', 'end': '10.0.1.254'}],
body['subnet']['allocation_pools'])
self.assertEqual('10.0.1.5', body['subnet']['gateway_ip'])
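The deleted test above pins down how the removed NetworkHelper carved the
gateway IP out of a subnet's allocation pools when building the bottom
subnet body. A minimal standalone sketch of that splitting rule (names
here are illustrative, not the removed helper's API):

import netaddr

def split_pools_around_gateway(pools, gateway_ip):
    # Drop gateway_ip from the allocation pools, splitting a pool in
    # two when the gateway falls strictly inside it.
    gateway = netaddr.IPAddress(gateway_ip)
    result = []
    for pool in pools:
        start = netaddr.IPAddress(pool['start'])
        end = netaddr.IPAddress(pool['end'])
        if gateway < start or gateway > end:
            result.append(pool)  # gateway outside this pool, keep as-is
            continue
        if start < gateway:      # keep the part left of the gateway
            result.append({'start': str(start), 'end': str(gateway - 1)})
        if gateway < end:        # keep the part right of the gateway
            result.append({'start': str(gateway + 1), 'end': str(end)})
    return result

# split_pools_around_gateway(
#     [{'start': '10.0.1.2', 'end': '10.0.1.10'}], '10.0.1.5')
# -> [{'start': '10.0.1.2', 'end': '10.0.1.4'},
#     {'start': '10.0.1.6', 'end': '10.0.1.10'}]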

File diff suppressed because it is too large

View File

@ -1,243 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from tricircle.common import constants
from tricircle.db import core
from tricircle.db import models
from tricircle.network import exceptions
class TricircleSecurityGroupTestMixin(object):
@staticmethod
def _build_test_rule(_id, sg_id, project_id, ip_prefix, remote_group=None):
return {'security_group_id': sg_id,
'id': _id,
'tenant_id': project_id,
'remote_group_id': remote_group,
'direction': 'ingress',
'remote_ip_prefix': ip_prefix,
'protocol': None,
'port_range_max': None,
'port_range_min': None,
'ethertype': 'IPv4'}
def _test_create_security_group_rule(self, plugin, q_ctx, t_ctx, pod_id,
top_sgs, bottom1_sgs):
t_sg_id = uuidutils.generate_uuid()
t_rule_id = uuidutils.generate_uuid()
b_sg_id = uuidutils.generate_uuid()
project_id = 'test_project_id'
t_sg = {'id': t_sg_id, 'name': 'test', 'description': '',
'tenant_id': project_id,
'security_group_rules': []}
b_sg = {'id': b_sg_id, 'name': t_sg_id, 'description': '',
'tenant_id': project_id,
'security_group_rules': []}
top_sgs.append(t_sg)
bottom1_sgs.append(b_sg)
route = {
'top_id': t_sg_id,
'pod_id': pod_id,
'bottom_id': b_sg_id,
'resource_type': constants.RT_SG}
with t_ctx.session.begin():
core.create_resource(t_ctx, models.ResourceRouting, route)
rule = {
'security_group_rule': self._build_test_rule(
t_rule_id, t_sg_id, project_id, '10.0.0.0/24')}
plugin.create_security_group_rule(q_ctx, rule)
self.assertEqual(1, len(bottom1_sgs[0]['security_group_rules']))
b_rule = bottom1_sgs[0]['security_group_rules'][0]
self.assertEqual(b_sg_id, b_rule['security_group_id'])
rule['security_group_rule'].pop('security_group_id', None)
b_rule.pop('security_group_id', None)
self.assertEqual(rule['security_group_rule'], b_rule)
def _test_delete_security_group_rule(self, plugin, q_ctx, t_ctx, pod_id,
top_sgs, top_rules, bottom1_sgs):
t_sg_id = uuidutils.generate_uuid()
t_rule1_id = uuidutils.generate_uuid()
t_rule2_id = uuidutils.generate_uuid()
b_sg_id = uuidutils.generate_uuid()
project_id = 'test_project_id'
t_rule1 = self._build_test_rule(
t_rule1_id, t_sg_id, project_id, '10.0.1.0/24')
t_rule2 = self._build_test_rule(
t_rule2_id, t_sg_id, project_id, '10.0.2.0/24')
b_rule1 = self._build_test_rule(
t_rule1_id, b_sg_id, project_id, '10.0.1.0/24')
b_rule2 = self._build_test_rule(
t_rule2_id, b_sg_id, project_id, '10.0.2.0/24')
t_sg = {'id': t_sg_id, 'name': 'test', 'description': '',
'tenant_id': project_id,
'security_group_rules': [t_rule1, t_rule2]}
b_sg = {'id': b_sg_id, 'name': t_sg_id, 'description': '',
'tenant_id': project_id,
'security_group_rules': [b_rule1, b_rule2]}
top_sgs.append(t_sg)
top_rules.append(t_rule1)
top_rules.append(t_rule2)
bottom1_sgs.append(b_sg)
route = {
'top_id': t_sg_id,
'pod_id': pod_id,
'bottom_id': b_sg_id,
'resource_type': constants.RT_SG}
with t_ctx.session.begin():
core.create_resource(t_ctx, models.ResourceRouting, route)
plugin.delete_security_group_rule(q_ctx, t_rule1_id)
self.assertEqual(1, len(bottom1_sgs[0]['security_group_rules']))
b_rule = bottom1_sgs[0]['security_group_rules'][0]
self.assertEqual(b_sg_id, b_rule['security_group_id'])
t_rule2.pop('security_group_id', None)
b_rule.pop('security_group_id', None)
self.assertEqual(t_rule2, b_rule)
def _test_handle_remote_group_invalid_input(self, plugin, q_ctx, t_ctx,
pod_id, top_sgs, top_rules,
bottom1_sgs):
t_sg1_id = uuidutils.generate_uuid()
t_sg2_id = uuidutils.generate_uuid()
t_rule1_id = uuidutils.generate_uuid()
t_rule2_id = uuidutils.generate_uuid()
b_sg_id = uuidutils.generate_uuid()
project_id = 'test_project_id'
t_rule1 = self._build_test_rule(
t_rule1_id, t_sg1_id, project_id, None, t_sg1_id)
t_rule2 = self._build_test_rule(
t_rule2_id, t_sg1_id, project_id, None, t_sg2_id)
t_sg = {'id': t_sg1_id, 'name': 'test', 'description': '',
'tenant_id': project_id,
'security_group_rules': []}
b_sg = {'id': b_sg_id, 'name': t_sg1_id, 'description': '',
'tenant_id': project_id,
'security_group_rules': []}
top_sgs.append(t_sg)
top_rules.append(t_rule1)
bottom1_sgs.append(b_sg)
route = {
'top_id': t_sg1_id,
'pod_id': pod_id,
'bottom_id': b_sg_id,
'resource_type': constants.RT_SG}
with t_ctx.session.begin():
core.create_resource(t_ctx, models.ResourceRouting, route)
self.assertRaises(exceptions.RemoteGroupNotSupported,
plugin.create_security_group_rule, q_ctx,
{'security_group_rule': t_rule2})
self.assertRaises(exceptions.RemoteGroupNotSupported,
plugin.delete_security_group_rule, q_ctx, t_rule1_id)
def _test_handle_default_sg_invalid_input(self, plugin, q_ctx, t_ctx,
pod_id, top_sgs, top_rules,
bottom1_sgs):
t_sg_id = uuidutils.generate_uuid()
t_rule1_id = uuidutils.generate_uuid()
t_rule2_id = uuidutils.generate_uuid()
b_sg_id = uuidutils.generate_uuid()
project_id = 'test_project_id'
t_rule1 = self._build_test_rule(
t_rule1_id, t_sg_id, project_id, '10.0.0.0/24')
t_rule2 = self._build_test_rule(
t_rule2_id, t_sg_id, project_id, '10.0.1.0/24')
t_sg = {'id': t_sg_id, 'name': 'default', 'description': '',
'tenant_id': project_id,
'security_group_rules': [t_rule1]}
b_sg = {'id': b_sg_id, 'name': t_sg_id, 'description': '',
'tenant_id': project_id,
'security_group_rules': []}
top_sgs.append(t_sg)
top_rules.append(t_rule1)
bottom1_sgs.append(b_sg)
route1 = {
'top_id': t_sg_id,
'pod_id': pod_id,
'bottom_id': b_sg_id,
'resource_type': constants.RT_SG}
with t_ctx.session.begin():
core.create_resource(t_ctx, models.ResourceRouting, route1)
self.assertRaises(exceptions.DefaultGroupUpdateNotSupported,
plugin.create_security_group_rule, q_ctx,
{'security_group_rule': t_rule2})
self.assertRaises(exceptions.DefaultGroupUpdateNotSupported,
plugin.delete_security_group_rule, q_ctx, t_rule1_id)
def _test_create_security_group_rule_exception(
self, plugin, q_ctx, t_ctx, pod_id, top_sgs, bottom1_sgs):
t_sg_id = uuidutils.generate_uuid()
t_rule_id = uuidutils.generate_uuid()
b_sg_id = uuidutils.generate_uuid()
project_id = 'test_project_id'
t_sg = {'id': t_sg_id, 'name': 'test', 'description': '',
'tenant_id': project_id,
'security_group_rules': []}
b_sg = {'id': b_sg_id, 'name': t_sg_id, 'description': '',
'tenant_id': project_id,
'security_group_rules': []}
top_sgs.append(t_sg)
bottom1_sgs.append(b_sg)
route = {
'top_id': t_sg_id,
'pod_id': pod_id,
'bottom_id': b_sg_id,
'resource_type': constants.RT_SG}
with t_ctx.session.begin():
core.create_resource(t_ctx, models.ResourceRouting, route)
rule = {
'security_group_rule': self._build_test_rule(
t_rule_id, t_sg_id, project_id, '10.0.0.0/24')}
self.assertRaises(exceptions.BottomPodOperationFailure,
plugin.create_security_group_rule, q_ctx, rule)
def _test_delete_security_group_rule_exception(self, plugin, q_ctx, t_ctx,
pod_id, top_sgs, top_rules,
bottom1_sgs):
t_sg_id = uuidutils.generate_uuid()
t_rule_id = uuidutils.generate_uuid()
b_sg_id = uuidutils.generate_uuid()
project_id = 'test_project_id'
t_rule = self._build_test_rule(
t_rule_id, t_sg_id, project_id, '10.0.1.0/24')
b_rule = self._build_test_rule(
t_rule_id, b_sg_id, project_id, '10.0.1.0/24')
t_sg = {'id': t_sg_id, 'name': 'test', 'description': '',
'tenant_id': project_id,
'security_group_rules': [t_rule]}
b_sg = {'id': b_sg_id, 'name': t_sg_id, 'description': '',
'tenant_id': project_id,
'security_group_rules': [b_rule]}
top_sgs.append(t_sg)
top_rules.append(t_rule)
bottom1_sgs.append(b_sg)
route = {
'top_id': t_sg_id,
'pod_id': pod_id,
'bottom_id': b_sg_id,
'resource_type': constants.RT_SG}
with t_ctx.session.begin():
core.create_resource(t_ctx, models.ResourceRouting, route)
self.assertRaises(exceptions.BottomPodOperationFailure,
plugin.delete_security_group_rule, q_ctx, t_rule_id)
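Each case in the deleted mixin above seeds a ResourceRouting record that
maps a resource ID in the top pod to its counterpart in one bottom pod.
A hypothetical in-memory stand-in showing the lookup direction (the real
lookup is db_api.get_bottom_id_by_top_id_pod_name against the database):

# one row per (top resource, bottom pod) pair
routings = [{'top_id': 'top-sg-uuid', 'pod_id': 'pod-1-uuid',
             'bottom_id': 'bottom-sg-uuid',
             'resource_type': 'security_group'}]

def get_bottom_id(top_id, pod_id, resource_type):
    # find the bottom-pod copy of a top-pod resource, or None
    for row in routings:
        if (row['top_id'], row['pod_id'],
                row['resource_type']) == (top_id, pod_id, resource_type):
            return row['bottom_id']
    return None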

View File

@ -1,23 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.xjob.xservice
def list_opts():
return [
('DEFAULT', tricircle.xjob.xservice.common_opts),
('DEFAULT', tricircle.xjob.xservice.service_opts),
]

View File

@ -1,654 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import eventlet
import netaddr
import random
import six
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging as messaging
from oslo_service import periodic_task
import neutronclient.common.exceptions as q_cli_exceptions
from tricircle.common import client
from tricircle.common import constants
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.common import xrpcapi
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
import tricircle.network.exceptions as t_network_exc
from tricircle.network import helper
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
IN_TEST = False
AZ_HINTS = 'availability_zone_hints'
def _job_handle(job_type):
def handle_func(func):
@six.wraps(func)
def handle_args(*args, **kwargs):
if IN_TEST:
# NOTE(zhiyuan) job mechanism will cause some unpredictable
# result in unit test so we would like to bypass it. However
# we have problem mocking a decorator which decorates member
# functions, that's why we use this label, not an elegant
# way though.
func(*args, **kwargs)
return
ctx = args[1]
payload = kwargs['payload']
resource_id = payload[job_type]
db_api.new_job(ctx, job_type, resource_id)
start_time = datetime.datetime.now()
while True:
current_time = datetime.datetime.now()
delta = current_time - start_time
if delta.seconds >= CONF.worker_handle_timeout:
# quit when this handle is running for a long time
break
time_new = db_api.get_latest_timestamp(ctx, constants.JS_New,
job_type, resource_id)
time_success = db_api.get_latest_timestamp(
ctx, constants.JS_Success, job_type, resource_id)
if time_success and time_success >= time_new:
break
job = db_api.register_job(ctx, job_type, resource_id)
if not job:
# fail to obtain the lock, let other worker handle the job
running_job = db_api.get_running_job(ctx, job_type,
resource_id)
if not running_job:
# there are two reasons that running_job is None. one
# is that the running job has just been finished, the
# other is that all workers fail to register the job
# due to deadlock exception. so we sleep and try again
eventlet.sleep(CONF.worker_sleep_time)
continue
job_time = running_job['timestamp']
current_time = datetime.datetime.now()
delta = current_time - job_time
if delta.seconds > CONF.job_run_expire:
# previous running job expires, we set its status to
# fail and try again to obtain the lock
db_api.finish_job(ctx, running_job['id'], False,
time_new)
LOG.warning(_LW('Job %(job)s of type %(job_type)s for '
'resource %(resource)s expires, set '
'its state to Fail'),
{'job': running_job['id'],
'job_type': job_type,
'resource': resource_id})
eventlet.sleep(CONF.worker_sleep_time)
continue
else:
# previous running job is still valid, we just leave
# the job to the worker who holds the lock
break
# successfully obtain the lock, start to execute handler
try:
func(*args, **kwargs)
except Exception:
db_api.finish_job(ctx, job['id'], False, time_new)
LOG.error(_LE('Job %(job)s of type %(job_type)s for '
'resource %(resource)s fails'),
{'job': job['id'],
'job_type': job_type,
'resource': resource_id})
break
db_api.finish_job(ctx, job['id'], True, time_new)
eventlet.sleep(CONF.worker_sleep_time)
return handle_args
return handle_func
class PeriodicTasks(periodic_task.PeriodicTasks):
def __init__(self):
super(PeriodicTasks, self).__init__(CONF)
class XManager(PeriodicTasks):
target = messaging.Target(version='1.0')
def __init__(self, host=None, service_name='xjob'):
LOG.debug(_('XManager initialization...'))
if not host:
host = CONF.host
self.host = host
self.service_name = service_name
# self.notifier = rpc.get_notifier(self.service_name, self.host)
self.additional_endpoints = []
self.clients = {constants.TOP: client.Client()}
self.job_handles = {
constants.JT_ROUTER: self.configure_extra_routes,
constants.JT_ROUTER_SETUP: self.setup_bottom_router,
constants.JT_PORT_DELETE: self.delete_server_port}
self.helper = helper.NetworkHelper()
self.xjob_handler = xrpcapi.XJobAPI()
super(XManager, self).__init__()
def _get_client(self, pod_name=None):
if not pod_name:
return self.clients[constants.TOP]
if pod_name not in self.clients:
self.clients[pod_name] = client.Client(pod_name)
return self.clients[pod_name]
def periodic_tasks(self, context, raise_on_error=False):
"""Tasks to be run at a periodic interval."""
return self.run_periodic_tasks(context, raise_on_error=raise_on_error)
def init_host(self):
"""init_host
Hook to do additional manager initialization when one requests
the service be started. This is called before any service record
is created.
Child classes should override this method.
"""
LOG.debug(_('XManager init_host...'))
pass
def cleanup_host(self):
"""cleanup_host
Hook to do cleanup work when the service shuts down.
Child classes should override this method.
"""
LOG.debug(_('XManager cleanup_host...'))
pass
def pre_start_hook(self):
"""pre_start_hook
Hook to provide the manager the ability to do additional
start-up work before any RPC queues/consumers are created. This is
called after other initialization has succeeded and a service
record is created.
Child classes should override this method.
"""
LOG.debug(_('XManager pre_start_hook...'))
pass
def post_start_hook(self):
"""post_start_hook
Hook to provide the manager the ability to do additional
start-up work immediately after a service creates RPC consumers
and starts 'running'.
Child classes should override this method.
"""
LOG.debug(_('XManager post_start_hook...'))
pass
# rpc message endpoint handling
def test_rpc(self, ctx, payload):
LOG.info(_LI("xmanager receive payload: %s"), payload)
info_text = "xmanager receive payload: %s" % payload
return info_text
@staticmethod
def _get_resource_by_name(cli, cxt, _type, name):
return cli.list_resources(_type, cxt, filters=[{'key': 'name',
'comparator': 'eq',
'value': name}])[0]
@staticmethod
def _get_router_interfaces(cli, cxt, router_id, net_id):
return cli.list_ports(
cxt, filters=[{'key': 'network_id', 'comparator': 'eq',
'value': net_id},
{'key': 'device_id', 'comparator': 'eq',
'value': router_id}])
@periodic_task.periodic_task
def redo_failed_job(self, ctx):
failed_jobs = db_api.get_latest_failed_jobs(ctx)
failed_jobs = [
job for job in failed_jobs if job['type'] in self.job_handles]
if not failed_jobs:
return
# in one run we only pick one job to handle
job_index = random.randint(0, len(failed_jobs) - 1)
failed_job = failed_jobs[job_index]
job_type = failed_job['type']
payload = {job_type: failed_job['resource_id']}
LOG.debug(_('Redo failed job for %(resource_id)s of type '
'%(job_type)s'),
{'resource_id': failed_job['resource_id'],
'job_type': job_type})
self.job_handles[job_type](ctx, payload=payload)
@staticmethod
def _safe_create_bottom_floatingip(t_ctx, pod, client, fip_net_id,
fip_address, port_id):
try:
client.create_floatingips(
t_ctx, {'floatingip': {'floating_network_id': fip_net_id,
'floating_ip_address': fip_address,
'port_id': port_id}})
except q_cli_exceptions.IpAddressInUseClient:
fips = client.list_floatingips(t_ctx,
[{'key': 'floating_ip_address',
'comparator': 'eq',
'value': fip_address}])
if not fips:
# this is rare case that we got IpAddressInUseClient exception
# a second ago but now the floating ip is missing
raise t_network_exc.BottomPodOperationFailure(
resource='floating ip', pod_name=pod['pod_name'])
associated_port_id = fips[0].get('port_id')
if associated_port_id == port_id:
# if the internal port associated with the existing fip is what
# we expect, just ignore this exception
pass
elif not associated_port_id:
# if the existing fip is not associated with any internal port,
# update the fip to add association
client.update_floatingips(t_ctx, fips[0]['id'],
{'floatingip': {'port_id': port_id}})
else:
raise
def _setup_router_one_pod(self, ctx, t_pod, b_pod, t_client, t_net,
t_router, t_ew_bridge_net, t_ew_bridge_subnet,
need_ns_bridge):
b_client = self._get_client(b_pod['pod_name'])
router_body = {'router': {'name': t_router['id'],
'distributed': False}}
project_id = t_router['tenant_id']
# create bottom router in target bottom pod
_, b_router_id = self.helper.prepare_bottom_element(
ctx, project_id, b_pod, t_router, 'router', router_body)
# handle E-W networking
# create top E-W bridge port
q_ctx = None # no need to pass neutron context when using client
t_ew_bridge_port_id = self.helper.get_bridge_interface(
ctx, q_ctx, project_id, t_pod, t_ew_bridge_net['id'],
b_router_id, None, True)
# create bottom E-W bridge port
t_ew_bridge_port = t_client.get_ports(ctx, t_ew_bridge_port_id)
(is_new, b_ew_bridge_port_id,
_, _) = self.helper.get_bottom_bridge_elements(
ctx, project_id, b_pod, t_ew_bridge_net, False, t_ew_bridge_subnet,
t_ew_bridge_port)
# attach bottom E-W bridge port to bottom router
if is_new:
# only attach bridge port the first time
b_client.action_routers(ctx, 'add_interface', b_router_id,
{'port_id': b_ew_bridge_port_id})
else:
# still need to check if the bridge port is bound
port = b_client.get_ports(ctx, b_ew_bridge_port_id)
if not port.get('device_id'):
b_client.action_routers(ctx, 'add_interface', b_router_id,
{'port_id': b_ew_bridge_port_id})
# handle N-S networking
if need_ns_bridge:
t_ns_bridge_net_name = constants.ns_bridge_net_name % project_id
t_ns_bridge_subnet_name = constants.ns_bridge_subnet_name % (
project_id)
t_ns_bridge_net = self._get_resource_by_name(
t_client, ctx, 'network', t_ns_bridge_net_name)
t_ns_bridge_subnet = self._get_resource_by_name(
t_client, ctx, 'subnet', t_ns_bridge_subnet_name)
# create bottom N-S bridge network and subnet
(_, _, b_ns_bridge_subnet_id,
b_ns_bridge_net_id) = self.helper.get_bottom_bridge_elements(
ctx, project_id, b_pod, t_ns_bridge_net, True,
t_ns_bridge_subnet, None)
# create top N-S bridge gateway port
t_ns_bridge_gateway_id = self.helper.get_bridge_interface(
ctx, q_ctx, project_id, t_pod, t_ns_bridge_net['id'],
b_router_id, None, False)
t_ns_bridge_gateway = t_client.get_ports(ctx,
t_ns_bridge_gateway_id)
# add external gateway for bottom router
# add gateway is update operation, can run multiple times
gateway_ip = t_ns_bridge_gateway['fixed_ips'][0]['ip_address']
b_client.action_routers(
ctx, 'add_gateway', b_router_id,
{'network_id': b_ns_bridge_net_id,
'external_fixed_ips': [{'subnet_id': b_ns_bridge_subnet_id,
'ip_address': gateway_ip}]})
# attach internal port to bottom router
t_ports = self._get_router_interfaces(t_client, ctx, t_router['id'],
t_net['id'])
b_net_id = db_api.get_bottom_id_by_top_id_pod_name(
ctx, t_net['id'], b_pod['pod_name'], constants.RT_NETWORK)
if b_net_id:
b_ports = self._get_router_interfaces(b_client, ctx, b_router_id,
b_net_id)
else:
b_ports = []
if not t_ports and b_ports:
# remove redundant bottom interface
b_port = b_ports[0]
request_body = {'port_id': b_port['id']}
b_client.action_routers(ctx, 'remove_interface', b_router_id,
request_body)
elif t_ports and not b_ports:
# create new bottom interface
t_port = t_ports[0]
# only consider ipv4 address currently
t_subnet_id = t_port['fixed_ips'][0]['subnet_id']
t_subnet = t_client.get_subnets(ctx, t_subnet_id)
(b_net_id,
subnet_map) = self.helper.prepare_bottom_network_subnets(
ctx, q_ctx, project_id, b_pod, t_net, [t_subnet])
# the gateway ip of bottom subnet is set to the ip of t_port, so
# we just attach the bottom subnet to the bottom router and neutron
# server in the bottom pod will create the interface for us, using
# the gateway ip.
b_client.action_routers(ctx, 'add_interface', b_router_id,
{'subnet_id': subnet_map[t_subnet_id]})
if not t_router['external_gateway_info']:
return
# handle floatingip
t_ext_net_id = t_router['external_gateway_info']['network_id']
t_fips = t_client.list_floatingips(ctx, [{'key': 'floating_network_id',
'comparator': 'eq',
'value': t_ext_net_id}])
# skip unbound top floatingip
t_ip_fip_map = dict([(fip['floating_ip_address'],
fip) for fip in t_fips if fip['port_id']])
mappings = db_api.get_bottom_mappings_by_top_id(ctx, t_ext_net_id,
constants.RT_NETWORK)
# bottom external network should exist
b_ext_pod, b_ext_net_id = mappings[0]
b_ext_client = self._get_client(b_ext_pod['pod_name'])
b_fips = b_ext_client.list_floatingips(
ctx, [{'key': 'floating_network_id', 'comparator': 'eq',
'value': b_ext_net_id}])
# skip unbound bottom floatingip
b_ip_fip_map = dict([(fip['floating_ip_address'],
fip) for fip in b_fips if fip['port_id']])
add_fips = [ip for ip in t_ip_fip_map if ip not in b_ip_fip_map]
del_fips = [ip for ip in b_ip_fip_map if ip not in t_ip_fip_map]
for add_fip in add_fips:
fip = t_ip_fip_map[add_fip]
t_int_port_id = fip['port_id']
b_int_port_id = db_api.get_bottom_id_by_top_id_pod_name(
ctx, t_int_port_id, b_pod['pod_name'], constants.RT_PORT)
if not b_int_port_id:
LOG.warning(_LW('Port %(port_id)s associated with floating ip '
'%(fip)s is not mapped to bottom pod'),
{'port_id': t_int_port_id, 'fip': add_fip})
continue
t_int_port = t_client.get_ports(ctx, t_int_port_id)
if t_int_port['network_id'] != t_net['id']:
# only handle floating ip association for the given top network
continue
if need_ns_bridge:
# create top N-S bridge interface port
t_ns_bridge_port_id = self.helper.get_bridge_interface(
ctx, q_ctx, project_id, t_pod, t_ns_bridge_net['id'], None,
b_int_port_id, False)
t_ns_bridge_port = t_client.get_ports(ctx, t_ns_bridge_port_id)
b_ext_bridge_net_id = db_api.get_bottom_id_by_top_id_pod_name(
ctx, t_ns_bridge_net['id'], b_ext_pod['pod_name'],
constants.RT_NETWORK)
port_body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': 'ns_bridge_port',
'network_id': b_ext_bridge_net_id,
'fixed_ips': [{'ip_address': t_ns_bridge_port[
'fixed_ips'][0]['ip_address']}]
}
}
_, b_ns_bridge_port_id = self.helper.prepare_bottom_element(
ctx, project_id, b_ext_pod, t_ns_bridge_port,
constants.RT_PORT, port_body)
self._safe_create_bottom_floatingip(
ctx, b_ext_pod, b_ext_client, b_ext_net_id, add_fip,
b_ns_bridge_port_id)
self._safe_create_bottom_floatingip(
ctx, b_pod, b_client, b_ns_bridge_net_id,
t_ns_bridge_port['fixed_ips'][0]['ip_address'],
b_int_port_id)
else:
self._safe_create_bottom_floatingip(
ctx, b_pod, b_client, b_ext_net_id, add_fip,
b_int_port_id)
for del_fip in del_fips:
fip = b_ip_fip_map[del_fip]
if need_ns_bridge:
b_ns_bridge_port = b_ext_client.get_ports(ctx, fip['port_id'])
entries = core.query_resource(
ctx, models.ResourceRouting,
[{'key': 'bottom_id', 'comparator': 'eq',
'value': b_ns_bridge_port['id']},
{'key': 'pod_id', 'comparator': 'eq',
'value': b_ext_pod['pod_id']}], [])
t_ns_bridge_port_id = entries[0]['top_id']
b_int_fips = b_client.list_floatingips(
ctx,
[{'key': 'floating_ip_address',
'comparator': 'eq',
'value': b_ns_bridge_port['fixed_ips'][0]['ip_address']},
{'key': 'floating_network_id',
'comparator': 'eq',
'value': b_ns_bridge_net_id}])
if b_int_fips:
b_client.delete_floatingips(ctx, b_int_fips[0]['id'])
b_ext_client.update_floatingips(
ctx, fip['id'], {'floatingip': {'port_id': None}})
# for bridge port, we have two resource routing entries, one
# for bridge port in top pod, another for bridge port in bottom
# pod. calling t_client.delete_ports will delete bridge port in
# bottom pod as well as routing entry for it, but we also need
# to remove routing entry for bridge port in top pod, bridge
# network will be deleted when deleting router
# first we update the routing entry to set bottom_id to None
# and expire the entry, so if we succeed to delete the bridge
# port next, this expired entry will be deleted; otherwise, we
# fail to delete the bridge port, when the port is accessed via
# lock_handle module, that module will find the port and update
# the entry
with ctx.session.begin():
core.update_resources(
ctx, models.ResourceRouting,
[{'key': 'bottom_id', 'comparator': 'eq',
'value': t_ns_bridge_port_id}],
{'bottom_id': None,
'created_at': constants.expire_time,
'updated_at': constants.expire_time})
# delete bridge port
t_client.delete_ports(ctx, t_ns_bridge_port_id)
# delete the expired entry, even if this deletion fails, we
# still have a chance that lock_handle module will delete it
with ctx.session.begin():
core.delete_resources(ctx, models.ResourceRouting,
[{'key': 'bottom_id',
'comparator': 'eq',
'value': t_ns_bridge_port_id}])
else:
b_client.update_floatingips(ctx, fip['id'],
{'floatingip': {'port_id': None}})
@_job_handle(constants.JT_ROUTER_SETUP)
def setup_bottom_router(self, ctx, payload):
(b_pod_id,
t_router_id, t_net_id) = payload[constants.JT_ROUTER_SETUP].split('#')
if b_pod_id == constants.POD_NOT_SPECIFIED:
mappings = db_api.get_bottom_mappings_by_top_id(
ctx, t_net_id, constants.RT_NETWORK)
b_pods = [mapping[0] for mapping in mappings]
for b_pod in b_pods:
# NOTE(zhiyuan) we create one job for each pod to avoid
# conflict caused by different workers operating the same pod
self.xjob_handler.setup_bottom_router(
ctx, t_net_id, t_router_id, b_pod['pod_id'])
return
t_client = self._get_client()
t_pod = db_api.get_top_pod(ctx)
t_router = t_client.get_routers(ctx, t_router_id)
if not t_router:
# we just end this job if top router no longer exists
return
t_net = t_client.get_networks(ctx, t_net_id)
if not t_net:
# we just end this job if top network no longer exists
return
project_id = t_router['tenant_id']
b_pod = db_api.get_pod(ctx, b_pod_id)
t_ew_bridge_net_name = constants.ew_bridge_net_name % project_id
t_ew_bridge_subnet_name = constants.ew_bridge_subnet_name % project_id
t_ew_bridge_net = self._get_resource_by_name(t_client, ctx, 'network',
t_ew_bridge_net_name)
t_ew_bridge_subnet = self._get_resource_by_name(
t_client, ctx, 'subnet', t_ew_bridge_subnet_name)
ext_nets = t_client.list_networks(ctx,
filters=[{'key': 'router:external',
'comparator': 'eq',
'value': True}])
ext_net_pod_names = set(
[ext_net[AZ_HINTS][0] for ext_net in ext_nets])
if not ext_net_pod_names:
need_ns_bridge = False
elif b_pod['pod_name'] in ext_net_pod_names:
need_ns_bridge = False
else:
need_ns_bridge = True
self._setup_router_one_pod(ctx, t_pod, b_pod, t_client, t_net,
t_router, t_ew_bridge_net,
t_ew_bridge_subnet, need_ns_bridge)
self.xjob_handler.configure_extra_routes(ctx, t_router_id)
@_job_handle(constants.JT_ROUTER)
def configure_extra_routes(self, ctx, payload):
t_router_id = payload[constants.JT_ROUTER]
non_vm_port_types = ['network:router_interface',
'network:router_gateway',
'network:dhcp']
b_pods, b_router_ids = zip(*db_api.get_bottom_mappings_by_top_id(
ctx, t_router_id, constants.RT_ROUTER))
router_bridge_ip_map = {}
router_ips_map = {}
for i, b_pod in enumerate(b_pods):
bottom_client = self._get_client(pod_name=b_pod['pod_name'])
b_interfaces = bottom_client.list_ports(
ctx, filters=[{'key': 'device_id',
'comparator': 'eq',
'value': b_router_ids[i]},
{'key': 'device_owner',
'comparator': 'eq',
'value': 'network:router_interface'}])
router_ips_map[b_router_ids[i]] = {}
for b_interface in b_interfaces:
ip = b_interface['fixed_ips'][0]['ip_address']
ew_bridge_cidr = '100.0.0.0/9'
ns_bridge_cidr = '100.128.0.0/9'
if netaddr.IPAddress(ip) in netaddr.IPNetwork(ew_bridge_cidr):
router_bridge_ip_map[b_router_ids[i]] = ip
continue
if netaddr.IPAddress(ip) in netaddr.IPNetwork(ns_bridge_cidr):
continue
b_net_id = b_interface['network_id']
b_subnet = bottom_client.get_subnets(
ctx, b_interface['fixed_ips'][0]['subnet_id'])
b_ports = bottom_client.list_ports(
ctx, filters=[{'key': 'network_id',
'comparator': 'eq',
'value': b_net_id}])
b_vm_ports = [b_port for b_port in b_ports if b_port.get(
'device_owner', '') not in non_vm_port_types]
ips = [vm_port['fixed_ips'][0][
'ip_address'] for vm_port in b_vm_ports]
router_ips_map[b_router_ids[i]][b_subnet['cidr']] = ips
for i, b_router_id in enumerate(b_router_ids):
bottom_client = self._get_client(pod_name=b_pods[i]['pod_name'])
extra_routes = []
if not router_ips_map[b_router_id]:
bottom_client.update_routers(
ctx, b_router_id, {'router': {'routes': extra_routes}})
continue
for router_id, cidr_ips_map in router_ips_map.iteritems():
if router_id == b_router_id:
continue
for cidr, ips in cidr_ips_map.iteritems():
if cidr in router_ips_map[b_router_id]:
continue
for ip in ips:
extra_routes.append(
{'nexthop': router_bridge_ip_map[router_id],
'destination': ip + '/32'})
bottom_client.update_routers(
ctx, b_router_id, {'router': {'routes': extra_routes}})
@_job_handle(constants.JT_PORT_DELETE)
def delete_server_port(self, ctx, payload):
t_port_id = payload[constants.JT_PORT_DELETE]
self._get_client().delete_ports(ctx, t_port_id)
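The job dispatch in the removed XManager boils down to a register/finish
protocol over two timestamps: a job runs only if the newest "Success"
record is older than the newest "New" record, and only by the worker
that wins the register lock. A condensed, hypothetical sketch (db stands
in for tricircle.db.api; the real loop also sleeps between retries and
fails over jobs whose lock has expired):

def run_job_once(ctx, job_type, resource_id, handler, db):
    db.new_job(ctx, job_type, resource_id)
    while True:
        time_new = db.get_latest_timestamp(ctx, 'New', job_type,
                                           resource_id)
        time_success = db.get_latest_timestamp(ctx, 'Success', job_type,
                                               resource_id)
        if time_success and time_success >= time_new:
            break  # latest request already handled by some worker
        job = db.register_job(ctx, job_type, resource_id)
        if not job:
            continue  # another worker holds the lock, retry
        try:
            handler(ctx)
        except Exception:
            db.finish_job(ctx, job['id'], False, time_new)
            break
        db.finish_job(ctx, job['id'], True, time_new)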

View File

@ -17,8 +17,8 @@ import pecan
from oslo_config import cfg
from tricircle.common.i18n import _
from tricircle.common import restapp
from trio2o.common.i18n import _
from trio2o.common import restapp
common_opts = [
@ -52,8 +52,8 @@ def setup_app(*args, **kwargs):
'host': cfg.CONF.bind_host
},
'app': {
'root': 'tricircle.api.controllers.root.RootController',
'modules': ['tricircle.api'],
'root': 'trio2o.api.controllers.root.RootController',
'modules': ['trio2o.api'],
'errors': {
400: '/error',
'__force_dict__': True

View File

@ -22,16 +22,16 @@ import oslo_db.exception as db_exc
from oslo_log import log as logging
from oslo_utils import uuidutils
from tricircle.common import az_ag
import tricircle.common.context as t_context
import tricircle.common.exceptions as t_exc
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common import utils
from trio2o.common import az_ag
import trio2o.common.context as t_context
import trio2o.common.exceptions as t_exc
from trio2o.common.i18n import _
from trio2o.common.i18n import _LE
from trio2o.common import utils
from tricircle.db import api as db_api
from tricircle.db import core
from tricircle.db import models
from trio2o.db import api as db_api
from trio2o.db import core
from trio2o.db import models
LOG = logging.getLogger(__name__)

View File

@ -18,8 +18,8 @@ import oslo_log.log as logging
import pecan
from pecan import request
from tricircle.api.controllers import pod
import tricircle.common.context as t_context
from trio2o.api.controllers import pod
import trio2o.common.context as t_context
LOG = logging.getLogger(__name__)

View File

@ -13,10 +13,10 @@
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.db.core
import trio2o.api.app
def list_opts():
return [
('DEFAULT', tricircle.db.core.db_opts),
('DEFAULT', trio2o.api.app.common_opts),
]
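The list_opts hook kept above is what oslo-config-generator discovers
through a setup.cfg entry point; roughly as follows (the namespace name
and module path here are assumptions, since the file header is not shown):

# setup.cfg (sketch)
[entry_points]
oslo.config.opts =
    trio2o.api = trio2o.api.opts:list_opts

# a sample config could then be generated with:
#   oslo-config-generator --namespace trio2o.api > etc/api.conf.sample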

View File

@ -17,8 +17,8 @@ import pecan
from oslo_config import cfg
from tricircle.common.i18n import _
from tricircle.common import restapp
from trio2o.common.i18n import _
from trio2o.common import restapp
common_opts = [
@ -52,8 +52,8 @@ def setup_app(*args, **kwargs):
'host': cfg.CONF.bind_host
},
'app': {
'root': 'tricircle.cinder_apigw.controllers.root.RootController',
'modules': ['tricircle.cinder_apigw'],
'root': 'trio2o.cinder_apigw.controllers.root.RootController',
'modules': ['trio2o.cinder_apigw'],
'errors': {
400: '/error',
'__force_dict__': True

View File

@ -17,10 +17,10 @@ import pecan
import oslo_log.log as logging
from tricircle.cinder_apigw.controllers import volume
from tricircle.cinder_apigw.controllers import volume_actions
from tricircle.cinder_apigw.controllers import volume_metadata
from tricircle.cinder_apigw.controllers import volume_type
from trio2o.cinder_apigw.controllers import volume
from trio2o.cinder_apigw.controllers import volume_actions
from trio2o.cinder_apigw.controllers import volume_metadata
from trio2o.cinder_apigw.controllers import volume_type
LOG = logging.getLogger(__name__)

View File

@ -23,17 +23,17 @@ from pecan import rest
from oslo_log import log as logging
from oslo_serialization import jsonutils
from tricircle.common import az_ag
from tricircle.common import constants as cons
import tricircle.common.context as t_context
from tricircle.common import httpclient as hclient
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common import utils
from trio2o.common import az_ag
from trio2o.common import constants as cons
import trio2o.common.context as t_context
from trio2o.common import httpclient as hclient
from trio2o.common.i18n import _
from trio2o.common.i18n import _LE
from trio2o.common import utils
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
import trio2o.db.api as db_api
from trio2o.db import core
from trio2o.db import models
LOG = logging.getLogger(__name__)

View File

@ -19,13 +19,13 @@ from pecan import rest
from oslo_log import log as logging
import tricircle.common.client as t_client
from tricircle.common import constants
import tricircle.common.context as t_context
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common import utils
import tricircle.db.api as db_api
import trio2o.common.client as t_client
from trio2o.common import constants
import trio2o.common.context as t_context
from trio2o.common.i18n import _
from trio2o.common.i18n import _LE
from trio2o.common import utils
import trio2o.db.api as db_api
LOG = logging.getLogger(__name__)

View File

@ -22,13 +22,13 @@ from pecan import rest
from oslo_log import log as logging
from oslo_serialization import jsonutils
from tricircle.common import constants as cons
import tricircle.common.context as t_context
from tricircle.common import httpclient as hclient
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common import utils
import tricircle.db.api as db_api
from trio2o.common import constants as cons
import trio2o.common.context as t_context
from trio2o.common import httpclient as hclient
from trio2o.common.i18n import _
from trio2o.common.i18n import _LE
from trio2o.common import utils
import trio2o.db.api as db_api
LOG = logging.getLogger(__name__)

View File

@ -20,14 +20,14 @@ from pecan import rest
from oslo_log import log as logging
from oslo_utils import uuidutils
import tricircle.common.context as t_context
from tricircle.common import exceptions
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common import utils
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
import trio2o.common.context as t_context
from trio2o.common import exceptions
from trio2o.common.i18n import _
from trio2o.common.i18n import _LE
from trio2o.common import utils
import trio2o.db.api as db_api
from trio2o.db import core
from trio2o.db import models
LOG = logging.getLogger(__name__)

View File

@ -13,10 +13,10 @@
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.nova_apigw.app
import trio2o.cinder_apigw.app
def list_opts():
return [
('DEFAULT', tricircle.nova_apigw.app.common_opts),
('DEFAULT', trio2o.cinder_apigw.app.common_opts),
]

View File

@ -16,11 +16,11 @@
from oslo_log import log as logging
from oslo_utils import uuidutils
from tricircle.common.i18n import _LE
from trio2o.common.i18n import _LE
from tricircle.db import api as db_api
from tricircle.db import core
from tricircle.db import models
from trio2o.db import api as db_api
from trio2o.db import core
from trio2o.db import models
LOG = logging.getLogger(__name__)

View File

@ -23,7 +23,7 @@ from oslo_config import cfg
import oslo_messaging as messaging
from oslo_serialization import jsonutils
from tricircle.common import rpc
from trio2o.common import rpc
CONF = cfg.CONF
@ -44,7 +44,7 @@ class BaseClientAPI(object):
"""
VERSION_ALIASES = {
# baseapi was added in the first version of Tricircle
# baseapi was added in the first version of Trio2o
}
def __init__(self, topic):

View File

@ -26,11 +26,11 @@ from keystoneclient.v3 import client as keystone_client
from oslo_config import cfg
from oslo_log import log as logging
import tricircle.common.context as tricircle_context
from tricircle.common import exceptions
from tricircle.common import resource_handle
from tricircle.db import api
from tricircle.db import models
import trio2o.common.context as trio2o_context
from trio2o.common import exceptions
from trio2o.common import resource_handle
from trio2o.db import api
from trio2o.db import models
client_opts = [
@ -123,7 +123,7 @@ class Client(object):
handle_create in NeutronResourceHandle is called).
Not all kinds of resources support the above five operations(or not
supported yet by Tricircle), so each service handler has a
supported yet by Trio2o), so each service handler has a
support_resource field to specify the resources and operations it
supports, like:
'port': LIST | CREATE | DELETE | GET
@ -271,7 +271,7 @@ class Client(object):
:return: None
"""
if is_internal:
admin_context = tricircle_context.Context()
admin_context = trio2o_context.Context()
admin_context.auth_token = self._get_admin_token()
endpoint_map = self._get_endpoint_from_keystone(admin_context)
else:

View File

@ -14,7 +14,7 @@
# under the License.
"""
Routines for configuring tricircle, largely copy from Neutron
Routines for configuring trio2o, largely copy from Neutron
"""
import sys
@ -22,11 +22,11 @@ import sys
from oslo_config import cfg
import oslo_log.log as logging
from tricircle.common.i18n import _LI
from trio2o.common.i18n import _LI
# from tricircle import policy
from tricircle.common import rpc
from tricircle.common import version
# from trio2o import policy
from trio2o.common import rpc
from trio2o.common import version
LOG = logging.getLogger(__name__)
@ -40,7 +40,7 @@ def init(opts, args, **kwargs):
# auth.register_conf_options(cfg.CONF)
logging.register_options(cfg.CONF)
cfg.CONF(args=args, project='tricircle',
cfg.CONF(args=args, project='trio2o',
version=version.version_info,
**kwargs)
@ -51,7 +51,7 @@ def init(opts, args, **kwargs):
def _setup_logging():
"""Sets up the logging options for a log with supplied name."""
product_name = "tricircle"
product_name = "trio2o"
logging.setup(cfg.CONF, product_name)
LOG.info(_LI("Logging enabled!"))
LOG.info(_LI("%(prog)s version %(version)s"),
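Based on the init() signature shown above, a service entry point would
bootstrap configuration roughly like this (a sketch; it assumes init()
registers the passed opts before parsing, which the elided lines are
expected to do):

import sys

from oslo_config import cfg
from trio2o.common import config

opts = [cfg.StrOpt('bind_host', default='0.0.0.0',
                   help='address to listen on')]  # illustrative opt

# parses CLI args and config files under project name 'trio2o',
# then sets up logging as shown in _setup_logging()
config.init(opts, sys.argv[1:])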

View File

@ -19,9 +19,9 @@ from pecan import request
import oslo_context.context as oslo_ctx
from tricircle.common import constants
from tricircle.common.i18n import _
from tricircle.db import core
from trio2o.common import constants
from trio2o.common.i18n import _
from trio2o.db import core
def get_db_context():

View File

@ -14,20 +14,20 @@
# under the License.
"""
Tricircle base exception handling.
Trio2o base exception handling.
"""
import six
from oslo_log import log as logging
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from trio2o.common.i18n import _
from trio2o.common.i18n import _LE
LOG = logging.getLogger(__name__)
class TricircleException(Exception):
"""Base Tricircle Exception.
class Trio2oException(Exception):
"""Base Trio2o Exception.
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
@ -83,7 +83,7 @@ class TricircleException(Exception):
message = six.text_type(message)
self.msg = message
super(TricircleException, self).__init__(message)
super(Trio2oException, self).__init__(message)
def _should_format(self):
@ -97,25 +97,25 @@ class TricircleException(Exception):
return six.text_type(self.msg)
class BadRequest(TricircleException):
class BadRequest(Trio2oException):
message = _('Bad %(resource)s request: %(msg)s')
class NotFound(TricircleException):
class NotFound(Trio2oException):
message = _("Resource could not be found.")
code = 404
safe = True
class Conflict(TricircleException):
class Conflict(Trio2oException):
pass
class NotAuthorized(TricircleException):
class NotAuthorized(Trio2oException):
message = _("Not authorized.")
class ServiceUnavailable(TricircleException):
class ServiceUnavailable(Trio2oException):
message = _("The service is unavailable")
@ -123,37 +123,37 @@ class AdminRequired(NotAuthorized):
message = _("User does not have admin privileges")
class InUse(TricircleException):
class InUse(Trio2oException):
message = _("The resource is inuse")
class InvalidConfigurationOption(TricircleException):
class InvalidConfigurationOption(Trio2oException):
message = _("An invalid value was provided for %(opt_name)s: "
"%(opt_value)s")
class EndpointNotAvailable(TricircleException):
class EndpointNotAvailable(Trio2oException):
message = "Endpoint %(url)s for %(service)s is not available"
def __init__(self, service, url):
super(EndpointNotAvailable, self).__init__(service=service, url=url)
class EndpointNotUnique(TricircleException):
class EndpointNotUnique(Trio2oException):
message = "Endpoint for %(service)s in %(pod)s not unique"
def __init__(self, pod, service):
super(EndpointNotUnique, self).__init__(pod=pod, service=service)
class EndpointNotFound(TricircleException):
class EndpointNotFound(Trio2oException):
message = "Endpoint for %(service)s in %(pod)s not found"
def __init__(self, pod, service):
super(EndpointNotFound, self).__init__(pod=pod, service=service)
class ResourceNotFound(TricircleException):
class ResourceNotFound(Trio2oException):
message = "Could not find %(resource_type)s: %(unique_key)s"
def __init__(self, model, unique_key):
@@ -162,7 +162,7 @@ class ResourceNotFound(TricircleException):
unique_key=unique_key)
class ResourceNotSupported(TricircleException):
class ResourceNotSupported(Trio2oException):
message = "%(method)s method not supported for %(resource)s"
def __init__(self, resource, method):
@@ -170,7 +170,7 @@ class ResourceNotSupported(TricircleException):
method=method)
class Invalid(TricircleException):
class Invalid(Trio2oException):
message = _("Unacceptable parameters.")
code = 400
@@ -187,7 +187,7 @@ class InvalidMetadataSize(Invalid):
message = _("Invalid metadata size: %(reason)s")
class MetadataLimitExceeded(TricircleException):
class MetadataLimitExceeded(Trio2oException):
message = _("Maximum number of metadata items exceeds %(allowed)d")
@@ -224,16 +224,16 @@ class ReservationNotFound(QuotaNotFound):
message = _("Quota reservation %(uuid)s could not be found.")
class OverQuota(TricircleException):
class OverQuota(Trio2oException):
message = _("Quota exceeded for resources: %(overs)s")
class TooManyInstances(TricircleException):
class TooManyInstances(Trio2oException):
message = _("Quota exceeded for %(overs)s: Requested %(req)s,"
" but already used %(used)s of %(allowed)s %(overs)s")
class OnsetFileLimitExceeded(TricircleException):
class OnsetFileLimitExceeded(Trio2oException):
message = _("Personality file limit exceeded")
@@ -245,7 +245,7 @@ class OnsetFileContentLimitExceeded(OnsetFileLimitExceeded):
message = _("Personality file content too long")
class ExternalNetPodNotSpecify(TricircleException):
class ExternalNetPodNotSpecify(Trio2oException):
message = "Pod for external network not specified"
def __init__(self):
@@ -259,18 +259,18 @@ class PodNotFound(NotFound):
super(PodNotFound, self).__init__(pod_name=pod_name)
class ChildQuotaNotZero(TricircleException):
class ChildQuotaNotZero(Trio2oException):
message = _("Child projects having non-zero quota")
# parameter validation error
class ValidationError(TricircleException):
class ValidationError(Trio2oException):
message = _("%(msg)s")
code = 400
# parameter validation error
class HTTPForbiddenError(TricircleException):
class HTTPForbiddenError(Trio2oException):
message = _("%(msg)s")
code = 403
@@ -289,7 +289,7 @@ class VolumeTypeExtraSpecsNotFound(NotFound):
"key %(extra_specs_key)s.")
class Duplicate(TricircleException):
class Duplicate(Trio2oException):
pass
@@ -297,5 +297,5 @@ class VolumeTypeExists(Duplicate):
message = _("Volume Type %(id)s already exists.")
class VolumeTypeUpdateFailed(TricircleException):
class VolumeTypeUpdateFailed(Trio2oException):
message = _("Cannot update volume_type %(id)s")


@@ -21,18 +21,18 @@ from requests import Session
from oslo_log import log as logging
from tricircle.common import client
from tricircle.common import constants as cons
from tricircle.common.i18n import _LE
from tricircle.common import utils
from tricircle.db import api as db_api
from trio2o.common import client
from trio2o.common import constants as cons
from trio2o.common.i18n import _LE
from trio2o.common import utils
from trio2o.db import api as db_api
LOG = logging.getLogger(__name__)
# the url could be endpoint registered in the keystone
# or url sent to tricircle service, which is stored in
# or url sent to trio2o service, which is stored in
# pecan.request.url
def get_version_from_url(url):
@@ -60,7 +60,7 @@ def get_version_from_url(url):
def get_bottom_url(t_ver, t_url, b_ver, b_endpoint):
"""get_bottom_url
convert url received by Tricircle service to bottom OpenStack
convert url received by Trio2o service to bottom OpenStack
request url through the configured endpoint in the KeyStone
:param t_ver: version of top service
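A rough sketch of the conversion this docstring describes: drop everything up to and including the top service's version segment, then graft the remaining path onto the configured bottom endpoint. The split-on-version behaviour here is an assumption for illustration, not the function's actual body:

    def get_bottom_url_sketch(t_ver, t_url, b_ver, b_endpoint):
        # keep the path that follows the top version segment
        remainder = t_url.split('/' + t_ver, 1)[1]
        return '%s/%s%s' % (b_endpoint.rstrip('/'), b_ver, remainder)

    # get_bottom_url_sketch('v2.1', 'http://top:8774/v2.1/tenant/servers',
    #                       'v2', 'http://pod1:8774/')
    # -> 'http://pod1:8774/v2/tenant/servers'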


@@ -14,7 +14,7 @@
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='tricircle')
_translators = oslo_i18n.TranslatorFactory(domain='trio2o')
# The primary translation function using the well-known name "_"
_ = _translators.primary


@@ -18,8 +18,8 @@ import eventlet
import oslo_db.exception as db_exc
from tricircle.db import core
from tricircle.db import models
from trio2o.db import core
from trio2o.db import models
ALL_DONE = 0 # both route and bottom resource exist


@@ -13,14 +13,14 @@
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.common.client
import trio2o.common.client
# Todo: adding rpc cap negotiation configuration after first release
# import tricircle.common.xrpcapi
# import trio2o.common.xrpcapi
def list_opts():
return [
('client', tricircle.common.client.client_opts),
# ('upgrade_levels', tricircle.common.xrpcapi.rpcapi_cap_opt),
('client', trio2o.common.client.client_opts),
# ('upgrade_levels', trio2o.common.xrpcapi.rpcapi_cap_opt),
]
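list_opts() is the standard hook the oslo config generator consumes; iterating it directly is also a quick sanity check that only client options remain after the cleanup. A small sketch (assumes the trio2o package is importable and that this module lives at trio2o.common.opts):

    from trio2o.common import opts

    for group, options in opts.list_opts():
        for opt in options:
            print(group, opt.name, opt.default)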


@@ -14,7 +14,7 @@
# under the License.
"""
Routines for configuring tricircle, copy and modify from Cinder
Routines for configuring trio2o, copy and modify from Cinder
"""
import datetime
@@ -28,13 +28,13 @@ from oslo_utils import timeutils
from keystoneclient import exceptions as k_exceptions
from tricircle.common import client
from tricircle.common import constants as cons
from tricircle.common import exceptions as t_exceptions
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common import utils
from tricircle.db import api as db_api
from trio2o.common import client
from trio2o.common import constants as cons
from trio2o.common import exceptions as t_exceptions
from trio2o.common.i18n import _
from trio2o.common.i18n import _LE
from trio2o.common import utils
from trio2o.db import api as db_api
quota_opts = [
cfg.IntOpt('quota_instances',
@@ -124,7 +124,7 @@ quota_opts = [
'they will update on a new reservation if max_age has '
'passed since the last reservation'),
cfg.StrOpt('quota_driver',
default='tricircle.common.quota.DbQuotaDriver',
default='trio2o.common.quota.DbQuotaDriver',
help='Default driver to use for quota checks'),
cfg.BoolOpt('use_default_quota_class',
default=True,
@@ -621,7 +621,7 @@ class DbQuotaDriver(object):
# Yes, the admin may be in the process of reducing
# quotas, but that's a pretty rare thing.
# NOTE(joehuang): in Tricircle, no embedded sync function here,
# NOTE(joehuang): in Trio2o, no embedded sync function here,
# so set has_sync=False.
quotas = self._get_quotas(context, resources, deltas.keys(),
has_sync=False, project_id=project_id)
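With has_sync=False the driver validates the requested deltas against the stored limits and usage records without invoking any per-resource sync callback. A standalone sketch of that style of over-quota check (names and data structures hypothetical, not the driver's actual code):

    def find_overs(limits, usages, deltas):
        """Return resources whose usage plus delta would exceed the limit."""
        overs = []
        for name, delta in deltas.items():
            limit = limits.get(name, -1)  # -1 means unlimited
            if limit >= 0 and usages.get(name, 0) + delta > limit:
                overs.append(name)
        return overs

    # find_overs({'instances': 10}, {'instances': 9}, {'instances': 2})
    # -> ['instances']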
@@ -999,7 +999,7 @@ class AllQuotaEngine(QuotaEngine):
result = {}
# Global quotas.
# Set sync_func to None for no sync function in Tricircle
# Set sync_func to None for no sync function in Trio2o
reservable_argses = [
('instances', None, 'quota_instances'),


@@ -27,8 +27,8 @@ from oslo_config import cfg
from oslo_log import log as logging
from requests import exceptions as r_exceptions
from tricircle.common import constants as cons
from tricircle.common import exceptions
from trio2o.common import constants as cons
from trio2o.common import exceptions
client_opts = [


@@ -27,7 +27,7 @@ def auth_app(app):
if cfg.CONF.auth_strategy == 'noauth':
pass
elif cfg.CONF.auth_strategy == 'keystone':
# NOTE(zhiyuan) pkg_resources will try to load tricircle to get module
# NOTE(zhiyuan) pkg_resources will try to load trio2o to get module
# version, passing "project" as empty string to bypass it
app = auth_token.AuthProtocol(app, {'project': ''})
else:


@@ -31,15 +31,15 @@ from oslo_config import cfg
import oslo_messaging as messaging
from oslo_serialization import jsonutils
import tricircle.common.context
import tricircle.common.exceptions
import trio2o.common.context
import trio2o.common.exceptions
CONF = cfg.CONF
TRANSPORT = None
NOTIFIER = None
ALLOWED_EXMODS = [
tricircle.common.exceptions.__name__,
trio2o.common.exceptions.__name__,
]
EXTRA_EXMODS = []
@@ -102,7 +102,7 @@ class RequestContextSerializer(messaging.Serializer):
return context.to_dict()
def deserialize_context(self, context):
return tricircle.common.context.Context.from_dict(context)
return trio2o.common.context.Context.from_dict(context)
def get_transport_url(url_str=None):


@@ -31,9 +31,9 @@ _SINGLETON_MAPPING = Mapping({
})
class TricircleSerializer(Serializer):
class Trio2oSerializer(Serializer):
def __init__(self, base=None):
super(TricircleSerializer, self).__init__()
super(Trio2oSerializer, self).__init__()
self._base = base
def serialize_entity(self, context, entity):
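The renamed serializer delegates entity handling to a wrapped base serializer while adding context (de)serialization on top; a typical hook-up looks roughly like this (module path and choice of base serializer are assumptions, not from this commit):

    import oslo_messaging as messaging
    from trio2o.common.serializer import Trio2oSerializer  # path assumed

    serializer = Trio2oSerializer(base=messaging.JsonPayloadSerializer())
    # hand this to get_rpc_server()/RPCClient so payload entities and the
    # request context are serialized consistently on both ends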


@@ -19,10 +19,10 @@ import pecan
from oslo_log import log as logging
from tricircle.common import constants as cons
import tricircle.common.exceptions as t_exceptions
from tricircle.common.i18n import _
import tricircle.db.api as db_api
from trio2o.common import constants as cons
import trio2o.common.exceptions as t_exceptions
from trio2o.common.i18n import _
import trio2o.db.api as db_api
LOG = logging.getLogger(__name__)


@@ -12,4 +12,4 @@
# License for the specific language governing permissions and limitations
# under the License.
version_info = "tricircle 1.0"
version_info = "trio2o 1.0"


@@ -21,10 +21,10 @@ from oslo_log import log as logging
import oslo_messaging as messaging
import rpc
from serializer import TricircleSerializer as Serializer
from serializer import Trio2oSerializer as Serializer
import topics
from tricircle.common import constants
from trio2o.common import constants
CONF = cfg.CONF
@@ -80,17 +80,3 @@ class XJobAPI(object):
self.client.prepare(exchange='openstack').cast(
ctxt, 'setup_bottom_router',
payload={constants.JT_ROUTER_SETUP: combine_id})
def configure_extra_routes(self, ctxt, router_id):
# NOTE(zhiyuan) this RPC is called by plugin in Neutron server, whose
# control exchange is "neutron", however, we starts xjob without
# specifying its control exchange, so the default value "openstack" is
# used, thus we need to pass exchange as "openstack" here.
self.client.prepare(exchange='openstack').cast(
ctxt, 'configure_extra_routes',
payload={constants.JT_ROUTER: router_id})
def delete_server_port(self, ctxt, port_id):
self.client.prepare(exchange='openstack').cast(
ctxt, 'delete_server_port',
payload={constants.JT_PORT_DELETE: port_id})
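Both deleted methods were networking jobs; they used the same fire-and-forget cast pattern that the remaining setup_bottom_router call keeps. A self-contained sketch of that pattern with oslo.messaging (topic, method name and payload are placeholders):

    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='xjob', version='1.0')  # topic assumed
    client = messaging.RPCClient(transport, target)
    # exchange is pinned to "openstack" for the reason the NOTE above gives:
    # xjob starts with the default control exchange, not "neutron"
    client.prepare(exchange='openstack').cast(
        {}, 'setup_bottom_router', payload={'router_setup': 'some-job-id'})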


@@ -27,14 +27,14 @@ from sqlalchemy import or_, and_
from sqlalchemy.orm import joinedload
from sqlalchemy.sql.expression import literal_column
from tricircle.common import constants
from tricircle.common.context import is_admin_context as _is_admin_context
from tricircle.common import exceptions
from tricircle.common.i18n import _
from tricircle.common.i18n import _LW
from trio2o.common import constants
from trio2o.common.context import is_admin_context as _is_admin_context
from trio2o.common import exceptions
from trio2o.common.i18n import _
from trio2o.common.i18n import _LW
from tricircle.db import core
from tricircle.db import models
from trio2o.db import core
from trio2o.db import models
CONF = cfg.CONF


@@ -26,12 +26,12 @@ import oslo_db.options as db_options
import oslo_db.sqlalchemy.session as db_session
from oslo_utils import strutils
from tricircle.common import exceptions
from trio2o.common import exceptions
db_opts = [
cfg.StrOpt('tricircle_db_connection',
help='db connection string for tricircle'),
cfg.StrOpt('trio2o_db_connection',
help='db connection string for trio2o'),
]
cfg.CONF.register_opts(db_opts)
@@ -74,7 +74,7 @@ def _get_engine_facade():
global _engine_facade
if not _engine_facade:
t_connection = cfg.CONF.tricircle_db_connection
t_connection = cfg.CONF.trio2o_db_connection
_engine_facade = db_session.EngineFacade(t_connection,
_conf=cfg.CONF)
return _engine_facade
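Since trio2o_db_connection is registered without an option group it lives under [DEFAULT], and the engine facade above is built lazily from it on first use. A minimal programmatic sketch (the connection string is a placeholder):

    from oslo_config import cfg
    import trio2o.db.core  # noqa: importing registers trio2o_db_connection

    cfg.CONF.set_override(
        'trio2o_db_connection',
        'mysql+pymysql://user:pass@127.0.0.1/trio2o?charset=utf8')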


@@ -1,7 +1,7 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=tricircle
repository_id=trio2o
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.


@@ -18,9 +18,9 @@ import os
from oslo_db.sqlalchemy import migration
from tricircle import db
from tricircle.db import core
from tricircle.db import migrate_repo
from trio2o import db
from trio2o.db import core
from trio2o.db import migrate_repo
def find_migrate_repo(package=None, repo_name='migrate_repo'):
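find_migrate_repo resolves the on-disk path of the sqlalchemy-migrate repository so oslo.db can version the schema; a rough sketch of the usual call chain (the helper's module location is assumed, and the engine is a throwaway placeholder):

    import sqlalchemy
    from oslo_db.sqlalchemy import migration
    from trio2o.db import migration_helpers  # module path assumed

    engine = sqlalchemy.create_engine('sqlite://')  # placeholder engine
    repo_path = migration_helpers.find_migrate_repo()
    migration.db_sync(engine, repo_path)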

Some files were not shown because too many files have changed in this diff.