Submit new code base

Change-Id: I233a1e0c8ecd9d35a66e28be0a6328b5c7215829
zhiyuan_cai 2015-06-18 11:58:12 +08:00
parent 8e9ac69910
commit f9874c05ae
114 changed files with 233 additions and 24378 deletions


@ -1,196 +0,0 @@
Openstack Cinder Proxy
===============================
Cinder-Proxy plays the same role as Cinder-Volume in the cascading OpenStack.
Cinder-Proxy treats the cascaded Cinder as its volume back end, converting internal request messages from the message bus into RESTful API calls to the cascaded Cinder.
Key modules
-----------
* The new Cinder-Proxy module, cinder_proxy, which treats the cascaded Cinder as its volume back end and converts internal request messages from the message bus into RESTful API calls to the cascaded Cinder:
cinder/volume/cinder_proxy.py
Requirements
------------
* openstack-cinder-volume-juno has been installed
Installation
------------
We provide two ways to install the Cinder-Proxy code. In this section, we will guide you through installing the Cinder-Proxy with the minimum configuration.
* **Note:**
- Make sure you have an existing installation of **Openstack Juno**.
- We recommend that you back up at least the following files before installation, because they will be overwritten or modified:
$CINDER_CONFIG_PARENT_DIR/cinder.conf
(replace the $... with actual directory names.)
* **Manual Installation**
- Make sure you have performed backups properly.
- Navigate to the local repository and copy the contents of the 'cinder' sub-directory to the corresponding places in the existing cinder installation, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/cinder $CINDER_PARENT_DIR```
(replace the $... with actual directory name.)
- Update the cinder configuration file (e.g. /etc/cinder/cinder.conf) with the minimum options below. If an option already exists, modify its value; otherwise add it to the config file. Check the "Configurations" section below for a full configuration guide.
```
[DEFAULT]
...
###configuration for Cinder cascading ###
volume_manager=cinder.volume.cinder_proxy.CinderProxy
volume_sync_interval=5
voltype_sync_interval=3600
pagination_limit=50
volume_sync_timestamp_flag=True
cinder_tenant_name=$CASCADED_ADMIN_TENANT
cinder_tenant_id=$CASCADED_ADMIN_ID
cinder_username=$CASCADED_ADMIN_NAME
cinder_password=$CASCADED_ADMIN_PASSWORD
keystone_auth_url=http://$GLOBAL_KEYSTONE_IP:5000/v2.0/
glance_cascading_flag=True
cascading_glance_url=$CASCADING_GLANCE
cascaded_glance_url=http://$CASCADED_GLANCE
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
cascaded_region_name=$CASCADED_REGION_NAME
```
- Restart the Cinder-Proxy.
```service openstack-cinder-volume restart```
- Done. The Cinder-Proxy should be working with a demo configuration.
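The config-update step above can also be scripted. A minimal sketch, assuming Python 3 tooling on the host (file path and option values below are placeholders taken from the sample, not part of the installer):

```python
import configparser

# Options from the minimum configuration above; values are placeholders.
CASCADING_OPTIONS = {
    "volume_manager": "cinder.volume.cinder_proxy.CinderProxy",
    "volume_sync_interval": "5",
    "voltype_sync_interval": "3600",
    "pagination_limit": "50",
    "volume_sync_timestamp_flag": "True",
}

def apply_options(conf_path, options):
    """Add each option to [DEFAULT], or overwrite its value if already present."""
    parser = configparser.ConfigParser()
    parser.read(conf_path)
    for key, value in options.items():
        parser["DEFAULT"][key] = value
    with open(conf_path, "w") as f:
        parser.write(f)
```

This keeps the update idempotent: re-running it modifies existing values instead of appending duplicates, which matches the note in the "Configurations" section.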
* **Automatic Installation**
- Make sure you have performed backups properly.
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)
- Done. The installation script should set up the Cinder-Proxy with the minimum configuration below. Check the "Configurations" section for a full configuration guide.
```
[DEFAULT]
...
###cascade info ###
...
###configuration for Cinder cascading ###
volume_manager=cinder.volume.cinder_proxy.CinderProxy
volume_sync_interval=5
voltype_sync_interval=3600
pagination_limit=50
volume_sync_timestamp_flag=True
cinder_tenant_name=$CASCADED_ADMIN_TENANT
cinder_tenant_id=$CASCADED_ADMIN_ID
cinder_username=$CASCADED_ADMIN_NAME
cinder_password=$CASCADED_ADMIN_PASSWORD
keystone_auth_url=http://$GLOBAL_KEYSTONE_IP:5000/v2.0/
glance_cascading_flag=True
cascading_glance_url=$CASCADING_GLANCE
cascaded_glance_url=http://$CASCADED_GLANCE
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
cascaded_region_name=$CASCADED_REGION_NAME
```
* **Troubleshooting**
In case the automatic installation process does not complete, please check the following:
- Make sure your OpenStack version is Juno.
- Check the variables at the beginning of the install.sh script. Your installation directories may differ from the default values we provide.
- The installation script automatically adds the related code to $CINDER_PARENT_DIR/cinder and modifies the related configuration.
- If the automatic installation does not work, try installing manually.
Configurations
--------------
* This is a (default) configuration sample for the Cinder-Proxy. Please add/modify these options in /etc/cinder/cinder.conf.
* Note:
- Please carefully make sure that options in the configuration file are not duplicated. If an option name already exists, modify its value instead of adding a new one of the same name.
- Please refer to the 'Configuration Details' section below for proper configuration and usage of costs and constraints.
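The duplicate-option pitfall mentioned above can be caught mechanically before restarting the service; a small standalone sketch (not part of the shipped code) using Python's strict config parser:

```python
import configparser

def find_duplicate_options(conf_text):
    """Return a (section, option) tuple for the first repeated option in the
    config text, or None if every option appears only once."""
    parser = configparser.ConfigParser(strict=True)
    try:
        parser.read_string(conf_text)
    except configparser.DuplicateOptionError as err:
        return (err.section, err.option)
    return None
```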
```
[DEFAULT]
...
#
#Options defined in cinder.volume.manager
#
# Default driver to use for the Cinder-Proxy (string value)
volume_manager=cinder.volume.cinder_proxy.CinderProxy
#The interval used by Cinder-Proxy to determine how often volume status
#is synchronized between cascading and cascaded cinder (integer value, default 5)
volume_sync_interval=5
#The interval used by Cinder-Proxy to control how often volume types
#are synchronized between cascading and cascaded cinder (integer value, default 3600)
voltype_sync_interval=3600
#The page size used by Cinder-Proxy for each paginated volume query
#between cascading and cascaded cinder (integer value, default 50)
pagination_limit=50
#The switch flag used by Cinder-Proxy to determine whether to use a timestamp when
#synchronizing volume status (boolean value, default true)
volume_sync_timestamp_flag=True
#The cascaded level tenant name, which will be set as a parameter when the
#cascaded cinder client is constructed by Cinder-Proxy
cinder_tenant_name=$CASCADED_ADMIN_TENANT
#The cascaded level tenant id, which will be set as a parameter when the
#cascaded cinder client is constructed by Cinder-Proxy
cinder_tenant_id=$CASCADED_ADMIN_ID
#The cascaded level user name, which will be set as a parameter when the
#cascaded cinder client is constructed by Cinder-Proxy
cinder_username=$CASCADED_ADMIN_NAME
#The cascaded level user password, which will be set as a parameter when the
#cascaded cinder client is constructed by Cinder-Proxy
cinder_password=$CASCADED_ADMIN_PASSWORD
#The cascading level keystone service url, by which the Cinder-Proxy
#can access the cascading level keystone service
keystone_auth_url=$keystone_auth_url
#The switch flag used by Cinder-Proxy to determine whether glance is deployed in the
#OpenStack-cascading way (boolean value, default true)
glance_cascading_flag=True
#The cascading level glance service url, by which the Cinder-Proxy
#can access the cascading level glance service
cascading_glance_url=$CASCADING_GLANCE
#The cascaded level glance service url, by which the Cinder-Proxy
#can judge whether a cascading glance image has a location for this cascaded glance
cascaded_glance_url=http://$CASCADED_GLANCE
#The cascaded level region name, which will be set as a parameter when
#the cascaded level component services register their endpoints to keystone
cascaded_region_name=$CASCADED_REGION_NAME
#The cascaded level availability zone name, which will be set as a parameter when
#forwarding requests to the cascaded level cinder. Note that the value of
#cascaded_available_zone in Cinder-Proxy must be the same as storage_availability_zone
#in the cascaded level node, and Cinder-Proxy should be configured with the same
#storage_availability_zone. This option may be removed in the future in favor of the
#Cinder-Proxy storage_availability_zone option, but for now it is up to the admin to
#keep storage_availability_zone in Cinder-Proxy and the cascaded cinder consistent.
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
```
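pagination_limit caps each volume query the proxy issues against the cascaded cinder; the proxy then walks pages with a marker until a short page comes back. A sketch of that loop (the `list_volumes` callable is hypothetical, standing in for the cascaded cinder client):

```python
def fetch_all_volumes(list_volumes, pagination_limit=50):
    """Collect all volumes from a paginated API.

    list_volumes(marker, limit) is assumed to return the volumes after
    `marker` (a volume id, or None for the first page), at most `limit`
    at a time.
    """
    volumes = []
    marker = None
    while True:
        page = list_volumes(marker=marker, limit=pagination_limit)
        volumes.extend(page)
        if len(page) < pagination_limit:
            break  # short page: no more volumes on the cascaded side
        marker = page[-1]["id"]
    return volumes
```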

File diff suppressed because it is too large


@ -1,130 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_CONF_DIR="/etc/cinder"
_CINDER_CONF_FILE="cinder.conf"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
_CINDER_INSTALL_LOG="/var/log/cinder/cinder-proxy/installation/install.log"
# the option list to set in the cinder configuration file
_CINDER_CONF_OPTION=("volume_manager=cinder.volume.cinder_proxy.CinderProxy volume_sync_interval=5 voltype_sync_interval=3600 periodic_interval=5 volume_sync_timestamp_flag=True cinder_tenant_name=admin cinder_tenant_id=1234 pagination_limit=50 cinder_username=admin cinder_password=1234 keystone_auth_url=http://10.67.148.210:5000/v2.0/ glance_cascading_flag=False cascading_glance_url=10.67.148.210:9292 cascaded_glance_url=http://10.67.148.201:9292 cascaded_cinder_url=http://10.67.148.201:8776/v2/%(project_id)s cascaded_region_name=Region_AZ1 cascaded_available_zone=AZ1")
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder/"
_BACKUP_DIR="${_CINDER_DIR}/cinder-proxy-installation-backup"
function log()
{
if [ ! -f "${_CINDER_INSTALL_LOG}" ] ; then
mkdir -p `dirname ${_CINDER_INSTALL_LOG}`
touch $_CINDER_INSTALL_LOG
chmod 777 $_CINDER_INSTALL_LOG
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_CINDER_INSTALL_LOG
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" ] ; then
log "Could not find cinder config file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/cinder" ] ; then
log "It seems cinder-proxy has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/cinder"
mkdir -p "${_BACKUP_DIR}/etc/cinder"
cp -r "${_CINDER_DIR}/volume" "${_BACKUP_DIR}/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/cinder"
log "Error in code backup, aborted."
exit 1
fi
cp "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" "${_BACKUP_DIR}/etc/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/cinder"
rm -r "${_BACKUP_DIR}/etc"
log "Error in config backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
log "Recovering failed! Please install manually."
fi
exit 1
fi
log "updating config file..."
sed -i.backup -e "/volume_manager *=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
sed -i.backup -e "/periodic_interval *=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
for option in $_CINDER_CONF_OPTION
do
sed -i -e "/\[DEFAULT\]/a \\"$option "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
done
if [ $? -ne 0 ] ; then
log "Error in updating, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
log "Recovering /cinder failed! Please install manually."
fi
cp "${_BACKUP_DIR}/etc/cinder/${_CINDER_CONF_FILE}" "${_CINDER_CONF_DIR}" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
log "Recovering config failed! Please install manually."
fi
exit 1
fi
log "restarting cinder proxy..."
service openstack-cinder-volume restart
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder proxy manually."
exit 1
fi
log "Cinder proxy installation completed."
log "See README to get started."
exit 0


@ -15,8 +15,8 @@
from oslo.config import cfg
from nova.openstack.common import importutils
from nova.openstack.common import log as logging
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger(__name__)


@ -17,9 +17,9 @@ from oslo.config import cfg
from nova.openstack.common import local
from nova import exception
from nova import wsgi
from nova.openstack.common import context
from nova.openstack.common import importutils
from nova.openstack.common import uuidutils
from oslo_context import context
from oslo.utils import importutils
from oslo.utils import uuidutils
def generate_request_id():


@ -13,7 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
from nova.openstack.common import context
from oslo_context import context
from nova import exception
import eventlet
@ -21,8 +21,8 @@ import eventlet
from keystoneclient.v2_0 import client as kc
from keystoneclient.v3 import client as kc_v3
from oslo.config import cfg
from nova.openstack.common import importutils
from nova.openstack.common import log as logging
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger('nova.compute.keystoneclient')


@ -82,12 +82,12 @@ from nova.objects import quotas as quotas_obj
from nova.objects import block_device as block_device_obj
from nova.objects import compute_node as compute_node_obj
from nova.objects import service as service_obj
from nova.openstack.common import excutils
from nova.openstack.common import jsonutils
from nova.openstack.common import log as logging
from oslo.utils import excutils
from oslo.serialization import jsonutils
from oslo_log import log as logging
from nova.openstack.common import periodic_task
from nova.openstack.common import strutils
from nova.openstack.common import timeutils
from oslo.utils import strutils
from oslo.utils import timeutils
from nova import paths
from nova import rpc
from nova.scheduler import rpcapi as scheduler_rpcapi
@ -176,50 +176,6 @@ compute_opts = [
]
interval_opts = [
cfg.IntOpt('bandwidth_poll_interval',
default=600,
help='Interval to pull network bandwidth usage info. Not '
'supported on all hypervisors. Set to -1 to disable. '
'Setting this to 0 will disable, but this will change in '
'the K release to mean "run at the default rate".'),
# TODO(gilliard): Clean the above message after the K release
cfg.IntOpt('sync_power_state_interval',
default=600,
help='Interval to sync power states between the database and '
'the hypervisor. Set to -1 to disable. '
'Setting this to 0 will disable, but this will change in '
'Juno to mean "run at the default rate".'),
# TODO(gilliard): Clean the above message after the K release
cfg.IntOpt("heal_instance_info_cache_interval",
default=60,
help="Number of seconds between instance info_cache self "
"healing updates"),
cfg.IntOpt('reclaim_instance_interval',
default=0,
help='Interval in seconds for reclaiming deleted instances'),
cfg.IntOpt('volume_usage_poll_interval',
default=0,
help='Interval in seconds for gathering volume usages'),
cfg.IntOpt('shelved_poll_interval',
default=3600,
help='Interval in seconds for polling shelved instances to '
'offload. Set to -1 to disable.'
'Setting this to 0 will disable, but this will change in '
'Juno to mean "run at the default rate".'),
# TODO(gilliard): Clean the above message after the K release
cfg.IntOpt('shelved_offload_time',
default=0,
help='Time in seconds before a shelved instance is eligible '
'for removing from a host. -1 never offload, 0 offload '
'when shelved'),
cfg.IntOpt('instance_delete_interval',
default=300,
help=('Interval in seconds for retrying failed instance file '
'deletes')),
cfg.IntOpt('block_device_allocate_retries_interval',
default=3,
help='Waiting time interval (seconds) between block'
' device allocation retries on failures'),
cfg.IntOpt('sync_instance_state_interval',
default=5,
help='interval to sync instance states between '
@ -235,36 +191,9 @@ interval_opts = [
]
timeout_opts = [
cfg.IntOpt("reboot_timeout",
default=0,
help="Automatically hard reboot an instance if it has been "
"stuck in a rebooting state longer than N seconds. "
"Set to 0 to disable."),
cfg.IntOpt("instance_build_timeout",
default=0,
help="Amount of time in seconds an instance can be in BUILD "
"before going into ERROR status."
"Set to 0 to disable."),
cfg.IntOpt("rescue_timeout",
default=0,
help="Automatically unrescue an instance after N seconds. "
"Set to 0 to disable."),
cfg.IntOpt("resize_confirm_window",
default=0,
help="Automatically confirm resizes after N seconds. "
"Set to 0 to disable."),
cfg.IntOpt("shutdown_timeout",
default=60,
help="Total amount of time to wait in seconds for an instance "
"to perform a clean shutdown."),
]
running_deleted_opts = [
cfg.StrOpt("running_deleted_instance_action",
default="reap",
help="Action to take if a running deleted instance is detected."
"Valid options are 'noop', 'log', 'shutdown', or 'reap'. "
"Set to 'noop' to take no action."),
cfg.IntOpt("running_deleted_instance_poll_interval",
default=1800,
help="Number of seconds to wait between runs of the cleanup "
@ -701,7 +630,7 @@ class ComputeVirtAPI(virtapi.VirtAPI):
class ComputeManager(manager.Manager):
"""Manages the running instances from creation to destruction."""
target = messaging.Target(version='3.35')
target = messaging.Target(version='4.0')
# How long to wait in seconds before re-issuing a shutdown
# signal to an instance during power off. The overall
@ -749,6 +678,7 @@ class ComputeManager(manager.Manager):
# NOTE(russellb) Load the driver last. It may call back into the
# compute manager via the virtapi, so we want it to be fully
# initialized before that happens.
self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
self.use_legacy_block_device_info = \
self.driver.need_legacy_block_device_info
@ -962,6 +892,23 @@ class ComputeManager(manager.Manager):
self._update_resource_tracker(context, instance_ref)
return instance_ref
# Vega: Since the patch for bug 1158684 was accepted, nova no longer
# automatically deletes pre-created ports. We need to handle port
# deletion manually.
def _delete_proxy_port(self, context, instance_uuid):
search_opts = {'device_id': instance_uuid}
csd_neutron_client = ComputeManager.get_neutron_client(CONF.proxy_region_name)
data = csd_neutron_client.list_ports(**search_opts)
ports = [port['id'] for port in data.get('ports', [])]
for port in ports:
try:
csd_neutron_client.delete_port(port)
except NeutronClientException as ne:
if ne.status_code == 404:
LOG.warning('Port %s does not exist', port)
else:
LOG.warning('Failed to delete port %s for server %s', port, instance_uuid)
def _delete_proxy_instance(self, context, instance):
proxy_instance_id = self._get_csd_instance_uuid(instance)
@ -979,6 +926,7 @@ class ComputeManager(manager.Manager):
task_state=None)
LOG.debug(_('delete the server %s from nova-proxy'),
instance['uuid'])
self._delete_proxy_port(context, proxy_instance_id)
except Exception:
if isinstance(sys.exc_info()[1], novaclient.exceptions.NotFound):
return
@ -2410,7 +2358,7 @@ class ComputeManager(manager.Manager):
def _start_building(self, context, instance):
"""Save the host and launched_on fields and log appropriately."""
LOG.audit(_('Starting instance...'), context=context,
LOG.info(_('Starting instance...'), context=context,
instance=instance)
self._instance_update(context, instance.uuid,
vm_state=vm_states.BUILDING,
@ -2784,7 +2732,7 @@ class ComputeManager(manager.Manager):
node=None, limits=None):
try:
LOG.audit(_('Starting instance...'), context=context,
LOG.info(_('Starting instance...'), context=context,
instance=instance)
instance.vm_state = vm_states.BUILDING
instance.task_state = None
@ -3201,7 +3149,8 @@ class ComputeManager(manager.Manager):
socket_dir = '/var/l2proxysock'
if not os.path.exists(socket_dir):
LOG.debug("socket file does not exist!")
raise
# Vega: temporarily comment out this exception
# raise
else:
retry = 5
cas_ports = [cas_port_id["port"]["id"] for cas_port_id in cascaded_ports]
@ -3339,7 +3288,7 @@ class ComputeManager(manager.Manager):
trying to teardown networking
"""
context = context.elevated()
LOG.audit(_('%(action_str)s instance') % {'action_str': 'Terminating'},
LOG.info(_('%(action_str)s instance') % {'action_str': 'Terminating'},
context=context, instance=instance)
if notify:
@ -3726,7 +3675,7 @@ class ComputeManager(manager.Manager):
#cascading patch
context = context.elevated()
with self._error_out_instance_on_exception(context, instance):
LOG.audit(_("Rebuilding instance"), context=context,
LOG.info(_("Rebuilding instance"), context=context,
instance=instance)
# if bdms is None:
# bdms = self.conductor_api. \
@ -3887,7 +3836,7 @@ class ComputeManager(manager.Manager):
instance.power_state = current_power_state
instance.save()
LOG.audit(_('instance snapshotting'), context=context,
LOG.info(_('instance snapshotting'), context=context,
instance=instance)
if instance.power_state != power_state.RUNNING:
@ -3968,7 +3917,7 @@ class ComputeManager(manager.Manager):
try:
self.driver.set_admin_password(instance, new_pass)
LOG.audit(_("Root password set"), instance=instance)
LOG.info(_("Root password set"), instance=instance)
instance.task_state = None
instance.save(
expected_task_state=task_states.UPDATING_PASSWORD)
@ -4013,7 +3962,7 @@ class ComputeManager(manager.Manager):
{'current_state': current_power_state,
'expected_state': expected_state},
instance=instance)
LOG.audit(_('injecting file to %s'), path,
LOG.info(_('injecting file to %s'), path,
instance=instance)
self.driver.inject_file(instance, path, file_contents)
@ -4379,7 +4328,7 @@ class ComputeManager(manager.Manager):
rt = self._get_resource_tracker(node)
with rt.resize_claim(context, instance, instance_type,
image_meta=image, limits=limits) as claim:
LOG.audit(_('Migrating'), context=context, instance=instance)
LOG.info(_('Migrating'), context=context, instance=instance)
self.compute_rpcapi.resize_instance(
context, instance, claim.migration, image,
instance_type, quotas.reservations)
@ -4750,7 +4699,7 @@ class ComputeManager(manager.Manager):
def pause_instance(self, context, instance):
"""Pause an instance on this host."""
context = context.elevated()
LOG.audit(_('Pausing'), context=context, instance=instance)
LOG.info(_('Pausing'), context=context, instance=instance)
self._notify_about_instance_usage(context, instance, 'pause.start')
# self.driver.pause(instance)
# current_power_state = self._get_power_state(context, instance)
@ -4774,7 +4723,7 @@ class ComputeManager(manager.Manager):
def unpause_instance(self, context, instance):
"""Unpause a paused instance on this host."""
context = context.elevated()
LOG.audit(_('Unpausing'), context=context, instance=instance)
LOG.info(_('Unpausing'), context=context, instance=instance)
self._notify_about_instance_usage(context, instance, 'unpause.start')
cascaded_instance_id = self._get_csd_instance_uuid(instance)
if cascaded_instance_id is None:
@ -4814,7 +4763,7 @@ class ComputeManager(manager.Manager):
"""Resume the given suspended instance."""
#cascading patch
context = context.elevated()
LOG.audit(_('Resuming'), context=context, instance=instance)
LOG.info(_('Resuming'), context=context, instance=instance)
cascaded_instance_id = self._get_csd_instance_uuid(instance)
if cascaded_instance_id is None:
@ -4840,7 +4789,7 @@ class ComputeManager(manager.Manager):
def get_console_output(self, context, instance, tail_length):
"""Send the console output for the given instance."""
context = context.elevated()
LOG.audit(_("Get console output"), context=context,
LOG.info(_("Get console output"), context=context,
instance=instance)
# output = self.driver.get_console_output(context, instance)
@ -5009,7 +4958,7 @@ class ComputeManager(manager.Manager):
def _attach_volume(self, context, instance, bdm):
context = context.elevated()
LOG.audit(_('Attaching volume %(volume_id)s to %(mountpoint)s'),
LOG.info(_('Attaching volume %(volume_id)s to %(mountpoint)s'),
{'volume_id': bdm.volume_id,
'mountpoint': bdm['mount_device']},
context=context, instance=instance)
@ -5055,7 +5004,7 @@ class ComputeManager(manager.Manager):
mp = bdm.device_name
volume_id = bdm.volume_id
LOG.audit(_('Detach volume %(volume_id)s from mountpoint %(mp)s'),
LOG.info(_('Detach volume %(volume_id)s from mountpoint %(mp)s'),
{'volume_id': volume_id, 'mp': mp},
context=context, instance=instance)
@ -5489,7 +5438,7 @@ class ComputeManager(manager.Manager):
resources['pci_stats'] = jsonutils.dumps([])
resources['stats'] = {}
rt._update_usage_from_instances(context, resources, [])
rt._sync_compute_node(context, resources)
rt._init_compute_node(context, resources)
@periodic_task.periodic_task
def update_available_resource(self, context):

envrc

@ -1,3 +0,0 @@
#set up where the openstack is installed, before running the installation script,
#it's better to run 'source envrc' .
export OPENSTACK_INSTALL_DIR=/usr/lib/python2.7/dist-packages


@ -1,139 +0,0 @@
Glance Sync Manager
===============================
This is a submodule of the Tricircle project, which adds a sync function to support syncing glance images between the cascading and cascaded OpenStacks.
When launching an instance, nova looks for an image in the same region as the instance to download, which speeds up the whole launch time of the instance.
Key modules
-----------
* Primarily, there is only one new module in glance cascading: Sync, which lives in the glance/sync package.
glance/sync/__init__.py : Adds an ImageRepoProxy class, like store, policy, etc., to add a sync mechanism layer on top of the api request handling chain.
glance/sync/base.py : Contains the SyncManager object, which executes the sync operations.
glance/sync/utils.py : Some helper functions.
glance/sync/api/ : Supports a web server for sync.
glance/sync/client/: Supports a client to visit the web server; ImageRepoProxy uses this client to issue sync requests.
glance/sync/task/: Each sync operation is transformed into a task; we use a queue to store the tasks and an eventlet to handle them concurrently.
glance/sync/store/: Implements the independent glance store, separating the handling of image_data from image_metadata.
glance/cmd/sync.py: Entry point for starting the Sync Server (referenced by /usr/bin/glance-sync).
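The queue-plus-workers pattern described for glance/sync/task/ boils down to produce/consume; a minimal sketch using threads in place of eventlets (function names here are hypothetical, not the module's actual API):

```python
import queue
import threading

def run_sync_tasks(tasks, handle, workers=4):
    """Drain a queue of sync tasks with a small pool of workers.

    `handle` is a callable standing in for the per-task sync logic.
    """
    q = queue.Queue()
    for task in tasks:
        q.put(task)

    def worker():
        while True:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            handle(task)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```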
* **Note:**
At present, glance cascading only supports the v2 version of glance-api;
Requirements
------------
* pexpect>=2.3
Installation
------------
* **Note:**
- The installation and configuration guidelines below are only for the cascading layer of glance. For the cascaded layer, glance is installed as normal.
* **Prerequisites**
- Please install the python package pexpect>=2.3. (We use pxssh for logging in, and there is a bug in pxssh — see https://mail.python.org/pipermail/python-list/2008-February/510054.html — which you should fix before launching the service.)
* **Manual Installation**
- Please **make sure you have installed the glance patches in /juno-patches**.
- Make sure you have performed backups properly.
* **Manual Installation**
1. Under cascading Openstack, copy these files from glance-patch directory and glancesync directory to suitable place:
| DIR | FROM | TO |
| ------------- |:-----------------|:-------------------------------------------|
| glancesync | glance/ | ${python_install_dir}/glance |
| glancesync | etc/glance/ | /etc/glance/ |
| glancesync | glance-sync | /usr/bin/ |
|${glance-patch}| glance/ | ${python_install_dir}/glance |
|${glance-patch}|glance.egg-info/entry_points.txt | ${glance_install_egg.info}/ |
${glance-patch} = `juno-patches/glance/glance_location_patch`; ${python_install_dir} is where openstack is installed, e.g. `/usr/lib64/python2.6/site-packages`.
2. Add/modify the config options
| CONFIG_FILE | OPTION | ADD or MODIFY |
| ----------------|:---------------------------------------------------|:--------------:|
|glance-api.conf | show_multiple_locations=True | M |
|glance-api.conf | sync_server_host=${sync_mgr_host} | A |
|glance-api.conf | sync_server_port=9595 | A |
|glance-api.conf | sync_enabled=True | A |
|glance-sync.conf | cascading_endpoint_url=${glance_api_endpoint_url} | M |
|glance-sync.conf | sync_strategy=ALL | M |
|glance-sync.conf | auth_host=${keystone_host} | M |
3. Re-launch services on the cascading openstack, like:
`service openstack-glance-api restart `
`service openstack-glance-registry restart `
`python /usr/bin/glance-sync --config-file=/etc/glance/glance-sync.conf & `
* **Automatic Installation**
0. run `source envrc`.
1. **make sure you have installed the glance patches in /juno-patches**: Enter the glance-patch installation dir: `cd ./tricircle/juno-patches/glance/glance_location_patch/installation` .
2. Optional, modify the shell script variable: `_PYTHON_INSTALL_DIR` .
3. Run the install script: `sh install.sh`
4. Enter the glancesync installation dir: `cd ./tricircle/glancesync/installation` .
5. Modify the cascading&cascaded glances' store scheme configuration, which is in the file: `./tricircle/glancesync/etc/glance/glance_store.yaml` .
6. Run the install script: `sh install.sh`
Configurations
--------------
Besides glance-api.conf file, we add some new config files. They are described separately.
- In glance-api.conf, three options are added:
[DEFAULT]
# Indicate whether use the image sync, default value is False.
#If configuring on cascading layer, this value should be True.
sync_enabled = True
#The sync server 's port number, default is 9595.
sync_server_port = 9595
#The sync server's host name (or ip address)
sync_server_host = 127.0.0.1
*Besides, the option show_multiple_locations should be set to true.
- In the newly added glance-sync.conf, the options are similar to those in glance-registry.conf, except:
[DEFAULT]
#How to sync the image; the value can be ["None", "ALL", "USER"]
#When "ALL" is chosen, sync to all the cascaded glances;
#When "USER" is chosen, sync according to the user's role, project, etc.
sync_strategy = ALL
#The cascading glance endpoint url. (Note that this value should be
#consistent with what is registered in keystone.)
cascading_endpoint_url = http://127.0.0.1:9292/
#For snapshot sync, the timeout (seconds) for the snapshot's status
#to change to 'active'.
snapshot_timeout = 300
#For snapshot sync, the polling interval (seconds) to check the
#snapshot's status.
snapshot_sleep_interval = 10
#The number of retries when a sync task fails.
task_retry_times = 0
#The timeout (seconds) for copying image data between filesystems
#using 'scp'.
scp_copy_timeout = 3600
#For snapshots, one can set the specific regions to which the snapshot
#will be synced. (e.g. physicalOpenstack001, physicalOpenstack002)
snapshot_region_names =
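The snapshot_timeout and snapshot_sleep_interval options above imply a polling loop like the following sketch (get_status and the clock are injected so the sketch stays self-contained; the real logic lives in glance/sync):

```python
import time

def wait_for_active(get_status, timeout=300, sleep_interval=10,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until it returns 'active', giving up after
    `timeout` seconds; returns True on success, False on timeout."""
    deadline = clock() + timeout
    while True:
        if get_status() == "active":
            return True
        if clock() >= deadline:
            return False  # snapshot never became active within the timeout
        sleep(sleep_interval)
```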
- Last but not least, we add a yaml file, glance_store.yaml, in the cascading glance to configure the store backend copy.
These options correspond to the various store schemes (at present, only filesystem is supported); the values depend on
your environment, so you must configure the file before installation, and restart glance-sync after modifying it.


@ -1,10 +0,0 @@
#!/usr/bin/python
# PBR Generated from 'console_scripts'
import sys
from glance.cmd.sync import main
if __name__ == "__main__":
sys.exit(main())


@ -1,35 +0,0 @@
# Use this pipeline for no auth - DEFAULT
[pipeline:glance-sync]
pipeline = versionnegotiation unauthenticated-context rootapp
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
# Use this pipeline for keystone auth
[pipeline:glance-sync-keystone]
pipeline = versionnegotiation authtoken context rootapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-sync-trusted-auth]
pipeline = versionnegotiation context rootapp
[composite:rootapp]
paste.composite_factory = glance.sync.api:root_app_factory
/v1: syncv1app
[app:syncv1app]
paste.app_factory = glance.sync.api.v1:API.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory


@ -1,60 +0,0 @@
[DEFAULT]
# Show debugging output in logs (sets DEBUG log level output)
debug = True
# Address to bind the API server
bind_host = 0.0.0.0
# Port the bind the API server to
bind_port = 9595
#worker number
workers = 3
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/sync.log
# Backlog requests when creating socket
backlog = 4096
#How to sync the image; the value can be ["None", "ALL", "USER"].
#When "ALL" is chosen, images are synced to all the cascaded glances;
#when "USER" is chosen, syncing depends on the user's role, project, etc.
sync_strategy = ALL
#The cascading glance endpoint url.
cascading_endpoint_url = http://127.0.0.1:9292/
#When syncing a snapshot, set the timeout (in seconds) for the snapshot's
#status to change to 'active'.
snapshot_timeout = 300
#When syncing a snapshot, set the polling interval (in seconds) used to check
#the snapshot's status.
snapshot_sleep_interval = 10
#When a sync task fails, set the number of retries.
task_retry_times = 0
#When copying image data between filesystems using 'scp', set the timeout
#(in seconds) of the copy.
scp_copy_timeout = 3600
#When snapshotting, one can set the specific regions to which the snapshot
#will be synced.
snapshot_region_names = CascadedOne, CascadedTwo
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = glance
admin_password = openstack
[paste_deploy]
config_file = /etc/glance/glance-sync-paste.ini
flavor=keystone


@ -1,29 +0,0 @@
---
glances:
- name: master
service_ip: "127.0.0.1"
schemes:
- name: http
parameters:
netloc: '127.0.0.1:8800'
path: '/'
image_name: 'test.img'
- name: filesystem
parameters:
host: '127.0.0.1'
datadir: '/var/lib/glance/images/'
login_user: 'glance'
login_password: 'glance'
- name: slave1
service_ip: "0.0.0.0"
schemes:
- name: http
parameters:
netloc: '0.0.0.0:8800'
path: '/'
- name: filesystem
parameters:
host: '0.0.0.0'
datadir: '/var/lib/glance/images/'
login_user: 'glance'
login_password: 'glance'


@ -1,65 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
"""
Reference implementation server for Glance Sync
"""
import eventlet
import os
import sys
from glance.common import utils
# Monkey patch socket and time
eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=True)
# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
sys.path.insert(0, possible_topdir)
from glance.common import config
from glance.common import exception
from glance.common import wsgi
from glance.openstack.common import log
import glance.sync
def fail(returncode, e):
sys.stderr.write("ERROR: %s\n" % utils.exception_to_str(e))
sys.exit(returncode)
def main():
try:
config.parse_args(default_config_files='glance-sync.conf')
log.setup('glance')
server = wsgi.Server()
server.start(config.load_paste_app('glance-sync'), default_port=9595)
server.wait()
except exception.WorkerCreationFailure as e:
fail(2, e)
except RuntimeError as e:
fail(1, e)
if __name__ == '__main__':
main()


@ -1,257 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from oslo.config import cfg
import glance.context
import glance.domain.proxy
import glance.openstack.common.log as logging
from glance.sync.clients import Clients as clients
from glance.sync import utils
LOG = logging.getLogger(__name__)
_V2_IMAGE_CREATE_PROPERTIES = ['container_format', 'disk_format', 'min_disk',
'min_ram', 'name', 'virtual_size', 'visibility',
'protected']
_V2_IMAGE_UPDATE_PROPERTIES = ['container_format', 'disk_format', 'min_disk',
'min_ram', 'name']
def _check_trigger_sync(pre_image, image):
"""
check whether the cascaded glance has uploaded data or patched its first
location.
"""
return pre_image.status in ('saving', 'queued') and image.size and \
[l for l in image.locations if not utils.is_glance_location(l['url'])]
def _from_snapshot_request(pre_image, image):
"""
when patch location, check if it's snapshot-sync case.
"""
if pre_image.status == 'queued' and len(image.locations) == 1:
loc_meta = image.locations[0]['metadata']
return loc_meta and loc_meta.get('image_from', None) in ['snapshot',
'volume']
def get_adding_image_properties(image):
_tags = list(image.tags) or []
kwargs = {}
kwargs['body'] = {}
for key in _V2_IMAGE_CREATE_PROPERTIES:
try:
value = getattr(image, key, None)
if value and value != 'None':
kwargs['body'][key] = value
except KeyError:
pass
_properties = getattr(image, 'extra_properties') or None
if _properties:
extra_keys = _properties.keys()
for _key in extra_keys:
kwargs['body'][_key] = _properties[_key]
if _tags:
kwargs['body']['tags'] = _tags
return kwargs
def get_existing_image_locations(image):
return {'locations': image.locations}
class ImageRepoProxy(glance.domain.proxy.Repo):
def __init__(self, image_repo, context, sync_api):
self.image_repo = image_repo
self.context = context
self.sync_client = sync_api.get_sync_client(context)
proxy_kwargs = {'context': context, 'sync_api': sync_api}
super(ImageRepoProxy, self).__init__(image_repo,
item_proxy_class=ImageProxy,
item_proxy_kwargs=proxy_kwargs)
def _sync_saving_metadata(self, pre_image, image):
kwargs = {}
remove_keys = []
changes = {}
"""
image base properties
"""
for key in _V2_IMAGE_UPDATE_PROPERTIES:
pre_value = getattr(pre_image, key, None)
my_value = getattr(image, key, None)
if not my_value and not pre_value or my_value == pre_value:
continue
if not my_value and pre_value:
remove_keys.append(key)
else:
changes[key] = my_value
"""
image extra_properties
"""
pre_props = pre_image.extra_properties or {}
_properties = image.extra_properties or {}
addset = set(_properties.keys()).difference(set(pre_props.keys()))
removeset = set(pre_props.keys()).difference(set(_properties.keys()))
mayrepset = set(pre_props.keys()).intersection(set(_properties.keys()))
for key in addset:
changes[key] = _properties[key]
for key in removeset:
remove_keys.append(key)
for key in mayrepset:
if _properties[key] == pre_props[key]:
continue
changes[key] = _properties[key]
"""
image tags
"""
tag_dict = {}
pre_tags = pre_image.tags
new_tags = image.tags
added_tags = set(new_tags) - set(pre_tags)
removed_tags = set(pre_tags) - set(new_tags)
if added_tags:
tag_dict['add'] = added_tags
if removed_tags:
tag_dict['delete'] = removed_tags
if tag_dict:
kwargs['tags'] = tag_dict
kwargs['changes'] = changes
kwargs['removes'] = remove_keys
if not changes and not remove_keys and not tag_dict:
return
LOG.debug(_('In image %s, some properties changed, sync...')
% (image.image_id))
self.sync_client.update_image_matedata(image.image_id, **kwargs)
def _try_sync_locations(self, pre_image, image):
image_id = image.image_id
"""
image locations
"""
locations_dict = {}
pre_locs = pre_image.locations
_locs = image.locations
"""
If all locations of the cascading image are removed, the image status
becomes 'queued', so the cascaded images should be 'queued' too; we
replace all locations with '[]'.
"""
if pre_locs and not _locs:
LOG.debug(_('The image %s all locations removed, sync...')
% (image_id))
self.sync_client.sync_locations(image_id,
action='CLEAR',
locs=pre_locs)
return
added_locs = []
removed_locs = []
for _loc in pre_locs:
if _loc in _locs:
continue
removed_locs.append(_loc)
for _loc in _locs:
if _loc in pre_locs:
continue
added_locs.append(_loc)
if added_locs:
if _from_snapshot_request(pre_image, image):
add_kwargs = get_adding_image_properties(image)
else:
add_kwargs = {}
LOG.debug(_('The image %s add locations, sync...') % (image_id))
self.sync_client.sync_locations(image_id,
action='INSERT',
locs=added_locs,
**add_kwargs)
elif removed_locs:
LOG.debug(_('The image %s remove some locations, sync...')
% (image_id))
self.sync_client.sync_locations(image_id,
action='DELETE',
locs=removed_locs)
def save(self, image):
pre_image = self.get(image.image_id)
result = super(ImageRepoProxy, self).save(image)
image_id = image.image_id
if _check_trigger_sync(pre_image, image):
add_kwargs = get_adding_image_properties(image)
self.sync_client.sync_data(image_id, **add_kwargs)
LOG.debug(_('Sync data when image status changes ACTIVE, the '
'image id is %s.' % (image_id)))
else:
"""
In case of add/remove/replace locations property.
"""
self._try_sync_locations(pre_image, image)
"""
In case of sync the glance's properties
"""
if image.status == 'active':
self._sync_saving_metadata(pre_image, image)
return result
def remove(self, image):
result = super(ImageRepoProxy, self).remove(image)
LOG.debug(_('Image %s removed, sync...') % (image.image_id))
delete_kwargs = get_existing_image_locations(image)
self.sync_client.remove_image(image.image_id, **delete_kwargs)
return result
class ImageFactoryProxy(glance.domain.proxy.ImageFactory):
def __init__(self, factory, context, sync_api):
self.context = context
self.sync_api = sync_api
proxy_kwargs = {'context': context, 'sync_api': sync_api}
super(ImageFactoryProxy, self).__init__(factory,
proxy_class=ImageProxy,
proxy_kwargs=proxy_kwargs)
def new_image(self, **kwargs):
return super(ImageFactoryProxy, self).new_image(**kwargs)
class ImageProxy(glance.domain.proxy.Image):
def __init__(self, image, context, sync_api=None):
self.image = image
self.sync_api = sync_api
self.context = context
super(ImageProxy, self).__init__(image)


@ -1,22 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import paste.urlmap
def root_app_factory(loader, global_conf, **local_conf):
return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)


@ -1,59 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from glance.common import wsgi
from glance.sync.api.v1 import images
def init(mapper):
images_resource = images.create_resource()
mapper.connect("/cascaded-eps",
controller=images_resource,
action="endpoints",
conditions={'method': ['POST']})
mapper.connect("/images/{id}",
controller=images_resource,
action="update",
conditions={'method': ['PATCH']})
mapper.connect("/images/{id}",
controller=images_resource,
action="remove",
conditions={'method': ['DELETE']})
mapper.connect("/images/{id}",
controller=images_resource,
action="upload",
conditions={'method': ['PUT']})
mapper.connect("/images/{id}/location",
controller=images_resource,
action="sync_loc",
conditions={'method': ['PUT']})
class API(wsgi.Router):
"""WSGI entry point for all Registry requests."""
def __init__(self, mapper):
mapper = mapper or wsgi.APIMapper()
init(mapper)
super(API, self).__init__(mapper)


@ -1,95 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from oslo.config import cfg
from glance.common import exception
from glance.common import wsgi
import glance.openstack.common.log as logging
from glance.sync.base import SyncManagerV2 as sync_manager
from glance.sync import utils as utils
LOG = logging.getLogger(__name__)
class Controller(object):
def __init__(self):
self.sync_manager = sync_manager()
self.sync_manager.start()
def test(self, req):
return {'body': 'for test'}
def update(self, req, id, body):
LOG.debug(_('sync client start run UPDATE metadata operation for'
'image_id: %s' % (id)))
self.sync_manager.sync_image_metadata(id, req.context.auth_tok, 'SAVE',
**body)
return dict({'body': id})
def remove(self, req, id, body):
LOG.debug(_('sync client start run DELETE operation for image_id: %s'
% (id)))
self.sync_manager.sync_image_metadata(id, req.context.auth_tok,
'DELETE', **body)
return dict({'body': id})
def upload(self, req, id, body):
LOG.debug(_('sync client start run UPLOAD operation for image_id: %s'
% (id)))
self.sync_manager.sync_image_data(id, req.context.auth_tok, **body)
return dict({'body': id})
def sync_loc(self, req, id, body):
action = body['action']
locs = body['locations']
LOG.debug(_('sync client start run SYNC-LOC operation for image_id: %s'
% (id)))
if action == 'INSERT':
self.sync_manager.adding_locations(id, req.context.auth_tok, locs,
**body)
elif action == 'DELETE':
self.sync_manager.removing_locations(id,
req.context.auth_tok,
locs)
elif action == 'CLEAR':
self.sync_manager.clear_all_locations(id,
req.context.auth_tok,
locs)
return dict({'body': id})
def endpoints(self, req, body):
regions = req.params.get('regions', [])
if not regions:
regions = body.pop('regions', [])
if not isinstance(regions, list):
regions = [regions]
LOG.debug(_('get cascaded endpoints of user/tenant: %s'
% (req.context.user or req.context.tenant or 'NONE')))
return dict(eps=utils.get_endpoints(req.context.auth_tok,
req.context.tenant,
region_names=regions) or [])
def create_resource():
"""Images resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = wsgi.JSONResponseSerializer()
return wsgi.Resource(Controller(), deserializer, serializer)


@ -1,738 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import copy
import httplib
import Queue
import threading
import time
import eventlet
from oslo.config import cfg
import six.moves.urllib.parse as urlparse
from glance.common import exception
from glance.openstack.common import jsonutils
from glance.openstack.common import timeutils
import glance.openstack.common.log as logging
from glance.sync import utils as s_utils
from glance.sync.clients import Clients as clients
from glance.sync.store.driver import StoreFactory as s_factory
from glance.sync.store.location import LocationFactory as l_factory
import glance.sync.store.glance_store as glance_store
from glance.sync.task import TaskObject
from glance.sync.task import PeriodicTask
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('sync_strategy', 'glance.common.config', group='sync')
CONF.import_opt('task_retry_times', 'glance.common.config', group='sync')
CONF.import_opt('snapshot_timeout', 'glance.common.config', group='sync')
CONF.import_opt('snapshot_sleep_interval', 'glance.common.config',
group='sync')
_IMAGE_LOCS_MAP = {}
def get_copy_location_url(image):
"""
choose a best location of an image for sync.
"""
global _IMAGE_LOCS_MAP
image_id = image.id
locations = image.locations
if not locations:
return ''
#First time store in the cache
if image_id not in _IMAGE_LOCS_MAP.keys():
_IMAGE_LOCS_MAP[image_id] = {
'locations':
[{'url': locations[0]['url'],
'count': 1,
'is_using':1
}]
}
return locations[0]['url']
else:
recorded_locs = _IMAGE_LOCS_MAP[image_id]['locations']
record_urls = [loc['url'] for loc in recorded_locs]
for location in locations:
#the new, not-used location, cache and just return it.
if location['url'] not in record_urls:
recorded_locs.append({
'url': location['url'],
'count':1,
'is_using':1
})
return location['url']
#find ever used and at present not used.
not_used_locs = [loc for loc in recorded_locs
if not loc['is_using']]
if not_used_locs:
_loc = not_used_locs[0]
_loc['is_using'] = 1
_loc['count'] += 1
return _loc['url']
#the last case, just choose one that has the least using count.
_my_loc = sorted(recorded_locs, key=lambda my_loc: my_loc['count'])[0]
_my_loc['count'] += 1
return _my_loc['url']
def remove_invalid_location(id, url):
"""
when sync fail with a location, remove it from the cache.
:param id: the image_id
:param url: the location's url
:return:
"""
global _IMAGE_LOCS_MAP
image_map = _IMAGE_LOCS_MAP[id]
if not image_map:
return
locs = image_map['locations'] or []
if not locs:
return
del_locs = [loc for loc in locs if loc['url'] == url]
if not del_locs:
return
locs.remove(del_locs[0])
def return_sync_location(id, url):
"""
when sync finish, modify the using count and state.
"""
global _IMAGE_LOCS_MAP
image_map = _IMAGE_LOCS_MAP[id]
if not image_map:
return
locs = image_map['locations'] or []
if not locs:
return
selectd_locs = [loc for loc in locs if loc['url'] == url]
if not selectd_locs:
return
selectd_locs[0]['is_using'] = 0
selectd_locs[0]['count'] -= 1
def choose_a_location(sync_f):
"""
the wrapper for the method which need a location for sync.
:param sync_f:
:return:
"""
def wrapper(*args, **kwargs):
_id = args[1]
_auth_token = args[2]
_image = create_self_glance_client(_auth_token).images.get(_id)
_url = get_copy_location_url(_image)
kwargs['src_image_url'] = _url
_sync_ok = False
while not _sync_ok:
try:
sync_f(*args, **kwargs)
_sync_ok = True
except Exception:
remove_invalid_location(_id, _url)
_url = get_copy_location_url(_image)
if not _url:
break
kwargs['src_image_url'] = _url
return wrapper
def get_image_service():
    return ImageService
def create_glance_client(auth_token, url):
    return clients(auth_token).glance(url=url)
def create_self_glance_client(auth_token):
    return create_glance_client(auth_token,
                                s_utils.get_cascading_endpoint_url())
def create_restful_client(auth_token, url):
    pieces = urlparse.urlparse(url)
    return _create_restful_client(auth_token, pieces.netloc)
def create_self_restful_client(auth_token):
    return create_restful_client(auth_token,
                                 s_utils.get_cascading_endpoint_url())
def _create_restful_client(auth_token, url):
    server, port = url.split(':')
    try:
        port = int(port)
    except Exception:
        port = 9292
    conn = httplib.HTTPConnection(server.encode(), port)
    image_service = get_image_service()
    glance_client = image_service(conn, auth_token)
    return glance_client
def get_mappings_from_image(auth_token, image_id):
client = create_self_glance_client(auth_token)
image = client.images.get(image_id)
locations = image.locations
if not locations:
return {}
return get_mappings_from_locations(locations)
def get_mappings_from_locations(locations):
mappings = {}
for loc in locations:
if s_utils.is_glance_location(loc['url']):
id = loc['metadata'].get('image_id')
if not id:
continue
ep_url = s_utils.create_ep_by_loc(loc)
mappings[ep_url] = id
# endpoints.append(utils.create_ep_by_loc(loc))
return mappings
class AuthenticationException(Exception):
pass
class ImageAlreadyPresentException(Exception):
pass
class ServerErrorException(Exception):
pass
class UploadException(Exception):
pass
class ImageService(object):
def __init__(self, conn, auth_token):
"""Initialize the ImageService.
conn: a httplib.HTTPConnection to the glance server
auth_token: authentication token to pass in the x-auth-token header
"""
self.auth_token = auth_token
self.conn = conn
def _http_request(self, method, url, headers, body,
ignore_result_body=False):
"""Perform an HTTP request against the server.
method: the HTTP method to use
url: the URL to request (not including server portion)
headers: headers for the request
body: body to send with the request
ignore_result_body: the body of the result will be ignored
Returns: a httplib response object
"""
if self.auth_token:
headers.setdefault('x-auth-token', self.auth_token)
LOG.debug(_('Request: %(method)s http://%(server)s:%(port)s'
'%(url)s with headers %(headers)s')
% {'method': method,
'server': self.conn.host,
'port': self.conn.port,
'url': url,
'headers': repr(headers)})
self.conn.request(method, url, body, headers)
response = self.conn.getresponse()
headers = self._header_list_to_dict(response.getheaders())
code = response.status
code_description = httplib.responses[code]
LOG.debug(_('Response: %(code)s %(status)s %(headers)s')
% {'code': code,
'status': code_description,
'headers': repr(headers)})
if code in [400, 500]:
raise ServerErrorException(response.read())
if code in [401, 403]:
raise AuthenticationException(response.read())
if code == 409:
raise ImageAlreadyPresentException(response.read())
if ignore_result_body:
# NOTE: because we are pipelining requests through a single HTTP
# connection, httplib requires that we read the response body
# before we can make another request. If the caller knows they
# don't care about the body, they can ask us to do that for them.
response.read()
return response
@staticmethod
def _header_list_to_dict(headers):
"""Expand a list of headers into a dictionary.
headers: a list of [(key, value), (key, value), (key, value)]
Returns: a dictionary representation of the list
"""
d = {}
for (header, value) in headers:
if header.startswith('x-image-meta-property-'):
prop = header.replace('x-image-meta-property-', '')
d.setdefault('properties', {})
d['properties'][prop] = value
else:
d[header.replace('x-image-meta-', '')] = value
return d
@staticmethod
def _dict_to_headers(d):
"""Convert a dictionary into one suitable for a HTTP request.
d: a dictionary
Returns: the same dictionary, with x-image-meta added to every key
"""
h = {}
for key in d:
if key == 'properties':
for subkey in d[key]:
if d[key][subkey] is None:
h['x-image-meta-property-%s' % subkey] = ''
else:
h['x-image-meta-property-%s' % subkey] = d[key][subkey]
else:
h['x-image-meta-%s' % key] = d[key]
return h
def add_location(self, image_uuid, path_val, metadata=None):
"""
add an actual location
"""
LOG.debug(_('call restful api to add location: url is %s' % path_val))
metadata = metadata or {}
url = '/v2/images/%s' % image_uuid
hdrs = {'Content-Type': 'application/openstack-images-v2.1-json-patch'}
body = []
value = {'url': path_val, 'metadata': metadata}
body.append({'op': 'add', 'path': '/locations/-', 'value': value})
return self._http_request('PATCH', url, hdrs, jsonutils.dumps(body))
def clear_locations(self, image_uuid):
"""
clear all the location infos, make the image status be 'queued'.
"""
LOG.debug(_('call restful api to clear image location: image id is %s'
% image_uuid))
url = '/v2/images/%s' % image_uuid
hdrs = {'Content-Type': 'application/openstack-images-v2.1-json-patch'}
body = []
body.append({'op': 'replace', 'path': '/locations', 'value': []})
return self._http_request('PATCH', url, hdrs, jsonutils.dumps(body))
class MetadataHelper(object):
def execute(self, auth_token, endpoint, action_name='CREATE',
image_id=None, **kwargs):
glance_client = create_glance_client(auth_token, endpoint)
if action_name.upper() == 'CREATE':
return self._do_create_action(glance_client, **kwargs)
if action_name.upper() == 'SAVE':
return self._do_save_action(glance_client, image_id, **kwargs)
if action_name.upper() == 'DELETE':
return self._do_delete_action(glance_client, image_id, **kwargs)
return None
@staticmethod
def _fetch_params(keys, **kwargs):
return tuple([kwargs.get(key, None) for key in keys])
def _do_create_action(self, glance_client, **kwargs):
body = kwargs['body']
new_image = glance_client.images.create(**body)
return new_image.id
def _do_save_action(self, glance_client, image_id, **kwargs):
keys = ['changes', 'removes', 'tags']
changes, removes, tags = self._fetch_params(keys, **kwargs)
if changes or removes:
glance_client.images.update(image_id,
remove_props=removes,
**changes)
if tags:
if tags.get('add', None):
added = tags.get('add')
for tag in added:
glance_client.image_tags.update(image_id, tag)
elif tags.get('delete', None):
removed = tags.get('delete')
for tag in removed:
glance_client.image_tags.delete(image_id, tag)
return glance_client.images.get(image_id)
def _do_delete_action(self, glance_client, image_id, **kwargs):
return glance_client.images.delete(image_id)
_task_queue = Queue.Queue(maxsize=150)
class SyncManagerV2():
MAX_TASK_RETRY_TIMES = 1
def __init__(self):
global _task_queue
self.mete_helper = MetadataHelper()
self.location_factory = l_factory()
self.store_factory = s_factory()
self.task_queue = _task_queue
self.task_handler = None
self.unhandle_task_list = []
self.periodic_add_id_list = []
self.periodic_add_done = True
self._load_glance_store_cfg()
self.ks_client = clients().keystone()
self.create_new_periodic_task = False
def _load_glance_store_cfg(self):
glance_store.setup_glance_stores()
def sync_image_metadata(self, image_id, auth_token, action, **kwargs):
if not action or CONF.sync.sync_strategy == 'None':
return
kwargs['image_id'] = image_id
if action == 'SAVE':
self.task_queue.put_nowait(TaskObject.get_instance('meta_update',
kwargs))
elif action == 'DELETE':
self.task_queue.put_nowait(TaskObject.get_instance('meta_remove',
kwargs))
@choose_a_location
def sync_image_data(self, image_id, auth_token, eps=None, **kwargs):
if CONF.sync.sync_strategy in ['None', 'nova']:
return
kwargs['image_id'] = image_id
cascading_ep = s_utils.get_cascading_endpoint_url()
kwargs['cascading_ep'] = cascading_ep
copy_url = kwargs.get('src_image_url', None)
if not copy_url:
LOG.warn(_('No copy url found, for image %s sync, Exit.'),
image_id)
return
LOG.info(_('choose the copy url %s for sync image %s'),
copy_url, image_id)
if s_utils.is_glance_location(copy_url):
kwargs['copy_ep'] = s_utils.create_ep_by_loc_url(copy_url)
kwargs['copy_id'] = s_utils.get_id_from_glance_loc_url(copy_url)
else:
kwargs['copy_ep'] = cascading_ep
kwargs['copy_id'] = image_id
self.task_queue.put_nowait(TaskObject.get_instance('sync', kwargs))
def adding_locations(self, image_id, auth_token, locs, **kwargs):
if CONF.sync.sync_strategy == 'None':
return
for loc in locs:
if s_utils.is_glance_location(loc['url']):
if s_utils.is_snapshot_location(loc):
snapshot_ep = s_utils.create_ep_by_loc(loc)
snapshot_id = s_utils.get_id_from_glance_loc(loc)
snapshot_client = create_glance_client(auth_token,
snapshot_ep)
snapshot_image = snapshot_client.images.get(snapshot_id)
_pre_check_time = timeutils.utcnow()
_timout = CONF.sync.snapshot_timeout
while not timeutils.is_older_than(_pre_check_time,
_timout):
if snapshot_image.status == 'active':
break
LOG.debug(_('Check snapshot not active, wait for %i'
'second.'
% CONF.sync.snapshot_sleep_interval))
time.sleep(CONF.sync.snapshot_sleep_interval)
snapshot_image = snapshot_client.images.get(
snapshot_id)
if snapshot_image.status != 'active':
LOG.error(_('Snapshot status to active Timeout'))
return
kwargs['image_id'] = image_id
kwargs['snapshot_ep'] = snapshot_ep
kwargs['snapshot_id'] = snapshot_id
snapshot_task = TaskObject.get_instance('snapshot', kwargs)
self.task_queue.put_nowait(snapshot_task)
else:
LOG.debug(_('patch a normal location %s to image %s'
% (loc['url'], image_id)))
input = {'image_id': image_id, 'location': loc}
self.task_queue.put_nowait(TaskObject.get_instance('patch',
input))
def removing_locations(self, image_id, auth_token, locs):
if CONF.sync.sync_strategy == 'None':
return
locs = filter(lambda loc: s_utils.is_glance_location(loc['url']), locs)
if not locs:
return
input = {'image_id': image_id, 'locations': locs}
remove_locs_task = TaskObject.get_instance('locs_remove', input)
self.task_queue.put_nowait(remove_locs_task)
def clear_all_locations(self, image_id, auth_token, locs):
locs = filter(lambda loc: not s_utils.is_snapshot_location(loc), locs)
self.removing_locations(image_id, auth_token, locs)
def create_new_cascaded_task(self, last_run_time=None):
LOG.debug(_('new_cascaded periodic task has been created.'))
glance_client = create_self_glance_client(self.ks_client.auth_token)
filters = {'status': 'active'}
image_list = glance_client.images.list(filters=filters)
input = {}
run_images = {}
cascading_ep = s_utils.get_cascading_endpoint_url()
input['cascading_ep'] = cascading_ep
input['image_id'] = 'ffffffff-ffff-ffff-ffff-ffffffffffff'
all_ep_urls = s_utils.get_endpoints()
for image in image_list:
glance_urls = [loc['url'] for loc in image.locations
if s_utils.is_glance_location(loc['url'])]
lack_ep_urls = s_utils.calculate_lack_endpoints(all_ep_urls,
glance_urls)
if lack_ep_urls:
image_core_props = s_utils.get_core_properties(image)
run_images[image.id] = {'body': image_core_props,
'locations': lack_ep_urls}
if not run_images:
LOG.debug(_('No images need to sync to new cascaded glances.'))
input['images'] = run_images
return TaskObject.get_instance('periodic_add', input,
last_run_time=last_run_time)
@staticmethod
def _fetch_params(keys, **kwargs):
return tuple([kwargs.get(key, None) for key in keys])
def _get_candidate_path(self, auth_token, from_ep, image_id,
scheme='file'):
g_client = create_glance_client(auth_token, from_ep)
image = g_client.images.get(image_id)
locs = image.locations or []
for loc in locs:
if s_utils.is_glance_location(loc['url']):
continue
if loc['url'].startswith(scheme):
if scheme == 'file':
return loc['url'][len('file://'):]
return loc['url']
return None
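The scan above can be reproduced standalone; a minimal sketch, assuming the glance-location test reduces to an `http` prefix check (the real code uses `s_utils.is_glance_location`, and `candidate_path` is an illustrative name):

```python
def candidate_path(locations, scheme='file'):
    """Return the first non-glance location path, scheme prefix stripped."""
    for loc in locations:
        url = loc['url']
        # stand-in for s_utils.is_glance_location(): skip glance-served URLs
        if url.startswith('http'):
            continue
        if url.startswith(scheme):
            if scheme == 'file':
                # drop the 'file://' prefix, leaving a local filesystem path
                return url[len('file://'):]
            return url
    return None

path = candidate_path([{'url': 'http://10.0.0.5:9292/v2/images/abc'},
                       {'url': 'file:///var/lib/glance/images/abc'}])
```

The returned path is later offered to the copy driver as a fallback source.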
def _do_image_data_copy(self, s_ep, d_ep, from_image_id, to_image_id,
candidate_path=None):
from_scheme, to_scheme = glance_store.choose_best_store_schemes(s_ep,
d_ep)
store_driver = self.store_factory.get_instance(from_scheme['name'],
to_scheme['name'])
from_params = from_scheme['parameters']
from_params['image_id'] = from_image_id
to_params = to_scheme['parameters']
to_params['image_id'] = to_image_id
from_location = self.location_factory.get_instance(from_scheme['name'],
**from_params)
to_location = self.location_factory.get_instance(to_scheme['name'],
**to_params)
return store_driver.copy_to(from_location, to_location,
candidate_path=candidate_path)
def _patch_cascaded_location(self, auth_token, image_id,
cascaded_ep, cascaded_id, action=None):
self_restful_client = create_self_restful_client(auth_token)
path = s_utils.generate_glance_location(cascaded_ep, cascaded_id)
# append the auth_token so this url can be visited; otherwise a 404 error occurs
path += '?auth_token=1'
metadata = {'image_id': cascaded_id}
if action:
metadata['action'] = action
self_restful_client.add_location(image_id, path, metadata)
def meta_update(self, auth_token, cascaded_ep, image_id, **kwargs):
return self.mete_helper.execute(auth_token, cascaded_ep, 'SAVE',
image_id, **kwargs)
def meta_delete(self, auth_token, cascaded_ep, image_id):
return self.mete_helper.execute(auth_token, cascaded_ep, 'DELETE',
image_id)
def sync_image(self, auth_token, copy_ep=None, to_ep=None,
copy_image_id=None, cascading_image_id=None, **kwargs):
# First, create an image object with the cascading image's properties.
LOG.debug(_('create an image metadata in ep: %s'), to_ep)
cascaded_id = self.mete_helper.execute(auth_token, to_ep,
**kwargs)
try:
c_path = self._get_candidate_path(auth_token, copy_ep,
copy_image_id)
LOG.debug(_('Chose candidate path: %s from ep %s'), c_path, copy_ep)
# execute copy operation to copy the image data.
copy_image_loc = self._do_image_data_copy(copy_ep,
to_ep,
copy_image_id,
cascaded_id,
candidate_path=c_path)
LOG.debug(_('Sync image data, synced loc is %s'), copy_image_loc)
# patch the copied image_data to the image
glance_client = create_restful_client(auth_token, to_ep)
glance_client.add_location(cascaded_id, copy_image_loc)
# patch the glance location to cascading glance
msg = _("patch glance location to cascading image, with cascaded "
"endpoint : %s, cascaded id: %s, cascading image id: %s." %
(to_ep, cascaded_id, cascading_image_id))
LOG.debug(msg)
self._patch_cascaded_location(auth_token,
cascading_image_id,
to_ep,
cascaded_id,
action='upload')
return cascaded_id
except exception.SyncStoreCopyError as e:
LOG.error(_("Exception occurs when syncing store copy."))
raise exception.SyncServiceOperationError(reason=e.msg)
def do_snapshot(self, auth_token, snapshot_ep, cascaded_ep,
snapshot_image_id, cascading_image_id, **kwargs):
return self.sync_image(auth_token, copy_ep=snapshot_ep,
to_ep=cascaded_ep, copy_image_id=snapshot_image_id,
cascading_image_id=cascading_image_id, **kwargs)
def patch_location(self, image_id, cascaded_id, auth_token, cascaded_ep,
location):
g_client = create_glance_client(auth_token, cascaded_ep)
cascaded_image = g_client.images.get(cascaded_id)
glance_client = create_restful_client(auth_token, cascaded_ep)
try:
glance_client.add_location(cascaded_id, location['url'])
if cascaded_image.status == 'queued':
self._patch_cascaded_location(auth_token,
image_id,
cascaded_ep,
cascaded_id,
action='patch')
except Exception:
LOG.exception(_('Failed to patch location to image %s.') % image_id)
def remove_loc(self, cascaded_id, auth_token, cascaded_ep):
glance_client = create_glance_client(auth_token, cascaded_ep)
glance_client.images.delete(cascaded_id)
def start(self):
# launch a new thread to consume tasks from the task queue.
_thread = threading.Thread(target=self.tasks_handle)
_thread.setDaemon(True)
_thread.start()
def tasks_handle(self):
while True:
_task = self.task_queue.get()
if not isinstance(_task, TaskObject):
LOG.error(_('task type not valid.'))
continue
LOG.debug(_('Task starts to run, task id is %s' % _task.id))
_task.start_time = timeutils.strtime()
self.unhandle_task_list.append(copy.deepcopy(_task))
eventlet.spawn(_task.execute, self, self.ks_client.auth_token)
def handle_tasks(self, task_result):
t_image_id = task_result.get('image_id')
t_type = task_result.get('type')
t_start_time = task_result.get('start_time')
t_status = task_result.get('status')
handling_tasks = filter(lambda t: t.image_id == t_image_id and
t.start_time == t_start_time,
self.unhandle_task_list)
if not handling_tasks or len(handling_tasks) > 1:
LOG.error(_('The task does not exist or is duplicated, cannot handle it. '
'Info is image: %(id)s, op_type: %(type)s, run time: '
'%(time)s'
% {'id': t_image_id,
'type': t_type,
'time': t_start_time}
))
return
task = handling_tasks[0]
self.unhandle_task_list.remove(task)
if isinstance(task, PeriodicTask):
LOG.debug(_('The periodic task has finished, with op %(type)s '
'runs at time: %(start_time)s, the status is '
'%(status)s.' %
{'type': t_type,
'start_time': t_start_time,
'status': t_status
}))
else:
if t_status == 'terminal':
LOG.debug(_('The task executed successfully for image:'
'%(image_id)s with op %(type)s, which runs '
'at time: %(start_time)s' %
{'image_id': t_image_id,
'type': t_type,
'start_time': t_start_time
}))
elif t_status == 'param_error':
LOG.error(_('The task failed due to a params error. Image:'
'%(image_id)s with op %(type)s, which runs '
'at time: %(start_time)s' %
{'image_id': t_image_id,
'type': t_type,
'start_time': t_start_time
}))
elif t_status == 'error':
LOG.error(_('The task failed to execute. Detail info is: '
'%(image_id)s with op %(op_type)s run_time:'
'%(start_time)s' %
{'image_id': t_image_id,
'op_type': t_type,
'start_time': t_start_time
}))
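The lookup in handle_tasks above can be sketched standalone: a finished task is matched in the unhandled list by its (image_id, start_time) pair, and anything but a unique match is rejected (`PendingTask` and `find_unique_task` are illustrative stubs):

```python
class PendingTask:
    def __init__(self, image_id, start_time):
        self.image_id = image_id
        self.start_time = start_time

def find_unique_task(unhandled, image_id, start_time):
    """Return the single matching task, or None if missing or duplicated."""
    matches = [t for t in unhandled
               if t.image_id == image_id and t.start_time == start_time]
    if len(matches) != 1:
        return None
    return matches[0]

pending = [PendingTask('img-1', 't0'), PendingTask('img-2', 't1')]
task = find_unique_task(pending, 'img-1', 't0')
```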


@@ -1,46 +0,0 @@
from oslo.config import cfg
sync_client_opts = [
cfg.StrOpt('sync_client_protocol', default='http',
help=_('The protocol to use for communication with the '
'sync server. Either http or https.')),
cfg.StrOpt('sync_client_key_file',
help=_('The path to the key file to use in SSL connections '
'to the sync server.')),
cfg.StrOpt('sync_client_cert_file',
help=_('The path to the cert file to use in SSL connections '
'to the sync server.')),
cfg.StrOpt('sync_client_ca_file',
help=_('The path to the certifying authority cert file to '
'use in SSL connections to the sync server.')),
cfg.BoolOpt('sync_client_insecure', default=False,
help=_('When using SSL in connections to the sync server, '
'do not require validation via a certifying '
'authority.')),
cfg.IntOpt('sync_client_timeout', default=600,
help=_('The period of time, in seconds, that the API server '
'will wait for a sync request to complete. A '
'value of 0 implies no timeout.')),
]
sync_client_ctx_opts = [
cfg.BoolOpt('sync_use_user_token', default=True,
help=_('Whether to pass through the user token when '
'making requests to the sync.')),
cfg.StrOpt('sync_admin_user', secret=True,
help=_('The administrators user name.')),
cfg.StrOpt('sync_admin_password', secret=True,
help=_('The administrators password.')),
cfg.StrOpt('sync_admin_tenant_name', secret=True,
help=_('The tenant name of the administrative user.')),
cfg.StrOpt('sync_auth_url',
help=_('The URL to the keystone service.')),
cfg.StrOpt('sync_auth_strategy', default='noauth',
help=_('The strategy to use for authentication.')),
cfg.StrOpt('sync_auth_region',
help=_('The region for the authentication service.')),
]
CONF = cfg.CONF
CONF.register_opts(sync_client_opts)
CONF.register_opts(sync_client_ctx_opts)


@@ -1,124 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import os
from oslo.config import cfg
from glance.common import exception
from glance.openstack.common import jsonutils
import glance.openstack.common.log as logging
from glance.sync.client.v1 import client
CONF = cfg.CONF
CONF.import_opt('sync_server_host', 'glance.common.config')
CONF.import_opt('sync_server_port', 'glance.common.config')
sync_client_ctx_opts = [
cfg.BoolOpt('sync_send_identity_headers', default=False,
help=_("Whether to pass through headers containing user "
"and tenant information when making requests to "
"the sync. This allows the sync to use the "
"context middleware without the keystoneclients' "
"auth_token middleware, removing calls to the keystone "
"auth service. It is recommended that when using this "
"option, secure communication between glance api and "
"glance sync is ensured by means other than "
"auth_token middleware.")),
]
CONF.register_opts(sync_client_ctx_opts)
_sync_client = 'glance.sync.client'
CONF.import_opt('sync_client_protocol', _sync_client)
CONF.import_opt('sync_client_key_file', _sync_client)
CONF.import_opt('sync_client_cert_file', _sync_client)
CONF.import_opt('sync_client_ca_file', _sync_client)
CONF.import_opt('sync_client_insecure', _sync_client)
CONF.import_opt('sync_client_timeout', _sync_client)
CONF.import_opt('sync_use_user_token', _sync_client)
CONF.import_opt('sync_admin_user', _sync_client)
CONF.import_opt('sync_admin_password', _sync_client)
CONF.import_opt('sync_admin_tenant_name', _sync_client)
CONF.import_opt('sync_auth_url', _sync_client)
CONF.import_opt('sync_auth_strategy', _sync_client)
CONF.import_opt('sync_auth_region', _sync_client)
CONF.import_opt('metadata_encryption_key', 'glance.common.config')
_CLIENT_CREDS = None
_CLIENT_HOST = None
_CLIENT_PORT = None
_CLIENT_KWARGS = {}
def get_sync_client(cxt):
global _CLIENT_CREDS, _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
kwargs = _CLIENT_KWARGS.copy()
if CONF.sync_use_user_token:
kwargs['auth_tok'] = cxt.auth_tok
if _CLIENT_CREDS:
kwargs['creds'] = _CLIENT_CREDS
if CONF.sync_send_identity_headers:
identity_headers = {
'X-User-Id': cxt.user,
'X-Tenant-Id': cxt.tenant,
'X-Roles': ','.join(cxt.roles),
'X-Identity-Status': 'Confirmed',
'X-Service-Catalog': jsonutils.dumps(cxt.service_catalog),
}
kwargs['identity_headers'] = identity_headers
return client.SyncClient(_CLIENT_HOST, _CLIENT_PORT, **kwargs)
def configure_sync_client():
global _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
host, port = CONF.sync_server_host, CONF.sync_server_port
_CLIENT_HOST = host
_CLIENT_PORT = port
_METADATA_ENCRYPTION_KEY = CONF.metadata_encryption_key
_CLIENT_KWARGS = {
'use_ssl': CONF.sync_client_protocol.lower() == 'https',
'key_file': CONF.sync_client_key_file,
'cert_file': CONF.sync_client_cert_file,
'ca_file': CONF.sync_client_ca_file,
'insecure': CONF.sync_client_insecure,
'timeout': CONF.sync_client_timeout,
}
if not CONF.sync_use_user_token:
configure_sync_admin_creds()
def configure_sync_admin_creds():
global _CLIENT_CREDS
if CONF.sync_auth_url or os.getenv('OS_AUTH_URL'):
strategy = 'keystone'
else:
strategy = CONF.sync_auth_strategy
_CLIENT_CREDS = {
'user': CONF.sync_admin_user,
'password': CONF.sync_admin_password,
'username': CONF.sync_admin_user,
'tenant': CONF.sync_admin_tenant_name,
'auth_url': CONF.sync_auth_url,
'strategy': strategy,
'region': CONF.sync_auth_region,
}
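A standalone sketch of the strategy selection in configure_sync_admin_creds(): keystone is chosen whenever an auth URL is configured, either as an option or via the `OS_AUTH_URL` environment variable; otherwise the configured fallback applies (`choose_auth_strategy` is an illustrative name, not glance API):

```python
import os

def choose_auth_strategy(sync_auth_url, fallback='noauth', environ=None):
    """Mirror the keystone-vs-fallback decision made for admin creds."""
    environ = environ if environ is not None else os.environ
    if sync_auth_url or environ.get('OS_AUTH_URL'):
        return 'keystone'
    return fallback

strategy = choose_auth_strategy('http://keystone:35357/v2.0', environ={})
```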


@@ -1,106 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from glance.common.client import BaseClient
from glance.openstack.common import jsonutils
import glance.openstack.common.log as logging
LOG = logging.getLogger(__name__)
class SyncClient(BaseClient):
DEFAULT_PORT = 9595
def __init__(self, host=None, port=DEFAULT_PORT, identity_headers=None,
**kwargs):
self.identity_headers = identity_headers
BaseClient.__init__(self, host, port, configure_via_auth=False,
**kwargs)
def do_request(self, method, action, **kwargs):
try:
kwargs['headers'] = kwargs.get('headers', {})
res = super(SyncClient, self).do_request(method, action, **kwargs)
status = res.status
request_id = res.getheader('x-openstack-request-id')
msg = (_("Sync request %(method)s %(action)s HTTP %(status)s"
" request id %(request_id)s") %
{'method': method, 'action': action,
'status': status, 'request_id': request_id})
LOG.debug(msg)
except Exception as exc:
exc_name = exc.__class__.__name__
LOG.info(_("Sync client request %(method)s %(action)s "
"raised %(exc_name)s"),
{'method': method, 'action': action,
'exc_name': exc_name})
raise
return res
def _add_common_params(self, id, kwargs):
pass
def update_image_matedata(self, image_id, **kwargs):
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps(kwargs)
res = self.do_request("PATCH", "/v1/images/%s" % (image_id), body=body,
headers=headers)
return res
def remove_image(self, image_id, **kwargs):
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps(kwargs)
res = self.do_request("DELETE", "/v1/images/%s" %
(image_id), body=body, headers=headers)
return res
def sync_data(self, image_id, **kwargs):
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps(kwargs)
res = self.do_request("PUT", "/v1/images/%s" % (image_id), body=body,
headers=headers)
return res
def sync_locations(self, image_id, action=None, locs=None, **kwargs):
headers = {
'Content-Type': 'application/json',
}
kwargs['action'] = action
kwargs['locations'] = locs
body = jsonutils.dumps(kwargs)
res = self.do_request("PUT", "/v1/images/%s/location" % (image_id),
body=body, headers=headers)
return res
def get_cascaded_endpoints(self, regions=[]):
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps({'regions': regions})
res = self.do_request('POST', '/v1/cascaded-eps', body=body,
headers=headers)
return jsonutils.loads(res.read())['eps']
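The request bodies above all follow the same shape; a minimal sketch of the payload sync_locations() PUTs to /v1/images/&lt;id&gt;/location, with the stdlib `json` standing in for glance's `jsonutils` (`build_locations_body` is an illustrative name):

```python
import json

def build_locations_body(action, locs, **kwargs):
    """Fold the action and location list into the JSON request body."""
    kwargs['action'] = action
    kwargs['locations'] = locs
    return json.dumps(kwargs)

body = build_locations_body('INSERT',
                            [{'url': 'file:///var/lib/glance/images/abc'}])
```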


@@ -1,89 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from oslo.config import cfg
from keystoneclient.v2_0 import client as ksclient
import glance.openstack.common.log as logging
from glanceclient.v2 import client as gclient2
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class Clients(object):
def __init__(self, auth_token=None, tenant_id=None):
self._keystone = None
self._glance = None
self._cxt_token = auth_token
self._tenant_id = tenant_id
self._ks_conf = cfg.CONF.keystone_authtoken
@property
def auth_token(self):
return self.keystone().auth_token
@property
def ks_url(self):
protocol = self._ks_conf.auth_protocol or 'http'
auth_host = self._ks_conf.auth_host or '127.0.0.1'
auth_port = self._ks_conf.auth_port or '35357'
return protocol + '://' + auth_host + ':' + str(auth_port) + '/v2.0/'
def url_for(self, **kwargs):
return self.keystone().service_catalog.url_for(**kwargs)
def get_urls(self, **kwargs):
return self.keystone().service_catalog.get_urls(**kwargs)
def keystone(self):
if self._keystone:
return self._keystone
if self._cxt_token and self._tenant_id:
creds = {'token': self._cxt_token,
'auth_url': self.ks_url,
'project_id': self._tenant_id
}
else:
creds = {'username': self._ks_conf.admin_user,
'password': self._ks_conf.admin_password,
'auth_url': self.ks_url,
'project_name': self._ks_conf.admin_tenant_name}
try:
self._keystone = ksclient.Client(**creds)
except Exception as e:
LOG.error(_('create keystone client error: reason: %s') % (e))
return None
return self._keystone
def glance(self, auth_token=None, url=None):
gclient = gclient2
if gclient is None:
return None
if self._glance:
return self._glance
args = {
'token': auth_token or self.auth_token,
'endpoint': url or self.url_for(service_type='image')
}
self._glance = gclient.Client(**args)
return self._glance
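The ks_url property above assembles the keystone endpoint from config values with fallback defaults; a standalone sketch (`build_ks_url` is an illustrative name):

```python
def build_ks_url(protocol=None, host=None, port=None):
    """Assemble the keystone v2.0 endpoint, defaulting each missing part."""
    protocol = protocol or 'http'
    host = host or '127.0.0.1'
    port = port or '35357'
    return protocol + '://' + host + ':' + str(port) + '/v2.0/'

url = build_ks_url(host='keystone.local', port=5000)
```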


@@ -1,33 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from concurrent.futures import ThreadPoolExecutor
import glance.openstack.common.log as logging
LOG = logging.getLogger(__name__)
class ThreadPool(object):
def __init__(self):
self.pool = ThreadPoolExecutor(128)
def execute(self, func, *args, **kwargs):
LOG.info(_('execute %s in a thread pool') % (func.__name__))
self.pool.submit(func, *args, **kwargs)
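A usage sketch of the pattern behind this wrapper, using `concurrent.futures.ThreadPoolExecutor` directly; a small worker count stands in for the fixed 128 above, and while the original fires and forgets, this sketch collects the result to show completion:

```python
from concurrent.futures import ThreadPoolExecutor

# submit a callable with its arguments; work runs on a pool thread
pool = ThreadPoolExecutor(4)
future = pool.submit(lambda x: x * 2, 21)
result = future.result()  # block until the submitted call finishes
pool.shutdown()
```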


@@ -1,171 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
"""
A simple filesystem-backed store
"""
import logging
import os
import sys
from oslo.config import cfg
import pxssh
import pexpect
from glance.common import exception
import glance.sync.store.driver
import glance.sync.store.location
from glance.sync.store.location import Location
from glance.sync import utils as s_utils
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('scp_copy_timeout', 'glance.common.config', group='sync')
def _login_ssh(host, passwd):
child_ssh = pexpect.spawn('ssh -p 22 %s' % (host))
child_ssh.logfile = sys.stdout
login_flag = True
while True:
ssh_index = child_ssh.expect(['.yes/no.', '.assword:.',
pexpect.TIMEOUT])
if ssh_index == 0:
child_ssh.sendline('yes')
elif ssh_index == 1:
child_ssh.sendline(passwd)
break
else:
login_flag = False
break
if not login_flag:
return None
return child_ssh
def _get_ssh(hostname, username, password):
s = pxssh.pxssh()
s.login(hostname, username, password, original_prompt='[#$>]')
s.logfile = sys.stdout
return s
class LocationCreator(glance.sync.store.location.LocationCreator):
def __init__(self):
self.scheme = 'file'
def create(self, **kwargs):
image_id = kwargs.get('image_id')
image_file_name = kwargs.get('image_name', None) or image_id
datadir = kwargs.get('datadir')
path = os.path.join(datadir, str(image_file_name))
login_user = kwargs.get('login_user')
login_password = kwargs.get('login_password')
host = kwargs.get('host')
store_specs = {'scheme': self.scheme, 'path': path, 'host': host,
'login_user': login_user,
'login_password': login_password}
return Location(self.scheme, StoreLocation, image_id=image_id,
store_specs=store_specs)
class StoreLocation(glance.sync.store.location.StoreLocation):
def process_specs(self):
self.scheme = self.specs.get('scheme', 'file')
self.path = self.specs.get('path')
self.host = self.specs.get('host')
self.login_user = self.specs.get('login_user')
self.login_password = self.specs.get('login_password')
class Store(glance.sync.store.driver.Store):
def copy_to(self, from_location, to_location, candidate_path=None):
from_store_loc = from_location.store_location
to_store_loc = to_location.store_location
if from_store_loc.host == to_store_loc.host and \
from_store_loc.path == to_store_loc.path:
LOG.info(_('The from_loc is the same as to_loc, no need to copy. The '
'host:path is %s:%s') % (from_store_loc.host,
from_store_loc.path))
return 'file://%s' % to_store_loc.path
from_host = r"""{username}@{host}""".format(
username=from_store_loc.login_user,
host=from_store_loc.host)
to_host = r"""{username}@{host}""".format(
username=to_store_loc.login_user,
host=to_store_loc.host)
to_path = r"""{to_host}:{path}""".format(to_host=to_host,
path=to_store_loc.path)
copy_path = from_store_loc.path
try:
from_ssh = _get_ssh(from_store_loc.host,
from_store_loc.login_user,
from_store_loc.login_password)
except Exception:
raise exception.SyncStoreCopyError(reason="ssh login failed.")
from_ssh.sendline('ls %s' % copy_path)
from_ssh.prompt()
if 'cannot access' in from_ssh.before or \
'No such file' in from_ssh.before:
if candidate_path:
from_ssh.sendline('ls %s' % candidate_path)
from_ssh.prompt()
if 'cannot access' not in from_ssh.before and \
'No such file' not in from_ssh.before:
copy_path = candidate_path
else:
msg = _("the image path to copy from does not exist, file copy "
"failed: path is %s" % (copy_path))
raise exception.SyncStoreCopyError(reason=msg)
from_ssh.sendline('scp -P 22 %s %s' % (copy_path, to_path))
while True:
scp_index = from_ssh.expect(['.yes/no.', '.assword:.',
pexpect.TIMEOUT])
if scp_index == 0:
from_ssh.sendline('yes')
from_ssh.prompt()
elif scp_index == 1:
from_ssh.sendline(to_store_loc.login_password)
from_ssh.prompt(timeout=CONF.sync.scp_copy_timeout)
break
else:
msg = _("scp command execution failed, with copy_path %s and "
"to_path %s" % (copy_path, to_path))
raise exception.SyncStoreCopyError(reason=msg)
break
if from_ssh:
from_ssh.logout()
return 'file://%s' % to_store_loc.path
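The remote copy above boils down to one scp invocation built from the two store locations; a standalone sketch of the string assembly (`build_scp_command` is an illustrative name):

```python
def build_scp_command(copy_path, to_user, to_host, to_path):
    """Assemble the scp invocation sent over the established ssh session."""
    to_ep = '{username}@{host}'.format(username=to_user, host=to_host)
    dest = '{to_host}:{path}'.format(to_host=to_ep, path=to_path)
    return 'scp -P 22 %s %s' % (copy_path, dest)

cmd = build_scp_command('/var/lib/glance/images/abc', 'glance',
                        '10.0.0.2', '/var/lib/glance/images/abc')
```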


@@ -1,63 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
"""Base class for all storage backends"""
from oslo.config import cfg
from stevedore import extension
from glance.common import exception
import glance.openstack.common.log as logging
from glance.openstack.common.gettextutils import _
from glance.openstack.common import importutils
from glance.openstack.common import strutils
LOG = logging.getLogger(__name__)
class StoreFactory(object):
SYNC_STORE_NAMESPACE = "glance.sync.store.driver"
def __init__(self):
self._stores = {}
self._load_store_drivers()
def _load_store_drivers(self):
extension_manager = extension.ExtensionManager(
namespace=self.SYNC_STORE_NAMESPACE,
invoke_on_load=True,
)
for ext in extension_manager:
if ext.name in self._stores:
continue
ext.obj.name = ext.name
self._stores[ext.name] = ext.obj
def get_instance(self, from_scheme='filesystem', to_scheme=None):
_store_driver = self._stores.get(from_scheme)
if to_scheme and to_scheme != from_scheme and _store_driver:
func_name = 'copy_to_%s' % to_scheme
if not getattr(_store_driver, func_name, None):
return None
return _store_driver
class Store(object):
def copy_to(self, source_location, dest_location, candidate_path=None):
pass
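The capability check in StoreFactory.get_instance() can be shown standalone: a cross-scheme copy is only allowed when the source driver exposes a `copy_to_<scheme>` method (`DummyFileDriver` and `get_driver` are illustrative stubs):

```python
class DummyFileDriver:
    def copy_to_http(self, src, dst):
        return 'copied'

def get_driver(stores, from_scheme, to_scheme=None):
    """Return the source driver only if it can copy to the target scheme."""
    driver = stores.get(from_scheme)
    if to_scheme and to_scheme != from_scheme and driver:
        if not getattr(driver, 'copy_to_%s' % to_scheme, None):
            return None
    return driver

stores = {'filesystem': DummyFileDriver()}
ok = get_driver(stores, 'filesystem', 'http')       # driver supports http
missing = get_driver(stores, 'filesystem', 'swift')  # no copy_to_swift
```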


@@ -1,111 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import fnmatch
import operator
import os
from oslo.config import cfg
import yaml
from glance.sync import utils as s_utils
OPTS = [
cfg.StrOpt('glance_store_cfg_file',
default="glance_store.yaml",
help="Configuration file for glance's store location "
"definition."
),
]
PRIOR_STORE_SCHEMES = ['filesystem', 'http', 'swift']
cfg.CONF.register_opts(OPTS)
def choose_best_store_schemes(source_endpoint, dest_endpoint):
global GLANCE_STORES
source_host = s_utils.get_host_from_ep(source_endpoint)
dest_host = s_utils.get_host_from_ep(dest_endpoint)
source_store = GLANCE_STORES.get_glance_store(source_host)
dest_store = GLANCE_STORES.get_glance_store(dest_host)
tmp_dict = {}
for s_scheme in source_store.schemes:
s_scheme_name = s_scheme['name']
for d_scheme in dest_store.schemes:
d_scheme_name = d_scheme['name']
if s_scheme_name == d_scheme_name:
tmp_dict[s_scheme_name] = (s_scheme, d_scheme)
if tmp_dict:
return tmp_dict[sorted(tmp_dict, key=lambda scheme:
PRIOR_STORE_SCHEMES.index(scheme))[0]]
return (source_store.schemes[0], dest_store.schemes[0])
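The negotiation above reduces to: prefer a scheme both stores share, ranked by the priority list, else fall back to each side's first scheme. A standalone sketch (`pick_schemes` is an illustrative name):

```python
PRIORITY = ['filesystem', 'http', 'swift']

def pick_schemes(source_schemes, dest_schemes):
    """Pick the highest-priority shared scheme, or each side's first."""
    shared = [s for s in source_schemes if s in dest_schemes]
    if shared:
        best = sorted(shared, key=PRIORITY.index)[0]
        return (best, best)
    return (source_schemes[0], dest_schemes[0])

best = pick_schemes(['http', 'filesystem'], ['filesystem', 'swift'])
```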
class GlanceStore(object):
def __init__(self, service_ip, name, schemes):
self.service_ip = service_ip
self.name = name
self.schemes = schemes
class ImageObject(object):
def __init__(self, image_id, glance_store):
self.image_id = image_id
self.glance_store = glance_store
class GlanceStoreManager(object):
def __init__(self, cfg):
self.cfg = cfg
self.g_stores = []
cfg_items = cfg['glances']
for item in cfg_items:
self.g_stores.append(GlanceStore(item['service_ip'],
item['name'],
item['schemes']))
def get_glance_store(self, service_ip):
for g_store in self.g_stores:
if service_ip == g_store.service_ip:
return g_store
return None
def generate_Image_obj(self, image_id, endpoint):
g_store = self.get_glance_store(s_utils.get_host_from_ep(endpoint))
return ImageObject(image_id, g_store)
GLANCE_STORES = None
def setup_glance_stores():
global GLANCE_STORES
cfg_file = cfg.CONF.glance_store_cfg_file
if not os.path.exists(cfg_file):
cfg_file = cfg.CONF.find_file(cfg_file)
with open(cfg_file) as fap:
data = fap.read()
locs_cfg = yaml.safe_load(data)
GLANCE_STORES = GlanceStoreManager(locs_cfg)
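A minimal sketch of the lookup GlanceStoreManager provides: stores parsed from glance_store.yaml are indexed by `service_ip` and resolved from an endpoint's host. Here a plain dict stands in for the yaml file, and `urllib.parse` for `s_utils.get_host_from_ep`:

```python
from urllib.parse import urlparse

cfg = {'glances': [
    {'service_ip': '10.0.0.5', 'name': 'cascaded-1',
     'schemes': [{'name': 'filesystem'}]},
]}

# index the configured stores by their service ip
stores_by_ip = {item['service_ip']: item for item in cfg['glances']}

def store_for_endpoint(endpoint):
    """Resolve a store entry from a glance endpoint URL, or None."""
    host = urlparse(endpoint).hostname
    return stores_by_ip.get(host)

store = store_for_endpoint('http://10.0.0.5:9292/')
```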


@@ -1,95 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import logging
import urlparse
from stevedore import extension
LOG = logging.getLogger(__name__)
class LocationCreator(object):
def __init__(self):
self.scheme = None
def creator(self, **kwargs):
pass
class Location(object):
"""
Class describing the location of an image that Glance knows about
"""
def __init__(self, store_name, store_location_class,
uri=None, image_id=None, store_specs=None):
"""
Create a new Location object.
:param store_name: The string identifier/scheme of the storage backend
:param store_location_class: The store location class to use
for this location instance.
:param image_id: The identifier of the image in whatever storage
backend is used.
:param uri: Optional URI to construct location from
:param store_specs: Dictionary of information about the location
of the image that is dependent on the backend
store
"""
self.store_name = store_name
self.image_id = image_id
self.store_specs = store_specs or {}
self.store_location = store_location_class(self.store_specs)
class StoreLocation(object):
"""
Base class that must be implemented by each store
"""
def __init__(self, store_specs):
self.specs = store_specs
if self.specs:
self.process_specs()
class LocationFactory(object):
SYNC_LOCATION_NAMESPACE = "glance.sync.store.location"
def __init__(self):
self._locations = {}
self._load_locations()
def _load_locations(self):
extension_manager = extension.ExtensionManager(
namespace=self.SYNC_LOCATION_NAMESPACE,
invoke_on_load=True,
)
for ext in extension_manager:
if ext.name in self._locations:
continue
ext.obj.name = ext.name
self._locations[ext.name] = ext.obj
def get_instance(self, scheme, **kwargs):
loc_creator = self._locations.get(scheme, None)
return loc_creator.create(**kwargs)
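The StoreLocation contract above, where a subclass unpacks its store_specs dict in process_specs() triggered from the base `__init__`, can be sketched standalone (`FileLocation` is an illustrative stub):

```python
class BaseLocation:
    def __init__(self, specs):
        self.specs = specs
        if self.specs:
            self.process_specs()  # subclass hook runs at construction

class FileLocation(BaseLocation):
    def process_specs(self):
        self.scheme = self.specs.get('scheme', 'file')
        self.path = self.specs.get('path')

loc = FileLocation({'path': '/var/lib/glance/images/abc'})
```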


@@ -1,356 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import threading
import Queue
import uuid
import eventlet
from oslo.config import cfg
import glance.openstack.common.log as logging
from glance.openstack.common import timeutils
from glance.sync import utils as s_utils
LOG = logging.getLogger(__name__)
snapshot_opt = [
cfg.ListOpt('snapshot_region_names',
default=[],
help=_("the regions the snapshot is synced to"),
deprecated_opts=[cfg.DeprecatedOpt('snapshot_region_names',
group='DEFAULT')]),
]
CONF = cfg.CONF
CONF.register_opts(snapshot_opt)
class TaskObject(object):
def __init__(self, type, input, retry_times=0):
self.id = str(uuid.uuid4())
self.type = type
self.input = input
self.image_id = self.input.get('image_id')
self.status = 'new'
self.retry_times = retry_times
self.start_time = None
@classmethod
def get_instance(cls, type, input, **kwargs):
_type_cls_dict = {'meta_update': MetaUpdateTask,
'meta_remove': MetaDeleteTask,
'sync': ImageActiveTask,
'snapshot': PatchSnapshotLocationTask,
'patch': PatchLocationTask,
'locs_remove': RemoveLocationsTask,
'periodic_add': ChkNewCascadedsPeriodicTask}
if _type_cls_dict.get(type):
return _type_cls_dict[type](input, **kwargs)
return None
def _handle_result(self, sync_manager):
return sync_manager.handle_tasks({'image_id': self.image_id,
'type': self.type,
'start_time': self.start_time,
'status': self.status
})
def execute(self, sync_manager, auth_token):
if not self.checkInput():
self.status = 'param_error'
LOG.error(_('the input content is not valid: %s.') % self.input)
return self._handle_result(sync_manager)
try:
self.status = 'running'
green_threads = self.create_green_threads(sync_manager, auth_token)
for gt in green_threads:
gt.wait()
except Exception as e:
msg = _("Unable to execute task of image %(image_id)s: %(e)s") % \
{'image_id': self.image_id, 'e': unicode(e)}
LOG.exception(msg)
self.status = 'error'
else:
self.status = 'terminal'
return self._handle_result(sync_manager)
def checkInput(self):
if not self.input.pop('image_id', None):
LOG.warn(_('No cascading image_id specified.'))
return False
return self.do_checkInput()
class MetaUpdateTask(TaskObject):
def __init__(self, input):
super(MetaUpdateTask, self).__init__('meta_update', input)
def do_checkInput(self):
params = self.input
changes = params.get('changes')
removes = params.get('removes')
tags = params.get('tags')
if not changes and not removes and not tags:
LOG.warn(_('No changes, removes or tags in the request.'))
return True
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_image(auth_token,
self.image_id)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.meta_update,
auth_token,
cascaded_ep,
image_id=cascaded_id,
**self.input))
return green_threads
class MetaDeleteTask(TaskObject):
def __init__(self, input):
super(MetaDeleteTask, self).__init__('meta_remove', input)
def do_checkInput(self):
self.locations = self.input.get('locations')
return self.locations is not None
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_locations(self.locations)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.meta_delete,
auth_token,
cascaded_ep,
image_id=cascaded_id))
return green_threads
class ImageActiveTask(TaskObject):
"""
sync data task.
"""
def __init__(self, input):
super(ImageActiveTask, self).__init__('sync', input)
def do_checkInput(self):
image_data = self.input.get('body')
self.cascading_endpoint = self.input.get('cascading_ep')
self.copy_endpoint = self.input.pop('copy_ep', None)
self.copy_image_id = self.input.pop('copy_id', None)
return image_data and self.cascading_endpoint and \
self.copy_endpoint and self.copy_image_id
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_eps = s_utils.get_endpoints(auth_token)
for cascaded_ep in cascaded_eps:
green_threads.append(eventlet.spawn(sync_manager.sync_image,
auth_token,
copy_ep=self.copy_endpoint,
to_ep=cascaded_ep,
copy_image_id=self.copy_image_id,
cascading_image_id=self.image_id,
**self.input))
return green_threads
class PatchSnapshotLocationTask(TaskObject):
"""
sync data task
"""
def __init__(self, input):
super(PatchSnapshotLocationTask, self).__init__('snapshot', input)
def do_checkInput(self):
image_metadata = self.input.get('body')
self.snapshot_endpoint = self.input.pop('snapshot_ep', None)
self.snapshot_id = self.input.pop('snapshot_id', None)
return image_metadata and self.snapshot_endpoint and self.snapshot_id
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
_region_names = CONF.snapshot_region_names
cascaded_mapping = s_utils.get_endpoints(auth_token,
region_names=_region_names)
try:
if self.snapshot_endpoint in cascaded_mapping:
cascaded_mapping.remove(self.snapshot_endpoint)
except TypeError:
pass
for cascaded_ep in cascaded_mapping:
green_threads.append(eventlet.spawn(sync_manager.do_snapshot,
auth_token,
self.snapshot_endpoint,
cascaded_ep,
self.snapshot_id,
self.image_id,
**self.input))
return green_threads
class PatchLocationTask(TaskObject):
def __init__(self, input):
super(PatchLocationTask, self).__init__('patch', input)
def do_checkInput(self):
self.location = self.input.get('location')
return self.location is not None
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_image(auth_token,
self.image_id)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.patch_location,
self.image_id,
cascaded_id,
auth_token,
cascaded_ep,
self.location))
return green_threads
class RemoveLocationsTask(TaskObject):
def __init__(self, input):
super(RemoveLocationsTask, self).__init__('locs_remove', input)
def do_checkInput(self):
self.locations = self.input.get('locations')
return self.locations is not None
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_locations(self.locations)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.remove_loc,
cascaded_id,
auth_token,
cascaded_ep))
return green_threads
class PeriodicTask(TaskObject):
MAX_SLEEP_SECONDS = 15
def __init__(self, type, input, interval, last_run_time, run_immediately):
super(PeriodicTask, self).__init__(type, input)
self.interval = interval
self.last_run_time = last_run_time
self.run_immediately = run_immediately
def do_checkInput(self):
if not self.interval or self.interval < 0:
LOG.error(_('The periodic task interval is invalid.'))
return False
return True
def ready(self):
# first time to run
if self.last_run_time is None:
self.last_run_time = timeutils.strtime()
return self.run_immediately
return timeutils.is_older_than(self.last_run_time, self.interval)
def execute(self, sync_manager, auth_token):
while not self.ready():
LOG.debug(_('the periodic task is not ready yet, sleep a while. '
'current_start_time is %s, last_run_time is %s, and '
'the interval is %i.' % (self.start_time,
self.last_run_time,
self.interval)))
_max_sleep_time = self.MAX_SLEEP_SECONDS
# MAX_SLEEP_SECONDS caps the sleep, so take the smaller of the two
eventlet.sleep(seconds=min(self.interval / 10, _max_sleep_time))
super(PeriodicTask, self).execute(sync_manager, auth_token)
class ChkNewCascadedsPeriodicTask(PeriodicTask):
def __init__(self, input, interval=60, last_run_time=None,
run_immediately=False):
super(ChkNewCascadedsPeriodicTask, self).__init__('periodic_add',
input, interval,
last_run_time,
run_immediately)
LOG.debug(_('create ChkNewCascadedsPeriodicTask.'))
def do_checkInput(self):
self.images = self.input.get('images')
self.cascading_endpoint = self.input.get('cascading_ep')
if self.images is None or not self.cascading_endpoint:
return False
return super(ChkNewCascadedsPeriodicTask, self).do_checkInput()
def _stil_need_synced(self, cascaded_ep, image_id, auth_token):
g_client = s_utils.create_self_glance_client(auth_token)
try:
image = g_client.images.get(image_id)
except Exception:
LOG.warn(_('The add cascaded periodic task found that the image '
'has been deleted, no need to sync. id is %s' % image_id))
return False
else:
if image.status != 'active':
LOG.warn(_('The add cascaded periodic task found the image status '
'is not active, no need to sync. '
'image id is %s.' % image_id))
return False
ep_list = [loc['url'] for loc in image.locations
if s_utils.is_glance_location(loc['url'])]
return not s_utils.is_ep_contains(cascaded_ep, ep_list)
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
for image_id in self.images:
cascaded_eps = self.images[image_id].get('locations')
kwargs = {'body': self.images[image_id].get('body')}
for cascaded_ep in cascaded_eps:
if not self._stil_need_synced(cascaded_ep,
image_id, auth_token):
continue
green_threads.append(eventlet.spawn(sync_manager.sync_image,
auth_token,
self.cascading_endpoint,
cascaded_ep,
image_id,
image_id,
**kwargs))
return green_threads
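The type-to-class dispatch used by `TaskObject.get_instance` above can be sketched in isolation. `BaseTask`, `UpdateTask` and `SyncTask` below are simplified stand-ins for the real task classes, not the actual implementation:

```python
# Minimal standalone sketch of the string-to-class factory dispatch
# pattern; the class names here are hypothetical stand-ins.
import uuid


class BaseTask(object):
    def __init__(self, type, input):
        self.id = str(uuid.uuid4())  # every task gets its own id
        self.type = type
        self.input = input


class UpdateTask(BaseTask):
    def __init__(self, input):
        super(UpdateTask, self).__init__('meta_update', input)


class SyncTask(BaseTask):
    def __init__(self, input):
        super(SyncTask, self).__init__('sync', input)


def get_instance(type, input):
    # Map a task-type string to its class; unknown types yield None.
    _type_cls_dict = {'meta_update': UpdateTask,
                      'sync': SyncTask}
    if _type_cls_dict.get(type):
        return _type_cls_dict[type](input)
    return None


task = get_instance('sync', {'image_id': 'abc'})
print(task.type)                  # sync
print(get_instance('bogus', {}))  # None
```

Adding a new task type then only requires registering one more entry in the dispatch dict, which is why the real module grows its `_type_cls_dict` rather than branching on strings.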


@ -1,226 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import re
from oslo.config import cfg
import six.moves.urllib.parse as urlparse
from glance.sync.clients import Clients as clients
CONF = cfg.CONF
CONF.import_opt('cascading_endpoint_url', 'glance.common.config', group='sync')
CONF.import_opt('sync_strategy', 'glance.common.config', group='sync')
def create_glance_client(auth_token, url):
"""
create glance clients
"""
return clients(auth_token).glance(url=url)
def create_self_glance_client(auth_token):
return create_glance_client(auth_token, get_cascading_endpoint_url())
def get_mappings_from_image(auth_token, image_id):
"""
get image's patched glance-locations
"""
client = create_self_glance_client(auth_token)
image = client.images.get(image_id)
locations = image.locations
if not locations:
return {}
return get_mappings_from_locations(locations)
def get_mappings_from_locations(locations):
mappings = {}
for loc in locations:
if is_glance_location(loc['url']):
id = loc['metadata'].get('image_id')
if not id:
continue
ep_url = create_ep_by_loc(loc)
mappings[ep_url] = id
return mappings
def get_cascading_endpoint_url():
return CONF.sync.cascading_endpoint_url
def get_host_from_ep(ep_url):
if not ep_url:
return None
pieces = urlparse.urlparse(ep_url)
return pieces.netloc.split(':')[0]
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')
def get_default_location(locations):
for location in locations:
if is_default_location(location):
return location
return None
def is_glance_location(loc_url):
return pattern.match(loc_url)
def is_snapshot_location(location):
l_meta = location['metadata']
return l_meta and l_meta.get('image_from', None) in ['snapshot', 'volume']
def get_id_from_glance_loc(location):
if not is_glance_location(location['url']):
return None
loc_meta = location['metadata']
if not loc_meta:
return None
return loc_meta.get('image_id', None)
def get_id_from_glance_loc_url(loc_url):
if not is_glance_location(loc_url):
return ''
_index = loc_url.find('/v2/images/') + len('/v2/images/')
return loc_url[_index:]
def is_default_location(location):
try:
return not is_glance_location(location['url']) \
and location['metadata']['is_default'] == 'true'
except (KeyError, TypeError):
return False
def get_snapshot_glance_loc(locations):
for location in locations:
if is_snapshot_location(location):
return location
return None
def create_ep_by_loc(location):
loc_url = location['url']
return create_ep_by_loc_url(loc_url)
def create_ep_by_loc_url(loc_url):
if not is_glance_location(loc_url):
return None
piece = urlparse.urlparse(loc_url)
return piece.scheme + '://' + piece.netloc + '/'
def generate_glance_location(ep, image_id, port=None):
default_port = port or '9292'
piece = urlparse.urlparse(ep)
paths = []
paths.append(piece.scheme)
paths.append('://')
paths.append(piece.netloc.split(':')[0])
paths.append(':')
paths.append(default_port)
paths.append('/v2/images/')
paths.append(image_id)
return ''.join(paths)
def get_endpoints(auth_token=None, tenant_id=None, **kwargs):
"""
find which glance should be sync by strategy config
"""
strategy = CONF.sync.sync_strategy
if strategy not in ['All', 'User', 'nova']:
return None
openstack_clients = clients(auth_token, tenant_id)
ksclient = openstack_clients.keystone()
'''
suppose that the cascading glance is 'public' endpoint type, and the
cascaded glance endpoints are 'internal'
'''
regions = kwargs.pop('region_names', [])
if strategy in ['All', 'nova'] and not regions:
urls = ksclient.service_catalog.get_urls(service_type='image',
endpoint_type='publicURL')
if urls:
result = [u for u in urls if u != get_cascading_endpoint_url()]
else:
result = []
return result
else:
user_urls = []
for region_name in regions:
urls = ksclient.service_catalog.get_urls(service_type='image',
endpoint_type='publicURL',
region_name=region_name)
if urls:
user_urls.extend(urls)
result = [u for u in set(user_urls) if u !=
get_cascading_endpoint_url()]
return result
_V2_IMAGE_CREATE_PROPERTIES = ['container_format',
'disk_format', 'min_disk', 'min_ram', 'name',
'virtual_size', 'visibility', 'protected']
def get_core_properties(image):
"""
when sync, create image object, get the sync info
"""
_tags = list(image.tags) or []
kwargs = {}
for key in _V2_IMAGE_CREATE_PROPERTIES:
try:
value = getattr(image, key, None)
if value and value != 'None':
kwargs[key] = value
except KeyError:
pass
if _tags:
kwargs['tags'] = _tags
return kwargs
def calculate_lack_endpoints(all_ep_urls, glance_urls):
"""
calculate endpoints which exists in all_eps but not in glance_eps
"""
if not glance_urls:
return all_ep_urls
def _contain(ep):
_hosts = [urlparse.urlparse(_ep).netloc for _ep in glance_urls]
return not urlparse.urlparse(ep).netloc in _hosts
return filter(_contain, all_ep_urls)
def is_ep_contains(ep_url, glance_urls):
_hosts = [urlparse.urlparse(_ep).netloc for _ep in glance_urls]
return urlparse.urlparse(ep_url).netloc in _hosts
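The URL helpers above can be exercised in isolation. This standalone sketch mirrors `is_glance_location`, `create_ep_by_loc_url` and `get_id_from_glance_loc_url` (renamed here), under the same assumption that a glance location looks like `http(s)://host:port/v2/images/<image_id>`:

```python
import re
try:
    import urllib.parse as urlparse  # Python 3
except ImportError:
    import urlparse                  # Python 2

# A glance location URL has the shape http://host:9292/v2/images/<image_id>
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')


def is_glance_location(loc_url):
    return bool(pattern.match(loc_url))


def create_ep_by_loc_url(loc_url):
    # Reduce a location URL to its endpoint root, e.g. http://host:9292/
    if not is_glance_location(loc_url):
        return None
    piece = urlparse.urlparse(loc_url)
    return piece.scheme + '://' + piece.netloc + '/'


def get_id_from_loc_url(loc_url):
    # Everything after /v2/images/ is the image id
    if not is_glance_location(loc_url):
        return ''
    return loc_url.split('/v2/images/')[-1]


url = 'http://10.0.0.2:9292/v2/images/1234abcd'
print(create_ep_by_loc_url(url))  # http://10.0.0.2:9292/
print(get_id_from_loc_url(url))   # 1234abcd
```

Non-glance locations such as `file:///...` fail the regex check and fall through to `None` / `''`, which is what lets the sync code skip local filesystem locations.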


@ -1,160 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
CURPATH=$(cd "$(dirname "$0")"; pwd)
_GLANCE_SYNC_CMD_FILE="glance-sync"
_PYTHON_INSTALL_DIR=${OPENSTACK_INSTALL_DIR}
if [ -z "${_PYTHON_INSTALL_DIR}" ]; then
_PYTHON_INSTALL_DIR="/usr/lib/python2.7/dist-packages"
fi
_GLANCE_DIR="${_PYTHON_INSTALL_DIR}/glance"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="${CURPATH}/../glance"
_CONF_DIR="${CURPATH}/../etc"
_BACKUP_DIR="${_GLANCE_DIR}/glance-sync-backup"
_SCRIPT_LOGFILE="/var/log/glance/installation/install.log"
export PS4='+{$LINENO:${FUNCNAME[0]}}'
ERRTRAP()
{
echo "[LINE:$1] Error: Command or function exited with status $?"
}
function log()
{
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
function process_stop
{
PID=`ps -efw|grep "$1"|grep -v grep|awk '{print $2}'`
echo "PID is: $PID">>$_SCRIPT_LOGFILE
if [ "x${PID}" != "x" ]; then
for kill_id in $PID
do
kill -9 ${kill_id}
if [ $? -ne 0 ]; then
echo "[[stop glance-sync]]$1 stop failed.">>$_SCRIPT_LOGFILE
exit 1
fi
done
echo "[[stop glance-sync]]$1 stop ok.">>$_SCRIPT_LOGFILE
fi
}
function backup
{
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/glance" ] ; then
log "It seems glance cascading has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/glance"
mkdir -p "${_BACKUP_DIR}/etc/glance"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/glance"
rm -r "${_BACKUP_DIR}/etc"
log "Error in config backup, aborted."
exit 1
fi
}
function restart_services
{
log "restarting glance ..."
service glance-api restart
service glance-registry restart
process_stop "glance-sync"
python /usr/bin/glance-sync --config-file=/etc/glance/glance-sync.conf &
}
function preinstall
{
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
if [ ! -d "/var/log/glance/installation" ]; then
mkdir -p /var/log/glance/installation
touch ${_SCRIPT_LOGFILE}
fi
log "checking installation directories..."
if [ ! -d "${_GLANCE_DIR}" ] ; then
log "Could not find the glance installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_CONF_DIR}/${_GLANCE_SYNC_CMD_FILE}" ]; then
log "Could not find the glance-sync file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
}
#
#Start to execute here
#
trap 'ERRTRAP $LINENO' ERR
preinstall
if [ $? -ne 0 ] ; then
exit 1
fi
backup
if [ $? -ne 0 ] ; then
exit 1
fi
log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_GLANCE_DIR}`
cp -r "${_CONF_DIR}/glance" "/etc"
cp "${_CONF_DIR}/${_GLANCE_SYNC_CMD_FILE}" "/usr/bin/"
#Config options
log "configuring the glance options defined in script/tricircle.cfg"
cd `dirname $0`/../../script
python config.py glance
if [ $? -ne 0 ] ; then
log "configuring the glance options failed."
exit 1
fi
cd -
restart_services
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart glance manually."
exit 1
fi
log "Completed."
exit 0
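The script's backup-then-abort pattern (refuse to run twice when a backup directory already exists, so a half-installed state is never overwritten) can be sketched in Python. The paths and function name here are illustrative stand-ins, not part of the repository:

```python
import os
import shutil
import tempfile


def backup_once(src_dir, backup_dir):
    # An existing backup means a previous installation already ran:
    # abort instead of clobbering the only pristine copy of the files.
    if os.path.isdir(backup_dir):
        raise RuntimeError('already installed: backup exists at %s'
                           % backup_dir)
    shutil.copytree(src_dir, backup_dir)


# usage with throwaway temp directories
base = tempfile.mkdtemp()
src = os.path.join(base, 'glance')
os.makedirs(src)
open(os.path.join(src, 'x.py'), 'w').close()
bak = os.path.join(base, 'backup')

backup_once(src, bak)
print(os.path.isfile(os.path.join(bak, 'x.py')))  # True
```

A second `backup_once(src, bak)` call raises, mirroring the script's "It seems glance cascading has already been installed!" exit path.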


@ -7,7 +7,6 @@ import urlparse
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.image import glance
from nova.image.sync import drivers as drivermgr
@ -18,42 +17,42 @@ LOG = logging.getLogger(__name__)
glance_cascading_opt = [
cfg.StrOpt('image_copy_dest_location_url',
default='file:///var/lib/glance/images',
help=_("The path cascaded image_data copy to."),
help=("The path cascaded image_data copy to."),
deprecated_opts=[cfg.DeprecatedOpt('dest_location_url',
group='DEFAULT')]),
cfg.StrOpt('image_copy_dest_host',
default='127.0.0.1',
help=_("The host name where image_data copy to."),
help=("The host name where image_data copy to."),
deprecated_opts=[cfg.DeprecatedOpt('dest_host',
group='DEFAULT')]),
cfg.StrOpt('image_copy_dest_user',
default='glance',
help=_("The user name of cascaded glance for copy."),
help=("The user name of cascaded glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('dest_user',
group='DEFAULT')]),
cfg.StrOpt('image_copy_dest_password',
default='openstack',
help=_("The passowrd of cascaded glance for copy."),
help=("The password of cascaded glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('dest_password',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_location_url',
default='file:///var/lib/glance/images',
help=_("where the cascaded image data from"),
help=("where the cascaded image data from"),
deprecated_opts=[cfg.DeprecatedOpt('source_location_url',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_host',
default='0.0.0.1',
help=_("The host name where image_data copy from."),
help=("The host name where image_data copy from."),
deprecated_opts=[cfg.DeprecatedOpt('source_host',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_user',
default='glance',
help=_("The user name of glance for copy."),
help=("The user name of glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('source_user',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_password',
default='openstack',
help=_("The passowrd of glance for copy."),
help=("The password of glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('source_password',
group='DEFAULT')]),
]
@ -123,11 +122,11 @@ class GlanceCascadingService(object):
try:
image_loc = self._copy_data(image_id, cascaded_id, candidate_path)
except Exception as e:
LOG.exception(_("copy image failed, reason=%s") % e)
LOG.exception(("copy image failed, reason=%s") % e)
raise
else:
if not image_loc:
LOG.exception(_("copy image Exception, no cascaded_loc"))
LOG.exception(("copy image Exception, no cascaded_loc"))
try:
# patch loc to the cascaded image
csd_locs = [{'url': image_loc,
@ -137,7 +136,7 @@ class GlanceCascadingService(object):
remove_props=None,
locations=csd_locs)
except Exception as e:
LOG.exception(_("patch loc to cascaded image Exception, reason: %s"
LOG.exception(("patch loc to cascaded image Exception, reason: %s"
% e))
raise
@ -154,7 +153,7 @@ class GlanceCascadingService(object):
self._client.call(context, 2, 'update', image_id,
remove_props=None, locations=csg_locs)
except Exception as e:
LOG.exception(_("patch loc to cascading image Exception, reason: %s"
LOG.exception(("patch loc to cascading image Exception, reason: %s"
% e))
raise


@ -1,54 +0,0 @@
Cinder timestamp-query-patch
===============================
This patch is applied on the control node of the cascaded OpenStack level.
The Cinder Juno database already records an updated_at attribute that a
change_since query filter could use, but the Cinder DB API in this version
does not support querying by timestamp. This patch adds that capability at
the cascaded level so that state can be synchronized between the cascading
and cascaded OpenStack levels.
Key modules
-----------
* adds a timestamp query function when listing volumes:
cinder\db\sqlalchemy\api.py
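The semantics of the added filter can be sketched as follows. This in-memory version only illustrates what a change_since query means; the actual patch implements it as a query filter in the cinder DB API layer, and the function and field names below are illustrative:

```python
from datetime import datetime


def filter_volumes_changed_since(volumes, changed_since):
    # Keep volumes whose updated_at is strictly newer than the threshold;
    # volumes that were never updated fall back to their created_at.
    result = []
    for vol in volumes:
        stamp = vol.get('updated_at') or vol.get('created_at')
        if stamp and stamp > changed_since:
            result.append(vol)
    return result


vols = [
    {'id': 'a', 'created_at': datetime(2015, 6, 1),
     'updated_at': datetime(2015, 6, 10)},
    {'id': 'b', 'created_at': datetime(2015, 6, 1),
     'updated_at': None},
]
print([v['id'] for v in filter_volumes_changed_since(
    vols, datetime(2015, 6, 5))])  # ['a']
```

This is what lets the cascading level poll only for volumes that changed since its last synchronization pass instead of re-listing everything.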
Requirements
------------
* openstack juno has been installed
Installation
------------
We provide two ways to install the timestamp query patch. This section will guide you through both.
* **Note:**
- Make sure you have an existing installation of **Openstack Juno**.
- We recommend that you back up at least the following files before installation, because they may be overwritten or modified:
* **Manual Installation**
- Make sure you have performed backups properly.
- Navigate to the local repository and copy the contents in 'cinder' sub-directory to the corresponding places in existing cinder, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/cinder $CINDER_PARENT_DIR```
(replace the $... with actual directory name.)
- restart cinder api service
- Done. The timestamp query patch should now be in effect.
* **Automatic Installation**
- Make sure you have performed backups properly.
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)


@ -1,87 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder"
_BACKUP_DIR="${_CINDER_DIR}/cinder_timestamp_query_patch-installation-backup"
_SCRIPT_LOGFILE="/var/log/cinder/cinder_timestamp_query_patch/installation/install.log"
function log()
{
log_path=`dirname ${_SCRIPT_LOGFILE}`
if [ ! -d $log_path ] ; then
mkdir -p $log_path
touch $_SCRIPT_LOGFILE
chmod 777 $_SCRIPT_LOGFILE
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/cinder" ] ; then
log "It seems cinder timestamp query has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/cinder"
mkdir -p "${_BACKUP_DIR}/etc/cinder"
cp -r "${_CINDER_DIR}/db" "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/cinder"
echo "Error in code backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
log "Recovering failed! Please install manually."
fi
exit 1
fi
service openstack-cinder-api restart
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder api manually."
exit 1
fi
log "Completed."
log "See README to get started."
exit 0


@ -1,22 +0,0 @@
Glance-Cascading Patch
================
Introduction
-----------------------------
*For glance cascading, we have to create a relationship between one cascading-glance and several cascaded-glances. To achieve this we use glance's multi-location feature: the relationship is stored as a location with a special format. Besides, we modify the image status transition rule: an image only becomes 'active' once the cascaded glances have been synced. For these two reasons, a few existing source files were modified to adapt to cascading:
glance/api/v2/image.py
glance/gateway.py
glance/common/utils.py
glance/common/config.py
glance/common/exception.py
**Because in Juno the glance store code was moved out of glance into an independent python project, the corresponding store modifications for Juno live in the glance_store project instead.**
Install
------------------------------
*To apply this patch, simply replace the original files with these files, or run install.sh in the glancesync/installation/ directory.
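The modified status rule described above can be sketched as follows. `compute_cascading_status` and its inputs are illustrative names for this sketch, not actual glance code:

```python
def compute_cascading_status(data_active, cascaded_synced_flags):
    """Return the status the cascading image should expose.

    data_active: whether the image data itself has been uploaded.
    cascaded_synced_flags: one boolean per cascaded glance,
                           True once that glance has been synced.
    """
    # The image only toggles to 'active' once every cascaded glance
    # has been synced; otherwise it stays queued.
    if data_active and all(cascaded_synced_flags):
        return 'active'
    return 'queued'


print(compute_cascading_status(True, [True, True]))   # active
print(compute_cascading_status(True, [True, False]))  # queued
```

The point of the rule is that a consumer in any region never sees an 'active' image whose bits are not yet reachable from that region's cascaded glance.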


@ -1,21 +0,0 @@
[console_scripts]
glance-api = glance.cmd.api:main
glance-cache-cleaner = glance.cmd.cache_cleaner:main
glance-cache-manage = glance.cmd.cache_manage:main
glance-cache-prefetcher = glance.cmd.cache_prefetcher:main
glance-cache-pruner = glance.cmd.cache_pruner:main
glance-control = glance.cmd.control:main
glance-manage = glance.cmd.manage:main
glance-registry = glance.cmd.registry:main
glance-replicator = glance.cmd.replicator:main
glance-scrubber = glance.cmd.scrubber:main
[glance.common.image_location_strategy.modules]
location_order_strategy = glance.common.location_strategy.location_order
store_type_strategy = glance.common.location_strategy.store_type
[glance.sync.store.location]
filesystem = glance.sync.store._drivers.filesystem:LocationCreator
[glance.sync.store.driver]
filesystem = glance.sync.store._drivers.filesystem:Store


@ -1,856 +0,0 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
import glance_store
from oslo.config import cfg
import six
import six.moves.urllib.parse as urlparse
import webob.exc
from glance.api import policy
from glance.common import exception
from glance.common import location_strategy
from glance.common import utils
from glance.common import wsgi
import glance.db
import glance.gateway
import glance.notifier
from glance.openstack.common import gettextutils
from glance.openstack.common import jsonutils as json
import glance.openstack.common.log as logging
from glance.openstack.common import timeutils
import glance.schema
import glance.sync.client.v1.api as sync_api
LOG = logging.getLogger(__name__)
_LI = gettextutils._LI
_LW = gettextutils._LW
CONF = cfg.CONF
CONF.import_opt('disk_formats', 'glance.common.config', group='image_format')
CONF.import_opt('container_formats', 'glance.common.config',
group='image_format')
class ImagesController(object):
def __init__(self, db_api=None, policy_enforcer=None, notifier=None,
store_api=None):
self.db_api = db_api or glance.db.get_api()
self.policy = policy_enforcer or policy.Enforcer()
self.notifier = notifier or glance.notifier.Notifier()
self.store_api = store_api or glance_store
self.sync_api = sync_api
self.sync_api.configure_sync_client()
self.gateway = glance.gateway.Gateway(self.db_api, self.store_api,
self.notifier, self.policy,
self.sync_api)
@utils.mutating
def create(self, req, image, extra_properties, tags):
image_factory = self.gateway.get_image_factory(req.context)
image_repo = self.gateway.get_repo(req.context)
try:
image = image_factory.new_image(extra_properties=extra_properties,
tags=tags, **image)
image_repo.add(image)
except exception.DuplicateLocation as dup:
raise webob.exc.HTTPBadRequest(explanation=dup.msg)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.InvalidParameterValue as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
except exception.LimitExceeded as e:
LOG.info(utils.exception_to_str(e))
raise webob.exc.HTTPRequestEntityTooLarge(
explanation=e.msg, request=req, content_type='text/plain')
except exception.Duplicate as dupex:
raise webob.exc.HTTPConflict(explanation=dupex.msg)
return image
def index(self, req, marker=None, limit=None, sort_key='created_at',
sort_dir='desc', filters=None, member_status='accepted'):
result = {}
if filters is None:
filters = {}
filters['deleted'] = False
if limit is None:
limit = CONF.limit_param_default
limit = min(CONF.api_limit_max, limit)
image_repo = self.gateway.get_repo(req.context)
try:
images = image_repo.list(marker=marker, limit=limit,
sort_key=sort_key, sort_dir=sort_dir,
filters=filters,
member_status=member_status)
if len(images) != 0 and len(images) == limit:
result['next_marker'] = images[-1].image_id
except (exception.NotFound, exception.InvalidSortKey,
exception.InvalidFilterRangeValue) as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
result['images'] = images
return result
def show(self, req, image_id):
image_repo = self.gateway.get_repo(req.context)
try:
return image_repo.get(image_id)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.NotFound as e:
raise webob.exc.HTTPNotFound(explanation=e.msg)
@utils.mutating
def update(self, req, image_id, changes):
image_repo = self.gateway.get_repo(req.context)
try:
image = image_repo.get(image_id)
for change in changes:
change_method_name = '_do_%s' % change['op']
assert hasattr(self, change_method_name)
change_method = getattr(self, change_method_name)
change_method(req, image, change)
if changes:
image_repo.save(image)
except exception.NotFound as e:
raise webob.exc.HTTPNotFound(explanation=e.msg)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.InvalidParameterValue as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
except exception.StorageQuotaFull as e:
msg = (_LI("Denying attempt to upload image because it exceeds the"
" quota: %s") % utils.exception_to_str(e))
LOG.info(msg)
raise webob.exc.HTTPRequestEntityTooLarge(
explanation=msg, request=req, content_type='text/plain')
except exception.LimitExceeded as e:
LOG.info(utils.exception_to_str(e))
raise webob.exc.HTTPRequestEntityTooLarge(
explanation=e.msg, request=req, content_type='text/plain')
return image
def _do_replace(self, req, image, change):
path = change['path']
path_root = path[0]
value = change['value']
if path_root == 'locations':
self._do_replace_locations(image, value)
else:
if hasattr(image, path_root):
setattr(image, path_root, value)
elif path_root in image.extra_properties:
image.extra_properties[path_root] = value
else:
msg = _("Property %s does not exist.")
raise webob.exc.HTTPConflict(msg % path_root)
def _do_add(self, req, image, change):
path = change['path']
path_root = path[0]
value = change['value']
if path_root == 'locations':
self._do_add_locations(image, path[1], value)
else:
if (hasattr(image, path_root) or
path_root in image.extra_properties):
msg = _("Property %s already present.")
raise webob.exc.HTTPConflict(msg % path_root)
image.extra_properties[path_root] = value
def _do_remove(self, req, image, change):
path = change['path']
path_root = path[0]
if path_root == 'locations':
self._do_remove_locations(image, path[1])
else:
if hasattr(image, path_root):
msg = _("Property %s may not be removed.")
raise webob.exc.HTTPForbidden(msg % path_root)
elif path_root in image.extra_properties:
del image.extra_properties[path_root]
else:
msg = _("Property %s does not exist.")
raise webob.exc.HTTPConflict(msg % path_root)
@utils.mutating
def delete(self, req, image_id):
image_repo = self.gateway.get_repo(req.context)
try:
image = image_repo.get(image_id)
image.delete()
image_repo.remove(image)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.NotFound as e:
msg = (_("Failed to find image %(image_id)s to delete") %
{'image_id': image_id})
LOG.info(msg)
raise webob.exc.HTTPNotFound(explanation=msg)
except exception.InUseByStore as e:
msg = (_LI("Image %s could not be deleted "
"because it is in use: %s") % (image_id, e.msg))
LOG.info(msg)
raise webob.exc.HTTPConflict(explanation=msg)
def _get_locations_op_pos(self, path_pos, max_pos, allow_max):
if path_pos is None or max_pos is None:
return None
pos = max_pos if allow_max else max_pos - 1
if path_pos.isdigit():
pos = int(path_pos)
elif path_pos != '-':
return None
if (not allow_max) and (pos not in range(max_pos)):
return None
return pos
def _do_replace_locations(self, image, value):
if len(image.locations) > 0 and len(value) > 0:
msg = _("Cannot replace locations from a non-empty "
"list to a non-empty list.")
raise webob.exc.HTTPBadRequest(explanation=msg)
if len(value) == 0:
# NOTE(zhiyan): this actually deletes the location
# from the backend store.
del image.locations[:]
if image.status == 'active':
image.status = 'queued'
else: # NOTE(zhiyan): len(image.locations) == 0
try:
image.locations = value
if image.status == 'queued':
image.status = 'active'
except (exception.BadStoreUri, exception.DuplicateLocation) as bse:
raise webob.exc.HTTPBadRequest(explanation=bse.msg)
except ValueError as ve: # update image status failed.
raise webob.exc.HTTPBadRequest(explanation=
utils.exception_to_str(ve))
def _do_add_locations(self, image, path_pos, value):
pos = self._get_locations_op_pos(path_pos,
len(image.locations), True)
if pos is None:
msg = _("Invalid position for adding a location.")
raise webob.exc.HTTPBadRequest(explanation=msg)
try:
image.locations.insert(pos, value)
if image.status == 'queued':
image.status = 'active'
except (exception.BadStoreUri, exception.DuplicateLocation) as bse:
raise webob.exc.HTTPBadRequest(explanation=bse.msg)
except ValueError as ve: # update image status failed.
raise webob.exc.HTTPBadRequest(explanation=
utils.exception_to_str(ve))
def _do_remove_locations(self, image, path_pos):
pos = self._get_locations_op_pos(path_pos,
len(image.locations), False)
if pos is None:
msg = _("Invalid position for removing a location.")
raise webob.exc.HTTPBadRequest(explanation=msg)
try:
# NOTE(zhiyan): this actually deletes the location
# from the backend store.
image.locations.pop(pos)
except Exception as e:
raise webob.exc.HTTPInternalServerError(explanation=
utils.exception_to_str(e))
if (len(image.locations) == 0) and (image.status == 'active'):
image.status = 'queued'
class RequestDeserializer(wsgi.JSONRequestDeserializer):
_disallowed_properties = ['direct_url', 'self', 'file', 'schema']
_readonly_properties = ['created_at', 'updated_at', 'status', 'checksum',
'size', 'virtual_size', 'direct_url', 'self',
'file', 'schema']
_reserved_properties = ['owner', 'is_public', 'location', 'deleted',
'deleted_at']
_base_properties = ['checksum', 'created_at', 'container_format',
'disk_format', 'id', 'min_disk', 'min_ram', 'name',
'size', 'virtual_size', 'status', 'tags',
'updated_at', 'visibility', 'protected']
_path_depth_limits = {'locations': {'add': 2, 'remove': 2, 'replace': 1}}
def __init__(self, schema=None):
super(RequestDeserializer, self).__init__()
self.schema = schema or get_schema()
def _get_request_body(self, request):
output = super(RequestDeserializer, self).default(request)
if 'body' not in output:
msg = _('Body expected in request.')
raise webob.exc.HTTPBadRequest(explanation=msg)
return output['body']
@classmethod
def _check_allowed(cls, image):
for key in cls._disallowed_properties:
if key in image:
msg = _("Attribute '%s' is read-only.") % key
raise webob.exc.HTTPForbidden(explanation=
utils.exception_to_str(msg))
def create(self, request):
body = self._get_request_body(request)
self._check_allowed(body)
try:
self.schema.validate(body)
except exception.InvalidObject as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
image = {}
properties = body
tags = properties.pop('tags', None)
for key in self._base_properties:
try:
# NOTE(flwang): Instead of changing the _check_unexpected
# of ImageFactory. It would be better to do the mapping
# at here.
if key == 'id':
image['image_id'] = properties.pop(key)
else:
image[key] = properties.pop(key)
except KeyError:
pass
return dict(image=image, extra_properties=properties, tags=tags)
def _get_change_operation_d10(self, raw_change):
try:
return raw_change['op']
except KeyError:
msg = _("Unable to find '%s' in JSON Schema change") % 'op'
raise webob.exc.HTTPBadRequest(explanation=msg)
def _get_change_operation_d4(self, raw_change):
op = None
for key in ['replace', 'add', 'remove']:
if key in raw_change:
if op is not None:
msg = _('Operation objects must contain only one member'
' named "add", "remove", or "replace".')
raise webob.exc.HTTPBadRequest(explanation=msg)
op = key
if op is None:
msg = _('Operation objects must contain exactly one member'
' named "add", "remove", or "replace".')
raise webob.exc.HTTPBadRequest(explanation=msg)
return op
def _get_change_path_d10(self, raw_change):
try:
return raw_change['path']
except KeyError:
msg = _("Unable to find '%s' in JSON Schema change") % 'path'
raise webob.exc.HTTPBadRequest(explanation=msg)
def _get_change_path_d4(self, raw_change, op):
return raw_change[op]
def _decode_json_pointer(self, pointer):
"""Parse a json pointer.
Json Pointers are defined in
http://tools.ietf.org/html/draft-pbryan-zyp-json-pointer .
The pointers use '/' for separation between object attributes, such
that '/A/B' would evaluate to C in {"A": {"B": "C"}}. A '/' character
in an attribute name is encoded as "~1" and a '~' character is encoded
as "~0".
"""
self._validate_json_pointer(pointer)
ret = []
for part in pointer.lstrip('/').split('/'):
ret.append(part.replace('~1', '/').replace('~0', '~').strip())
return ret
def _validate_json_pointer(self, pointer):
"""Validate a json pointer.
We only accept a limited form of json pointers.
"""
if not pointer.startswith('/'):
msg = _('Pointer `%s` does not start with "/".') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
        if re.search(r'/\s*?/', pointer[1:]):
msg = _('Pointer `%s` contains adjacent "/".') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
if len(pointer) > 1 and pointer.endswith('/'):
            msg = _('Pointer `%s` ends with "/".') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
if pointer[1:].strip() == '/':
            msg = _('Pointer `%s` does not contain a valid token.') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
if re.search('~[^01]', pointer) or pointer.endswith('~'):
msg = _('Pointer `%s` contains "~" not part of'
' a recognized escape sequence.') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
def _get_change_value(self, raw_change, op):
if 'value' not in raw_change:
msg = _('Operation "%s" requires a member named "value".')
raise webob.exc.HTTPBadRequest(explanation=msg % op)
return raw_change['value']
def _validate_change(self, change):
path_root = change['path'][0]
if path_root in self._readonly_properties:
msg = _("Attribute '%s' is read-only.") % path_root
raise webob.exc.HTTPForbidden(explanation=six.text_type(msg))
if path_root in self._reserved_properties:
msg = _("Attribute '%s' is reserved.") % path_root
raise webob.exc.HTTPForbidden(explanation=six.text_type(msg))
if change['op'] == 'delete':
return
partial_image = None
if len(change['path']) == 1:
partial_image = {path_root: change['value']}
elif ((path_root in get_base_properties().keys()) and
(get_base_properties()[path_root].get('type', '') == 'array')):
            # NOTE(zhiyan): a client can use the PATCH API to add an
            # element to one of the image's existing set properties
            # directly.
            # Such as: 1. using a '/locations/N' path to add a location
            #             to the image's 'locations' list at position N.
            #             (implemented)
            #          2. using a '/tags/-' path to append a tag to the
            #             image's 'tags' list. (Not implemented)
partial_image = {path_root: [change['value']]}
if partial_image:
try:
self.schema.validate(partial_image)
except exception.InvalidObject as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
def _validate_path(self, op, path):
path_root = path[0]
limits = self._path_depth_limits.get(path_root, {})
if len(path) != limits.get(op, 1):
msg = _("Invalid JSON pointer for this resource: "
"'/%s'") % '/'.join(path)
raise webob.exc.HTTPBadRequest(explanation=six.text_type(msg))
def _parse_json_schema_change(self, raw_change, draft_version):
if draft_version == 10:
op = self._get_change_operation_d10(raw_change)
path = self._get_change_path_d10(raw_change)
elif draft_version == 4:
op = self._get_change_operation_d4(raw_change)
path = self._get_change_path_d4(raw_change, op)
else:
msg = _('Unrecognized JSON Schema draft version')
raise webob.exc.HTTPBadRequest(explanation=msg)
path_list = self._decode_json_pointer(path)
return op, path_list
def update(self, request):
changes = []
content_types = {
'application/openstack-images-v2.0-json-patch': 4,
'application/openstack-images-v2.1-json-patch': 10,
}
if request.content_type not in content_types:
headers = {'Accept-Patch':
', '.join(sorted(content_types.keys()))}
raise webob.exc.HTTPUnsupportedMediaType(headers=headers)
json_schema_version = content_types[request.content_type]
body = self._get_request_body(request)
if not isinstance(body, list):
msg = _('Request body must be a JSON array of operation objects.')
raise webob.exc.HTTPBadRequest(explanation=msg)
for raw_change in body:
if not isinstance(raw_change, dict):
msg = _('Operations must be JSON objects.')
raise webob.exc.HTTPBadRequest(explanation=msg)
(op, path) = self._parse_json_schema_change(raw_change,
json_schema_version)
# NOTE(zhiyan): the 'path' is a list.
self._validate_path(op, path)
change = {'op': op, 'path': path}
if not op == 'remove':
change['value'] = self._get_change_value(raw_change, op)
self._validate_change(change)
changes.append(change)
return {'changes': changes}
def _validate_limit(self, limit):
try:
limit = int(limit)
except ValueError:
msg = _("limit param must be an integer")
raise webob.exc.HTTPBadRequest(explanation=msg)
if limit < 0:
            msg = _("limit param must be non-negative")
raise webob.exc.HTTPBadRequest(explanation=msg)
return limit
def _validate_sort_dir(self, sort_dir):
if sort_dir not in ['asc', 'desc']:
msg = _('Invalid sort direction: %s') % sort_dir
raise webob.exc.HTTPBadRequest(explanation=msg)
return sort_dir
def _validate_member_status(self, member_status):
if member_status not in ['pending', 'accepted', 'rejected', 'all']:
msg = _('Invalid status: %s') % member_status
raise webob.exc.HTTPBadRequest(explanation=msg)
return member_status
def _get_filters(self, filters):
visibility = filters.get('visibility')
if visibility:
if visibility not in ['public', 'private', 'shared']:
msg = _('Invalid visibility value: %s') % visibility
raise webob.exc.HTTPBadRequest(explanation=msg)
changes_since = filters.get('changes-since', None)
if changes_since:
msg = _('The "changes-since" filter is no longer available on v2.')
raise webob.exc.HTTPBadRequest(explanation=msg)
return filters
def index(self, request):
params = request.params.copy()
limit = params.pop('limit', None)
marker = params.pop('marker', None)
sort_dir = params.pop('sort_dir', 'desc')
member_status = params.pop('member_status', 'accepted')
# NOTE (flwang) To avoid using comma or any predefined chars to split
# multiple tags, now we allow user specify multiple 'tag' parameters
# in URL, such as v2/images?tag=x86&tag=64bit.
tags = []
while 'tag' in params:
tags.append(params.pop('tag').strip())
query_params = {
'sort_key': params.pop('sort_key', 'created_at'),
'sort_dir': self._validate_sort_dir(sort_dir),
'filters': self._get_filters(params),
'member_status': self._validate_member_status(member_status),
}
if marker is not None:
query_params['marker'] = marker
if limit is not None:
query_params['limit'] = self._validate_limit(limit)
if tags:
query_params['filters']['tags'] = tags
return query_params
class ResponseSerializer(wsgi.JSONResponseSerializer):
def __init__(self, schema=None):
super(ResponseSerializer, self).__init__()
self.schema = schema or get_schema()
def _get_image_href(self, image, subcollection=''):
base_href = '/v2/images/%s' % image.image_id
if subcollection:
base_href = '%s/%s' % (base_href, subcollection)
return base_href
def _format_image(self, image):
image_view = dict()
try:
image_view = dict(image.extra_properties)
attributes = ['name', 'disk_format', 'container_format',
'visibility', 'size', 'virtual_size', 'status',
'checksum', 'protected', 'min_ram', 'min_disk',
'owner']
for key in attributes:
image_view[key] = getattr(image, key)
image_view['id'] = image.image_id
image_view['created_at'] = timeutils.isotime(image.created_at)
image_view['updated_at'] = timeutils.isotime(image.updated_at)
if CONF.show_multiple_locations:
locations = list(image.locations)
if locations:
image_view['locations'] = []
for loc in locations:
tmp = dict(loc)
tmp.pop('id', None)
tmp.pop('status', None)
image_view['locations'].append(tmp)
else:
# NOTE (flwang): We will still show "locations": [] if
# image.locations is None to indicate it's allowed to show
# locations but it's just non-existent.
image_view['locations'] = []
                    LOG.debug("There is no available location "
                              "for image %s" % image.image_id)
if CONF.show_image_direct_url:
if image.locations:
                    # Choose the best location via the configured strategy
                    best = location_strategy.choose_best_location(
                        image.locations)
                    image_view['direct_url'] = best['url']
else:
                    LOG.debug("There is no available location "
                              "for image %s" % image.image_id)
image_view['tags'] = list(image.tags)
image_view['self'] = self._get_image_href(image)
image_view['file'] = self._get_image_href(image, 'file')
image_view['schema'] = '/v2/schemas/image'
image_view = self.schema.filter(image_view) # domain
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
return image_view
def create(self, response, image):
response.status_int = 201
self.show(response, image)
response.location = self._get_image_href(image)
def show(self, response, image):
image_view = self._format_image(image)
body = json.dumps(image_view, ensure_ascii=False)
response.unicode_body = six.text_type(body)
response.content_type = 'application/json'
def update(self, response, image):
image_view = self._format_image(image)
body = json.dumps(image_view, ensure_ascii=False)
response.unicode_body = six.text_type(body)
response.content_type = 'application/json'
def index(self, response, result):
params = dict(response.request.params)
params.pop('marker', None)
query = urlparse.urlencode(params)
body = {
'images': [self._format_image(i) for i in result['images']],
'first': '/v2/images',
'schema': '/v2/schemas/images',
}
if query:
body['first'] = '%s?%s' % (body['first'], query)
if 'next_marker' in result:
params['marker'] = result['next_marker']
next_query = urlparse.urlencode(params)
body['next'] = '/v2/images?%s' % next_query
response.unicode_body = six.text_type(json.dumps(body,
ensure_ascii=False))
response.content_type = 'application/json'
def delete(self, response, result):
response.status_int = 204
def get_base_properties():
return {
'id': {
'type': 'string',
'description': _('An identifier for the image'),
'pattern': ('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}'
'-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'),
},
'name': {
'type': 'string',
'description': _('Descriptive name for the image'),
'maxLength': 255,
},
'status': {
'type': 'string',
'description': _('Status of the image (READ-ONLY)'),
'enum': ['queued', 'saving', 'active', 'killed',
'deleted', 'pending_delete'],
},
'visibility': {
'type': 'string',
'description': _('Scope of image accessibility'),
'enum': ['public', 'private'],
},
'protected': {
'type': 'boolean',
'description': _('If true, image will not be deletable.'),
},
'checksum': {
'type': 'string',
'description': _('md5 hash of image contents. (READ-ONLY)'),
'maxLength': 32,
},
'owner': {
'type': 'string',
'description': _('Owner of the image'),
'maxLength': 255,
},
'size': {
'type': 'integer',
'description': _('Size of image file in bytes (READ-ONLY)'),
},
'virtual_size': {
'type': 'integer',
'description': _('Virtual size of image in bytes (READ-ONLY)'),
},
'container_format': {
'type': 'string',
'description': _('Format of the container'),
'enum': CONF.image_format.container_formats,
},
'disk_format': {
'type': 'string',
'description': _('Format of the disk'),
'enum': CONF.image_format.disk_formats,
},
'created_at': {
'type': 'string',
'description': _('Date and time of image registration'
' (READ-ONLY)'),
#TODO(bcwaldon): our jsonschema library doesn't seem to like the
# format attribute, figure out why!
#'format': 'date-time',
},
'updated_at': {
'type': 'string',
'description': _('Date and time of the last image modification'
' (READ-ONLY)'),
#'format': 'date-time',
},
'tags': {
'type': 'array',
'description': _('List of strings related to the image'),
'items': {
'type': 'string',
'maxLength': 255,
},
},
'direct_url': {
'type': 'string',
'description': _('URL to access the image file kept in external '
'store (READ-ONLY)'),
},
'min_ram': {
'type': 'integer',
'description': _('Amount of ram (in MB) required to boot image.'),
},
'min_disk': {
'type': 'integer',
'description': _('Amount of disk space (in GB) required to boot '
'image.'),
},
'self': {
'type': 'string',
'description': '(READ-ONLY)'
},
'file': {
'type': 'string',
'description': '(READ-ONLY)'
},
'schema': {
'type': 'string',
'description': '(READ-ONLY)'
},
'locations': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'url': {
'type': 'string',
'maxLength': 255,
},
'metadata': {
'type': 'object',
},
},
'required': ['url', 'metadata'],
},
'description': _('A set of URLs to access the image file kept in '
'external store'),
},
}
def _get_base_links():
return [
{'rel': 'self', 'href': '{self}'},
{'rel': 'enclosure', 'href': '{file}'},
{'rel': 'describedby', 'href': '{schema}'},
]
def get_schema(custom_properties=None):
properties = get_base_properties()
links = _get_base_links()
if CONF.allow_additional_image_properties:
schema = glance.schema.PermissiveSchema('image', properties, links)
else:
schema = glance.schema.Schema('image', properties)
if custom_properties:
for property_value in custom_properties.values():
property_value['is_base'] = False
schema.merge_properties(custom_properties)
return schema
def get_collection_schema(custom_properties=None):
image_schema = get_schema(custom_properties)
return glance.schema.CollectionSchema('images', image_schema)
def load_custom_properties():
"""Find the schema properties files and load them into a dict."""
filename = 'schema-image.json'
match = CONF.find_file(filename)
if match:
with open(match, 'r') as schema_file:
schema_data = schema_file.read()
return json.loads(schema_data)
else:
msg = (_LW('Could not find schema properties file %s. Continuing '
'without custom properties') % filename)
LOG.warn(msg)
return {}
def create_resource(custom_properties=None):
"""Images resource factory method"""
schema = get_schema(custom_properties)
deserializer = RequestDeserializer(schema)
serializer = ResponseSerializer(schema)
controller = ImagesController()
return wsgi.Resource(controller, deserializer, serializer)
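As a quick illustration of the escape rules enforced by `_decode_json_pointer` and `_validate_json_pointer` above, here is a minimal standalone sketch (a hypothetical helper, not part of Glance) showing how `'~1'` and `'~0'` decode in PATCH paths:

```python
# Minimal standalone sketch of the JSON-pointer decoding used by the
# v2 PATCH API above (hypothetical helper, not Glance's actual code).
# Per the JSON Pointer draft, '~1' unescapes to '/' and '~0' to '~'.
def decode_json_pointer(pointer):
    if not pointer.startswith('/'):
        raise ValueError('Pointer `%s` does not start with "/".' % pointer)
    parts = []
    for part in pointer.lstrip('/').split('/'):
        # Replace '~1' before '~0' so that '~01' decodes to '~1',
        # not to '/'.
        parts.append(part.replace('~1', '/').replace('~0', '~'))
    return parts
```

For example, a patch path `'/locations/0'` decodes to `['locations', '0']`, and `'/a~1b'` to `['a/b']`, which is the `path` list that `_validate_path` and the `_do_*` handlers then operate on.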


@@ -1,286 +0,0 @@
#!/usr/bin/env python
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Routines for configuring Glance
"""
import logging
import logging.config
import logging.handlers
import os
from oslo.config import cfg
from paste import deploy
from glance.version import version_info as version
paste_deploy_opts = [
cfg.StrOpt('flavor',
help=_('Partial name of a pipeline in your paste configuration '
'file with the service name removed. For example, if '
'your paste section name is '
'[pipeline:glance-api-keystone] use the value '
'"keystone"')),
cfg.StrOpt('config_file',
help=_('Name of the paste configuration file.')),
]
image_format_opts = [
cfg.ListOpt('container_formats',
default=['ami', 'ari', 'aki', 'bare', 'ovf', 'ova'],
help=_("Supported values for the 'container_format' "
"image attribute"),
deprecated_opts=[cfg.DeprecatedOpt('container_formats',
group='DEFAULT')]),
cfg.ListOpt('disk_formats',
default=['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2',
'vdi', 'iso'],
help=_("Supported values for the 'disk_format' "
"image attribute"),
deprecated_opts=[cfg.DeprecatedOpt('disk_formats',
group='DEFAULT')]),
]
task_opts = [
cfg.IntOpt('task_time_to_live',
default=48,
               help=_("Time in hours for which a task lives after either "
                      "succeeding or failing"),
deprecated_opts=[cfg.DeprecatedOpt('task_time_to_live',
group='DEFAULT')]),
cfg.StrOpt('task_executor',
default='eventlet',
help=_("Specifies which task executor to be used to run the "
"task scripts.")),
cfg.IntOpt('eventlet_executor_pool_size',
default=1000,
help=_("Specifies the maximum number of eventlet threads which "
"can be spun up by the eventlet based task executor to "
"perform execution of Glance tasks.")),
]
manage_opts = [
cfg.BoolOpt('db_enforce_mysql_charset',
default=True,
help=_('DEPRECATED. TO BE REMOVED IN THE JUNO RELEASE. '
'Whether or not to enforce that all DB tables have '
'charset utf8. If your database tables do not have '
'charset utf8 you will need to convert before this '
'option is removed. This option is only relevant if '
'your database engine is MySQL.'))
]
common_opts = [
cfg.BoolOpt('allow_additional_image_properties', default=True,
help=_('Whether to allow users to specify image properties '
'beyond what the image schema provides')),
cfg.IntOpt('image_member_quota', default=128,
help=_('Maximum number of image members per image. '
'Negative values evaluate to unlimited.')),
cfg.IntOpt('image_property_quota', default=128,
help=_('Maximum number of properties allowed on an image. '
'Negative values evaluate to unlimited.')),
cfg.IntOpt('image_tag_quota', default=128,
help=_('Maximum number of tags allowed on an image. '
'Negative values evaluate to unlimited.')),
cfg.IntOpt('image_location_quota', default=10,
help=_('Maximum number of locations allowed on an image. '
'Negative values evaluate to unlimited.')),
cfg.StrOpt('data_api', default='glance.db.sqlalchemy.api',
help=_('Python module path of data access API')),
cfg.IntOpt('limit_param_default', default=25,
help=_('Default value for the number of items returned by a '
'request if not specified explicitly in the request')),
cfg.IntOpt('api_limit_max', default=1000,
help=_('Maximum permissible number of items that could be '
'returned by a request')),
cfg.BoolOpt('show_image_direct_url', default=False,
help=_('Whether to include the backend image storage location '
'in image properties. Revealing storage location can '
'be a security risk, so use this setting with '
'caution!')),
cfg.BoolOpt('show_multiple_locations', default=False,
help=_('Whether to include the backend image locations '
'in image properties. Revealing storage location can '
'be a security risk, so use this setting with '
                       'caution! This option overrides '
                       'show_image_direct_url.')),
cfg.IntOpt('image_size_cap', default=1099511627776,
help=_("Maximum size of image a user can upload in bytes. "
"Defaults to 1099511627776 bytes (1 TB).")),
cfg.StrOpt('user_storage_quota', default='0',
help=_("Set a system wide quota for every user. This value is "
"the total capacity that a user can use across "
                      "all storage systems. A value of 0 means unlimited. "
"Optional unit can be specified for the value. Accepted "
"units are B, KB, MB, GB and TB representing "
                      "Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes "
"respectively. If no unit is specified then Bytes is "
"assumed. Note that there should not be any space "
"between value and unit and units are case sensitive.")),
cfg.BoolOpt('enable_v1_api', default=True,
help=_("Deploy the v1 OpenStack Images API.")),
cfg.BoolOpt('enable_v2_api', default=True,
help=_("Deploy the v2 OpenStack Images API.")),
cfg.BoolOpt('enable_v1_registry', default=True,
help=_("Deploy the v1 OpenStack Registry API.")),
cfg.BoolOpt('enable_v2_registry', default=True,
help=_("Deploy the v2 OpenStack Registry API.")),
cfg.StrOpt('pydev_worker_debug_host',
help=_('The hostname/IP of the pydev process listening for '
'debug connections')),
cfg.IntOpt('pydev_worker_debug_port', default=5678,
help=_('The port on which a pydev process is listening for '
'connections.')),
cfg.StrOpt('metadata_encryption_key', secret=True,
help=_('Key used for encrypting sensitive metadata while '
'talking to the registry or database.')),
cfg.BoolOpt('sync_enabled', default=False,
help=_("Whether to launch the Sync function.")),
cfg.StrOpt('sync_server_host', default='127.0.0.1',
               help=_('IP address of the host running sync_web_server.')),
cfg.IntOpt('sync_server_port', default=9595,
               help=_('Port on which sync_web_server listens.')),
]
sync_opts = [
cfg.StrOpt('cascading_endpoint_url', default='http://127.0.0.1:9292/',
               help=_('Endpoint URL of the cascading Glance service.'),
deprecated_opts=[cfg.DeprecatedOpt('cascading_endpoint_url',
group='DEFAULT')]),
cfg.StrOpt('sync_strategy', default='None',
help=_("Define the sync strategy, value can be All/User/None."),
deprecated_opts=[cfg.DeprecatedOpt('sync_strategy',
group='DEFAULT')]),
cfg.IntOpt('snapshot_timeout', default=300,
               help=_('Maximum time in seconds to wait for a snapshot '
                      'to become active.'),
deprecated_opts=[cfg.DeprecatedOpt('snapshot_timeout',
group='DEFAULT')]),
cfg.IntOpt('snapshot_sleep_interval', default=10,
               help=_('Sleep interval in seconds while waiting for a '
                      'snapshot to become active.'),
deprecated_opts=[cfg.DeprecatedOpt('snapshot_sleep_interval',
group='DEFAULT')]),
cfg.IntOpt('task_retry_times', default=0,
               help=_('Number of times to retry a failed sync task.'),
deprecated_opts=[cfg.DeprecatedOpt('task_retry_times',
group='DEFAULT')]),
cfg.IntOpt('scp_copy_timeout', default=3600,
               help=_('Maximum time in seconds to wait for an scp copy '
                      'operation to complete.'),
deprecated_opts=[cfg.DeprecatedOpt('scp_copy_timeout',
group='DEFAULT')]),
]
CONF = cfg.CONF
CONF.register_opts(paste_deploy_opts, group='paste_deploy')
CONF.register_opts(image_format_opts, group='image_format')
CONF.register_opts(task_opts, group='task')
CONF.register_opts(sync_opts, group='sync')
CONF.register_opts(manage_opts)
CONF.register_opts(common_opts)
def parse_args(args=None, usage=None, default_config_files=None):
CONF(args=args,
project='glance',
version=version.cached_version_string(),
usage=usage,
default_config_files=default_config_files)
def parse_cache_args(args=None):
config_files = cfg.find_config_files(project='glance', prog='glance-cache')
parse_args(args=args, default_config_files=config_files)
def _get_deployment_flavor(flavor=None):
"""
Retrieve the paste_deploy.flavor config item, formatted appropriately
for appending to the application name.
:param flavor: if specified, use this setting rather than the
paste_deploy.flavor configuration setting
"""
if not flavor:
flavor = CONF.paste_deploy.flavor
return '' if not flavor else ('-' + flavor)
def _get_paste_config_path():
paste_suffix = '-paste.ini'
conf_suffix = '.conf'
if CONF.config_file:
# Assume paste config is in a paste.ini file corresponding
# to the last config file
path = CONF.config_file[-1].replace(conf_suffix, paste_suffix)
else:
path = CONF.prog + paste_suffix
return CONF.find_file(os.path.basename(path))
def _get_deployment_config_file():
"""
Retrieve the deployment_config_file config item, formatted as an
absolute pathname.
"""
path = CONF.paste_deploy.config_file
if not path:
path = _get_paste_config_path()
if not path:
msg = _("Unable to locate paste config file for %s.") % CONF.prog
raise RuntimeError(msg)
return os.path.abspath(path)
def load_paste_app(app_name, flavor=None, conf_file=None):
"""
Builds and returns a WSGI app from a paste config file.
We assume the last config file specified in the supplied ConfigOpts
object is the paste config file, if conf_file is None.
:param app_name: name of the application to load
:param flavor: name of the variant of the application to load
:param conf_file: path to the paste config file
:raises RuntimeError when config file cannot be located or application
cannot be loaded from config file
"""
# append the deployment flavor to the application name,
# in order to identify the appropriate paste pipeline
app_name += _get_deployment_flavor(flavor)
if not conf_file:
conf_file = _get_deployment_config_file()
try:
logger = logging.getLogger(__name__)
logger.debug("Loading %(app_name)s from %(conf_file)s",
{'conf_file': conf_file, 'app_name': app_name})
app = deploy.loadapp("config:%s" % conf_file, name=app_name)
# Log the options used when starting if we're in debug mode...
if CONF.debug:
CONF.log_opt_values(logger, logging.DEBUG)
return app
except (LookupError, ImportError) as e:
msg = (_("Unable to load %(app_name)s from "
"configuration file %(conf_file)s."
"\nGot: %(e)r") % {'app_name': app_name,
'conf_file': conf_file,
'e': e})
logger.error(msg)
raise RuntimeError(msg)
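The `user_storage_quota` option registered above accepts an integer with an optional case-sensitive unit suffix and no intervening space. A minimal sketch of such a parser (hypothetical, assuming 1024-based units; not Glance's actual implementation):

```python
# Hypothetical parser for the user_storage_quota format described in
# common_opts above: an integer with an optional case-sensitive unit
# (B, KB, MB, GB, TB) and no space between value and unit; a bare
# number is taken as bytes. Units here are assumed 1024-based.
_UNITS = {'B': 1, 'KB': 1024, 'MB': 1024 ** 2,
          'GB': 1024 ** 3, 'TB': 1024 ** 4}

def parse_user_storage_quota(value):
    # Try longer suffixes first so '10KB' is not misread as ending in 'B'.
    for unit in sorted(_UNITS, key=len, reverse=True):
        if value.endswith(unit):
            return int(value[:-len(unit)]) * _UNITS[unit]
    return int(value)  # no unit: assume bytes
```

So a configured value of `'2GB'` would yield `2 * 1024 ** 3` bytes, while `'0'` keeps the documented "unlimited" meaning.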


@@ -1,422 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Glance exception subclasses"""
import six
import six.moves.urllib.parse as urlparse
_FATAL_EXCEPTION_FORMAT_ERRORS = False
class RedirectException(Exception):
def __init__(self, url):
self.url = urlparse.urlparse(url)
class GlanceException(Exception):
"""
Base Glance Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred")
def __init__(self, message=None, *args, **kwargs):
if not message:
message = self.message
try:
if kwargs:
message = message % kwargs
except Exception:
if _FATAL_EXCEPTION_FORMAT_ERRORS:
raise
else:
# at least get the core message out if something happened
pass
self.msg = message
super(GlanceException, self).__init__(message)
def __unicode__(self):
# NOTE(flwang): By default, self.msg is an instance of Message, which
# can't be converted by str(). Based on the definition of
# __unicode__, it should return unicode always.
return six.text_type(self.msg)
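The docstring above describes the intended usage pattern: subclass, define a `message` template, and pass keyword arguments to the constructor. A minimal standalone sketch of that pattern, with a plain string in place of glance's gettext `_()` wrapper:

```python
class BaseError(Exception):
    """Mimics GlanceException: subclasses define a 'message' template
    that gets printf-formatted with the constructor's keyword args."""
    message = "An unknown exception occurred"

    def __init__(self, message=None, **kwargs):
        if not message:
            message = self.message
        try:
            if kwargs:
                message = message % kwargs
        except Exception:
            # swallow formatting errors so at least the template survives
            pass
        self.msg = message
        super(BaseError, self).__init__(message)


class MissingCredential(BaseError):
    message = "Missing required credential: %(required)s"


print(MissingCredential(required="password").msg)
# Missing required credential: password
```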
class MissingCredentialError(GlanceException):
message = _("Missing required credential: %(required)s")
class BadAuthStrategy(GlanceException):
message = _("Incorrect auth strategy, expected \"%(expected)s\" but "
"received \"%(received)s\"")
class NotFound(GlanceException):
message = _("An object with the specified identifier was not found.")
class BadStoreUri(GlanceException):
message = _("The Store URI was malformed.")
class Duplicate(GlanceException):
message = _("An object with the same identifier already exists.")
class Conflict(GlanceException):
message = _("An object with the same identifier is currently being "
"operated on.")
class StorageQuotaFull(GlanceException):
message = _("The size of the data %(image_size)s will exceed the limit. "
"%(remaining)s bytes remaining.")
class AuthBadRequest(GlanceException):
message = _("Connect error/bad request to Auth service at URL %(url)s.")
class AuthUrlNotFound(GlanceException):
message = _("Auth service at URL %(url)s not found.")
class AuthorizationFailure(GlanceException):
message = _("Authorization failed.")
class NotAuthenticated(GlanceException):
message = _("You are not authenticated.")
class Forbidden(GlanceException):
message = _("You are not authorized to complete this action.")
class ForbiddenPublicImage(Forbidden):
message = _("You are not authorized to complete this action.")
class ProtectedImageDelete(Forbidden):
message = _("Image %(image_id)s is protected and cannot be deleted.")
class ProtectedMetadefNamespaceDelete(Forbidden):
message = _("Metadata definition namespace %(namespace)s is protected"
" and cannot be deleted.")
class ProtectedMetadefNamespacePropDelete(Forbidden):
message = _("Metadata definition property %(property_name)s is protected"
" and cannot be deleted.")
class ProtectedMetadefObjectDelete(Forbidden):
message = _("Metadata definition object %(object_name)s is protected"
" and cannot be deleted.")
class ProtectedMetadefResourceTypeAssociationDelete(Forbidden):
message = _("Metadata definition resource-type-association"
" %(resource_type)s is protected and cannot be deleted.")
class ProtectedMetadefResourceTypeSystemDelete(Forbidden):
message = _("Metadata definition resource-type %(resource_type_name)s is"
" a seeded-system type and cannot be deleted.")
class Invalid(GlanceException):
message = _("Data supplied was not valid.")
class InvalidSortKey(Invalid):
message = _("Sort key supplied was not valid.")
class InvalidPropertyProtectionConfiguration(Invalid):
message = _("Invalid configuration in property protection file.")
class InvalidSwiftStoreConfiguration(Invalid):
message = _("Invalid configuration in glance-swift conf file.")
class InvalidFilterRangeValue(Invalid):
message = _("Unable to filter using the specified range.")
class ReadonlyProperty(Forbidden):
message = _("Attribute '%(property)s' is read-only.")
class ReservedProperty(Forbidden):
message = _("Attribute '%(property)s' is reserved.")
class AuthorizationRedirect(GlanceException):
message = _("Redirecting to %(uri)s for authorization.")
class ClientConnectionError(GlanceException):
message = _("There was an error connecting to a server")
class ClientConfigurationError(GlanceException):
message = _("There was an error configuring the client.")
class MultipleChoices(GlanceException):
message = _("The request returned a 302 Multiple Choices. This generally "
"means that you have not included a version indicator in a "
"request URI.\n\nThe body of response returned:\n%(body)s")
class LimitExceeded(GlanceException):
message = _("The request returned a 413 Request Entity Too Large. This "
"generally means that rate limiting or a quota threshold was "
"breached.\n\nThe response body:\n%(body)s")
def __init__(self, *args, **kwargs):
self.retry_after = (int(kwargs['retry']) if kwargs.get('retry')
else None)
super(LimitExceeded, self).__init__(*args, **kwargs)
class ServiceUnavailable(GlanceException):
message = _("The request returned 503 Service Unavailable. This "
"generally occurs on service overload or other transient "
"outage.")
def __init__(self, *args, **kwargs):
self.retry_after = (int(kwargs['retry']) if kwargs.get('retry')
else None)
super(ServiceUnavailable, self).__init__(*args, **kwargs)
class ServerError(GlanceException):
message = _("The request returned 500 Internal Server Error.")
class UnexpectedStatus(GlanceException):
message = _("The request returned an unexpected status: %(status)s."
"\n\nThe response body:\n%(body)s")
class InvalidContentType(GlanceException):
message = _("Invalid content type %(content_type)s")
class BadRegistryConnectionConfiguration(GlanceException):
message = _("Registry was not configured correctly on API server. "
"Reason: %(reason)s")
class BadDriverConfiguration(GlanceException):
message = _("Driver %(driver_name)s could not be configured correctly. "
"Reason: %(reason)s")
class MaxRedirectsExceeded(GlanceException):
message = _("Maximum redirects (%(redirects)s) was exceeded.")
class InvalidRedirect(GlanceException):
message = _("Received invalid HTTP redirect.")
class NoServiceEndpoint(GlanceException):
message = _("Response from Keystone does not contain a Glance endpoint.")
class RegionAmbiguity(GlanceException):
message = _("Multiple 'image' service matches for region %(region)s. This "
"generally means that a region is required and you have not "
"supplied one.")
class WorkerCreationFailure(GlanceException):
message = _("Server worker creation failed: %(reason)s.")
class SchemaLoadError(GlanceException):
message = _("Unable to load schema: %(reason)s")
class InvalidObject(GlanceException):
message = _("Provided object does not match schema "
"'%(schema)s': %(reason)s")
class UnsupportedHeaderFeature(GlanceException):
message = _("Provided header feature is unsupported: %(feature)s")
class InUseByStore(GlanceException):
message = _("The image cannot be deleted because it is in use through "
"the backend store outside of Glance.")
class ImageSizeLimitExceeded(GlanceException):
message = _("The provided image is too large.")
class ImageMemberLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"members for this image. Attempted: %(attempted)s, "
"Maximum: %(maximum)s")
class ImagePropertyLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"properties. Attempted: %(attempted)s, Maximum: %(maximum)s")
class ImageTagLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"tags. Attempted: %(attempted)s, Maximum: %(maximum)s")
class ImageLocationLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"locations. Attempted: %(attempted)s, Maximum: %(maximum)s")
class RPCError(GlanceException):
message = _("%(cls)s exception was raised in the last rpc call: %(val)s")
class TaskException(GlanceException):
message = _("An unknown task exception occurred")
class TaskNotFound(TaskException, NotFound):
message = _("Task with the given id %(task_id)s was not found")
class InvalidTaskStatus(TaskException, Invalid):
message = _("Provided status of task is unsupported: %(status)s")
class InvalidTaskType(TaskException, Invalid):
message = _("Provided type of task is unsupported: %(type)s")
class InvalidTaskStatusTransition(TaskException, Invalid):
message = _("Status transition from %(cur_status)s to"
" %(new_status)s is not allowed")
class DuplicateLocation(Duplicate):
message = _("The location %(location)s already exists")
class ImageDataNotFound(NotFound):
message = _("No image data could be found")
class InvalidParameterValue(Invalid):
message = _("Invalid value '%(value)s' for parameter '%(param)s': "
"%(extra_msg)s")
class InvalidImageStatusTransition(Invalid):
message = _("Image status transition from %(cur_status)s to"
" %(new_status)s is not allowed")
class MetadefDuplicateNamespace(Duplicate):
message = _("The metadata definition namespace=%(namespace_name)s"
" already exists.")
class MetadefDuplicateObject(Duplicate):
message = _("A metadata definition object with name=%(object_name)s"
" already exists in namespace=%(namespace_name)s.")
class MetadefDuplicateProperty(Duplicate):
message = _("A metadata definition property with name=%(property_name)s"
" already exists in namespace=%(namespace_name)s.")
class MetadefDuplicateResourceType(Duplicate):
message = _("A metadata definition resource-type with"
" name=%(resource_type_name)s already exists.")
class MetadefDuplicateResourceTypeAssociation(Duplicate):
message = _("The metadata definition resource-type association of"
" resource-type=%(resource_type_name)s to"
" namespace=%(namespace_name)s"
" already exists.")
class MetadefForbidden(Forbidden):
message = _("You are not authorized to complete this action.")
class MetadefIntegrityError(Forbidden):
message = _("The metadata definition %(record_type)s with"
" name=%(record_name)s not deleted."
" Other records still refer to it.")
class MetadefNamespaceNotFound(NotFound):
message = _("Metadata definition namespace=%(namespace_name)s"
"was not found.")
class MetadefObjectNotFound(NotFound):
message = _("The metadata definition object with"
" name=%(object_name)s was not found in"
" namespace=%(namespace_name)s.")
class MetadefPropertyNotFound(NotFound):
message = _("The metadata definition property with"
" name=%(property_name)s was not found in"
" namespace=%(namespace_name)s.")
class MetadefResourceTypeNotFound(NotFound):
message = _("The metadata definition resource-type with"
" name=%(resource_type_name)s, was not found.")
class MetadefResourceTypeAssociationNotFound(NotFound):
message = _("The metadata definition resource-type association of"
" resource-type=%(resource_type_name)s to"
" namespace=%(namespace_name)s,"
" was not found.")
class MetadefRecordNotFound(NotFound):
message = _("Metadata definition %(record_type)s record not found"
" for id %(id)s.")
class SyncServiceOperationError(GlanceException):
message = _("Image sync service execute failed with reason: %(reason)s")
class SyncStoreCopyError(GlanceException):
message = _("Image sync store failed with reason: %(reason)s")


@@ -1,657 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2014 SoftLayer Technologies, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
System-level utilities and helper functions.
"""
import errno
try:
from eventlet import sleep
except ImportError:
from time import sleep
from eventlet.green import socket
import functools
import os
import platform
import re
import subprocess
import sys
import uuid
import netaddr
from OpenSSL import crypto
from oslo.config import cfg
from webob import exc
import six
from glance.common import exception
from glance.openstack.common import excutils
import glance.openstack.common.log as logging
from glance.openstack.common import network_utils
from glance.openstack.common import strutils
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
FEATURE_BLACKLIST = ['content-length', 'content-type', 'x-image-meta-size']
# Whitelist of v1 API headers of form x-image-meta-xxx
IMAGE_META_HEADERS = ['x-image-meta-location', 'x-image-meta-size',
'x-image-meta-is_public', 'x-image-meta-disk_format',
'x-image-meta-container_format', 'x-image-meta-name',
'x-image-meta-status', 'x-image-meta-copy_from',
'x-image-meta-uri', 'x-image-meta-checksum',
'x-image-meta-created_at', 'x-image-meta-updated_at',
'x-image-meta-deleted_at', 'x-image-meta-min_ram',
'x-image-meta-min_disk', 'x-image-meta-owner',
'x-image-meta-store', 'x-image-meta-id',
'x-image-meta-protected', 'x-image-meta-deleted']
GLANCE_TEST_SOCKET_FD_STR = 'GLANCE_TEST_SOCKET_FD'
def chunkreadable(iter, chunk_size=65536):
"""
Wrap a readable iterator with a reader yielding chunks of
a preferred size, otherwise leave iterator unchanged.
:param iter: an iter which may also be readable
:param chunk_size: maximum size of chunk
"""
return chunkiter(iter, chunk_size) if hasattr(iter, 'read') else iter
def chunkiter(fp, chunk_size=65536):
"""
Return an iterator to a file-like obj which yields fixed size chunks
:param fp: a file-like object
:param chunk_size: maximum size of chunk
"""
while True:
chunk = fp.read(chunk_size)
if chunk:
yield chunk
else:
break
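`chunkiter` reads fixed-size chunks until EOF; a quick self-contained demonstration of the same loop against an in-memory buffer:

```python
import io

def chunkiter(fp, chunk_size=65536):
    # Yield fixed-size chunks from a file-like object until it is exhausted.
    while True:
        chunk = fp.read(chunk_size)
        if not chunk:
            break
        yield chunk

chunks = list(chunkiter(io.BytesIO(b"abcdefghij"), chunk_size=4))
print(chunks)  # [b'abcd', b'efgh', b'ij']
```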
def cooperative_iter(iter):
"""
Return an iterator which schedules after each
iteration. This can prevent eventlet thread starvation.
:param iter: an iterator to wrap
"""
try:
for chunk in iter:
sleep(0)
yield chunk
except Exception as err:
with excutils.save_and_reraise_exception():
msg = _("Error: cooperative_iter exception %s") % err
LOG.error(msg)
def cooperative_read(fd):
"""
Wrap a file descriptor's read with a partial function which schedules
after each read. This can prevent eventlet thread starvation.
:param fd: a file descriptor to wrap
"""
def readfn(*args):
result = fd.read(*args)
sleep(0)
return result
return readfn
class CooperativeReader(object):
"""
An eventlet thread friendly class for reading in image data.
When accessing data either through the iterator or the read method
we perform a sleep to allow a co-operative yield. When there is more than
one image being uploaded/downloaded this prevents eventlet thread
starvation, ie allows all threads to be scheduled periodically rather than
having the same thread be continuously active.
"""
def __init__(self, fd):
"""
:param fd: Underlying image file object
"""
self.fd = fd
self.iterator = None
# NOTE(markwash): if the underlying supports read(), overwrite the
# default iterator-based implementation with cooperative_read which
# is more straightforward
if hasattr(fd, 'read'):
self.read = cooperative_read(fd)
def read(self, length=None):
"""Return the next chunk of the underlying iterator.
This is replaced with cooperative_read in __init__ if the underlying
fd already supports read().
"""
if self.iterator is None:
self.iterator = self.__iter__()
try:
return next(self.iterator)
except StopIteration:
return ''
def __iter__(self):
return cooperative_iter(self.fd.__iter__())
class LimitingReader(object):
"""
Reader designed to fail when reading image data past the configured
allowable amount.
"""
def __init__(self, data, limit):
"""
:param data: Underlying image data object
:param limit: maximum number of bytes the reader should allow
"""
self.data = data
self.limit = limit
self.bytes_read = 0
def __iter__(self):
for chunk in self.data:
self.bytes_read += len(chunk)
if self.bytes_read > self.limit:
raise exception.ImageSizeLimitExceeded()
else:
yield chunk
def read(self, i):
result = self.data.read(i)
self.bytes_read += len(result)
if self.bytes_read > self.limit:
raise exception.ImageSizeLimitExceeded()
return result
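The byte-cap behaviour of `LimitingReader` can be sketched standalone, with a plain `ValueError` standing in for `exception.ImageSizeLimitExceeded`:

```python
import io

class CappedReader(object):
    """Counts bytes as they are read and fails once the running total
    exceeds the configured limit, mirroring LimitingReader.read()."""
    def __init__(self, data, limit):
        self.data = data
        self.limit = limit
        self.bytes_read = 0

    def read(self, n):
        chunk = self.data.read(n)
        self.bytes_read += len(chunk)
        if self.bytes_read > self.limit:
            raise ValueError("size limit exceeded")
        return chunk

reader = CappedReader(io.BytesIO(b"x" * 16), limit=8)
assert reader.read(8) == b"x" * 8   # within the limit
try:
    reader.read(8)                  # pushes the running total past 8
except ValueError as e:
    print(e)
```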
def image_meta_to_http_headers(image_meta):
"""
Returns a set of image metadata into a dict
of HTTP headers that can be fed to either a Webob
Request object or an httplib.HTTP(S)Connection object
:param image_meta: Mapping of image metadata
"""
headers = {}
for k, v in image_meta.items():
if v is not None:
if k == 'properties':
for pk, pv in v.items():
if pv is not None:
headers["x-image-meta-property-%s"
% pk.lower()] = six.text_type(pv)
else:
headers["x-image-meta-%s" % k.lower()] = six.text_type(v)
return headers
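Running a hypothetical metadata mapping (sample values are illustrative) through the same flattening rule as `image_meta_to_http_headers` — plain keys become `x-image-meta-*`, entries under `properties` become `x-image-meta-property-*`, and `None` values are dropped:

```python
def meta_to_headers(image_meta):
    # Same flattening rule as image_meta_to_http_headers above.
    headers = {}
    for k, v in image_meta.items():
        if v is None:
            continue
        if k == 'properties':
            for pk, pv in v.items():
                if pv is not None:
                    headers['x-image-meta-property-%s' % pk.lower()] = str(pv)
        else:
            headers['x-image-meta-%s' % k.lower()] = str(v)
    return headers

meta = {'name': 'cirros', 'size': 1024,
        'properties': {'arch': 'x86_64'}, 'checksum': None}
print(meta_to_headers(meta))
# {'x-image-meta-name': 'cirros', 'x-image-meta-size': '1024',
#  'x-image-meta-property-arch': 'x86_64'}
```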
def get_image_meta_from_headers(response):
"""
Processes HTTP headers from a supplied response that
match the x-image-meta and x-image-meta-property and
returns a mapping of image metadata and properties
:param response: Response to process
"""
result = {}
properties = {}
if hasattr(response, 'getheaders'): # httplib.HTTPResponse
headers = response.getheaders()
else: # webob.Response
headers = response.headers.items()
for key, value in headers:
key = str(key.lower())
if key.startswith('x-image-meta-property-'):
field_name = key[len('x-image-meta-property-'):].replace('-', '_')
properties[field_name] = value or None
elif key.startswith('x-image-meta-'):
field_name = key[len('x-image-meta-'):].replace('-', '_')
if 'x-image-meta-' + field_name not in IMAGE_META_HEADERS:
msg = _("Bad header: %(header_name)s") % {'header_name': key}
raise exc.HTTPBadRequest(msg, content_type="text/plain")
result[field_name] = value or None
result['properties'] = properties
for key in ('size', 'min_disk', 'min_ram'):
if key in result:
try:
result[key] = int(result[key])
except ValueError:
extra = (_("Cannot convert image %(key)s '%(value)s' "
"to an integer.")
% {'key': key, 'value': result[key]})
raise exception.InvalidParameterValue(value=result[key],
param=key,
extra_msg=extra)
if result[key] < 0:
extra = (_("Image %(key)s must be >= 0 "
"('%(value)s' specified).")
% {'key': key, 'value': result[key]})
raise exception.InvalidParameterValue(value=result[key],
param=key,
extra_msg=extra)
for key in ('is_public', 'deleted', 'protected'):
if key in result:
result[key] = strutils.bool_from_string(result[key])
return result
def create_mashup_dict(image_meta):
"""
Returns a dictionary-like mashup of the image core properties
and the image custom properties from given image metadata.
:param image_meta: metadata of image with core and custom properties
"""
def get_items():
for key, value in six.iteritems(image_meta):
if isinstance(value, dict):
for subkey, subvalue in six.iteritems(
create_mashup_dict(value)):
if subkey not in image_meta:
yield subkey, subvalue
else:
yield key, value
return dict(get_items())
def safe_mkdirs(path):
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
def safe_remove(path):
try:
os.remove(path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
class PrettyTable(object):
"""Creates an ASCII art table for use in bin/glance
Example:
ID  Name              Size         Hits
--- ----------------- ------------ -----
122 image                       22     0
"""
def __init__(self):
self.columns = []
def add_column(self, width, label="", just='l'):
"""Add a column to the table
:param width: number of characters wide the column should be
:param label: column heading
:param just: justification for the column, 'l' for left,
'r' for right
"""
self.columns.append((width, label, just))
def make_header(self):
label_parts = []
break_parts = []
for width, label, _ in self.columns:
# NOTE(sirp): headers are always left justified
label_part = self._clip_and_justify(label, width, 'l')
label_parts.append(label_part)
break_part = '-' * width
break_parts.append(break_part)
label_line = ' '.join(label_parts)
break_line = ' '.join(break_parts)
return '\n'.join([label_line, break_line])
def make_row(self, *args):
row = args
row_parts = []
for data, (width, _, just) in zip(row, self.columns):
row_part = self._clip_and_justify(data, width, just)
row_parts.append(row_part)
row_line = ' '.join(row_parts)
return row_line
@staticmethod
def _clip_and_justify(data, width, just):
# clip field to column width
clipped_data = str(data)[:width]
if just == 'r':
# right justify
justified = clipped_data.rjust(width)
else:
# left justify
justified = clipped_data.ljust(width)
return justified
def get_terminal_size():
def _get_terminal_size_posix():
import fcntl
import struct
import termios
height_width = None
try:
height_width = struct.unpack('hh', fcntl.ioctl(sys.stderr.fileno(),
termios.TIOCGWINSZ,
struct.pack('HH', 0, 0)))
except Exception:
pass
if not height_width:
try:
p = subprocess.Popen(['stty', 'size'],
shell=False,
stdout=subprocess.PIPE,
stderr=open(os.devnull, 'w'))
result = p.communicate()
if p.returncode == 0:
return tuple(int(x) for x in result[0].split())
except Exception:
pass
return height_width
def _get_terminal_size_win32():
try:
from ctypes import create_string_buffer
from ctypes import windll
handle = windll.kernel32.GetStdHandle(-12)
csbi = create_string_buffer(22)
res = windll.kernel32.GetConsoleScreenBufferInfo(handle, csbi)
except Exception:
return None
if res:
import struct
unpack_tmp = struct.unpack("hhhhHhhhhhh", csbi.raw)
(bufx, bufy, curx, cury, wattr,
left, top, right, bottom, maxx, maxy) = unpack_tmp
height = bottom - top + 1
width = right - left + 1
return (height, width)
else:
return None
def _get_terminal_size_unknownOS():
raise NotImplementedError
func = {'posix': _get_terminal_size_posix,
'win32': _get_terminal_size_win32}
height_width = func.get(platform.os.name, _get_terminal_size_unknownOS)()
if height_width is None:
raise exception.Invalid()
for i in height_width:
if not isinstance(i, int) or i <= 0:
raise exception.Invalid()
return height_width[0], height_width[1]
def mutating(func):
"""Decorator to enforce read-only logic"""
@functools.wraps(func)
def wrapped(self, req, *args, **kwargs):
if req.context.read_only:
msg = "Read-only access"
LOG.debug(msg)
raise exc.HTTPForbidden(msg, request=req,
content_type="text/plain")
return func(self, req, *args, **kwargs)
return wrapped
def setup_remote_pydev_debug(host, port):
error_msg = _('Error setting up the debug environment. Verify that the'
' option pydev_worker_debug_host is pointing to a valid '
'hostname or IP on which a pydev server is listening on'
' the port indicated by pydev_worker_debug_port.')
try:
try:
from pydev import pydevd
except ImportError:
import pydevd
pydevd.settrace(host,
port=port,
stdoutToServer=True,
stderrToServer=True)
return True
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(error_msg)
class LazyPluggable(object):
"""A pluggable backend loaded lazily based on some value."""
def __init__(self, pivot, config_group=None, **backends):
self.__backends = backends
self.__pivot = pivot
self.__backend = None
self.__config_group = config_group
def __get_backend(self):
if not self.__backend:
if self.__config_group is None:
backend_name = CONF[self.__pivot]
else:
backend_name = CONF[self.__config_group][self.__pivot]
if backend_name not in self.__backends:
msg = _('Invalid backend: %s') % backend_name
raise exception.GlanceException(msg)
backend = self.__backends[backend_name]
if isinstance(backend, tuple):
name = backend[0]
fromlist = backend[1]
else:
name = backend
fromlist = backend
self.__backend = __import__(name, None, None, fromlist)
return self.__backend
def __getattr__(self, key):
backend = self.__get_backend()
return getattr(backend, key)
def validate_key_cert(key_file, cert_file):
try:
error_key_name = "private key"
error_filename = key_file
with open(key_file, 'r') as keyfile:
key_str = keyfile.read()
key = crypto.load_privatekey(crypto.FILETYPE_PEM, key_str)
error_key_name = "certificate"
error_filename = cert_file
with open(cert_file, 'r') as certfile:
cert_str = certfile.read()
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_str)
except IOError as ioe:
raise RuntimeError(_("There is a problem with your %(error_key_name)s "
"%(error_filename)s. Please verify it."
" Error: %(ioe)s") %
{'error_key_name': error_key_name,
'error_filename': error_filename,
'ioe': ioe})
except crypto.Error as ce:
raise RuntimeError(_("There is a problem with your %(error_key_name)s "
"%(error_filename)s. Please verify it. OpenSSL"
" error: %(ce)s") %
{'error_key_name': error_key_name,
'error_filename': error_filename,
'ce': ce})
try:
data = str(uuid.uuid4())
digest = "sha1"
out = crypto.sign(key, data, digest)
crypto.verify(cert, out, data, digest)
except crypto.Error as ce:
raise RuntimeError(_("There is a problem with your key pair. "
"Please verify that cert %(cert_file)s and "
"key %(key_file)s belong together. OpenSSL "
"error %(ce)s") % {'cert_file': cert_file,
'key_file': key_file,
'ce': ce})
def get_test_suite_socket():
global GLANCE_TEST_SOCKET_FD_STR
if GLANCE_TEST_SOCKET_FD_STR in os.environ:
fd = int(os.environ[GLANCE_TEST_SOCKET_FD_STR])
sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
sock = socket.SocketType(_sock=sock)
sock.listen(CONF.backlog)
del os.environ[GLANCE_TEST_SOCKET_FD_STR]
os.close(fd)
return sock
return None
def is_uuid_like(val):
"""Returns validation of a value as a UUID.
For our purposes, a UUID is a canonical form string:
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
"""
try:
return str(uuid.UUID(val)) == val
except (TypeError, ValueError, AttributeError):
return False
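The round-trip check above accepts only the canonical (lower-case, hyphenated) form; a quick demonstration:

```python
import uuid

def is_uuid_like(val):
    # A value is "uuid-like" only if round-tripping through uuid.UUID
    # reproduces it exactly, i.e. it is already in canonical form.
    try:
        return str(uuid.UUID(val)) == val
    except (TypeError, ValueError, AttributeError):
        return False

print(is_uuid_like('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'))  # True
print(is_uuid_like('AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA'))  # False: parses, but not canonical
print(is_uuid_like('not-a-uuid'))                            # False
```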
def is_valid_port(port):
"""Verify that port represents a valid port number."""
return str(port).isdigit() and int(port) > 0 and int(port) <= 65535
def is_valid_ipv4(address):
"""Verify that address represents a valid IPv4 address."""
try:
return netaddr.valid_ipv4(address)
except Exception:
return False
def is_valid_ipv6(address):
"""Verify that address represents a valid IPv6 address."""
try:
return netaddr.valid_ipv6(address)
except Exception:
return False
def is_valid_hostname(hostname):
"""Verify whether a hostname (not an FQDN) is valid."""
return re.match('^[a-zA-Z0-9-]+$', hostname) is not None
def is_valid_fqdn(fqdn):
"""Verify whether a host is a valid FQDN."""
return re.match(r'^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$', fqdn) is not None
def parse_valid_host_port(host_port):
"""
Given a "host:port" string, attempts to parse it as intelligently as
possible to determine if it is valid. This includes IPv6 [host]:port form,
IPv4 ip:port form, and hostname:port or fqdn:port form.
Invalid inputs will raise a ValueError, while valid inputs will return
a (host, port) tuple where the port will always be of type int.
"""
try:
try:
host, port = network_utils.parse_host_port(host_port)
except Exception:
raise ValueError(_('Host and port "%s" is not valid.') % host_port)
if not is_valid_port(port):
raise ValueError(_('Port "%s" is not valid.') % port)
# First check for valid IPv6 and IPv4 addresses, then a generic
# hostname. Failing those, if the host includes a period, then this
# should pass a very generic FQDN check. The FQDN check for letters at
# the tail end will weed out any hilariously absurd IPv4 addresses.
if not (is_valid_ipv6(host) or is_valid_ipv4(host) or
is_valid_hostname(host) or is_valid_fqdn(host)):
raise ValueError(_('Host "%s" is not valid.') % host)
except Exception as ex:
raise ValueError(_('%s '
'Please specify a host:port pair, where host is an '
'IPv4 address, IPv6 address, hostname, or FQDN. If '
'using an IPv6 address, enclose it in brackets '
'separately from the port (i.e., '
'"[fe80::a:b:c]:9876").') % ex)
return (host, int(port))
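The bracketed-IPv6 splitting that `network_utils.parse_host_port` provides can be sketched as follows — a simplified stand-in for illustration, not the oslo implementation:

```python
def parse_host_port(address, default_port=None):
    """Split "host:port", handling the [host]:port form for IPv6."""
    if address.startswith('['):
        # Bracketed IPv6: "[fe80::a:b:c]:9876" or bare "[::1]"
        host, _sep, port = address[1:].partition(']:')
        host = host.rstrip(']')        # strips the ']' when no port follows
        port = port or default_port
    elif address.count(':') == 1:
        # IPv4 or hostname with a port
        host, _sep, port = address.partition(':')
    else:
        # Bare host, or a raw (unbracketed) IPv6 address with no port
        host, port = address, default_port
    return host, int(port) if port is not None else None

print(parse_host_port('192.168.0.1:9292'))    # ('192.168.0.1', 9292)
print(parse_host_port('[fe80::a:b:c]:9876'))  # ('fe80::a:b:c', 9876)
```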
def exception_to_str(exc):
try:
error = six.text_type(exc)
except UnicodeError:
try:
error = str(exc)
except UnicodeError:
error = ("Caught '%(exception)s' exception." %
{"exception": exc.__class__.__name__})
return strutils.safe_encode(error, errors='ignore')


@@ -1,214 +0,0 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from glance.api import authorization
from glance.api import policy
from glance.api import property_protections
from glance.common import property_utils
from glance.common import store_utils
import glance.db
import glance.domain
import glance.location
import glance.notifier
import glance.quota
import glance_store
from glance.sync.client.v1 import api as syncapi
CONF = cfg.CONF
CONF.import_opt('sync_enabled', 'glance.common.config')
class Gateway(object):
def __init__(self, db_api=None, store_api=None, notifier=None,
policy_enforcer=None, sync_api=None):
self.db_api = db_api or glance.db.get_api()
self.store_api = store_api or glance_store
self.store_utils = store_utils
self.notifier = notifier or glance.notifier.Notifier()
self.policy = policy_enforcer or policy.Enforcer()
self.sync_api = sync_api or syncapi
def get_image_factory(self, context):
image_factory = glance.domain.ImageFactory()
store_image_factory = glance.location.ImageFactoryProxy(
image_factory, context, self.store_api, self.store_utils)
quota_image_factory = glance.quota.ImageFactoryProxy(
store_image_factory, context, self.db_api, self.store_utils)
policy_image_factory = policy.ImageFactoryProxy(
quota_image_factory, context, self.policy)
notifier_image_factory = glance.notifier.ImageFactoryProxy(
policy_image_factory, context, self.notifier)
if property_utils.is_property_protection_enabled():
property_rules = property_utils.PropertyRules(self.policy)
protected_image_factory = property_protections.\
ProtectedImageFactoryProxy(notifier_image_factory, context,
property_rules)
authorized_image_factory = authorization.ImageFactoryProxy(
protected_image_factory, context)
else:
authorized_image_factory = authorization.ImageFactoryProxy(
notifier_image_factory, context)
if CONF.sync_enabled:
sync_image_factory = glance.sync.ImageFactoryProxy(
authorized_image_factory, context, self.sync_api)
return sync_image_factory
return authorized_image_factory
def get_image_member_factory(self, context):
image_factory = glance.domain.ImageMemberFactory()
quota_image_factory = glance.quota.ImageMemberFactoryProxy(
image_factory, context, self.db_api, self.store_utils)
policy_member_factory = policy.ImageMemberFactoryProxy(
quota_image_factory, context, self.policy)
authorized_image_factory = authorization.ImageMemberFactoryProxy(
policy_member_factory, context)
return authorized_image_factory
def get_repo(self, context):
image_repo = glance.db.ImageRepo(context, self.db_api)
store_image_repo = glance.location.ImageRepoProxy(
image_repo, context, self.store_api, self.store_utils)
quota_image_repo = glance.quota.ImageRepoProxy(
store_image_repo, context, self.db_api, self.store_utils)
policy_image_repo = policy.ImageRepoProxy(
quota_image_repo, context, self.policy)
notifier_image_repo = glance.notifier.ImageRepoProxy(
policy_image_repo, context, self.notifier)
if property_utils.is_property_protection_enabled():
property_rules = property_utils.PropertyRules(self.policy)
protected_image_repo = property_protections.\
ProtectedImageRepoProxy(notifier_image_repo, context,
property_rules)
authorized_image_repo = authorization.ImageRepoProxy(
protected_image_repo, context)
else:
authorized_image_repo = authorization.ImageRepoProxy(
notifier_image_repo, context)
if CONF.sync_enabled:
sync_image_repo = glance.sync.ImageRepoProxy(
authorized_image_repo, context, self.sync_api)
return sync_image_repo
return authorized_image_repo
def get_task_factory(self, context):
task_factory = glance.domain.TaskFactory()
policy_task_factory = policy.TaskFactoryProxy(
task_factory, context, self.policy)
notifier_task_factory = glance.notifier.TaskFactoryProxy(
policy_task_factory, context, self.notifier)
authorized_task_factory = authorization.TaskFactoryProxy(
notifier_task_factory, context)
return authorized_task_factory
def get_task_repo(self, context):
task_repo = glance.db.TaskRepo(context, self.db_api)
policy_task_repo = policy.TaskRepoProxy(
task_repo, context, self.policy)
notifier_task_repo = glance.notifier.TaskRepoProxy(
policy_task_repo, context, self.notifier)
authorized_task_repo = authorization.TaskRepoProxy(
notifier_task_repo, context)
return authorized_task_repo
def get_task_stub_repo(self, context):
task_stub_repo = glance.db.TaskRepo(context, self.db_api)
policy_task_stub_repo = policy.TaskStubRepoProxy(
task_stub_repo, context, self.policy)
notifier_task_stub_repo = glance.notifier.TaskStubRepoProxy(
policy_task_stub_repo, context, self.notifier)
authorized_task_stub_repo = authorization.TaskStubRepoProxy(
notifier_task_stub_repo, context)
return authorized_task_stub_repo
def get_task_executor_factory(self, context):
task_repo = self.get_task_repo(context)
image_repo = self.get_repo(context)
image_factory = self.get_image_factory(context)
return glance.domain.TaskExecutorFactory(task_repo,
image_repo,
image_factory)
def get_metadef_namespace_factory(self, context):
ns_factory = glance.domain.MetadefNamespaceFactory()
policy_ns_factory = policy.MetadefNamespaceFactoryProxy(
ns_factory, context, self.policy)
authorized_ns_factory = authorization.MetadefNamespaceFactoryProxy(
policy_ns_factory, context)
return authorized_ns_factory
def get_metadef_namespace_repo(self, context):
ns_repo = glance.db.MetadefNamespaceRepo(context, self.db_api)
policy_ns_repo = policy.MetadefNamespaceRepoProxy(
ns_repo, context, self.policy)
authorized_ns_repo = authorization.MetadefNamespaceRepoProxy(
policy_ns_repo, context)
return authorized_ns_repo
def get_metadef_object_factory(self, context):
object_factory = glance.domain.MetadefObjectFactory()
policy_object_factory = policy.MetadefObjectFactoryProxy(
object_factory, context, self.policy)
authorized_object_factory = authorization.MetadefObjectFactoryProxy(
policy_object_factory, context)
return authorized_object_factory
def get_metadef_object_repo(self, context):
object_repo = glance.db.MetadefObjectRepo(context, self.db_api)
policy_object_repo = policy.MetadefObjectRepoProxy(
object_repo, context, self.policy)
authorized_object_repo = authorization.MetadefObjectRepoProxy(
policy_object_repo, context)
return authorized_object_repo
def get_metadef_resource_type_factory(self, context):
resource_type_factory = glance.domain.MetadefResourceTypeFactory()
policy_resource_type_factory = policy.MetadefResourceTypeFactoryProxy(
resource_type_factory, context, self.policy)
authorized_resource_type_factory = \
authorization.MetadefResourceTypeFactoryProxy(
policy_resource_type_factory, context)
return authorized_resource_type_factory
def get_metadef_resource_type_repo(self, context):
resource_type_repo = glance.db.MetadefResourceTypeRepo(
context, self.db_api)
policy_object_repo = policy.MetadefResourceTypeRepoProxy(
resource_type_repo, context, self.policy)
authorized_resource_type_repo = \
authorization.MetadefResourceTypeRepoProxy(policy_object_repo,
context)
return authorized_resource_type_repo
def get_metadef_property_factory(self, context):
prop_factory = glance.domain.MetadefPropertyFactory()
policy_prop_factory = policy.MetadefPropertyFactoryProxy(
prop_factory, context, self.policy)
authorized_prop_factory = authorization.MetadefPropertyFactoryProxy(
policy_prop_factory, context)
return authorized_prop_factory
def get_metadef_property_repo(self, context):
prop_repo = glance.db.MetadefPropertyRepo(context, self.db_api)
policy_prop_repo = policy.MetadefPropertyRepoProxy(
prop_repo, context, self.policy)
authorized_prop_repo = authorization.MetadefPropertyRepoProxy(
policy_prop_repo, context)
return authorized_prop_repo


@ -1,459 +0,0 @@
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import copy
import re
import glance_store as store
from oslo.config import cfg
from glance.common import exception
from glance.common import utils
import glance.domain.proxy
from glance.openstack.common import excutils
from glance.openstack.common import gettextutils
import glance.openstack.common.log as logging
_LE = gettextutils._LE
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class ImageRepoProxy(glance.domain.proxy.Repo):
def __init__(self, image_repo, context, store_api, store_utils):
self.context = context
self.store_api = store_api
proxy_kwargs = {'context': context, 'store_api': store_api,
'store_utils': store_utils}
super(ImageRepoProxy, self).__init__(image_repo,
item_proxy_class=ImageProxy,
item_proxy_kwargs=proxy_kwargs)
def _set_acls(self, image):
public = image.visibility == 'public'
member_ids = []
if image.locations and not public:
member_repo = image.get_member_repo()
member_ids = [m.member_id for m in member_repo.list()]
for location in image.locations:
self.store_api.set_acls(location['url'], public=public,
read_tenants=member_ids,
context=self.context)
def add(self, image):
result = super(ImageRepoProxy, self).add(image)
self._set_acls(image)
return result
def save(self, image):
result = super(ImageRepoProxy, self).save(image)
self._set_acls(image)
return result
def _check_location_uri(context, store_api, uri):
"""Check if an image location is valid.
:param context: Glance request context
:param store_api: store API module
:param uri: location's uri string
"""
is_ok = True
try:
size = store_api.get_size_from_backend(uri, context=context)
# NOTE(zhiyan): Some stores return zero when they catch an exception
is_ok = size > 0
except (store.UnknownScheme, store.NotFound):
is_ok = False
if not is_ok:
reason = _('Invalid location')
raise exception.BadStoreUri(message=reason)
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')
def is_glance_location(loc_url):
return pattern.match(loc_url)
def _check_glance_loc(context, location):
uri = location['url']
if not is_glance_location(uri):
return False
if 'auth_token=' in uri:
return True
location['url'] = uri + ('?auth_token=' + context.auth_tok)
return True
def _check_image_location(context, store_api, location):
if not _check_glance_loc(context, location):
_check_location_uri(context, store_api, location['url'])
store_api.check_location_metadata(location['metadata'])
def _set_image_size(context, image, locations):
if not image.size:
for location in locations:
size_from_backend = store.get_size_from_backend(
location['url'], context=context)
if size_from_backend:
# NOTE(flwang): This assumes all locations have the same size
image.size = size_from_backend
break
def _count_duplicated_locations(locations, new):
"""
Calculate the count of duplicated locations for the new one.
:param locations: The existing image location set
:param new: The new image location
:returns: The count of duplicated locations
"""
ret = 0
for loc in locations:
if (loc['url'] == new['url'] and loc['metadata'] == new['metadata']):
ret += 1
return ret
def _remove_extra_info(location):
url = location['url']
if url.startswith('http'):
start = url.find('auth_token')
if start == -1:
return
end = url.find('&', start)
if end == -1:
if url[start - 1] == '?':
url = re.sub(r'\?auth_token=\S+', r'', url)
elif url[start - 1] == '&':
url = re.sub(r'&auth_token=\S+', r'', url)
else:
url = re.sub(r'auth_token=\S+&', r'', url)
location['url'] = url
class ImageFactoryProxy(glance.domain.proxy.ImageFactory):
def __init__(self, factory, context, store_api, store_utils):
self.context = context
self.store_api = store_api
proxy_kwargs = {'context': context, 'store_api': store_api,
'store_utils': store_utils}
super(ImageFactoryProxy, self).__init__(factory,
proxy_class=ImageProxy,
proxy_kwargs=proxy_kwargs)
def new_image(self, **kwargs):
locations = kwargs.get('locations', [])
for loc in locations:
_check_image_location(self.context, self.store_api, loc)
loc['status'] = 'active'
if _count_duplicated_locations(locations, loc) > 1:
raise exception.DuplicateLocation(location=loc['url'])
return super(ImageFactoryProxy, self).new_image(**kwargs)
class StoreLocations(collections.MutableSequence):
"""
The proxy for store location property. It takes responsibility for:
1. Location uri correctness checking when adding a new location.
2. Remove the image data from the store when a location is removed
from an image.
"""
def __init__(self, image_proxy, value):
self.image_proxy = image_proxy
if isinstance(value, list):
self.value = value
else:
self.value = list(value)
def append(self, location):
# NOTE(flaper87): Insert this
# location at the very end of
# the value list.
self.insert(len(self.value), location)
def extend(self, other):
if isinstance(other, StoreLocations):
locations = other.value
else:
locations = list(other)
for location in locations:
self.append(location)
def insert(self, i, location):
_check_image_location(self.image_proxy.context,
self.image_proxy.store_api, location)
_remove_extra_info(location)
location['status'] = 'active'
if _count_duplicated_locations(self.value, location) > 0:
raise exception.DuplicateLocation(location=location['url'])
self.value.insert(i, location)
_set_image_size(self.image_proxy.context,
self.image_proxy,
[location])
def pop(self, i=-1):
location = self.value.pop(i)
try:
self.image_proxy.store_utils.delete_image_location_from_backend(
self.image_proxy.context,
self.image_proxy.image.image_id,
location)
except Exception:
with excutils.save_and_reraise_exception():
self.value.insert(i, location)
return location
def count(self, location):
return self.value.count(location)
def index(self, location, *args):
return self.value.index(location, *args)
def remove(self, location):
if self.count(location):
self.pop(self.index(location))
else:
self.value.remove(location)
def reverse(self):
self.value.reverse()
# Mutable sequence, so not hashable
__hash__ = None
def __getitem__(self, i):
return self.value.__getitem__(i)
def __setitem__(self, i, location):
_check_image_location(self.image_proxy.context,
self.image_proxy.store_api, location)
location['status'] = 'active'
self.value.__setitem__(i, location)
_set_image_size(self.image_proxy.context,
self.image_proxy,
[location])
def __delitem__(self, i):
location = None
try:
location = self.value.__getitem__(i)
except Exception:
return self.value.__delitem__(i)
self.image_proxy.store_utils.delete_image_location_from_backend(
self.image_proxy.context,
self.image_proxy.image.image_id,
location)
self.value.__delitem__(i)
def __delslice__(self, i, j):
i = max(i, 0)
j = max(j, 0)
locations = []
try:
locations = self.value.__getslice__(i, j)
except Exception:
return self.value.__delslice__(i, j)
for location in locations:
self.image_proxy.store_utils.delete_image_location_from_backend(
self.image_proxy.context,
self.image_proxy.image.image_id,
location)
self.value.__delitem__(i)
def __iadd__(self, other):
self.extend(other)
return self
def __contains__(self, location):
return location in self.value
def __len__(self):
return len(self.value)
def __cast(self, other):
if isinstance(other, StoreLocations):
return other.value
else:
return other
def __cmp__(self, other):
return cmp(self.value, self.__cast(other))
def __iter__(self):
return iter(self.value)
def __copy__(self):
return type(self)(self.image_proxy, self.value)
def __deepcopy__(self, memo):
# NOTE(zhiyan): Only copy location entries, others can be reused.
value = copy.deepcopy(self.value, memo)
self.image_proxy.image.locations = value
return type(self)(self.image_proxy, value)
def _locations_proxy(target, attr):
"""
Make a location property proxy on the image object.
:param target: the image object on which to add the proxy
:param attr: the property proxy we want to hook
"""
def get_attr(self):
value = getattr(getattr(self, target), attr)
return StoreLocations(self, value)
def set_attr(self, value):
if not isinstance(value, (list, StoreLocations)):
reason = _('Invalid locations')
raise exception.BadStoreUri(message=reason)
ori_value = getattr(getattr(self, target), attr)
if ori_value != value:
# NOTE(zhiyan): Enforced locations list was previously empty list.
if len(ori_value) > 0:
raise exception.Invalid(_('Original locations is not empty: '
'%s') % ori_value)
# NOTE(zhiyan): Check locations are all valid.
for location in value:
_check_image_location(self.context, self.store_api,
location)
location['status'] = 'active'
if _count_duplicated_locations(value, location) > 1:
raise exception.DuplicateLocation(location=location['url'])
_set_image_size(self.context, getattr(self, target), value)
return setattr(getattr(self, target), attr, list(value))
def del_attr(self):
value = getattr(getattr(self, target), attr)
while len(value):
self.store_utils.delete_image_location_from_backend(
self.context,
self.image.image_id,
value[0])
del value[0]
setattr(getattr(self, target), attr, value)
return delattr(getattr(self, target), attr)
return property(get_attr, set_attr, del_attr)
class ImageProxy(glance.domain.proxy.Image):
locations = _locations_proxy('image', 'locations')
def __init__(self, image, context, store_api, store_utils):
self.image = image
self.context = context
self.store_api = store_api
self.store_utils = store_utils
proxy_kwargs = {
'context': context,
'image': self,
'store_api': store_api,
}
super(ImageProxy, self).__init__(
image, member_repo_proxy_class=ImageMemberRepoProxy,
member_repo_proxy_kwargs=proxy_kwargs)
def delete(self):
self.image.delete()
if self.image.locations:
for location in self.image.locations:
self.store_utils.delete_image_location_from_backend(
self.context,
self.image.image_id,
location)
def set_data(self, data, size=None):
if size is None:
size = 0 # NOTE(markwash): zero -> unknown size
location, size, checksum, loc_meta = self.store_api.add_to_backend(
CONF,
self.image.image_id,
utils.LimitingReader(utils.CooperativeReader(data),
CONF.image_size_cap),
size,
context=self.context)
loc_meta = loc_meta or {}
loc_meta['is_default'] = 'true'
self.image.locations = [{'url': location, 'metadata': loc_meta,
'status': 'active'}]
self.image.size = size
self.image.checksum = checksum
self.image.status = 'active'
def get_data(self, offset=0, chunk_size=None):
if not self.image.locations:
raise store.NotFound(_("No image data could be found"))
err = None
for loc in self.image.locations:
if is_glance_location(loc['url']):
continue
try:
data, size = self.store_api.get_from_backend(
loc['url'],
offset=offset,
chunk_size=chunk_size,
context=self.context)
return data
except Exception as e:
LOG.warn(_('Failed to get image %(id)s data: '
'%(err)s.') % {'id': self.image.image_id,
'err': utils.exception_to_str(e)})
err = e
# tried all locations
LOG.error(_LE('Glance tried all active locations to get data for '
'image %s but all have failed.') % self.image.image_id)
raise err
class ImageMemberRepoProxy(glance.domain.proxy.Repo):
def __init__(self, repo, image, context, store_api):
self.repo = repo
self.image = image
self.context = context
self.store_api = store_api
super(ImageMemberRepoProxy, self).__init__(repo)
def _set_acls(self):
public = self.image.visibility == 'public'
if self.image.locations and not public:
member_ids = [m.member_id for m in self.repo.list()]
for location in self.image.locations:
self.store_api.set_acls(location['url'], public=public,
read_tenants=member_ids,
context=self.context)
def add(self, member):
super(ImageMemberRepoProxy, self).add(member)
self._set_acls()
def remove(self, member):
super(ImageMemberRepoProxy, self).remove(member)
self._set_acls()
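The `_remove_extra_info` helper above strips a previously appended `auth_token` query parameter from a location URL before it is stored. As a reference, that behaviour can be sketched standalone (the `strip_auth_token` name is hypothetical; the three regexes are the ones the helper uses):

```python
import re


def strip_auth_token(url):
    """Remove an auth_token query parameter from an HTTP(S) URL.

    Mirrors the three cases handled by _remove_extra_info: the token is
    the only query parameter, the first of several, or a later one.
    """
    if not url.startswith('http'):
        return url
    start = url.find('auth_token')
    if start == -1:
        return url
    end = url.find('&', start)
    if end == -1:
        if url[start - 1] == '?':
            return re.sub(r'\?auth_token=\S+', r'', url)
        if url[start - 1] == '&':
            return re.sub(r'&auth_token=\S+', r'', url)
        return url
    return re.sub(r'auth_token=\S+&', r'', url)


print(strip_auth_token('http://h:9292/v2/images/i?auth_token=t'))
print(strip_auth_token('http://h:9292/v2/images/i?auth_token=t&limit=5'))
print(strip_auth_token('http://h:9292/v2/images/i?limit=5&auth_token=t'))
```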


@ -1,130 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
CURPATH=$(cd "$(dirname "$0")"; pwd)
_PYTHON_INSTALL_DIR=${OPENSTACK_INSTALL_DIR}
if [ -z "${_PYTHON_INSTALL_DIR}" ]; then
_PYTHON_INSTALL_DIR="/usr/lib/python2.7/dist-packages"
fi
_GLANCE_DIR="${_PYTHON_INSTALL_DIR}/glance"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_PATCH_DIR="${CURPATH}/.."
_BACKUP_DIR="${_GLANCE_DIR}/glance-installation-backup"
_SCRIPT_LOGFILE="/var/log/glance/installation/install.log"
api_config_option_list="sync_enabled=True sync_server_port=9595 sync_server_host=127.0.0.1"
export PS4='+{$LINENO:${FUNCNAME[0]}}'
ERRTRAP()
{
echo "[LINE:$1] Error: Command or function exited with status $?"
}
function log()
{
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
function process_stop
{
PID=`ps -efw|grep "$1"|grep -v grep|awk '{print $2}'`
echo "PID is: $PID">>$_SCRIPT_LOGFILE
if [ "x${PID}" != "x" ]; then
for kill_id in $PID
do
kill -9 ${kill_id}
if [ $? -ne 0 ]; then
echo "[[stop glance-sync]]$1 stop failed.">>$_SCRIPT_LOGFILE
exit 1
fi
done
echo "[[stop glance-sync]]$1 stop ok.">>$_SCRIPT_LOGFILE
fi
}
function restart_services
{
log "restarting glance ..."
service glance-api restart
service glance-registry restart
process_stop "glance-sync"
python /usr/bin/glance-sync --config-file=/etc/glance/glance-sync.conf &
}
trap 'ERRTRAP $LINENO' ERR
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
if [ ! -d "/var/log/glance/installation" ]; then
mkdir /var/log/glance/installation
touch $_SCRIPT_LOGFILE
fi
cd `dirname $0`
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/glance" ] ; then
log "It seems glance cascading has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/glance"
mkdir -p "${_BACKUP_DIR}/etc"
mkdir -p "${_BACKUP_DIR}/etc/glance"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/glance"
rm -r "${_BACKUP_DIR}/etc"
log "Error in config backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_PATCH_DIR}/glance" `dirname ${_GLANCE_DIR}`
glanceEggDir=`ls ${_PYTHON_INSTALL_DIR} |grep -e glance- |grep -e egg-info `
if [ ! -d ${_PYTHON_INSTALL_DIR}/${glanceEggDir} ]; then
log "glance install dir does not exist. Please check manually."
exit 1
fi
cp "${_PATCH_DIR}/glance-egg-info/entry_points.txt" "${_PYTHON_INSTALL_DIR}/${glanceEggDir}/"
if [ $? -ne 0 ] ; then
log "Error in copying, aborted. Please install manually."
exit 1
fi
#restart services
restart_services
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart glance manually."
exit 1
fi
log "Completed."
log "See README to get started."
exit 0
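The installer above resolves the Python site-packages directory from `$OPENSTACK_INSTALL_DIR`, falling back to the Ubuntu default when the variable is unset. That fallback can be exercised in isolation (using a quoted `[ -z ... ]` emptiness test):

```shell
# Sketch of the install-dir fallback; OPENSTACK_INSTALL_DIR is unset here
# so the default path is chosen.
unset OPENSTACK_INSTALL_DIR
_PYTHON_INSTALL_DIR=${OPENSTACK_INSTALL_DIR}
if [ -z "${_PYTHON_INSTALL_DIR}" ]; then
    _PYTHON_INSTALL_DIR="/usr/lib/python2.7/dist-packages"
fi
echo "${_PYTHON_INSTALL_DIR}"
```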


@ -1,5 +0,0 @@
Glance Store Patch
------------
We made small modifications to the following two Python files for the cascading feature:
- backend.py: adds a little code for handling the glance location;
- _drivers/http.py: like backend.py, adds some handling logic for the glance location.
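Both files key off the same URL pattern to decide whether an image location points at a cascaded Glance endpoint. A minimal sketch of that check (the regex is the one defined in backend.py and glance/location.py):

```python
import re

# Pattern used to recognise a cascaded-Glance image location,
# e.g. http://<host>:9292/v2/images/<image-id>
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')


def is_glance_location(loc_url):
    return bool(pattern.match(loc_url))


print(is_glance_location('http://10.0.0.1:9292/v2/images/abc123'))
print(is_glance_location('file:///var/lib/glance/images/abc123'))
```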


@ -1,230 +0,0 @@
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import httplib
import logging
import socket
import urlparse
import glance_store.driver
from glance_store import exceptions
from glance_store.i18n import _
from glance_store.openstack.common import jsonutils
import glance_store.location
LOG = logging.getLogger(__name__)
MAX_REDIRECTS = 5
class StoreLocation(glance_store.location.StoreLocation):
"""Class describing an HTTP(S) URI"""
def process_specs(self):
self.scheme = self.specs.get('scheme', 'http')
self.netloc = self.specs['netloc']
self.user = self.specs.get('user')
self.password = self.specs.get('password')
self.path = self.specs.get('path')
def _get_credstring(self):
if self.user:
return '%s:%s@' % (self.user, self.password)
return ''
def get_uri(self):
return "%s://%s%s%s" % (
self.scheme,
self._get_credstring(),
self.netloc,
self.path)
def parse_uri(self, uri):
"""
Parse URLs. This method fixes an issue where credentials specified
in the URL are interpreted differently in Python 2.6.1+ than prior
versions of Python.
"""
pieces = urlparse.urlparse(uri)
assert pieces.scheme in ('https', 'http')
self.scheme = pieces.scheme
netloc = pieces.netloc
path = pieces.path
try:
if '@' in netloc:
creds, netloc = netloc.split('@')
else:
creds = None
except ValueError:
# Python 2.6.1 compat
# see lp659445 and Python issue7904
if '@' in path:
creds, path = path.split('@')
else:
creds = None
if creds:
try:
self.user, self.password = creds.split(':')
except ValueError:
reason = _("Credentials are not well-formatted.")
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
else:
self.user = None
if netloc == '':
LOG.info(_("No address specified in HTTP URL"))
raise exceptions.BadStoreUri(uri=uri)
self.netloc = netloc
self.path = path
self.token = None
if pieces.query:
params = pieces.query.split('&')
for param in params:
if 'auth_token' == param.split("=")[0].strip():
self.token = param.split("=")[1]
break
def http_response_iterator(conn, response, size):
"""
Return an iterator for a file-like object.
:param conn: HTTP(S) Connection
:param response: httplib.HTTPResponse object
:param size: Chunk size to iterate with
"""
chunk = response.read(size)
while chunk:
yield chunk
chunk = response.read(size)
conn.close()
class Store(glance_store.driver.Store):
"""An implementation of the HTTP(S) Backend Adapter"""
def get(self, location, offset=0, chunk_size=None, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
"""
conn, resp, content_length = self._query(location, 'GET')
cs = chunk_size or self.READ_CHUNKSIZE
iterator = http_response_iterator(conn, resp, cs)
class ResponseIndexable(glance_store.Indexable):
def another(self):
try:
return self.wrapped.next()
except StopIteration:
return ''
return (ResponseIndexable(iterator, content_length), content_length)
def get_schemes(self):
return ('http', 'https')
def get_size(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns the size
:param location `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
"""
try:
size = self._query(location, 'HEAD')[2]
except socket.error:
reason = _("The HTTP URL is invalid.")
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
except Exception:
# NOTE(flaper87): Catch more granular exceptions,
# keeping this branch for backwards compatibility.
return 0
return size
def _query(self, location, verb, depth=0):
if depth > MAX_REDIRECTS:
reason = (_("The HTTP URL exceeded %s maximum "
"redirects.") % MAX_REDIRECTS)
LOG.debug(reason)
raise exceptions.MaxRedirectsExceeded(message=reason)
loc = location.store_location
conn_class = self._get_conn_class(loc)
conn = conn_class(loc.netloc)
headers = {}
if loc.token:
# headers.setdefault('x-auth-token', loc.token)
# verb = 'GET'
# conn.request(verb, loc.path, "", headers)
# resp = conn.getresponse()
# try:
# size = jsonutils.loads(resp.read())['size']
# except Exception:
# size = 0
# raise exception.BadStoreUri(loc.path, reason)
return (conn, None, 1)
conn.request(verb, loc.path, "", {})
resp = conn.getresponse()
# Check for bad status codes
if resp.status >= 400:
if resp.status == httplib.NOT_FOUND:
reason = _("HTTP datastore could not find image at URI.")
LOG.debug(reason)
raise exceptions.NotFound(message=reason)
reason = (_("HTTP URL %(url)s returned a "
"%(status)s status code.") %
dict(url=loc.path, status=resp.status))
LOG.debug(reason)
raise exceptions.BadStoreUri(message=reason)
location_header = resp.getheader("location")
if location_header:
if resp.status not in (301, 302):
reason = (_("The HTTP URL %(url)s attempted to redirect "
"with an invalid %(status)s status code.") %
dict(url=loc.path, status=resp.status))
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
location_class = glance_store.location.Location
new_loc = location_class(location.store_name,
location.store_location.__class__,
uri=location_header,
image_id=location.image_id,
store_specs=location.store_specs)
return self._query(new_loc, verb, depth + 1)
content_length = int(resp.getheader('content-length', 0))
return (conn, resp, content_length)
def _get_conn_class(self, loc):
"""
Returns connection class for accessing the resource. Useful
for dependency injection and stubouts in testing...
"""
return {'http': httplib.HTTPConnection,
'https': httplib.HTTPSConnection}[loc.scheme]
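`http_response_iterator` above streams a response body in fixed-size chunks and closes the connection once the body is exhausted. Its behaviour can be demonstrated with stub objects standing in for the real httplib connection/response pair:

```python
def http_response_iterator(conn, response, size):
    """Yield chunks of up to `size` bytes, then close the connection."""
    chunk = response.read(size)
    while chunk:
        yield chunk
        chunk = response.read(size)
    conn.close()


class StubResponse(object):
    """Stand-in for httplib.HTTPResponse backed by an in-memory buffer."""
    def __init__(self, data):
        self._data = data
        self._pos = 0

    def read(self, size):
        chunk = self._data[self._pos:self._pos + size]
        self._pos += size
        return chunk


class StubConn(object):
    """Stand-in for an HTTP(S) connection; records that close() was called."""
    closed = False

    def close(self):
        self.closed = True


conn = StubConn()
chunks = list(http_response_iterator(conn, StubResponse(b'abcdefg'), 3))
print(chunks, conn.closed)
```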


@ -1,400 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import re
import sys
from oslo.config import cfg
from stevedore import driver
from glance_store import exceptions
from glance_store.i18n import _
from glance_store import location
LOG = logging.getLogger(__name__)
_DEPRECATED_STORE_OPTS = [
cfg.DeprecatedOpt('known_stores', group='DEFAULT'),
cfg.DeprecatedOpt('default_store', group='DEFAULT')
]
_STORE_OPTS = [
cfg.ListOpt('stores', default=['file', 'http'],
help=_('List of stores enabled'),
deprecated_opts=[_DEPRECATED_STORE_OPTS[0]]),
cfg.StrOpt('default_store', default='file',
help=_("Default scheme to use to store image data. The "
"scheme must be registered by one of the stores "
"defined by the 'stores' config option."),
deprecated_opts=[_DEPRECATED_STORE_OPTS[1]])
]
CONF = cfg.CONF
_STORE_CFG_GROUP = 'glance_store'
def _oslo_config_options():
return ((opt, _STORE_CFG_GROUP) for opt in _STORE_OPTS)
def register_opts(conf):
for opt, group in _oslo_config_options():
conf.register_opt(opt, group=group)
register_store_opts(conf)
def register_store_opts(conf):
for store_entry in set(conf.glance_store.stores):
LOG.debug("Registering options for %s" % store_entry)
store_cls = _load_store(conf, store_entry, False)
if store_cls is None:
msg = _('Store %s not found') % store_entry
raise exceptions.GlanceStoreException(message=msg)
if getattr(store_cls, 'OPTIONS', None) is not None:
# NOTE(flaper87): To be removed in k-2. This should
# give deployers enough time to migrate their systems
# and move configs under the new section.
for opt in store_cls.OPTIONS:
opt.deprecated_opts = [cfg.DeprecatedOpt(opt.name,
group='DEFAULT')]
conf.register_opt(opt, group=_STORE_CFG_GROUP)
class Indexable(object):
"""Indexable for file-like objs iterators
Wrapper that allows an iterator or filelike be treated as an indexable
data structure. This is required in the case where the return value from
Store.get() is passed to Store.add() when adding a Copy-From image to a
Store where the client library relies on eventlet GreenSockets, in which
case the data to be written is indexed over.
"""
def __init__(self, wrapped, size):
"""
Initialize the object
:param wrapped: the wrapped iterator or filelike.
:param size: the size of data available
"""
self.wrapped = wrapped
self.size = int(size) if size else (wrapped.len
if hasattr(wrapped, 'len') else 0)
self.cursor = 0
self.chunk = None
def __iter__(self):
"""
Delegate iteration to the wrapped instance.
"""
for self.chunk in self.wrapped:
yield self.chunk
def __getitem__(self, i):
"""
Index into the next chunk (or previous chunk in the case where
the last data returned was not fully consumed).
:param i: a slice-to-the-end
"""
start = i.start if isinstance(i, slice) else i
if start < self.cursor:
return self.chunk[(start - self.cursor):]
self.chunk = self.another()
if self.chunk:
self.cursor += len(self.chunk)
return self.chunk
def another(self):
"""Implemented by subclasses to return the next element"""
raise NotImplementedError
def getvalue(self):
"""
Return entire string value... used in testing
"""
return self.wrapped.getvalue()
def __len__(self):
"""
Length accessor.
"""
return self.size
def _load_store(conf, store_entry, invoke_load=True):
store_cls = None
try:
LOG.debug("Attempting to import store %s", store_entry)
mgr = driver.DriverManager('glance_store.drivers',
store_entry,
invoke_args=[conf],
invoke_on_load=invoke_load)
return mgr.driver
except RuntimeError:
LOG.warn("Failed to load driver %(driver)s. "
"The driver will be disabled" % dict(driver=store_entry))
def _load_stores(conf):
for store_entry in set(conf.glance_store.stores):
try:
# FIXME(flaper87): Don't hide BadStoreConfiguration
# exceptions. These exceptions should be propagated
# to the user of the library.
store_instance = _load_store(conf, store_entry)
if not store_instance:
continue
yield (store_entry, store_instance)
except exceptions.BadStoreConfiguration as e:
continue
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')
def is_glance_location(loc_url):
return pattern.match(loc_url)
def create_stores(conf=CONF):
"""
Registers all store modules and all schemes
from the given config. Duplicates are not re-registered.
"""
store_count = 0
store_classes = set()
for (store_entry, store_instance) in _load_stores(conf):
schemes = store_instance.get_schemes()
store_instance.configure()
if not schemes:
raise exceptions.BackendException('Unable to register store %s. '
'No schemes associated with it.'
% store_entry)
else:
LOG.debug("Registering store %s with schemes %s",
store_entry, schemes)
scheme_map = {}
for scheme in schemes:
loc_cls = store_instance.get_store_location_class()
scheme_map[scheme] = {
'store': store_instance,
'location_class': loc_cls,
}
location.register_scheme_map(scheme_map)
store_count += 1
return store_count
def verify_default_store():
scheme = cfg.CONF.glance_store.default_store
try:
get_store_from_scheme(scheme)
except exceptions.UnknownScheme:
msg = _("Store for scheme %s not found") % scheme
raise RuntimeError(msg)
def get_known_schemes():
"""Returns list of known schemes"""
return location.SCHEME_TO_CLS_MAP.keys()
def get_store_from_scheme(scheme):
"""
Given a scheme, return the appropriate store object
for handling that scheme.
"""
if scheme not in location.SCHEME_TO_CLS_MAP:
raise exceptions.UnknownScheme(scheme=scheme)
scheme_info = location.SCHEME_TO_CLS_MAP[scheme]
return scheme_info['store']
def get_store_from_uri(uri):
"""
Given a URI, return the store object that would handle
operations on the URI.
:param uri: URI to analyze
"""
scheme = uri[0:uri.find('/') - 1]
return get_store_from_scheme(scheme)
def get_from_backend(uri, offset=0, chunk_size=None, context=None):
"""Yields chunks of data from backend specified by uri"""
loc = location.get_location_from_uri(uri)
store = get_store_from_uri(uri)
try:
return store.get(loc, offset=offset,
chunk_size=chunk_size,
context=context)
except NotImplementedError:
raise exceptions.StoreGetNotSupported
def get_size_from_backend(uri, context=None):
"""Retrieves image size from backend specified by uri"""
if is_glance_location(uri):
uri += ('?auth_token=' + context.auth_tok)
loc = location.get_location_from_uri(uri)
store = get_store_from_uri(uri)
return store.get_size(loc, context=context)
def delete_from_backend(uri, context=None):
"""Removes chunks of data from backend specified by uri"""
loc = location.get_location_from_uri(uri)
store = get_store_from_uri(uri)
try:
return store.delete(loc, context=context)
except NotImplementedError:
raise exceptions.StoreDeleteNotSupported
def get_store_from_location(uri):
"""
Given a location (assumed to be a URL), attempt to determine
the store from the location. We use here a simple guess that
the scheme of the parsed URL is the store...
:param uri: Location to check for the store
"""
loc = location.get_location_from_uri(uri)
return loc.store_name
def safe_delete_from_backend(uri, image_id, context=None):
"""Given a uri, delete an image from the store."""
try:
return delete_from_backend(uri, context=context)
except exceptions.NotFound:
msg = _('Failed to delete image %s in store from URI')
LOG.warn(msg % image_id)
except exceptions.StoreDeleteNotSupported as e:
LOG.warn(str(e))
except exceptions.UnsupportedBackend:
exc_type = sys.exc_info()[0].__name__
msg = (_('Failed to delete image %(image_id)s '
'from store (%(exc_type)s)') %
dict(image_id=image_id, exc_type=exc_type))
LOG.error(msg)
def _delete_image_from_backend(context, store_api, image_id, uri):
if CONF.delayed_delete:
store_api.schedule_delayed_delete_from_backend(context, uri, image_id)
else:
store_api.safe_delete_from_backend(context, uri, image_id)
def check_location_metadata(val, key=''):
if isinstance(val, dict):
for key in val:
check_location_metadata(val[key], key=key)
elif isinstance(val, list):
ndx = 0
for v in val:
check_location_metadata(v, key='%s[%d]' % (key, ndx))
ndx = ndx + 1
elif not isinstance(val, unicode):
raise exceptions.BackendException(_("The image metadata key %(key)s "
"has an invalid type of %(type)s. "
"Only dict, list, and unicode are "
"supported.")
% dict(key=key, type=type(val)))
def store_add_to_backend(image_id, data, size, store, context=None):
"""
A wrapper around a call to each stores add() method. This gives glance
a common place to check the output
:param image_id: The opaque image identifier to which the data is added
:param data: The data to be stored
:param size: The length of the data in bytes
:param store: The store to which the data is being added
:return: The url location of the file,
the size of the data,
the checksum of the data,
the storage system's metadata dictionary for the location
"""
(location, size, checksum, metadata) = store.add(image_id, data, size)
if metadata is not None:
if not isinstance(metadata, dict):
msg = (_("The storage driver %(driver)s returned invalid "
"metadata %(metadata)s. This must be a dictionary type")
% dict(driver=str(store), metadata=str(metadata)))
LOG.error(msg)
raise exceptions.BackendException(msg)
try:
check_location_metadata(metadata)
except exceptions.BackendException as e:
e_msg = (_("A bad metadata structure was returned from the "
"%(driver)s storage driver: %(metadata)s. %(e)s.") %
dict(driver=unicode(store),
metadata=unicode(metadata),
e=unicode(e)))
LOG.error(e_msg)
raise exceptions.BackendException(e_msg)
return (location, size, checksum, metadata)
def add_to_backend(conf, image_id, data, size, scheme=None, context=None):
if scheme is None:
scheme = conf['glance_store']['default_store']
store = get_store_from_scheme(scheme)
try:
return store_add_to_backend(image_id, data, size, store, context)
except NotImplementedError:
raise exceptions.StoreAddNotSupported
def set_acls(location_uri, public=False, read_tenants=[],
write_tenants=None, context=None):
if write_tenants is None:
write_tenants = []
loc = location.get_location_from_uri(location_uri)
scheme = get_store_from_location(location_uri)
store = get_store_from_scheme(scheme)
try:
store.set_acls(loc, public=public,
read_tenants=read_tenants,
write_tenants=write_tenants)
except NotImplementedError:
LOG.debug(_("Skipping store.set_acls... not implemented."))
def validate_location(uri, context=None):
store = get_store_from_uri(uri)
store.validate_location(uri)


@ -1,103 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
CURPATH=$(cd "$(dirname "$0")"; pwd)
_PYTHON_INSTALL_DIR=${OPENSTACK_INSTALL_DIR}
if [ -z "${_PYTHON_INSTALL_DIR}" ]; then
_PYTHON_INSTALL_DIR="/usr/lib/python2.7/dist-packages"
fi
_GLANCE_STORE_DIR="${_PYTHON_INSTALL_DIR}/glance_store"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="${CURPATH}/../glance_store"
_SCRIPT_LOGFILE="/var/log/glance/installation/install_store.log"
export PS4='+{$LINENO:${FUNCNAME[0]}}'
ERRTRAP()
{
echo "[LINE:$1] Error: Command or function exited with status $?"
}
function log()
{
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
function process_stop
{
PID=`ps -efw|grep "$1"|grep -v grep|awk '{print $2}'`
echo "PID is: $PID">>$_SCRIPT_LOGFILE
if [ "x${PID}" != "x" ]; then
for kill_id in $PID
do
kill -9 ${kill_id}
if [ $? -ne 0 ]; then
echo "[[stop glance-sync]]$1 stop failed.">>$_SCRIPT_LOGFILE
exit 1
fi
done
echo "[[stop glance-sync]]$1 stop ok.">>$_SCRIPT_LOGFILE
fi
}
function restart_services
{
log "restarting glance ..."
service glance-api restart
service glance-registry restart
process_stop "glance-sync"
python /usr/bin/glance-sync --config-file=/etc/glance/glance-sync.conf &
}
trap 'ERRTRAP $LINENO' ERR
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
if [ ! -d "/var/log/glance/installation" ]; then
mkdir -p /var/log/glance/installation
touch ${_SCRIPT_LOGFILE}
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_GLANCE_STORE_DIR}" ] ; then
log "Could not find the glance installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "copying in new files..."
cp -rf "${_CODE_DIR}" ${_PYTHON_INSTALL_DIR}
restart_services
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart glance manually."
exit 1
fi
log "Completed."
log "See README to get started."
exit 0


@ -1,97 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_NEUTRON_CONF_DIR="/etc/neutron"
_NEUTRON_CONF_FILE='neutron.conf'
_NEUTRON_INSTALL="/usr/lib/python2.7/dist-packages"
_NEUTRON_DIR="${_NEUTRON_INSTALL}/neutron"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../neutron/"
_BACKUP_DIR="${_NEUTRON_INSTALL}/.neutron-cascaded-server-big2layer-patch-installation-backup"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NEUTRON_DIR}" ] ; then
echo "Could not find the neutron installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_CONF_FILE}" ] ; then
echo "Could not find neutron config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking previous installation..."
if [ -d "${_BACKUP_DIR}/neutron" ] ; then
echo "It seems neutron-server-big2layer-cascaded-patch has already been installed!"
echo "Please check README for solution if this is not true."
exit 1
fi
echo "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}"
cp -r "${_NEUTRON_DIR}/" "${_BACKUP_DIR}/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/neutron"
echo "Error in code backup, aborted."
exit 1
fi
echo "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_NEUTRON_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/neutron" `dirname ${_NEUTRON_DIR}` && rm -r "${_BACKUP_DIR}/neutron"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
echo "restarting cascaded neutron server..."
service neutron-server restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron server manually."
exit 1
fi
echo "restarting cascaded neutron-plugin-openvswitch-agent..."
service neutron-plugin-openvswitch-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-plugin-openvswitch-agent manually."
exit 1
fi
echo "restarting cascaded neutron-l3-agent..."
service neutron-l3-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-l3-agent manually."
exit 1
fi
echo "Completed."
echo "See README to get started."
exit 0


@ -1,28 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
l2_population_options = [
cfg.IntOpt('agent_boot_time', default=180,
help=_('Delay within which agent is expected to update '
'existing ports when it restarts')),
cfg.StrOpt('cascaded_gateway', default='no_gateway',
help=_('Configure no_gateway if no gateway host exists; '
'otherwise configure admin_gateway or population_opt')),
]
cfg.CONF.register_opts(l2_population_options, "l2pop")


@ -1,136 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import sql
from neutron.common import constants as const
from neutron.db import agents_db
from neutron.db import common_db_mixin as base_db
from neutron.db import models_v2
from neutron.openstack.common import jsonutils
from neutron.openstack.common import timeutils
from neutron.plugins.ml2.drivers.l2pop import constants as l2_const
from neutron.plugins.ml2 import models as ml2_models
class L2populationDbMixin(base_db.CommonDbMixin):
def get_agent_ip_by_host(self, session, agent_host):
agent = self.get_agent_by_host(session, agent_host)
if agent:
return self.get_agent_ip(agent)
def get_agent_ip(self, agent):
configuration = jsonutils.loads(agent.configurations)
return configuration.get('tunneling_ip')
def get_agent_uptime(self, agent):
return timeutils.delta_seconds(agent.started_at,
agent.heartbeat_timestamp)
def get_agent_tunnel_types(self, agent):
configuration = jsonutils.loads(agent.configurations)
return configuration.get('tunnel_types')
def get_agent_l2pop_network_types(self, agent):
configuration = jsonutils.loads(agent.configurations)
return configuration.get('l2pop_network_types')
def get_agent_by_host(self, session, agent_host):
with session.begin(subtransactions=True):
query = session.query(agents_db.Agent)
query = query.filter(agents_db.Agent.host == agent_host,
agents_db.Agent.agent_type.in_(
l2_const.SUPPORTED_AGENT_TYPES))
return query.first()
def get_network_ports(self, session, network_id):
with session.begin(subtransactions=True):
query = session.query(ml2_models.PortBinding,
agents_db.Agent)
query = query.join(agents_db.Agent,
agents_db.Agent.host ==
ml2_models.PortBinding.host)
query = query.join(models_v2.Port)
query = query.filter(models_v2.Port.network_id == network_id,
models_v2.Port.admin_state_up == sql.true(),
agents_db.Agent.agent_type.in_(
l2_const.SUPPORTED_AGENT_TYPES))
return query
def get_nondvr_network_ports(self, session, network_id):
query = self.get_network_ports(session, network_id)
return query.filter(models_v2.Port.device_owner !=
const.DEVICE_OWNER_DVR_INTERFACE)
def get_dvr_network_ports(self, session, network_id):
with session.begin(subtransactions=True):
query = session.query(ml2_models.DVRPortBinding,
agents_db.Agent)
query = query.join(agents_db.Agent,
agents_db.Agent.host ==
ml2_models.DVRPortBinding.host)
query = query.join(models_v2.Port)
query = query.filter(models_v2.Port.network_id == network_id,
models_v2.Port.admin_state_up == sql.true(),
models_v2.Port.device_owner ==
const.DEVICE_OWNER_DVR_INTERFACE,
agents_db.Agent.agent_type.in_(
l2_const.SUPPORTED_AGENT_TYPES))
return query
def get_agent_network_active_port_count(self, session, agent_host,
network_id):
with session.begin(subtransactions=True):
query = session.query(models_v2.Port)
query1 = query.join(ml2_models.PortBinding)
query1 = query1.filter(models_v2.Port.network_id == network_id,
models_v2.Port.status ==
const.PORT_STATUS_ACTIVE,
models_v2.Port.device_owner !=
const.DEVICE_OWNER_DVR_INTERFACE,
ml2_models.PortBinding.host == agent_host)
query2 = query.join(ml2_models.DVRPortBinding)
query2 = query2.filter(models_v2.Port.network_id == network_id,
ml2_models.DVRPortBinding.status ==
const.PORT_STATUS_ACTIVE,
models_v2.Port.device_owner ==
const.DEVICE_OWNER_DVR_INTERFACE,
ml2_models.DVRPortBinding.host ==
agent_host)
return (query1.count() + query2.count())
def get_host_ip_from_binding_profile(self, profile):
if(not profile):
return
profile = jsonutils.loads(profile)
return profile.get('host_ip')
def get_segment_by_network_id(self, session, network_id):
with session.begin(subtransactions=True):
query = session.query(ml2_models.NetworkSegment)
query = query.filter(
ml2_models.NetworkSegment.network_id == network_id,
ml2_models.NetworkSegment.network_type == 'vxlan')
return query.first()
def get_remote_ports(self, session, network_id):
with session.begin(subtransactions=True):
query = session.query(ml2_models.PortBinding)
query = query.join(models_v2.Port)
query = query.filter(
models_v2.Port.network_id == network_id,
ml2_models.PortBinding.profile.contains('"port_key": "remote_port"'))
return query


@ -1,383 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from neutron.common import constants as const
from neutron import context as n_context
from neutron.db import api as db_api
from neutron.openstack.common import log as logging
from neutron.plugins.ml2 import driver_api as api
from neutron.plugins.ml2.drivers.l2pop import config # noqa
from neutron.plugins.ml2.drivers.l2pop import db as l2pop_db
from neutron.plugins.ml2.drivers.l2pop import rpc as l2pop_rpc
LOG = logging.getLogger(__name__)
class L2populationMechanismDriver(api.MechanismDriver,
l2pop_db.L2populationDbMixin):
def __init__(self):
super(L2populationMechanismDriver, self).__init__()
self.L2populationAgentNotify = l2pop_rpc.L2populationAgentNotifyAPI()
def initialize(self):
LOG.debug(_("Experimental L2 population driver"))
self.rpc_ctx = n_context.get_admin_context_without_session()
self.migrated_ports = {}
self.remove_fdb_entries = {}
self.remove_remote_ports_fdb = {}
def _get_port_fdb_entries(self, port):
return [[port['mac_address'],
ip['ip_address']] for ip in port['fixed_ips']]
def _is_remote_port(self, port):
return port['binding:profile'].get('port_key') == 'remote_port'
def create_port_postcommit(self, context):
"""
If the port is a "remote_port", notify all l2-agents (or only the
l2-gateway-agent, depending on cfg.CONF.l2pop.cascaded_gateway);
otherwise do nothing.
"""
port_context = context.current
if(self._is_remote_port(port_context)):
other_fdb_entries = self.get_remote_port_fdb(port_context)
if(not other_fdb_entries):
return
if(cfg.CONF.l2pop.cascaded_gateway == 'no_gateway'):
# notify all l2-agent
self.L2populationAgentNotify.add_fdb_entries(self.rpc_ctx,
other_fdb_entries)
else:
# only notify to l2-gateway-agent
pass
def get_remote_port_fdb(self, port_context):
port_id = port_context['id']
network_id = port_context['network_id']
session = db_api.get_session()
segment = self.get_segment_by_network_id(session, network_id)
if not segment:
LOG.warning(_("Network %(network_id)s has no "
"vxlan provider, so the segment cannot be retrieved"),
{'network_id': network_id})
return
ip = port_context['binding:profile'].get('host_ip')
if not ip:
LOG.debug(_("Unable to retrieve the ip from remote port, "
"check the remote port %(port_id)s."),
{'port_id': port_id})
return
other_fdb_entries = {network_id:
{'segment_id': segment.segmentation_id,
'network_type': segment.network_type,
'ports': {}}}
ports = other_fdb_entries[network_id]['ports']
agent_ports = ports.get(ip, [const.FLOODING_ENTRY])
agent_ports += self._get_port_fdb_entries(port_context)
ports[ip] = agent_ports
return other_fdb_entries
def _get_agent_host(self, context, port):
if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
agent_host = context.binding.host
else:
agent_host = port['binding:host_id']
return agent_host
def delete_port_precommit(self, context):
# TODO(matrohon): revisit once the original bound segment will be
# available in delete_port_postcommit. in delete_port_postcommit
# agent_active_ports will be equal to 0, and the _update_port_down
# won't need agent_active_ports_count_for_flooding anymore
port = context.current
if(self._is_remote_port(port)):
fdb_entry = self.get_remote_port_fdb(port)
self.remove_remote_ports_fdb[port['id']] = fdb_entry
agent_host = context.host #self._get_agent_host(context, port)
if port['id'] not in self.remove_fdb_entries:
self.remove_fdb_entries[port['id']] = {}
self.remove_fdb_entries[port['id']][agent_host] = (
self._update_port_down(context, port, agent_host))
def delete_port_postcommit(self, context):
port = context.current
agent_host = context.host #self._get_agent_host(context, port)
if port['id'] in self.remove_fdb_entries:
for agent_host in list(self.remove_fdb_entries[port['id']]):
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx,
self.remove_fdb_entries[port['id']][agent_host])
self.remove_fdb_entries[port['id']].pop(agent_host, 0)
self.remove_fdb_entries.pop(port['id'], 0)
remote_port_fdb = self.remove_remote_ports_fdb.pop(
context.current['id'],
None)
if(remote_port_fdb):
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, remote_port_fdb)
def _get_diff_ips(self, orig, port):
orig_ips = set([ip['ip_address'] for ip in orig['fixed_ips']])
port_ips = set([ip['ip_address'] for ip in port['fixed_ips']])
# check if an ip has been added or removed
orig_chg_ips = orig_ips.difference(port_ips)
port_chg_ips = port_ips.difference(orig_ips)
if orig_chg_ips or port_chg_ips:
return orig_chg_ips, port_chg_ips
def _fixed_ips_changed(self, context, orig, port, diff_ips):
orig_ips, port_ips = diff_ips
if (port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE):
agent_host = context.host
else:
agent_host = context.original_host
port_infos = self._get_port_infos(
context, orig, agent_host)
if not port_infos:
return
agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
orig_mac_ip = [[port['mac_address'], ip] for ip in orig_ips]
port_mac_ip = [[port['mac_address'], ip] for ip in port_ips]
upd_fdb_entries = {port['network_id']: {agent_ip: {}}}
ports = upd_fdb_entries[port['network_id']][agent_ip]
if orig_mac_ip:
ports['before'] = orig_mac_ip
if port_mac_ip:
ports['after'] = port_mac_ip
self.L2populationAgentNotify.update_fdb_entries(
self.rpc_ctx, {'chg_ip': upd_fdb_entries})
return True
def update_port_postcommit(self, context):
port = context.current
orig = context.original
diff_ips = self._get_diff_ips(orig, port)
if diff_ips:
self._fixed_ips_changed(context, orig, port, diff_ips)
if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
if context.status == const.PORT_STATUS_ACTIVE:
self._update_port_up(context)
if context.status == const.PORT_STATUS_DOWN:
agent_host = context.host
fdb_entries = self._update_port_down(
context, port, agent_host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)
elif (context.host != context.original_host
and context.status == const.PORT_STATUS_ACTIVE
and not self.migrated_ports.get(orig['id'])):
# The port has been migrated. We have to store the original
# binding to send appropriate fdb once the port will be set
# on the destination host
self.migrated_ports[orig['id']] = (
(orig, context.original_host))
elif context.status != context.original_status:
if context.status == const.PORT_STATUS_ACTIVE:
self._update_port_up(context)
elif context.status == const.PORT_STATUS_DOWN:
fdb_entries = self._update_port_down(
context, port, context.host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)
elif context.status == const.PORT_STATUS_BUILD:
orig = self.migrated_ports.pop(port['id'], None)
if orig:
original_port = orig[0]
original_host = orig[1]
# this port has been migrated: remove its entries from fdb
fdb_entries = self._update_port_down(
context, original_port, original_host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)
def _get_port_infos(self, context, port, agent_host):
if not agent_host:
return
session = db_api.get_session()
agent = self.get_agent_by_host(session, agent_host)
if not agent:
return
agent_ip = self.get_agent_ip(agent)
if not agent_ip:
LOG.warning(_("Unable to retrieve the agent ip, check the agent "
"configuration."))
return
segment = context.bound_segment
if not segment:
LOG.warning(_("Port %(port)s updated by agent %(agent)s "
"isn't bound to any segment"),
{'port': port['id'], 'agent': agent})
return
network_types = self.get_agent_l2pop_network_types(agent)
if network_types is None:
network_types = self.get_agent_tunnel_types(agent)
if segment['network_type'] not in network_types:
return
fdb_entries = self._get_port_fdb_entries(port)
return agent, agent_host, agent_ip, segment, fdb_entries
def _update_port_up(self, context):
port = context.current
agent_host = context.host
port_infos = self._get_port_infos(context, port, agent_host)
if not port_infos:
return
agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
network_id = port['network_id']
session = db_api.get_session()
agent_active_ports = self.get_agent_network_active_port_count(
session, agent_host, network_id)
other_fdb_entries = {network_id:
{'segment_id': segment['segmentation_id'],
'network_type': segment['network_type'],
'ports': {agent_ip: []}}}
if agent_active_ports == 1 or (
self.get_agent_uptime(agent) < cfg.CONF.l2pop.agent_boot_time):
# First port activated on current agent in this network,
# we have to provide it with the whole list of fdb entries
agent_fdb_entries = {network_id:
{'segment_id': segment['segmentation_id'],
'network_type': segment['network_type'],
'ports': {}}}
ports = agent_fdb_entries[network_id]['ports']
nondvr_network_ports = self.get_nondvr_network_ports(session,
network_id)
for network_port in nondvr_network_ports:
binding, agent = network_port
if agent.host == agent_host:
continue
ip = self.get_agent_ip(agent)
if not ip:
LOG.debug(_("Unable to retrieve the agent ip, check "
"the agent %(agent_host)s configuration."),
{'agent_host': agent.host})
continue
agent_ports = ports.get(ip, [const.FLOODING_ENTRY])
agent_ports += self._get_port_fdb_entries(binding.port)
ports[ip] = agent_ports
if cfg.CONF.l2pop.cascaded_gateway == 'no_gateway':
remote_ports = self.get_remote_ports(session, network_id)
else:
remote_ports = {}
# elif cfg.CONF.cascaded_gateway == 'admin_gateway' or
# cfg.CONF.cascaded_gateway == 'population_opt':
# if self.is_proxy_port(port_context):
# remote_ports = self.get_remote_ports(session, network_id)
# else:
for binding in remote_ports:
profile = binding['profile']
ip = self.get_host_ip_from_binding_profile(profile)
if not ip:
LOG.debug(_("Unable to retrieve the agent ip, check "
"the agent %(agent_host)s configuration."),
{'agent_host': agent.host})
continue
agent_ports = ports.get(ip, [const.FLOODING_ENTRY])
agent_ports += self._get_port_fdb_entries(binding.port)
ports[ip] = agent_ports
dvr_network_ports = self.get_dvr_network_ports(session, network_id)
for network_port in dvr_network_ports:
binding, agent = network_port
if agent.host == agent_host:
continue
ip = self.get_agent_ip(agent)
if not ip:
LOG.debug(_("Unable to retrieve the agent ip, check "
"the agent %(agent_host)s configuration."),
{'agent_host': agent.host})
continue
agent_ports = ports.get(ip, [const.FLOODING_ENTRY])
ports[ip] = agent_ports
# And notify other agents to add flooding entry
other_fdb_entries[network_id]['ports'][agent_ip].append(
const.FLOODING_ENTRY)
if ports.keys():
self.L2populationAgentNotify.add_fdb_entries(
self.rpc_ctx, agent_fdb_entries, agent_host)
# Notify other agents to add fdb rule for current port
if port['device_owner'] != const.DEVICE_OWNER_DVR_INTERFACE:
other_fdb_entries[network_id]['ports'][agent_ip] += (
port_fdb_entries)
self.L2populationAgentNotify.add_fdb_entries(self.rpc_ctx,
other_fdb_entries)
def _update_port_down(self, context, port, agent_host):
port_infos = self._get_port_infos(context, port, agent_host)
if not port_infos:
return
agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
network_id = port['network_id']
session = db_api.get_session()
agent_active_ports = self.get_agent_network_active_port_count(
session, agent_host, network_id)
other_fdb_entries = {network_id:
{'segment_id': segment['segmentation_id'],
'network_type': segment['network_type'],
'ports': {agent_ip: []}}}
if agent_active_ports == 0:
# Agent is removing its last activated port in this network,
# other agents needs to be notified to delete their flooding entry.
other_fdb_entries[network_id]['ports'][agent_ip].append(
const.FLOODING_ENTRY)
# Notify other agents to remove fdb rules for current port
if port['device_owner'] != const.DEVICE_OWNER_DVR_INTERFACE:
fdb_entries = port_fdb_entries
other_fdb_entries[network_id]['ports'][agent_ip] += fdb_entries
return other_fdb_entries


@ -1,83 +0,0 @@
Openstack Neutron cascaded_l3_patch
===============================
Neutron cascaded_l3_patch is mainly used to achieve L3 communication across cascading OpenStack. To solve the problem, we add an 'onlink' field to a router's extra routes based on the IP range in neutron-server, and add a GRE tunnel in the l3-agent. This patch should be applied to the cascaded Neutron nodes.
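As a rough sketch of the idea (function and field names here are illustrative, not the patch's actual API): neutron-server can mark an extra route as on-link when its nexthop falls inside the l3gw_extern_net_ip_range, so the cascaded l3-agent knows to install it over the GRE tunnel:

```python
import ipaddress

# Default range taken from this README; purely illustrative.
L3GW_EXTERN_NET_IP_RANGE = ipaddress.ip_network('100.64.0.0/16')

def tag_onlink(extra_routes):
    """Add an 'onlink' flag to each extra route whose nexthop lies in
    the external gateway IP range."""
    return [dict(route,
                 onlink=(ipaddress.ip_address(route['nexthop'])
                         in L3GW_EXTERN_NET_IP_RANGE))
            for route in extra_routes]

routes = tag_onlink([
    {'destination': '10.0.1.0/24', 'nexthop': '100.64.1.2'},
    {'destination': '10.0.2.0/24', 'nexthop': '192.168.1.1'},
])
print(routes[0]['onlink'], routes[1]['onlink'])  # True False
```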
Key modules
-----------
* We add GRE Tunnel in l3-agent by modifying some files:
neutron/agent/linux/ip_lib.py
neutron/agent/l3_agent.py
* We add 'onlink' field for extra route of router based on the ip range in neutron-server by modifying some files:
neutron/common/config.py
neutron/db/extraroute_db.py
Requirements
------------
* openstack neutron-2014.2 has been installed.
Installation
------------
We provide two ways to install the Neutron cascaded_l3_patch. In this section, we will guide you through installing the Neutron cascaded_l3_patch and modifying the configuration.
* **Note:**
- Make sure you have an existing installation of **Openstack Neutron of Juno Version**.
- We recommend that you back up at least the following files before installation, because they are to be overwritten or modified:
$NEUTRON_PARENT_DIR/neutron
(replace the $... with actual directory names.)
* **Manual Installation**
- Navigate to the local repository and copy the contents in 'neutron' sub-directory to the corresponding places in existing neutron, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/neutron $NEUTRON_PARENT_DIR```
(replace the $... with actual directory name.)
- Update the neutron config files. If an option already exists, modify its value; otherwise add it to the config file:
In $CONFIG_FILE_PATH/plugins/ml2/ml2_conf.ini, set the firewall_driver option:
```
[securitygroup]
firewall_driver=neutron.agent.firewall.NoopFirewallDriver
```
In $CONFIG_FILE_PATH/l3_agent.ini, set the agent_mode option:
```
[DEFAULT]
agent_mode=dvr_snat
```
In $CONFIG_FILE_PATH/neutron.conf, you may keep the default value of the l3gw_extern_net_ip_range option:
```
l3gw_extern_net_ip_range=100.64.0.0/16
```
- Restart the neutron-server and neutron-l3-agent.
```service neutron-server restart```
```service neutron-l3-agent restart```
- Done.
* **Automatic Installation**
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)
- Done. The installation script will automatically modify the neutron code and the configurations.
* **Troubleshooting**
In case the automatic installation process is not complete, please check the followings:
- Make sure your OpenStack version is Juno.
- Check the variables in the beginning of the install.sh scripts. Your installation directories may be different from the default values we provide.
- The installation script will automatically copy the related code to $NEUTRON_PARENT_DIR/neutron and modify the related configuration.
- In case the automatic installation does not work, try to install manually.


@ -1,127 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_NEUTRON_CONF_DIR="/etc/neutron"
_NEUTRON_CONF_FILE='neutron.conf'
_NEUTRON_ML2_CONF_FILE='plugins/ml2/ml2_conf.ini'
_NEUTRON_L3_CONF_FILE='l3_agent.ini'
_NEUTRON_INSTALL="/usr/lib/python2.7/dist-packages"
_NEUTRON_DIR="${_NEUTRON_INSTALL}/neutron"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../neutron/"
_BACKUP_DIR="${_NEUTRON_INSTALL}/.neutron-cascaded-server-installation-backup"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/neutron-cascaded-server/installation/${_SCRIPT_NAME}.log"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NEUTRON_DIR}" ] ; then
echo "Could not find the neutron installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_CONF_FILE}" ] ; then
echo "Could not find neutron config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}" ] ; then
echo "Could not find ml2 config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}" ] ; then
echo "Could not find l3_agent config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking previous installation..."
if [ -d "${_BACKUP_DIR}/neutron" ] ; then
echo "It seems neutron-server-cascaded has already been installed!"
echo "Please check README for solution if this is not true."
exit 1
fi
echo "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}"
cp -r "${_NEUTRON_DIR}/" "${_BACKUP_DIR}/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/neutron"
echo "Error in code backup, aborted."
exit 1
fi
echo "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_NEUTRON_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/neutron" `dirname ${_NEUTRON_DIR}` && rm -r "${_BACKUP_DIR}/neutron"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
echo "updating config file..."
cp "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}.bk"
cp "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}" "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}.bk"
sed -i '/^firewall_driver/d' "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}"
sed -i '/^\[securitygroup\]/a\firewall_driver=neutron.agent.firewall.NoopFirewallDriver' "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}"
sed -i '/^agent_mode/d' "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}"
sed -i '/^\[DEFAULT\]/a\agent_mode=dvr_snat' "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}"
echo "restarting cascaded neutron server..."
service neutron-server restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron server manually."
exit 1
fi
echo "restarting cascaded neutron-plugin-openvswitch-agent..."
service neutron-plugin-openvswitch-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-plugin-openvswitch-agent manually."
exit 1
fi
echo "restarting cascaded neutron-l3-agent..."
service neutron-l3-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-l3-agent manually."
exit 1
fi
echo "Completed."
echo "See README to get started."
exit 0
@@ -1,86 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_NEUTRON_CONF_DIR="/etc/neutron"
_NEUTRON_CONF_FILE='neutron.conf'
_NEUTRON_ML2_CONF_FILE='plugins/ml2/ml2_conf.ini'
_NEUTRON_L3_CONF_FILE='l3_agent.ini'
_NEUTRON_INSTALL="/usr/lib/python2.7/dist-packages"
_NEUTRON_DIR="${_NEUTRON_INSTALL}/neutron"
_CODE_DIR="../neutron/"
_BACKUP_DIR="${_NEUTRON_INSTALL}/.neutron-cascaded-server-installation-backup"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
echo "checking previous installation..."
if [ ! -d "${_BACKUP_DIR}/neutron" ] ; then
echo "Could not find the neutron backup. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}.bk" ] ; then
echo "Could not find bak for ml2 config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}.bk" ] ; then
echo "Could not find bak for l3_agent config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "starting uninstall cascaded ..."
rm -r "${_NEUTRON_INSTALL}/neutron/"
cp -r "${_BACKUP_DIR}/neutron/" "${_NEUTRON_INSTALL}"
echo "updating config file..."
cp "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}.bk" "${_NEUTRON_CONF_DIR}/${_NEUTRON_ML2_CONF_FILE}"
cp "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}.bk" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONF_FILE}"
echo "restarting cascaded neutron server..."
service neutron-server restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron server manually."
exit 1
fi
echo "restarting cascaded neutron-plugin-openvswitch-agent..."
service neutron-plugin-openvswitch-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-plugin-openvswitch-agent manually."
exit 1
fi
echo "restarting cascaded neutron-l3-agent..."
service neutron-l3-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-l3-agent manually."
exit 1
fi
rm -rf $_BACKUP_DIR/*
echo "Completed."
echo "uninstall success."
exit 0
@@ -1,625 +0,0 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
from oslo.config import cfg
from neutron.agent.linux import utils
from neutron.common import exceptions
OPTS = [
cfg.BoolOpt('ip_lib_force_root',
default=False,
help=_('Force ip_lib calls to use the root helper')),
]
LOOPBACK_DEVNAME = 'lo'
# NOTE(ethuleau): depend of the version of iproute2, the vlan
# interface details vary.
VLAN_INTERFACE_DETAIL = ['vlan protocol 802.1q',
'vlan protocol 802.1Q',
'vlan id']
class SubProcessBase(object):
def __init__(self, root_helper=None, namespace=None,
log_fail_as_error=True):
self.root_helper = root_helper
self.namespace = namespace
self.log_fail_as_error = log_fail_as_error
try:
self.force_root = cfg.CONF.ip_lib_force_root
except cfg.NoSuchOptError:
# Only callers that need to force use of the root helper
# need to register the option.
self.force_root = False
def _run(self, options, command, args):
if self.namespace:
return self._as_root(options, command, args)
elif self.force_root:
# Force use of the root helper to ensure that commands
# will execute in dom0 when running under XenServer/XCP.
return self._execute(options, command, args, self.root_helper,
log_fail_as_error=self.log_fail_as_error)
else:
return self._execute(options, command, args,
log_fail_as_error=self.log_fail_as_error)
def _as_root(self, options, command, args, use_root_namespace=False):
if not self.root_helper:
raise exceptions.SudoRequired()
namespace = self.namespace if not use_root_namespace else None
return self._execute(options,
command,
args,
self.root_helper,
namespace,
log_fail_as_error=self.log_fail_as_error)
@classmethod
def _execute(cls, options, command, args, root_helper=None,
namespace=None, log_fail_as_error=True):
opt_list = ['-%s' % o for o in options]
if namespace:
ip_cmd = ['ip', 'netns', 'exec', namespace, 'ip']
else:
ip_cmd = ['ip']
return utils.execute(ip_cmd + opt_list + [command] + list(args),
root_helper=root_helper,
log_fail_as_error=log_fail_as_error)
def set_log_fail_as_error(self, fail_with_error):
self.log_fail_as_error = fail_with_error
class IPWrapper(SubProcessBase):
def __init__(self, root_helper=None, namespace=None):
super(IPWrapper, self).__init__(root_helper=root_helper,
namespace=namespace)
self.netns = IpNetnsCommand(self)
def device(self, name):
return IPDevice(name, self.root_helper, self.namespace)
def get_devices(self, exclude_loopback=False):
retval = []
output = self._execute(['o', 'd'], 'link', ('list',),
self.root_helper, self.namespace)
for line in output.split('\n'):
if '<' not in line:
continue
tokens = line.split(' ', 2)
if len(tokens) == 3:
if any(v in tokens[2] for v in VLAN_INTERFACE_DETAIL):
delimiter = '@'
else:
delimiter = ':'
name = tokens[1].rpartition(delimiter)[0].strip()
if exclude_loopback and name == LOOPBACK_DEVNAME:
continue
retval.append(IPDevice(name,
self.root_helper,
self.namespace))
return retval
def add_tuntap(self, name, mode='tap'):
self._as_root('', 'tuntap', ('add', name, 'mode', mode))
return IPDevice(name, self.root_helper, self.namespace)
def add_veth(self, name1, name2, namespace2=None):
args = ['add', name1, 'type', 'veth', 'peer', 'name', name2]
if namespace2 is None:
namespace2 = self.namespace
else:
self.ensure_namespace(namespace2)
args += ['netns', namespace2]
self._as_root('', 'link', tuple(args))
return (IPDevice(name1, self.root_helper, self.namespace),
IPDevice(name2, self.root_helper, namespace2))
def del_veth(self, name):
"""Delete a virtual interface between two namespaces."""
self._as_root('', 'link', ('del', name))
def ensure_namespace(self, name):
if not self.netns.exists(name):
ip = self.netns.add(name)
lo = ip.device(LOOPBACK_DEVNAME)
lo.link.set_up()
else:
ip = IPWrapper(self.root_helper, name)
return ip
def namespace_is_empty(self):
return not self.get_devices(exclude_loopback=True)
def garbage_collect_namespace(self):
"""Conditionally destroy the namespace if it is empty."""
if self.namespace and self.netns.exists(self.namespace):
if self.namespace_is_empty():
self.netns.delete(self.namespace)
return True
return False
def add_device_to_namespace(self, device):
if self.namespace:
device.link.set_netns(self.namespace)
def add_vxlan(self, name, vni, group=None, dev=None, ttl=None, tos=None,
local=None, port=None, proxy=False):
cmd = ['add', name, 'type', 'vxlan', 'id', vni]
if group:
cmd.extend(['group', group])
if dev:
cmd.extend(['dev', dev])
if ttl:
cmd.extend(['ttl', ttl])
if tos:
cmd.extend(['tos', tos])
if local:
cmd.extend(['local', local])
if proxy:
cmd.append('proxy')
# tuple: min,max
if port and len(port) == 2:
cmd.extend(['port', port[0], port[1]])
elif port:
raise exceptions.NetworkVxlanPortRangeError(vxlan_range=port)
self._as_root('', 'link', cmd)
return (IPDevice(name, self.root_helper, self.namespace))
@classmethod
def get_namespaces(cls, root_helper):
output = cls._execute('', 'netns', ('list',), root_helper=root_helper)
return [l.strip() for l in output.split('\n')]
class IpRule(IPWrapper):
def add_rule_from(self, ip, table, rule_pr):
args = ['add', 'from', ip, 'lookup', table, 'priority', rule_pr]
ip = self._as_root('', 'rule', tuple(args))
return ip
def delete_rule_priority(self, rule_pr):
args = ['del', 'priority', rule_pr]
ip = self._as_root('', 'rule', tuple(args))
return ip
class IPDevice(SubProcessBase):
def __init__(self, name, root_helper=None, namespace=None):
super(IPDevice, self).__init__(root_helper=root_helper,
namespace=namespace)
self.name = name
self.link = IpLinkCommand(self)
self.addr = IpAddrCommand(self)
self.route = IpRouteCommand(self)
self.neigh = IpNeighCommand(self)
self.tunnel = IpTunnelCommand(self)
def __eq__(self, other):
return (other is not None and self.name == other.name
and self.namespace == other.namespace)
def __str__(self):
return self.name
class IpCommandBase(object):
COMMAND = ''
def __init__(self, parent):
self._parent = parent
def _run(self, *args, **kwargs):
return self._parent._run(kwargs.get('options', []), self.COMMAND, args)
def _as_root(self, *args, **kwargs):
return self._parent._as_root(kwargs.get('options', []),
self.COMMAND,
args,
kwargs.get('use_root_namespace', False))
class IpDeviceCommandBase(IpCommandBase):
@property
def name(self):
return self._parent.name
class IpLinkCommand(IpDeviceCommandBase):
COMMAND = 'link'
def set_address(self, mac_address):
self._as_root('set', self.name, 'address', mac_address)
def set_mtu(self, mtu_size):
self._as_root('set', self.name, 'mtu', mtu_size)
def set_up(self):
self._as_root('set', self.name, 'up')
def set_down(self):
self._as_root('set', self.name, 'down')
def set_netns(self, namespace):
self._as_root('set', self.name, 'netns', namespace)
self._parent.namespace = namespace
def set_name(self, name):
self._as_root('set', self.name, 'name', name)
self._parent.name = name
def set_alias(self, alias_name):
self._as_root('set', self.name, 'alias', alias_name)
def delete(self):
self._as_root('delete', self.name)
@property
def address(self):
return self.attributes.get('link/ether')
@property
def state(self):
return self.attributes.get('state')
@property
def mtu(self):
return self.attributes.get('mtu')
@property
def qdisc(self):
return self.attributes.get('qdisc')
@property
def qlen(self):
return self.attributes.get('qlen')
@property
def alias(self):
return self.attributes.get('alias')
@property
def attributes(self):
return self._parse_line(self._run('show', self.name, options='o'))
def _parse_line(self, value):
if not value:
return {}
device_name, settings = value.replace("\\", '').split('>', 1)
tokens = settings.split()
keys = tokens[::2]
values = [int(v) if v.isdigit() else v for v in tokens[1::2]]
retval = dict(zip(keys, values))
return retval
class IpTunnelCommand(IpDeviceCommandBase):
COMMAND = 'tunnel'
def add(self, mode, remote_ip, local_ip):
self._as_root('add',
self.name,
'mode',
mode,
'remote',
remote_ip,
'local',
local_ip)
def delete(self):
self._as_root('delete',
self.name)
class IpAddrCommand(IpDeviceCommandBase):
COMMAND = 'addr'
def add(self, ip_version, cidr, broadcast, scope='global'):
self._as_root('add',
cidr,
'brd',
broadcast,
'scope',
scope,
'dev',
self.name,
options=[ip_version])
def delete(self, ip_version, cidr):
self._as_root('del',
cidr,
'dev',
self.name,
options=[ip_version])
def flush(self):
self._as_root('flush', self.name)
def list(self, scope=None, to=None, filters=None):
if filters is None:
filters = []
retval = []
if scope:
filters += ['scope', scope]
if to:
filters += ['to', to]
for line in self._run('show', self.name, *filters).split('\n'):
line = line.strip()
if not line.startswith('inet'):
continue
parts = line.split()
if parts[0] == 'inet6':
version = 6
scope = parts[3]
broadcast = '::'
else:
version = 4
if parts[2] == 'brd':
broadcast = parts[3]
scope = parts[5]
else:
# sometimes output of 'ip a' might look like:
# inet 192.168.100.100/24 scope global eth0
# and broadcast needs to be calculated from CIDR
broadcast = str(netaddr.IPNetwork(parts[1]).broadcast)
scope = parts[3]
retval.append(dict(cidr=parts[1],
broadcast=broadcast,
scope=scope,
ip_version=version,
dynamic=('dynamic' == parts[-1])))
return retval
class IpRouteCommand(IpDeviceCommandBase):
COMMAND = 'route'
def add_gateway(self, gateway, metric=None, table=None):
args = ['replace', 'default', 'via', gateway]
if metric:
args += ['metric', metric]
args += ['dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
def delete_gateway(self, gateway=None, table=None):
args = ['del', 'default']
if gateway:
args += ['via', gateway]
args += ['dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
def list_onlink_routes(self):
def iterate_routes():
output = self._run('list', 'dev', self.name, 'scope', 'link')
for line in output.split('\n'):
line = line.strip()
if line and not line.count('src'):
yield line
return [x for x in iterate_routes()]
def add_onlink_route(self, cidr):
self._as_root('replace', cidr, 'dev', self.name, 'scope', 'link')
def delete_onlink_route(self, cidr):
self._as_root('del', cidr, 'dev', self.name, 'scope', 'link')
def get_gateway(self, scope=None, filters=None):
if filters is None:
filters = []
retval = None
if scope:
filters += ['scope', scope]
route_list_lines = self._run('list', 'dev', self.name,
*filters).split('\n')
default_route_line = next((x.strip() for x in
route_list_lines if
x.strip().startswith('default')), None)
if default_route_line:
gateway_index = 2
parts = default_route_line.split()
retval = dict(gateway=parts[gateway_index])
if 'metric' in parts:
metric_index = parts.index('metric') + 1
retval.update(metric=int(parts[metric_index]))
return retval
def pullup_route(self, interface_name):
"""Ensures that the route entry for the interface is before all
others on the same subnet.
"""
device_list = []
device_route_list_lines = self._run('list', 'proto', 'kernel',
'dev', interface_name).split('\n')
for device_route_line in device_route_list_lines:
try:
subnet = device_route_line.split()[0]
except Exception:
continue
subnet_route_list_lines = self._run('list', 'proto', 'kernel',
'match', subnet).split('\n')
for subnet_route_line in subnet_route_list_lines:
i = iter(subnet_route_line.split())
while(i.next() != 'dev'):
pass
device = i.next()
try:
while(i.next() != 'src'):
pass
src = i.next()
except Exception:
src = ''
if device != interface_name:
device_list.append((device, src))
else:
break
for (device, src) in device_list:
self._as_root('del', subnet, 'dev', device)
if (src != ''):
self._as_root('append', subnet, 'proto', 'kernel',
'src', src, 'dev', device)
else:
self._as_root('append', subnet, 'proto', 'kernel',
'dev', device)
def add_route(self, cidr, ip, table=None):
args = ['replace', cidr, 'via', ip, 'dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
def delete_route(self, cidr, ip, table=None):
args = ['del', cidr, 'via', ip, 'dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
class IpNeighCommand(IpDeviceCommandBase):
COMMAND = 'neigh'
def add(self, ip_version, ip_address, mac_address):
self._as_root('replace',
ip_address,
'lladdr',
mac_address,
'nud',
'permanent',
'dev',
self.name,
options=[ip_version])
def delete(self, ip_version, ip_address, mac_address):
self._as_root('del',
ip_address,
'lladdr',
mac_address,
'dev',
self.name,
options=[ip_version])
class IpNetnsCommand(IpCommandBase):
COMMAND = 'netns'
def add(self, name):
self._as_root('add', name, use_root_namespace=True)
wrapper = IPWrapper(self._parent.root_helper, name)
wrapper.netns.execute(['sysctl', '-w',
'net.ipv4.conf.all.promote_secondaries=1'])
return wrapper
def delete(self, name):
self._as_root('delete', name, use_root_namespace=True)
def execute(self, cmds, addl_env={}, check_exit_code=True,
extra_ok_codes=None):
ns_params = []
if self._parent.namespace:
if not self._parent.root_helper:
raise exceptions.SudoRequired()
ns_params = ['ip', 'netns', 'exec', self._parent.namespace]
env_params = []
if addl_env:
env_params = (['env'] +
['%s=%s' % pair for pair in addl_env.items()])
return utils.execute(
ns_params + env_params + list(cmds),
root_helper=self._parent.root_helper,
check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
def exists(self, name):
output = self._parent._execute('o', 'netns', ['list'])
for line in output.split('\n'):
if name == line.strip():
return True
return False
def device_exists(device_name, root_helper=None, namespace=None):
"""Return True if the device exists in the namespace."""
try:
dev = IPDevice(device_name, root_helper, namespace)
dev.set_log_fail_as_error(False)
address = dev.link.address
except RuntimeError:
return False
return bool(address)
def device_exists_with_ip_mac(device_name, ip_cidr, mac, namespace=None,
root_helper=None):
"""Return True if the device with the given IP and MAC addresses
exists in the namespace.
"""
try:
device = IPDevice(device_name, root_helper, namespace)
if mac != device.link.address:
return False
if ip_cidr not in (ip['cidr'] for ip in device.addr.list()):
return False
except RuntimeError:
return False
else:
return True
def ensure_device_is_ready(device_name, root_helper=None, namespace=None):
dev = IPDevice(device_name, root_helper, namespace)
dev.set_log_fail_as_error(False)
try:
# Ensure the device is up, even if it is already up. If the device
# doesn't exist, a RuntimeError will be raised.
dev.link.set_up()
except RuntimeError:
return False
return True
def iproute_arg_supported(command, arg, root_helper=None):
command += ['help']
stdout, stderr = utils.execute(command, root_helper=root_helper,
check_exit_code=False, return_stderr=True)
return any(arg in line for line in stderr.split('\n'))
@@ -1,196 +0,0 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Routines for configuring Neutron
"""
import os
from oslo.config import cfg
from oslo.db import options as db_options
from oslo import messaging
from paste import deploy
from neutron.api.v2 import attributes
from neutron.common import utils
from neutron.openstack.common import log as logging
from neutron import version
LOG = logging.getLogger(__name__)
core_opts = [
cfg.StrOpt('bind_host', default='0.0.0.0',
help=_("The host IP to bind to")),
cfg.IntOpt('bind_port', default=9696,
help=_("The port to bind to")),
cfg.StrOpt('api_paste_config', default="api-paste.ini",
help=_("The API paste config file to use")),
cfg.StrOpt('api_extensions_path', default="",
help=_("The path for API extensions")),
cfg.StrOpt('policy_file', default="policy.json",
help=_("The policy file to use")),
cfg.StrOpt('auth_strategy', default='keystone',
help=_("The type of authentication to use")),
cfg.StrOpt('core_plugin',
help=_("The core plugin Neutron will use")),
cfg.ListOpt('service_plugins', default=[],
help=_("The service plugins Neutron will use")),
cfg.StrOpt('base_mac', default="fa:16:3e:00:00:00",
help=_("The base MAC address Neutron will use for VIFs")),
cfg.IntOpt('mac_generation_retries', default=16,
help=_("How many times Neutron will retry MAC generation")),
cfg.BoolOpt('allow_bulk', default=True,
help=_("Allow the usage of the bulk API")),
cfg.BoolOpt('allow_pagination', default=False,
help=_("Allow the usage of the pagination")),
cfg.BoolOpt('allow_sorting', default=False,
help=_("Allow the usage of the sorting")),
cfg.StrOpt('pagination_max_limit', default="-1",
help=_("The maximum number of items returned in a single "
"response, value was 'infinite' or negative integer "
"means no limit")),
cfg.IntOpt('max_dns_nameservers', default=5,
help=_("Maximum number of DNS nameservers")),
cfg.IntOpt('max_subnet_host_routes', default=20,
help=_("Maximum number of host routes per subnet")),
cfg.IntOpt('max_fixed_ips_per_port', default=5,
help=_("Maximum number of fixed ips per port")),
cfg.IntOpt('dhcp_lease_duration', default=86400,
deprecated_name='dhcp_lease_time',
help=_("DHCP lease duration (in seconds). Use -1 to tell "
"dnsmasq to use infinite lease times.")),
cfg.BoolOpt('dhcp_agent_notification', default=True,
help=_("Allow sending resource operation"
" notification to DHCP agent")),
cfg.BoolOpt('allow_overlapping_ips', default=False,
help=_("Allow overlapping IP support in Neutron")),
cfg.StrOpt('host', default=utils.get_hostname(),
help=_("The hostname Neutron is running on")),
cfg.BoolOpt('force_gateway_on_subnet', default=True,
help=_("Ensure that configured gateway is on subnet. "
"For IPv6, validate only if gateway is not a link "
"local address. Deprecated, to be removed during the "
"K release, at which point the check will be "
"mandatory.")),
cfg.BoolOpt('notify_nova_on_port_status_changes', default=True,
help=_("Send notification to nova when port status changes")),
cfg.BoolOpt('notify_nova_on_port_data_changes', default=True,
help=_("Send notification to nova when port data (fixed_ips/"
"floatingip) changes so nova can update its cache.")),
cfg.StrOpt('nova_url',
default='http://127.0.0.1:8774/v2',
help=_('URL for connection to nova')),
cfg.StrOpt('nova_admin_username',
help=_('Username for connecting to nova in admin context')),
cfg.StrOpt('nova_admin_password',
help=_('Password for connection to nova in admin context'),
secret=True),
cfg.StrOpt('nova_admin_tenant_id',
help=_('The uuid of the admin nova tenant')),
cfg.StrOpt('nova_admin_auth_url',
default='http://localhost:5000/v2.0',
help=_('Authorization URL for connecting to nova in admin '
'context')),
cfg.StrOpt('nova_ca_certificates_file',
help=_('CA file for novaclient to verify server certificates')),
cfg.BoolOpt('nova_api_insecure', default=False,
help=_("If True, ignore any SSL validation issues")),
cfg.StrOpt('nova_region_name',
help=_('Name of nova region to use. Useful if keystone manages'
' more than one region.')),
cfg.IntOpt('send_events_interval', default=2,
help=_('Number of seconds between sending events to nova if '
'there are any events to send.')),
# add by j00209498
cfg.StrOpt('cascade_str', default='cascaded',
help=_('cascade_str identity cascading openstack or cascaded'
'openstack, value = cascaded or cascading.')),
]
core_cli_opts = [
cfg.StrOpt('state_path',
default='/var/lib/neutron',
help=_("Where to store Neutron state files. "
"This directory must be writable by the agent.")),
]
# Register the configuration options
cfg.CONF.register_opts(core_opts)
cfg.CONF.register_cli_opts(core_cli_opts)
# Ensure that the control exchange is set correctly
messaging.set_transport_defaults(control_exchange='neutron')
_SQL_CONNECTION_DEFAULT = 'sqlite://'
# Update the default QueuePool parameters. These can be tweaked by the
# configuration variables - max_pool_size, max_overflow and pool_timeout
db_options.set_defaults(cfg.CONF,
connection=_SQL_CONNECTION_DEFAULT,
sqlite_db='', max_pool_size=10,
max_overflow=20, pool_timeout=10)
def init(args, **kwargs):
cfg.CONF(args=args, project='neutron',
version='%%prog %s' % version.version_info.release_string(),
**kwargs)
# FIXME(ihrachys): if import is put in global, circular import
# failure occurs
from neutron.common import rpc as n_rpc
n_rpc.init(cfg.CONF)
# Validate that the base_mac is of the correct format
msg = attributes._validate_regex(cfg.CONF.base_mac,
attributes.MAC_PATTERN)
if msg:
msg = _("Base MAC: %s") % msg
raise Exception(msg)
def setup_logging():
"""Sets up the logging options for a log with supplied name."""
product_name = "neutron"
logging.setup(product_name)
LOG.info(_("Logging enabled!"))
def load_paste_app(app_name):
"""Builds and returns a WSGI app from a paste config file.
:param app_name: Name of the application to load
:raises ConfigFilesNotFoundError when config file cannot be located
:raises RuntimeError when application cannot be loaded from config file
"""
config_path = cfg.CONF.find_file(cfg.CONF.api_paste_config)
if not config_path:
raise cfg.ConfigFilesNotFoundError(
config_files=[cfg.CONF.api_paste_config])
config_path = os.path.abspath(config_path)
LOG.info(_("Config paste file: %s"), config_path)
try:
app = deploy.loadapp("config:%s" % config_path, name=app_name)
except (LookupError, ImportError):
msg = (_("Unable to load %(app_name)s from "
"configuration file %(config_path)s.") %
{'app_name': app_name,
'config_path': config_path})
LOG.exception(msg)
raise RuntimeError(msg)
return app
@@ -1,220 +0,0 @@
# Copyright 2013, Nachi Ueno, NTT MCL, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
from oslo.config import cfg
import sqlalchemy as sa
from sqlalchemy import orm
from neutron.common import utils
from neutron.db import db_base_plugin_v2
from neutron.db import l3_db
from neutron.db import model_base
from neutron.db import models_v2
from neutron.extensions import extraroute
from neutron.extensions import l3
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
extra_route_opts = [
#TODO(nati): use quota framework when it support quota for attributes
cfg.IntOpt('max_routes', default=30,
help=_("Maximum number of routes")),
# add by j00209498 ---begin
cfg.StrOpt('l3gw_extern_net_ip_range',
default="100.64.0.0/16",
help=_('The l3gw external ip range(cidr) used for unique '
'like 100.64.0.0/16')),
# add by j00209498 ---end
]
cfg.CONF.register_opts(extra_route_opts)
class RouterRoute(model_base.BASEV2, models_v2.Route):
router_id = sa.Column(sa.String(36),
sa.ForeignKey('routers.id',
ondelete="CASCADE"),
primary_key=True)
router = orm.relationship(l3_db.Router,
backref=orm.backref("route_list",
lazy='joined',
cascade='delete'))
class ExtraRoute_dbonly_mixin(l3_db.L3_NAT_dbonly_mixin):
"""Mixin class to support extra route configuration on router."""
def _extend_router_dict_extraroute(self, router_res, router_db):
router_res['routes'] = (ExtraRoute_dbonly_mixin.
_make_extra_route_list(
router_db['route_list']
))
db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(
l3.ROUTERS, ['_extend_router_dict_extraroute'])
def update_router(self, context, id, router):
r = router['router']
with context.session.begin(subtransactions=True):
#check if route exists and have permission to access
router_db = self._get_router(context, id)
if 'routes' in r:
self._update_extra_routes(context, router_db, r['routes'])
routes = self._get_extra_routes_by_router_id(context, id)
router_updated = super(ExtraRoute_dbonly_mixin, self).update_router(
context, id, router)
router_updated['routes'] = routes
return router_updated
def _get_subnets_by_cidr(self, context, cidr):
query_subnets = context.session.query(models_v2.Subnet)
return query_subnets.filter_by(cidr=cidr).all()
def _validate_routes_nexthop(self, cidrs, ips, routes, nexthop):
#Note(nati): Nexthop should be connected,
# so we need to check
# nexthop belongs to one of cidrs of the router ports
extern_relay_cidr = cfg.CONF.l3gw_extern_net_ip_range
if not netaddr.all_matching_cidrs(nexthop, cidrs):
if(cfg.CONF.cascade_str == 'cascaded'
and extern_relay_cidr
and netaddr.all_matching_cidrs(nexthop,
[extern_relay_cidr])):
LOG.debug(_('nexthop(%s) is in extern_relay_cidr,'
'so not raise InvalidRoutes exception'), nexthop)
return
raise extraroute.InvalidRoutes(
routes=routes,
reason=_('the nexthop is not connected with router'))
#Note(nati) nexthop should not be same as fixed_ips
if nexthop in ips:
raise extraroute.InvalidRoutes(
routes=routes,
reason=_('the nexthop is used by router'))
def _validate_routes(self, context,
router_id, routes):
if len(routes) > cfg.CONF.max_routes:
raise extraroute.RoutesExhausted(
router_id=router_id,
quota=cfg.CONF.max_routes)
filters = {'device_id': [router_id]}
ports = self._core_plugin.get_ports(context, filters)
cidrs = []
ips = []
for port in ports:
for ip in port['fixed_ips']:
cidrs.append(self._core_plugin._get_subnet(
context, ip['subnet_id'])['cidr'])
ips.append(ip['ip_address'])
for route in routes:
self._validate_routes_nexthop(
cidrs, ips, routes, route['nexthop'])
def _update_extra_routes(self, context, router, routes):
self._validate_routes(context, router['id'],
routes)
old_routes, routes_dict = self._get_extra_routes_dict_by_router_id(
context, router['id'])
added, removed = utils.diff_list_of_dict(old_routes,
routes)
LOG.debug(_('Added routes are %s'), added)
for route in added:
router_routes = RouterRoute(
router_id=router['id'],
destination=route['destination'],
nexthop=route['nexthop'])
context.session.add(router_routes)
LOG.debug(_('Removed routes are %s'), removed)
for route in removed:
context.session.delete(
routes_dict[(route['destination'], route['nexthop'])])
@staticmethod
def _make_extra_route_list(extra_routes):
# added by j00209498 ----begin
extern_relay_cidr = cfg.CONF.l3gw_extern_net_ip_range
if cfg.CONF.cascade_str == 'cascaded' and extern_relay_cidr:
routes_list = []
for route in extra_routes:
if(netaddr.all_matching_cidrs(route['nexthop'],
[extern_relay_cidr])):
routes_list.append({'destination': route['destination'],
'nexthop': route['nexthop'],
'onlink': True})
else:
routes_list.append({'destination': route['destination'],
'nexthop': route['nexthop']})
return routes_list
# added by j00209498 ----end
return [{'destination': route['destination'],
'nexthop': route['nexthop']}
for route in extra_routes]
def _get_extra_routes_by_router_id(self, context, id):
query = context.session.query(RouterRoute)
query = query.filter_by(router_id=id)
return self._make_extra_route_list(query)
def _get_extra_routes_dict_by_router_id(self, context, id):
query = context.session.query(RouterRoute)
query = query.filter_by(router_id=id)
routes = []
routes_dict = {}
for route in query:
routes.append({'destination': route['destination'],
'nexthop': route['nexthop']})
routes_dict[(route['destination'], route['nexthop'])] = route
return routes, routes_dict
def get_router(self, context, id, fields=None):
with context.session.begin(subtransactions=True):
router = super(ExtraRoute_dbonly_mixin, self).get_router(
context, id, fields)
return router
def get_routers(self, context, filters=None, fields=None,
sorts=None, limit=None, marker=None,
page_reverse=False):
with context.session.begin(subtransactions=True):
routers = super(ExtraRoute_dbonly_mixin, self).get_routers(
context, filters, fields, sorts=sorts, limit=limit,
marker=marker, page_reverse=page_reverse)
return routers
def _confirm_router_interface_not_in_use(self, context, router_id,
subnet_id):
super(ExtraRoute_dbonly_mixin,
self)._confirm_router_interface_not_in_use(
context, router_id, subnet_id)
subnet_db = self._core_plugin._get_subnet(context, subnet_id)
subnet_cidr = netaddr.IPNetwork(subnet_db['cidr'])
extra_routes = self._get_extra_routes_by_router_id(context, router_id)
for route in extra_routes:
if netaddr.all_matching_cidrs(route['nexthop'], [subnet_cidr]):
raise extraroute.RouterInterfaceInUseByRoute(
router_id=router_id, subnet_id=subnet_id)
class ExtraRoute_db_mixin(ExtraRoute_dbonly_mixin, l3_db.L3_NAT_db_mixin):
"""Mixin class to support extra route configuration on router with rpc."""
pass


@@ -1,97 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_NEUTRON_CONF_DIR="/etc/neutron"
_NEUTRON_CONF_FILE='neutron.conf'
_NEUTRON_INSTALL="/usr/lib/python2.7/dist-packages"
_NEUTRON_DIR="${_NEUTRON_INSTALL}/neutron"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../neutron/"
_BACKUP_DIR="${_NEUTRON_INSTALL}/.neutron-cascading-server-big2layer-patch-installation-backup"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NEUTRON_DIR}" ] ; then
echo "Could not find the neutron installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_CONF_FILE}" ] ; then
echo "Could not find neutron config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking previous installation..."
if [ -d "${_BACKUP_DIR}/neutron" ] ; then
echo "It seems neutron-server-big2layer-cascading-patch has already been installed!"
echo "Please check README for solution if this is not true."
exit 1
fi
echo "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}"
cp -r "${_NEUTRON_DIR}/" "${_BACKUP_DIR}/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/neutron"
echo "Error in code backup, aborted."
exit 1
fi
echo "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_NEUTRON_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/neutron" `dirname ${_NEUTRON_DIR}` && rm -r "${_BACKUP_DIR}/neutron"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
echo "restarting cascading neutron server..."
service neutron-server restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascading neutron server manually."
exit 1
fi
echo "restarting cascading neutron-plugin-openvswitch-agent..."
service neutron-plugin-openvswitch-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascading neutron-plugin-openvswitch-agent manually."
exit 1
fi
echo "restarting cascading neutron-l3-agent..."
service neutron-l3-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascading neutron-l3-agent manually."
exit 1
fi
echo "Completed."
echo "See README to get started."
exit 0


@@ -1,123 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import sql
from neutron.common import constants as const
from neutron.db import agents_db
from neutron.db import common_db_mixin as base_db
from neutron.db import models_v2
from neutron.openstack.common import jsonutils
from neutron.openstack.common import timeutils
from neutron.plugins.ml2.drivers.l2pop import constants as l2_const
from neutron.plugins.ml2 import models as ml2_models
class L2populationDbMixin(base_db.CommonDbMixin):
def get_agent_ip_by_host(self, session, agent_host):
agent = self.get_agent_by_host(session, agent_host)
if agent:
return self.get_agent_ip(agent)
def get_agent_ip(self, agent):
configuration = jsonutils.loads(agent.configurations)
return configuration.get('tunneling_ip')
def get_host_ip_from_binding_profile(self, port):
ip = port['binding:profile'].get('host_ip')
return ip
def get_host_ip_from_binding_profile_str(self, profile):
if not profile:
return
profile = jsonutils.loads(profile)
return profile.get('host_ip')
def get_agent_uptime(self, agent):
return timeutils.delta_seconds(agent.started_at,
agent.heartbeat_timestamp)
def get_agent_tunnel_types(self, agent):
configuration = jsonutils.loads(agent.configurations)
return configuration.get('tunnel_types')
def get_agent_l2pop_network_types(self, agent):
configuration = jsonutils.loads(agent.configurations)
return configuration.get('l2pop_network_types')
def get_agent_by_host(self, session, agent_host):
with session.begin(subtransactions=True):
query = session.query(agents_db.Agent)
query = query.filter(agents_db.Agent.host == agent_host,
agents_db.Agent.agent_type.in_(
l2_const.SUPPORTED_AGENT_TYPES))
return query.first()
def get_network_ports(self, session, network_id):
with session.begin(subtransactions=True):
query = session.query(ml2_models.PortBinding,
agents_db.Agent)
query = query.join(agents_db.Agent,
agents_db.Agent.host ==
ml2_models.PortBinding.host)
query = query.join(models_v2.Port)
query = query.filter(models_v2.Port.network_id == network_id,
models_v2.Port.admin_state_up == sql.true(),
agents_db.Agent.agent_type.in_(
l2_const.SUPPORTED_AGENT_TYPES))
return query
def get_nondvr_network_ports(self, session, network_id):
query = self.get_network_ports(session, network_id)
return query.filter(models_v2.Port.device_owner !=
const.DEVICE_OWNER_DVR_INTERFACE)
def get_dvr_network_ports(self, session, network_id):
with session.begin(subtransactions=True):
query = session.query(ml2_models.DVRPortBinding,
agents_db.Agent)
query = query.join(agents_db.Agent,
agents_db.Agent.host ==
ml2_models.DVRPortBinding.host)
query = query.join(models_v2.Port)
query = query.filter(models_v2.Port.network_id == network_id,
models_v2.Port.admin_state_up == sql.true(),
models_v2.Port.device_owner ==
const.DEVICE_OWNER_DVR_INTERFACE,
agents_db.Agent.agent_type.in_(
l2_const.SUPPORTED_AGENT_TYPES))
return query
def get_agent_network_active_port_count(self, session, agent_host,
network_id):
with session.begin(subtransactions=True):
query = session.query(models_v2.Port)
query1 = query.join(ml2_models.PortBinding)
query1 = query1.filter(models_v2.Port.network_id == network_id,
models_v2.Port.status ==
const.PORT_STATUS_ACTIVE,
models_v2.Port.device_owner !=
const.DEVICE_OWNER_DVR_INTERFACE,
ml2_models.PortBinding.host == agent_host)
query2 = query.join(ml2_models.DVRPortBinding)
query2 = query2.filter(models_v2.Port.network_id == network_id,
ml2_models.DVRPortBinding.status ==
const.PORT_STATUS_ACTIVE,
models_v2.Port.device_owner ==
const.DEVICE_OWNER_DVR_INTERFACE,
ml2_models.DVRPortBinding.host ==
agent_host)
return (query1.count() + query2.count())


@@ -1,304 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from neutron.common import constants as const
from neutron import context as n_context
from neutron.db import api as db_api
from neutron.openstack.common import log as logging
from neutron.plugins.ml2 import driver_api as api
from neutron.plugins.ml2.drivers.l2pop import config # noqa
from neutron.plugins.ml2.drivers.l2pop import db as l2pop_db
from neutron.plugins.ml2.drivers.l2pop import rpc as l2pop_rpc
LOG = logging.getLogger(__name__)
class L2populationMechanismDriver(api.MechanismDriver,
l2pop_db.L2populationDbMixin):
def __init__(self):
super(L2populationMechanismDriver, self).__init__()
self.L2populationAgentNotify = l2pop_rpc.L2populationAgentNotifyAPI()
def initialize(self):
LOG.debug(_("Experimental L2 population driver"))
self.rpc_ctx = n_context.get_admin_context_without_session()
self.migrated_ports = {}
self.remove_fdb_entries = {}
def _get_port_fdb_entries(self, port):
return [[port['mac_address'], port['device_owner'],
ip['ip_address']] for ip in port['fixed_ips']]
def _get_agent_host(self, context, port):
if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
agent_host = context.binding.host
else:
agent_host = port['binding:host_id']
return agent_host
def delete_port_precommit(self, context):
# TODO(matrohon): revisit once the original bound segment is
# available in delete_port_postcommit. In delete_port_postcommit
# agent_active_ports will be equal to 0, and _update_port_down
# won't need agent_active_ports_count_for_flooding anymore
port = context.current
agent_host = context.host #self._get_agent_host(context, port)
if port['id'] not in self.remove_fdb_entries:
self.remove_fdb_entries[port['id']] = {}
self.remove_fdb_entries[port['id']][agent_host] = (
self._update_port_down(context, port, 1))
def delete_port_postcommit(self, context):
port = context.current
agent_host = context.host
fdb_entries = self._update_port_down(context, port, agent_host)
self.L2populationAgentNotify.remove_fdb_entries(self.rpc_ctx,
fdb_entries)
def _get_diff_ips(self, orig, port):
orig_ips = set([ip['ip_address'] for ip in orig['fixed_ips']])
port_ips = set([ip['ip_address'] for ip in port['fixed_ips']])
# check if an ip has been added or removed
orig_chg_ips = orig_ips.difference(port_ips)
port_chg_ips = port_ips.difference(orig_ips)
if orig_chg_ips or port_chg_ips:
return orig_chg_ips, port_chg_ips
def _fixed_ips_changed(self, context, orig, port, diff_ips):
orig_ips, port_ips = diff_ips
if (port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE):
agent_host = context.host
else:
agent_host = context.original_host
port_infos = self._get_port_infos(
context, orig, agent_host)
if not port_infos:
return
agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
orig_mac_ip = [[port['mac_address'], port['device_owner'], ip]
for ip in orig_ips]
port_mac_ip = [[port['mac_address'], port['device_owner'], ip]
for ip in port_ips]
upd_fdb_entries = {port['network_id']: {agent_ip: {}}}
ports = upd_fdb_entries[port['network_id']][agent_ip]
if orig_mac_ip:
ports['before'] = orig_mac_ip
if port_mac_ip:
ports['after'] = port_mac_ip
self.L2populationAgentNotify.update_fdb_entries(
self.rpc_ctx, {'chg_ip': upd_fdb_entries})
return True
def update_port_postcommit(self, context):
port = context.current
orig = context.original
diff_ips = self._get_diff_ips(orig, port)
if diff_ips:
self._fixed_ips_changed(context, orig, port, diff_ips)
if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
if context.status == const.PORT_STATUS_ACTIVE:
self._update_port_up(context)
if context.status == const.PORT_STATUS_DOWN:
agent_host = context.host
fdb_entries = self._update_port_down(
context, port, agent_host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)
elif (context.host != context.original_host
and context.status == const.PORT_STATUS_ACTIVE
and not self.migrated_ports.get(orig['id'])):
# The port has been migrated. We have to store the original
# binding to send the appropriate fdb entries once the port is
# set up on the destination host
self.migrated_ports[orig['id']] = (
(orig, context.original_host))
elif context.status != context.original_status:
if context.status == const.PORT_STATUS_ACTIVE:
self._update_port_up(context)
elif context.status == const.PORT_STATUS_DOWN:
fdb_entries = self._update_port_down(
context, port, context.host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)
elif context.status == const.PORT_STATUS_BUILD:
orig = self.migrated_ports.pop(port['id'], None)
if orig:
original_port = orig[0]
original_host = orig[1]
# this port has been migrated: remove its entries from fdb
fdb_entries = self._update_port_down(
context, original_port, original_host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)
def _get_port_infos(self, context, port, agent_host):
if not agent_host:
return
session = db_api.get_session()
agent = self.get_agent_by_host(session, agent_host)
if not agent:
return
if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
agent_ip = self.get_agent_ip(agent)
else:
agent_ip = self.get_host_ip_from_binding_profile(port)
if not agent_ip:
LOG.warning(_("Unable to retrieve the agent ip, check the agent "
"configuration."))
return
segment = context.bound_segment
if not segment:
LOG.warning(_("Port %(port)s updated by agent %(agent)s "
"isn't bound to any segment"),
{'port': port['id'], 'agent': agent})
return
network_types = self.get_agent_l2pop_network_types(agent)
if network_types is None:
network_types = self.get_agent_tunnel_types(agent)
if segment['network_type'] not in network_types:
return
fdb_entries = self._get_port_fdb_entries(port)
return agent, agent_host, agent_ip, segment, fdb_entries
def _update_port_up(self, context):
port = context.current
agent_host = context.host
port_infos = self._get_port_infos(context, port, agent_host)
if not port_infos:
return
agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
network_id = port['network_id']
session = db_api.get_session()
agent_active_ports = self.get_agent_network_active_port_count(
session, agent_host, network_id)
other_fdb_entries = {network_id:
{'segment_id': segment['segmentation_id'],
'network_type': segment['network_type'],
'ports': {agent_ip: []}}}
if agent_active_ports == 1 or (
self.get_agent_uptime(agent) < cfg.CONF.l2pop.agent_boot_time):
# First port activated on current agent in this network,
# we have to provide it with the whole list of fdb entries
agent_fdb_entries = {network_id:
{'segment_id': segment['segmentation_id'],
'network_type': segment['network_type'],
'ports': {}}}
ports = agent_fdb_entries[network_id]['ports']
#import pdb;pdb.set_trace()
nondvr_network_ports = self.get_nondvr_network_ports(session,
network_id)
for network_port in nondvr_network_ports:
binding, agent = network_port
if agent.host == agent_host:
continue
#ip = self.get_agent_ip(agent)
profile = binding['profile']
ip = self.get_host_ip_from_binding_profile_str(profile)
if not ip:
LOG.debug(_("Unable to retrieve the agent ip, check "
"the agent %(agent_host)s configuration."),
{'agent_host': agent.host})
continue
agent_ports = ports.get(ip, [const.FLOODING_ENTRY])
agent_ports += self._get_port_fdb_entries(binding.port)
ports[ip] = agent_ports
# comment by j00209498
# dvr_network_ports = self.get_dvr_network_ports(session, network_id)
# for network_port in dvr_network_ports:
# binding, agent = network_port
# if agent.host == agent_host:
# continue
#
# ip = self.get_agent_ip(agent)
# if not ip:
# LOG.debug(_("Unable to retrieve the agent ip, check "
# "the agent %(agent_host)s configuration."),
# {'agent_host': agent.host})
# continue
#
# agent_ports = ports.get(ip, [const.FLOODING_ENTRY])
# ports[ip] = agent_ports
# And notify other agents to add flooding entry
other_fdb_entries[network_id]['ports'][agent_ip].append(
const.FLOODING_ENTRY)
if ports.keys():
self.L2populationAgentNotify.add_fdb_entries(
self.rpc_ctx, agent_fdb_entries, agent_host)
# Notify other agents to add fdb rule for current port
if port['device_owner'] != const.DEVICE_OWNER_DVR_INTERFACE:
other_fdb_entries[network_id]['ports'][agent_ip] += (
port_fdb_entries)
self.L2populationAgentNotify.add_fdb_entries(self.rpc_ctx,
other_fdb_entries)
def _update_port_down(self, context, port, agent_host):
port_infos = self._get_port_infos(context, port, agent_host)
if not port_infos:
return
agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
network_id = port['network_id']
session = db_api.get_session()
agent_active_ports = self.get_agent_network_active_port_count(
session, agent_host, network_id)
other_fdb_entries = {network_id:
{'segment_id': segment['segmentation_id'],
'network_type': segment['network_type'],
'ports': {agent_ip: []}}}
if agent_active_ports == 0:
# Agent is removing its last activated port in this network;
# other agents need to be notified to delete their flooding entry.
other_fdb_entries[network_id]['ports'][agent_ip].append(
const.FLOODING_ENTRY)
# Notify other agents to remove fdb rules for current port
if port['device_owner'] != const.DEVICE_OWNER_DVR_INTERFACE:
fdb_entries = port_fdb_entries
other_fdb_entries[network_id]['ports'][agent_ip] += fdb_entries
return other_fdb_entries


@@ -1,39 +0,0 @@
[DEFAULT]
cascade_str = cascading
debug=true
verbose=true
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend=rabbit
rabbit_host = CASCADING_CONTROL_IP
rabbit_password = USER_PWD
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://CASCADING_CONTROL_IP:8774/v2
nova_admin_username = nova
nova_admin_tenant_id =
nova_admin_password = openstack
nova_admin_auth_url = http://CASCADING_CONTROL_IP:35357/v2.0
lock_path = $state_path/lock
core_plugin = ml2
auth_strategy = keystone
nova_region_name = CASCADING_REGION_NAME
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
identity_uri = http://CASCADING_CONTROL_IP:5000
auth_host = CASCADING_CONTROL_IP
auth_port = 35357
auth_protocol = http
admin_tenant_name = TENANT_NAME
admin_user = USER_NAME
admin_password = USER_PWD
[database]
connection = mysql://neutron:openstack@CASCADING_CONTROL_IP/neutron
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default


@@ -1,107 +0,0 @@
[ovs]
bridge_mappings = default:br-eth1,external:br-ex
integration_bridge = br-int
network_vlan_ranges = default:1:4094
tunnel_type = vxlan,gre
enable_tunneling = True
local_ip = LOCAL_IP
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,l2population
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan
# Example: type_drivers = flat,vlan,gre,vxlan
# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan
# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade
# (ListOpt) Ordered list of extension driver entrypoints
# to be loaded from the neutron.ml2.extension_drivers namespace.
# extension_drivers =
# Example: extension_drivers = anewextensiondriver
[ml2_type_flat]
flat_networks = external
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *
[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2
network_vlan_ranges = default:1:4094
[ml2_type_gre]
tunnel_id_ranges = 1:1000
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =
[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
vni_ranges = 4097:200000
# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1
[securitygroup]
#firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
firewall_driver=neutron.agent.firewall.NoopFirewallDriver
enable_security_group = True
enable_ipset = True
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True
[agent]
tunnel_types = vxlan, gre
l2_population = True
arp_responder = True
enable_distributed_routing = True
# cascading configuration added by j00209498
keystone_auth_url = http://CASCADING_CONTROL_IP:35357/v2.0
neutron_user_name = USER_NAME
neutron_password = USER_PWD
neutron_tenant_name = TENANT_NAME
os_region_name = CASCADED_REGION_NAME
cascading_os_region_name = CASCADING_REGION_NAME
cascading_auth_url = http://CASCADING_CONTROL_IP:35357/v2.0
cascading_user_name = USER_NAME
cascading_password = USER_PWD
cascading_tenant_name = TENANT_NAME


@@ -1,159 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
CASCADING_CONTROL_IP=127.0.0.1
CASCADING_REGION_NAME=Cascading_Openstack
CASCADED_REGION_NAME=AZ1
USER_NAME=neutron
USER_PWD=openstack
TENANT_NAME=service
# For testing, or when the installation path is non-standard
_PREFIX_DIR=""
_NEUTRON_CONF_DIR="${_PREFIX_DIR}/etc/neutron"
_NEUTRON_CONF_FILE='neutron.conf'
_NEUTRON_INSTALL="${_PREFIX_DIR}/usr/lib/python2.7/dist-packages"
_NEUTRON_DIR="${_NEUTRON_INSTALL}/neutron"
_NEUTRON_CONF="${_NEUTRON_CONF_DIR}/neutron.conf"
_NEUTRON_L2_PROXY_FILE="plugins/ml2/ml2_conf.ini"
_NEUTRON_L2_PROXY_CONF="${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_PROXY_FILE}"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CONF_DIR="../etc/neutron/"
_CONF_BACKUP_DIR="`dirname ${_NEUTRON_CONF_DIR}`/.neutron-cascading-server-installation-backup"
_CODE_DIR="../neutron/"
_BACKUP_DIR="${_NEUTRON_INSTALL}/.neutron-cascading-server-installation-backup"
#for test begin
#rm -rf "${_CONF_BACKUP_DIR}/neutron"
#rm -rf "${_BACKUP_DIR}/neutron"
#for test end
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/neutron-server-cascading/installation/${_SCRIPT_NAME}.log"
if [ "$EUID" != "0" ]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NEUTRON_DIR}" ] ; then
echo "Could not find the neutron installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_CONF_FILE}" ] ; then
echo "Could not find neutron config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking previous installation..."
if [ -d "${_BACKUP_DIR}/neutron" -o -d "${_CONF_BACKUP_DIR}/neutron" ] ; then
echo "It seems neutron-server-cascading has already been installed!"
echo "Please check README for solution if this is not true."
exit 1
fi
echo "backing up current files that might be overwritten..."
mkdir -p "${_CONF_BACKUP_DIR}"
cp -r "${_NEUTRON_CONF_DIR}/" "${_CONF_BACKUP_DIR}/"
if [ $? -ne 0 ] ; then
rm -r "${_CONF_BACKUP_DIR}/neutron"
echo "Error in code backup, aborted."
exit 1
fi
mkdir -p "${_BACKUP_DIR}"
cp -r "${_NEUTRON_DIR}/" "${_BACKUP_DIR}/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/neutron"
echo "Error in code backup, aborted."
exit 1
fi
echo "copying in config files..."
cp -r "${_CONF_DIR}" `dirname ${_NEUTRON_CONF_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_CONF_BACKUP_DIR}/neutron" `dirname ${_NEUTRON_CONF_DIR}` && rm -r "${_CONF_BACKUP_DIR}/neutron"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
echo "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_NEUTRON_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/neutron" `dirname ${_NEUTRON_DIR}` && rm -r "${_BACKUP_DIR}/neutron"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
echo "updating config file..."
sed -i "s/CASCADING_CONTROL_IP/$CASCADING_CONTROL_IP/g" "${_NEUTRON_CONF}"
sed -i "s/CASCADING_REGION_NAME/$CASCADING_REGION_NAME/g" "${_NEUTRON_CONF}"
sed -i "s/USER_NAME/$USER_NAME/g" "${_NEUTRON_CONF}"
sed -i "s/USER_PWD/$USER_PWD/g" "${_NEUTRON_CONF}"
sed -i "s/TENANT_NAME/$TENANT_NAME/g" "${_NEUTRON_CONF}"
sed -i "s/CASCADING_CONTROL_IP/$CASCADING_CONTROL_IP/g" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_PROXY_FILE}"
sed -i "s/CASCADING_REGION_NAME/$CASCADING_REGION_NAME/g" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_PROXY_FILE}"
sed -i "s/CASCADED_REGION_NAME/$CASCADED_REGION_NAME/g" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_PROXY_FILE}"
sed -i "s/USER_NAME/$USER_NAME/g" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_PROXY_FILE}"
sed -i "s/USER_PWD/$USER_PWD/g" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_PROXY_FILE}"
sed -i "s/TENANT_NAME/$TENANT_NAME/g" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_PROXY_FILE}"
echo "upgrading and syncing the neutron DB for cascading-server-l3-patch..."
_MYSQL_PASS='openstack'
exec_sql_str="DROP DATABASE if exists neutron;CREATE DATABASE neutron;GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY \"$_MYSQL_PASS\";GRANT ALL PRIVILEGES ON *.* TO 'neutron'@'%' IDENTIFIED BY \"$_MYSQL_PASS\";"
mysql -u root -p$_MYSQL_PASS -e "$exec_sql_str"
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
if [ $? -ne 0 ] ; then
echo "There was an error in upgrading the DB for cascading-server-l3-patch, please check the cascaded neutron server code manually."
exit 1
fi
echo "restarting cascading neutron server..."
service neutron-server restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascading neutron server manually."
exit 1
fi
echo "Completed."
echo "See README to get started."
exit 0


@@ -1,275 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo.config import cfg
from neutron.common import constants
from neutron.common import exceptions
from neutron.common import rpc as n_rpc
from neutron.common import utils
from neutron import context as neutron_context
from neutron.extensions import l3
from neutron.extensions import portbindings
from neutron import manager
from neutron.openstack.common import jsonutils
from neutron.openstack.common import log as logging
from neutron.plugins.common import constants as plugin_constants
LOG = logging.getLogger(__name__)
class L3RpcCallback(n_rpc.RpcCallback):
"""L3 agent RPC callback in plugin implementations."""
# 1.0 L3PluginApi BASE_RPC_API_VERSION
# 1.1 Support update_floatingip_statuses
# 1.2 Added methods for DVR support
# 1.3 Added a method that returns the list of activated services
# 1.4 Added L3 HA update_router_state
RPC_API_VERSION = '1.4'
@property
def plugin(self):
if not hasattr(self, '_plugin'):
self._plugin = manager.NeutronManager.get_plugin()
return self._plugin
@property
def l3plugin(self):
if not hasattr(self, '_l3plugin'):
self._l3plugin = manager.NeutronManager.get_service_plugins()[
plugin_constants.L3_ROUTER_NAT]
return self._l3plugin
def sync_routers(self, context, **kwargs):
"""Sync routers according to filters to a specific agent.
@param context: contain user information
@param kwargs: host, router_ids
@return: a list of routers
with their interfaces and floating_ips
"""
router_ids = kwargs.get('router_ids')
host = kwargs.get('host')
context = neutron_context.get_admin_context()
if not self.l3plugin:
routers = {}
LOG.error(_('No plugin for L3 routing registered! Will reply '
'to l3 agent with empty router dictionary.'))
elif utils.is_extension_supported(
self.l3plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
if cfg.CONF.router_auto_schedule:
self.l3plugin.auto_schedule_routers(context, host, router_ids)
routers = (
self.l3plugin.list_active_sync_routers_on_active_l3_agent(
context, host, router_ids))
else:
routers = self.l3plugin.get_sync_data(context, router_ids)
if utils.is_extension_supported(
self.plugin, constants.PORT_BINDING_EXT_ALIAS):
self._ensure_host_set_on_ports(context, host, routers)
LOG.debug(_("Routers returned to l3 agent:\n %s"),
jsonutils.dumps(routers, indent=5))
return routers
def _ensure_host_set_on_ports(self, context, host, routers):
for router in routers:
LOG.debug(_("Checking router: %(id)s for host: %(host)s"),
{'id': router['id'], 'host': host})
if router.get('gw_port') and router.get('distributed'):
self._ensure_host_set_on_port(context,
router.get('gw_port_host'),
router.get('gw_port'),
router['id'])
for p in router.get(constants.SNAT_ROUTER_INTF_KEY, []):
self._ensure_host_set_on_port(context,
router.get('gw_port_host'),
p, router['id'])
else:
self._ensure_host_set_on_port(context, host,
router.get('gw_port'),
router['id'])
for interface in router.get(constants.INTERFACE_KEY, []):
self._ensure_host_set_on_port(context, host,
interface, router['id'])
interface = router.get(constants.HA_INTERFACE_KEY)
if interface:
self._ensure_host_set_on_port(context, host, interface,
router['id'])
def _ensure_host_set_on_port(self, context, host, port, router_id=None):
if (port and
(port.get('device_owner') !=
constants.DEVICE_OWNER_DVR_INTERFACE and
port.get(portbindings.HOST_ID) != host or
port.get(portbindings.VIF_TYPE) ==
portbindings.VIF_TYPE_BINDING_FAILED)):
# All ports, including ports created for SNAT'ing for
# DVR are handled here
try:
self.plugin.update_port(context, port['id'],
{'port': {portbindings.HOST_ID: host}})
except exceptions.PortNotFound:
LOG.debug("Port %(port)s not found while updating "
"agent binding for router %(router)s."
% {"port": port['id'], "router": router_id})
elif (port and
port.get('device_owner') ==
constants.DEVICE_OWNER_DVR_INTERFACE):
# Ports that are DVR interfaces have multiple bindings (based on
# of hosts on which DVR router interfaces are spawned). Such
# bindings are created/updated here by invoking
# update_dvr_port_binding
self.plugin.update_dvr_port_binding(context, port['id'],
{'port':
{portbindings.HOST_ID: host,
'device_id': router_id}
})
def get_external_network_id(self, context, **kwargs):
"""Get one external network id for l3 agent.
l3 agent expects only one external network when it performs
this query.
"""
context = neutron_context.get_admin_context()
net_id = self.plugin.get_external_network_id(context)
LOG.debug(_("External network ID returned to l3 agent: %s"),
net_id)
return net_id
def get_service_plugin_list(self, context, **kwargs):
plugins = manager.NeutronManager.get_service_plugins()
return plugins.keys()
def update_floatingip_statuses(self, context, router_id, fip_statuses):
"""Update operational status for a floating IP."""
with context.session.begin(subtransactions=True):
for (floatingip_id, status) in fip_statuses.iteritems():
LOG.debug(_("New status for floating IP %(floatingip_id)s: "
"%(status)s"), {'floatingip_id': floatingip_id,
'status': status})
try:
self.l3plugin.update_floatingip_status(context,
floatingip_id,
status)
except l3.FloatingIPNotFound:
LOG.debug(_("Floating IP: %s no longer present."),
floatingip_id)
# Find all floating IPs known to have been associated with the
# given router for which an update was not received. Set them
# DOWN mercilessly.
# This situation might occur for some asynchronous backends if
# notifications were missed
known_router_fips = self.l3plugin.get_floatingips(
context, {'last_known_router_id': [router_id]})
# Consider only floating ips which were disassociated in the API
# FIXME(salv-orlando): Filtering in code should be avoided.
# the plugin should offer a way to specify a null filter
fips_to_disable = (fip['id'] for fip in known_router_fips
if not fip['router_id'])
for fip_id in fips_to_disable:
self.l3plugin.update_floatingip_status(
context, fip_id, constants.FLOATINGIP_STATUS_DOWN)
def get_ports_by_subnet(self, context, **kwargs):
"""DVR: RPC called by dvr-agent to get all ports for subnet."""
subnet_id = kwargs.get('subnet_id')
LOG.debug("DVR: subnet_id: %s", subnet_id)
filters = {'fixed_ips': {'subnet_id': [subnet_id]}}
return self.plugin.get_ports(context, filters=filters)
def get_agent_gateway_port(self, context, **kwargs):
"""Get Agent Gateway port for FIP.
l3 agent expects an Agent Gateway Port to be returned
for this query.
"""
network_id = kwargs.get('network_id')
host = kwargs.get('host')
admin_ctx = neutron_context.get_admin_context()
agent_port = self.l3plugin.create_fip_agent_gw_port_if_not_exists(
admin_ctx, network_id, host)
self._ensure_host_set_on_port(admin_ctx, host, agent_port)
LOG.debug('Agent Gateway port returned : %(agent_port)s with '
'host %(host)s', {'agent_port': agent_port,
'host': host})
return agent_port
#added by jiahaojie 00209498---begin
def update_router_extern_ip_map(self, context, **kwargs):
router_id = kwargs.get('router_id')
host = kwargs.get('host')
extern_ip = kwargs.get('gateway_ip')
context = neutron_context.get_admin_context()
plugin = manager.NeutronManager.get_plugin()
plugin.update_router_az_extern_ip_mapping(context,
router_id, host, extern_ip)
def get_extra_routes_by_subnet(self, context, **kwargs):
router_id = kwargs.get('router_id')
host = kwargs.get('host')
subnet_id = kwargs.get('subnet_id')
plugin = manager.NeutronManager.get_plugin()
subnet = plugin.get_subnet(context, subnet_id)
network = plugin.get_network(context, subnet['network_id'])
binding_host = plugin.get_binding_az_by_network_id(context,
network['id'])
net_type = network['provider:network_type']
seg_id = network['provider:segmentation_id']
if(net_type == 'vxlan' and plugin.is_big2layer_vni(seg_id)):
extra_routes = ['big2Layer']
elif(net_type in ['vlan', 'vxlan'] and binding_host != host):
if(binding_host is None):
return ['not_bound_network']
extern_ip = plugin.get_extern_ip_by_router_id_and_host(
context,
router_id,
binding_host)
extra_routes = [(extern_ip, subnet['cidr'])]
else:
extra_routes = ['local_network']
return extra_routes
#added by jiahaojie 00209498---end
def get_snat_router_interface_ports(self, context, **kwargs):
"""Get SNAT serviced Router Port List.
The Service Node that hosts the SNAT service requires
the ports to service the router interfaces.
This function will check if any available ports, if not
it will create ports on the routers interfaces and
will send a list to the L3 agent.
"""
router_id = kwargs.get('router_id')
host = kwargs.get('host')
admin_ctx = neutron_context.get_admin_context()
snat_port_list = (
self.l3plugin.create_snat_intf_port_list_if_not_exists(
admin_ctx, router_id))
for p in snat_port_list:
self._ensure_host_set_on_port(admin_ctx, host, p)
LOG.debug('SNAT interface ports returned : %(snat_port_list)s '
'and on host %(host)s', {'snat_port_list': snat_port_list,
'host': host})
return snat_port_list
def update_router_state(self, context, **kwargs):
router_id = kwargs.get('router_id')
state = kwargs.get('state')
host = kwargs.get('host')
return self.l3plugin.update_router_state(context, router_id, state,
host=host)


@@ -1,196 +0,0 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Routines for configuring Neutron
"""
import os
from oslo.config import cfg
from oslo.db import options as db_options
from oslo import messaging
from paste import deploy
from neutron.api.v2 import attributes
from neutron.common import utils
from neutron.openstack.common import log as logging
from neutron import version
LOG = logging.getLogger(__name__)
core_opts = [
cfg.StrOpt('bind_host', default='0.0.0.0',
help=_("The host IP to bind to")),
cfg.IntOpt('bind_port', default=9696,
help=_("The port to bind to")),
cfg.StrOpt('api_paste_config', default="api-paste.ini",
help=_("The API paste config file to use")),
cfg.StrOpt('api_extensions_path', default="",
help=_("The path for API extensions")),
cfg.StrOpt('policy_file', default="policy.json",
help=_("The policy file to use")),
cfg.StrOpt('auth_strategy', default='keystone',
help=_("The type of authentication to use")),
cfg.StrOpt('core_plugin',
help=_("The core plugin Neutron will use")),
cfg.ListOpt('service_plugins', default=[],
help=_("The service plugins Neutron will use")),
cfg.StrOpt('base_mac', default="fa:16:3e:00:00:00",
help=_("The base MAC address Neutron will use for VIFs")),
cfg.IntOpt('mac_generation_retries', default=16,
help=_("How many times Neutron will retry MAC generation")),
cfg.BoolOpt('allow_bulk', default=True,
help=_("Allow the usage of the bulk API")),
cfg.BoolOpt('allow_pagination', default=False,
help=_("Allow the usage of the pagination")),
cfg.BoolOpt('allow_sorting', default=False,
help=_("Allow the usage of the sorting")),
cfg.StrOpt('pagination_max_limit', default="-1",
help=_("The maximum number of items returned in a single "
"response, value was 'infinite' or negative integer "
"means no limit")),
cfg.IntOpt('max_dns_nameservers', default=5,
help=_("Maximum number of DNS nameservers")),
cfg.IntOpt('max_subnet_host_routes', default=20,
help=_("Maximum number of host routes per subnet")),
cfg.IntOpt('max_fixed_ips_per_port', default=5,
help=_("Maximum number of fixed ips per port")),
cfg.IntOpt('dhcp_lease_duration', default=86400,
deprecated_name='dhcp_lease_time',
help=_("DHCP lease duration (in seconds). Use -1 to tell "
"dnsmasq to use infinite lease times.")),
cfg.BoolOpt('dhcp_agent_notification', default=True,
help=_("Allow sending resource operation"
" notification to DHCP agent")),
cfg.BoolOpt('allow_overlapping_ips', default=False,
help=_("Allow overlapping IP support in Neutron")),
cfg.StrOpt('host', default=utils.get_hostname(),
help=_("The hostname Neutron is running on")),
cfg.BoolOpt('force_gateway_on_subnet', default=True,
help=_("Ensure that configured gateway is on subnet. "
"For IPv6, validate only if gateway is not a link "
"local address. Deprecated, to be removed during the "
"K release, at which point the check will be "
"mandatory.")),
cfg.BoolOpt('notify_nova_on_port_status_changes', default=True,
help=_("Send notification to nova when port status changes")),
cfg.BoolOpt('notify_nova_on_port_data_changes', default=True,
help=_("Send notification to nova when port data (fixed_ips/"
"floatingip) changes so nova can update its cache.")),
cfg.StrOpt('nova_url',
default='http://127.0.0.1:8774/v2',
help=_('URL for connection to nova')),
cfg.StrOpt('nova_admin_username',
help=_('Username for connecting to nova in admin context')),
cfg.StrOpt('nova_admin_password',
help=_('Password for connection to nova in admin context'),
secret=True),
cfg.StrOpt('nova_admin_tenant_id',
help=_('The uuid of the admin nova tenant')),
cfg.StrOpt('nova_admin_auth_url',
default='http://localhost:5000/v2.0',
help=_('Authorization URL for connecting to nova in admin '
'context')),
cfg.StrOpt('nova_ca_certificates_file',
help=_('CA file for novaclient to verify server certificates')),
cfg.BoolOpt('nova_api_insecure', default=False,
help=_("If True, ignore any SSL validation issues")),
cfg.StrOpt('nova_region_name',
help=_('Name of nova region to use. Useful if keystone manages'
' more than one region.')),
cfg.IntOpt('send_events_interval', default=2,
help=_('Number of seconds between sending events to nova if '
'there are any events to send.')),
# add by j00209498
cfg.StrOpt('cascade_str', default='cascading',
help=_('cascade_str identifies whether this OpenStack is the '
'cascading or the cascaded one; '
'value = cascading or cascaded.')),
]
core_cli_opts = [
cfg.StrOpt('state_path',
default='/var/lib/neutron',
help=_("Where to store Neutron state files. "
"This directory must be writable by the agent.")),
]
# Register the configuration options
cfg.CONF.register_opts(core_opts)
cfg.CONF.register_cli_opts(core_cli_opts)
# Ensure that the control exchange is set correctly
messaging.set_transport_defaults(control_exchange='neutron')
_SQL_CONNECTION_DEFAULT = 'sqlite://'
# Update the default QueuePool parameters. These can be tweaked by the
# configuration variables - max_pool_size, max_overflow and pool_timeout
db_options.set_defaults(cfg.CONF,
connection=_SQL_CONNECTION_DEFAULT,
sqlite_db='', max_pool_size=10,
max_overflow=20, pool_timeout=10)
def init(args, **kwargs):
cfg.CONF(args=args, project='neutron',
version='%%prog %s' % version.version_info.release_string(),
**kwargs)
# FIXME(ihrachys): if import is put in global, circular import
# failure occurs
from neutron.common import rpc as n_rpc
n_rpc.init(cfg.CONF)
# Validate that the base_mac is of the correct format
msg = attributes._validate_regex(cfg.CONF.base_mac,
attributes.MAC_PATTERN)
if msg:
msg = _("Base MAC: %s") % msg
raise Exception(msg)
def setup_logging():
"""Sets up the logging options for a log with supplied name."""
product_name = "neutron"
logging.setup(product_name)
LOG.info(_("Logging enabled!"))
def load_paste_app(app_name):
"""Builds and returns a WSGI app from a paste config file.
:param app_name: Name of the application to load
:raises ConfigFilesNotFoundError when config file cannot be located
:raises RuntimeError when application cannot be loaded from config file
"""
config_path = cfg.CONF.find_file(cfg.CONF.api_paste_config)
if not config_path:
raise cfg.ConfigFilesNotFoundError(
config_files=[cfg.CONF.api_paste_config])
config_path = os.path.abspath(config_path)
LOG.info(_("Config paste file: %s"), config_path)
try:
app = deploy.loadapp("config:%s" % config_path, name=app_name)
except (LookupError, ImportError):
msg = (_("Unable to load %(app_name)s from "
"configuration file %(config_path)s.") %
{'app_name': app_name,
'config_path': config_path})
LOG.exception(msg)
raise RuntimeError(msg)
return app


@@ -1,341 +0,0 @@
# Copyright 2011 VMware, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Neutron base exception handling.
"""
from neutron.openstack.common import excutils
class NeutronException(Exception):
"""Base Neutron Exception.
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred.")
def __init__(self, **kwargs):
try:
super(NeutronException, self).__init__(self.message % kwargs)
self.msg = self.message % kwargs
except Exception:
with excutils.save_and_reraise_exception() as ctxt:
if not self.use_fatal_exceptions():
ctxt.reraise = False
# at least get the core message out if something happened
super(NeutronException, self).__init__(self.message)
def __unicode__(self):
return unicode(self.msg)
def use_fatal_exceptions(self):
return False
class BadRequest(NeutronException):
message = _('Bad %(resource)s request: %(msg)s')
class NotFound(NeutronException):
pass
class Conflict(NeutronException):
pass
class NotAuthorized(NeutronException):
message = _("Not authorized.")
class ServiceUnavailable(NeutronException):
message = _("The service is unavailable")
class AdminRequired(NotAuthorized):
message = _("User does not have admin privileges: %(reason)s")
class PolicyNotAuthorized(NotAuthorized):
message = _("Policy doesn't allow %(action)s to be performed.")
class NetworkNotFound(NotFound):
message = _("Network %(net_id)s could not be found")
class SubnetNotFound(NotFound):
message = _("Subnet %(subnet_id)s could not be found")
class PortNotFound(NotFound):
message = _("Port %(port_id)s could not be found")
class PortNotFoundOnNetwork(NotFound):
message = _("Port %(port_id)s could not be found "
"on network %(net_id)s")
class PolicyFileNotFound(NotFound):
message = _("Policy configuration policy.json could not be found")
class PolicyInitError(NeutronException):
message = _("Failed to init policy %(policy)s because %(reason)s")
class PolicyCheckError(NeutronException):
message = _("Failed to check policy %(policy)s because %(reason)s")
class StateInvalid(BadRequest):
message = _("Unsupported port state: %(port_state)s")
class InUse(NeutronException):
message = _("The resource is inuse")
class NetworkInUse(InUse):
message = _("Unable to complete operation on network %(net_id)s. "
"There are one or more ports still in use on the network.")
class SubnetInUse(InUse):
message = _("Unable to complete operation on subnet %(subnet_id)s. "
"One or more ports have an IP allocation from this subnet.")
class PortInUse(InUse):
message = _("Unable to complete operation on port %(port_id)s "
"for network %(net_id)s. Port already has an attached"
"device %(device_id)s.")
class MacAddressInUse(InUse):
message = _("Unable to complete operation for network %(net_id)s. "
"The mac address %(mac)s is in use.")
class HostRoutesExhausted(BadRequest):
# NOTE(xchenum): probably make sense to use quota exceeded exception?
message = _("Unable to complete operation for %(subnet_id)s. "
"The number of host routes exceeds the limit %(quota)s.")
class DNSNameServersExhausted(BadRequest):
# NOTE(xchenum): probably make sense to use quota exceeded exception?
message = _("Unable to complete operation for %(subnet_id)s. "
"The number of DNS nameservers exceeds the limit %(quota)s.")
class IpAddressInUse(InUse):
message = _("Unable to complete operation for network %(net_id)s. "
"The IP address %(ip_address)s is in use.")
class VlanIdInUse(InUse):
message = _("Unable to create the network. "
"The VLAN %(vlan_id)s on physical network "
"%(physical_network)s is in use.")
class FlatNetworkInUse(InUse):
message = _("Unable to create the flat network. "
"Physical network %(physical_network)s is in use.")
class TunnelIdInUse(InUse):
message = _("Unable to create the network. "
"The tunnel ID %(tunnel_id)s is in use.")
class TenantNetworksDisabled(ServiceUnavailable):
message = _("Tenant network creation is not enabled.")
class ResourceExhausted(ServiceUnavailable):
pass
class NoNetworkAvailable(ResourceExhausted):
message = _("Unable to create the network. "
"No tenant network is available for allocation.")
class NoNetworkFoundInMaximumAllowedAttempts(ServiceUnavailable):
message = _("Unable to create the network. "
"No available network found in maximum allowed attempts.")
class SubnetMismatchForPort(BadRequest):
message = _("Subnet on port %(port_id)s does not match "
"the requested subnet %(subnet_id)s")
class MalformedRequestBody(BadRequest):
message = _("Malformed request body: %(reason)s")
class Invalid(NeutronException):
def __init__(self, message=None):
self.message = message
super(Invalid, self).__init__()
class InvalidInput(BadRequest):
message = _("Invalid input for operation: %(error_message)s.")
class InvalidAllocationPool(BadRequest):
message = _("The allocation pool %(pool)s is not valid.")
class OverlappingAllocationPools(Conflict):
message = _("Found overlapping allocation pools:"
"%(pool_1)s %(pool_2)s for subnet %(subnet_cidr)s.")
class OutOfBoundsAllocationPool(BadRequest):
message = _("The allocation pool %(pool)s spans "
"beyond the subnet cidr %(subnet_cidr)s.")
class MacAddressGenerationFailure(ServiceUnavailable):
message = _("Unable to generate unique mac on network %(net_id)s.")
class IpAddressGenerationFailure(Conflict):
message = _("No more IP addresses available on network %(net_id)s.")
class BridgeDoesNotExist(NeutronException):
message = _("Bridge %(bridge)s does not exist.")
class PreexistingDeviceFailure(NeutronException):
message = _("Creation failed. %(dev_name)s already exists.")
class SudoRequired(NeutronException):
message = _("Sudo privilege is required to run this command.")
class QuotaResourceUnknown(NotFound):
message = _("Unknown quota resources %(unknown)s.")
class OverQuota(Conflict):
message = _("Quota exceeded for resources: %(overs)s")
class QuotaMissingTenant(BadRequest):
message = _("Tenant-id was missing from Quota request")
class InvalidQuotaValue(Conflict):
message = _("Change would make usage less than 0 for the following "
"resources: %(unders)s")
class InvalidSharedSetting(Conflict):
message = _("Unable to reconfigure sharing settings for network "
"%(network)s. Multiple tenants are using it")
class InvalidExtensionEnv(BadRequest):
message = _("Invalid extension environment: %(reason)s")
class ExtensionsNotFound(NotFound):
message = _("Extensions not found: %(extensions)s")
class InvalidContentType(NeutronException):
message = _("Invalid content type %(content_type)s")
class ExternalIpAddressExhausted(BadRequest):
message = _("Unable to find any IP address on external "
"network %(net_id)s.")
class TooManyExternalNetworks(NeutronException):
message = _("More than one external network exists")
class InvalidConfigurationOption(NeutronException):
message = _("An invalid value was provided for %(opt_name)s: "
"%(opt_value)s")
class GatewayConflictWithAllocationPools(InUse):
message = _("Gateway ip %(ip_address)s conflicts with "
"allocation pool %(pool)s")
class GatewayIpInUse(InUse):
message = _("Current gateway ip %(ip_address)s already in use "
"by port %(port_id)s. Unable to update.")
class NetworkVlanRangeError(NeutronException):
message = _("Invalid network VLAN range: '%(vlan_range)s' - '%(error)s'")
def __init__(self, **kwargs):
# Convert vlan_range tuple to 'start:end' format for display
if isinstance(kwargs['vlan_range'], tuple):
kwargs['vlan_range'] = "%d:%d" % kwargs['vlan_range']
super(NetworkVlanRangeError, self).__init__(**kwargs)
class NetworkTunnelRangeError(NeutronException):
message = _("Invalid network Tunnel range: "
"'%(tunnel_range)s' - %(error)s")
def __init__(self, **kwargs):
# Convert tunnel_range tuple to 'start:end' format for display
if isinstance(kwargs['tunnel_range'], tuple):
kwargs['tunnel_range'] = "%d:%d" % kwargs['tunnel_range']
super(NetworkTunnelRangeError, self).__init__(**kwargs)
class NetworkVxlanPortRangeError(NeutronException):
message = _("Invalid network VXLAN port range: '%(vxlan_range)s'")
class VxlanNetworkUnsupported(NeutronException):
message = _("VXLAN Network unsupported.")
class DuplicatedExtension(NeutronException):
message = _("Found duplicate extension: %(alias)s")
class DeviceIDNotOwnedByTenant(Conflict):
message = _("The following device_id %(device_id)s is not owned by your "
"tenant or matches another tenants router.")
class InvalidCIDR(BadRequest):
message = _("Invalid CIDR %(input)s given as IP prefix")
class PortBindAZError(BadRequest):
message = _("Network %(net_id)s is local network, "
"cannot be created in host %(host)s AZ.")


@@ -1,162 +0,0 @@
'''
Created on 2014-8-5
@author: j00209498
'''
from oslo.db import exception as db_exc
import sqlalchemy as sa
from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api
from neutron.common import exceptions as q_exc
from neutron.common import log
from neutron.common import utils
from neutron.db import model_base
from neutron.extensions import dvr as ext_dvr
from neutron import manager
from neutron.openstack.common import log as logging
from oslo.config import cfg
from sqlalchemy.orm import exc
LOG = logging.getLogger(__name__)
big2layer_vni_opts = [
cfg.StrOpt('big2layer_vni_range',
default="4097:20000",
help=_('The big 2 layer vxlan vni range used for '
'CascadeDBMixin instances by Neutron')),
]
cfg.CONF.register_opts(big2layer_vni_opts)
class CascadeAZNetworkBinding(model_base.BASEV2):
"""Represents a v2 neutron distributed virtual router mac address."""
__tablename__ = 'cascade_az_network_bind'
network_id = sa.Column(sa.String(36), primary_key=True, nullable=False)
host = sa.Column(sa.String(255), primary_key=True, nullable=False)
class CascadeRouterAZExternipMapping(model_base.BASEV2):
"""Represents a v2 neutron distributed virtual router mac address."""
__tablename__ = 'cascade_router_az_externip_map'
router_id = sa.Column(sa.String(36), primary_key=True, nullable=False)
host = sa.Column(sa.String(255), primary_key=True, nullable=False)
extern_ip = sa.Column(sa.String(64), nullable=False)
class CascadeDBMixin(object):
@property
def l3_rpc_notifier(self):
if not hasattr(self, '_l3_rpc_notifier'):
self._l3_rpc_notifier = l3_rpc_agent_api.L3AgentNotifyAPI()
return self._l3_rpc_notifier
def is_big2layer_vni(self, seg_id):
vni = cfg.CONF.big2layer_vni_range.split(':')
if(seg_id >= int(vni[0]) and seg_id <= int(vni[1])):
return True
else:
return False
def get_binding_az_by_network_id(self, context, net_id):
try:
query = context.session.query(CascadeAZNetworkBinding)
ban = query.filter(
CascadeAZNetworkBinding.network_id == net_id).one()
except exc.NoResultFound:
return None
return ban['host']
def add_binding_az_network_id(self, context, binding_host, net_id):
try:
with context.session.begin(subtransactions=True):
dvr_mac_binding = CascadeAZNetworkBinding(
network_id=net_id, host=binding_host)
context.session.add(dvr_mac_binding)
LOG.debug("add az_host %(host)s for network %(network_id)s ",
{'host': binding_host, 'network_id': net_id})
except db_exc.DBDuplicateEntry:
LOG.debug("az_host %(host)s exists for network %(network_id)s,"
" DBDuplicateEntry error.",
{'host': binding_host, 'network_id': net_id})
def get_extern_ip_by_router_id_and_host(self, context, router_id, host):
rae = self.get_router_az_extern_ip_mapping(context, router_id, host)
if(rae):
return rae['extern_ip']
return None
# try:
# query = context.session.query(CascadeRouterAZExternipMapping)
# erh = query.filter(
# CascadeRouterAZExternipMapping.router_id == router_id,
# CascadeRouterAZExternipMapping.host == host).one()
# except exc.NoResultFound:
# return None
# return erh['extern_ip']
def get_router_az_extern_ip_mapping(self, context, router_id, host):
try:
query = context.session.query(CascadeRouterAZExternipMapping)
erh = query.filter(
CascadeRouterAZExternipMapping.router_id == router_id,
CascadeRouterAZExternipMapping.host == host).one()
except exc.NoResultFound:
return None
return erh
def update_router_az_extern_ip_mapping(self, context, router_id,
host, extern_ip):
if extern_ip is None:
self.del_router_az_extern_ip_mapping(context, router_id, host)
self.l3_rpc_notifier.routers_updated(context, [router_id],
None, None)
return
rae = self.get_router_az_extern_ip_mapping(context, router_id, host)
if(rae and rae['extern_ip'] != extern_ip):
update_rae = {}
update_rae['router_id'] = rae['router_id']
update_rae['host'] = rae['host']
update_rae['extern_ip'] = extern_ip
rae.update(update_rae)
LOG.debug("update extern_ip %(extern_ip)s for az_host %(host)s "
"and router %(router_id)s ",
{'extern_ip': extern_ip,
'host': host,
'router_id': router_id})
self.l3_rpc_notifier.routers_updated(context, [router_id],
None, None)
return
try:
with context.session.begin(subtransactions=True):
router_az_extern_ip_map = CascadeRouterAZExternipMapping(
router_id=router_id, host=host, extern_ip=extern_ip)
context.session.add(router_az_extern_ip_map)
LOG.debug("add extern_ip %(extern_ip)s for az_host %(host)s "
"and router %(router_id)s ",
{'extern_ip': extern_ip,
'host': host,
'router_id': router_id})
self.l3_rpc_notifier.routers_updated(context, [router_id],
None, None)
except db_exc.DBDuplicateEntry:
LOG.debug("DBDuplicateEntry ERR:update extern_ip %(extern_ip)s "
"for az_host %(host)s and router %(router_id)s ",
{'extern_ip': extern_ip,
'host': host,
'router_id': router_id})
def del_router_az_extern_ip_mapping(self, context, router_id, host):
try:
query = context.session.query(CascadeRouterAZExternipMapping)
query.filter(
CascadeRouterAZExternipMapping.router_id == router_id,
CascadeRouterAZExternipMapping.host == host).delete()
except exc.NoResultFound:
return None


@@ -1,84 +0,0 @@
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""L2 models to support DVR
Revision ID: 2026156eab2f
Revises: 3927f7f7c456
Create Date: 2014-06-23 19:12:43.392912
"""
# revision identifiers, used by Alembic.
revision = '2026156eab2f'
down_revision = '3927f7f7c456'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'dvr_host_macs',
sa.Column('host', sa.String(length=255), nullable=False),
sa.Column('mac_address', sa.String(length=32),
nullable=False, unique=True),
sa.PrimaryKeyConstraint('host')
)
op.create_table(
'ml2_dvr_port_bindings',
sa.Column('port_id', sa.String(length=36), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
sa.Column('router_id', sa.String(length=36), nullable=True),
sa.Column('vif_type', sa.String(length=64), nullable=False),
sa.Column('vif_details', sa.String(length=4095),
nullable=False, server_default=''),
sa.Column('vnic_type', sa.String(length=64),
nullable=False, server_default='normal'),
sa.Column('profile', sa.String(length=4095),
nullable=False, server_default=''),
sa.Column('cap_port_filter', sa.Boolean(), nullable=False),
sa.Column('driver', sa.String(length=64), nullable=True),
sa.Column('segment', sa.String(length=36), nullable=True),
sa.Column(u'status', sa.String(16), nullable=False),
sa.ForeignKeyConstraint(['port_id'], ['ports.id'],
ondelete='CASCADE'),
sa.ForeignKeyConstraint(['segment'], ['ml2_network_segments.id'],
ondelete='SET NULL'),
sa.PrimaryKeyConstraint('port_id', 'host')
)
# add by jiahaojie 00209498 ---begin
op.create_table(
'cascade_az_network_bind',
sa.Column('network_id', sa.String(length=36), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('network_id', 'host')
)
op.create_table(
'cascade_router_az_externip_map',
sa.Column('router_id', sa.String(length=36), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
sa.Column('extern_ip', sa.String(length=64), nullable=False),
sa.PrimaryKeyConstraint('router_id', 'host')
)
# add by jiahaojie 00209498 ---end
def downgrade():
op.drop_table('ml2_dvr_port_bindings')
op.drop_table('dvr_host_macs')
op.drop_table('cascade_az_network_bind')
op.drop_table('cascade_router_az_externip_map')


@@ -1,68 +0,0 @@
Openstack Neutron timestamp_cascaded_patch
===============================
Neutron timestamp_cascaded_patch provides the query filter 'changes_since' for the "list ports" API. To achieve this, we add three fields ('created_at'/'updated_at'/'deleted_at') to the ports table in the Neutron DB, and modify a few lines of code in the _apply_filters_to_query() function. This patch should be applied to the cascaded Neutron nodes.
Key modules
-----------
* add three fields ('created_at'/'updated_at'/'deleted_at') to the ports table, and modify a few lines of code in the _apply_filters_to_query() function:
neutron/db/migration/alembic_migrations/versions/238cf36dab26_add_port_timestamp_revision.py
neutron/db/migration/core_init_ops.py
neutron/db/common_db_mixin.py
neutron/db/models_v2.py
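The filtering the patch adds can be sketched as a small standalone function (an illustration only, not the patched Neutron code; the real implementation lives in _apply_filters_to_query() and uses oslo's timeutils):

```python
from datetime import datetime, timezone

def parse_isotime(value):
    # Minimal stand-in for timeutils.parse_isotime; accepts 'YYYY-MM-DDTHH:MM:SSZ'.
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def filter_changes_since(ports, filters):
    """Keep only ports whose updated_at is at or after filters['changes_since']."""
    value = filters.get('changes_since')
    if value is None:
        return list(ports)
    if isinstance(value, list):  # query-string values may arrive as a list
        value = value[0]
    since = parse_isotime(value)
    return [p for p in ports if p['updated_at'] >= since]

ports = [
    {'id': 'p1', 'updated_at': datetime(2014, 11, 1, tzinfo=timezone.utc)},
    {'id': 'p2', 'updated_at': datetime(2014, 12, 1, tzinfo=timezone.utc)},
]
recent = filter_changes_since(ports, {'changes_since': '2014-11-15T00:00:00Z'})  # only 'p2'
```

Without the 'changes_since' key in the filter dict, all ports pass through unchanged, matching the opt-in nature of the filter.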
Requirements
------------
* openstack neutron-2014.2 has been installed.
Installation
------------
We provide two ways to install the Neutron timestamp_cascaded_patch. In this section, we will guide you through installing it without modifying the configuration.
* **Note:**
- Make sure you have an existing installation of **Openstack Neutron of Juno Version**.
- We recommend that you back up at least the following files before installation, because they are to be overwritten or modified:
$NEUTRON_PARENT_DIR/neutron
(replace the $... with actual directory names.)
* **Manual Installation**
- Navigate to the local repository and copy the contents of the 'neutron' sub-directory to the corresponding places in the existing neutron, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/neutron $NEUTRON_PARENT_DIR```
(replace the $... with actual directory name.)
- Upgrade DB
```neutron-db-manage --config-file $CONFIG_FILE_PATH/neutron.conf --config-file $CONFIG_FILE_PATH/plugins/ml2/ml2_conf.ini upgrade head```
(replace the $... with actual directory name.)
- Restart the neutron-server.
```service neutron-server restart```
- Done.
* **Automatic Installation**
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)
- Done. The installation script will automatically modify the neutron code, upgrade DB and restart neutron-server.
* **Troubleshooting**
In case the automatic installation process does not complete, please check the following:
- Make sure your OpenStack version is Juno.
- Check the variables at the beginning of the install.sh script. Your installation directories may differ from the default values we provide.
- The installation script automatically copies the modified code into $NEUTRON_PARENT_DIR/neutron.
- In case the automatic installation does not work, try to install manually.
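Once the patch is installed, the new filter can be passed as a query parameter on the standard list-ports call. A sketch of building such a request (the endpoint address below is a placeholder, not a value from this repository):

```python
# Build the list-ports URL with the new 'changes_since' filter.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2, matching this Juno-era code

def list_ports_url(neutron_endpoint, changes_since):
    """Return the Neutron v2.0 list-ports URL filtered by 'changes_since'."""
    query = urlencode({'changes_since': changes_since})
    return "%s/v2.0/ports?%s" % (neutron_endpoint, query)

url = list_ports_url("http://127.0.0.1:9696", "2014-11-27T17:04:05Z")
```

The timestamp is URL-encoded, so the colons in the ISO time arrive at the server intact.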


@@ -1,103 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_NEUTRON_CONF_DIR="/etc/neutron"
_NEUTRON_CONF_FILE='neutron.conf'
_NEUTRON_INSTALL="/usr/lib/python2.7/dist-packages"
_NEUTRON_DIR="${_NEUTRON_INSTALL}/neutron"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../neutron/"
_BACKUP_DIR="${_NEUTRON_INSTALL}/.neutron-cascaded-timestamp-patch-installation-backup"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NEUTRON_DIR}" ] ; then
echo "Could not find the neutron installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_CONF_FILE}" ] ; then
echo "Could not find neutron config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking previous installation..."
if [ -d "${_BACKUP_DIR}/neutron" ] ; then
echo "It seems neutron-server-cascaded-timestamp-patch has already been installed!"
echo "Please check README for solution if this is not true."
exit 1
fi
echo "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}"
cp -r "${_NEUTRON_DIR}/" "${_BACKUP_DIR}/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/neutron"
echo "Error in code backup, aborted."
exit 1
fi
echo "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_NEUTRON_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/neutron" `dirname ${_NEUTRON_DIR}` && rm -r "${_BACKUP_DIR}/neutron"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
echo "upgrade DB for cascaded-timestamp-patch..."
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
if [ $? -ne 0 ] ; then
echo "There was an error in upgrading DB for cascaded-timestamp-patch, please check the cascaded neutron server code manually."
exit 1
fi
echo "restarting cascaded neutron server..."
service neutron-server restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron server manually."
exit 1
fi
echo "restarting cascaded neutron-plugin-openvswitch-agent..."
service neutron-plugin-openvswitch-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-plugin-openvswitch-agent manually."
exit 1
fi
echo "restarting cascaded neutron-l3-agent..."
service neutron-l3-agent restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart cascaded neutron-l3-agent manually."
exit 1
fi
echo "Completed."
echo "See README to get started."
exit 0


@@ -1,203 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import weakref
from sqlalchemy import sql
from neutron.common import exceptions as n_exc
from neutron.db import sqlalchemyutils
from neutron.openstack.common import timeutils
class CommonDbMixin(object):
"""Common methods used in core and service plugins."""
# Plugins, mixin classes implementing extension will register
# hooks into the dict below for "augmenting" the "core way" of
# building a query for retrieving objects from a model class.
# To this aim, the register_model_query_hook and unregister_query_hook
# from this class should be invoked
_model_query_hooks = {}
# This dictionary will store methods for extending attributes of
# api resources. Mixins can use this dict for adding their own methods
# TODO(salvatore-orlando): Avoid using class-level variables
_dict_extend_functions = {}
@classmethod
def register_model_query_hook(cls, model, name, query_hook, filter_hook,
result_filters=None):
"""Register a hook to be invoked when a query is executed.
Add the hooks to the _model_query_hooks dict. Models are the keys
of this dict, whereas the value is another dict mapping hook names to
callables performing the hook.
Each hook has a "query" component, used to build the query expression
and a "filter" component, which is used to build the filter expression.
Query hooks take as input the query being built and return a
transformed query expression.
Filter hooks take as input the filter expression being built and return
a transformed filter expression
"""
model_hooks = cls._model_query_hooks.get(model)
if not model_hooks:
# add key to dict
model_hooks = {}
cls._model_query_hooks[model] = model_hooks
model_hooks[name] = {'query': query_hook, 'filter': filter_hook,
'result_filters': result_filters}
@property
def safe_reference(self):
"""Return a weakref to the instance.
Minimize the potential for the instance persisting
unnecessarily in memory by returning a weakref proxy that
won't prevent deallocation.
"""
return weakref.proxy(self)
def _model_query(self, context, model):
query = context.session.query(model)
# define basic filter condition for model query
# NOTE(jkoelker) non-admin queries are scoped to their tenant_id
# NOTE(salvatore-orlando): unless the model allows for shared objects
query_filter = None
if not context.is_admin and hasattr(model, 'tenant_id'):
if hasattr(model, 'shared'):
query_filter = ((model.tenant_id == context.tenant_id) |
(model.shared == sql.true()))
else:
query_filter = (model.tenant_id == context.tenant_id)
# Execute query hooks registered from mixins and plugins
for _name, hooks in self._model_query_hooks.get(model,
{}).iteritems():
query_hook = hooks.get('query')
if isinstance(query_hook, basestring):
query_hook = getattr(self, query_hook, None)
if query_hook:
query = query_hook(context, model, query)
filter_hook = hooks.get('filter')
if isinstance(filter_hook, basestring):
filter_hook = getattr(self, filter_hook, None)
if filter_hook:
query_filter = filter_hook(context, model, query_filter)
# NOTE(salvatore-orlando): 'if query_filter' will try to evaluate the
# condition, raising an exception
if query_filter is not None:
query = query.filter(query_filter)
return query
def _fields(self, resource, fields):
if fields:
return dict(((key, item) for key, item in resource.items()
if key in fields))
return resource
def _get_tenant_id_for_create(self, context, resource):
if context.is_admin and 'tenant_id' in resource:
tenant_id = resource['tenant_id']
elif ('tenant_id' in resource and
resource['tenant_id'] != context.tenant_id):
reason = _('Cannot create resource for another tenant')
raise n_exc.AdminRequired(reason=reason)
else:
tenant_id = context.tenant_id
return tenant_id
def _get_by_id(self, context, model, id):
query = self._model_query(context, model)
return query.filter(model.id == id).one()
def _apply_filters_to_query(self, query, model, filters):
if filters:
for key, value in filters.iteritems():
column = getattr(model, key, None)
if column:
query = query.filter(column.in_(value))
if 'changes_since' in filters:
if isinstance(filters['changes_since'], list):
changes_since = timeutils.parse_isotime(filters['changes_since'][0])
else:
changes_since = timeutils.parse_isotime(filters['changes_since'])
updated_at = timeutils.normalize_time(changes_since)
query = query.filter(model.updated_at >= updated_at)
for _name, hooks in self._model_query_hooks.get(model,
{}).iteritems():
result_filter = hooks.get('result_filters', None)
if isinstance(result_filter, basestring):
result_filter = getattr(self, result_filter, None)
if result_filter:
query = result_filter(query, filters)
return query
def _apply_dict_extend_functions(self, resource_type,
response, db_object):
for func in self._dict_extend_functions.get(
resource_type, []):
args = (response, db_object)
if isinstance(func, basestring):
func = getattr(self, func, None)
else:
# must call unbound method - use self as 1st argument
args = (self,) + args
if func:
func(*args)
def _get_collection_query(self, context, model, filters=None,
sorts=None, limit=None, marker_obj=None,
page_reverse=False):
collection = self._model_query(context, model)
collection = self._apply_filters_to_query(collection, model, filters)
if limit and page_reverse and sorts:
sorts = [(s[0], not s[1]) for s in sorts]
collection = sqlalchemyutils.paginate_query(collection, model, limit,
sorts,
marker_obj=marker_obj)
return collection
def _get_collection(self, context, model, dict_func, filters=None,
fields=None, sorts=None, limit=None, marker_obj=None,
page_reverse=False):
query = self._get_collection_query(context, model, filters=filters,
sorts=sorts,
limit=limit,
marker_obj=marker_obj,
page_reverse=page_reverse)
items = [dict_func(c, fields) for c in query]
if limit and page_reverse:
items.reverse()
return items
def _get_collection_count(self, context, model, filters=None):
return self._get_collection_query(context, model, filters).count()
def _get_marker_obj(self, context, resource, limit, marker):
if limit and marker:
return getattr(self, '_get_%s' % resource)(context, marker)
return None
def _filter_non_model_columns(self, data, model):
"""Remove all the attributes from data which are not columns of
the model passed as second parameter.
"""
columns = [c.name for c in model.__table__.columns]
return dict((k, v) for (k, v) in
data.iteritems() if k in columns)
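The hook mechanism documented in register_model_query_hook() above can be exercised with a stripped-down sketch (a dummy model and a list standing in for the query object, not Neutron's actual SQLAlchemy wiring):

```python
class MixinSketch(object):
    # Simplified version of the registration/application pattern above.
    _model_query_hooks = {}

    @classmethod
    def register_model_query_hook(cls, model, name, query_hook, filter_hook,
                                  result_filters=None):
        # Models key the outer dict; hook names key the inner dict.
        cls._model_query_hooks.setdefault(model, {})[name] = {
            'query': query_hook, 'filter': filter_hook,
            'result_filters': result_filters}

    def apply_query_hooks(self, context, model, query):
        # Mirror of the loop in _model_query(): each query hook transforms
        # the query being built and returns the new expression.
        for hooks in self._model_query_hooks.get(model, {}).values():
            query_hook = hooks.get('query')
            if query_hook:
                query = query_hook(context, model, query)
        return query

class FakeModel(object):
    pass

def only_active(context, model, query):
    # A "query" hook: return a transformed query expression.
    return query + ['status == ACTIVE']

MixinSketch.register_model_query_hook(FakeModel, 'active', only_active, None)
query = MixinSketch().apply_query_hooks(None, FakeModel, ['base query'])
```

Because the hooks dict is class-level, registrations made by one mixin are visible to every plugin sharing the class, which is exactly why the real code stores them on the class rather than the instance.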


@@ -1,132 +0,0 @@
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Initial operations for core resources
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'networks',
sa.Column('tenant_id', sa.String(length=255), nullable=True),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('name', sa.String(length=255), nullable=True),
sa.Column('status', sa.String(length=16), nullable=True),
sa.Column('admin_state_up', sa.Boolean(), nullable=True),
sa.Column('shared', sa.Boolean(), nullable=True),
sa.PrimaryKeyConstraint('id'))
op.create_table(
'ports',
sa.Column('created_at', sa.DateTime),
sa.Column('updated_at', sa.DateTime),
sa.Column('deleted_at', sa.DateTime),
sa.Column('tenant_id', sa.String(length=255), nullable=True),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('name', sa.String(length=255), nullable=True),
sa.Column('network_id', sa.String(length=36), nullable=False),
sa.Column('mac_address', sa.String(length=32), nullable=False),
sa.Column('admin_state_up', sa.Boolean(), nullable=False),
sa.Column('status', sa.String(length=16), nullable=False),
sa.Column('device_id', sa.String(length=255), nullable=False),
sa.Column('device_owner', sa.String(length=255), nullable=False),
sa.ForeignKeyConstraint(['network_id'], ['networks.id'], ),
sa.PrimaryKeyConstraint('id'))
op.create_table(
'subnets',
sa.Column('tenant_id', sa.String(length=255), nullable=True),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('name', sa.String(length=255), nullable=True),
sa.Column('network_id', sa.String(length=36), nullable=True),
sa.Column('ip_version', sa.Integer(), nullable=False),
sa.Column('cidr', sa.String(length=64), nullable=False),
sa.Column('gateway_ip', sa.String(length=64), nullable=True),
sa.Column('enable_dhcp', sa.Boolean(), nullable=True),
sa.Column('shared', sa.Boolean(), nullable=True),
sa.ForeignKeyConstraint(['network_id'], ['networks.id'], ),
sa.PrimaryKeyConstraint('id'))
op.create_table(
'dnsnameservers',
sa.Column('address', sa.String(length=128), nullable=False),
sa.Column('subnet_id', sa.String(length=36), nullable=False),
sa.ForeignKeyConstraint(['subnet_id'], ['subnets.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('address', 'subnet_id'))
op.create_table(
'ipallocationpools',
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('subnet_id', sa.String(length=36), nullable=True),
sa.Column('first_ip', sa.String(length=64), nullable=False),
sa.Column('last_ip', sa.String(length=64), nullable=False),
sa.ForeignKeyConstraint(['subnet_id'], ['subnets.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'))
op.create_table(
'subnetroutes',
sa.Column('destination', sa.String(length=64), nullable=False),
sa.Column('nexthop', sa.String(length=64), nullable=False),
sa.Column('subnet_id', sa.String(length=36), nullable=False),
sa.ForeignKeyConstraint(['subnet_id'], ['subnets.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('destination', 'nexthop', 'subnet_id'))
op.create_table(
'ipallocations',
sa.Column('port_id', sa.String(length=36), nullable=True),
sa.Column('ip_address', sa.String(length=64), nullable=False),
sa.Column('subnet_id', sa.String(length=36), nullable=False),
sa.Column('network_id', sa.String(length=36), nullable=False),
sa.ForeignKeyConstraint(['network_id'], ['networks.id'],
ondelete='CASCADE'),
sa.ForeignKeyConstraint(['port_id'], ['ports.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['subnet_id'], ['subnets.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('ip_address', 'subnet_id', 'network_id'))
op.create_table(
'ipavailabilityranges',
sa.Column('allocation_pool_id', sa.String(length=36), nullable=False),
sa.Column('first_ip', sa.String(length=64), nullable=False),
sa.Column('last_ip', sa.String(length=64), nullable=False),
sa.ForeignKeyConstraint(['allocation_pool_id'],
['ipallocationpools.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('allocation_pool_id', 'first_ip', 'last_ip'))
op.create_table(
'networkdhcpagentbindings',
sa.Column('network_id', sa.String(length=36), nullable=False),
sa.Column('dhcp_agent_id', sa.String(length=36), nullable=False),
sa.ForeignKeyConstraint(['dhcp_agent_id'], ['agents.id'],
ondelete='CASCADE'),
sa.ForeignKeyConstraint(['network_id'], ['networks.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('network_id', 'dhcp_agent_id'))
def downgrade():
op.drop_table('networkdhcpagentbindings')
op.drop_table('ipavailabilityranges')
op.drop_table('ipallocations')
op.drop_table('subnetroutes')
op.drop_table('ipallocationpools')
op.drop_table('dnsnameservers')
op.drop_table('subnets')
op.drop_table('ports')
op.drop_table('networks')


@@ -1,44 +0,0 @@
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add port timestamp revision
Revision ID: 238cf36dab26
Revises: juno
Create Date: 2014-11-27 17:04:05.835703
"""
# revision identifiers, used by Alembic.
revision = '238cf36dab26'
down_revision = 'juno'
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('ports', sa.Column('created_at', sa.DateTime(), nullable=True))
op.add_column('ports', sa.Column('updated_at', sa.DateTime(), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('ports', 'updated_at')
op.drop_column('ports', 'created_at')
### end Alembic commands ###


@@ -1,209 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sa
from sqlalchemy import orm
from neutron.common import constants
from neutron.db import model_base
from neutron.openstack.common import uuidutils
from neutron.openstack.common import timeutils
class HasTenant(object):
"""Tenant mixin, add to subclasses that have a tenant."""
# NOTE(jkoelker) tenant_id is just a free form string ;(
tenant_id = sa.Column(sa.String(255))
class HasId(object):
"""id mixin, add to subclasses that have an id."""
id = sa.Column(sa.String(36),
primary_key=True,
default=uuidutils.generate_uuid)
class HasStatusDescription(object):
"""Status with description mixin."""
status = sa.Column(sa.String(16), nullable=False)
status_description = sa.Column(sa.String(255))
class IPAvailabilityRange(model_base.BASEV2):
"""Internal representation of available IPs for Neutron subnets.
Allocation - first entry from the range will be allocated.
If the first entry is equal to the last entry then this row
will be deleted.
Recycling ips involves reading the IPAllocationPool and IPAllocation tables
and inserting ranges representing available ips. This happens after the
final allocation is pulled from this table and a new ip allocation is
requested. Any contiguous ranges of available ips will be inserted as a
single range.
"""
allocation_pool_id = sa.Column(sa.String(36),
sa.ForeignKey('ipallocationpools.id',
ondelete="CASCADE"),
nullable=False,
primary_key=True)
first_ip = sa.Column(sa.String(64), nullable=False, primary_key=True)
last_ip = sa.Column(sa.String(64), nullable=False, primary_key=True)
def __repr__(self):
return "%s - %s" % (self.first_ip, self.last_ip)
class IPAllocationPool(model_base.BASEV2, HasId):
"""Representation of an allocation pool in a Neutron subnet."""
subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id',
ondelete="CASCADE"),
nullable=True)
first_ip = sa.Column(sa.String(64), nullable=False)
last_ip = sa.Column(sa.String(64), nullable=False)
available_ranges = orm.relationship(IPAvailabilityRange,
backref='ipallocationpool',
lazy="joined",
cascade='all, delete-orphan')
def __repr__(self):
return "%s - %s" % (self.first_ip, self.last_ip)
class IPAllocation(model_base.BASEV2):
"""Internal representation of allocated IP addresses in a Neutron subnet.
"""
port_id = sa.Column(sa.String(36), sa.ForeignKey('ports.id',
ondelete="CASCADE"),
nullable=True)
ip_address = sa.Column(sa.String(64), nullable=False, primary_key=True)
subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id',
ondelete="CASCADE"),
nullable=False, primary_key=True)
network_id = sa.Column(sa.String(36), sa.ForeignKey("networks.id",
ondelete="CASCADE"),
nullable=False, primary_key=True)
class Route(object):
"""mixin of a route."""
destination = sa.Column(sa.String(64), nullable=False, primary_key=True)
nexthop = sa.Column(sa.String(64), nullable=False, primary_key=True)
class SubnetRoute(model_base.BASEV2, Route):
subnet_id = sa.Column(sa.String(36),
sa.ForeignKey('subnets.id',
ondelete="CASCADE"),
primary_key=True)
class Port(model_base.BASEV2, HasId, HasTenant):
"""Represents a port on a Neutron v2 network."""
name = sa.Column(sa.String(255))
network_id = sa.Column(sa.String(36), sa.ForeignKey("networks.id"),
nullable=False)
fixed_ips = orm.relationship(IPAllocation, backref='ports', lazy='joined')
mac_address = sa.Column(sa.String(32), nullable=False)
admin_state_up = sa.Column(sa.Boolean(), nullable=False)
status = sa.Column(sa.String(16), nullable=False)
device_id = sa.Column(sa.String(255), nullable=False)
device_owner = sa.Column(sa.String(255), nullable=False)
created_at = sa.Column(sa.DateTime, default=lambda: timeutils.utcnow())
updated_at = sa.Column(sa.DateTime, default=lambda: timeutils.utcnow(),
onupdate=lambda: timeutils.utcnow())
def __init__(self, id=None, tenant_id=None, name=None, network_id=None,
mac_address=None, admin_state_up=None, status=None,
device_id=None, device_owner=None, fixed_ips=None):
self.id = id
self.tenant_id = tenant_id
self.name = name
self.network_id = network_id
self.mac_address = mac_address
self.admin_state_up = admin_state_up
self.device_owner = device_owner
self.device_id = device_id
# Since this is a relationship only set it if one is passed in.
if fixed_ips:
self.fixed_ips = fixed_ips
# NOTE(arosen): status must be set last as an event is triggered on!
self.status = status
class DNSNameServer(model_base.BASEV2):
"""Internal representation of a DNS nameserver."""
address = sa.Column(sa.String(128), nullable=False, primary_key=True)
subnet_id = sa.Column(sa.String(36),
sa.ForeignKey('subnets.id',
ondelete="CASCADE"),
primary_key=True)
class Subnet(model_base.BASEV2, HasId, HasTenant):
"""Represents a neutron subnet.
When a subnet is created the first and last entries will be created. These
are used for the IP allocation.
"""
name = sa.Column(sa.String(255))
network_id = sa.Column(sa.String(36), sa.ForeignKey('networks.id'))
ip_version = sa.Column(sa.Integer, nullable=False)
cidr = sa.Column(sa.String(64), nullable=False)
gateway_ip = sa.Column(sa.String(64))
allocation_pools = orm.relationship(IPAllocationPool,
backref='subnet',
lazy="joined",
cascade='delete')
enable_dhcp = sa.Column(sa.Boolean())
dns_nameservers = orm.relationship(DNSNameServer,
backref='subnet',
cascade='all, delete, delete-orphan')
routes = orm.relationship(SubnetRoute,
backref='subnet',
cascade='all, delete, delete-orphan')
shared = sa.Column(sa.Boolean)
ipv6_ra_mode = sa.Column(sa.Enum(constants.IPV6_SLAAC,
constants.DHCPV6_STATEFUL,
constants.DHCPV6_STATELESS,
name='ipv6_ra_modes'), nullable=True)
ipv6_address_mode = sa.Column(sa.Enum(constants.IPV6_SLAAC,
constants.DHCPV6_STATEFUL,
constants.DHCPV6_STATELESS,
name='ipv6_address_modes'), nullable=True)
class Network(model_base.BASEV2, HasId, HasTenant):
"""Represents a v2 neutron network."""
name = sa.Column(sa.String(255))
ports = orm.relationship(Port, backref='networks')
subnets = orm.relationship(Subnet, backref='networks',
lazy="joined")
status = sa.Column(sa.String(16))
admin_state_up = sa.Column(sa.Boolean)
shared = sa.Column(sa.Boolean)
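The created_at/updated_at columns on Port above rely on SQLAlchemy's default/onupdate column hooks; their effect can be mimicked without SQLAlchemy (a behavioral sketch, not the actual model):

```python
from datetime import datetime, timezone

def utcnow():
    return datetime.now(timezone.utc)

class PortRecord(object):
    """Mimics the timestamp behavior of the Port model's columns."""
    def __init__(self):
        self.created_at = utcnow()        # default=...: set once, at INSERT time
        self.updated_at = self.created_at
    def update(self, **fields):
        for key, value in fields.items():
            setattr(self, key, value)
        self.updated_at = utcnow()        # onupdate=...: refreshed on each UPDATE

port = PortRecord()
port.update(status='ACTIVE')
```

This is what makes the 'changes_since' filter work: every write to a port bumps updated_at, so a query for updated_at >= T returns exactly the ports touched since T.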


@@ -1,769 +0,0 @@
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Handles database requests from other nova services."""
import copy
import itertools
from oslo import messaging
import six
from nova.api.ec2 import ec2utils
from nova import block_device
from nova.cells import rpcapi as cells_rpcapi
from nova.compute import api as compute_api
from nova.compute import rpcapi as compute_rpcapi
from nova.compute import task_states
from nova.compute import utils as compute_utils
from nova.compute import vm_states
from nova.conductor.tasks import live_migrate
from nova.db import base
from nova import exception
from nova.i18n import _
from nova import image
from nova import manager
from nova import network
from nova.network.security_group import openstack_driver
from nova import notifications
from nova import objects
from nova.objects import base as nova_object
from nova.openstack.common import excutils
from nova.openstack.common import jsonutils
from nova.openstack.common import log as logging
from nova.openstack.common import timeutils
from nova import quota
from nova.scheduler import client as scheduler_client
from nova.scheduler import driver as scheduler_driver
from nova.scheduler import utils as scheduler_utils
LOG = logging.getLogger(__name__)
# Instead of having a huge list of arguments to instance_update(), we just
# accept a dict of fields to update and use this whitelist to validate it.
allowed_updates = ['task_state', 'vm_state', 'expected_task_state',
'power_state', 'access_ip_v4', 'access_ip_v6',
'launched_at', 'terminated_at', 'host', 'node',
'memory_mb', 'vcpus', 'root_gb', 'ephemeral_gb',
'instance_type_id', 'root_device_name', 'launched_on',
'progress', 'vm_mode', 'default_ephemeral_device',
'default_swap_device', 'root_device_name',
'system_metadata', 'updated_at'
]
# Fields that we want to convert back into a datetime object.
datetime_fields = ['launched_at', 'terminated_at', 'updated_at']
class ConductorManager(manager.Manager):
"""Mission: Conduct things.
The methods in the base API for nova-conductor are various proxy operations
performed on behalf of the nova-compute service running on compute nodes.
Compute nodes are not allowed to directly access the database, so this set
of methods allows them to get specific work done without locally accessing
the database.
The nova-conductor service also exposes an API in the 'compute_task'
namespace. See the ComputeTaskManager class for details.
"""
target = messaging.Target(version='2.0')
def __init__(self, *args, **kwargs):
super(ConductorManager, self).__init__(service_name='conductor',
*args, **kwargs)
self.security_group_api = (
openstack_driver.get_openstack_security_group_driver())
self._network_api = None
self._compute_api = None
self.compute_task_mgr = ComputeTaskManager()
self.cells_rpcapi = cells_rpcapi.CellsAPI()
self.additional_endpoints.append(self.compute_task_mgr)
@property
def network_api(self):
# NOTE(danms): We need to instantiate our network_api on first use
# to avoid the circular dependency that exists between our init
# and network_api's
if self._network_api is None:
self._network_api = network.API()
return self._network_api
@property
def compute_api(self):
if self._compute_api is None:
self._compute_api = compute_api.API()
return self._compute_api
def ping(self, context, arg):
# NOTE(russellb) This method can be removed in 2.0 of this API. It is
# now a part of the base rpc API.
return jsonutils.to_primitive({'service': 'conductor', 'arg': arg})
@messaging.expected_exceptions(KeyError, ValueError,
exception.InvalidUUID,
exception.InstanceNotFound,
exception.UnexpectedTaskStateError)
def instance_update(self, context, instance_uuid,
updates, service):
for key, value in updates.iteritems():
if key not in allowed_updates:
LOG.error(_("Instance update attempted for "
"'%(key)s' on %(instance_uuid)s"),
{'key': key, 'instance_uuid': instance_uuid})
raise KeyError("unexpected update keyword '%s'" % key)
if key in datetime_fields and isinstance(value, six.string_types):
updates[key] = timeutils.parse_strtime(value)
old_ref, instance_ref = self.db.instance_update_and_get_original(
context, instance_uuid, updates)
notifications.send_update(context, old_ref, instance_ref, service)
return jsonutils.to_primitive(instance_ref)
@messaging.expected_exceptions(exception.InstanceNotFound)
def instance_get_by_uuid(self, context, instance_uuid,
columns_to_join):
return jsonutils.to_primitive(
self.db.instance_get_by_uuid(context, instance_uuid,
columns_to_join))
def instance_get_all_by_host(self, context, host, node,
columns_to_join):
if node is not None:
result = self.db.instance_get_all_by_host_and_node(
context.elevated(), host, node)
else:
result = self.db.instance_get_all_by_host(context.elevated(), host,
columns_to_join)
return jsonutils.to_primitive(result)
def migration_get_in_progress_by_host_and_node(self, context,
host, node):
migrations = self.db.migration_get_in_progress_by_host_and_node(
context, host, node)
return jsonutils.to_primitive(migrations)
@messaging.expected_exceptions(exception.AggregateHostExists)
def aggregate_host_add(self, context, aggregate, host):
host_ref = self.db.aggregate_host_add(context.elevated(),
aggregate['id'], host)
return jsonutils.to_primitive(host_ref)
@messaging.expected_exceptions(exception.AggregateHostNotFound)
def aggregate_host_delete(self, context, aggregate, host):
self.db.aggregate_host_delete(context.elevated(),
aggregate['id'], host)
def aggregate_metadata_get_by_host(self, context, host,
key='availability_zone'):
result = self.db.aggregate_metadata_get_by_host(context, host, key)
return jsonutils.to_primitive(result)
def bw_usage_update(self, context, uuid, mac, start_period,
bw_in, bw_out, last_ctr_in, last_ctr_out,
last_refreshed, update_cells):
if [bw_in, bw_out, last_ctr_in, last_ctr_out].count(None) != 4:
self.db.bw_usage_update(context, uuid, mac, start_period,
bw_in, bw_out, last_ctr_in, last_ctr_out,
last_refreshed,
update_cells=update_cells)
usage = self.db.bw_usage_get(context, uuid, start_period, mac)
return jsonutils.to_primitive(usage)
def provider_fw_rule_get_all(self, context):
rules = self.db.provider_fw_rule_get_all(context)
return jsonutils.to_primitive(rules)
# NOTE(danms): This can be removed in version 3.0 of the RPC API
def agent_build_get_by_triple(self, context, hypervisor, os, architecture):
info = self.db.agent_build_get_by_triple(context, hypervisor, os,
architecture)
return jsonutils.to_primitive(info)
def block_device_mapping_update_or_create(self, context, values, create):
if create is None:
bdm = self.db.block_device_mapping_update_or_create(context,
values)
elif create is True:
bdm = self.db.block_device_mapping_create(context, values)
else:
bdm = self.db.block_device_mapping_update(context,
values['id'],
values)
bdm_obj = objects.BlockDeviceMapping._from_db_object(
context, objects.BlockDeviceMapping(), bdm)
self.cells_rpcapi.bdm_update_or_create_at_top(context, bdm_obj,
create=create)
def block_device_mapping_get_all_by_instance(self, context, instance,
legacy):
bdms = self.db.block_device_mapping_get_all_by_instance(
context, instance['uuid'])
if legacy:
bdms = block_device.legacy_mapping(bdms)
return jsonutils.to_primitive(bdms)
def instance_get_all_by_filters(self, context, filters, sort_key,
sort_dir, columns_to_join,
use_slave):
result = self.db.instance_get_all_by_filters(
context, filters, sort_key, sort_dir,
columns_to_join=columns_to_join, use_slave=use_slave)
return jsonutils.to_primitive(result)
def instance_get_active_by_window(self, context, begin, end,
project_id, host):
# Unused, but cannot remove until major RPC version bump
result = self.db.instance_get_active_by_window(context, begin, end,
project_id, host)
return jsonutils.to_primitive(result)
def instance_get_active_by_window_joined(self, context, begin, end,
project_id, host):
result = self.db.instance_get_active_by_window_joined(
context, begin, end, project_id, host)
return jsonutils.to_primitive(result)
def instance_destroy(self, context, instance):
result = self.db.instance_destroy(context, instance['uuid'])
return jsonutils.to_primitive(result)
def instance_fault_create(self, context, values):
result = self.db.instance_fault_create(context, values)
return jsonutils.to_primitive(result)
# NOTE(kerrin): The last_refreshed argument is unused by this method
# and can be removed in v3.0 of the RPC API.
def vol_usage_update(self, context, vol_id, rd_req, rd_bytes, wr_req,
wr_bytes, instance, last_refreshed, update_totals):
vol_usage = self.db.vol_usage_update(context, vol_id,
rd_req, rd_bytes,
wr_req, wr_bytes,
instance['uuid'],
instance['project_id'],
instance['user_id'],
instance['availability_zone'],
update_totals)
# We have just updated the database, so send the notification now
self.notifier.info(context, 'volume.usage',
compute_utils.usage_volume_info(vol_usage))
@messaging.expected_exceptions(exception.ComputeHostNotFound,
exception.HostBinaryNotFound)
def service_get_all_by(self, context, topic, host, binary):
if not any((topic, host, binary)):
result = self.db.service_get_all(context)
elif all((topic, host)):
if topic == 'compute':
result = self.db.service_get_by_compute_host(context, host)
# FIXME(comstud) Potentially remove this on bump to v3.0
result = [result]
else:
result = self.db.service_get_by_host_and_topic(context,
host, topic)
elif all((host, binary)):
result = self.db.service_get_by_args(context, host, binary)
elif topic:
result = self.db.service_get_all_by_topic(context, topic)
elif host:
result = self.db.service_get_all_by_host(context, host)
return jsonutils.to_primitive(result)
@messaging.expected_exceptions(exception.InstanceActionNotFound)
def action_event_start(self, context, values):
evt = self.db.action_event_start(context, values)
return jsonutils.to_primitive(evt)
@messaging.expected_exceptions(exception.InstanceActionNotFound,
exception.InstanceActionEventNotFound)
def action_event_finish(self, context, values):
evt = self.db.action_event_finish(context, values)
return jsonutils.to_primitive(evt)
def service_create(self, context, values):
svc = self.db.service_create(context, values)
return jsonutils.to_primitive(svc)
@messaging.expected_exceptions(exception.ServiceNotFound)
def service_destroy(self, context, service_id):
self.db.service_destroy(context, service_id)
def compute_node_create(self, context, values):
result = self.db.compute_node_create(context, values)
return jsonutils.to_primitive(result)
def compute_node_update(self, context, node, values):
result = self.db.compute_node_update(context, node['id'], values)
return jsonutils.to_primitive(result)
def compute_node_delete(self, context, node):
result = self.db.compute_node_delete(context, node['id'])
return jsonutils.to_primitive(result)
@messaging.expected_exceptions(exception.ServiceNotFound)
def service_update(self, context, service, values):
svc = self.db.service_update(context, service['id'], values)
return jsonutils.to_primitive(svc)
def task_log_get(self, context, task_name, begin, end, host, state):
result = self.db.task_log_get(context, task_name, begin, end, host,
state)
return jsonutils.to_primitive(result)
def task_log_begin_task(self, context, task_name, begin, end, host,
task_items, message):
result = self.db.task_log_begin_task(context.elevated(), task_name,
begin, end, host, task_items,
message)
return jsonutils.to_primitive(result)
def task_log_end_task(self, context, task_name, begin, end, host,
errors, message):
result = self.db.task_log_end_task(context.elevated(), task_name,
begin, end, host, errors, message)
return jsonutils.to_primitive(result)
def notify_usage_exists(self, context, instance, current_period,
ignore_missing_network_data,
system_metadata, extra_usage_info):
compute_utils.notify_usage_exists(self.notifier, context, instance,
current_period,
ignore_missing_network_data,
system_metadata, extra_usage_info)
def security_groups_trigger_handler(self, context, event, args):
self.security_group_api.trigger_handler(event, context, *args)
def security_groups_trigger_members_refresh(self, context, group_ids):
self.security_group_api.trigger_members_refresh(context, group_ids)
def network_migrate_instance_start(self, context, instance, migration):
self.network_api.migrate_instance_start(context, instance, migration)
def network_migrate_instance_finish(self, context, instance, migration):
self.network_api.migrate_instance_finish(context, instance, migration)
def quota_commit(self, context, reservations, project_id=None,
user_id=None):
quota.QUOTAS.commit(context, reservations, project_id=project_id,
user_id=user_id)
def quota_rollback(self, context, reservations, project_id=None,
user_id=None):
quota.QUOTAS.rollback(context, reservations, project_id=project_id,
user_id=user_id)
def get_ec2_ids(self, context, instance):
ec2_ids = {}
ec2_ids['instance-id'] = ec2utils.id_to_ec2_inst_id(instance['uuid'])
ec2_ids['ami-id'] = ec2utils.glance_id_to_ec2_id(context,
instance['image_ref'])
for image_type in ['kernel', 'ramdisk']:
image_id = instance.get('%s_id' % image_type)
if image_id is not None:
ec2_image_type = ec2utils.image_type(image_type)
ec2_id = ec2utils.glance_id_to_ec2_id(context, image_id,
ec2_image_type)
ec2_ids['%s-id' % image_type] = ec2_id
return ec2_ids
def compute_unrescue(self, context, instance):
self.compute_api.unrescue(context, instance)
def _object_dispatch(self, target, method, context, args, kwargs):
"""Dispatch a call to an object method.
This ensures that object methods get called and any exception
that is raised gets wrapped in an ExpectedException for forwarding
back to the caller (without spamming the conductor logs).
"""
try:
# NOTE(danms): Keep the getattr inside the try block since
# a missing method is really a client problem
return getattr(target, method)(context, *args, **kwargs)
except Exception:
raise messaging.ExpectedException()
def object_class_action(self, context, objname, objmethod,
objver, args, kwargs):
"""Perform a classmethod action on an object."""
objclass = nova_object.NovaObject.obj_class_from_name(objname,
objver)
result = self._object_dispatch(objclass, objmethod, context,
args, kwargs)
# NOTE(danms): The RPC layer will convert to primitives for us,
# but in this case, we need to honor the version the client is
# asking for, so we do it before returning here.
return (result.obj_to_primitive(target_version=objver)
if isinstance(result, nova_object.NovaObject) else result)
def object_action(self, context, objinst, objmethod, args, kwargs):
"""Perform an action on an object."""
oldobj = objinst.obj_clone()
result = self._object_dispatch(objinst, objmethod, context,
args, kwargs)
updates = dict()
# NOTE(danms): Diff the object with the one passed to us and
# generate a list of changes to forward back
for name, field in objinst.fields.items():
if not objinst.obj_attr_is_set(name):
# Avoid demand-loading anything
continue
if (not oldobj.obj_attr_is_set(name) or
oldobj[name] != objinst[name]):
updates[name] = field.to_primitive(objinst, name,
objinst[name])
# This is safe since a field named this would conflict with the
# method anyway
updates['obj_what_changed'] = objinst.obj_what_changed()
return updates, result
def object_backport(self, context, objinst, target_version):
return objinst.obj_to_primitive(target_version=target_version)
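The `object_action` handler above runs a method on a conductor-side object and forwards back only the fields that changed, by diffing against a clone taken before the call. A minimal, self-contained sketch of that change-forwarding idea (hypothetical names, plain dict-based objects rather than the NovaObject field machinery):

```python
import copy

def dispatch_and_diff(obj, method, *args, **kwargs):
    # Snapshot the object's state before running the method.
    old = copy.deepcopy(obj.__dict__)
    result = getattr(obj, method)(*args, **kwargs)
    # Forward only the fields whose values changed during the call.
    updates = {k: v for k, v in obj.__dict__.items()
               if k not in old or old[k] != v}
    return updates, result

class Counter(object):
    def __init__(self):
        self.count = 0
        self.name = 'c'

    def bump(self, n):
        self.count += n
        return self.count

c = Counter()
updates, result = dispatch_and_diff(c, 'bump', 3)
# Only 'count' changed, so only it is forwarded back.
```

Nova's real implementation compares `obj_clone()` snapshots field by field through each typed field's `to_primitive`; the raw `__dict__` diff here is only illustrative.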
class ComputeTaskManager(base.Base):
"""Namespace for compute methods.
This class presents an rpc API for nova-conductor under the 'compute_task'
namespace. The methods here are compute operations that are invoked
by the API service. These methods see the operation to completion, which
may involve coordinating activities on multiple compute nodes.
"""
target = messaging.Target(namespace='compute_task', version='1.9')
def __init__(self):
super(ComputeTaskManager, self).__init__()
self.compute_rpcapi = compute_rpcapi.ComputeAPI()
self.image_api = image.API()
self.scheduler_client = scheduler_client.SchedulerClient()
@messaging.expected_exceptions(exception.NoValidHost,
exception.ComputeServiceUnavailable,
exception.InvalidHypervisorType,
exception.InvalidCPUInfo,
exception.UnableToMigrateToSelf,
exception.DestinationHypervisorTooOld,
exception.InvalidLocalStorage,
exception.InvalidSharedStorage,
exception.HypervisorUnavailable,
exception.InstanceNotRunning,
exception.MigrationPreCheckError)
def migrate_server(self, context, instance, scheduler_hint, live, rebuild,
flavor, block_migration, disk_over_commit, reservations=None):
if instance and not isinstance(instance, nova_object.NovaObject):
# NOTE(danms): Until v2 of the RPC API, we need to tolerate
# old-world instance objects here
attrs = ['metadata', 'system_metadata', 'info_cache',
'security_groups']
instance = objects.Instance._from_db_object(
context, objects.Instance(), instance,
expected_attrs=attrs)
if live and not rebuild and not flavor:
self._live_migrate(context, instance, scheduler_hint,
block_migration, disk_over_commit)
elif not live and not rebuild and flavor:
instance_uuid = instance['uuid']
with compute_utils.EventReporter(context, 'cold_migrate',
instance_uuid):
self._cold_migrate(context, instance, flavor,
scheduler_hint['filter_properties'],
reservations)
else:
raise NotImplementedError()
def _cold_migrate(self, context, instance, flavor, filter_properties,
reservations):
image_ref = instance.image_ref
image = compute_utils.get_image_metadata(
context, self.image_api, image_ref, instance)
request_spec = scheduler_utils.build_request_spec(
context, image, [instance], instance_type=flavor)
quotas = objects.Quotas.from_reservations(context,
reservations,
instance=instance)
try:
scheduler_utils.populate_retry(filter_properties, instance['uuid'])
hosts = self.scheduler_client.select_destinations(
context, request_spec, filter_properties)
host_state = hosts[0]
except exception.NoValidHost as ex:
vm_state = instance['vm_state']
if not vm_state:
vm_state = vm_states.ACTIVE
updates = {'vm_state': vm_state, 'task_state': None}
self._set_vm_state_and_notify(context, 'migrate_server',
updates, ex, request_spec)
quotas.rollback()
# if the flavor IDs match, it's migrate; otherwise resize
if flavor['id'] == instance['instance_type_id']:
msg = _("No valid host found for cold migrate")
else:
msg = _("No valid host found for resize")
raise exception.NoValidHost(reason=msg)
try:
scheduler_utils.populate_filter_properties(filter_properties,
host_state)
# context is not serializable
filter_properties.pop('context', None)
# TODO(timello): originally, instance_type in request_spec
# on compute.api.resize does not have 'extra_specs', so we
# remove it for now to keep tests backward compatibility.
request_spec['instance_type'].pop('extra_specs')
(host, node) = (host_state['host'], host_state['nodename'])
self.compute_rpcapi.prep_resize(
context, image, instance,
flavor, host,
reservations, request_spec=request_spec,
filter_properties=filter_properties, node=node)
except Exception as ex:
with excutils.save_and_reraise_exception():
updates = {'vm_state': instance['vm_state'],
'task_state': None}
self._set_vm_state_and_notify(context, 'migrate_server',
updates, ex, request_spec)
quotas.rollback()
def _set_vm_state_and_notify(self, context, method, updates, ex,
request_spec):
scheduler_utils.set_vm_state_and_notify(
context, 'compute_task', method, updates,
ex, request_spec, self.db)
def _live_migrate(self, context, instance, scheduler_hint,
block_migration, disk_over_commit):
destination = scheduler_hint.get("host")
try:
live_migrate.execute(context, instance, destination,
block_migration, disk_over_commit)
except (exception.NoValidHost,
exception.ComputeServiceUnavailable,
exception.InvalidHypervisorType,
exception.InvalidCPUInfo,
exception.UnableToMigrateToSelf,
exception.DestinationHypervisorTooOld,
exception.InvalidLocalStorage,
exception.InvalidSharedStorage,
exception.HypervisorUnavailable,
exception.InstanceNotRunning,
exception.MigrationPreCheckError) as ex:
with excutils.save_and_reraise_exception():
# TODO(johngarbutt) - eventually need instance actions here
request_spec = {'instance_properties': {
'uuid': instance['uuid'], },
}
scheduler_utils.set_vm_state_and_notify(context,
'compute_task', 'migrate_server',
dict(vm_state=instance['vm_state'],
task_state=None,
expected_task_state=task_states.MIGRATING,),
ex, request_spec, self.db)
except Exception as ex:
LOG.error(_('Migration of instance %(instance_id)s to host'
' %(dest)s unexpectedly failed.'),
{'instance_id': instance['uuid'], 'dest': destination},
exc_info=True)
raise exception.MigrationError(reason=ex)
def build_instances(self, context, instances, image, filter_properties,
admin_password, injected_files, requested_networks,
security_groups, block_device_mapping=None, legacy_bdm=True):
# TODO(ndipanov): Remove block_device_mapping and legacy_bdm in version
# 2.0 of the RPC API.
request_spec = scheduler_utils.build_request_spec(context, image,
instances)
# TODO(danms): Remove this in version 2.0 of the RPC API
if (requested_networks and
not isinstance(requested_networks,
objects.NetworkRequestList)):
requested_networks = objects.NetworkRequestList(
objects=[objects.NetworkRequest.from_tuple(t)
for t in requested_networks])
try:
# check retry policy. Rather ugly use of instances[0]...
# but if we've exceeded max retries... then we really only
# have a single instance.
scheduler_utils.populate_retry(filter_properties,
instances[0].uuid)
hosts = self.scheduler_client.select_destinations(context,
request_spec, filter_properties)
except Exception as exc:
for instance in instances:
scheduler_driver.handle_schedule_error(context, exc,
instance.uuid, request_spec)
return
for (instance, host) in itertools.izip(instances, hosts):
try:
instance.refresh()
except (exception.InstanceNotFound,
exception.InstanceInfoCacheNotFound):
LOG.debug('Instance deleted during build', instance=instance)
continue
local_filter_props = copy.deepcopy(filter_properties)
scheduler_utils.populate_filter_properties(local_filter_props,
host)
# The block_device_mapping passed from the api doesn't contain
# instance specific information
bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
context, instance.uuid)
# NOTE(lizm): convert host name to the cascading (cas) host name
host_name = host['host']
if "_" in host_name:
host_name = self._convert_host(host_name)
self.compute_rpcapi.build_and_run_instance(context,
instance=instance, host=host_name, image=image,
request_spec=request_spec,
filter_properties=local_filter_props,
admin_password=admin_password,
injected_files=injected_files,
requested_networks=requested_networks,
security_groups=security_groups,
block_device_mapping=bdms, node=host['nodename'],
limits=host['limits'])
def _convert_host(self, host):
# NOTE(lizm): strip the suffix to get the cascading (cas) host name,
# e.g. "lee_str" -> "lee"
return str(host.split("_")[0])
def _delete_image(self, context, image_id):
return self.image_api.delete(context, image_id)
def _schedule_instances(self, context, image, filter_properties,
*instances):
request_spec = scheduler_utils.build_request_spec(context, image,
instances)
hosts = self.scheduler_client.select_destinations(context,
request_spec, filter_properties)
return hosts
def unshelve_instance(self, context, instance):
sys_meta = instance.system_metadata
def safe_image_show(ctx, image_id):
if image_id:
return self.image_api.get(ctx, image_id)
if instance.vm_state == vm_states.SHELVED:
instance.task_state = task_states.POWERING_ON
instance.save(expected_task_state=task_states.UNSHELVING)
self.compute_rpcapi.start_instance(context, instance)
snapshot_id = sys_meta.get('shelved_image_id')
if snapshot_id:
self._delete_image(context, snapshot_id)
elif instance.vm_state == vm_states.SHELVED_OFFLOADED:
image_id = sys_meta.get('shelved_image_id')
with compute_utils.EventReporter(
context, 'get_image_info', instance.uuid):
try:
image = safe_image_show(context, image_id)
except exception.ImageNotFound:
instance.vm_state = vm_states.ERROR
instance.save()
reason = _('Unshelve attempted but the image %s '
'cannot be found.') % image_id
LOG.error(reason, instance=instance)
raise exception.UnshelveException(
instance_id=instance.uuid, reason=reason)
try:
with compute_utils.EventReporter(context, 'schedule_instances',
instance.uuid):
filter_properties = {}
hosts = self._schedule_instances(context, image,
filter_properties,
instance)
host_state = hosts[0]
scheduler_utils.populate_filter_properties(
filter_properties, host_state)
(host, node) = (host_state['host'], host_state['nodename'])
self.compute_rpcapi.unshelve_instance(
context, instance, host, image=image,
filter_properties=filter_properties, node=node)
except exception.NoValidHost:
instance.task_state = None
instance.save()
LOG.warning(_("No valid host found for unshelve instance"),
instance=instance)
return
else:
LOG.error(_('Unshelve attempted but vm_state not SHELVED or '
'SHELVED_OFFLOADED'), instance=instance)
instance.vm_state = vm_states.ERROR
instance.save()
return
for key in ['shelved_at', 'shelved_image_id', 'shelved_host']:
if key in sys_meta:
del(sys_meta[key])
instance.system_metadata = sys_meta
instance.save()
def rebuild_instance(self, context, instance, orig_image_ref, image_ref,
injected_files, new_pass, orig_sys_metadata,
bdms, recreate, on_shared_storage,
preserve_ephemeral=False, host=None):
with compute_utils.EventReporter(context, 'rebuild_server',
instance.uuid):
if not host:
# NOTE(lcostantino): Retrieve scheduler filters for the
# instance when the feature is available
filter_properties = {'ignore_hosts': [instance.host]}
request_spec = scheduler_utils.build_request_spec(context,
image_ref,
[instance])
try:
hosts = self.scheduler_client.select_destinations(context,
request_spec,
filter_properties)
host = hosts.pop(0)['host']
except exception.NoValidHost as ex:
with excutils.save_and_reraise_exception():
self._set_vm_state_and_notify(context,
'rebuild_server',
{'vm_state': instance.vm_state,
'task_state': None}, ex, request_spec)
LOG.warning(_("No valid host found for rebuild"),
instance=instance)
self.compute_rpcapi.rebuild_instance(context,
instance=instance,
new_pass=new_pass,
injected_files=injected_files,
image_ref=image_ref,
orig_image_ref=orig_image_ref,
orig_sys_metadata=orig_sys_metadata,
bdms=bdms,
recreate=recreate,
on_shared_storage=on_shared_storage,
preserve_ephemeral=preserve_ephemeral,
host=host)
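`build_instances` above pairs each requested instance with one scheduler-selected host and dispatches the builds independently, skipping any instance that was deleted mid-build. A hypothetical, self-contained sketch of that fan-out (stub scheduler and builder, not nova APIs):

```python
def dispatch_builds(instances, select_destinations, build_on_host):
    # The scheduler returns one host per requested instance.
    hosts = select_destinations(len(instances))
    dispatched = []
    for instance, host in zip(instances, hosts):
        try:
            build_on_host(instance, host)
        except LookupError:
            # Mirrors the InstanceNotFound skip: deleted during build.
            continue
        dispatched.append((instance, host))
    return dispatched

# Toy usage with a stub scheduler and builder.
def fake_scheduler(n):
    return ['host%d' % i for i in range(n)]

def fake_builder(instance, host):
    if instance == 'gone':
        raise LookupError(instance)

result = dispatch_builds(['a', 'gone', 'b'], fake_scheduler, fake_builder)
# result == [('a', 'host0'), ('b', 'host2')]
```

One failed or vanished instance does not abort the loop, which matches the `continue` in the real method.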


@@ -19,8 +19,8 @@ from oslo.config import cfg
#from heat.openstack.common import importutils
#from heat.openstack.common import log as logging
from neutron.openstack.common import importutils
from neutron.openstack.common import log as logging
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger(__name__)


@@ -16,12 +16,12 @@
# @author: Haojie Jia, Huawei
import hashlib
import os
import select
import signal
import socket
import sys
import time
import os
import socket
import select
from neutron import context as n_context
from neutron.common import constants as const
@@ -30,12 +30,18 @@ import eventlet
eventlet.monkey_patch()
import netaddr
from neutronclient.common import exceptions
from oslo_log import log as logging
import oslo_messaging
from oslo.config import cfg
from oslo.serialization import jsonutils
from oslo.utils import excutils
from oslo.utils import timeutils
from six import moves
from neutron.agent import l2population_rpc
from neutron.agent.linux import ip_lib
from neutron.agent.linux import ovs_lib
from neutron.agent.common import ovs_lib
from neutron.agent.linux import polling
from neutron.agent.linux import utils
from neutron.agent import rpc as agent_rpc
@@ -48,18 +54,12 @@ from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.common import utils as q_utils
from neutron import context
from neutron.openstack.common import log as logging
from neutron.openstack.common import loopingcall
from neutron.openstack.common import jsonutils
from neutron.plugins.common import constants as p_const
from neutron.plugins.l2_proxy.common import config # noqa
from neutron.plugins.l2_proxy.common import constants
from neutron.plugins.l2_proxy.agent import neutron_proxy_context
from neutron.plugins.l2_proxy.agent import clients
from neutron.openstack.common import timeutils
from neutronclient.common import exceptions
from neutron.openstack.common import excutils
LOG = logging.getLogger(__name__)
@@ -79,8 +79,9 @@ class QueryPortsInterface:
def __init__(self):
self.context = n_context.get_admin_context_without_session()
def _get_cascaded_neutron_client(self):
context = n_context.get_admin_context_without_session()
@classmethod
def _get_cascaded_neutron_client(cls):
admin_context = n_context.get_admin_context_without_session()
keystone_auth_url = cfg.CONF.AGENT.keystone_auth_url
kwargs = {'auth_token': None,
'username': cfg.CONF.AGENT.neutron_user_name,
@@ -88,36 +89,39 @@
'aws_creds': None,
'tenant': cfg.CONF.AGENT.neutron_tenant_name,
'auth_url': keystone_auth_url,
'roles': context.roles,
'is_admin': context.is_admin,
'roles': admin_context.roles,
'is_admin': admin_context.is_admin,
'region_name': cfg.CONF.AGENT.os_region_name}
reqCon = neutron_proxy_context.RequestContext(**kwargs)
openStackClients = clients.OpenStackClients(reqCon)
neutronClient = openStackClients.neutron()
return neutronClient
req_context = neutron_proxy_context.RequestContext(**kwargs)
openstack_clients = clients.OpenStackClients(req_context)
cls.cascaded_neutron_client = openstack_clients.neutron()
@classmethod
def _is_cascaded_neutron_client_ready(cls):
if cls.cascaded_neutron_client:
return True
else:
return False
def _show_port(self, port_id):
portResponse = None
if(not QueryPortsFromCascadedNeutron.cascaded_neutron_client):
QueryPortsFromCascadedNeutron.cascaded_neutron_client = \
if not self._is_cascaded_neutron_client_ready():
self._get_cascaded_neutron_client()
retry = 0
while(True):
while True:
try:
portResponse = QueryPortsFromCascadedNeutron.\
cascaded_neutron_client.show_port(port_id)
LOG.debug(_('show port, port_id=%s, Response:%s'), str(port_id),
str(portResponse))
return portResponse
port_response = self.cascaded_neutron_client.show_port(port_id)
LOG.debug(_('show port, port_id=%s, Response:%s'),
str(port_id), str(port_response))
return port_response
except exceptions.Unauthorized:
retry = retry + 1
if(retry <= 3):
QueryPortsFromCascadedNeutron.cascaded_neutron_client = \
self._get_cascaded_neutron_client()
retry += 1
if retry <= 3:
self._get_cascaded_neutron_client()
continue
else:
with excutils.save_and_reraise_exception():
LOG.error(_('ERR: Try 3 times,Unauthorized to list ports!'))
LOG.error(
_('ERR: Try 3 times, Unauthorized to list ports!'))
return None
except Exception:
with excutils.save_and_reraise_exception():
@@ -128,36 +132,34 @@
pagination_limit=None,
pagination_marker=None):
filters = {'status': 'ACTIVE'}
if(since_time):
if since_time:
filters['changes_since'] = since_time
if(pagination_limit):
if pagination_limit:
filters['limit'] = pagination_limit
filters['page_reverse'] = 'False'
if(pagination_marker):
if pagination_marker:
filters['marker'] = pagination_marker
portResponse = None
if(not QueryPortsFromCascadedNeutron.cascaded_neutron_client):
QueryPortsFromCascadedNeutron.cascaded_neutron_client = \
self._get_cascaded_neutron_client()
if not self._is_cascaded_neutron_client_ready():
self._get_cascaded_neutron_client()
retry = 0
while(True):
while True:
try:
portResponse = QueryPortsFromCascadedNeutron.\
cascaded_neutron_client.get('/ports', params=filters)
port_response = self.cascaded_neutron_client.get(
'/ports', params=filters)
LOG.debug(_('list ports, filters:%s, since_time:%s, limit=%s, '
'marker=%s, Response:%s'), str(filters),
str(since_time), str(pagination_limit),
str(pagination_marker), str(portResponse))
return portResponse
str(since_time), str(pagination_limit),
str(pagination_marker), str(port_response))
return port_response
except exceptions.Unauthorized:
retry = retry + 1
if(retry <= 3):
QueryPortsFromCascadedNeutron.cascaded_neutron_client = \
self._get_cascaded_neutron_client()
retry += 1
if retry <= 3:
self._get_cascaded_neutron_client()
continue
else:
with excutils.save_and_reraise_exception():
LOG.error(_('ERR: Try 3 times,Unauthorized to list ports!'))
LOG.error(
_('ERR: Try 3 times, Unauthorized to list ports!'))
return None
except Exception:
with excutils.save_and_reraise_exception():
@@ -174,84 +176,27 @@
else:
pagination_limit = cfg.CONF.AGENT.pagination_limit
first_page = self._list_ports(since_time, pagination_limit)
if(not first_page):
if not first_page:
return ports_info
ports_info['ports'].extend(first_page.get('ports', []))
ports_links_list = first_page.get('ports_links', [])
while(True):
while True:
last_port_id = None
current_page = None
for pl in ports_links_list:
if (pl.get('rel', None) == 'next'):
if pl.get('rel') == 'next':
port_count = len(ports_info['ports'])
last_port_id = ports_info['ports'][port_count - 1].get('id')
if(last_port_id):
last_port_id = ports_info['ports'][
port_count - 1].get('id')
if last_port_id:
current_page = self._list_ports(since_time,
pagination_limit,
last_port_id)
if(not current_page):
if not current_page:
return ports_info
ports_info['ports'].extend(current_page.get('ports', []))
ports_links_list = current_page.get('ports_links', [])
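The pagination loop above follows the `next` link by passing the id of the last returned port as the marker for the following request. A toy sketch of the same marker-based pattern against a fake paginated API (all names hypothetical, not neutronclient calls):

```python
def fetch_all(list_page, limit=2):
    """Collect every item by following marker-based pages."""
    items = []
    marker = None
    while True:
        page = list_page(limit=limit, marker=marker)
        items.extend(page['items'])
        # Stop when the server reports no 'next' page.
        if not any(link.get('rel') == 'next'
                   for link in page.get('links', [])):
            return items
        # Use the id of the last item seen as the next marker.
        marker = items[-1]['id']

# Fake paginated API over a fixed dataset, for illustration only.
DATA = [{'id': i} for i in range(5)]

def fake_list_page(limit, marker):
    start = 0 if marker is None else marker + 1
    chunk = DATA[start:start + limit]
    links = [{'rel': 'next'}] if start + limit < len(DATA) else []
    return {'items': chunk, 'links': links}

pages = fetch_all(fake_list_page)
```

The real method additionally threads `changes_since` and `page_reverse` filters through each request; only the marker-following loop is shown here.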
class QueryPortsFromNovaproxy(QueryPortsInterface):
ports_info = {'ports': {'add': [], 'del': []}}
def __init__(self):
self.context = n_context.get_admin_context_without_session()
self.sock_path = None
self.sock = None
def listen_and_recv_port_info(self, sock_path):
try:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
path = sock_path
if os.path.exists(path):
os.unlink(path)
sock.bind(path)
sock.listen(5)
while(True):
infds, outfds, errfds = select.select([sock,], [], [], 5)
if len(infds) != 0:
con, addr = sock.accept()
recv_data = con.recv(1024)
self.process_recv_data(recv_data)
except socket.error as e:
LOG.warn(_('Error while connecting to socket: %s'), e)
return {}
# con.close()
# sock.close()
def process_recv_data(self, data):
LOG.debug(_('process_recv_data begin! data:%s'), data)
data_dict = jsonutils.loads(data)
ports = data_dict.get('ports', None)
if(ports):
added_ports = ports.get('add', [])
for port_id in added_ports:
port_ret = self._show_port(port_id)
if port_ret and port_ret.get('port', None):
QueryPortsFromNovaproxy.ports_info['ports']['add']. \
append(port_ret.get('port'))
# removed_ports = ports.get('delete', [])
def get_update_net_port_info(self, since_time=None):
if(since_time):
ports_info = QueryPortsFromNovaproxy.ports_info['ports'].get('add', [])
QueryPortsFromNovaproxy.ports_info['ports']['add'] = []
else:
all_ports = self._get_ports_pagination()
ports_info = all_ports.get('ports', [])
return ports_info
class QueryPortsFromCascadedNeutron(QueryPortsInterface):
def __init__(self):
self.context = n_context.get_admin_context_without_session()
def get_update_net_port_info(self, since_time=None):
if since_time:
ports = self._get_ports_pagination(since_time)
@@ -259,10 +204,6 @@ class QueryPortsFromCascadedNeutron(QueryPortsInterface):
ports = self._get_ports_pagination()
return ports.get("ports", [])
# def get_update_port_info_since(self, since_time):
# ports = self._get_ports_pagination(since_time)
# return ports.get("ports", [])
class RemotePort:
@@ -271,7 +212,7 @@
self.port_name = port_name
self.mac = mac
self.binding_profile = binding_profile
if(ips is None):
if not ips:
self.ip = set()
else:
self.ip = set(ips)
@@ -283,7 +224,7 @@
self.port_id = port_id
self.cascaded_port_id = cascaded_port_id
self.mac = mac
if(ips is None):
if not ips:
self.ip = set()
else:
self.ip = set(ips)
@@ -312,22 +253,11 @@
self.segmentation_id))
class OVSPluginApi(agent_rpc.PluginApi,
dvr_rpc.DVRServerRpcApiMixin,
sg_rpc.SecurityGroupServerRpcApiMixin):
class OVSPluginApi(agent_rpc.PluginApi):
pass
class OVSSecurityGroupAgent(sg_rpc.SecurityGroupAgentRpcMixin):
def __init__(self, context, plugin_rpc, root_helper):
self.context = context
self.plugin_rpc = plugin_rpc
self.root_helper = root_helper
self.init_firewall(defer_refresh_firewall=True)
class OVSNeutronAgent(n_rpc.RpcCallback,
sg_rpc.SecurityGroupAgentRpcCallbackMixin,
class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
l2population_rpc.L2populationRpcCallBackTunnelMixin,
dvr_rpc.DVRAgentRpcCallbackMixin):
'''Implements OVS-based tunneling, VLANs and flat networks.
@@ -360,7 +290,8 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
# 1.0 Initial version
# 1.1 Support Security Group RPC
# 1.2 Support DVR (Distributed Virtual Router) RPC
RPC_API_VERSION = '1.2'
# RPC_API_VERSION = '1.2'
target = oslo_messaging.Target(version='1.2')
def __init__(self, integ_br, tun_br, local_ip,
bridge_mappings, root_helper,
@@ -422,13 +353,7 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
self.enable_distributed_routing},
'agent_type': q_const.AGENT_TYPE_OVS,
'start_flag': True}
if(cfg.CONF.AGENT.query_ports_mode == 'cascaded_neutron'):
self.query_ports_info_inter = QueryPortsFromCascadedNeutron()
elif(cfg.CONF.AGENT.query_ports_mode == 'nova_proxy'):
self.sock_path = cfg.CONF.AGENT.proxy_sock_path
self.query_ports_info_inter = QueryPortsFromNovaproxy()
eventlet.spawn_n(self.query_ports_info_inter.listen_and_recv_port_info,
self.sock_path)
self.query_ports_info_inter = QueryPortsInterface()
self.cascaded_port_info = {}
self.cascaded_host_map = {}
self.first_scan_flag = True
@@ -436,13 +361,13 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
# Keep track of int_br's device count for use by _report_state()
self.int_br_device_count = 0
self.int_br = ovs_lib.OVSBridge(integ_br, self.root_helper)
# self.setup_integration_br()
self.int_br = ovs_lib.OVSBridge(integ_br)
# self.setup_integration_br()
# Stores port update notifications for processing in main rpc loop
self.updated_ports = set()
self.setup_rpc()
self.bridge_mappings = bridge_mappings
# self.setup_physical_bridges(self.bridge_mappings)
# self.setup_physical_bridges(self.bridge_mappings)
self.local_vlan_map = {}
self.tun_br_ofports = {p_const.TYPE_GRE: {},
p_const.TYPE_VXLAN: {}}
@@ -463,12 +388,11 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
self.patch_int_ofport = constants.OFPORT_INVALID
self.patch_tun_ofport = constants.OFPORT_INVALID
# self.dvr_agent.setup_dvr_flows_on_integ_tun_br()
# self.dvr_agent.setup_dvr_flows_on_integ_tun_br()
# Security group agent support
self.sg_agent = OVSSecurityGroupAgent(self.context,
self.plugin_rpc,
root_helper)
self.sg_agent = sg_rpc.SecurityGroupAgentRpc(
self.context, self.sg_plugin_rpc, defer_refresh_firewall=True)
# Initialize iteration counter
self.iter_num = 0
self.run_daemon_loop = True
@@ -490,6 +414,11 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
self.agent_id = 'ovs-agent-%s' % cfg.CONF.host
self.topic = topics.AGENT
self.plugin_rpc = OVSPluginApi(topics.PLUGIN)
# Vega: adopt the change in community which replaces
# xxxRpcApiMixin with a standalone class xxxRpcApi
self.sg_plugin_rpc = sg_rpc.SecurityGroupServerRpcApi(topics.PLUGIN)
self.dvr_plugin_rpc = dvr_rpc.DVRServerRpcApi(topics.PLUGIN)
self.state_rpc = agent_rpc.PluginReportStateAPI(topics.PLUGIN)
# RPC network init
@@ -532,17 +461,17 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
return network_id
def network_delete(self, context, **kwargs):
LOG.debug(_("TRICIRCLE network_delete received"))
LOG.debug(_("Tricircle network_delete received"))
network_id = kwargs.get('network_id')
csd_network_name = self.get_csd_network_name(network_id)
network_ret = self.list_cascaded_network_by_name(csd_network_name)
if(network_ret and (network_ret.get('networks'))):
if network_ret and (network_ret.get('networks')):
cascaded_net = network_ret['networks'][0]
self.delete_cascaded_network_by_id(cascaded_net['id'])
else:
LOG.error('TRICIRCLE List cascaded network %s failed when '
LOG.error('Tricircle List cascaded network %s failed when '
'call network_delete!', csd_network_name)
LOG.debug(_("TRICIRCLE Network %s was deleted successfully."),
LOG.debug(_("Tricircle Network %s was deleted successfully."),
network_id)
def port_update(self, context, **kwargs):
@@ -559,10 +488,10 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
def _create_port(self, context, network_id, binding_profile, port_name,
mac_address, ips):
if(not network_id):
if not network_id:
LOG.error(_("No network id is specified, cannot create port"))
return
neutronClient = self.get_cascaded_neutron_client()
neutron_client = self.get_cascaded_neutron_client()
req_props = {'network_id': network_id,
'name': port_name,
'admin_state_up': True,
@@ -571,19 +500,19 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
'binding:profile': binding_profile,
'device_owner': 'compute:'
}
bodyResponse = neutronClient.create_port({'port': req_props})
LOG.debug(_('create port, Response:%s'), str(bodyResponse))
return bodyResponse
body_response = neutron_client.create_port({'port': req_props})
LOG.debug(_('create port, Response:%s'), str(body_response))
return body_response
def _destroy_port(self, context, port_id):
if(not port_id):
if not port_id:
LOG.error(_("No port id is specified, cannot destroy port"))
return
neutronClient = self.get_cascaded_neutron_client()
bodyResponse = neutronClient.delete_port(port_id)
LOG.debug(_('destroy port, Response:%s'), str(bodyResponse))
return bodyResponse
neutron_client = self.get_cascaded_neutron_client()
body_response = neutron_client.delete_port(port_id)
LOG.debug(_('destroy port, Response:%s'), str(body_response))
return body_response
def fdb_add(self, context, fdb_entries):
LOG.debug("fdb_add received")
@@ -601,12 +530,12 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
port_name = 'remote_port'
mac_ip_map = {}
for port in ports:
if(port == q_const.FLOODING_ENTRY):
if port == q_const.FLOODING_ENTRY:
continue
if(const.DEVICE_OWNER_DVR_INTERFACE in port[1]):
if const.DEVICE_OWNER_DVR_INTERFACE in port[1]:
return
ips = mac_ip_map.get(port[0])
if(ips):
if ips:
ips += port[2]
mac_ip_map[port[0]] = ips
else:
@@ -649,7 +578,7 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
for agent_ip, ports in agent_ports.items():
for port in ports:
local_p = lvm.vif_ports.pop(port[0], None)
if(local_p and local_p.port_id):
if local_p and local_p.port_id:
self.cascaded_port_info.pop(local_p.port_id, None)
continue
remote_p = lvm.remote_ports.pop(port[0], None)
@@ -660,7 +589,7 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
def add_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
'''TODO can not delete, by jiahaojie
if delete,it will raise TypeError:
Can't instantiate abstract class OVSNeutronAgent with abstract
Can't instantiate abstract class OVSNeutronAgent with abstract
methods add_fdb_flow, cleanup_tunnel_port, del_fdb_flow,
setup_entry_for_arp_reply, setup_tunnel_port '''
LOG.debug("add_fdb_flow received")
@@ -676,7 +605,7 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
def setup_entry_for_arp_reply(self, br, action, local_vid, mac_address,
ip_address):
'''TODO can not delete, by jiahaojie
if delete,it will raise TypeError:
if delete,it will raise TypeError:
Can't instantiate abstract class OVSNeutronAgent with abstract
methods add_fdb_flow, cleanup_tunnel_port, del_fdb_flow,
setup_entry_for_arp_reply, setup_tunnel_port '''
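The docstrings above note that these stub methods cannot be deleted, because the L2-population mixin the agent inherits from declares them abstract. A stdlib-only illustration of that constraint (class and method names here are hypothetical stand-ins, not the actual Neutron classes):

```python
import abc

class L2PopMixin(abc.ABC):
    """Stand-in for the mixin that declares the RPC callbacks abstract."""
    @abc.abstractmethod
    def add_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
        raise NotImplementedError

    @abc.abstractmethod
    def setup_entry_for_arp_reply(self, br, action, local_vid,
                                  mac_address, ip_address):
        raise NotImplementedError

class ProxyAgent(L2PopMixin):
    # The proxy agent does not program flows itself, but the overrides
    # must still exist or the class cannot be instantiated at all.
    def add_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
        pass

    def setup_entry_for_arp_reply(self, br, action, local_vid,
                                  mac_address, ip_address):
        pass

class BrokenAgent(L2PopMixin):
    # Missing overrides: instantiating this raises TypeError, which is
    # exactly the failure mode the TODO comments warn about.
    pass

agent = ProxyAgent()  # instantiates fine
try:
    BrokenAgent()
    raised = False
except TypeError:
    raised = True
```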
@@ -704,8 +633,7 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
LOG.error(_("No local VLAN available for net-id=%s"), net_uuid)
return
lvid = self.available_local_vlans.pop()
self.local_vlan_map[net_uuid] = LocalVLANMapping(
network_type,
self.local_vlan_map[net_uuid] = LocalVLANMapping(network_type,
physical_network,
segmentation_id,
cascaded_net_id)
@@ -764,10 +692,8 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
physical_network, segmentation_id,
cascaded_port_info['network_id'])
lvm = self.local_vlan_map[net_uuid]
lvm.vif_ports[cascaded_port_info['mac_address']] = \
LocalPort(port,
cascaded_port_info['id'],
cascaded_port_info['mac_address'])
lvm.vif_ports[cascaded_port_info['mac_address']] = LocalPort(
port, cascaded_port_info['id'], cascaded_port_info['mac_address'])
def get_port_id_from_profile(self, profile):
return profile.get('cascading_port_id', None)
@@ -779,29 +705,32 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
LOG.debug(_('analysis port: %s'), str(port))
profile = port['binding:profile']
cascading_port_id = self.get_port_id_from_profile(profile)
if(not cascading_port_id):
if not cascading_port_id:
continue
self.cascaded_port_info[cascading_port_id] = port
cur_ports.add(cascading_port_id)
return cur_ports
def scan_ports(self, registered_ports, updated_ports=None):
if(self.first_scan_flag):
if self.first_scan_flag:
ports_info = self.query_ports_info_inter.get_update_net_port_info()
self.first_scan_flag = False
# Vega: since query based on timestamp is not supported currently,
# comment the following line to always query all the ports.
# self.first_scan_flag = False
else:
pre_time = time.time() - self.polling_interval - 1
since_time = time.strftime("%Y-%m-%d %H:%M:%S",
time.gmtime(pre_time))
ports_info = self.query_ports_info_inter.get_update_net_port_info(
since_time)
since_time)
added_or_updated_ports = self.analysis_ports_info(ports_info)
cur_ports = set(self.cascaded_port_info.keys()) | added_or_updated_ports
cur_ports = set(
self.cascaded_port_info.keys()) | added_or_updated_ports
self.int_br_device_count = len(cur_ports)
port_info = {'current': cur_ports}
if updated_ports is None:
updated_ports = set()
#updated_ports.update(self.check_changed_vlans(registered_ports))
# updated_ports.update(self.check_changed_vlans(registered_ports))
if updated_ports:
# Some updated ports might have been removed in the
# meanwhile, and therefore should not be processed.
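`scan_ports` above falls back to an incremental query keyed by a UTC timestamp derived from the polling interval. A small sketch of that timestamp computation (the interval value is hypothetical; in the agent it comes from configuration):

```python
import time

polling_interval = 2  # hypothetical; read from agent config in the real code
pre_time = time.time() - polling_interval - 1
# Same "%Y-%m-%d %H:%M:%S" UTC format the agent passes as since_time
# to the cascaded Neutron port query.
since_time = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(pre_time))
```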
@@ -839,17 +768,17 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
def setup_tunnel_port(self, br, remote_ip, network_type):
'''TODO can not delete, by jiahaojie
if delete,it will raise TypeError:
Can't instantiate abstract class OVSNeutronAgent with abstract
methods add_fdb_flow, cleanup_tunnel_port, del_fdb_flow,
if delete,it will raise TypeError:
Can't instantiate abstract class OVSNeutronAgent with abstract
methods add_fdb_flow, cleanup_tunnel_port, del_fdb_flow,
setup_entry_for_arp_reply, setup_tunnel_port '''
LOG.debug("setup_tunnel_port is called!")
def cleanup_tunnel_port(self, br, tun_ofport, tunnel_type):
'''TODO can not delete, by jiahaojie
if delete,it will raise TypeError:
Can't instantiate abstract class OVSNeutronAgent with abstract
methods add_fdb_flow, cleanup_tunnel_port, del_fdb_flow,
if delete,it will raise TypeError:
Can't instantiate abstract class OVSNeutronAgent with abstract
methods add_fdb_flow, cleanup_tunnel_port, del_fdb_flow,
setup_entry_for_arp_reply, setup_tunnel_port '''
LOG.debug("cleanup_tunnel_port is called!")
@@ -863,26 +792,24 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
return details_ips_set == cascaded_ips_set
def get_cascading_neutron_client(self):
context = n_context.get_admin_context_without_session()
admin_context = n_context.get_admin_context_without_session()
keystone_auth_url = cfg.CONF.AGENT.cascading_auth_url
kwargs = {'auth_token': None,
'username': cfg.CONF.AGENT.cascading_user_name,
'password': cfg.CONF.AGENT.cascading_password,
'aws_creds': None,
'tenant': cfg.CONF.AGENT.cascading_tenant_name,
# 'tenant_id':'e8f280855dbe42a189eebb0f3ecb94bb', #context.values['tenant'],
'auth_url': keystone_auth_url,
'roles': context.roles,
'is_admin': context.is_admin,
'roles': admin_context.roles,
'is_admin': admin_context.is_admin,
'region_name': cfg.CONF.AGENT.cascading_os_region_name}
reqCon = neutron_proxy_context.RequestContext(**kwargs)
openStackClients = clients.OpenStackClients(reqCon)
neutronClient = openStackClients.neutron()
return neutronClient
req_context = neutron_proxy_context.RequestContext(**kwargs)
openstack_clients = clients.OpenStackClients(req_context)
return openstack_clients.neutron()
def update_cascading_port_profile(self, cascaded_host_ip,
cascaded_port_info, details):
if(not cascaded_host_ip):
if not cascaded_host_ip:
return
profile = {'host_ip': cascaded_host_ip,
'cascaded_net_id': {
@@ -893,10 +820,10 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
subnet_map = profile['cascaded_subnet_id']
for fi_ing in details['fixed_ips']:
for fi_ed in cascaded_port_info['fixed_ips']:
if (fi_ed['ip_address'] == fi_ing['ip_address']):
if fi_ed['ip_address'] == fi_ing['ip_address']:
subnet_map[fi_ing['subnet_id']] = {}
subnet_map[fi_ing['subnet_id']][cfg.CONF.host] = \
fi_ed['subnet_id']
subnet_map[fi_ing['subnet_id']][
cfg.CONF.host] = fi_ed['subnet_id']
break
neutron_client = self.get_cascading_neutron_client()
req_props = {"binding:profile": profile}
@@ -905,26 +832,26 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
LOG.debug(_('update compute port, Response:%s'), str(port_ret))
def get_cascaded_neutron_client(self):
context = n_context.get_admin_context_without_session()
keystone_auth_url = cfg.CONF.AGENT.keystone_auth_url
kwargs = {'auth_token': None,
'username': cfg.CONF.AGENT.neutron_user_name,
'password': cfg.CONF.AGENT.neutron_password,
'aws_creds': None,
'tenant': cfg.CONF.AGENT.neutron_tenant_name,
# 'tenant_id':'e8f280855dbe42a189eebb0f3ecb94bb', #context.values['tenant'],
'auth_url': keystone_auth_url,
'roles': context.roles,
'is_admin': context.is_admin,
'region_name': cfg.CONF.AGENT.os_region_name}
reqCon = neutron_proxy_context.RequestContext(**kwargs)
openStackClients = clients.OpenStackClients(reqCon)
neutronClient = openStackClients.neutron()
return neutronClient
return self.query_ports_info_inter.cascaded_neutron_client
# context = n_context.get_admin_context_without_session()
# keystone_auth_url = cfg.CONF.AGENT.keystone_auth_url
# kwargs = {'auth_token': None,
# 'username': cfg.CONF.AGENT.neutron_user_name,
# 'password': cfg.CONF.AGENT.neutron_password,
# 'aws_creds': None,
# 'tenant': cfg.CONF.AGENT.neutron_tenant_name,
# 'auth_url': keystone_auth_url,
# 'roles': context.roles,
# 'is_admin': context.is_admin,
# 'region_name': cfg.CONF.AGENT.os_region_name}
# reqCon = neutron_proxy_context.RequestContext(**kwargs)
# openStackClients = clients.OpenStackClients(reqCon)
# neutronClient = openStackClients.neutron()
# return neutronClient
def get_cascaded_host_ip(self, ed_host_id):
host_ip = self.cascaded_host_map.get(ed_host_id)
if(host_ip):
if host_ip:
return host_ip
neutron_client = self.get_cascaded_neutron_client()
agent_ret = neutron_client.list_agents(host=ed_host_id,
@@ -937,7 +864,7 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
# json.loads(agent_config)
configuration = agent_config
host_ip = configuration.get('tunneling_ip')
if(host_ip):
if host_ip:
self.cascaded_host_map[ed_host_id] = host_ip
return host_ip
@@ -957,9 +884,9 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
if 'port_id' in details:
cascaded_port_info = self.cascaded_port_info.get(device)
if(not self.compare_port_info(details, cascaded_port_info)):
LOG.info(_("Port %(device)s can not updated. "
"Because port info in cascading and cascaded layer"
if not self.compare_port_info(details, cascaded_port_info):
LOG.info(_("Port %(device)s cannot be updated because "
"port info in the cascading and cascaded layers "
"are different. Details: %(details)s"),
{'device': device, 'details': details})
skipped_devices.append(device)
@@ -978,7 +905,7 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
ovs_restarted)
# update cascading port, modify binding:profile to add host_ip
# and cascaded net_id/cascaded_subnet_id
if('compute' in details['device_owner']):
if 'compute' in details['device_owner']:
ed_host_id = cascaded_port_info['binding:host_id']
cascaded_host_ip = self.get_cascaded_host_ip(ed_host_id)
self.update_cascading_port_profile(cascaded_host_ip,
@@ -998,10 +925,6 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
self.plugin_rpc.update_device_down(
self.context, device, self.agent_id, cfg.CONF.host)
LOG.info(_("Configuration for device %s completed."), device)
# else:
# LOG.warn(_("Device %s not defined on plugin"), device)
# if (port and port.ofport != -1):
# self.port_dead(port)
return skipped_devices
def process_network_ports(self, port_info, ovs_restarted):
@@ -1051,13 +974,13 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
resync_a = True
if 'removed' in port_info:
start = time.time()
#resync_b = self.treat_devices_removed(port_info['removed'])
# resync_b = self.treat_devices_removed(port_info['removed'])
LOG.debug(_("process_network_ports - iteration:%(iter_num)d -"
"treat_devices_removed completed in %(elapsed).3f"),
{'iter_num': self.iter_num,
'elapsed': time.time() - start})
# If one of the above operations fails => resync with plugin
return (resync_a | resync_b)
return resync_a or resync_b
def get_ip_in_hex(self, ip_address):
try:
@@ -1072,14 +995,9 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
port_info.get('removed') or
port_info.get('updated'))
def rpc_loop(self, polling_manager=None):
# if not polling_manager:
# polling_manager = polling.AlwaysPoll()
sync = True
def rpc_loop(self):
ports = set()
updated_ports_copy = set()
ancillary_ports = set()
ovs_restarted = False
while self.run_daemon_loop:
start = time.time()
@@ -1090,14 +1008,6 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
'removed': 0}}
LOG.debug(_("Agent rpc_loop - iteration:%d started"),
self.iter_num)
# if sync:
# LOG.info(_("Agent out of sync with plugin!"))
# ports.clear()
# ancillary_ports.clear()
# sync = False
# polling_manager.force_polling()
# if self._agent_has_updates(polling_manager) or ovs_restarted:
if True:
try:
LOG.debug(_("Agent rpc_loop - iteration:%(iter_num)d - "
@@ -1111,7 +1021,6 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
updated_ports_copy = self.updated_ports
self.updated_ports = set()
reg_ports = (set() if ovs_restarted else ports)
#import pdb;pdb.set_trace()
port_info = self.scan_ports(reg_ports, updated_ports_copy)
LOG.debug(_("Agent rpc_loop - iteration:%(iter_num)d - "
"port information retrieved. "
@@ -1121,13 +1030,12 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
# Secure and wire/unwire VIFs and update their status
# on Neutron server
if (self._port_info_has_changes(port_info) or
self.sg_agent.firewall_refresh_needed() or
ovs_restarted):
self.sg_agent.firewall_refresh_needed() or
ovs_restarted):
LOG.debug(_("Starting to process devices in:%s"),
port_info)
# If treat devices fails - must resync with plugin
sync = self.process_network_ports(port_info,
ovs_restarted)
self.process_network_ports(port_info, ovs_restarted)
LOG.debug(_("Agent rpc_loop - iteration:%(iter_num)d -"
"ports processed. Elapsed:%(elapsed).3f"),
{'iter_num': self.iter_num,
@@ -1139,13 +1047,10 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
port_stats['regular']['removed'] = (
len(port_info.get('removed', [])))
ports = port_info['current']
# polling_manager.polling_completed()
except Exception:
LOG.exception(_("Error while processing VIF ports"))
# Put the ports back in self.updated_port
self.updated_ports |= updated_ports_copy
sync = True
# sleep till end of polling interval
elapsed = (time.time() - start)
@@ -1155,27 +1060,18 @@ class OVSNeutronAgent(n_rpc.RpcCallback,
{'iter_num': self.iter_num,
'port_stats': port_stats,
'elapsed': elapsed})
if (elapsed < self.polling_interval):
if elapsed < self.polling_interval:
time.sleep(self.polling_interval - elapsed)
else:
LOG.debug(_("Loop iteration exceeded interval "
"(%(polling_interval)s vs. %(elapsed)s)!"),
{'polling_interval': self.polling_interval,
'elapsed': elapsed})
self.iter_num = self.iter_num + 1
self.iter_num += 1
def daemon_loop(self):
# with polling.get_polling_manager(
# self.minimize_polling) as pm:
self.rpc_loop()
# with polling.get_polling_manager(
# self.minimize_polling,
# self.root_helper,
# self.ovsdb_monitor_respawn_interval) as pm:
#
# self.rpc_loop(polling_manager=pm)
def _handle_sigterm(self, signum, frame):
LOG.debug("Agent caught SIGTERM, quitting daemon loop.")
self.run_daemon_loop = False
@@ -1236,14 +1132,7 @@ def main():
LOG.error(_('%s Agent terminated!'), e)
sys.exit(1)
# is_xen_compute_host = 'rootwrap-xen-dom0' in agent_config['root_helper']
# if is_xen_compute_host:
# # Force ip_lib to always use the root helper to ensure that ip
# # commands target xen dom0 rather than domU.
# cfg.CONF.set_default('ip_lib_force_root', True)
agent = OVSNeutronAgent(**agent_config)
# signal.signal(signal.SIGTERM, agent._handle_sigterm)
# Start everything.
LOG.info(_("Agent initialized successfully, now running... "))


@@ -14,7 +14,7 @@
# under the License.
# @author: Haojie Jia, Huawei
from neutron.openstack.common import context
from oslo_context import context
from neutron.common import exceptions
import eventlet
@@ -23,10 +8,8 @@ from keystoneclient.v2_0 import client as kc
from keystoneclient.v3 import client as kc_v3
from oslo.config import cfg
#from heat.openstack.common import importutils
from neutron.openstack.common import importutils
#from heat.openstack.common import log as logging
from neutron.openstack.common import log as logging
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger(
'neutron.plugins.cascading_proxy_agent.keystoneclient')


@@ -17,16 +17,19 @@
from oslo.config import cfg
#from heat.openstack.common import local
from neutron.openstack.common import local
#from neutron.openstack.common import local
#from heat.common import exception
from neutron.common import exceptions
#from heat.common import wsgi
from neutron import wsgi
from neutron.openstack.common import context
#from neutron.openstack.common import context
from oslo_context import context
#from heat.openstack.common import importutils
from neutron.openstack.common import importutils
#from neutron.openstack.common import importutils
from oslo.utils import importutils
#from heat.openstack.common import uuidutils
from neutron.openstack.common import uuidutils
#from neutron.openstack.common import uuidutils
from oslo.utils import uuidutils
def generate_request_id():
@@ -69,14 +72,14 @@ class RequestContext(context.RequestContext):
self.roles = roles or []
self.region_name = region_name
self.owner_is_tenant = owner_is_tenant
if overwrite or not hasattr(local.store, 'context'):
self.update_store()
# if overwrite or not hasattr(local.store, 'context'):
# self.update_store()
self._session = None
self.trust_id = trust_id
self.trustor_user_id = trustor_user_id
def update_store(self):
local.store.context = self
# def update_store(self):
# local.store.context = self
def to_dict(self):
return {'auth_token': self.auth_token,


@@ -1,107 +0,0 @@
[ovs]
bridge_mappings = default:br-eth1,external:br-ex
integration_bridge = br-int
network_vlan_ranges = default:1:4094
tunnel_type = vxlan,gre
enable_tunneling = True
local_ip = LOCAL_IP
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,l2population
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan
# Example: type_drivers = flat,vlan,gre,vxlan
# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan
# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade
# (ListOpt) Ordered list of extension driver entrypoints
# to be loaded from the neutron.ml2.extension_drivers namespace.
# extension_drivers =
# Example: extension_drivers = anewextensiondriver
[ml2_type_flat]
flat_networks = external
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *
[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2
network_vlan_ranges = default:1:4094
[ml2_type_gre]
tunnel_id_ranges = 1:1000
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =
[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
vni_ranges = 4097:200000
# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1
[securitygroup]
#firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
firewall_driver=neutron.agent.firewall.NoopFirewallDriver
enable_security_group = True
enable_ipset = True
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True
[agent]
tunnel_types = vxlan, gre
l2_population = True
arp_responder = True
enable_distributed_routing = True
#configure added by j00209498
keystone_auth_url = http://CASCADING_CONTROL_IP:35357/v2.0
neutron_user_name = USER_NAME
neutron_password = USER_PWD
neutron_tenant_name = TENANT_NAME
os_region_name = CASCADED_REGION_NAME
cascading_os_region_name = CASCADING_REGION_NAME
cascading_auth_url = http://CASCADING_CONTROL_IP:35357/v2.0
cascading_user_name = USER_NAME
cascading_password = USER_PWD
cascading_tenant_name = TENANT_NAME

Some files were not shown because too many files have changed in this diff.