Pike sync

The following changes have been made to coordinate with the changes
made in Neutron for Pike:

* The remaining partial use of the Neutron context has been completely
migrated to neutron_lib's context.

* The patching of neutron.db.api.get_session() has been replaced with
patching of sqlalchemy.orm.session to add the notification_queue
attribute. This significantly reduces the complexity of the earlier
patching.
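
A condensed sketch of the new patching, matching the patch module changes
shown later in this diff (the notification push/discard listeners are
elided here):

    from sqlalchemy.orm import session as sql_session

    # The module can be loaded twice, so guard to patch only once and
    # to preserve the reference to the original __init__.
    if not hasattr(sql_session.Session, 'GBP_PATCHED'):
        orig_session_init = sql_session.Session.__init__

        def new_session_init(self, **kwargs):
            # Every new session carries a queue of deferred notifications.
            self.notification_queue = {}
            orig_session_init(self, **kwargs)

        sql_session.Session.__init__ = new_session_init
        sql_session.Session.GBP_PATCHED = True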

* Use of a top-level transaction in the GBP plugins:
with context.session.begin(subtransactions=True):
has been migrated to:
with db_api.context_manager.writer.using(context):
or:
with db_api.context_manager.reader.using(context):
as relevant (see the sketch after the next item).

* Calls to _make_resource_xxx_dict() in the GBP plugins have been moved
inside the transaction.
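
A condensed illustration of both changes, based on the create methods in
the group policy DB plugin below (the model constructor arguments are
elided):

    from neutron.db import api as db_api

    def create_policy_target(self, context, policy_target):
        pt = policy_target['policy_target']
        with db_api.context_manager.writer.using(context):
            pt_db = PolicyTarget(...)  # column values elided
            context.session.add(pt_db)
            # Previously called after the with-block; now the dict is
            # built while the transaction (and session) is still open:
            return self._make_policy_target_dict(pt_db)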

* The use of:
neutron.callbacks.events
neutron.callbacks.exceptions
neutron.callbacks.registry
has been replaced with:
neutron_lib.callbacks.events
neutron_lib.callbacks.exceptions
neutron_lib.callbacks.registry

* The use of:
neutron.api.v2.attributes.resource_xxx
neutron.extensions.extension_xxx
has been replaced with:
neutron_lib.api.definitions.resource_xxx
neutron_lib.api.definitions.extension_xxx
respectively.

* The use of:
neutron.db.db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs
has been replaced with:
neutron.db._resource_extend.resource_extend
(the latter is a decorator)
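
For example, the implicit subnetpool mixin (see its diff below) now
registers its dict-extend function with the decorator:

    from neutron.db import _resource_extend as resource_extend
    from neutron_lib.api.definitions import subnetpool as subnetpool_def

    @resource_extend.has_resource_extenders
    class ImplicitSubnetpoolMixin(object):
        """Mixin class for implicit subnetpool."""

        @staticmethod
        @resource_extend.extends([subnetpool_def.COLLECTION_NAME])
        def _extend_subnetpool_dict_implicit(subnetpool_res, subnetpool_db):
            # Copy the is_implicit flag from the DB row into the API dict.
            ...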

* The use of:
neutron.db.db_base_plugin_v2.NeutronDbPluginV2.register_model_query_hook()
has been replaced with:
neutron.db._model_query.register_hook()
(imported as: from neutron.db import _model_query as model_query).
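
The hooks are now registered as module-level functions rather than method
names, as in this condensed excerpt from the implicit subnetpool mixin
below:

    from neutron.db import _model_query as model_query
    from neutron.db import models_v2

    # _subnetpool_model_hook, _subnetpool_filter_hook and
    # _subnetpool_result_filter_hook are module-level functions
    # (see the diff below).
    model_query.register_hook(
        models_v2.SubnetPool,
        "implicit_subnetpool",
        query_hook=_subnetpool_model_hook,
        filter_hook=_subnetpool_filter_hook,
        result_filters=_subnetpool_result_filter_hook)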

* The use of:
neutron.db.segments_db.NetworkSegment
has been replaced with:
neutron.db.models.segment.NetworkSegment

* In the case of the Neutron ml2plus plugin (used by the APIC/AIM
solution), get_admin_context() has been patched to return an elevated
version of the context currently in use. This helps to preserve the
session and transaction semantics. Ideally, context.elevated() would be
called directly in all these places; however, the current context is not
available there, so retrieving the current context and elevating it is
wrapped in the patched get_admin_context() method.
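
A minimal sketch of the patched helper; it relies on the stack-walking
get_current_context() helper from gbpservice.common.utils (the exact
wrapper body shown here is illustrative, not the verbatim patch):

    from neutron_lib import context as n_context

    from gbpservice.common import utils as gbp_utils

    def get_admin_context():
        # Elevate the context already in use so that its session and
        # transaction are preserved; fall back to a fresh admin context
        # if no current context is found on the stack.
        current = gbp_utils.get_current_context()
        if current:
            return current.elevated()
        return n_context.get_admin_context()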

* In the components used by the APIC/AIM solution (including the ml2plus
and l3_plugin), the use of:
with context.session.begin(subtransactions=True):
has likewise been migrated to:
with db_api.context_manager.writer.using(context):
or:
with db_api.context_manager.reader.using(context):
as relevant.

* Patching of Neutron methods that is no longer relevant has been removed
from the gbpservice.neutron.extensions.patch module.

* The setup of the UTs has been fixed to load and reset configurations
appropriately. This helps to eliminate some failures when tests are run
in a non-deterministic order.

* The in-tree devstack plugin has been updated (the aim repo commit pin
needs to be reverted).

* Gate jobs have been updated as relevant (including fixes to the exercise
scripts and job configurations).

The associated repos, namely the client, UI, and automation, have also
been updated (the reference to the client's gerrit patch needs to be
updated once that patch has merged).

Change-Id: I11dd089effbf40cf104afd720dc40a9911dcf28d
Author: Sumit Naiksatam 2017-10-24 16:49:25 -07:00
parent e3fac66aa2
commit d649785c9e
40 changed files with 727 additions and 915 deletions

View File

@ -5,7 +5,7 @@ function prepare_nsx_policy {
NSXLIB_NAME='vmware-nsxlib'
GITDIR[$NSXLIB_NAME]=/opt/stack/vmware-nsxlib
GITREPO[$NSXLIB_NAME]=${NSXLIB_REPO:-${GIT_BASE}/openstack/vmware-nsxlib.git}
GITBRANCH[$NSXLIB_NAME]=${NSXLIB_BRANCH:-stable/ocata}
GITBRANCH[$NSXLIB_NAME]=${NSXLIB_BRANCH:-stable/pike}
if use_library_from_git $NSXLIB_NAME; then
git_clone_by_name $NSXLIB_NAME

View File

@ -44,10 +44,10 @@ if [[ $ENABLE_NFP = True ]]; then
# Make sure that your public interface is not attached to any bridge.
PUBLIC_INTERFACE=
enable_plugin neutron-fwaas http://git.openstack.org/openstack/neutron-fwaas stable/ocata
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/ocata
enable_plugin neutron https://github.com/openstack/neutron.git stable/ocata
enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas stable/ocata
enable_plugin neutron-fwaas http://git.openstack.org/openstack/neutron-fwaas stable/pike
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/pike
enable_plugin neutron https://github.com/openstack/neutron.git stable/pike
enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas stable/pike
enable_plugin octavia https://git.openstack.org/openstack/octavia
#enable_plugin barbican https://git.openstack.org/openstack/barbican master
#enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git master

View File

@ -31,7 +31,6 @@ GBPUI_BRANCH=${GBPUI_BRANCH:-master}
GBPHEAT_REPO=${GBPHEAT_REPO:-${GIT_BASE}/openstack/group-based-policy-automation.git}
GBPHEAT_BRANCH=${GBPHEAT_BRANCH:-master}
AIM_BRANCH=${AIM_BRANCH:-master}
APICML2_BRANCH=${APICML2_BRANCH:-master}
OPFLEX_BRANCH=${OPFLEX_BRANCH:-master}
APICAPI_BRANCH=${APICAPI_BRANCH:-master}

View File

@ -12,7 +12,6 @@
import sys
from neutron import context as old_context
from neutron_lib import context as n_context
from oslo_config import cfg
from oslo_log import log as logging
@ -25,23 +24,22 @@ LOG = logging.getLogger(__name__)
cfg.CONF.import_group('keystone_authtoken', 'keystonemiddleware.auth_token')
def get_current_context():
def get_obj_from_stack(cls):
i = 1
try:
while True:
for val in sys._getframe(i).f_locals.itervalues():
# REVISIT (Sumit); In Ocata, neutron is still
# using the neutron_lib context, hence we need
# to check for both. This should be changed in
# Pike to only check for the neutron_lib context.
if isinstance(val, n_context.Context) or (
isinstance(val, old_context.Context)):
if isinstance(val, cls):
return val
i = i + 1
except Exception:
return
def get_current_context():
return get_obj_from_stack(n_context.Context)
def get_current_session():
ctx = get_current_context()
if ctx:

View File

@ -11,13 +11,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron.callbacks import registry
from neutron.extensions import address_scope
from neutron.extensions import l3
from neutron.extensions import securitygroup as ext_sg
from neutron.notifiers import nova
from neutron.plugins.common import constants as pconst
from neutron import quota
from neutron_lib.callbacks import registry
from neutron_lib import constants as nl_const
from neutron_lib import exceptions as n_exc
from neutron_lib.plugins import directory

View File

@ -12,6 +12,7 @@
import netaddr
from neutron.db import api as db_api
from neutron.db import common_db_mixin
from neutron_lib.api import validators
from neutron_lib import constants
@ -1098,7 +1099,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_policy_target(self, context, policy_target):
pt = policy_target['policy_target']
tenant_id = self._get_tenant_id_for_create(context, pt)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pt_db = PolicyTarget(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=pt['name'], description=pt['description'],
@ -1107,19 +1108,19 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
status=pt.get('status'),
status_details=pt.get('status_details'))
context.session.add(pt_db)
return self._make_policy_target_dict(pt_db)
return self._make_policy_target_dict(pt_db)
@log.log_method_call
def update_policy_target(self, context, policy_target_id, policy_target):
pt = policy_target['policy_target']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pt_db = self._get_policy_target(context, policy_target_id)
pt_db.update(pt)
return self._make_policy_target_dict(pt_db)
return self._make_policy_target_dict(pt_db)
@log.log_method_call
def delete_policy_target(self, context, policy_target_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pt_db = self._get_policy_target(context, policy_target_id)
context.session.delete(pt_db)
@ -1150,7 +1151,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_policy_target_group(self, context, policy_target_group):
ptg = policy_target_group['policy_target_group']
tenant_id = self._get_tenant_id_for_create(context, ptg)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
if ptg['service_management']:
self._validate_service_management_ptg(context, tenant_id)
ptg_db = PolicyTargetGroup(
@ -1166,22 +1167,22 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
status_details=ptg.get('status_details'))
context.session.add(ptg_db)
self._process_policy_rule_sets_for_ptg(context, ptg_db, ptg)
return self._make_policy_target_group_dict(ptg_db)
return self._make_policy_target_group_dict(ptg_db)
@log.log_method_call
def update_policy_target_group(self, context, policy_target_group_id,
policy_target_group):
ptg = policy_target_group['policy_target_group']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
ptg_db = self._get_policy_target_group(
context, policy_target_group_id)
ptg = self._process_policy_rule_sets_for_ptg(context, ptg_db, ptg)
ptg_db.update(ptg)
return self._make_policy_target_group_dict(ptg_db)
return self._make_policy_target_group_dict(ptg_db)
@log.log_method_call
def delete_policy_target_group(self, context, policy_target_group_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
ptg_db = self._get_policy_target_group(
context, policy_target_group_id)
# REVISIT(rkukura): An exception should be raised here if
@ -1226,7 +1227,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
application_policy_group):
apg = application_policy_group['application_policy_group']
tenant_id = self._get_tenant_id_for_create(context, apg)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
apg_db = ApplicationPolicyGroup(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=apg['name'], description=apg['description'],
@ -1234,23 +1235,23 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
status=apg.get('status'),
status_details=apg.get('status_details'))
context.session.add(apg_db)
return self._make_application_policy_group_dict(apg_db)
return self._make_application_policy_group_dict(apg_db)
@log.log_method_call
def update_application_policy_group(self, context,
application_policy_group_id,
application_policy_group):
apg = application_policy_group['application_policy_group']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
apg_db = self._get_application_policy_group(
context, application_policy_group_id)
apg_db.update(apg)
return self._make_application_policy_group_dict(apg_db)
return self._make_application_policy_group_dict(apg_db)
@log.log_method_call
def delete_application_policy_group(self, context,
application_policy_group_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
apg_db = self._get_application_policy_group(
context, application_policy_group_id)
context.session.delete(apg_db)
@ -1284,7 +1285,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_l2_policy(self, context, l2_policy):
l2p = l2_policy['l2_policy']
tenant_id = self._get_tenant_id_for_create(context, l2p)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l2p_db = L2Policy(id=uuidutils.generate_uuid(),
tenant_id=tenant_id, name=l2p['name'],
description=l2p['description'],
@ -1295,19 +1296,19 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
status=l2p.get('status'),
status_details=l2p.get('status_details'))
context.session.add(l2p_db)
return self._make_l2_policy_dict(l2p_db)
return self._make_l2_policy_dict(l2p_db)
@log.log_method_call
def update_l2_policy(self, context, l2_policy_id, l2_policy):
l2p = l2_policy['l2_policy']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l2p_db = self._get_l2_policy(context, l2_policy_id)
l2p_db.update(l2p)
return self._make_l2_policy_dict(l2p_db)
return self._make_l2_policy_dict(l2p_db)
@log.log_method_call
def delete_l2_policy(self, context, l2_policy_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l2p_db = self._get_l2_policy(context, l2_policy_id)
# When delete_l2_policy is called implicitly (as a
# side effect of the last PTG deletion), the L2P's
@ -1350,7 +1351,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
self.validate_subnet_prefix_length(
l3p['ip_version'], l3p['subnet_prefix_length'],
l3p.get('ip_pool', None))
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l3p_db = L3Policy(
id=uuidutils.generate_uuid(),
tenant_id=tenant_id, name=l3p['name'],
@ -1365,12 +1366,12 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
self._set_ess_for_l3p(context, l3p_db,
l3p['external_segments'])
context.session.add(l3p_db)
return self._make_l3_policy_dict(l3p_db)
return self._make_l3_policy_dict(l3p_db)
@log.log_method_call
def update_l3_policy(self, context, l3_policy_id, l3_policy):
l3p = l3_policy['l3_policy']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l3p_db = self._get_l3_policy(context, l3_policy_id)
if 'subnet_prefix_length' in l3p:
self.validate_subnet_prefix_length(
@ -1381,11 +1382,11 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
l3p['external_segments'])
del l3p['external_segments']
l3p_db.update(l3p)
return self._make_l3_policy_dict(l3p_db)
return self._make_l3_policy_dict(l3p_db)
@log.log_method_call
def delete_l3_policy(self, context, l3_policy_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l3p_db = self._get_l3_policy(context, l3_policy_id)
if l3p_db.l2_policies:
raise gpolicy.L3PolicyInUse(l3_policy_id=l3_policy_id)
@ -1418,7 +1419,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_network_service_policy(self, context, network_service_policy):
nsp = network_service_policy['network_service_policy']
tenant_id = self._get_tenant_id_for_create(context, nsp)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
nsp_db = NetworkServicePolicy(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
name=nsp['name'],
@ -1430,25 +1431,25 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
context.session.add(nsp_db)
self._set_params_for_network_service_policy(
context, nsp_db, nsp)
return self._make_network_service_policy_dict(nsp_db)
return self._make_network_service_policy_dict(nsp_db)
@log.log_method_call
def update_network_service_policy(
self, context, network_service_policy_id, network_service_policy):
nsp = network_service_policy['network_service_policy']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
nsp_db = self._get_network_service_policy(
context, network_service_policy_id)
if 'network_service_params' in network_service_policy:
self._set_params_for_network_service_policy(
context, nsp_db, nsp)
nsp_db.update(nsp)
return self._make_network_service_policy_dict(nsp_db)
return self._make_network_service_policy_dict(nsp_db)
@log.log_method_call
def delete_network_service_policy(
self, context, network_service_policy_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
nsp_db = self._get_network_service_policy(
context, network_service_policy_id)
if nsp_db.policy_target_groups:
@ -1487,7 +1488,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
tenant_id = self._get_tenant_id_for_create(context, pc)
port_min, port_max = GroupPolicyDbPlugin._get_min_max_ports_from_range(
pc['port_range'])
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pc_db = PolicyClassifier(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
name=pc['name'],
@ -1501,13 +1502,13 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
status_details=
pc.get('status_details'))
context.session.add(pc_db)
return self._make_policy_classifier_dict(pc_db)
return self._make_policy_classifier_dict(pc_db)
@log.log_method_call
def update_policy_classifier(self, context, policy_classifier_id,
policy_classifier):
pc = policy_classifier['policy_classifier']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pc_db = self._get_policy_classifier(context, policy_classifier_id)
if 'port_range' in pc:
port_min, port_max = (GroupPolicyDbPlugin.
@ -1517,11 +1518,11 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
'port_range_max': port_max})
del pc['port_range']
pc_db.update(pc)
return self._make_policy_classifier_dict(pc_db)
return self._make_policy_classifier_dict(pc_db)
@log.log_method_call
def delete_policy_classifier(self, context, policy_classifier_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pc_db = self._get_policy_classifier(context, policy_classifier_id)
pc_ids = self._get_policy_classifier_rules(context,
policy_classifier_id)
@ -1558,7 +1559,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_policy_action(self, context, policy_action):
pa = policy_action['policy_action']
tenant_id = self._get_tenant_id_for_create(context, pa)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pa_db = PolicyAction(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
name=pa['name'],
@ -1570,19 +1571,19 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
status_details=
pa.get('status_details'))
context.session.add(pa_db)
return self._make_policy_action_dict(pa_db)
return self._make_policy_action_dict(pa_db)
@log.log_method_call
def update_policy_action(self, context, policy_action_id, policy_action):
pa = policy_action['policy_action']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pa_db = self._get_policy_action(context, policy_action_id)
pa_db.update(pa)
return self._make_policy_action_dict(pa_db)
return self._make_policy_action_dict(pa_db)
@log.log_method_call
def delete_policy_action(self, context, policy_action_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pa_db = self._get_policy_action(context, policy_action_id)
pa_ids = self._get_policy_action_rules(context, policy_action_id)
if pa_ids:
@ -1617,7 +1618,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_policy_rule(self, context, policy_rule):
pr = policy_rule['policy_rule']
tenant_id = self._get_tenant_id_for_create(context, pr)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pr_db = PolicyRule(id=uuidutils.generate_uuid(),
tenant_id=tenant_id, name=pr['name'],
description=pr['description'],
@ -1629,23 +1630,23 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
context.session.add(pr_db)
self._set_actions_for_rule(context, pr_db,
pr['policy_actions'])
return self._make_policy_rule_dict(pr_db)
return self._make_policy_rule_dict(pr_db)
@log.log_method_call
def update_policy_rule(self, context, policy_rule_id, policy_rule):
pr = policy_rule['policy_rule']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pr_db = self._get_policy_rule(context, policy_rule_id)
if 'policy_actions' in pr:
self._set_actions_for_rule(context, pr_db,
pr['policy_actions'])
del pr['policy_actions']
pr_db.update(pr)
return self._make_policy_rule_dict(pr_db)
return self._make_policy_rule_dict(pr_db)
@log.log_method_call
def delete_policy_rule(self, context, policy_rule_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pr_db = self._get_policy_rule(context, policy_rule_id)
prs_ids = self._get_policy_rule_policy_rule_sets(context,
policy_rule_id)
@ -1680,7 +1681,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_policy_rule_set(self, context, policy_rule_set):
prs = policy_rule_set['policy_rule_set']
tenant_id = self._get_tenant_id_for_create(context, prs)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
prs_db = PolicyRuleSet(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
name=prs['name'],
@ -1693,13 +1694,13 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
prs['policy_rules'])
self._set_children_for_policy_rule_set(
context, prs_db, prs['child_policy_rule_sets'])
return self._make_policy_rule_set_dict(prs_db)
return self._make_policy_rule_set_dict(prs_db)
@log.log_method_call
def update_policy_rule_set(self, context, policy_rule_set_id,
policy_rule_set):
prs = policy_rule_set['policy_rule_set']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
prs_db = self._get_policy_rule_set(context, policy_rule_set_id)
if 'policy_rules' in prs:
self._set_rules_for_policy_rule_set(
@ -1710,11 +1711,11 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
context, prs_db, prs['child_policy_rule_sets'])
del prs['child_policy_rule_sets']
prs_db.update(prs)
return self._make_policy_rule_set_dict(prs_db)
return self._make_policy_rule_set_dict(prs_db)
@log.log_method_call
def delete_policy_rule_set(self, context, policy_rule_set_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
prs_db = self._get_policy_rule_set(context, policy_rule_set_id)
prs_ids = (
self._get_ptgs_for_providing_policy_rule_set(
@ -1758,7 +1759,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_external_policy(self, context, external_policy):
ep = external_policy['external_policy']
tenant_id = self._get_tenant_id_for_create(context, ep)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
ep_db = ExternalPolicy(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=ep['name'], description=ep['description'],
@ -1770,13 +1771,13 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
self._set_ess_for_ep(context, ep_db,
ep['external_segments'])
self._process_policy_rule_sets_for_ep(context, ep_db, ep)
return self._make_external_policy_dict(ep_db)
return self._make_external_policy_dict(ep_db)
@log.log_method_call
def update_external_policy(self, context, external_policy_id,
external_policy):
ep = external_policy['external_policy']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
ep_db = self._get_external_policy(
context, external_policy_id)
if 'external_segments' in ep:
@ -1785,7 +1786,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
del ep['external_segments']
self._process_policy_rule_sets_for_ep(context, ep_db, ep)
ep_db.update(ep)
return self._make_external_policy_dict(ep_db)
return self._make_external_policy_dict(ep_db)
@log.log_method_call
def get_external_policies(self, context, filters=None, fields=None,
@ -1813,7 +1814,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
@log.log_method_call
def delete_external_policy(self, context, external_policy_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
ep_db = self._get_external_policy(
context, external_policy_id)
context.session.delete(ep_db)
@ -1822,7 +1823,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_external_segment(self, context, external_segment):
es = external_segment['external_segment']
tenant_id = self._get_tenant_id_for_create(context, es)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
es_db = ExternalSegment(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=es['name'], description=es['description'],
@ -1834,20 +1835,20 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
context.session.add(es_db)
if 'external_routes' in es:
self._process_segment_ers(context, es_db, es)
return self._make_external_segment_dict(es_db)
return self._make_external_segment_dict(es_db)
@log.log_method_call
def update_external_segment(self, context, external_segment_id,
external_segment):
es = external_segment['external_segment']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
es_db = self._get_external_segment(
context, external_segment_id)
if 'external_routes' in es:
self._process_segment_ers(context, es_db, es)
del es['external_routes']
es_db.update(es)
return self._make_external_segment_dict(es_db)
return self._make_external_segment_dict(es_db)
@log.log_method_call
def get_external_segments(self, context, filters=None, fields=None,
@ -1875,7 +1876,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
@log.log_method_call
def delete_external_segment(self, context, external_segment_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
es_db = self._get_external_segment(
context, external_segment_id)
context.session.delete(es_db)
@ -1884,7 +1885,7 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
def create_nat_pool(self, context, nat_pool):
np = nat_pool['nat_pool']
tenant_id = self._get_tenant_id_for_create(context, np)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
np_db = NATPool(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=np['name'], description=np['description'],
@ -1894,16 +1895,16 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
status=np.get('status'),
status_details=np.get('status_details'))
context.session.add(np_db)
return self._make_nat_pool_dict(np_db)
return self._make_nat_pool_dict(np_db)
@log.log_method_call
def update_nat_pool(self, context, nat_pool_id, nat_pool):
np = nat_pool['nat_pool']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
np_db = self._get_nat_pool(
context, nat_pool_id)
np_db.update(np)
return self._make_nat_pool_dict(np_db)
return self._make_nat_pool_dict(np_db)
@log.log_method_call
def get_nat_pools(self, context, filters=None, fields=None,
@ -1929,6 +1930,6 @@ class GroupPolicyDbPlugin(gpolicy.GroupPolicyPluginBase,
@log.log_method_call
def delete_nat_pool(self, context, nat_pool_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
np_db = self._get_nat_pool(context, nat_pool_id)
context.session.delete(np_db)

View File

@ -10,6 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron.db import api as db_api
from neutron.db import models_v2
from neutron_lib.db import model_base
from neutron_lib import exceptions as nexc
@ -427,7 +428,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
def create_policy_target(self, context, policy_target):
pt = policy_target['policy_target']
tenant_id = self._get_tenant_id_for_create(context, pt)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
self._validate_pt_port_exta_attributes(context, pt)
pt_db = PolicyTargetMapping(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
@ -438,8 +439,8 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
port_id=pt['port_id'],
cluster_id=pt['cluster_id'])
context.session.add(pt_db)
return self._make_policy_target_dict(
pt_db, port_attributes=pt.get('port_attributes', {}))
return self._make_policy_target_dict(
pt_db, port_attributes=pt.get('port_attributes', {}))
@log.log_method_call
def get_policy_targets_count(self, context, filters=None):
@ -463,7 +464,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
def create_policy_target_group(self, context, policy_target_group):
ptg = policy_target_group['policy_target_group']
tenant_id = self._get_tenant_id_for_create(context, ptg)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
if ptg['service_management']:
self._validate_service_management_ptg(context, tenant_id)
uuid = ptg.get('id')
@ -487,13 +488,13 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
)
ptg_db.subnets.append(assoc)
self._process_policy_rule_sets_for_ptg(context, ptg_db, ptg)
return self._make_policy_target_group_dict(ptg_db)
return self._make_policy_target_group_dict(ptg_db)
@log.log_method_call
def update_policy_target_group(self, context, policy_target_group_id,
policy_target_group):
ptg = policy_target_group['policy_target_group']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
ptg_db = self._get_policy_target_group(
context, policy_target_group_id)
self._process_policy_rule_sets_for_ptg(context, ptg_db, ptg)
@ -518,7 +519,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
# Don't update ptg_db.subnets with subnet IDs.
del ptg['subnets']
ptg_db.update(ptg)
return self._make_policy_target_group_dict(ptg_db)
return self._make_policy_target_group_dict(ptg_db)
@log.log_method_call
def get_policy_target_groups_count(self, context, filters=None):
@ -548,7 +549,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
def create_l2_policy(self, context, l2_policy):
l2p = l2_policy['l2_policy']
tenant_id = self._get_tenant_id_for_create(context, l2p)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l2p_db = L2PolicyMapping(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
name=l2p['name'],
@ -559,7 +560,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
'inject_default_route', True),
shared=l2p.get('shared', False))
context.session.add(l2p_db)
return self._make_l2_policy_dict(l2p_db)
return self._make_l2_policy_dict(l2p_db)
@log.log_method_call
def get_l2_policies(self, context, filters=None, fields=None,
@ -589,7 +590,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
self.validate_subnet_prefix_length(l3p['ip_version'],
l3p['subnet_prefix_length'],
l3p.get('ip_pool', None))
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l3p_db = L3PolicyMapping(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
name=l3p['name'],
@ -622,7 +623,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
self._set_ess_for_l3p(context, l3p_db,
l3p['external_segments'])
context.session.add(l3p_db)
return self._make_l3_policy_dict(l3p_db, ip_pool=l3p['ip_pool'])
return self._make_l3_policy_dict(l3p_db, ip_pool=l3p['ip_pool'])
@log.log_method_call
def update_l3_policy(self, context, l3_policy_id, l3_policy):
@ -631,7 +632,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
if 'address_scope_v4_id' in l3p or 'address_scope_v6_id' in l3p:
raise AddressScopeUpdateForL3PNotSupported()
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l3p_db = self._get_l3_policy(context, l3_policy_id)
self._update_subnetpools_for_l3_policy(context, l3_policy_id,
@ -671,13 +672,13 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
l3p['external_segments'])
del l3p['external_segments']
l3p_db.update(l3p)
return self._make_l3_policy_dict(l3p_db)
return self._make_l3_policy_dict(l3p_db)
@log.log_method_call
def create_external_segment(self, context, external_segment):
es = external_segment['external_segment']
tenant_id = self._get_tenant_id_for_create(context, es)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
es_db = ExternalSegmentMapping(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=es['name'], description=es['description'],
@ -688,7 +689,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
context.session.add(es_db)
if 'external_routes' in es:
self._process_segment_ers(context, es_db, es)
return self._make_external_segment_dict(es_db)
return self._make_external_segment_dict(es_db)
@log.log_method_call
def get_external_segments(self, context, filters=None, fields=None,
@ -711,7 +712,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
def create_nat_pool(self, context, nat_pool):
np = nat_pool['nat_pool']
tenant_id = self._get_tenant_id_for_create(context, np)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
np_db = NATPoolMapping(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=np['name'], description=np['description'],
@ -720,7 +721,7 @@ class GroupPolicyMappingDbPlugin(gpdb.GroupPolicyDbPlugin):
external_segment_id=np['external_segment_id'],
subnet_id=np.get('subnet_id'))
context.session.add(np_db)
return self._make_nat_pool_dict(np_db)
return self._make_nat_pool_dict(np_db)
@log.log_method_call
def get_nat_pools(self, context, filters=None, fields=None,

View File

@ -16,14 +16,32 @@ import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy import sql
from neutron.api.v2 import attributes as attr
from neutron.db import db_base_plugin_v2
from neutron.db import _model_query as model_query
from neutron.db import _resource_extend as resource_extend
from neutron.db import models_v2
from neutron_lib.api.definitions import subnetpool as subnetpool_def
from neutron_lib.api import validators
from neutron_lib.db import model_base
from neutron_lib import exceptions as n_exc
def _subnetpool_model_hook(context, original_model, query):
query = query.outerjoin(ImplicitSubnetpool,
(original_model.id == ImplicitSubnetpool.subnetpool_id))
return query
def _subnetpool_filter_hook(context, original_model, conditions):
return conditions
def _subnetpool_result_filter_hook(query, filters):
vals = filters and filters.get('is_implicit', [])
if not vals:
return query
return query.filter((ImplicitSubnetpool.is_implicit.in_(vals)))
class ImplicitSubnetpool(model_base.BASEV2):
__tablename__ = "implicit_subnetpools"
subnetpool_id = sa.Column(sa.String(36),
@ -38,9 +56,20 @@ class ImplicitSubnetpool(model_base.BASEV2):
lazy="joined", cascade="delete"))
@resource_extend.has_resource_extenders
class ImplicitSubnetpoolMixin(object):
"""Mixin class for implicit subnetpool."""
def __new__(cls, *args, **kwargs):
model_query.register_hook(
models_v2.SubnetPool,
"implicit_subnetpool",
query_hook=_subnetpool_model_hook,
filter_hook=_subnetpool_filter_hook,
result_filters=_subnetpool_result_filter_hook)
return super(ImplicitSubnetpoolMixin, cls).__new__(
cls, *args, **kwargs)
def get_implicit_subnetpool_id(self, context, tenant=None, ip_version="4"):
pool = self.get_implicit_subnetpool(context, tenant=tenant,
ip_version=ip_version)
@ -66,31 +95,9 @@ class ImplicitSubnetpoolMixin(object):
return (context.session.query(ImplicitSubnetpool).
filter_by(subnetpool_id=subnetpool_id)).first()
def _subnetpool_model_hook(self, context, original_model, query):
query = query.outerjoin(ImplicitSubnetpool,
(original_model.id ==
ImplicitSubnetpool.subnetpool_id))
return query
def _subnetpool_filter_hook(self, context, original_model, conditions):
return conditions
def _subnetpool_result_filter_hook(self, query, filters):
vals = filters and filters.get('is_implicit', [])
if not vals:
return query
return query.filter(
(ImplicitSubnetpool.is_implicit.in_(vals)))
db_base_plugin_v2.NeutronDbPluginV2.register_model_query_hook(
models_v2.SubnetPool,
"implicit_subnetpool",
'_subnetpool_model_hook',
'_subnetpool_filter_hook',
'_subnetpool_result_filter_hook')
def _extend_subnetpool_dict_implicit(self, subnetpool_res,
subnetpool_db):
@staticmethod
@resource_extend.extends([subnetpool_def.COLLECTION_NAME])
def _extend_subnetpool_dict_implicit(subnetpool_res, subnetpool_db):
try:
subnetpool_res["is_implicit"] = (
subnetpool_db.implicit[0].is_implicit)
@ -100,10 +107,6 @@ class ImplicitSubnetpoolMixin(object):
pass
return subnetpool_res
# Register dict extend functions for ports
db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(
attr.SUBNETPOOLS, ['_extend_subnetpool_dict_implicit'])
def update_implicit_subnetpool(self, context, subnetpool):
is_implicit = False
if validators.is_attr_set(subnetpool.get('is_implicit')):

View File

@ -12,6 +12,7 @@
import ast
from neutron.db import api as db_api
from neutron.db import common_db_mixin
from neutron.plugins.common import constants as pconst
from neutron_lib.db import model_base
@ -258,7 +259,7 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
def create_servicechain_node(self, context, servicechain_node):
node = servicechain_node['servicechain_node']
tenant_id = self._get_tenant_id_for_create(context, node)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
node_db = ServiceChainNode(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=node['name'], description=node['description'],
@ -268,13 +269,13 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
status=node.get('status'),
status_details=node.get('status_details'))
context.session.add(node_db)
return self._make_sc_node_dict(node_db)
return self._make_sc_node_dict(node_db)
@log.log_method_call
def update_servicechain_node(self, context, servicechain_node_id,
servicechain_node, set_params=False):
node = servicechain_node['servicechain_node']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
node_db = self._get_servicechain_node(context,
servicechain_node_id)
node_db.update(node)
@ -286,11 +287,11 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
self._process_nodes_for_spec(
context, spec_db, self._make_sc_spec_dict(spec_db),
set_params=set_params)
return self._make_sc_node_dict(node_db)
return self._make_sc_node_dict(node_db)
@log.log_method_call
def delete_servicechain_node(self, context, servicechain_node_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
node_db = self._get_servicechain_node(context,
servicechain_node_id)
if node_db.specs:
@ -425,7 +426,7 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
set_params=True):
spec = servicechain_spec['servicechain_spec']
tenant_id = self._get_tenant_id_for_create(context, spec)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
spec_db = ServiceChainSpec(id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
name=spec['name'],
@ -437,19 +438,19 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
self._process_nodes_for_spec(context, spec_db, spec,
set_params=set_params)
context.session.add(spec_db)
return self._make_sc_spec_dict(spec_db)
return self._make_sc_spec_dict(spec_db)
@log.log_method_call
def update_servicechain_spec(self, context, spec_id,
servicechain_spec, set_params=True):
spec = servicechain_spec['servicechain_spec']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
spec_db = self._get_servicechain_spec(context,
spec_id)
spec = self._process_nodes_for_spec(context, spec_db, spec,
set_params=set_params)
spec_db.update(spec)
return self._make_sc_spec_dict(spec_db)
return self._make_sc_spec_dict(spec_db)
@log.log_method_call
def delete_servicechain_spec(self, context, spec_id):
@ -457,7 +458,7 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
context, filters={"action_value": [spec_id]})
if policy_actions:
raise schain.ServiceChainSpecInUse(spec_id=spec_id)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
spec_db = self._get_servicechain_spec(context,
spec_id)
if spec_db.instances:
@ -492,7 +493,7 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
def create_servicechain_instance(self, context, servicechain_instance):
instance = servicechain_instance['servicechain_instance']
tenant_id = self._get_tenant_id_for_create(context, instance)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
if not instance.get('management_ptg_id'):
management_groups = (
self._grouppolicy_plugin.get_policy_target_groups(
@ -518,23 +519,23 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
status_details=instance.get('status_details'))
self._process_specs_for_instance(context, instance_db, instance)
context.session.add(instance_db)
return self._make_sc_instance_dict(instance_db)
return self._make_sc_instance_dict(instance_db)
@log.log_method_call
def update_servicechain_instance(self, context, servicechain_instance_id,
servicechain_instance):
instance = servicechain_instance['servicechain_instance']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
instance_db = self._get_servicechain_instance(
context, servicechain_instance_id)
instance = self._process_specs_for_instance(context, instance_db,
instance)
instance_db.update(instance)
return self._make_sc_instance_dict(instance_db)
return self._make_sc_instance_dict(instance_db)
@log.log_method_call
def delete_servicechain_instance(self, context, servicechain_instance_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
instance_db = self._get_servicechain_instance(
context, servicechain_instance_id)
context.session.delete(instance_db)
@ -571,7 +572,7 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
def create_service_profile(self, context, service_profile):
profile = service_profile['service_profile']
tenant_id = self._get_tenant_id_for_create(context, profile)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
profile_db = ServiceProfile(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
name=profile['name'], description=profile['description'],
@ -583,21 +584,21 @@ class ServiceChainDbPlugin(schain.ServiceChainPluginBase,
status=profile.get('status'),
status_details=profile.get('status_details'))
context.session.add(profile_db)
return self._make_service_profile_dict(profile_db)
return self._make_service_profile_dict(profile_db)
@log.log_method_call
def update_service_profile(self, context, service_profile_id,
service_profile):
profile = service_profile['service_profile']
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
profile_db = self._get_service_profile(context,
service_profile_id)
profile_db.update(profile)
return self._make_service_profile_dict(profile_db)
return self._make_service_profile_dict(profile_db)
@log.log_method_call
def delete_service_profile(self, context, service_profile_id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
profile_db = self._get_service_profile(context,
service_profile_id)
if profile_db.nodes:

View File

@ -13,9 +13,10 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api.v2 import attributes
from neutron.extensions import address_scope
from neutron_lib.api import converters as conv
from neutron_lib.api.definitions import address_scope as as_def
from neutron_lib.api.definitions import network as net_def
from neutron_lib.api.definitions import subnet as subnet_def
from neutron_lib.api import extensions
ALIAS = 'cisco-apic'
@ -103,11 +104,11 @@ ADDRESS_SCOPE_ATTRIBUTES = {
EXTENDED_ATTRIBUTES_2_0 = {
attributes.NETWORKS: dict(
net_def.COLLECTION_NAME: dict(
APIC_ATTRIBUTES.items() + EXT_NET_ATTRIBUTES.items()),
attributes.SUBNETS: dict(
subnet_def.COLLECTION_NAME: dict(
APIC_ATTRIBUTES.items() + EXT_SUBNET_ATTRIBUTES.items()),
address_scope.ADDRESS_SCOPES: dict(
as_def.COLLECTION_NAME: dict(
APIC_ATTRIBUTES.items() + ADDRESS_SCOPE_ATTRIBUTES.items())
}

View File

@ -10,14 +10,14 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api.v2 import attributes
from neutron_lib.api import converters as conv
from neutron_lib.api.definitions import subnetpool as subnetpool_def
from neutron_lib.api import extensions
from neutron_lib import constants
EXTENDED_ATTRIBUTES_2_0 = {
attributes.SUBNETPOOLS: {
subnetpool_def.COLLECTION_NAME: {
'is_implicit': {'allow_post': True, 'allow_put': True,
'default': False,
'convert_to': conv.convert_to_boolean,

View File

@ -10,38 +10,104 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron.db import api as db_api
from neutron_lib import context as lib_context
# REVISIT(Sumit): The neutron_lib context uses
# a neutron_lib version of db_api. In ocata this
# version of the db_api is different from the
# db_api in neutron, and does not work for GBP.
# Revisit for Pike.
lib_context.db_api = db_api
from neutron.callbacks import events
from neutron.callbacks import registry
from neutron.callbacks import resources
from neutron.db import address_scope_db
from neutron.db import common_db_mixin
from neutron.db import l3_db
from neutron.db.models import securitygroup as sg_models
from neutron.db import models_v2
from neutron.db import securitygroups_db
from neutron.extensions import address_scope as ext_address_scope
from neutron.extensions import securitygroup as ext_sg
from neutron.objects import subnetpool as subnetpool_obj
from neutron.plugins.ml2 import db as ml2_db
from neutron_lib.api import validators
from neutron_lib import exceptions as n_exc
from oslo_log import log
from oslo_utils import uuidutils
from sqlalchemy import event
from sqlalchemy.orm import exc
from sqlalchemy.orm import session as sql_session
LOG = log.getLogger(__name__)
PUSH_NOTIFICATIONS_METHOD = None
DISCARD_NOTIFICATIONS_METHOD = None
def gbp_after_transaction(session, transaction):
if transaction and not transaction._parent and (
not transaction.is_active and not transaction.nested):
if transaction in session.notification_queue:
# push the queued notifications only when the
# outermost transaction completes
PUSH_NOTIFICATIONS_METHOD(session, transaction)
def gbp_after_rollback(session):
# We discard all queued notifications if the transaction fails.
DISCARD_NOTIFICATIONS_METHOD(session)
# This module is loaded twice, once by way of imports,
# and once explicitly by Neutron's extension loading
# mechanism. We do the following to ensure that the
# patching happens only once and we preserve the reference
# to the original method.
if not hasattr(sql_session.Session, 'GBP_PATCHED'):
orig_session_init = getattr(sql_session.Session, '__init__')
def new_session_init(self, **kwargs):
self.notification_queue = {}
orig_session_init(self, **kwargs)
from gbpservice.network.neutronv2 import local_api
if local_api.QUEUE_OUT_OF_PROCESS_NOTIFICATIONS:
global PUSH_NOTIFICATIONS_METHOD
global DISCARD_NOTIFICATIONS_METHOD
PUSH_NOTIFICATIONS_METHOD = (
local_api.post_notifications_from_queue)
DISCARD_NOTIFICATIONS_METHOD = (
local_api.discard_notifications_after_rollback)
event.listen(self, "after_transaction_end",
gbp_after_transaction)
event.listen(self, "after_rollback",
gbp_after_rollback)
setattr(sql_session.Session, '__init__', new_session_init)
setattr(sql_session.Session, 'GBP_PATCHED', True)
import copy
from neutron.api.v2 import resource as neutron_resource
from neutron.quota import resource as quota_resource
from neutron_lib.plugins import directory
from gbpservice.common import utils as gbp_utils
if not hasattr(quota_resource, 'GBP_PATCHED'):
orig_count_resource = quota_resource._count_resource
def new_count_resource(*kwargs):
request = gbp_utils.get_obj_from_stack(neutron_resource.Request)
orig_plugins = directory._get_plugin_directory()._plugins
if request and request.environ['PATH_INFO'] == (
'/servicechain/service_profiles.json'):
new_plugins = copy.copy(directory._get_plugin_directory()._plugins)
# The service_profile resource is supported by the FLAVORS
# plugin as well as the SERVICECHAIN plugin. At this point
# we know that we are dealing with the service_profile from
# SERVICECHAIN, and since the original implementation of the
# count_resource will think of service_profile from FLAVORS
# (in the sorted order of plugins, FLAVORS precedes SERVICECHAIN)
# we temporarily remove the FLAVORS plugin reference from the
# plugins directory.
new_plugins.pop('FLAVORS')
directory._get_plugin_directory()._plugins = new_plugins
count_resource = orig_count_resource(*kwargs)
directory._get_plugin_directory()._plugins = orig_plugins
return count_resource
quota_resource._count_resource = new_count_resource
quota_resource.GBP_PATCHED = True
# REVISIT(ivar): Monkey patch to allow explicit router_id to be set in Neutron
@ -99,244 +165,6 @@ securitygroups_db.SecurityGroupDbMixin._get_security_groups_on_port = (
_get_security_groups_on_port)
# REVISIT(kent): Neutron doesn't pass the remote_group_id while creating the
# ingress rule for the default SG. It also doesn't pass the newly created SG
# for the PRECOMMIT_CREATE event. Note that we should remove this in Pike as
# upstream has fixed the bug there
def create_security_group(self, context, security_group, default_sg=False):
"""Create security group.
If default_sg is true that means we are a default security group for
a given tenant if it does not exist.
"""
s = security_group['security_group']
kwargs = {
'context': context,
'security_group': s,
'is_default': default_sg,
}
self._registry_notify(resources.SECURITY_GROUP, events.BEFORE_CREATE,
exc_cls=ext_sg.SecurityGroupConflict, **kwargs)
tenant_id = s['tenant_id']
if not default_sg:
self._ensure_default_security_group(context, tenant_id)
else:
existing_def_sg_id = self._get_default_sg_id(context, tenant_id)
if existing_def_sg_id is not None:
# default already exists, return it
return self.get_security_group(context, existing_def_sg_id)
with db_api.autonested_transaction(context.session):
security_group_db = sg_models.SecurityGroup(id=s.get('id') or (
uuidutils.generate_uuid()),
description=s['description'],
tenant_id=tenant_id,
name=s['name'])
context.session.add(security_group_db)
if default_sg:
context.session.add(sg_models.DefaultSecurityGroup(
security_group=security_group_db,
tenant_id=security_group_db['tenant_id']))
for ethertype in ext_sg.sg_supported_ethertypes:
if default_sg:
# Allow intercommunication
ingress_rule = sg_models.SecurityGroupRule(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
security_group=security_group_db,
direction='ingress',
ethertype=ethertype,
remote_group_id=security_group_db.id,
source_group=security_group_db)
context.session.add(ingress_rule)
egress_rule = sg_models.SecurityGroupRule(
id=uuidutils.generate_uuid(), tenant_id=tenant_id,
security_group=security_group_db,
direction='egress',
ethertype=ethertype)
context.session.add(egress_rule)
secgroup_dict = self._make_security_group_dict(security_group_db)
kwargs['security_group'] = secgroup_dict
self._registry_notify(resources.SECURITY_GROUP,
events.PRECOMMIT_CREATE,
exc_cls=ext_sg.SecurityGroupConflict,
**kwargs)
registry.notify(resources.SECURITY_GROUP, events.AFTER_CREATE, self,
**kwargs)
return secgroup_dict
securitygroups_db.SecurityGroupDbMixin.create_security_group = (
create_security_group)
# REVISIT(kent): Neutron doesn't pass the updated SG for the PRECOMMIT_UPDATE
# event. Note that we should remove this in Pike as upstream has fixed the bug
# there
def update_security_group(self, context, id, security_group):
s = security_group['security_group']
kwargs = {
'context': context,
'security_group_id': id,
'security_group': s,
}
self._registry_notify(resources.SECURITY_GROUP, events.BEFORE_UPDATE,
exc_cls=ext_sg.SecurityGroupConflict, **kwargs)
with context.session.begin(subtransactions=True):
sg = self._get_security_group(context, id)
if sg['name'] == 'default' and 'name' in s:
raise ext_sg.SecurityGroupCannotUpdateDefault()
sg_dict = self._make_security_group_dict(sg)
kwargs['original_security_group'] = sg_dict
sg.update(s)
sg_dict = self._make_security_group_dict(sg)
kwargs['security_group'] = sg_dict
self._registry_notify(
resources.SECURITY_GROUP,
events.PRECOMMIT_UPDATE,
exc_cls=ext_sg.SecurityGroupConflict, **kwargs)
registry.notify(resources.SECURITY_GROUP, events.AFTER_UPDATE, self,
**kwargs)
return sg_dict
securitygroups_db.SecurityGroupDbMixin.update_security_group = (
update_security_group)
# REVISIT(kent): Neutron doesn't pass the SG rules for the PRECOMMIT_DELETE
# event. Note that we should remove this in Pike as upstream has fixed the bug
# there
def delete_security_group(self, context, id):
filters = {'security_group_id': [id]}
ports = self._get_port_security_group_bindings(context, filters)
if ports:
raise ext_sg.SecurityGroupInUse(id=id)
# confirm security group exists
sg = self._get_security_group(context, id)
if sg['name'] == 'default' and not context.is_admin:
raise ext_sg.SecurityGroupCannotRemoveDefault()
kwargs = {
'context': context,
'security_group_id': id,
'security_group': sg,
}
self._registry_notify(resources.SECURITY_GROUP, events.BEFORE_DELETE,
exc_cls=ext_sg.SecurityGroupInUse, id=id,
**kwargs)
with context.session.begin(subtransactions=True):
# pass security_group_rule_ids to ensure
# consistency with deleted rules
kwargs['security_group_rule_ids'] = [r['id'] for r in sg.rules]
kwargs['security_group'] = self._make_security_group_dict(sg)
self._registry_notify(resources.SECURITY_GROUP,
events.PRECOMMIT_DELETE,
exc_cls=ext_sg.SecurityGroupInUse, id=id,
**kwargs)
context.session.delete(sg)
kwargs.pop('security_group')
registry.notify(resources.SECURITY_GROUP, events.AFTER_DELETE, self,
**kwargs)
securitygroups_db.SecurityGroupDbMixin.delete_security_group = (
delete_security_group)
# REVISIT(kent): Neutron doesn't pass the newly created SG rule for the
# PRECOMMIT_CREATE event. Note that we should remove this in Pike as upstream
# has fixed the bug there
def _create_security_group_rule(self, context, security_group_rule,
validate=True):
if validate:
self._validate_security_group_rule(context, security_group_rule)
rule_dict = security_group_rule['security_group_rule']
kwargs = {
'context': context,
'security_group_rule': rule_dict
}
self._registry_notify(resources.SECURITY_GROUP_RULE,
events.BEFORE_CREATE,
exc_cls=ext_sg.SecurityGroupConflict, **kwargs)
with context.session.begin(subtransactions=True):
if validate:
self._check_for_duplicate_rules_in_db(context,
security_group_rule)
db = sg_models.SecurityGroupRule(
id=(rule_dict.get('id') or uuidutils.generate_uuid()),
tenant_id=rule_dict['tenant_id'],
security_group_id=rule_dict['security_group_id'],
direction=rule_dict['direction'],
remote_group_id=rule_dict.get('remote_group_id'),
ethertype=rule_dict['ethertype'],
protocol=rule_dict['protocol'],
port_range_min=rule_dict['port_range_min'],
port_range_max=rule_dict['port_range_max'],
remote_ip_prefix=rule_dict.get('remote_ip_prefix'),
description=rule_dict.get('description')
)
context.session.add(db)
res_rule_dict = self._make_security_group_rule_dict(db)
kwargs['security_group_rule'] = res_rule_dict
self._registry_notify(resources.SECURITY_GROUP_RULE,
events.PRECOMMIT_CREATE,
exc_cls=ext_sg.SecurityGroupConflict, **kwargs)
registry.notify(
resources.SECURITY_GROUP_RULE, events.AFTER_CREATE, self,
**kwargs)
return res_rule_dict
securitygroups_db.SecurityGroupDbMixin._create_security_group_rule = (
_create_security_group_rule)
# REVISIT(kent): Neutron doesn't pass the SG ID of the rule for the
# PRECOMMIT_DELETE event. Note that we should remove this in Pike as upstream
# has fixed the bug there
def delete_security_group_rule(self, context, id):
kwargs = {
'context': context,
'security_group_rule_id': id
}
self._registry_notify(resources.SECURITY_GROUP_RULE,
events.BEFORE_DELETE, id=id,
exc_cls=ext_sg.SecurityGroupRuleInUse, **kwargs)
with context.session.begin(subtransactions=True):
query = self._model_query(context,
sg_models.SecurityGroupRule).filter(
sg_models.SecurityGroupRule.id == id)
try:
# As there is a filter on a primary key it is not possible for
# MultipleResultsFound to be raised
sg_rule = query.one()
except exc.NoResultFound:
raise ext_sg.SecurityGroupRuleNotFound(id=id)
kwargs['security_group_id'] = sg_rule['security_group_id']
self._registry_notify(resources.SECURITY_GROUP_RULE,
events.PRECOMMIT_DELETE,
exc_cls=ext_sg.SecurityGroupRuleInUse, id=id,
**kwargs)
context.session.delete(sg_rule)
registry.notify(
resources.SECURITY_GROUP_RULE, events.AFTER_DELETE, self,
**kwargs)
securitygroups_db.SecurityGroupDbMixin.delete_security_group_rule = (
delete_security_group_rule)
def get_port_from_device_mac(context, device_mac):
LOG.debug("get_port_from_device_mac() called for mac %s", device_mac)
qry = context.session.query(models_v2.Port).filter_by(
@ -345,73 +173,6 @@ def get_port_from_device_mac(context, device_mac):
ml2_db.get_port_from_device_mac = get_port_from_device_mac
PUSH_NOTIFICATIONS_METHOD = None
DISCARD_NOTIFICATIONS_METHOD = None
def gbp_after_transaction(session, transaction):
if transaction and not transaction._parent and (
not transaction.is_active and not transaction.nested):
if transaction in session.notification_queue:
# push the queued notifications only when the
# outermost transaction completes
PUSH_NOTIFICATIONS_METHOD(session, transaction)
def gbp_after_rollback(session):
# We discard all queued notifications if the transaction fails.
DISCARD_NOTIFICATIONS_METHOD(session)
def pre_session():
from gbpservice.network.neutronv2 import local_api
# The following are declared as global so that they can be
# used in the inner functions that follow.
global PUSH_NOTIFICATIONS_METHOD
global DISCARD_NOTIFICATIONS_METHOD
PUSH_NOTIFICATIONS_METHOD = (
local_api.post_notifications_from_queue)
DISCARD_NOTIFICATIONS_METHOD = (
local_api.discard_notifications_after_rollback)
def post_session(new_session):
from gbpservice.network.neutronv2 import local_api
new_session.notification_queue = {}
if local_api.QUEUE_OUT_OF_PROCESS_NOTIFICATIONS:
event.listen(new_session, "after_transaction_end",
gbp_after_transaction)
event.listen(new_session, "after_rollback",
gbp_after_rollback)
def get_session(autocommit=True, expire_on_commit=False, use_slave=False):
pre_session()
# The following two lines are copied from the original
# implementation of db_api.get_session() and should be updated
# if the original implementation changes.
new_session = db_api.context_manager.get_legacy_facade().get_session(
autocommit=autocommit, expire_on_commit=expire_on_commit,
use_slave=use_slave)
post_session(new_session)
return new_session
def get_writer_session():
pre_session()
new_session = db_api.context_manager.writer.get_sessionmaker()()
post_session(new_session)
return new_session
db_api.get_session = get_session
db_api.get_writer_session = get_writer_session
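A standalone sketch of the semantics installed above, assuming SQLAlchemy 1.x with an autocommit session (as the Pike sessions above use) and an in-memory SQLite engine; the queue key and payload are made up:

import sqlalchemy as sa
from sqlalchemy import event as sa_event
from sqlalchemy import orm

engine = sa.create_engine('sqlite://')
session = orm.sessionmaker(bind=engine, autocommit=True)()
session.notification_queue = {}

def on_transaction_end(sess, transaction):
    # Same guard as gbp_after_transaction: only the outermost,
    # completed transaction triggers the push.
    if transaction and not transaction._parent and (
            not transaction.is_active and not transaction.nested):
        print('pushing %d queued notification(s)'
              % len(sess.notification_queue))
        sess.notification_queue.clear()

sa_event.listen(session, 'after_transaction_end', on_transaction_end)

with session.begin(subtransactions=True):
    with session.begin(subtransactions=True):
        session.notification_queue['fake-txn'] = ['port.update']
        # inner exit: transaction._parent is set, so nothing is pushed here
# outer exit: the push happens only now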
# REVISIT: This is temporary, the correct fix is to use
# the 'project_id' directly from the context rather than

View File

@ -10,10 +10,10 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron.extensions import portbindings
from neutron.plugins.ml2 import driver_api as api
from neutron.plugins.ml2.drivers.openvswitch.mech_driver import (
mech_openvswitch as base)
from neutron_lib.api.definitions import portbindings
from gbpservice.neutron.services.servicechain.plugins.ncp import plumber_base

View File

@ -46,7 +46,14 @@ class NetworkMapping(model_base.BASEV2):
primary_key=True)
network = orm.relationship(
models_v2.Network, lazy='joined',
# REVISIT: The following also had eager loading using
# lazy='joined', which was removed since it presumably causes
# cascaded delete to be triggered twice on the NetworkSegment
# object. The issue can be reproduced with the following UT:
# gbpservice.neutron.tests.unit.plugins.ml2plus.test_apic_aim.
# TestAimMapping.test_network_lifecycle
# Removing the eager loading will cause a performance hit.
models_v2.Network,
backref=orm.backref(
'aim_mapping', lazy='joined', uselist=False, cascade='delete'))

View File

@ -32,13 +32,14 @@ from neutron.db.models import address_scope as as_db
from neutron.db.models import allowed_address_pair as n_addr_pair_db
from neutron.db.models import l3 as l3_db
from neutron.db.models import securitygroup as sg_models
from neutron.db.models import segment as segments_model
from neutron.db import models_v2
from neutron.db import rbac_db_models
from neutron.db import segments_db
from neutron.extensions import portbindings
from neutron.plugins.common import constants as pconst
from neutron.plugins.ml2 import driver_api as api
from neutron.plugins.ml2 import models
from neutron_lib.api.definitions import portbindings
from neutron_lib.api.definitions import provider_net as provider
from neutron_lib import constants as n_constants
from neutron_lib import context as nctx
@ -308,8 +309,8 @@ class ApicMechanismDriver(api_plus.MechanismDriver,
# TODO(rkukura): Move the following to calls made from
# precommit methods so AIM Tenants, ApplicationProfiles, and
# Filters are [re]created whenever needed.
session = plugin_context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(plugin_context):
session = plugin_context.session
tenant_aname = self.name_mapper.project(session, project_id)
project_name = self.project_name_cache.get_project_name(project_id)
if project_name is None:
@ -466,7 +467,9 @@ class ApicMechanismDriver(api_plus.MechanismDriver,
dist_names = {}
aim_ctx = aim_context.AimContext(session)
mapping = network_db.aim_mapping
# REVISIT: Check and revert the following to:
# mapping = network_db.aim_mapping
mapping = self._get_network_mapping(session, network_db['id'])
if mapping:
bd = self._get_network_bd(mapping)
dist_names[cisco_apic.BD] = bd.dn
@ -2993,10 +2996,10 @@ class ApicMechanismDriver(api_plus.MechanismDriver,
def _get_non_opflex_segments_on_host(self, context, host):
session = context.session
segments = (session.query(segments_db.NetworkSegment)
segments = (session.query(segments_model.NetworkSegment)
.join(models.PortBindingLevel,
models.PortBindingLevel.segment_id ==
segments_db.NetworkSegment.id)
segments_model.NetworkSegment.id)
.filter(models.PortBindingLevel.host == host)
.all())
net_ids = set([])

View File

@ -40,18 +40,44 @@ def get_current_session():
return gbp_utils.get_current_session()
from neutron_lib import context as nlib_ctx
orig_get_admin_context = nlib_ctx.get_admin_context
def new_get_admin_context():
current_context = gbp_utils.get_current_context()
if not current_context:
return orig_get_admin_context()
else:
return current_context.elevated()
nlib_ctx.get_admin_context = new_get_admin_context
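Sketch of the intended effect (the function is hypothetical; the assertion states the intent, not a formal API guarantee):

from neutron_lib import context as nlib_ctx

def admin_work_sketch():
    # With the patch above: if a request context is currently tracked,
    # this is that context elevated, so its session/transaction state is
    # reused; otherwise it is a brand-new admin context.
    ctx = nlib_ctx.get_admin_context()
    assert ctx.is_admin
    return ctx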
from neutron.plugins.ml2 import ovo_rpc
# The following reduces the ERROR log level for a message
# which is seen when a port_update event is sent. The
# port_update is intentionally sent in the pre_commit
# phase by the apic_aim mechanism driver, but this is not
# what Neutron expects, hence it flags the call.
ovo_rpc.LOG.error = ovo_rpc.LOG.debug
# The Neutron code is instrumented to warn whenever AFTER_CREATE/UPDATE event
# notification handling is done within a transaction. With the combination of
# GBP plugin and aim_mapping policy driver this is expected to happen all the
# time. Hence we chose to suppress this warning. It can be turned on again by
# setting the following to True.
WARN_ON_SESSION_SEMANTIC_VIOLATION = False
from neutron.callbacks import registry
def new_is_session_semantic_violated(self, context, resource, event):
return
if not WARN_ON_SESSION_SEMANTIC_VIOLATION:
setattr(ovo_rpc._ObjectChangeHandler, '_is_session_semantic_violated',
new_is_session_semantic_violated)
from neutron_lib.callbacks import registry
from gbpservice.network.neutronv2 import local_api
@ -72,8 +98,8 @@ def notify(resource, event, trigger, **kwargs):
registry.notify = notify
from neutron.callbacks import events
from neutron.callbacks import exceptions
from neutron_lib.callbacks import events
from neutron_lib.callbacks import exceptions
from oslo_log import log as logging

View File

@ -20,12 +20,8 @@ from gbpservice.neutron import extensions as gbp_extensions
from gbpservice.neutron.extensions import patch # noqa
from gbpservice.neutron.plugins.ml2plus import patch_neutron # noqa
from neutron.api.v2 import attributes
from neutron.callbacks import events
from neutron.callbacks import registry
from neutron.callbacks import resources
from neutron.db import _resource_extend as resource_extend
from neutron.db import api as db_api
from neutron.db import db_base_plugin_v2
from neutron.db.models import securitygroup as securitygroups_db
from neutron.db import models_v2
from neutron.db import provisioning_blocks
@ -34,8 +30,16 @@ from neutron.plugins.ml2.common import exceptions as ml2_exc
from neutron.plugins.ml2 import managers as ml2_managers
from neutron.plugins.ml2 import plugin as ml2_plugin
from neutron.quota import resource_registry
from neutron_lib.api.definitions import address_scope as as_def
from neutron_lib.api.definitions import network as net_def
from neutron_lib.api.definitions import port as port_def
from neutron_lib.api.definitions import subnet as subnet_def
from neutron_lib.api.definitions import subnetpool as subnetpool_def
from neutron_lib.api import validators
from oslo_config import cfg
from neutron_lib.callbacks import events
from neutron_lib.callbacks import registry
from neutron_lib.callbacks import resources
from neutron_lib.plugins import directory
from oslo_log import log
from oslo_utils import excutils
@ -46,62 +50,7 @@ from gbpservice.neutron.plugins.ml2plus import managers
LOG = log.getLogger(__name__)
opts = [
cfg.BoolOpt('refresh_network_db_obj',
default=False,
help=_("Refresh the network DB object to correctly "
"reflect the most recent state of all its "
"attributes. This refresh will be performed "
"in the _ml2_md_extend_network_dict method "
"inside the ml2plus plugin. The refresh option "
"may have a significant performace impact "
"and should be avoided. Hence this configuration "
"is set to False by default.")),
cfg.BoolOpt('refresh_port_db_obj',
default=False,
help=_("Refresh the port DB object to correctly "
"reflect the most recent state of all its "
"attributes. This refresh will be performed "
"in the _ml2_md_extend_port_dict method "
"inside the ml2plus plugin. The refresh option "
"may have a significant performace impact "
"and should be avoided. Hence this configuration "
"is set to False by default.")),
cfg.BoolOpt('refresh_subnet_db_obj',
default=False,
help=_("Refresh the subnet DB object to correctly "
"reflect the most recent state of all its "
"attributes. This refresh will be performed "
"in the _ml2_md_extend_subnet_dict method "
"inside the ml2plus plugin. The refresh option "
"may have a significant performace impact "
"and should be avoided. Hence this configuration "
"is set to False by default.")),
cfg.BoolOpt('refresh_subnetpool_db_obj',
default=False,
help=_("Refresh the subnetpool DB object to correctly "
"reflect the most recent state of all its "
"attributes. This refresh will be performed "
"in the _ml2_md_extend_subnetpool_dict method "
"inside the ml2plus plugin. The refresh option "
"may have a significant performace impact "
"and should be avoided. Hence this configuration "
"is set to False by default.")),
cfg.BoolOpt('refresh_address_scope_db_obj',
default=False,
help=_("Refresh the address_scope DB object to correctly "
"reflect the most recent state of all its "
"attributes. This refresh will be performed "
"in the _ml2_md_extend_address_scope_dict method "
"inside the ml2plus plugin. The refresh option "
"may have a significant performace impact "
"and should be avoided. Hence this configuration "
"is set to False by default.")),
]
cfg.CONF.register_opts(opts, "ml2plus")
@resource_extend.has_resource_extenders
class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
implicitsubnetpool_db.ImplicitSubnetpoolMixin):
@ -184,21 +133,8 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
self._start_rpc_notifiers()
self.add_agent_status_check_worker(self.agent_health_check)
self._verify_service_plugins_requirements()
self.refresh_network_db_obj = cfg.CONF.ml2plus.refresh_network_db_obj
self.refresh_port_db_obj = cfg.CONF.ml2plus.refresh_port_db_obj
self.refresh_subnet_db_obj = cfg.CONF.ml2plus.refresh_subnet_db_obj
self.refresh_subnetpool_db_obj = (
cfg.CONF.ml2plus.refresh_subnetpool_db_obj)
self.refresh_address_scope_db_obj = (
cfg.CONF.ml2plus.refresh_address_scope_db_obj)
LOG.info("Modular L2 Plugin (extended) initialization complete")
db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(
attributes.SUBNETPOOLS, ['_ml2_md_extend_subnetpool_dict'])
db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(
as_ext.ADDRESS_SCOPES, ['_ml2_md_extend_address_scope_dict'])
def _handle_security_group_change(self, resource, event, trigger,
**kwargs):
context = kwargs.get('context')
@ -238,48 +174,70 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
self.mechanism_manager.delete_security_group_rule_precommit(
mech_context)
def _ml2_md_extend_network_dict(self, result, netdb):
@staticmethod
@resource_extend.extends([net_def.COLLECTION_NAME])
def _ml2_md_extend_network_dict(result, netdb):
plugin = directory.get_plugin()
session = patch_neutron.get_current_session()
# REVISIT: Check if transaction begin is still
# required here, and if so, if reader pattern
# can be used instead (will require getting the
# current context)
with session.begin(subtransactions=True):
if self.refresh_network_db_obj:
# In deployment it has been observed that the subnet
# backref is sometimes stale inside the driver's
# extend_network_dict. The call to refresh below
# ensures the backrefs and other attributes are
# not stale.
session.refresh(netdb)
self.extension_manager.extend_network_dict(session, netdb, result)
plugin.extension_manager.extend_network_dict(
session, netdb, result)
def _ml2_md_extend_port_dict(self, result, portdb):
@staticmethod
@resource_extend.extends([port_def.COLLECTION_NAME])
def _ml2_md_extend_port_dict(result, portdb):
plugin = directory.get_plugin()
session = patch_neutron.get_current_session()
# REVISIT: Check if transaction begin is still
# required here, and if so, if reader pattern
# can be used instead (will require getting the
# current context)
with session.begin(subtransactions=True):
if self.refresh_port_db_obj:
session.refresh(portdb)
self.extension_manager.extend_port_dict(session, portdb, result)
plugin.extension_manager.extend_port_dict(
session, portdb, result)
def _ml2_md_extend_subnet_dict(self, result, subnetdb):
@staticmethod
@resource_extend.extends([subnet_def.COLLECTION_NAME])
def _ml2_md_extend_subnet_dict(result, subnetdb):
plugin = directory.get_plugin()
session = patch_neutron.get_current_session()
# REVISIT: Check if transaction begin is still
# required here, and if so, if reader pattern
# can be used instead (will require getting the
# current context)
with session.begin(subtransactions=True):
if self.refresh_subnet_db_obj:
session.refresh(subnetdb)
self.extension_manager.extend_subnet_dict(
session, subnetdb, result)
plugin.extension_manager.extend_subnet_dict(
session, subnetdb, result)
def _ml2_md_extend_subnetpool_dict(self, result, subnetpooldb):
@staticmethod
@resource_extend.extends([subnetpool_def.COLLECTION_NAME])
def _ml2_md_extend_subnetpool_dict(result, subnetpooldb):
plugin = directory.get_plugin()
session = patch_neutron.get_current_session()
# REVISIT: Check if transaction begin is still
# required here, and if so, if reader pattern
# can be used instead (will require getting the
# current context)
with session.begin(subtransactions=True):
if self.refresh_subnetpool_db_obj:
session.refresh(subnetpooldb)
self.extension_manager.extend_subnetpool_dict(
session, subnetpooldb, result)
plugin.extension_manager.extend_subnetpool_dict(
session, subnetpooldb, result)
def _ml2_md_extend_address_scope_dict(self, result, address_scope):
@staticmethod
@resource_extend.extends([as_def.COLLECTION_NAME])
def _ml2_md_extend_address_scope_dict(result, address_scope):
plugin = directory.get_plugin()
session = patch_neutron.get_current_session()
# REVISIT: Check if transaction begin is still
# required here, and if so, if reader pattern
# can be used instead (will require getting the
# current context)
with session.begin(subtransactions=True):
if self.refresh_address_scope_db_obj:
session.refresh(address_scope)
self.extension_manager.extend_address_scope_dict(
session, address_scope, result)
plugin.extension_manager.extend_address_scope_dict(
session, address_scope, result)
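For reference, a condensed standalone sketch of the Pike resource-extend mechanism relied on above; DemoPlugin and the 'demo:extended' key are invented:

from neutron.db import _resource_extend as resource_extend
from neutron_lib.api.definitions import network as net_def

@resource_extend.has_resource_extenders
class DemoPlugin(object):
    @staticmethod
    @resource_extend.extends([net_def.COLLECTION_NAME])
    def _demo_extend_network_dict(result, db_obj):
        # Invoked for every network dict the plugin builds.
        result['demo:extended'] = True

net_dict = {}
resource_extend.apply_funcs(net_def.COLLECTION_NAME, net_dict, None)
assert net_dict['demo:extended']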
# Base version does not call _apply_dict_extend_functions()
def _make_address_scope_dict(self, address_scope, fields=None):
@ -295,9 +253,27 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def create_network(self, context, network):
self._ensure_tenant(context, network[attributes.NETWORK])
self._ensure_tenant(context, network[net_def.RESOURCE_NAME])
return super(Ml2PlusPlugin, self).create_network(context, network)
# We are overriding _create_network_db here to get
# around a bug that was introduced in the following commit:
# https://github.com/openstack/neutron/commit/
# 2b7c6b2e987466973d983902eded6aff7f764830#
# diff-2e958ca8f1a6e9987e28a7d0f95bc3d1L776
# which moves the call to extending the dict before the call to
# pre_commit. We need the extend_dict function to pick up the changes
# from the pre_commit operations as well.
def _create_network_db(self, context, network):
with db_api.context_manager.writer.using(context):
result, mech_context = super(
Ml2PlusPlugin, self)._create_network_db(
context, network)
net_db = (context.session.query(models_v2.Network).
filter_by(id=result['id']).one())
resource_extend.apply_funcs('networks', result, net_db)
return result, mech_context
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def update_network(self, context, id, network):
@ -311,15 +287,15 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def create_network_bulk(self, context, networks):
self._ensure_tenant_bulk(context, networks[attributes.NETWORKS],
attributes.NETWORK)
self._ensure_tenant_bulk(context, networks[net_def.COLLECTION_NAME],
net_def.RESOURCE_NAME)
return super(Ml2PlusPlugin, self).create_network_bulk(context,
networks)
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def create_subnet(self, context, subnet):
self._ensure_tenant(context, subnet[attributes.SUBNET])
self._ensure_tenant(context, subnet[subnet_def.RESOURCE_NAME])
return super(Ml2PlusPlugin, self).create_subnet(context, subnet)
@gbp_extensions.disable_transaction_guard
@ -335,22 +311,22 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def create_subnet_bulk(self, context, subnets):
self._ensure_tenant_bulk(context, subnets[attributes.SUBNETS],
attributes.SUBNET)
self._ensure_tenant_bulk(context, subnets[subnet_def.COLLECTION_NAME],
subnet_def.RESOURCE_NAME)
return super(Ml2PlusPlugin, self).create_subnet_bulk(context,
subnets)
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def create_port(self, context, port):
self._ensure_tenant(context, port[attributes.PORT])
self._ensure_tenant(context, port[port_def.RESOURCE_NAME])
return super(Ml2PlusPlugin, self).create_port(context, port)
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def create_port_bulk(self, context, ports):
self._ensure_tenant_bulk(context, ports[attributes.PORTS],
attributes.PORT)
self._ensure_tenant_bulk(context, ports[port_def.COLLECTION_NAME],
port_def.RESOURCE_NAME)
return super(Ml2PlusPlugin, self).create_port_bulk(context,
ports)
@ -395,14 +371,13 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def create_subnetpool(self, context, subnetpool):
self._ensure_tenant(context, subnetpool[attributes.SUBNETPOOL])
session = context.session
with session.begin(subtransactions=True):
self._ensure_tenant(context, subnetpool[subnetpool_def.RESOURCE_NAME])
with db_api.context_manager.writer.using(context):
result = super(Ml2PlusPlugin, self).create_subnetpool(context,
subnetpool)
self._update_implicit_subnetpool(context, subnetpool, result)
self.extension_manager.process_create_subnetpool(
context, subnetpool[attributes.SUBNETPOOL], result)
context, subnetpool[subnetpool_def.RESOURCE_NAME], result)
mech_context = driver_context.SubnetPoolContext(
self, context, result)
self.mechanism_manager.create_subnetpool_precommit(mech_context)
@ -421,8 +396,7 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
@db_api.retry_if_session_inactive()
def update_subnetpool(self, context, id, subnetpool):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
original_subnetpool = super(Ml2PlusPlugin, self).get_subnetpool(
context, id)
updated_subnetpool = super(Ml2PlusPlugin, self).update_subnetpool(
@ -430,7 +404,7 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
self._update_implicit_subnetpool(context, subnetpool,
updated_subnetpool)
self.extension_manager.process_update_subnetpool(
context, subnetpool[attributes.SUBNETPOOL],
context, subnetpool[subnetpool_def.RESOURCE_NAME],
updated_subnetpool)
mech_context = driver_context.SubnetPoolContext(
self, context, updated_subnetpool,
@ -441,8 +415,7 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
def delete_subnetpool(self, context, id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
subnetpool = super(Ml2PlusPlugin, self).get_subnetpool(context, id)
mech_context = driver_context.SubnetPoolContext(
self, context, subnetpool)
@ -459,8 +432,7 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
def create_address_scope(self, context, address_scope):
self._ensure_tenant(context, address_scope[as_ext.ADDRESS_SCOPE])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
result = super(Ml2PlusPlugin, self).create_address_scope(
context, address_scope)
self.extension_manager.process_create_address_scope(
@ -485,8 +457,7 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
def update_address_scope(self, context, id, address_scope):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
original_address_scope = super(Ml2PlusPlugin,
self).get_address_scope(context, id)
updated_address_scope = super(Ml2PlusPlugin,
@ -504,8 +475,7 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
@gbp_extensions.disable_transaction_guard
def delete_address_scope(self, context, id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
address_scope = super(Ml2PlusPlugin, self).get_address_scope(
context, id)
mech_context = driver_context.AddressScopeContext(

View File

@ -14,17 +14,20 @@
# under the License.
from neutron.api import extensions
from neutron.db import _resource_extend as resource_extend
from neutron.db import api as db_api
from neutron.db import common_db_mixin
from neutron.db import db_base_plugin_v2
from neutron.db import dns_db
from neutron.db import extraroute_db
from neutron.db import l3_gwmode_db
from neutron.db.models import l3 as l3_db
from neutron.extensions import l3
from neutron.extensions import portbindings
from neutron.quota import resource_registry
from neutron_lib.api.definitions import l3 as l3_def
from neutron_lib.api.definitions import portbindings
from neutron_lib import constants
from neutron_lib import exceptions
from neutron_lib.plugins import constants
from neutron_lib.plugins import directory
from oslo_log import log as logging
from oslo_utils import excutils
from sqlalchemy import inspect
@ -39,6 +42,7 @@ from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import (
LOG = logging.getLogger(__name__)
@resource_extend.has_resource_extenders
class ApicL3Plugin(common_db_mixin.CommonDbMixin,
extraroute_db.ExtraRoute_db_mixin,
l3_gwmode_db.L3_NAT_db_mixin,
@ -74,23 +78,23 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
self._mechanism_driver = mech_mgr.mech_drivers['apic_aim'].obj
return self._mechanism_driver
def _extend_router_dict_apic(self, router_res, router_db):
@staticmethod
@resource_extend.extends([l3_def.ROUTERS])
def _extend_router_dict_apic(router_res, router_db):
LOG.debug("APIC AIM L3 Plugin extending router dict: %s", router_res)
plugin = directory.get_plugin(constants.L3)
session = inspect(router_db).session
try:
self._md.extend_router_dict(session, router_db, router_res)
self._include_router_extn_attr(session, router_res)
plugin._md.extend_router_dict(session, router_db, router_res)
plugin._include_router_extn_attr(session, router_res)
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception("APIC AIM extend_router_dict failed")
db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(
l3.ROUTERS, ['_extend_router_dict_apic'])
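The inspect() call above is the stock SQLAlchemy way to recover the session owning a mapped object; a tiny standalone helper sketch:

from sqlalchemy import inspect

def session_of(db_obj):
    # inspect() returns the InstanceState of a mapped object;
    # its .session attribute is the owning Session (None if detached).
    return inspect(db_obj).session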
def create_router(self, context, router):
LOG.debug("APIC AIM L3 Plugin creating router: %s", router)
self._md.ensure_tenant(context, router['router']['tenant_id'])
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
# REVISIT(rkukura): The base operation may create a port,
# which should generally not be done inside a
# transaction. But we need to ensure atomicity, and are
@ -106,7 +110,7 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
def update_router(self, context, id, router):
LOG.debug("APIC AIM L3 Plugin updating router %(id)s with: %(router)s",
{'id': id, 'router': router})
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
# REVISIT(rkukura): The base operation sends notification
# RPCs, which should generally not be done inside a
# transaction. But we need to ensure atomicity, and are
@ -122,7 +126,7 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
def delete_router(self, context, id):
LOG.debug("APIC AIM L3 Plugin deleting router: %s", id)
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
# REVISIT(rkukura): The base operation may delete ports
# and sends notification RPCs, which should generally not
# be done inside a transaction. But we need to ensure
@ -147,7 +151,7 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
LOG.debug("APIC AIM L3 Plugin adding interface %(interface)s "
"to router %(router)s",
{'interface': interface_info, 'router': router_id})
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
# REVISIT(rkukura): The base operation may create or
# update a port and sends notification RPCs, which should
# generally not be done inside a transaction. But we need
@ -200,7 +204,7 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
LOG.debug("APIC AIM L3 Plugin removing interface %(interface)s "
"from router %(router)s",
{'interface': interface_info, 'router': router_id})
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
# REVISIT(rkukura): The base operation may delete or
# update a port and sends notification RPCs, which should
# generally not be done inside a transaction. But we need
@ -236,9 +240,10 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
def create_floatingip(self, context, floatingip):
fip = floatingip['floatingip']
# Verify that subnet is not a SNAT host-pool
self._md.check_floatingip_external_address(context, fip)
with context.session.begin(subtransactions=True):
self._md.ensure_tenant(context, fip['tenant_id'])
with db_api.context_manager.writer.using(context):
# Verify that subnet is not a SNAT host-pool
self._md.check_floatingip_external_address(context, fip)
if fip.get('subnet_id') or fip.get('floating_ip_address'):
result = super(ApicL3Plugin, self).create_floatingip(
context, floatingip)
@ -268,7 +273,7 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
return result
def update_floatingip(self, context, id, floatingip):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
old_fip = self.get_floatingip(context, id)
result = super(ApicL3Plugin, self).update_floatingip(
context, id, floatingip)
@ -279,7 +284,7 @@ class ApicL3Plugin(common_db_mixin.CommonDbMixin,
return result
def delete_floatingip(self, context, id):
with context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
old_fip = self.get_floatingip(context, id)
super(ApicL3Plugin, self).delete_floatingip(context, id)
self._md.delete_floatingip(context, old_fip)

View File

@ -749,7 +749,7 @@ class ChainMappingDriver(api.PolicyDriver, local_api.LocalAPI,
key = service_parameter.get("name")
servicepolicy_ptg_ip_map = (
self._get_ptg_policy_ipaddress_mapping(
session, provider_ptg_id))
context._plugin_context, provider_ptg_id))
servicepolicy_ip = servicepolicy_ptg_ip_map.get(
"ipaddress")
config_param_values[key] = servicepolicy_ip
@ -757,7 +757,7 @@ class ChainMappingDriver(api.PolicyDriver, local_api.LocalAPI,
key = service_parameter.get("name")
fip_maps = (
self._get_ptg_policy_fip_mapping(
context._plugin_context.session,
context._plugin_context,
provider_ptg_id))
servicepolicy_fip_ids = []
for fip_map in fip_maps:

View File

@ -19,6 +19,7 @@ from aim.api import resource as aim_resource
from aim import context as aim_context
from aim import utils as aim_utils
from neutron.agent.linux import dhcp
from neutron.db import api as db_api
from neutron import policy
from neutron_lib import constants as n_constants
from neutron_lib import context as n_context
@ -29,6 +30,7 @@ from oslo_log import helpers as log
from oslo_log import log as logging
from oslo_utils import excutils
from gbpservice.common import utils as gbp_utils
from gbpservice.network.neutronv2 import local_api
from gbpservice.neutron.db.grouppolicy import group_policy_db as gpdb
from gbpservice.neutron.db.grouppolicy import group_policy_mapping_db as gpmdb
@ -603,9 +605,9 @@ class AIMMappingDriver(nrd.CommonNeutronBase, aim_rpc.AIMMappingRPCMixin):
plugin_context = context._plugin_context
auto_ptg_id = self._get_auto_ptg_id(context.current['l2_policy_id'])
context.nsp_cleanup_ipaddress = self._get_ptg_policy_ipaddress_mapping(
context._plugin_context.session, context.current['id'])
context._plugin_context, context.current['id'])
context.nsp_cleanup_fips = self._get_ptg_policy_fip_mapping(
context._plugin_context.session, context.current['id'])
context._plugin_context, context.current['id'])
if context.current['id'] == auto_ptg_id:
raise AutoPTGDeleteNotSupported(id=context.current['id'])
ptg_db = context._plugin._get_policy_target_group(
@ -707,7 +709,7 @@ class AIMMappingDriver(nrd.CommonNeutronBase, aim_rpc.AIMMappingRPCMixin):
@log.log_method_call
def delete_policy_target_precommit(self, context):
fips = self._get_pt_floating_ip_mapping(
context._plugin_context.session, context.current['id'])
context._plugin_context, context.current['id'])
for fip in fips:
self._delete_fip(context._plugin_context, fip.floatingip_id)
pt_db = context._plugin._get_policy_target(
@ -2270,21 +2272,25 @@ class AIMMappingDriver(nrd.CommonNeutronBase, aim_rpc.AIMMappingRPCMixin):
return default_epg_name
def apic_epg_name_for_policy_target_group(self, session, ptg_id,
name=None):
ptg_db = session.query(gpmdb.PolicyTargetGroupMapping).filter_by(
id=ptg_id).first()
if ptg_db and self._is_auto_ptg(ptg_db):
l2p_db = session.query(gpmdb.L2PolicyMapping).filter_by(
id=ptg_db['l2_policy_id']).first()
network_id = l2p_db['network_id']
admin_context = self._get_admin_context_reuse_session(session)
net = self._get_network(admin_context, network_id)
default_epg_dn = net[cisco_apic.DIST_NAMES][cisco_apic.EPG]
default_epg_name = self._get_epg_name_from_dn(
admin_context, default_epg_dn)
return default_epg_name
else:
return ptg_id
name=None, context=None):
if not context:
context = gbp_utils.get_current_context()
# get_network can do a DB write, hence we use a writer
with db_api.context_manager.writer.using(context):
ptg_db = session.query(gpmdb.PolicyTargetGroupMapping).filter_by(
id=ptg_id).first()
if ptg_db and self._is_auto_ptg(ptg_db):
l2p_db = session.query(gpmdb.L2PolicyMapping).filter_by(
id=ptg_db['l2_policy_id']).first()
network_id = l2p_db['network_id']
admin_context = n_context.get_admin_context()
net = self._get_network(admin_context, network_id)
default_epg_dn = net[cisco_apic.DIST_NAMES][cisco_apic.EPG]
default_epg_name = self._get_epg_name_from_dn(
admin_context, default_epg_dn)
return default_epg_name
else:
return ptg_id
def apic_ap_name_for_application_policy_group(self, session, apg_id):
if apg_id:
@ -2317,9 +2323,9 @@ class AIMMappingDriver(nrd.CommonNeutronBase, aim_rpc.AIMMappingRPCMixin):
def _use_implicit_port(self, context, subnets=None):
self._create_default_security_group(context._plugin_context,
context.current['tenant_id'])
context.current['tenant_id'])
super(AIMMappingDriver, self)._use_implicit_port(
context, subnets=subnets)
context, subnets=subnets)
def _handle_create_network_service_policy(self, context):
self._validate_nat_pool_for_nsp(context)
@ -2348,11 +2354,6 @@ class AIMMappingDriver(nrd.CommonNeutronBase, aim_rpc.AIMMappingRPCMixin):
network = self._get_network(context, port['network_id'])
return network.get('dns_domain')
def _get_admin_context_reuse_session(self, session):
admin_context = n_context.get_admin_context()
admin_context._session = session
return admin_context
def _create_per_l3p_implicit_contracts(self):
admin_context = n_context.get_admin_context()
context = type('', (object,), {})()

View File

@ -10,6 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron.db import api as db_api
from neutron_lib.db import model_base
import sqlalchemy as sa
@ -80,90 +81,102 @@ class ServicePolicyQosPolicyMapping(model_base.BASEV2):
class NetworkServicePolicyMappingMixin(object):
def _set_policy_ipaddress_mapping(self, session, service_policy_id,
def _set_policy_ipaddress_mapping(self, context, service_policy_id,
policy_target_group, ipaddress):
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
mapping = ServicePolicyPTGIpAddressMapping(
service_policy_id=service_policy_id,
policy_target_group=policy_target_group, ipaddress=ipaddress)
session.add(mapping)
def _get_ptg_policy_ipaddress_mapping(self, session, policy_target_group):
with session.begin(subtransactions=True):
def _get_ptg_policy_ipaddress_mapping(self, context, policy_target_group):
with db_api.context_manager.reader.using(context):
session = context.session
return (session.query(ServicePolicyPTGIpAddressMapping).
filter_by(policy_target_group=policy_target_group).first())
def _delete_policy_ipaddress_mapping(self, session, policy_target_group):
with session.begin(subtransactions=True):
def _delete_policy_ipaddress_mapping(self, context, policy_target_group):
with db_api.context_manager.writer.using(context):
session = context.session
ip_mapping = session.query(
ServicePolicyPTGIpAddressMapping).filter_by(
policy_target_group=policy_target_group).first()
if ip_mapping:
session.delete(ip_mapping)
def _set_ptg_policy_fip_mapping(self, session, service_policy_id,
def _set_ptg_policy_fip_mapping(self, context, service_policy_id,
policy_target_group_id, fip_id):
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
mapping = ServicePolicyPTGFipMapping(
service_policy_id=service_policy_id,
policy_target_group_id=policy_target_group_id,
floatingip_id=fip_id)
session.add(mapping)
def _get_ptg_policy_fip_mapping(self, session, policy_target_group_id):
with session.begin(subtransactions=True):
def _get_ptg_policy_fip_mapping(self, context, policy_target_group_id):
with db_api.context_manager.reader.using(context):
session = context.session
return (session.query(ServicePolicyPTGFipMapping).
filter_by(policy_target_group_id=policy_target_group_id).
all())
def _delete_ptg_policy_fip_mapping(self, session, policy_target_group_id):
with session.begin(subtransactions=True):
def _delete_ptg_policy_fip_mapping(self, context, policy_target_group_id):
with db_api.context_manager.writer.using(context):
session = context.session
mappings = session.query(
ServicePolicyPTGFipMapping).filter_by(
policy_target_group_id=policy_target_group_id).all()
for mapping in mappings:
session.delete(mapping)
def _set_pt_floating_ips_mapping(self, session, policy_target_id, fip_ids):
with session.begin(subtransactions=True):
def _set_pt_floating_ips_mapping(self, context, policy_target_id, fip_ids):
with db_api.context_manager.writer.using(context):
session = context.session
for fip_id in fip_ids:
mapping = PolicyTargetFloatingIPMapping(
policy_target_id=policy_target_id, floatingip_id=fip_id)
session.add(mapping)
def _set_pts_floating_ips_mapping(self, session, pt_fip_map):
with session.begin(subtransactions=True):
def _set_pts_floating_ips_mapping(self, context, pt_fip_map):
with db_api.context_manager.writer.using(context):
for policy_target_id in pt_fip_map:
self._set_pt_floating_ips_mapping(
session, policy_target_id,
context, policy_target_id,
pt_fip_map[policy_target_id])
def _get_pt_floating_ip_mapping(self, session, policy_target_id):
with session.begin(subtransactions=True):
def _get_pt_floating_ip_mapping(self, context, policy_target_id):
with db_api.context_manager.reader.using(context):
session = context.session
return (session.query(PolicyTargetFloatingIPMapping).
filter_by(policy_target_id=policy_target_id).all())
def _delete_pt_floating_ip_mapping(self, session, policy_target_id):
with session.begin(subtransactions=True):
def _delete_pt_floating_ip_mapping(self, context, policy_target_id):
with db_api.context_manager.writer.using(context):
session = context.session
fip_mappings = session.query(
PolicyTargetFloatingIPMapping).filter_by(
policy_target_id=policy_target_id).all()
for fip_mapping in fip_mappings:
session.delete(fip_mapping)
def _get_nsp_qos_mapping(self, session, service_policy_id):
with session.begin(subtransactions=True):
def _get_nsp_qos_mapping(self, context, service_policy_id):
with db_api.context_manager.reader.using(context):
session = context.session
return (session.query(ServicePolicyQosPolicyMapping).
filter_by(service_policy_id=service_policy_id).first())
def _set_nsp_qos_mapping(self, session, service_policy_id, qos_policy_id):
with session.begin(subtransactions=True):
def _set_nsp_qos_mapping(self, context, service_policy_id, qos_policy_id):
with db_api.context_manager.writer.using(context):
session = context.session
mapping = ServicePolicyQosPolicyMapping(
service_policy_id=service_policy_id,
qos_policy_id=qos_policy_id)
session.add(mapping)
def _delete_nsp_qos_mapping(self, session, mapping):
def _delete_nsp_qos_mapping(self, context, mapping):
if mapping:
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
session.delete(mapping)
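All of these helpers now follow the same enginefacade pattern; a sketch in the same style (the helper name is hypothetical, the model is the ServicePolicyPTGIpAddressMapping defined in this module). Note that reader/writer.using() needs the request context rather than a bare session, which is why every signature in this mixin changed from session to context:

from neutron.db import api as db_api

def example_get_ip_mapping(context, ptg_id):
    # reader.using() begins (or joins) a transaction on the context and
    # exposes the session as context.session inside the block.
    with db_api.context_manager.reader.using(context):
        session = context.session
        return (session.query(ServicePolicyPTGIpAddressMapping).
                filter_by(policy_target_group=ptg_id).first())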

View File

@ -15,11 +15,12 @@ import operator
from keystoneclient import exceptions as k_exceptions
from keystoneclient.v2_0 import client as k_client
from neutron.api.v2 import attributes
from neutron.common import exceptions as neutron_exc
from neutron.db import api as db_api
from neutron.db import models_v2
from neutron.extensions import l3 as ext_l3
from neutron.extensions import securitygroup as ext_sg
from neutron_lib.api.definitions import port as port_def
from neutron_lib import constants as n_const
from neutron_lib import context as n_context
from neutron_lib.db import model_base
@ -1062,7 +1063,7 @@ class ImplicitResourceOperations(local_api.LocalAPI,
fip_ids = self._allocate_floating_ips(
context, ptg['l2_policy_id'], context.current['port_id'])
self._set_pt_floating_ips_mapping(
context._plugin_context.session,
context._plugin_context,
context.current['id'],
fip_ids)
return
@ -1175,29 +1176,29 @@ class ImplicitResourceOperations(local_api.LocalAPI,
ipaddress=None, fip_maps=None):
if not ipaddress:
ipaddress = self._get_ptg_policy_ipaddress_mapping(
context._plugin_context.session, ptg['id'])
context._plugin_context, ptg['id'])
if ipaddress and ptg['subnets']:
# TODO(rkukura): Loop on subnets?
self._restore_ip_to_allocation_pool(
context, ptg['subnets'][0], ipaddress.ipaddress)
self._delete_policy_ipaddress_mapping(
context._plugin_context.session, ptg['id'])
context._plugin_context, ptg['id'])
if not fip_maps:
fip_maps = self._get_ptg_policy_fip_mapping(
context._plugin_context.session, ptg['id'])
context._plugin_context, ptg['id'])
for fip_map in fip_maps:
self._delete_fip(context._plugin_context, fip_map.floatingip_id)
self._delete_ptg_policy_fip_mapping(
context._plugin_context.session, ptg['id'])
context._plugin_context, ptg['id'])
for pt in ptg['policy_targets']:
pt_fip_maps = self._get_pt_floating_ip_mapping(
context._plugin_context.session, pt)
context._plugin_context, pt)
for pt_fip_map in pt_fip_maps:
self._delete_fip(context._plugin_context,
pt_fip_map.floatingip_id)
self._delete_pt_floating_ip_mapping(
context._plugin_context.session, pt)
context._plugin_context, pt)
def _handle_nsp_update_on_ptg(self, context):
old_nsp = context.original.get("network_service_policy_id")
@ -1290,7 +1291,7 @@ class ImplicitResourceOperations(local_api.LocalAPI,
self._remove_ip_from_allocation_pool(
context, context.current['subnets'][0], free_ip)
self._set_policy_ipaddress_mapping(
context._plugin_context.session,
context._plugin_context,
network_service_policy_id,
context.current['id'],
free_ip)
@ -1302,7 +1303,7 @@ class ImplicitResourceOperations(local_api.LocalAPI,
context, context.current['l2_policy_id'])
for fip_id in fip_ids:
self._set_ptg_policy_fip_mapping(
context._plugin_context.session,
context._plugin_context,
network_service_policy_id,
context.current['id'],
fip_id)
@ -1324,7 +1325,7 @@ class ImplicitResourceOperations(local_api.LocalAPI,
pt_fip_map[policy_target['id']] = fip_ids
if pt_fip_map:
self._set_pts_floating_ips_mapping(
context._plugin_context.session, pt_fip_map)
context._plugin_context, pt_fip_map)
def _restore_ip_to_allocation_pool(self, context, subnet_id, ip_address):
# TODO(Magesh):Pass subnets and loop on subnets. Better to add logic
@ -1620,12 +1621,12 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
# get QoS Policy associated to NSP
mapping = self._get_nsp_qos_mapping(
context._plugin_context.session,
context._plugin_context,
network_service_policy_id)
# apply QoS policy to PT's Neutron port
port_id = context.current['port_id']
port = {attributes.PORT:
port = {port_def.RESOURCE_NAME:
{'qos_policy_id': mapping['qos_policy_id']}}
self._core_plugin.update_port(context._plugin_context,
port_id, port)
@ -1639,7 +1640,7 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
LOG.warning("Attempted to fetch deleted Service Target (QoS)")
else:
port_id = policy_target['port_id']
port = {attributes.PORT: {'qos_policy_id': None}}
port = {port_def.RESOURCE_NAME: {'qos_policy_id': None}}
self._core_plugin.update_port(context._plugin_context,
port_id, port)
@ -1672,12 +1673,12 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
context._plugin_context, filters={'id': policy_targets})
# get QoS Policy associated to NSP
mapping = self._get_nsp_qos_mapping(
context._plugin_context.session,
context._plugin_context,
nsp['id'])
# apply QoS policy to each PT's Neutron port
for pt in policy_targets:
port_id = pt['port_id']
port = {attributes.PORT:
port = {port_def.RESOURCE_NAME:
{'qos_policy_id': mapping['qos_policy_id']}}
self._core_plugin.update_port(context._plugin_context,
port_id, port)
@ -1773,7 +1774,7 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
def delete_policy_target_precommit(self, context):
self._validate_pt_in_use_by_cluster(context)
context.fips = self._get_pt_floating_ip_mapping(
context._plugin_context.session,
context._plugin_context,
context.current['id'])
@log.log_method_call
@ -1924,9 +1925,9 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
@log.log_method_call
def delete_policy_target_group_precommit(self, context):
context.nsp_cleanup_ipaddress = self._get_ptg_policy_ipaddress_mapping(
context._plugin_context.session, context.current['id'])
context._plugin_context, context.current['id'])
context.nsp_cleanup_fips = self._get_ptg_policy_fip_mapping(
context._plugin_context.session, context.current['id'])
context._plugin_context, context.current['id'])
@log.log_method_call
def delete_policy_target_group_postcommit(self, context):
@ -2299,14 +2300,14 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
qos_policy_id = self._create_implicit_qos_policy(context)
nsp_id = context.current['id']
self._create_implicit_qos_rule(context, qos_policy_id, max, burst)
self._set_nsp_qos_mapping(context._plugin_context.session,
self._set_nsp_qos_mapping(context._plugin_context,
nsp_id,
qos_policy_id)
@log.log_method_call
def delete_network_service_policy_precommit(self, context):
nsp = context.current
mapping = self._get_nsp_qos_mapping(context._plugin_context.session,
mapping = self._get_nsp_qos_mapping(context._plugin_context,
nsp['id'])
if mapping:
qos_policy_id = mapping['qos_policy_id']
@ -3342,7 +3343,7 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
def _delete_ptg_qos_policy(self, context, qos_policy_id):
qos_rules = self._get_qos_rules(context._plugin_context, qos_policy_id)
with context._plugin_context.session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context._plugin_context):
for qos_rule in qos_rules:
self._delete_qos_rule(context._plugin_context,
qos_rule['id'], qos_policy_id)

View File

@ -14,9 +14,9 @@ import netaddr
import six
from neutron.db import api as db_api
from neutron.extensions import portbindings
from neutron.plugins.common import constants as pconst
from neutron.quota import resource_registry
from neutron_lib.api.definitions import portbindings
from neutron_lib import constants
from neutron_lib import context as n_ctx
from neutron_lib.plugins import directory
@ -359,8 +359,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
new_status = {resource_name: {'status': updated_status,
'status_details':
updated_status_details}}
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
getattr(super(GroupPolicyPlugin, self),
"update_" + resource_name)(
context, _resource['id'], new_status)
@ -370,8 +369,9 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
def _get_resource(self, context, resource_name, resource_id,
gbp_context_name, fields=None):
session = context.session
with session.begin(subtransactions=True):
# The following is a writer because we do a DB write for status
with db_api.context_manager.writer.using(context):
session = context.session
get_method = "".join(['get_', resource_name])
result = getattr(super(GroupPolicyPlugin, self), get_method)(
context, resource_id, None)
@ -380,17 +380,19 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
getattr(self.extension_manager, extend_resources_method)(
session, result)
# Invoke drivers only if status attributes are requested
if not fields or STATUS_SET.intersection(set(fields)):
result = self._get_status_from_drivers(
context, gbp_context_name, resource_name, resource_id, result)
return self._fields(result, fields)
# Invoke drivers only if status attributes are requested
if not fields or STATUS_SET.intersection(set(fields)):
result = self._get_status_from_drivers(
context, gbp_context_name, resource_name, resource_id,
result)
return self._fields(result, fields)
def _get_resources(self, context, resource_name, gbp_context_name,
filters=None, fields=None, sorts=None, limit=None,
marker=None, page_reverse=False):
session = context.session
with session.begin(subtransactions=True):
# The following is a writer because we do a DB write for status
with db_api.context_manager.writer.using(context):
session = context.session
resource_plural = gbp_utils.get_resource_plural(resource_name)
get_resources_method = "".join(['get_', resource_plural])
results = getattr(super(GroupPolicyPlugin, self),
@ -463,9 +465,9 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_policy_target(self, context, policy_target):
self._ensure_tenant(context, policy_target['policy_target'])
self._add_fixed_ips_to_port_attributes(policy_target)
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
self._add_fixed_ips_to_port_attributes(policy_target)
result = super(GroupPolicyPlugin,
self).create_policy_target(context, policy_target)
self.extension_manager.process_create_policy_target(
@ -493,9 +495,9 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_policy_target(self, context, policy_target_id, policy_target):
self._add_fixed_ips_to_port_attributes(policy_target)
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
self._add_fixed_ips_to_port_attributes(policy_target)
original_policy_target = self.get_policy_target(context,
policy_target_id)
updated_policy_target = super(
@ -520,8 +522,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_policy_target(self, context, policy_target_id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
policy_target = self.get_policy_target(context, policy_target_id)
policy_context = p_context.PolicyTargetContext(
self, context, policy_target)
@ -562,8 +563,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
def create_policy_target_group(self, context, policy_target_group):
self._ensure_tenant(context,
policy_target_group['policy_target_group'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_policy_target_group(
context, policy_target_group)
@ -593,8 +594,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def update_policy_target_group(self, context, policy_target_group_id,
policy_target_group):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_policy_target_group = self.get_policy_target_group(
context, policy_target_group_id)
updated_policy_target_group = super(
@ -633,8 +634,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_policy_target_group(self, context, policy_target_group_id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
policy_target_group = self.get_policy_target_group(
context, policy_target_group_id)
pt_ids = policy_target_group['policy_targets']
@ -667,7 +667,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
LOG.warning('PTG %s already deleted',
policy_target_group['proxy_group_id'])
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
for pt in self.get_policy_targets(context, {'id': pt_ids}):
# We will allow PTG deletion if all PTs are unused.
# We could have cleaned these opportunistically in
@ -711,9 +711,9 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
application_policy_group):
self._ensure_tenant(
context, application_policy_group['application_policy_group'])
session = context.session
pdm = self.policy_driver_manager
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
pdm = self.policy_driver_manager
result = super(GroupPolicyPlugin,
self).create_application_policy_group(
context, application_policy_group)
@ -742,9 +742,9 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
def update_application_policy_group(self, context,
application_policy_group_id,
application_policy_group):
session = context.session
pdm = self.policy_driver_manager
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
pdm = self.policy_driver_manager
original_application_policy_group = (
self.get_application_policy_group(
context, application_policy_group_id))
@ -775,9 +775,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def delete_application_policy_group(self, context,
application_policy_group_id):
session = context.session
pdm = self.policy_driver_manager
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
pdm = self.policy_driver_manager
application_policy_group = self.get_application_policy_group(
context, application_policy_group_id)
policy_context = p_context.ApplicationPolicyGroupContext(
@ -818,8 +817,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_l2_policy(self, context, l2_policy):
self._ensure_tenant(context, l2_policy['l2_policy'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_l2_policy(context, l2_policy)
self.extension_manager.process_create_l2_policy(
@ -845,8 +844,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_l2_policy(self, context, l2_policy_id, l2_policy):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_l2_policy = self.get_l2_policy(context, l2_policy_id)
updated_l2_policy = super(GroupPolicyPlugin,
self).update_l2_policy(
@ -870,8 +869,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_l2_policy(self, context, l2_policy_id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
l2_policy = self.get_l2_policy(context, l2_policy_id)
policy_context = p_context.L2PolicyContext(self, context,
l2_policy)
@ -910,8 +908,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
def create_network_service_policy(self, context, network_service_policy):
self._ensure_tenant(
context, network_service_policy['network_service_policy'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_network_service_policy(
context, network_service_policy)
@ -943,8 +941,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def update_network_service_policy(self, context, network_service_policy_id,
network_service_policy):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_network_service_policy = super(
GroupPolicyPlugin, self).get_network_service_policy(
context, network_service_policy_id)
@ -974,8 +972,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def delete_network_service_policy(
self, context, network_service_policy_id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
network_service_policy = self.get_network_service_policy(
context, network_service_policy_id)
policy_context = p_context.NetworkServicePolicyContext(
@ -1016,8 +1013,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_l3_policy(self, context, l3_policy):
self._ensure_tenant(context, l3_policy['l3_policy'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_l3_policy(context, l3_policy)
self.extension_manager.process_create_l3_policy(
@ -1045,8 +1042,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_l3_policy(self, context, l3_policy_id, l3_policy):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_l3_policy = self.get_l3_policy(context, l3_policy_id)
updated_l3_policy = super(
GroupPolicyPlugin, self).update_l3_policy(
@ -1071,8 +1068,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_l3_policy(self, context, l3_policy_id, check_unused=False):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
if (check_unused and
(session.query(group_policy_mapping_db.L2PolicyMapping).
filter_by(l3_policy_id=l3_policy_id).count())):
@ -1114,10 +1111,9 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def create_policy_classifier(self, context, policy_classifier):
self._ensure_tenant(context,
policy_classifier['policy_classifier'])
session = context.session
with session.begin(subtransactions=True):
self._ensure_tenant(context, policy_classifier['policy_classifier'])
with db_api.context_manager.writer.using(context):
session = context.session
result = super(
GroupPolicyPlugin, self).create_policy_classifier(
context, policy_classifier)
@ -1146,8 +1142,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_policy_classifier(self, context, id, policy_classifier):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_policy_classifier = super(
GroupPolicyPlugin, self).get_policy_classifier(context, id)
updated_policy_classifier = super(
@ -1172,8 +1168,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_policy_classifier(self, context, id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
policy_classifier = self.get_policy_classifier(context, id)
policy_context = p_context.PolicyClassifierContext(
self, context, policy_classifier)
@ -1212,8 +1207,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_policy_action(self, context, policy_action):
self._ensure_tenant(context, policy_action['policy_action'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_policy_action(context, policy_action)
self.extension_manager.process_create_policy_action(
@ -1242,8 +1237,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_policy_action(self, context, id, policy_action):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_policy_action = super(
GroupPolicyPlugin, self).get_policy_action(context, id)
updated_policy_action = super(
@ -1269,8 +1264,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_policy_action(self, context, id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
policy_action = self.get_policy_action(context, id)
policy_context = p_context.PolicyActionContext(self, context,
policy_action)
@ -1307,8 +1301,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_policy_rule(self, context, policy_rule):
self._ensure_tenant(context, policy_rule['policy_rule'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(
GroupPolicyPlugin, self).create_policy_rule(
context, policy_rule)
@ -1336,8 +1330,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_policy_rule(self, context, id, policy_rule):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_policy_rule = super(
GroupPolicyPlugin, self).get_policy_rule(context, id)
updated_policy_rule = super(
@ -1361,8 +1355,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_policy_rule(self, context, id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
policy_rule = self.get_policy_rule(context, id)
policy_context = p_context.PolicyRuleContext(self, context,
policy_rule)
@ -1400,8 +1393,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_policy_rule_set(self, context, policy_rule_set):
self._ensure_tenant(context, policy_rule_set['policy_rule_set'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_policy_rule_set(
context, policy_rule_set)
@ -1430,8 +1423,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_policy_rule_set(self, context, id, policy_rule_set):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_policy_rule_set = super(
GroupPolicyPlugin, self).get_policy_rule_set(context, id)
updated_policy_rule_set = super(
@ -1456,8 +1449,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_policy_rule_set(self, context, id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
policy_rule_set = self.get_policy_rule_set(context, id)
policy_context = p_context.PolicyRuleSetContext(
self, context, policy_rule_set)
@ -1494,8 +1486,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_external_segment(self, context, external_segment):
self._ensure_tenant(context, external_segment['external_segment'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_external_segment(context,
external_segment)
@ -1528,8 +1520,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def update_external_segment(self, context, external_segment_id,
external_segment):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_external_segment = super(
GroupPolicyPlugin, self).get_external_segment(
context, external_segment_id)
@ -1559,8 +1551,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_external_segment(self, context, external_segment_id):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
es = self.get_external_segment(context, external_segment_id)
if es['l3_policies'] or es['nat_pools'] or es['external_policies']:
raise gpex.ExternalSegmentInUse(es_id=es['id'])
@ -1602,8 +1593,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_external_policy(self, context, external_policy):
self._ensure_tenant(context, external_policy['external_policy'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin,
self).create_external_policy(
context, external_policy)
@ -1633,8 +1624,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def update_external_policy(self, context, external_policy_id,
external_policy):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_external_policy = super(
GroupPolicyPlugin, self).get_external_policy(
context, external_policy_id)
@ -1662,8 +1653,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def delete_external_policy(self, context, external_policy_id,
check_unused=False):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
es = self.get_external_policy(context, external_policy_id)
policy_context = p_context.ExternalPolicyContext(
self, context, es)
@ -1701,8 +1691,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@gbp_extensions.disable_transaction_guard
def create_nat_pool(self, context, nat_pool):
self._ensure_tenant(context, nat_pool['nat_pool'])
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
result = super(GroupPolicyPlugin, self).create_nat_pool(
context, nat_pool)
self.extension_manager.process_create_nat_pool(session, nat_pool,
@ -1728,8 +1718,8 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def update_nat_pool(self, context, nat_pool_id, nat_pool):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
session = context.session
original_nat_pool = super(
GroupPolicyPlugin, self).get_nat_pool(context, nat_pool_id)
updated_nat_pool = super(
@ -1751,8 +1741,7 @@ class GroupPolicyPlugin(group_policy_mapping_db.GroupPolicyMappingDbPlugin):
@db_api.retry_if_session_inactive()
@gbp_extensions.disable_transaction_guard
def delete_nat_pool(self, context, nat_pool_id, check_unused=False):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
es = self.get_nat_pool(context, nat_pool_id)
policy_context = p_context.NatPoolContext(self, context, es)
(self.policy_driver_manager.delete_nat_pool_precommit(
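The pattern applied throughout this plugin is the same in every create/update/delete method: the explicit session transaction is replaced by the enginefacade writer scope, and the session is obtained from the context only inside that scope. A minimal sketch of the before/after shape (method body abridged; assumes "from neutron.db import api as db_api"):

    # Before: explicit session transaction
    session = context.session
    with session.begin(subtransactions=True):
        result = super(GroupPolicyPlugin, self).create_l2_policy(
            context, l2_policy)

    # After: enginefacade writer scope; the session is taken from the
    # context inside the transaction it manages.
    with db_api.context_manager.writer.using(context):
        session = context.session
        result = super(GroupPolicyPlugin, self).create_l2_policy(
            context, l2_policy)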


@ -13,9 +13,11 @@
from neutron.db import api as db_api
from neutron_lib import exceptions as n_exc
from oslo_config import cfg
from oslo_db import exception as oslo_db_excp
from oslo_log import log
from oslo_policy import policy as oslo_policy
from oslo_utils import excutils
from sqlalchemy import exc as sqlalchemy_exc
import stevedore
from gbpservice.neutron.services.grouppolicy.common import exceptions as gp_exc
@ -145,6 +147,12 @@ class PolicyDriverManager(stevedore.named.NamedExtensionManager):
" %(method)s",
{'name': driver.name, 'method': method_name}
)
elif isinstance(e, sqlalchemy_exc.InvalidRequestError):
LOG.exception(
"Policy driver '%(name)s' failed in %(method)s ",
"with sqlalchemy.exc.InvalidRequestError",
{'name': driver.name, 'method': method_name})
raise oslo_db_excp.RetryRequest(e)
else:
error = True
# We are eating a non-GBP/non-Neutron exception here
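Raising oslo.db's RetryRequest here signals the retry machinery wrapping the API call to re-run the whole operation instead of surfacing the sqlalchemy error. A minimal sketch of how such an exception is consumed, assuming oslo.db's wrap_db_retry decorator, which re-invokes the decorated function when it raises RetryRequest (do_precommit_work is a hypothetical stand-in for the driver dispatch):

    from oslo_db import api as oslo_db_api
    from oslo_db import exception as oslo_db_excp
    from sqlalchemy import exc as sqlalchemy_exc

    @oslo_db_api.wrap_db_retry(max_retries=5)
    def call_drivers(context):
        try:
            do_precommit_work(context)  # hypothetical driver dispatch
        except sqlalchemy_exc.InvalidRequestError as e:
            # The retried unit is the whole decorated call.
            raise oslo_db_excp.RetryRequest(e)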


@ -76,9 +76,9 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
if instance:
return instance
session = context.session
deployers = {}
with session.begin(subtransactions=True):
# REVISIT: Consider adding ensure_tenant() call here
with db_api.context_manager.writer.using(context):
instance = super(NodeCompositionPlugin,
self).create_servicechain_instance(
context, servicechain_instance)
@ -151,11 +151,10 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
if instance:
return instance
session = context.session
deployers = {}
updaters = {}
destroyers = {}
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
original_instance = self.get_servicechain_instance(
context, servicechain_instance_id)
updated_instance = super(
@ -192,22 +191,21 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
When a Servicechain Instance is deleted, all its nodes need to be
destroyed.
"""
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
instance = self.get_servicechain_instance(context,
servicechain_instance_id)
destroyers = self._get_scheduled_drivers(context, instance,
'destroy')
self._destroy_servicechain_nodes(context, destroyers)
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
super(NodeCompositionPlugin, self).delete_servicechain_instance(
context, servicechain_instance_id)
@log.log_method_call
def create_servicechain_node(self, context, servicechain_node):
session = context.session
with session.begin(subtransactions=True):
# REVISIT: Consider adding ensure_tenant() call here
with db_api.context_manager.writer.using(context):
result = super(NodeCompositionPlugin,
self).create_servicechain_node(context,
servicechain_node)
@ -223,9 +221,8 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
need to be updated as well. This usually results in a node
reconfiguration.
"""
session = context.session
updaters = {}
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
original_sc_node = self.get_servicechain_node(
context, servicechain_node_id)
updated_sc_node = super(NodeCompositionPlugin,
@ -267,8 +264,8 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
@log.log_method_call
def create_servicechain_spec(self, context, servicechain_spec):
session = context.session
with session.begin(subtransactions=True):
# REVISIT: Consider adding ensure_tenant() call here
with db_api.context_manager.writer.using(context):
result = super(
NodeCompositionPlugin, self).create_servicechain_spec(
context, servicechain_spec, set_params=False)
@ -278,8 +275,7 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
@log.log_method_call
def update_servicechain_spec(self, context, servicechain_spec_id,
servicechain_spec):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
original_sc_spec = self.get_servicechain_spec(
context, servicechain_spec_id)
updated_sc_spec = super(NodeCompositionPlugin,
@ -304,8 +300,8 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
@log.log_method_call
def create_service_profile(self, context, service_profile):
session = context.session
with session.begin(subtransactions=True):
# REVISIT: Consider adding ensure_tenant() call here
with db_api.context_manager.writer.using(context):
result = super(
NodeCompositionPlugin, self).create_service_profile(
context, service_profile)
@ -315,8 +311,7 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
@log.log_method_call
def update_service_profile(self, context, service_profile_id,
service_profile):
session = context.session
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
original_profile = self.get_service_profile(
context, service_profile_id)
updated_profile = super(NodeCompositionPlugin,
@ -485,9 +480,8 @@ class NodeCompositionPlugin(servicechain_db.ServiceChainDbPlugin,
return result
def _get_resource(self, context, resource_name, resource_id, fields=None):
session = context.session
deployers = {}
with session.begin(subtransactions=True):
with db_api.context_manager.writer.using(context):
resource = getattr(super(NodeCompositionPlugin,
self), 'get_' + resource_name)(context, resource_id)
if resource_name == 'servicechain_instance':


@ -9,6 +9,13 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# The following two imports are required when running tox, since the nsx
# packages have a foreign key dependency on the LBaaS tables. Those
# tables do not get loaded without these explicit imports.
from neutron_lbaas.db.loadbalancer import loadbalancer_dbv2 # noqa
from neutron_lbaas.db.loadbalancer import models # noqa
from neutron.agent import securitygroups_rpc
from neutron.api import extensions
from neutron.quota import resource
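The comment above comes down to SQLAlchemy metadata resolution: a foreign key is only resolvable if the referenced table has been registered on the shared MetaData by the time the schema is built, and importing the model modules is what performs that registration. An illustrative, self-contained sketch (hypothetical tables, not the real LBaaS/NSX models):

    import sqlalchemy as sa

    metadata = sa.MetaData()
    # Registering the referenced table first...
    lbaas_pools = sa.Table(
        'lbaas_pools', metadata,
        sa.Column('id', sa.String(36), primary_key=True))
    # ...is what makes this foreign key resolvable at create_all() time.
    nsx_pool_mappings = sa.Table(
        'nsx_pool_mappings', metadata,
        sa.Column('pool_id', sa.String(36),
                  sa.ForeignKey('lbaas_pools.id')))
    metadata.create_all(sa.create_engine('sqlite://'))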


@ -13,8 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api.v2 import attributes as attr
from neutron.extensions import address_scope as as_ext
from neutron_lib.api.definitions import subnetpool as subnetpool_def
from neutron_lib.api import extensions
from neutron_lib import constants
@ -22,7 +22,7 @@ from gbpservice._i18n import _
EXTENDED_ATTRIBUTES_2_0 = {
attr.SUBNETPOOLS: {
subnetpool_def.COLLECTION_NAME: {
'subnetpool_extension': {'allow_post': True,
'allow_put': True,
'default': constants.ATTR_NOT_SPECIFIED,
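The attribute map is now keyed by the neutron-lib api-definition constant rather than the legacy attributes module. The resulting structure, sketched in full (COLLECTION_NAME resolves to 'subnetpools'; the nested extension dict is abridged from the hunk above):

    from neutron_lib.api.definitions import subnetpool as subnetpool_def
    from neutron_lib import constants

    EXTENDED_ATTRIBUTES_2_0 = {
        subnetpool_def.COLLECTION_NAME: {  # == 'subnetpools'
            'subnetpool_extension': {
                'allow_post': True,
                'allow_put': True,
                'default': constants.ATTR_NOT_SPECIFIED,
            },
        },
    }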


@ -33,16 +33,15 @@ from aim import utils as aim_utils
from keystoneclient.v3 import client as ksc_client
from neutron.api import extensions
from neutron.callbacks import registry
from neutron.db import api as db_api
from neutron.db import segments_db
from neutron.plugins.ml2 import config
from neutron.tests.unit.api import test_extensions
from neutron.tests.unit.db import test_db_base_plugin_v2 as test_plugin
from neutron.tests.unit.extensions import test_address_scope
from neutron.tests.unit.extensions import test_l3
from neutron.tests.unit.extensions import test_securitygroup
from neutron.tests.unit import testlib_api
from neutron_lib.callbacks import registry
from neutron_lib import constants as n_constants
from neutron_lib import context as n_context
from neutron_lib.plugins import directory
@ -52,12 +51,13 @@ import webob.exc
from gbpservice.neutron.db import all_models # noqa
from gbpservice.neutron.extensions import cisco_apic_l3 as l3_ext
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import ( # noqa
config as aimcfg)
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import (
extension_db as extn_db)
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import (
mechanism_driver as md)
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import apic_mapper
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import config # noqa
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import data_migrations
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import db
from gbpservice.neutron.plugins.ml2plus.drivers.apic_aim import exceptions
@ -195,21 +195,20 @@ class ApicAimTestCase(test_address_scope.AddressScopeTestCase,
# we can successfully call through to all mechanism
# driver apis.
mech = mechanism_drivers or ['logger', 'apic_aim']
config.cfg.CONF.set_override('mechanism_drivers', mech, 'ml2')
config.cfg.CONF.set_override('extension_drivers',
['apic_aim', 'port_security', 'dns'],
'ml2')
config.cfg.CONF.set_override('api_extensions_path', None)
config.cfg.CONF.set_override('type_drivers',
['opflex', 'local', 'vlan'],
'ml2')
cfg.CONF.set_override('mechanism_drivers', mech,
group='ml2')
cfg.CONF.set_override('extension_drivers',
['apic_aim', 'port_security', 'dns'],
group='ml2')
cfg.CONF.set_override('api_extensions_path', None)
cfg.CONF.set_override('type_drivers',
['opflex', 'local', 'vlan'], group='ml2')
net_type = tenant_network_types or ['opflex']
config.cfg.CONF.set_override('tenant_network_types', net_type, 'ml2')
config.cfg.CONF.set_override('network_vlan_ranges',
['physnet1:1000:1099',
'physnet2:123:165',
'physnet3:347:513'],
group='ml2_type_vlan')
cfg.CONF.set_override('tenant_network_types', net_type,
group='ml2')
cfg.CONF.set_override('network_vlan_ranges',
['physnet1:1000:1099', 'physnet2:123:165',
'physnet3:347:513'], group='ml2_type_vlan')
service_plugins = {
'L3_ROUTER_NAT':
'gbpservice.neutron.services.apic_aim.l3_plugin.ApicL3Plugin'}
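These overrides all move from the neutron.plugins.ml2 config shim to oslo.config directly, with the option group passed as the group= keyword. A minimal, self-contained sketch of the override pattern, assuming an illustrative option registration (the real ml2 options are registered by Neutron itself):

    from oslo_config import cfg

    cfg.CONF.register_opts(
        [cfg.ListOpt('mechanism_drivers', default=[])], group='ml2')

    # group= keyword form used throughout these tests
    cfg.CONF.set_override('mechanism_drivers',
                          ['logger', 'apic_aim'], group='ml2')
    assert cfg.CONF.ml2.mechanism_drivers == ['logger', 'apic_aim']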
@ -217,7 +216,7 @@ class ApicAimTestCase(test_address_scope.AddressScopeTestCase,
self.useFixture(AimSqlFixture())
super(ApicAimTestCase, self).setUp(PLUGIN_NAME,
service_plugins=service_plugins)
self.db_session = db_api.get_session()
self.db_session = db_api.get_writer_session()
self.initialize_db_config(self.db_session)
ext_mgr = extensions.PluginAwareExtensionManager.get_instance()
@ -3402,7 +3401,8 @@ class TestMigrations(ApicAimTestCase, db.DbMixin):
scope1_id = scope['id']
scope1_vrf = scope[DN]['VRF']
mapping = self._get_address_scope_mapping(self.db_session, scope1_id)
self.db_session.delete(mapping)
with self.db_session.begin():
self.db_session.delete(mapping)
# Create an address scope with pre-existing VRF, delete its
# mapping, and create record in old DB table.
@ -3416,10 +3416,12 @@ class TestMigrations(ApicAimTestCase, db.DbMixin):
scope2_vrf = scope[DN]['VRF']
self.assertEqual(vrf.dn, scope2_vrf)
mapping = self._get_address_scope_mapping(self.db_session, scope2_id)
self.db_session.delete(mapping)
with self.db_session.begin():
self.db_session.delete(mapping)
old_db = data_migrations.DefunctAddressScopeExtensionDb(
address_scope_id=scope2_id, vrf_dn=scope2_vrf)
self.db_session.add(old_db)
with self.db_session.begin():
self.db_session.add(old_db)
# Create a normal network and delete its mapping.
net = self._make_network(self.fmt, 'net1', True)['network']
@ -3428,7 +3430,8 @@ class TestMigrations(ApicAimTestCase, db.DbMixin):
net1_epg = net[DN]['EndpointGroup']
net1_vrf = net[DN]['VRF']
mapping = self._get_network_mapping(self.db_session, net1_id)
self.db_session.delete(mapping)
with self.db_session.begin():
self.db_session.delete(mapping)
# Create an external network and delete its mapping.
net = self._make_ext_network('net2', dn=self.dn_t1_l1_n1)
@ -3437,7 +3440,8 @@ class TestMigrations(ApicAimTestCase, db.DbMixin):
net2_epg = net[DN]['EndpointGroup']
net2_vrf = net[DN]['VRF']
mapping = self._get_network_mapping(self.db_session, net2_id)
self.db_session.delete(mapping)
with self.db_session.begin():
self.db_session.delete(mapping)
# Create an unmanaged external network and verify it has no
# mapping.
@ -3446,10 +3450,6 @@ class TestMigrations(ApicAimTestCase, db.DbMixin):
mapping = self._get_network_mapping(self.db_session, net3_id)
self.assertIsNone(mapping)
# Flush session to ensure sqlalchemy relationships are all up
# to date.
self.db_session.flush()
# Verify normal address scope is missing DN.
scope = self._show('address-scopes', scope1_id)['address_scope']
self.assertNotIn('VRF', scope[DN])
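With the enginefacade writer session, direct model mutations in these tests are wrapped in an explicit transaction rather than relying on autocommit, which is also why the trailing flush() above becomes unnecessary: the transaction scope flushes and commits on exit. The pattern, condensed:

    # get_writer_session() comes from neutron.db.api, as imported above.
    session = db_api.get_writer_session()
    with session.begin():
        session.delete(mapping)  # flushed and committed when the scope exits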
@ -3528,9 +3528,9 @@ class TestMigrations(ApicAimTestCase, db.DbMixin):
# Verify deleting external network deletes BD and EPG.
self._delete('networks', net2_id)
bd = self._find_by_dn(net1_bd, aim_resource.BridgeDomain)
bd = self._find_by_dn(net2_bd, aim_resource.BridgeDomain)
self.assertIsNone(bd)
epg = self._find_by_dn(net1_epg, aim_resource.EndpointGroup)
epg = self._find_by_dn(net2_epg, aim_resource.EndpointGroup)
self.assertIsNone(epg)
def test_ap_name_change(self):


@ -16,9 +16,9 @@
import mock
import uuid
from neutron.plugins.ml2 import config
from neutron_lib import context
from neutron_lib.plugins import directory
from oslo_config import cfg
from gbpservice.neutron.tests.unit.plugins.ml2plus.drivers import (
extension_test as ext_test)
@ -30,9 +30,8 @@ class ExtensionDriverTestCase(test_plugin.Ml2PlusPluginV2TestCase):
_extension_drivers = ['test_ml2plus']
def setUp(self):
config.cfg.CONF.set_override('extension_drivers',
self._extension_drivers,
group='ml2')
cfg.CONF.set_override('extension_drivers',
self._extension_drivers, group='ml2')
super(ExtensionDriverTestCase, self).setUp()
self._plugin = directory.get_plugin()
self._ctxt = context.get_admin_context()
@ -222,9 +221,8 @@ class DBExtensionDriverTestCase(test_plugin.Ml2PlusPluginV2TestCase):
_extension_drivers = ['testdb_ml2plus']
def setUp(self):
config.cfg.CONF.set_override('extension_drivers',
self._extension_drivers,
group='ml2')
cfg.CONF.set_override('extension_drivers',
self._extension_drivers, group='ml2')
super(DBExtensionDriverTestCase, self).setUp()
self._plugin = directory.get_plugin()
self._ctxt = context.get_admin_context()


@ -17,11 +17,11 @@ import mock
import testtools
from neutron.api import extensions
from neutron.plugins.ml2 import config
from neutron.tests.unit.api import test_extensions
from neutron.tests.unit.db import test_db_base_plugin_v2 as test_plugin
from neutron.tests.unit.extensions import test_address_scope
from neutron_lib.plugins import directory
from oslo_config import cfg
from gbpservice.neutron.db import all_models # noqa
import gbpservice.neutron.extensions
@ -40,12 +40,10 @@ class Ml2PlusPluginV2TestCase(test_address_scope.AddressScopeTestCase):
# Enable the test mechanism driver to ensure that
# we can successfully call through to all mechanism
# driver apis.
config.cfg.CONF.set_override('mechanism_drivers',
['logger_plus', 'test'],
'ml2')
config.cfg.CONF.set_override('network_vlan_ranges',
['physnet1:1000:1099'],
group='ml2_type_vlan')
cfg.CONF.set_override('mechanism_drivers',
['logger_plus', 'test'], group='ml2')
cfg.CONF.set_override('network_vlan_ranges',
['physnet1:1000:1099'], group='ml2_type_vlan')
extensions.append_api_extensions_path(
gbpservice.neutron.extensions.__path__)


@ -25,13 +25,13 @@ from aim import context as aim_context
from keystoneclient.v3 import client as ksc_client
from netaddr import IPSet
from neutron.api.rpc.agentnotifiers import dhcp_rpc_agent_api
from neutron.callbacks import registry
from neutron.common import utils as n_utils
from neutron.db import api as db_api
from neutron.extensions import dns
from neutron.notifiers import nova
from neutron.tests.unit.db import test_db_base_plugin_v2 as test_plugin
from neutron.tests.unit.extensions import test_address_scope
from neutron_lib.callbacks import registry
from neutron_lib import constants as n_constants
from neutron_lib import context as nctx
from neutron_lib.plugins import directory
@ -741,7 +741,8 @@ class AIMBaseTestCase(test_nr_base.CommonNeutronBaseTestCase,
ptg_show = self.show_policy_target_group(
ptg_id, expected_res_status=200)['policy_target_group']
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg_id)
self._neutron_context.session, ptg_id,
context=self._neutron_context)
aim_tenant_name = self.name_mapper.project(None, self._tenant_id)
aim_app_profile_name = self.driver.aim_mech_driver.ap_name
aim_app_profiles = self.aim_mgr.find(
@ -1764,7 +1765,7 @@ class TestL2PolicyWithAutoPTG(TestL2PolicyBase):
'scope'})['policy_target_group']
self._test_policy_target_group_aim_mappings(
ptg, prs_lists, l2p)
# the test policy.json restricts auto-ptg access to admin
# Shared status cannot be updated
self.update_policy_target_group(
ptg['id'], is_admin_context=True, shared=(not shared),
expected_res_status=webob.exc.HTTPBadRequest.code)
@ -1774,7 +1775,8 @@ class TestL2PolicyWithAutoPTG(TestL2PolicyBase):
self.assertEqual('AutoPTGDeleteNotSupported',
res['NeutronError']['type'])
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'])
self._neutron_context.session, ptg['id'],
context=self._neutron_context)
aim_epg = self.aim_mgr.find(
self._aim_context, aim_resource.EndpointGroup,
name=aim_epg_name)[0]
@ -1813,7 +1815,8 @@ class TestL2PolicyWithAutoPTG(TestL2PolicyBase):
def _test_epg_policy_enforcement_attr(self, ptg):
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
db_api.get_session(), ptg['id'])
db_api.get_session(), ptg['id'],
context=self._neutron_context)
aim_epg = self.aim_mgr.find(
self._aim_context, aim_resource.EndpointGroup,
name=aim_epg_name)[0]
@ -2164,7 +2167,8 @@ class TestPolicyTargetGroupVmmDomains(AIMBaseTestCase):
'policy_target_group']
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'])
self._neutron_context.session, ptg['id'],
context=self._neutron_context)
aim_tenant_name = self.name_mapper.project(None, self._tenant_id)
aim_app_profile_name = self.driver.aim_mech_driver.ap_name
aim_app_profiles = self.aim_mgr.find(
@ -2285,7 +2289,8 @@ class TestPolicyTargetGroupIpv4(AIMBaseTestCase):
ptg_name = ptg['name']
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg_id, ptg_name)
self._neutron_context.session, ptg_id, ptg_name,
context=self._neutron_context)
aim_tenant_name = self.name_mapper.project(None, self._tenant_id)
aim_app_profile_name = self.driver.aim_mech_driver.ap_name
aim_app_profiles = self.aim_mgr.find(
@ -2320,7 +2325,8 @@ class TestPolicyTargetGroupIpv4(AIMBaseTestCase):
consumed_policy_rule_sets={new_prs_lists['consumed']['id']:
'scope'})['policy_target_group']
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg_id, new_name)
self._neutron_context.session, ptg_id, new_name,
context=self._neutron_context)
aim_epgs = self.aim_mgr.find(
self._aim_context, aim_resource.EndpointGroup, name=aim_epg_name)
self.assertEqual(1, len(aim_epgs))
@ -2454,7 +2460,8 @@ class TestPolicyTargetGroupIpv4(AIMBaseTestCase):
intra_ptg_allow=False)['policy_target_group']
self.assertFalse(ptg['intra_ptg_allow'])
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'])
self._neutron_context.session, ptg['id'],
context=self._neutron_context)
aim_epgs = self.aim_mgr.find(
self._aim_context, aim_resource.EndpointGroup, name=aim_epg_name)
self.assertEqual(1, len(aim_epgs))
@ -2669,7 +2676,8 @@ class TestPolicyTarget(AIMBaseTestCase):
self._bind_port_to_host(pt['port_id'], 'h1')
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'])
self._neutron_context.session, ptg['id'],
context=self._neutron_context)
aim_tenant_name = self.name_mapper.project(None, self._tenant_id)
aim_app_profile_name = self.driver.aim_mech_driver.ap_name
aim_epg = self.aim_mgr.get(
@ -2721,7 +2729,8 @@ class TestPolicyTarget(AIMBaseTestCase):
port_id=port_id)['policy_target']
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'])
self._neutron_context.session, ptg['id'],
context=self._neutron_context)
aim_tenant_name = self.name_mapper.project(None, self._tenant_id)
aim_app_profile_name = self.driver.aim_mech_driver.ap_name
aim_epg = self.aim_mgr.get(
@ -2783,7 +2792,8 @@ class TestPolicyTarget(AIMBaseTestCase):
self._bind_port_to_host(pt['port_id'], 'opflex-1')
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'])
self._neutron_context.session, ptg['id'],
context=self._neutron_context)
aim_tenant_name = self.name_mapper.project(None, self._tenant_id)
aim_app_profile_name = self.driver.aim_mech_driver.ap_name
aim_epg = self.aim_mgr.get(
@ -2861,7 +2871,8 @@ class TestPolicyTarget(AIMBaseTestCase):
port_id=port_id)['policy_target']
aim_epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'])
self._neutron_context.session, ptg['id'],
context=self._neutron_context)
aim_tenant_name = self.name_mapper.project(None, self._tenant_id)
aim_app_profile_name = self.driver.aim_mech_driver.ap_name
aim_epg = self.aim_mgr.get(
@ -3034,7 +3045,8 @@ class TestPolicyTarget(AIMBaseTestCase):
'timestamp': 0, 'request_id': 'request_id'},
host='h1')
epg_name = self.driver.apic_epg_name_for_policy_target_group(
self._neutron_context.session, ptg['id'], ptg['name'])
self._neutron_context.session, ptg['id'], ptg['name'],
context=self._neutron_context)
epg_tenant = ptg['tenant_id']
subnet = self._get_object('subnets', ptg['subnets'][0], self.api)
@ -3832,6 +3844,8 @@ class TestPolicyRuleSetRollback(AIMBaseTestCase):
class NotificationTest(AIMBaseTestCase):
block_dhcp_notifier = False
def setUp(self, policy_drivers=None, core_plugin=None, ml2_options=None,
l3_plugin=None, sc_plugin=None, **kwargs):
self.queue_notification_call_count = 0
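The repeated change in this file threads the request context through to the EPG name lookup alongside the session. Inferred from the call sites, the helper's signature would now look roughly like the following sketch (the parameter defaults shown are assumptions, not taken from this diff):

    # Hypothetical signature inferred from the call sites above.
    def apic_epg_name_for_policy_target_group(self, session, ptg_id,
                                              name=None, context=None):
        ...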


@ -22,6 +22,7 @@ from neutron.db.qos import models as qos_models
from neutron.extensions import external_net as external_net
from neutron.extensions import securitygroup as ext_sg
from neutron.plugins.common import constants as pconst
from neutron.services.qos.drivers.openvswitch import driver as qos_ovs_driver
from neutron.tests.unit.extensions import test_address_scope
from neutron.tests.unit.extensions import test_l3
from neutron.tests.unit.extensions import test_securitygroup
@ -4391,6 +4392,9 @@ class TestNetworkServicePolicy(ResourceMappingTestCase):
def setUp(self):
qos_plugin = 'qos'
# The following forces the QoS driver to be reset
# and subsequently reinitialized for every test in this class.
qos_ovs_driver.DRIVER = None
super(TestNetworkServicePolicy, self).setUp(qos_plugin=qos_plugin)
def test_create_nsp_multiple_ptgs(self):
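The reset above works because the OVS QoS driver lives in a module-level global that would otherwise persist across tests in the same process. A minimal illustration of that failure mode and the reset pattern (illustrative module, not the Neutron one):

    # Illustrative module-global driver cache.
    DRIVER = None

    def register():
        global DRIVER
        if DRIVER is None:
            DRIVER = object()  # stands in for the real driver instance
        return DRIVER

    # In a test's setUp(): clear the global so this test's configuration
    # is picked up when the driver re-registers.
    DRIVER = None
    register()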


@ -120,6 +120,9 @@ gbp policy-classifier-delete icmp-traffic
gbp policy-action-delete allow
# Delete Network Service Policy that includes QoS parameters
gbp network-service-policy-delete "qos"
set +o xtrace
echo "*********************************************************************"
echo "SUCCESS: End DevStack Exercise: $0"


@ -16,8 +16,6 @@ enable_plugin group-based-policy https://github.com/openstack/group-based-policy
ENABLE_APIC_AIM_GATE=True
# Update the following (most likely remove this override) once the master branch is moved to ocata
AIM_BRANCH=master
APICML2_BRANCH=sumit/ocata
OPFLEX_BRANCH=master
APICAPI_BRANCH=master


@ -15,7 +15,7 @@ XTRACE=$(set +o | grep xtrace)
function prepare_gbp_devstack_pre {
cd $TOP_DIR
sudo git checkout stable/ocata
sudo git checkout stable/pike
sudo sed -i 's/DEST=\/opt\/stack/DEST=\/opt\/stack\/new/g' $TOP_DIR/stackrc
sudo sed -i 's/source $TOP_DIR\/lib\/neutron/source $TOP_DIR\/lib\/neutron\nsource $TOP_DIR\/lib\/neutron-legacy/g' $TOP_DIR/stack.sh
}
@ -24,15 +24,15 @@ function prepare_gbp_devstack_post {
# The following should be updated when master moves to a new release
# We need to do the following since the infra job clones these repos and
# checks out the master branch (as this is the master branch) and later
# does not switch to the stable/ocata branch when installing devstack
# does not switch to the stable/pike branch when installing devstack
# since the repo is already present.
# This can be worked around by changing the job description in
# project-config to set BRANCH_OVERRIDE to use the stable/ocata branch
sudo git --git-dir=/opt/stack/new/neutron/.git --work-tree=/opt/stack/new/neutron checkout stable/ocata
sudo git --git-dir=/opt/stack/new/nova/.git --work-tree=/opt/stack/new/nova checkout stable/ocata
sudo git --git-dir=/opt/stack/new/keystone/.git --work-tree=/opt/stack/new/keystone checkout stable/ocata
sudo git --git-dir=/opt/stack/new/cinder/.git --work-tree=/opt/stack/new/cinder checkout stable/ocata
sudo git --git-dir=/opt/stack/new/requirements/.git --work-tree=/opt/stack/new/requirements checkout stable/ocata
# project-config to set BRANCH_OVERRIDE to use the stable/pike branch
sudo git --git-dir=/opt/stack/new/neutron/.git --work-tree=/opt/stack/new/neutron checkout stable/pike
sudo git --git-dir=/opt/stack/new/nova/.git --work-tree=/opt/stack/new/nova checkout stable/pike
sudo git --git-dir=/opt/stack/new/keystone/.git --work-tree=/opt/stack/new/keystone checkout stable/pike
sudo git --git-dir=/opt/stack/new/cinder/.git --work-tree=/opt/stack/new/cinder checkout stable/pike
sudo git --git-dir=/opt/stack/new/requirements/.git --work-tree=/opt/stack/new/requirements checkout stable/pike
source $TOP_DIR/functions
source $TOP_DIR/functions-common
@ -80,6 +80,8 @@ function source_creds {
}
function run_gbp_rally {
# REVISIT: Temporarily disabling this job until it is updated to run with Ocata
exit 1
cd $NEW_BASE
git clone http://github.com/group-policy/rally.git -b dev-ocata
cd rally


@ -8,6 +8,9 @@ set -x
trap prepare_logs ERR
# Make the workspace owned by the stack user
sudo chown -R $STACK_USER:$STACK_USER $BASE
# temporary fix for bug 1693689
export IPV4_ADDRS_SAFE_TO_USE=${DEVSTACK_GATE_IPV4_ADDRS_SAFE_TO_USE:-${DEVSTACK_GATE_FIXED_RANGE:-10.1.0.0/20}}


@ -8,6 +8,9 @@ set -x
trap prepare_logs ERR
# Make the workspace owned by the stack user
sudo chown -R $STACK_USER:$STACK_USER $BASE
# temporary fix for bug 1693689
export IPV4_ADDRS_SAFE_TO_USE=${DEVSTACK_GATE_IPV4_ADDRS_SAFE_TO_USE:-${DEVSTACK_GATE_FIXED_RANGE:-10.1.0.0/20}}


@ -32,14 +32,14 @@ check_residual_resources demo demo
echo "Running gbpfunc test suite"
export PYTHONPATH="$GBP_FUNC_DIR:${PYTHONPATH}"
cd $GBP_FUNC_DIR/testcases
# Run tests as non-admin cred
source_creds $TOP_DIR/openrc demo demo
python suite_non_admin_run.py upstream
gbpfunc_non_admin_exit_code=$?
# Run shared_resource tests as admin cred
source_creds $TOP_DIR/openrc admin admin
python suite_admin_run.py
gbpfunc_admin_exit_code=$?
# Run rest of the tests as non-admin cred
source_creds $TOP_DIR/openrc demo demo
python suite_non_admin_run.py upstream
gbpfunc_non_admin_exit_code=$?
# Prepare the log files for Jenkins to upload
prepare_logs


@ -2,19 +2,19 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
-e git+https://git.openstack.org/openstack/neutron.git@stable/ocata#egg=neutron
-e git+https://git.openstack.org/openstack/neutron.git@stable/pike#egg=neutron
-e git+https://github.com/noironetworks/apicapi.git@master#egg=apicapi
-e git+https://github.com/noironetworks/python-opflex-agent.git@master#egg=python-opflexagent-agent
-e git+https://github.com/openstack/vmware-nsx.git@stable/ocata#egg=vmware_nsx
-e git+https://github.com/openstack/vmware-nsxlib.git@stable/ocata#egg=vmware_nsxlib
-e git+https://github.com/openstack/vmware-nsx.git@stable/pike#egg=vmware_nsx
-e git+https://github.com/openstack/vmware-nsxlib.git@stable/pike#egg=vmware_nsxlib
-e git+https://git.openstack.org/openstack/python-group-based-policy-client@master#egg=gbpclient
-e git+https://git.openstack.org/openstack/neutron-vpnaas@stable/ocata#egg=neutron-vpnaas
-e git+https://git.openstack.org/openstack/neutron-lbaas@stable/ocata#egg=neutron-lbaas
-e git+https://git.openstack.org/openstack/neutron-fwaas@stable/ocata#egg=neutron-fwaas
-e git+https://git.openstack.org/openstack/neutron-vpnaas@stable/pike#egg=neutron-vpnaas
-e git+https://git.openstack.org/openstack/neutron-lbaas@stable/pike#egg=neutron-lbaas
-e git+https://git.openstack.org/openstack/neutron-fwaas@stable/pike#egg=neutron-fwaas
hacking<0.12,>=0.11.0 # Apache-2.0
cliff>=2.3.0 # Apache-2.0


@ -9,7 +9,7 @@ setenv = VIRTUAL_ENV={envdir}
passenv = TRACE_FAILONLY GENERATE_HASHES http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
usedevelop = True
install_command =
pip install -U -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata} {opts} {packages}
pip install -U -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/pike} {opts} {packages}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
whitelist_externals = sh