Add support for upstream Stein release

Import stable/stein rather than stable/rocky branches of upstream
and ACI-specific repositories.

Changes include:
* https://review.opendev.org/#/c/634790/ removed the rpc module
  from neutron.common, which was rehomed to neutron-lib.
* https://review.opendev.org/#/c/634497/ removed the exceptions
  module from neutron.common, which was rehomed to neutron-lib.
* https://review.opendev.org/#/c/581377/ removed exercises from the
  devstack gate. The shell scripts that ran the tests from the
  devstack exercises are now called directly.
* https://review.opendev.org/#/c/619087/ removed the common_db_mixin
  from the FlowClassifierDbPlugin, replacing it with the use of a
  method in neutron-lib.
* https://review.opendev.org/#/c/595369/ removed _setUpExtension,
  replacing it with the setup_extension method.
* https://review.opendev.org/#/c/623415/ added validation to host
  route CIDRs. The metadata CIDRs have been corrected to pass
  this new validation.
* https://review.opendev.org/#/c/615486/ added a method to get a
  nova client, and https://review.opendev.org/#/c/368631/ made the
  Notifier a singleton. These are now used to get a notifier for
  nova.
* https://review.opendev.org/#/c/628033/ removed the use of the
  _resource_extend module, which has been moved to neutron-lib.
* https://review.opendev.org/#/c/585037/ converted policy.json to
  policy-in-code. This resulted in stricter policy enforcement and
  flagged problems with existing UTs, mainly in their use of shared
  resources (which requires admin privileges). These UTs have been
  fixed.
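The neutron.common rehomings above are pure import-path moves, and this change switches each import unconditionally to neutron_lib. For code that must straddle both releases, the pattern can be sketched with a fallback helper (the helper and the stand-in module names below are illustrative only, not part of this patch):

```python
import importlib


def import_rehomed(name, new_home, old_home):
    # Try the post-rehoming location first (e.g. neutron_lib), then
    # fall back to the pre-Stein location (e.g. neutron.common).
    try:
        return importlib.import_module("%s.%s" % (new_home, name))
    except ImportError:
        return importlib.import_module("%s.%s" % (old_home, name))


# Stdlib stand-ins so the sketch runs anywhere: the bogus package name
# forces the fallback path, which resolves to os.path.
mod = import_rehomed("path", "no_such_package_xyz", "os")
```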

Change-Id: Ia7bd0799a814e38ff37b7ff062fa1eae7928991c
Robert Kukura 2020-04-30 15:48:45 -04:00 committed by Thomas Bachman
parent ee47b3e132
commit 25c7d0c6c8
40 changed files with 188 additions and 147 deletions

View File

@@ -14,19 +14,23 @@
- openstack-tox-pep8:
required-projects:
- name: openstack/requirements
override-checkout: stable/rocky
override-checkout: stable/stein
- openstack-tox-py27:
required-projects:
- name: openstack/requirements
override-checkout: stable/rocky
override-checkout: stable/stein
- openstack-tox-py35:
required-projects:
- name: openstack/requirements
override-checkout: stable/rocky
override-checkout: stable/stein
- openstack-tox-py36:
required-projects:
- name: openstack/requirements
override-checkout: stable/stein
- legacy-group-based-policy-dsvm-functional:
voting: false
- legacy-group-based-policy-dsvm-aim:
voting: true
voting: false
- legacy-group-based-policy-dsvm-nfp:
voting: false
gate:
@@ -34,12 +38,16 @@
- openstack-tox-pep8:
required-projects:
- name: openstack/requirements
override-checkout: stable/rocky
override-checkout: stable/stein
- openstack-tox-py27:
required-projects:
- name: openstack/requirements
override-checkout: stable/rocky
override-checkout: stable/stein
- openstack-tox-py35:
required-projects:
- name: openstack/requirements
override-checkout: stable/rocky
override-checkout: stable/stein
- openstack-tox-py36:
required-projects:
- name: openstack/requirements
override-checkout: stable/stein

View File

@@ -25,7 +25,7 @@ NEUTRON_CONF_DIR=/etc/neutron
NEUTRON_CONF=$NEUTRON_CONF_DIR/neutron.conf
NFP_CONF_DIR=/etc/nfp
DISKIMAGE_CREATE_DIR=$NFPSERVICE_DIR/gbpservice/contrib/nfp/tools/image_builder
NEUTRON_SRC_BRANCH_FOR_NFP_CONTROLLER=stable/rocky
NEUTRON_SRC_BRANCH_FOR_NFP_CONTROLLER=stable/stein
# Save trace setting
XTRACE=$(set +o | grep xtrace)

View File

@@ -43,11 +43,11 @@ if [[ $ENABLE_NFP = True ]]; then
# Make sure that your public interface is not attached to any bridge.
PUBLIC_INTERFACE=
enable_plugin neutron-fwaas http://opendev.org/openstack/neutron-fwaas.git stable/rocky
enable_plugin neutron-lbaas https://opendev.org/openstack/neutron-lbaas.git stable/rocky
enable_plugin neutron https://opendev.org/openstack/neutron.git stable/rocky
enable_plugin neutron-vpnaas https://opendev.org/openstack/neutron-vpnaas.git stable/rocky
enable_plugin octavia https://opendev.org/openstack/octavia.git stable/rocky
enable_plugin neutron-fwaas http://opendev.org/openstack/neutron-fwaas.git stable/stein
enable_plugin neutron-lbaas https://opendev.org/openstack/neutron-lbaas.git stable/stein
enable_plugin neutron https://opendev.org/openstack/neutron.git stable/stein
enable_plugin neutron-vpnaas https://opendev.org/openstack/neutron-vpnaas.git stable/stein
enable_plugin octavia https://opendev.org/openstack/octavia.git stable/stein
fi
fi

View File

@@ -25,13 +25,13 @@ GIT_BASE=${GIT_BASE:-https://opendev.org}
GBPSERVICE_REPO=${GBPSERVICE_REPO:-${GIT_BASE}/x/group-based-policy.git}
GBPSERVICE_BRANCH=${GBPSERVICE_BRANCH:-master}
GBPCLIENT_REPO=${GBPCLIENT_REPO:-${GIT_BASE}/x/python-group-based-policy-client.git}
GBPCLIENT_BRANCH=${GBPCLIENT_BRANCH:-stable/rocky}
GBPCLIENT_BRANCH=${GBPCLIENT_BRANCH:-stable/stein}
GBPUI_REPO=${GBPUI_REPO:-${GIT_BASE}/x/group-based-policy-ui.git}
GBPUI_BRANCH=${GBPUI_BRANCH:-master}
GBPHEAT_REPO=${GBPHEAT_REPO:-${GIT_BASE}/x/group-based-policy-automation.git}
GBPHEAT_BRANCH=${GBPHEAT_BRANCH:-master}
AIM_BRANCH=${AIM_BRANCH:-master}
OPFLEX_BRANCH=${OPFLEX_BRANCH:-stable/rocky}
OPFLEX_BRANCH=${OPFLEX_BRANCH:-stable/stein}
APICAPI_BRANCH=${APICAPI_BRANCH:-master}
ACITOOLKIT_BRANCH=${ACITOOLKIT_BRANCH:-noiro-lite}

View File

@@ -17,8 +17,8 @@ from gbpservice.nfp.lib import transport
from gbpservice.nfp.orchestrator.openstack import openstack_driver
from neutron.common import constants as n_constants
from neutron.common import rpc as n_rpc
from neutron_lib.agent import topics as n_topics
from neutron_lib import rpc as n_rpc
import oslo_messaging as messaging

View File

@@ -10,12 +10,11 @@
# License for the specific language governing permissions and limitations
# under the License.
import oslo_serialization.jsonutils as jsonutils
from neutron.common import rpc as n_rpc
from neutron_lib import rpc as n_rpc
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
import oslo_serialization.jsonutils as jsonutils
import pecan
import pika

View File

@@ -10,15 +10,14 @@
# License for the specific language governing permissions and limitations
# under the License.
from neutron_lib import rpc as n_rpc
from oslo_config import cfg
import oslo_messaging as messaging
from gbpservice.contrib.nfp.configurator.lib import constants as const
from gbpservice.nfp.core import log as nfp_logging
from gbpservice.nfp.core import module as nfp_api
from neutron.common import rpc as n_rpc
from oslo_config import cfg
import oslo_messaging as messaging
n_rpc.init(cfg.CONF)
LOG = nfp_logging.getLogger(__name__)

View File

@@ -13,20 +13,20 @@
import socket
import time
from gbpservice.contrib.nfp.config_orchestrator.common import topics
from gbpservice.nfp.core import log as nfp_logging
from neutron.common import rpc as n_rpc
from neutron.db import agents_db
from neutron.db import agentschedulers_db
from neutron_lib import exceptions
from neutron_lib.plugins import directory
from neutron_lib import rpc as n_rpc
from neutron_vpnaas.db.vpn import vpn_validator
from neutron_vpnaas.services.vpn.plugin import VPNDriverPlugin
from neutron_vpnaas.services.vpn.plugin import VPNPlugin
from neutron_vpnaas.services.vpn.service_drivers import base_ipsec
import oslo_messaging
from gbpservice.contrib.nfp.config_orchestrator.common import topics
from gbpservice.nfp.core import log as nfp_logging
LOG = nfp_logging.getLogger(__name__)
BASE_VPN_VERSION = '1.0'
AGENT_TYPE_VPN = 'NFP Vpn agent'

View File

@@ -16,12 +16,12 @@ import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy import sql
from neutron.db import _model_query as model_query
from neutron.db import _resource_extend as resource_extend
from neutron.db import models_v2
from neutron_lib.api.definitions import subnetpool as subnetpool_def
from neutron_lib.api import validators
from neutron_lib.db import model_base
from neutron_lib.db import model_query
from neutron_lib.db import resource_extend
from neutron_lib import exceptions as n_exc
from gbpservice._i18n import _

View File

@@ -22,6 +22,7 @@ from neutron.plugins.ml2 import db as ml2_db
from neutron.quota import resource as quota_resource
from neutron_lib.api import attributes
from neutron_lib.api import validators
from neutron_lib.db import model_query
from neutron_lib import exceptions
from neutron_lib.plugins import directory
from oslo_log import log
@@ -351,7 +352,7 @@ try:
self._get_port(context, logical_source_port)
if logical_destination_port is not None:
self._get_port(context, logical_destination_port)
query = self._model_query(
query = model_query.query_with_hooks(
context, flowclassifier_db.FlowClassifier)
for flow_classifier_db in query.all():
if self.flowclassifier_conflict(

View File

@@ -22,7 +22,7 @@ DEVICE_OWNER_SNAT_PORT = 'apic:snat-pool'
DEVICE_OWNER_SVI_PORT = 'apic:svi'
IPV4_ANY_CIDR = '0.0.0.0/0'
IPV4_METADATA_CIDR = '169.254.169.254/16'
IPV4_METADATA_CIDR = '169.254.0.0/16'
PROMISCUOUS_TYPES = [n_constants.DEVICE_OWNER_DHCP,
n_constants.DEVICE_OWNER_LOADBALANCER]
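The corrected constant above satisfies the new host-route CIDR validation: 169.254.169.254/16 has host bits set below the /16 prefix, while 169.254.0.0/16 is a proper network address. The stdlib ipaddress module is used here only to illustrate the rule; the actual validation added upstream lives in neutron-lib:

```python
import ipaddress


def is_valid_cidr(cidr):
    # strict=True rejects prefixes with host bits set, which is the
    # rule the corrected metadata CIDR now satisfies.
    try:
        ipaddress.ip_network(cidr, strict=True)
        return True
    except ValueError:
        return False


old_ok = is_valid_cidr('169.254.169.254/16')  # old value: host bits set
new_ok = is_valid_cidr('169.254.0.0/16')      # corrected value
```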

View File

@@ -31,7 +31,6 @@ from aim import exceptions as aim_exceptions
from aim import utils as aim_utils
import netaddr
from neutron.agent import securitygroups_rpc
from neutron.common import rpc as n_rpc
from neutron.common import utils as n_utils
from neutron.db.models import address_scope as as_db
from neutron.db.models import allowed_address_pair as n_addr_pair_db
@@ -63,6 +62,7 @@ from neutron_lib import exceptions as n_exceptions
from neutron_lib.plugins import constants as pconst
from neutron_lib.plugins import directory
from neutron_lib.plugins.ml2 import api
from neutron_lib import rpc as n_rpc
from neutron_lib.services.qos import constants as qos_consts
from neutron_lib.utils import net
from opflexagent import constants as ofcst

View File

@@ -26,7 +26,7 @@ client = None
def _get_client():
global client
if client is None:
client = n_nova.Notifier().nclient
client = n_nova.Notifier.get_instance()._get_nova_client()
return client
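The get_instance() call above follows the classmethod-singleton pattern the upstream change introduced for the nova Notifier. A generic, self-contained sketch of that pattern (the class body here is illustrative, not the real Notifier, which manages a nova client internally):

```python
class Notifier(object):
    # Minimal classmethod-singleton sketch: repeated calls to
    # get_instance() return the same object.
    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance


first = Notifier.get_instance()
second = Notifier.get_instance()
```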

View File

@@ -17,7 +17,6 @@ from collections import defaultdict
from collections import namedtuple
import netaddr
from neutron.common import rpc as n_rpc
from neutron.db.extra_dhcp_opt import models as dhcp_models
from neutron.db.models import allowed_address_pair as aap_models
from neutron.db.models import dns as dns_models
@@ -31,6 +30,7 @@ from neutron.services.trunk import models as trunk_models
from neutron_lib.api.definitions import portbindings
from neutron_lib import constants as n_constants
from neutron_lib import context as n_context
from neutron_lib import rpc as n_rpc
from opflexagent import host_agent_rpc as oa_rpc
from opflexagent import rpc as o_rpc
from oslo_log import log

View File

@@ -20,7 +20,6 @@ from gbpservice.neutron.extensions import patch # noqa
from neutron.common import constants as n_const
from neutron.common import utils as n_utils
from neutron.db import _resource_extend as resource_extend
from neutron.db.models import securitygroup as securitygroups_db
from neutron.db import models_v2
from neutron.plugins.ml2.common import exceptions as ml2_exc
@@ -36,6 +35,7 @@ from neutron_lib.api import validators
from neutron_lib.callbacks import events
from neutron_lib.callbacks import registry
from neutron_lib.callbacks import resources
from neutron_lib.db import resource_extend
from neutron_lib.db import utils as db_utils
from neutron_lib.plugins import directory
from oslo_log import log
@@ -308,9 +308,9 @@ class Ml2PlusPlugin(ml2_plugin.Ml2Plugin,
'tenant_id': address_scope['tenant_id'],
'shared': address_scope['shared'],
'ip_version': address_scope['ip_version']}
self._apply_dict_extend_functions(as_def.COLLECTION_NAME, res,
address_scope)
return self._fields(res, fields)
resource_extend.apply_funcs(
as_def.COLLECTION_NAME, res, address_scope)
return db_utils.resource_fields(res, fields)
@n_utils.transaction_guard
@db_api.retry_if_session_inactive()
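The resource_extend.apply_funcs() call above applies registered per-collection callbacks to the result dict before it is returned. A generic sketch of that registry pattern (function and collection names here are illustrative, not the neutron_lib.db.resource_extend API):

```python
# Registry of extend callbacks, keyed by collection name.
_extend_funcs = {}


def register(collection, func):
    _extend_funcs.setdefault(collection, []).append(func)


def apply_funcs(collection, result, db_obj):
    # Each callback mutates the result dict from the DB object.
    for func in _extend_funcs.get(collection, []):
        func(result, db_obj)
    return result


register('address_scopes',
         lambda res, db: res.update({'shared': db['shared']}))
res = apply_funcs('address_scopes', {'id': 'as1'}, {'shared': True})
```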

View File

@@ -15,8 +15,6 @@
from neutron.api import extensions
from neutron.common import utils as n_utils
from neutron.db import _model_query as model_query
from neutron.db import _resource_extend as resource_extend
from neutron.db import common_db_mixin
from neutron.db import dns_db
from neutron.db import extraroute_db
@@ -25,6 +23,8 @@ from neutron.db.models import l3 as l3_db
from neutron.quota import resource_registry
from neutron_lib.api.definitions import l3 as l3_def
from neutron_lib.api.definitions import portbindings
from neutron_lib.db import model_query
from neutron_lib.db import resource_extend
from neutron_lib.db import utils as db_utils
from neutron_lib import exceptions
from neutron_lib.plugins import constants

View File

@@ -15,7 +15,6 @@ import operator
from keystoneclient import exceptions as k_exceptions
from keystoneclient.v2_0 import client as k_client
import netaddr
from neutron.common import exceptions as neutron_exc
from neutron.db import models_v2
from neutron.extensions import securitygroup as ext_sg
from neutron_lib.api.definitions import port as port_def
@@ -1842,7 +1841,7 @@ class ResourceMappingDriver(api.PolicyDriver, ImplicitResourceOperations,
subnets = self._use_implicit_subnet_from_subnetpool(
context, subnet_specifics)
context.add_subnets([sub['id'] for sub in subnets])
except neutron_exc.SubnetAllocationError:
except n_exc.SubnetAllocationError:
# Translate to GBP exception
raise exc.NoSubnetAvailable()

View File

@@ -18,10 +18,10 @@ import eventlet
from eventlet import greenpool
from keystoneclient import exceptions as k_exceptions
from keystoneclient.v2_0 import client as keyclient
from neutron.common import rpc as n_rpc
from neutron_lib.db import model_base
from neutron_lib import exceptions as n_exc
from neutron_lib.plugins import constants as pconst
from neutron_lib import rpc as n_rpc
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging

View File

@@ -10,15 +10,16 @@
# License for the specific language governing permissions and limitations
# under the License.
from gbpservice.nfp.lib import transport
import mock
from neutron.common import rpc as n_rpc
from neutron_lib import context as ctx
from neutron_lib import rpc as n_rpc
from oslo_config import cfg
from oslo_serialization import jsonutils
import six
import unittest2
from gbpservice.nfp.lib import transport
"""
Common class used to create configuration mapping
"""

View File

@@ -10,13 +10,13 @@
# License for the specific language governing permissions and limitations
# under the License.
from gbpservice.nfp.proxy_agent.notifications import pull
import mock
from neutron_lib import context as ctx
from neutron_lib import rpc as n_rpc
from oslo_config import cfg
import unittest2
from neutron.common import rpc as n_rpc
from oslo_config import cfg
from gbpservice.nfp.proxy_agent.notifications import pull
pull_notification = pull.PullNotification

View File

@@ -49,6 +49,7 @@ from neutron_lib import constants as n_constants
from neutron_lib import context as n_context
from neutron_lib.plugins import constants as pconst
from neutron_lib.plugins import directory
from neutron_lib import rpc
from opflexagent import constants as ofcst
from oslo_config import cfg
from oslo_utils import uuidutils
@@ -703,11 +704,12 @@ class TestRpcListeners(ApicAimTestCase):
# REVISIT: Remove new_rpc option with old RPC cleanup.
def test_start_rpc_listeners(self):
# Override mock from
# neutron.tests.base.BaseTestCase.setup_rpc_mocks(), so that
# it returns servers, but still avoids starting them.
with mock.patch('neutron.common.rpc.Connection.consume_in_threads',
TestRpcListeners._consume_in_threads):
# Override mock from neutron_lib.fixture.RPCFixture installed
# by neutron.tests.base.BaseTestCase.setUp(), so that it
# returns servers, but still avoids starting them.
with mock.patch.object(
rpc.Connection, 'consume_in_threads',
TestRpcListeners._consume_in_threads):
# Call plugin method and verify that the apic_aim MD's
# RPC servers are returned.
servers = self.plugin.start_rpc_listeners()
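The switch above from a hard-coded 'neutron.common.rpc.Connection...' string target to mock.patch.object is the usual fix when a module is rehomed: patching the attribute on the imported class keeps working after the move. A self-contained illustration (toy class, not the neutron RPC code):

```python
from unittest import mock


class Connection(object):
    def consume_in_threads(self):
        return []  # stand-in for starting RPC servers


def fake_consume(self):
    return ['server']


# Patching the attribute on the imported class survives module moves,
# unlike a string target naming the module's old location.
with mock.patch.object(Connection, 'consume_in_threads', fake_consume):
    servers = Connection().consume_in_threads()
```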
@@ -10763,7 +10765,7 @@ class TestOpflexRpc(ApicAimTestCase):
for route in host_routes:
if route['destination'] == '0.0.0.0/0':
default_routes.append(route)
elif route['destination'] == '169.254.169.254/16':
elif route['destination'] == '169.254.0.0/16':
metadata_routes.append(route)
if not default_routes and gateway_ip:
host_routes.append(
@@ -10778,7 +10780,7 @@
# multiple DHCP ports.
for ip in list(dhcp_server_ports.values())[0]:
host_routes.append(
{'destination': '169.254.169.254/16',
{'destination': '169.254.0.0/16',
'nexthop': ip})
self.assertEqual(subnet['cidr'], details['cidr'])
@@ -10830,7 +10832,7 @@
subnet1_id = subnet1['id']
host_routes2 = [
{'destination': '169.254.169.254/16', 'nexthop': '10.0.1.2'},
{'destination': '169.254.0.0/16', 'nexthop': '10.0.1.2'},
]
if active_active_aap:
subnet2 = self._create_subnet_with_extension(

View File

@@ -17,7 +17,6 @@ import mock
import testtools
from neutron.api import extensions
from neutron.common import rpc as n_rpc
from neutron.conf.plugins.ml2 import config # noqa
from neutron.conf.plugins.ml2.drivers import driver_type
from neutron.tests.unit.api import test_extensions
@@ -25,6 +24,7 @@ from neutron.tests.unit.db import test_db_base_plugin_v2 as test_plugin
from neutron.tests.unit.extensions import test_address_scope
from neutron_lib.agent import topics
from neutron_lib.plugins import directory
from neutron_lib import rpc as n_rpc
from oslo_config import cfg
from gbpservice.neutron.db import all_models # noqa
@@ -80,11 +80,12 @@ class TestRpcListeners(Ml2PlusPluginV2TestCase):
return conn.consume_in_threads()
def test_start_rpc_listeners(self):
# Override mock from
# neutron.tests.base.BaseTestCase.setup_rpc_mocks(), so that
# it returns servers, but still avoids starting them.
with mock.patch('neutron.common.rpc.Connection.consume_in_threads',
TestRpcListeners._consume_in_threads):
# Override mock from neutron_lib.fixture.RPCFixture installed
# by neutron.tests.base.BaseTestCase.setUp(), so that it
# returns servers, but still avoids starting them.
with mock.patch.object(
n_rpc.Connection, 'consume_in_threads',
TestRpcListeners._consume_in_threads):
# Mock logger MD to start an RPC listener.
with mock.patch(
'gbpservice.neutron.tests.unit.plugins.ml2plus.drivers.'

View File

@@ -244,7 +244,7 @@ class AIMBaseTestCase(test_nr_base.CommonNeutronBaseTestCase,
return super(AIMBaseTestCase, self)._bind_port_to_host(
port_id, host, data=data)
def _make_address_scope_for_vrf(self, vrf_dn, ip_version=4,
def _make_address_scope_for_vrf(self, vrf_dn, ip_version=4, admin=False,
expected_status=None, **kwargs):
attrs = {'ip_version': ip_version}
if vrf_dn:
@@ -253,9 +253,10 @@ class AIMBaseTestCase(test_nr_base.CommonNeutronBaseTestCase,
req = self.new_create_request('address-scopes',
{'address_scope': attrs}, self.fmt)
neutron_context = nctx.Context('', kwargs.get('tenant_id',
self._tenant_id))
req.environ['neutron.context'] = neutron_context
if not admin:
neutron_context = nctx.Context('', kwargs.get('tenant_id',
self._tenant_id))
req.environ['neutron.context'] = neutron_context
res = req.get_response(self.ext_api)
if expected_status:
@@ -1044,9 +1045,10 @@ class TestL3Policy(AIMBaseTestCase):
if not tenant_id:
tenant_id = self._tenant_id
admin = True if shared else False
sp2 = self._make_subnetpool(
self.fmt, prefixes, name='sp2', address_scope_id=ascp_id,
tenant_id=tenant_id, shared=shared)['subnetpool']
admin=admin, tenant_id=tenant_id, shared=shared)['subnetpool']
self.assertEqual(ascp_id, sp2['address_scope_id'])
self.assertEqual(prefixes, sp2['prefixes'])
implicit_ip_pool = l3p['ip_pool']
@@ -1095,10 +1097,12 @@
'subnet_prefix_length': subnet_prefix_length}
address_scope_v4 = address_scope_v6 = None
admin = True if shared else False
if explicit_address_scope or v4_default or v6_default:
if ip_version == 4 or ip_version == 46:
address_scope_v4 = self._make_address_scope(
self.fmt, 4, name='as1v4',
self.fmt, 4, name='as1v4', admin=admin,
project_id=tenant_id,
shared=shared)['address_scope']
if not v4_default:
attrs['address_scope_v4_id'] = address_scope_v4['id']
@@ -1106,13 +1110,15 @@
if ((isomorphic and address_scope_v4) or
(v4_default and v6_default)):
address_scope_v6 = self._make_address_scope_for_vrf(
address_scope_v4[DN][VRF],
6, name='as1v6', shared=shared)['address_scope']
address_scope_v4[DN][VRF], 6, admin=admin,
project_id=tenant_id,
name='as1v6', shared=shared)['address_scope']
self.assertEqual(address_scope_v6[DN],
address_scope_v4[DN])
else:
address_scope_v6 = self._make_address_scope(
self.fmt, 6, name='as1v6',
self.fmt, 6, name='as1v6', admin=admin,
project_id=tenant_id,
shared=shared)['address_scope']
if not v6_default:
attrs['address_scope_v6_id'] = address_scope_v6['id']
@@ -1122,9 +1128,11 @@
if not ip_pool:
ip_pool_v4 = '192.168.0.0/16'
if explicit_subnetpool or v4_default:
admin = True if v4_default else admin
sp = self._make_subnetpool(
self.fmt, [ip_pool_v4], name='sp1v4',
is_default=v4_default,
is_default=v4_default, admin=admin,
project_id=tenant_id,
address_scope_id=address_scope_v4['id'],
tenant_id=tenant_id, shared=shared)['subnetpool']
if explicit_subnetpool:
@@ -1133,9 +1141,11 @@
if not ip_pool:
ip_pool_v6 = 'fd6d:8d64:af0c::/64'
if explicit_subnetpool or v6_default:
admin = True if v6_default else admin
sp = self._make_subnetpool(
self.fmt, [ip_pool_v6], name='sp1v6',
is_default=v6_default,
is_default=v6_default, admin=admin,
project_id=tenant_id,
address_scope_id=address_scope_v6['id'],
tenant_id=tenant_id, shared=shared)['subnetpool']
if explicit_subnetpool:
@@ -1296,9 +1306,11 @@
no_address_scopes=True)
def test_unshared_l3_policy_lifecycle_no_address_scope(self):
with self.address_scope(ip_version=4, shared=True) as ascpv4:
with self.address_scope(ip_version=4, shared=True, admin=True,
project_id=self._tenant_id) as ascpv4:
ascpv4 = ascpv4['address_scope']
with self.address_scope(ip_version=6, shared=True) as ascpv6:
with self.address_scope(ip_version=6, shared=True, admin=True,
project_id=self._tenant_id) as ascpv6:
ascpv6 = ascpv6['address_scope']
self.assertRaises(webob.exc.HTTPClientError,
self.create_l3_policy,
@@ -1307,7 +1319,9 @@ class TestL3Policy(AIMBaseTestCase):
address_scope_v6_id=ascpv6['id'])
def test_create_l3p_shared_addr_scp_explicit_unshared_subnetpools(self):
with self.address_scope(ip_version=4, shared=True) as ascpv4:
with self.address_scope(ip_version=4, admin=True,
project_id=self._tenant_id,
shared=True) as ascpv4:
ascpv4 = ascpv4['address_scope']
with self.subnetpool(
name='sp1v4', prefixes=['192.168.0.0/16'],
@@ -6040,7 +6054,7 @@ class TestL2PolicyRouteInjection(AIMBaseTestCase):
'nexthop': subnet_details['gateway_ip']})
if metadata:
expected_host_routes.append(
{'destination': '169.254.169.254/16',
{'destination': '169.254.0.0/16',
'nexthop': subnet_details['dns_nameservers'][0]})
else:
self.assertNotIn('gateway_ip', subnet_details)

View File

@@ -2054,9 +2054,10 @@ class TestL3Policy(ResourceMappingTestCase,
if not tenant_id:
tenant_id = self._tenant_id
admin = True if shared else False
sp2 = self._make_subnetpool(
self.fmt, prefixes, name='sp2', address_scope_id=ascp_id,
tenant_id=tenant_id, shared=shared)['subnetpool']
admin=admin, tenant_id=tenant_id, shared=shared)['subnetpool']
self.assertEqual(ascp_id, sp2['address_scope_id'])
self.assertEqual(prefixes, sp2['prefixes'])
implicit_ip_pool = l3p['ip_pool']
@@ -2103,16 +2104,19 @@ class TestL3Policy(ResourceMappingTestCase,
'subnet_prefix_length': subnet_prefix_length}
address_scope_v4 = address_scope_v6 = None
admin = True if shared else False
if explicit_address_scope or v4_default or v6_default:
if ip_version == 4 or ip_version == 46:
address_scope_v4 = self._make_address_scope(
self.fmt, 4, name='as1v4',
self.fmt, 4, name='as1v4', admin=admin,
project_id=tenant_id,
shared=shared)['address_scope']
if not v4_default:
attrs['address_scope_v4_id'] = address_scope_v4['id']
if ip_version == 6 or ip_version == 46:
address_scope_v6 = self._make_address_scope(
self.fmt, 6, name='as1v6',
self.fmt, 6, name='as1v6', admin=admin,
project_id=tenant_id,
shared=shared)['address_scope']
if not v6_default:
attrs['address_scope_v6_id'] = address_scope_v6['id']
@@ -2122,9 +2126,12 @@ class TestL3Policy(ResourceMappingTestCase,
if not ip_pool:
ip_pool_v4 = '192.168.0.0/16'
if explicit_subnetpool or v4_default:
admin = True if v4_default else admin
sp = self._make_subnetpool(
self.fmt, [ip_pool_v4], name='sp1v4',
is_default=v4_default,
admin=admin,
project_id=tenant_id,
address_scope_id=address_scope_v4['id'],
tenant_id=tenant_id, shared=shared)['subnetpool']
if explicit_subnetpool:
@@ -2133,9 +2140,12 @@ class TestL3Policy(ResourceMappingTestCase,
if not ip_pool:
ip_pool_v6 = 'fd6d:8d64:af0c::/64'
if explicit_subnetpool or v6_default:
admin = True if v6_default else admin
sp = self._make_subnetpool(
self.fmt, [ip_pool_v6], name='sp1v6',
is_default=v6_default,
admin=admin,
project_id=tenant_id,
address_scope_id=address_scope_v6['id'],
tenant_id=tenant_id, shared=shared)['subnetpool']
if explicit_subnetpool:
@@ -2283,9 +2293,11 @@ class TestL3Policy(ResourceMappingTestCase,
no_address_scopes=True)
def test_l3_policy_lifecycle_dual_address_scope(self):
with self.address_scope(ip_version=4, shared=True) as ascpv4:
with self.address_scope(ip_version=4, shared=True, admin=True,
project_id=self._tenant_id) as ascpv4:
ascpv4 = ascpv4['address_scope']
with self.address_scope(ip_version=6, shared=True) as ascpv6:
with self.address_scope(ip_version=6, shared=True, admin=True,
project_id=self._tenant_id) as ascpv6:
ascpv6 = ascpv6['address_scope']
l3p = self.create_l3_policy(
ip_version=46,
@@ -2296,12 +2308,14 @@ class TestL3Policy(ResourceMappingTestCase,
self.assertEqual(ascpv6['id'], l3p['address_scope_v6_id'])
def test_create_l3p_shared_addr_scp_explicit_unshared_subnetpools(self):
with self.address_scope(ip_version=4, shared=True) as ascpv4:
with self.address_scope(ip_version=4, shared=True, admin=True,
project_id=self._tenant_id) as ascpv4:
ascpv4 = ascpv4['address_scope']
with self.subnetpool(
name='sp1v4', prefixes=['192.168.0.0/16'],
tenant_id=ascpv4['tenant_id'], default_prefixlen=24,
address_scope_id=ascpv4['id'], shared=False) as sp1v4:
address_scope_id=ascpv4['id'], shared=False, admin=True,
project_id=self._tenant_id) as sp1v4:
sp1v4 = sp1v4['subnetpool']
# As admin, create a subnetpool in a different tenant
# but associated with the same address_scope

View File

@@ -60,9 +60,9 @@ class GroupPolicyExtensionTestCase(test_extensions_base.ExtensionTestCase):
'l2_policy': 'l2_policies', 'l3_policy': 'l3_policies',
'network_service_policy': 'network_service_policies',
'external_policy': 'external_policies'}
self._setUpExtension(
self.setup_extension(
GP_PLUGIN_BASE_NAME, constants.GROUP_POLICY,
gp.RESOURCE_ATTRIBUTE_MAP, gp.Group_policy, GROUPPOLICY_URI,
gp.Group_policy, GROUPPOLICY_URI,
plural_mappings=plural_mappings)
self.instance = self.plugin.return_value

View File

@@ -48,8 +48,8 @@ class GroupPolicyMappingExtTestCase(tgp.GroupPolicyExtensionTestCase):
'network_service_policies',
'external_policy':
'external_policies'}
self._setUpExtension(
tgp.GP_PLUGIN_BASE_NAME, constants.GROUP_POLICY, attr_map,
self.setup_extension(
tgp.GP_PLUGIN_BASE_NAME, constants.GROUP_POLICY,
gp.Group_policy, tgp.GROUPPOLICY_URI,
plural_mappings=plural_mappings)
self.instance = self.plugin.return_value

View File

@@ -41,9 +41,9 @@ class ServiceChainExtensionTestCase(test_extensions_base.ExtensionTestCase):
def setUp(self):
super(ServiceChainExtensionTestCase, self).setUp()
plural_mappings = {}
self._setUpExtension(
self.setup_extension(
SERVICE_CHAIN_PLUGIN_BASE_NAME, constants.SERVICECHAIN,
servicechain.RESOURCE_ATTRIBUTE_MAP, servicechain.Servicechain,
servicechain.Servicechain,
SERVICECHAIN_URI, plural_mappings=plural_mappings)
self.instance = self.plugin.return_value

View File

@@ -12,8 +12,8 @@
from neutron.agent import rpc as n_agent_rpc
from neutron.common import rpc as n_rpc
from neutron_lib import context as n_context
from neutron_lib import rpc as n_rpc
from oslo_config import cfg as oslo_config
from oslo_service import loopingcall as oslo_looping_call
from oslo_service import periodic_task as oslo_periodic_task

View File

@@ -13,22 +13,20 @@
import exceptions
from gbpservice._i18n import _
from gbpservice.nfp.common import constants as nfp_constants
from gbpservice.nfp.core import log as nfp_logging
from gbpservice.nfp.lib import rest_client_over_unix as unix_rc
from neutron.common import rpc as n_rpc
from neutron_lib import context as n_context
from neutron_lib import rpc as n_rpc
from oslo_config import cfg
from oslo_config import cfg as oslo_config
import oslo_messaging as messaging
from oslo_serialization import jsonutils
import requests
import six
from gbpservice._i18n import _
from gbpservice.nfp.common import constants as nfp_constants
from gbpservice.nfp.core import log as nfp_logging
from gbpservice.nfp.lib import rest_client_over_unix as unix_rc
LOG = nfp_logging.getLogger(__name__)
Version = 'v1' # v1/v2/v3#

View File

@@ -14,8 +14,8 @@ import copy
import sys
import traceback
from neutron.common import rpc as n_rpc
from neutron_lib import context as n_context
from neutron_lib import rpc as n_rpc
import oslo_messaging as messaging
from gbpservice._i18n import _

View File

@@ -13,8 +13,8 @@
import sys
import traceback
from neutron.common import rpc as n_rpc
from neutron_lib import context as n_context
from neutron_lib import rpc as n_rpc
from oslo_log import helpers as log_helpers
import oslo_messaging

View File

@@ -20,15 +20,15 @@ EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
# Import common functions
source $TOP_DIR/functions
source $TOP_DIR/devstack/functions
# Import configuration
source $TOP_DIR/openrc
source $TOP_DIR/devstack/openrc
# Import exercise configuration
source $TOP_DIR/exerciserc
#source $TOP_DIR/exerciserc
source $TOP_DIR/openrc demo demo
source $TOP_DIR/devstack/openrc demo demo
# Print the commands being run so that we can see the command that triggers
# an error. It is also useful for following allowing as the install occurs.

View File

@@ -18,15 +18,15 @@ EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
# Import common functions
source $TOP_DIR/functions
source $TOP_DIR/devstack/functions
# Import configuration
source $TOP_DIR/openrc
source $TOP_DIR/devstack/openrc
# Import exercise configuration
source $TOP_DIR/exerciserc
#source $TOP_DIR/exerciserc
source $TOP_DIR/openrc demo demo
source $TOP_DIR/devstack/openrc demo demo
VALIDATE_OPTS=${VALIDATE_OPTS:-"--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini"}

View File

@@ -13,11 +13,11 @@ SKIP_EXERCISES=volumes,trove,swift,sahara,euca,bundle,boot_from_volume,aggregate
enable_plugin group-based-policy https://opendev.org/x/group-based-policy.git master
enable_plugin networking-sfc https://opendev.org/openstack/networking-sfc.git stable/rocky
enable_plugin networking-sfc https://opendev.org/openstack/networking-sfc.git stable/stein
ENABLE_APIC_AIM_GATE=True
AIM_BRANCH=master
OPFLEX_BRANCH=stable/rocky
OPFLEX_BRANCH=stable/stein
APICAPI_BRANCH=master
ACITOOLKIT_BRANCH=noiro-lite

View File

@@ -20,7 +20,7 @@ GBPSERVICE_BRANCH=master
#GBPSERVICE_BRANCH=refs/changes/85/298385/154
GBPCLIENT_REPO=${GIT_BASE}/x/python-group-based-policy-client.git
GBPCLIENT_BRANCH=stable/rocky
GBPCLIENT_BRANCH=stable/stein
#GBPCLIENT_REPO=https://review.openstack.org/openstack/python-group-based-policy-client
#GBPCLIENT_BRANCH=refs/changes/95/311695/3
@@ -45,13 +45,13 @@ enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_plugin neutron-fwaas https://opendev.org/openstack/neutron-fwaas.git stable/rocky
enable_plugin neutron-lbaas https://opendev.org/openstack/neutron-lbaas.git stable/rocky
enable_plugin neutron https://opendev.org/openstack/neutron.git stable/rocky
enable_plugin neutron-fwaas https://opendev.org/openstack/neutron-fwaas.git stable/stein
enable_plugin neutron-lbaas https://opendev.org/openstack/neutron-lbaas.git stable/stein
enable_plugin neutron https://opendev.org/openstack/neutron.git stable/stein
#ENBALE OCTAVIA
enable_plugin octavia https://opendev.org/openstack/octavia stable/rocky
enable_plugin octavia https://opendev.org/openstack/octavia stable/stein
#ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
enable_service q-fwaas-v1


@@ -20,7 +20,7 @@ GBPSERVICE_BRANCH=master
 #GBPSERVICE_BRANCH=refs/changes/54/240954/47
 GBPCLIENT_REPO=${GIT_BASE}/x/python-group-based-policy-client.git
-GBPCLIENT_BRANCH=stable/rocky
+GBPCLIENT_BRANCH=stable/stein
 #GBPCLIENT_REPO=https://review.openstack.org/openstack/python-group-based-policy-client
 #GBPCLIENT_BRANCH=refs/changes/55/435155/1


@@ -16,7 +16,7 @@ XTRACE=$(set +o | grep xtrace)
 function prepare_gbp_devstack_pre {
     cd $TOP_DIR
-    sudo git checkout stable/rocky
+    sudo git checkout stable/stein
     sudo sed -i 's/DEST=\/opt\/stack/DEST=\/opt\/stack\/new/g' $TOP_DIR/stackrc
     sudo sed -i 's/source $TOP_DIR\/lib\/neutron/source $TOP_DIR\/lib\/neutron\nsource $TOP_DIR\/lib\/neutron-legacy/g' $TOP_DIR/stack.sh
 }
@@ -25,15 +25,15 @@ function prepare_gbp_devstack_post {
     # The following should updated when master moves to a new release
     # We need to do the following since the infra job clones these repos and
     # checks out the master branch (as this is the master branch) and later
-    # does not switch to the stable/rocky branch when installing devstack
+    # does not switch to the stable/stein branch when installing devstack
     # since the repo is already present.
     # This can be worked around by changing the job description in
-    # project-config to set BRANCH_OVERRIDE to use the stable/rocky branch
-    sudo git --git-dir=/opt/stack/new/neutron/.git --work-tree=/opt/stack/new/neutron checkout stable/rocky
-    sudo git --git-dir=/opt/stack/new/nova/.git --work-tree=/opt/stack/new/nova checkout stable/rocky
-    sudo git --git-dir=/opt/stack/new/keystone/.git --work-tree=/opt/stack/new/keystone checkout stable/rocky
-    sudo git --git-dir=/opt/stack/new/cinder/.git --work-tree=/opt/stack/new/cinder checkout stable/rocky
-    sudo git --git-dir=/opt/stack/new/requirements/.git --work-tree=/opt/stack/new/requirements checkout stable/rocky
+    # project-config to set BRANCH_OVERRIDE to use the stable/stein branch
+    sudo git --git-dir=/opt/stack/new/neutron/.git --work-tree=/opt/stack/new/neutron checkout stable/stein
+    sudo git --git-dir=/opt/stack/new/nova/.git --work-tree=/opt/stack/new/nova checkout stable/stein
+    sudo git --git-dir=/opt/stack/new/keystone/.git --work-tree=/opt/stack/new/keystone checkout stable/stein
+    sudo git --git-dir=/opt/stack/new/cinder/.git --work-tree=/opt/stack/new/cinder checkout stable/stein
+    sudo git --git-dir=/opt/stack/new/requirements/.git --work-tree=/opt/stack/new/requirements checkout stable/stein
     source $TOP_DIR/functions
     source $TOP_DIR/functions-common
@@ -74,8 +74,8 @@ function prepare_gbp_aim_devstack {
     prepare_gbp_devstack_pre
     sudo cp $CONTRIB_DIR/devstack/local-aim.conf $TOP_DIR/local.conf
     append_to_localconf
-    sudo cp $CONTRIB_DIR/devstack/exercises-aim/gbp_aim.sh $TOP_DIR/exercises/
-    sudo cp $CONTRIB_DIR/devstack/exercises-aim/neutron_aim.sh $TOP_DIR/exercises/
+    sudo cp $CONTRIB_DIR/devstack/exercises-aim/gbp_aim.sh $TOP_DIR
+    sudo cp $CONTRIB_DIR/devstack/exercises-aim/neutron_aim.sh $TOP_DIR
     # Use the aim version of the shared PRS test
     sudo mv $GBP_FUNC_DIR/testcases/tc_gbp_prs_pr_shared_func.py.aim $GBP_FUNC_DIR/testcases/tc_gbp_prs_pr_shared_func.py
     sudo mv $GBP_FUNC_DIR/testcases/tc_gbp_prs_func.py.aim $GBP_FUNC_DIR/testcases/tc_gbp_prs_func.py
@@ -91,6 +91,11 @@ function source_creds {
 }
+function run_exercises {
+    $TOP_DIR/gbp_aim.sh
+    $TOP_DIR/neutron_aim.sh
+}
 function run_gbp_rally {
     # REVISIT: Temporarily disabling this job until its updated to run with Ocata
     exit 1
@@ -138,9 +143,9 @@ function check_residual_resources {
     gbp external-segment-list
     gbp apg-list
-    neutron router-list
-    neutron net-list
-    neutron subnet-list
-    neutron subnetpool-list
-    neutron port-list
+    openstack router list
+    openstack network list
+    openstack subnet list
+    openstack subnet pool list
+    openstack port list
 }
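The hunks above add a run_exercises helper that invokes the copied exercise scripts directly, now that devstack no longer discovers and runs exercises itself. A minimal, self-contained sketch of that flow, with hypothetical echo payloads standing in for the real gbp_aim.sh and neutron_aim.sh:

```shell
# Sketch of the new run_exercises flow: exercise scripts are copied into
# $TOP_DIR and invoked directly, instead of relying on devstack's removed
# exercise runner. The script bodies here are hypothetical stand-ins.
TOP_DIR=$(mktemp -d)
for script in gbp_aim.sh neutron_aim.sh; do
    printf '#!/bin/sh\necho "%s ran"\n' "$script" > "$TOP_DIR/$script"
    chmod +x "$TOP_DIR/$script"
done

run_exercises() {
    "$TOP_DIR/gbp_aim.sh"
    "$TOP_DIR/neutron_aim.sh"
}

run_exercises
```

Because the scripts run sequentially in the caller's shell, a non-zero exit from the last one is what gate.sh later captures as exercises_exit_code.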


@@ -20,7 +20,7 @@ sudo /bin/systemctl restart memcached
 sudo chown -R stack:stack $TOP_DIR
 # Run exercise scripts
-$TOP_DIR/exercise.sh
+run_exercises
 exercises_exit_code=$?
 # Check if exercises left any resources undeleted

@@ -3,17 +3,23 @@
 # process, which may cause wedges in the gate later.
 hacking>=1.1.0,<1.2.0 # Apache-2.0
--e git+https://opendev.org/openstack/neutron.git@stable/rocky#egg=neutron
--e git+https://opendev.org/openstack/neutron-vpnaas.git@stable/rocky#egg=neutron-vpnaas
--e git+https://opendev.org/openstack/neutron-lbaas.git@stable/rocky#egg=neutron-lbaas
--e git+https://opendev.org/openstack/neutron-fwaas.git@stable/rocky#egg=neutron-fwaas
--e git+https://opendev.org/openstack/networking-sfc.git@stable/rocky#egg=networking-sfc
+# Since version numbers for these are specified in
+# https://releases.openstack.org/constraints/upper/stein, they cannot be
+# referenced as GIT URLs.
+neutron
+python-heatclient
+python-keystoneclient
+-e git+https://opendev.org/openstack/neutron-vpnaas.git@stable/stein#egg=neutron-vpnaas
+-e git+https://opendev.org/openstack/neutron-lbaas.git@stable/stein#egg=neutron-lbaas
+-e git+https://opendev.org/openstack/neutron-fwaas.git@stable/stein#egg=neutron-fwaas
+-e git+https://opendev.org/openstack/networking-sfc.git@stable/stein#egg=networking-sfc
 -e git+https://github.com/noironetworks/apicapi.git@master#egg=apicapi
--e git+https://github.com/noironetworks/python-opflex-agent.git@stable/rocky#egg=python-opflexagent-agent
+-e git+https://github.com/noironetworks/python-opflex-agent.git@stable/stein#egg=python-opflexagent-agent
--e git+https://opendev.org/x/python-group-based-policy-client.git@stable/rocky#egg=gbpclient
+-e git+https://opendev.org/x/python-group-based-policy-client.git@stable/stein#egg=gbpclient
 coverage!=4.4,>=4.0 # Apache-2.0
 flake8-import-order==0.12 # LGPLv3
@@ -26,12 +32,6 @@ WebTest>=2.0.27 # MIT
 oslotest>=3.2.0 # Apache-2.0
 stestr>=1.0.0 # Apache-2.0
-# Since version numbers for these are specified in
-# https://releases.openstack.org/constraints/upper/rocky, they cannot be
-# referenced as GIT URLs.
-python-heatclient
-python-keystoneclient
 # REVISIT: Until co-gating and/or stable branches are implemented for
 # the aci-integration-module repo, it may be necessary to pin to a
 # working commit. Also, specific branches in indirect dependencies

@@ -14,7 +14,7 @@ usedevelop = True
 install_command =
     pip install {opts} {packages}
 deps =
-    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/rocky}
+    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/stein}
     -r{toxinidir}/requirements.txt
     -r{toxinidir}/test-requirements.txt
 whitelist_externals = sh
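The tox.ini hunk above repoints the -c constraints URL from Rocky to Stein, which is why the test-requirements change lists neutron, python-heatclient, and python-keystoneclient by bare name. As a hedged illustration of the mechanism (the package name and pin below are hypothetical, not real Stein entries), a constraints file only caps versions of packages that something else requests:

```shell
# Sketch: a pip constraints file pins versions but never installs anything
# by itself; packages named only in the constraints file are ignored unless
# a requirements file or install command also requests them.
constraints=$(mktemp)
printf 'example-package===1.2.3\n' > "$constraints"
# tox hands this file to pip via -c (as in the deps stanza above); pip then
# resolves each requested package against its matching pin.
grep '^example-package' "$constraints"
```

This is why packages whose versions are fixed by upper-constraints cannot also be given as git URLs: the URL form has no version for pip to check against the pin.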