Implement update VNFFG functionality

Until now there was no implementation for updating an existing
VNFFG through the 'vnffgd-template' parameter, which was
introduced on the Tacker client side. This patch addresses exactly
that.

Implements: blueprint update-vnffg

Change-Id: I8fa748a76fd479649be5dd7a19244f7143849687
Signed-off-by: Dimitrios Markou <mardim@intracom-telecom.com>
This commit is contained in:
Dimitrios Markou 2018-01-08 17:12:33 +02:00
parent 073407ff27
commit 452f95fe0c
11 changed files with 834 additions and 184 deletions


@ -283,7 +283,8 @@ Using the below command query the list of existing VNFFG templates.
After the user has located the VNFFG, the next step is to update it.
Based on the appropriate choice, update the VNFFG template.
Currently two choices are supported for the update of an existing VNFFG.
The first choice is the use of the vnf-mapping parameter.
The user needs to use a VNF which is actually derived from the VNFD that
is going to be used in the vnf-mapping parameter.
If the user is not sure which VNF was used for the mapping during the time
@ -307,6 +308,126 @@ To update the VNF mappings to VNFFG, execute the below command
Updated vnffg: myvnffg
The second choice is the use of the vnffgd-template parameter.
The aforementioned parameter provides the ability to use a VNFFGD-formatted
YAML template which contains all the elements and their parameters that
Tacker is going to apply to its ecosystem.
Below is an example of updating an existing VNFFG.
Assume that the existing VNFFG in the system that we want to update is
derived from the following VNFFGD template.
.. code-block:: yaml

    tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

    description: Sample VNFFG template

    topology_template:
      description: Sample VNFFG template

      node_templates:

        Forwarding_path1:
          type: tosca.nodes.nfv.FP.TackerV2
          description: creates path (CP1)
          properties:
            id: 51
            policy:
              type: ACL
              criteria:
                - name: block_udp
                  classifier:
                    destination_port_range: 80-1024
                    ip_proto: 17
            path:
              - forwarder: VNFD3
                capability: CP1

      groups:
        VNFFG1:
          type: tosca.groups.nfv.VNFFG
          description: UDP to Corporate Net
          properties:
            vendor: tacker
            version: 1.0
            number_of_endpoints: 1
            dependent_virtual_link: [VL1]
            connection_point: [CP1]
            constituent_vnfs: [VNFD3]
          members: [Forwarding_path1]
By using the below VNFFGD template we can update the existing VNFFG.
.. code-block:: yaml

    tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

    description: Sample VNFFG template

    topology_template:
      description: Sample VNFFG template

      node_templates:

        Forwarding_path2:
          type: tosca.nodes.nfv.FP.TackerV2
          description: creates path (CP1->CP2)
          properties:
            id: 52
            policy:
              type: ACL
              criteria:
                - name: block_tcp
                  classifier:
                    network_src_port_id: 640dfd77-c92b-45a3-b8fc-22712de480e1
                    destination_port_range: 22-28
                    ip_proto: 6
                    ip_dst_prefix: 192.168.1.2/24
            path:
              - forwarder: VNFD1
                capability: CP1
              - forwarder: VNFD2
                capability: CP2

      groups:
        VNFFG1:
          type: tosca.groups.nfv.VNFFG
          description: SSH to Corporate Net
          properties:
            vendor: tacker
            version: 1.0
            number_of_endpoints: 2
            dependent_virtual_link: [VL1, VL2]
            connection_point: [CP1, CP2]
            constituent_vnfs: [VNFD1, VNFD2]
          members: [Forwarding_path2]
The above template instructs Tacker to update the current classifier, NFP and
path (chain) with the ones that are described in that template. After the
completion of the update procedure, the new NFP will be named
'Forwarding_path2' with an id of '52', the classifier in that NFP will be
named 'block_tcp' with the corresponding match criteria, and the updated
chain will consist of two VNFs which are derived from the VNFD1 and VNFD2
VNFDs.
To update the existing VNFFG through the vnffgd-template parameter, execute the
below command:

.. code-block:: console

    tacker vnffg-update --vnffgd-template myvnffgd.yaml myvnffg
    Updated vnffg: myvnffg
Of course, the above VNFFG update choices can be combined in a single command.

.. code-block:: console

    tacker vnffg-update --vnf-mapping VNFD1:vnf1,VNFD2:vnf2 --vnffgd-template myvnffgd.yaml myvnffg
    Updated vnffg: myvnffg
Known Issues and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -315,6 +436,7 @@ Known Issues and Limitations
- Matching on criteria with postfix 'name' does not work, for example
'network_name'
- NSH attributes not yet supported
- n-sfc Bug: https://bugs.launchpad.net/networking-sfc/+bug/1746686
.. _VNF1: https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnffgd/tosca-vnffg-vnfd1.yaml
.. _VNF2: https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnffgd/tosca-vnffg-vnfd2.yaml


@ -0,0 +1,5 @@
---
features:
  - Update an existing VNFFG. This functionality gives the ability
    to update an existing VNFFG's components, such as the NFP,
    classifiers and chain.


@ -334,6 +334,17 @@ class VnffgPluginDbMixin(vnffg.VNFFGPluginBase, db_base.CommonDbMixin):
if param_matched.get(param_key) is None:
raise nfvo.VnffgParamValueNotUsed(param_key=param_key)
def _parametrize_topology_template(self, vnffg, template_db):
if vnffg.get('attributes') and \
vnffg['attributes'].get('param_values'):
vnffg_param = vnffg['attributes']
vnffgd_topology_template = \
template_db.template['vnffgd']['topology_template']
self._process_parameterized_template(vnffg_param,
vnffgd_topology_template)
template_db.template['vnffgd']['topology_template'] = \
vnffgd_topology_template
# called internally, not by REST API
def _create_vnffg_pre(self, context, vnffg):
vnffg = vnffg['vnffg']
@ -349,15 +360,7 @@ class VnffgPluginDbMixin(vnffg.VNFFGPluginBase, db_base.CommonDbMixin):
template_id)
LOG.debug('vnffg template %s', template_db)
if vnffg.get('attributes') and \
vnffg['attributes'].get('param_values'):
vnffg_param = vnffg['attributes']
vnffgd_topology_template = \
template_db.template['vnffgd']['topology_template']
self._process_parameterized_template(vnffg_param,
vnffgd_topology_template)
template_db.template['vnffgd']['topology_template'] = \
vnffgd_topology_template
self._parametrize_topology_template(vnffg, template_db)
vnf_members = self._get_vnffg_property(template_db.template,
'constituent_vnfs')
@ -809,7 +812,17 @@ class VnffgPluginDbMixin(vnffg.VNFFGPluginBase, db_base.CommonDbMixin):
constants.ERROR)
def _update_all_status(self, context, vnffg_id, nfp_id, status):
nfp_dict = self.get_nfp(context, nfp_id)
sfc_id = nfp_dict['chain_id']
with context.session.begin(subtransactions=True):
for classifier_id in nfp_dict['classifier_ids']:
query_cls = (self._model_query(context, VnffgClassifier).
filter(VnffgClassifier.id == classifier_id))
query_cls.update({'status': status})
query_chain = (self._model_query(context, VnffgChain).
filter(VnffgChain.id == sfc_id))
query_chain.update({'status': status})
query = (self._model_query(context, Vnffg).
filter(Vnffg.id == vnffg_id))
query.update({'status': status})
@ -828,7 +841,7 @@ class VnffgPluginDbMixin(vnffg.VNFFGPluginBase, db_base.CommonDbMixin):
res.update((key, vnffg_db[key]) for key in key_list)
return self._fields(res, fields)
def _update_vnffg_pre(self, context, vnffg_id):
def _update_vnffg_status_pre(self, context, vnffg_id):
vnffg = self.get_vnffg(context, vnffg_id)
nfp = self.get_nfp(context, vnffg['forwarding_paths'])
sfc = self.get_sfc(context, nfp['chain_id'])
@ -847,82 +860,328 @@ class VnffgPluginDbMixin(vnffg.VNFFGPluginBase, db_base.CommonDbMixin):
constants.PENDING_UPDATE)
return self._make_vnffg_dict(vnffg_db)
def _update_vnffg_post(self, context, vnffg_id,
updated_items,
n_sfc_chain_id=None):
vnffg = self.get_vnffg(context, vnffg_id)
nfp = self.get_nfp(context, vnffg['forwarding_paths'])
def _update_vnffg_pre(self, context, vnffg, vnffg_id, vnffg_old):
vnffg = vnffg['vnffg']
del vnffg['symmetrical']
if vnffg.get('vnffgd_template') is None:
try:
return self._update_vnffg_without_template(context, vnffg_old,
vnffg, vnffg_id)
except (nfvo.VnfMappingNotFoundException,
nfvo.VnfMappingNotValidException) as e:
raise e
with context.session.begin(subtransactions=True):
query_chain = (self._model_query(context, VnffgChain).
filter(VnffgChain.id == nfp['chain_id']).
filter(VnffgChain.status ==
constants.PENDING_UPDATE))
# Templates
template_db_new = self._get_resource(context, VnffgTemplate,
vnffg['vnffgd_id'])
LOG.debug('vnffg new template %s', template_db_new)
template_db_old = self._get_resource(context, VnffgTemplate,
vnffg_old['vnffgd_id'])
LOG.debug('vnffg old template %s', template_db_old)
self._parametrize_topology_template(vnffg, template_db_new)
# VNF-Members
vnf_members_new = self._get_vnffg_property(
template_db_new.template, 'constituent_vnfs')
LOG.debug('New Constituent VNFs: %s', vnf_members_new)
vnf_members_old = self._get_vnffg_property(
template_db_old.template, 'constituent_vnfs')
LOG.debug('Old Constituent VNFs: %s', vnf_members_old)
if set(vnf_members_new) == set(vnf_members_old):
if vnffg.get('vnf_mapping') is None:
final_vnf_mapping = vnffg_old['vnf_mapping']
else:
try:
self._validate_vnfd_in_vnf_mapping(
vnffg['vnf_mapping'], vnf_members_new)
except (nfvo.VnfMappingNotFoundException,
nfvo.VnfMappingNotValidException) as e:
raise e
updated_vnf_mapping = \
self._combine_current_and_new_vnf_mapping(
context, vnffg['vnf_mapping'],
vnffg_old['vnf_mapping'])
final_vnf_mapping = self._get_vnf_mapping(
context, updated_vnf_mapping, vnf_members_new)
else:
final_vnf_mapping = self._get_vnf_mapping(context, vnffg.get(
'vnf_mapping'),
vnf_members_new)
LOG.debug('VNF Mapping: %s', final_vnf_mapping)
# Update the vnffg with the new template.
query_vnffg = (self._model_query(context, Vnffg).
filter(Vnffg.id == vnffg_old['id']).
filter(Vnffg.status == constants.PENDING_UPDATE))
query_vnffg.update({'vnf_mapping': final_vnf_mapping,
'vnffgd_id': vnffg['vnffgd_id'],
'description': template_db_new.description,
'attributes': template_db_new.get('template')})
# Delete the old_vnffgd_template if template_source is 'inline'
if template_db_old.template_source == 'inline':
self.delete_vnffgd(context, vnffg_old['vnffgd_id'])
# update NFP
nfp_dict_old = self.get_nfp(context, vnffg_old['forwarding_paths'])
LOG.debug('Current NFP: %s', nfp_dict_old)
nfp_dict_new = self._update_nfp_pre(template_db_new, nfp_dict_old)
LOG.debug('New NFP: %s', nfp_dict_new)
query_nfp = (self._model_query(context, VnffgNfp).
filter(VnffgNfp.id == nfp['id']).
filter(VnffgNfp.status ==
constants.PENDING_UPDATE))
filter(VnffgNfp.id == nfp_dict_old['id']).
filter(VnffgNfp.status == constants.PENDING_UPDATE))
query_nfp.update(nfp_dict_new)
query_vnffg = (self._model_query(context, Vnffg).
filter(Vnffg.id == vnffg['id']).
filter(Vnffg.status ==
constants.PENDING_UPDATE))
# update chain
chain_old = self.get_sfc(context, nfp_dict_old['chain_id'])
LOG.debug('Current chain: %s', chain_old)
chain_new = self._create_port_chain(context, final_vnf_mapping,
template_db_new,
nfp_dict_new['name'])
LOG.debug('New chain: %s', chain_new)
# to check if it is updated
update_chain = self._set_updated_chain(chain_old['chain'],
chain_new)
if update_chain:
query_chain = (self._model_query(context, VnffgChain).
filter(VnffgChain.id == chain_old['id']).
filter(VnffgChain.status == constants.
PENDING_UPDATE))
query_chain.update({'chain': chain_new,
'path_id': nfp_dict_new['path_id']})
# update classifiers
classifiers_old = []
for classifier_id in nfp_dict_old['classifier_ids']:
classifiers_old.append(self.
get_classifier(context,
classifier_id,
fields=['name', 'match', 'id']))
classifiers_new = self._policy_to_acl_criteria(context,
template_db_new,
nfp_dict_new['name'],
final_vnf_mapping)
try:
classifiers_update, classifiers_delete = \
self._find_classifiers_to_update(classifiers_old,
classifiers_new)
except nfvo.UpdateVnffgException as e:
raise e
for clsfr in classifiers_update:
if clsfr.get('id'):
for item in MATCH_DB_KEY_LIST:
if clsfr['match'].get(item) is None:
clsfr['match'][item] = None
query_match = (self._model_query(context,
ACLMatchCriteria).
filter(ACLMatchCriteria.vnffgc_id == clsfr['id']))
query_match.update(clsfr['match'])
else:
classifier_id = uuidutils.generate_uuid()
sfcc_db = VnffgClassifier(id=classifier_id,
name=clsfr['name'],
tenant_id=vnffg_old['tenant_id'],
status=constants.PENDING_CREATE,
nfp_id=nfp_dict_old['id'],
chain_id=chain_old['id'])
context.session.add(sfcc_db)
match_db = ACLMatchCriteria(
id=uuidutils.generate_uuid(),
vnffgc_id=classifier_id,
**clsfr['match'])
context.session.add(match_db)
for clsfr in classifiers_delete:
query_clsfr = (self._model_query(context, VnffgClassifier).
filter(VnffgClassifier.id == clsfr['id']).
filter(VnffgClassifier.status == constants.
PENDING_UPDATE))
query_clsfr.update({'status': constants.PENDING_DELETE})
return self.get_vnffg(context, vnffg_id)
def _find_classifiers_to_update(self, current_classifiers,
new_classifiers):
update_classifiers = []
delete_classifiers = []
names_list = []
for new_clsfr in new_classifiers:
found_name = False
if new_clsfr['name'] is None:
LOG.error('VNFFG update requires named classifiers')
raise nfvo.UpdateVnffgException(
message="Failed to update VNFFG")
for cur_clsfr in current_classifiers:
if cur_clsfr['name'] == new_clsfr['name']:
new_clsfr['id'] = cur_clsfr['id']
names_list.append(new_clsfr['name'])
update_classifiers.append(new_clsfr)
found_name = True
break
if not found_name:
names_list.append(new_clsfr['name'])
update_classifiers.append(new_clsfr)
for cur_clsfr in current_classifiers:
if cur_clsfr['name'] not in names_list:
delete_classifiers.append(cur_clsfr)
return update_classifiers, delete_classifiers
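The name-matching logic above can be sketched as a standalone helper (a simplified, hypothetical version of `_find_classifiers_to_update`, modeling classifiers as plain dicts with 'name' and 'id' keys):

```python
def find_classifiers_to_update(current, new):
    """Partition classifiers for a VNFFG update.

    A new classifier that shares a name with a current one reuses its id
    (updated in place); unmatched new classifiers will be created; current
    classifiers whose names no longer appear are scheduled for deletion.
    """
    update, delete, seen_names = [], [], set()
    for new_c in new:
        if new_c.get('name') is None:
            raise ValueError('VNFFG update requires named classifiers')
        for cur_c in current:
            if cur_c['name'] == new_c['name']:
                new_c['id'] = cur_c['id']  # reuse the existing db row
                break
        seen_names.add(new_c['name'])
        update.append(new_c)
    for cur_c in current:
        if cur_c['name'] not in seen_names:
            delete.append(cur_c)
    return update, delete
```

Matching is purely by classifier name, which is why the update path rejects unnamed classifiers.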
def _set_updated_chain(self, current_chain, new_chain):
if len(current_chain) != len(new_chain):
return True
else:
for i, item in enumerate(current_chain):
cp_vnf = new_chain[i]
if (cp_vnf['name'] == item['name'] and
cp_vnf['connection_points'] == item[
'connection_points']):
continue
else:
return True
return False
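The chain comparison above amounts to an ordered, per-hop equality check; a minimal sketch (with hypothetical hop dicts carrying 'name' and 'connection_points', as in the method above):

```python
def chain_changed(current_chain, new_chain):
    """Return True when the new chain differs from the current one.

    Chains are ordered lists of hops; a hop differs when either the VNF
    name or its connection points changed.
    """
    if len(current_chain) != len(new_chain):
        return True
    return any(
        new_hop['name'] != cur_hop['name']
        or new_hop['connection_points'] != cur_hop['connection_points']
        for cur_hop, new_hop in zip(current_chain, new_chain))
```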
def _update_vnffg_without_template(self, context, old_vnffg, new_vnffg,
vnffg_id):
template_db = self._get_resource(context, VnffgTemplate,
old_vnffg['vnffgd_id'])
vnfd_members = self._get_vnffg_property(template_db.template,
'constituent_vnfs')
nfp = self.get_nfp(context, old_vnffg['forwarding_paths'])
chain_dict = self.get_sfc(context, nfp['chain_id'])
try:
self._validate_vnfd_in_vnf_mapping(new_vnffg.get('vnf_mapping'),
vnfd_members)
except (nfvo.VnfMappingNotFoundException,
nfvo.VnfMappingNotValidException) as e:
raise e
combined_vnf_mapping = self._combine_current_and_new_vnf_mapping(
context, new_vnffg['vnf_mapping'], old_vnffg['vnf_mapping'])
new_vnffg['vnf_mapping'] = self._get_vnf_mapping(context,
combined_vnf_mapping,
vnfd_members)
new_chain = self._create_port_chain(context,
new_vnffg['vnf_mapping'],
template_db,
nfp['name'])
LOG.debug('chain update: %s', new_chain)
query_vnffg = (self._model_query(context, Vnffg).
filter(Vnffg.id == old_vnffg['id']).
filter(Vnffg.status == constants.PENDING_UPDATE))
query_vnffg.update({'vnf_mapping': new_vnffg['vnf_mapping']})
query_chain = (self._model_query(context, VnffgChain).
filter(VnffgChain.id == chain_dict['id']).
filter(VnffgChain.status == constants.
PENDING_UPDATE))
query_chain.update({'chain': new_chain})
return self.get_vnffg(context, vnffg_id)
def _update_nfp_pre(self, template_db, nfp_dict_old):
template_new = template_db.template['vnffgd']['topology_template']
nfp_dict_new = dict()
vnffg_name = list(template_new['groups'].keys())[0]
nfp_dict_new['name'] = template_new['groups'][vnffg_name]['members'][0]
nfp_dict_new['path_id'] = template_new['node_templates'][nfp_dict_new[
'name']]['properties'].get('id')
if not nfp_dict_new['path_id']:
nfp_dict_new['path_id'] = nfp_dict_old['path_id']
return nfp_dict_new
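Given a VNFFGD like the documentation samples above, the NFP name and path id are read straight out of the topology_template dict; a hypothetical sketch mirroring `_update_nfp_pre` without the db objects:

```python
def extract_nfp(topology_template, nfp_old):
    """Derive the new NFP name and path id from a VNFFGD topology template.

    The NFP name is the first member of the (single) VNFFG group; the path
    id comes from that node's properties, falling back to the old NFP's
    path id when the template omits it.
    """
    group_name = list(topology_template['groups'])[0]
    nfp_name = topology_template['groups'][group_name]['members'][0]
    path_id = (topology_template['node_templates'][nfp_name]['properties']
               .get('id')) or nfp_old['path_id']
    return {'name': nfp_name, 'path_id': path_id}
```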
def _update_vnffg_post(self, context, n_sfc_chain_id,
classifiers_map, vnffg_dict):
"""Updates the status and the n-sfc instance_ids in the db
:param context: SQL Session Context
:param n_sfc_chain_id: Id of port-chain in n-sfc side
:param classifiers_map: classifier and instance Ids map
:param vnffg_dict: vnffg dictionary
:return: None
"""
nfp_dict = self.get_nfp(context, vnffg_dict['forwarding_paths'])
sfc_id = nfp_dict['chain_id']
with context.session.begin(subtransactions=True):
query_chain = (self._model_query(context, VnffgChain).
filter(VnffgChain.id == sfc_id).
filter(VnffgChain.status == constants.PENDING_UPDATE).one())
if n_sfc_chain_id is None:
query_chain.update({'status': constants.ERROR})
else:
for key in _VALID_VNFFG_UPDATE_ATTRIBUTES:
if updated_items.get(key) is not None:
query_vnffg.update({key: updated_items[key]})
for key in _VALID_NFP_UPDATE_ATTRIBUTES:
if updated_items.get(key) is not None:
query_nfp.update({key: updated_items[key]})
for key in _VALID_SFC_UPDATE_ATTRIBUTES:
if updated_items.get(key) is not None:
query_chain.update({key: updated_items[key]})
query_chain.update({'status': constants.ACTIVE})
def _update_vnffg_status(self, context, vnffg_id, error=False,
db_state=constants.ERROR):
query_cls = []
vnffg = self.get_vnffg(context, vnffg_id)
nfp = self.get_nfp(context, vnffg['forwarding_paths'])
classifier_ids = nfp['classifier_ids']
chain_dict = self.get_sfc(context, nfp['chain_id'])
with context.session.begin(subtransactions=True):
query_chain = (self._model_query(context, VnffgChain).
filter(VnffgChain.id == nfp['chain_id']))
for classifier_id in classifier_ids:
query_cl = (self._model_query(context, VnffgClassifier).
filter(VnffgClassifier.id == classifier_id))
query_cls.append(query_cl)
query_nfp = (self._model_query(context, VnffgNfp).
filter(VnffgNfp.id == nfp['id']))
query_vnffg = (self._model_query(context, Vnffg).
filter(Vnffg.id == vnffg['id']))
if not error and chain_dict['status'] == constants.ACTIVE:
for query_cl in query_cls:
query_cl.update({'status': constants.ACTIVE})
query_nfp.update({'status': constants.ACTIVE})
query_vnffg.update({'status': constants.ACTIVE})
else:
if db_state == constants.ACTIVE:
query_chain.update({'status': constants.ACTIVE})
for query_cl in query_cls:
query_cl.update({'status': constants.ACTIVE})
query_nfp.update({'status': constants.ACTIVE})
query_vnffg.update({'status': constants.ACTIVE})
for clsfr_id in nfp_dict['classifier_ids']:
query_clsfr = (self._model_query(context, VnffgClassifier).
filter(VnffgClassifier.id == clsfr_id))
if classifiers_map.get(clsfr_id):
query_clsfr.update({
'instance_id': classifiers_map[clsfr_id]})
if classifiers_map[clsfr_id]:
query_clsfr.update({'status': constants.ACTIVE})
else:
query_clsfr.update({'status': constants.ERROR})
else:
query_chain.update({'status': constants.ERROR})
for query_cl in query_cls:
query_cl.update({'status': constants.ERROR})
query_nfp.update({'status': constants.ERROR})
query_vnffg.update({'status': constants.ERROR})
# Delete unused match criteria which are
# no longer required due to the classifier
# update procedure.
query_match = (
self._model_query(context, ACLMatchCriteria).
filter(ACLMatchCriteria.vnffgc_id == clsfr_id))
query_match.delete()
query_clsfr.delete()
def _update_vnffg_status_post(self, context, vnffg, error=False,
db_state=constants.ERROR):
nfp = self.get_nfp(context, vnffg['forwarding_paths'])
chain = self.get_sfc(context, nfp['chain_id'])
if error:
if db_state == constants.ACTIVE:
self._update_all_status(context, vnffg['id'], nfp['id'],
constants.ACTIVE)
else:
self._update_all_status(context, vnffg['id'], nfp['id'],
constants.ERROR)
else:
if chain['status'] == constants.ERROR:
self._update_all_status(context, vnffg['id'], nfp['id'],
constants.ERROR)
elif chain['status'] == constants.ACTIVE:
classifiers_active_state = True
for classifier in [self.get_classifier(context, classifier_id)
for classifier_id in nfp['classifier_ids']]:
if classifier['status'] == constants.ACTIVE:
continue
elif classifier['status'] == constants.ERROR:
classifiers_active_state = False
break
if classifiers_active_state:
self._update_all_status(context, vnffg['id'], nfp['id'],
constants.ACTIVE)
else:
self._update_all_status(context, vnffg['id'], nfp['id'],
constants.ERROR)
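The post-update status propagation reduces to a simple aggregation rule; a hypothetical helper sketching it, assuming the plain status strings used throughout the module:

```python
def aggregate_update_status(chain_status, classifier_statuses):
    """Decide the VNFFG-wide status after an update.

    The whole graph goes ACTIVE only when the chain and every classifier
    are ACTIVE; an ERROR anywhere marks the whole graph as ERROR.
    """
    if chain_status != 'ACTIVE':
        return 'ERROR'
    if all(status == 'ACTIVE' for status in classifier_statuses):
        return 'ACTIVE'
    return 'ERROR'
```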
def _get_vnffg_db(self, context, vnffg_id, current_statuses, new_status):
try:


@ -264,6 +264,14 @@ class UpdateChainException(exceptions.TackerException):
message = _("%(message)s")
class UpdateClassifierException(exceptions.TackerException):
message = _("%(message)s")
class UpdateVnffgException(exceptions.TackerException):
message = _("%(message)s")
NAME_MAX_LEN = 255
RESOURCE_ATTRIBUTE_MAP = {
@ -471,7 +479,7 @@ RESOURCE_ATTRIBUTE_MAP = {
},
'vnffgd_template': {
'allow_post': True,
'allow_put': False,
'allow_put': True,
'validate': {'type:dict_or_nodata': None},
'is_visible': True,
'default': None,


@ -36,6 +36,7 @@ from tacker.mistral import mistral_client
from tacker.nfvo.drivers.vim import abstract_vim_driver
from tacker.nfvo.drivers.vnffg import abstract_vnffg_driver
from tacker.nfvo.drivers.workflow import workflow_generator
from tacker.plugins.common import constants
from tacker.vnfm import keystone
LOG = logging.getLogger(__name__)
@ -342,8 +343,7 @@ class OpenStack_Driver(abstract_vim_driver.VimAbstractDriver,
sess = session.Session(auth=auth_plugin)
return client_type(session=sess)
def create_flow_classifier(self, name, fc, auth_attr=None):
def _translate_ip_protocol(ip_proto):
def _translate_ip_protocol(self, ip_proto):
if ip_proto == '1':
return 'icmp'
elif ip_proto == '6':
@ -353,26 +353,31 @@ class OpenStack_Driver(abstract_vim_driver.VimAbstractDriver,
else:
return None
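The protocol translation above is a fixed table lookup (assuming the standard IANA numbers for ICMP, TCP and UDP); as a sketch:

```python
# IANA protocol numbers mapped to the names networking-sfc expects
IP_PROTO_NAMES = {'1': 'icmp', '6': 'tcp', '17': 'udp'}

def translate_ip_protocol(ip_proto):
    """Map an IP protocol number (int or string) to its n-sfc name."""
    return IP_PROTO_NAMES.get(str(ip_proto))
```

For example, the sample template's `ip_proto: 17` becomes `protocol: udp` in the flow-classifier parameters.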
if not auth_attr:
LOG.warning("auth information required for n-sfc driver")
return None
LOG.debug('fc passed is %s', fc)
sfc_classifier_params = {}
sfc_classifier_params['name'] = name
def _create_classifier_params(self, fc):
classifier_params = {}
for field in fc:
if field in FC_MAP:
sfc_classifier_params[FC_MAP[field]] = fc[field]
classifier_params[FC_MAP[field]] = fc[field]
elif field == 'ip_proto':
protocol = _translate_ip_protocol(str(fc[field]))
protocol = self._translate_ip_protocol(str(fc[field]))
if not protocol:
raise ValueError('protocol %s not supported' % fc[field])
sfc_classifier_params['protocol'] = protocol
classifier_params['protocol'] = protocol
else:
LOG.warning("flow classifier %s not supported by "
"networking-sfc driver", field)
return classifier_params
def create_flow_classifier(self, name, fc, auth_attr=None):
if not auth_attr:
LOG.warning("auth information required for n-sfc driver")
return None
fc['name'] = name
LOG.debug('fc passed is %s', fc)
sfc_classifier_params = self._create_classifier_params(fc)
LOG.debug('sfc_classifier_params is %s', sfc_classifier_params)
if len(sfc_classifier_params) > 0:
neutronclient_ = NeutronClient(auth_attr)
@ -471,6 +476,8 @@ class OpenStack_Driver(abstract_vim_driver.VimAbstractDriver,
new_ppgs = []
updated_port_chain = dict()
pc_info = neutronclient_.port_chain_show(chain_id)
if set(fc_ids) != set(pc_info['port_chain']['flow_classifiers']):
updated_port_chain['flow_classifiers'] = fc_ids
old_ppgs = pc_info['port_chain']['port_pair_groups']
old_ppgs_dict = {neutronclient_.
port_pair_group_show(ppg_id)['port_pair_group']['name'].
@ -535,6 +542,7 @@ class OpenStack_Driver(abstract_vim_driver.VimAbstractDriver,
raise e
updated_port_chain['port_pair_groups'] = new_ppgs
updated_port_chain['flow_classifiers'] = fc_ids
try:
pc_id = neutronclient_.port_chain_update(chain_id,
updated_port_chain)
@ -560,28 +568,89 @@ class OpenStack_Driver(abstract_vim_driver.VimAbstractDriver,
neutronclient_ = NeutronClient(auth_attr)
neutronclient_.port_chain_delete(chain_id)
def update_flow_classifier(self, fc_id, fc, auth_attr=None):
def update_flow_classifier(self, chain_id, fc, auth_attr=None):
if not auth_attr:
LOG.warning("auth information required for n-sfc driver")
return None
# for now, the only parameters allowed for flow-classifier-update
# is 'name' and/or 'description'.
# Currently we do not store the classifiers in the db with
# a name and/or a description which means that the default
# values of the name and/or description will be None.
sfc_classifier_params = {}
if 'name' in fc:
sfc_classifier_params['name'] = fc['name']
if 'description' in fc:
sfc_classifier_params['description'] = fc['description']
LOG.debug('sfc_classifier_params is %s', sfc_classifier_params)
fc_id = fc.pop('instance_id')
fc_status = fc.pop('status')
match_dict = fc.pop('match')
fc.update(match_dict)
sfc_classifier_params = self._create_classifier_params(fc)
neutronclient_ = NeutronClient(auth_attr)
return neutronclient_.flow_classifier_update(fc_id,
sfc_classifier_params)
if fc_status == constants.PENDING_UPDATE:
fc_info = neutronclient_.flow_classifier_show(fc_id)
for field in sfc_classifier_params:
# If the new classifier is the same with the old one then
# no change needed.
if (fc_info['flow_classifier'].get(field) is not None) and \
(sfc_classifier_params[field] == fc_info[
'flow_classifier'][field]):
continue
# If the new classifier has different match criteria
# from the old one, then we strip the classifier from
# the chain, delete the old classifier, and create
# a new one with the same name as before but with
# different match criteria. We are not using
# flow_classifier_update from n-sfc because it does
# not yet support updating the match criteria of an
# existing classifier.
else:
try:
self._dissociate_classifier_from_chain(chain_id,
[fc_id],
neutronclient_)
except Exception as e:
raise e
fc_id = neutronclient_.flow_classifier_create(
sfc_classifier_params)
if fc_id is None:
raise nfvo.UpdateClassifierException(
message="Failed to update classifiers")
break
# If the new classifier is completely different from the existing
# ones (name and match criteria) then we just create it.
else:
fc_id = neutronclient_.flow_classifier_create(
sfc_classifier_params)
if fc_id is None:
raise nfvo.UpdateClassifierException(
message="Failed to update classifiers")
return fc_id
def _dissociate_classifier_from_chain(self, chain_id, fc_ids,
neutronclient):
pc_info = neutronclient.port_chain_show(chain_id)
current_fc_list = pc_info['port_chain']['flow_classifiers']
for fc_id in fc_ids:
current_fc_list.remove(fc_id)
pc_id = neutronclient.port_chain_update(chain_id,
{'flow_classifiers': current_fc_list})
if pc_id is None:
raise nfvo.UpdateClassifierException(
message="Failed to update classifiers")
for fc_id in fc_ids:
try:
neutronclient.flow_classifier_delete(fc_id)
except ValueError as e:
raise e
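Dissociating classifiers is just a port-chain update with the given ids removed from its flow_classifiers list; a minimal sketch of the list manipulation (hypothetical, without the Neutron client calls):

```python
def remaining_flow_classifiers(port_chain, fc_ids):
    """Return the flow-classifier ids left on a port chain after removal.

    Raises ValueError when an id is not actually attached to the chain.
    """
    remaining = list(port_chain['flow_classifiers'])
    for fc_id in fc_ids:
        remaining.remove(fc_id)  # ValueError if fc_id is not attached
    return remaining
```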
def remove_and_delete_flow_classifiers(self, chain_id, fc_ids,
auth_attr=None):
if not auth_attr:
LOG.warning("auth information required for n-sfc driver")
raise EnvironmentError('auth attribute required for'
' networking-sfc driver')
neutronclient_ = NeutronClient(auth_attr)
try:
self._dissociate_classifier_from_chain(chain_id, fc_ids,
neutronclient_)
except Exception as e:
raise e
def delete_flow_classifier(self, fc_id, auth_attr=None):
if not auth_attr:
@ -656,6 +725,16 @@ class NeutronClient(object):
sess = session.Session(auth=auth, verify=verify)
self.client = neutron_client.Client(session=sess)
def flow_classifier_show(self, fc_id):
try:
fc = self.client.show_flow_classifier(fc_id)
if fc is None:
raise ValueError('classifier %s not found' % fc_id)
return fc
except nc_exceptions.NotFound:
LOG.error('classifier %s not found', fc_id)
raise ValueError('classifier %s not found' % fc_id)
def flow_classifier_create(self, fc_dict):
LOG.debug("fc_dict passed is {fc_dict}".format(fc_dict=fc_dict))
fc = self.client.create_flow_classifier({'flow_classifier': fc_dict})


@ -365,94 +365,105 @@ class NfvoPlugin(nfvo_db_plugin.NfvoPluginDb, vnffg_db.VnffgPluginDbMixin,
@log.log
def update_vnffg(self, context, vnffg_id, vnffg):
vnffg_dict = super(NfvoPlugin, self)._update_vnffg_pre(context,
vnffg_id)
new_vnffg = vnffg['vnffg']
LOG.debug('vnffg update: %s', vnffg)
vnffg_info = vnffg['vnffg']
# put vnffg related objects in PENDING_UPDATE status
vnffg_old = super(NfvoPlugin, self)._update_vnffg_status_pre(
context, vnffg_id)
name = vnffg_old['name']
# create inline vnffgd if given by user
if vnffg_info.get('vnffgd_template'):
vnffgd_name = utils.generate_resource_name(name, 'inline')
vnffgd = {'vnffgd': {'tenant_id': vnffg_old['tenant_id'],
'name': vnffgd_name,
'template': {
'vnffgd': vnffg_info['vnffgd_template']},
'template_source': 'inline',
'description': vnffg_old['description']}}
try:
vnffg_info['vnffgd_id'] = \
self.create_vnffgd(context, vnffgd).get('id')
except Exception:
with excutils.save_and_reraise_exception():
super(NfvoPlugin, self)._update_vnffg_status_post(context,
vnffg_old, error=True, db_state=constants.ACTIVE)
try:
vnffg_dict = super(NfvoPlugin, self). \
_update_vnffg_pre(context, vnffg, vnffg_id, vnffg_old)
except (nfvo.VnfMappingNotFoundException,
nfvo.VnfMappingNotValidException):
with excutils.save_and_reraise_exception():
if vnffg_info.get('vnffgd_template'):
super(NfvoPlugin, self).delete_vnffgd(
context, vnffg_info['vnffgd_id'])
super(NfvoPlugin, self)._update_vnffg_status_post(
context, vnffg_old, error=True, db_state=constants.ACTIVE)
except nfvo.UpdateVnffgException:
with excutils.save_and_reraise_exception():
super(NfvoPlugin, self).delete_vnffgd(context,
vnffg_info['vnffgd_id'])
super(NfvoPlugin, self)._update_vnffg_status_post(context,
vnffg_old,
error=True)
nfp = super(NfvoPlugin, self).get_nfp(context,
vnffg_dict['forwarding_paths'])
sfc = super(NfvoPlugin, self).get_sfc(context, nfp['chain_id'])
classifiers = [super(NfvoPlugin, self).
get_classifier(context, classifier_id) for classifier_id
in nfp['classifier_ids']]
template_db = self._get_resource(context, vnffg_db.VnffgTemplate,
vnffg_dict['vnffgd_id'])
vnfd_members = self._get_vnffg_property(template_db.template,
'constituent_vnfs')
try:
super(NfvoPlugin, self)._validate_vnfd_in_vnf_mapping(
new_vnffg.get('vnf_mapping'), vnfd_members)
combined_vnf_mapping = super(
NfvoPlugin, self)._combine_current_and_new_vnf_mapping(
context, new_vnffg['vnf_mapping'],
vnffg_dict['vnf_mapping'])
new_vnffg['vnf_mapping'] = super(
NfvoPlugin, self)._get_vnf_mapping(context,
combined_vnf_mapping,
vnfd_members)
except Exception:
with excutils.save_and_reraise_exception():
                super(NfvoPlugin, self)._update_vnffg_status(
                    context, vnffg_id, error=True, db_state=constants.ACTIVE)
        template_id = vnffg_dict['vnffgd_id']
        template_db = self._get_resource(context, vnffg_db.VnffgTemplate,
                                         template_id)
        # Functional attributes that allow update are vnf_mapping and
        # symmetrical. Therefore we need to figure out the new chain if it
        # was updated by a new vnf_mapping. Symmetrical is handled by the
        # driver.
        chain = super(NfvoPlugin, self)._create_port_chain(context,
                                                           new_vnffg[
                                                               'vnf_mapping'],
                                                           template_db,
                                                           nfp['name'])
        LOG.debug('chain update: %s', chain)
        sfc['chain'] = chain
        # Symmetrical update is currently not supported
        del new_vnffg['symmetrical']
        classifier_update = []
        classifier_delete_ids = []
        classifier_ids = []
        for classifier_id in nfp['classifier_ids']:
            classifier_dict = super(NfvoPlugin, self).get_classifier(
                context, classifier_id, fields=['id', 'name', 'match',
                                                'instance_id', 'status'])
            if classifier_dict['status'] == constants.PENDING_DELETE:
                classifier_delete_ids.append(
                    classifier_dict.pop('instance_id'))
            else:
                classifier_ids.append(classifier_dict.pop('id'))
                classifier_update.append(classifier_dict)

        # TODO(gongysh) support different vim for each vnf
        vim_obj = self._get_vim_from_vnf(context,
                                         list(vnffg_dict[
                                              'vnf_mapping'].values())[0])
        driver_type = vim_obj['type']
        try:
            # Updating the match criteria is not supported yet, so the
            # flow-classifier updates below are essentially a noop. The
            # multiple-classifier handling is kept in place so it can be
            # extended once classifier update is supported.
            fc_ids = []
            self._vim_drivers.invoke(driver_type,
                                     'remove_and_delete_flow_classifiers',
                                     chain_id=sfc['instance_id'],
                                     fc_ids=classifier_delete_ids,
                                     auth_attr=vim_obj['auth_cred'])
            for item in classifier_update:
                fc_ids.append(self._vim_drivers.invoke(
                    driver_type, 'update_flow_classifier',
                    chain_id=sfc['instance_id'], fc=item,
                    auth_attr=vim_obj['auth_cred']))
            n_sfc_chain_id = self._vim_drivers.invoke(
                driver_type, 'update_chain',
                vnfs=sfc['chain'], fc_ids=fc_ids,
                chain_id=sfc['instance_id'], auth_attr=vim_obj['auth_cred'])
        except Exception:
            with excutils.save_and_reraise_exception():
                super(NfvoPlugin, self)._update_vnffg_status_post(
                    context, vnffg_dict, error=True)
        classifiers_map = super(NfvoPlugin, self).create_classifiers_map(
            classifier_ids, fc_ids)
        super(NfvoPlugin, self)._update_vnffg_post(context, n_sfc_chain_id,
                                                   classifiers_map,
                                                   vnffg_dict)
        super(NfvoPlugin, self)._update_vnffg_status_post(context, vnffg_dict)
        return vnffg_dict
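The classifier bookkeeping above splits the NFP's existing classifiers into two queues: entries already in PENDING_DELETE status are removed by their driver-side instance id, while the rest are re-applied against the updated chain. The logic can be sketched standalone (a hypothetical helper for illustration, not Tacker's actual API):

```python
# Hypothetical standalone sketch of the classifier bookkeeping in
# update_vnffg: classifiers marked PENDING_DELETE are queued for removal
# by their driver-side instance id, the rest are queued for update.
PENDING_DELETE = 'PENDING_DELETE'


def partition_classifiers(classifier_dicts):
    """Return (instance_ids_to_delete, dicts_to_update, kept_classifier_ids)."""
    delete_ids, update_list, kept_ids = [], [], []
    for entry in classifier_dicts:
        entry = dict(entry)  # work on a copy, since pop() mutates the dict
        if entry['status'] == PENDING_DELETE:
            delete_ids.append(entry.pop('instance_id'))
        else:
            kept_ids.append(entry.pop('id'))
            update_list.append(entry)
    return delete_ids, update_list, kept_ids
```

The delete list feeds `remove_and_delete_flow_classifiers`, and each dict in the update list is passed to `update_flow_classifier` before the chain itself is updated.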
    @log.log


@@ -37,6 +37,10 @@ vnffg_multi_params = _get_template('vnffg_multi_params.yaml')
vnffgd_template = yaml.safe_load(_get_template('vnffgd_template.yaml'))
vnffgd_tosca_template = yaml.safe_load(_get_template(
    'tosca_vnffgd_template.yaml'))
vnffgd_tosca_template_for_update = yaml.safe_load(_get_template(
    'tosca_vnffgd_template_for_update.yaml'))
vnffgd_legacy_template = yaml.safe_load(_get_template(
    'tosca_vnffgd_legacy_template_for_update.yaml'))
vnffgd_tosca_param_template = yaml.safe_load(_get_template(
    'tosca_vnffgd_param_template.yaml'))
vnffgd_tosca_str_param_template = yaml.safe_load(_get_template(
@@ -205,6 +209,20 @@ def get_dummy_vnffg_obj_inline():
                      'vnffgd_template': vnffgd_tosca_template}}


def get_dummy_vnffg_obj_update_vnffgd_template():
    return {'vnffg': {'tenant_id': u'ad7ebc56538745a08ef7c5e97f8bd437',
                      'name': 'dummy_vnffg',
                      'symmetrical': False,
                      'vnffgd_template': vnffgd_tosca_template_for_update}}


def get_dummy_vnffg_obj_legacy_vnffgd_template():
    return {'vnffg': {'tenant_id': u'ad7ebc56538745a08ef7c5e97f8bd437',
                      'name': 'dummy_vnffg',
                      'symmetrical': False,
                      'vnffgd_template': vnffgd_legacy_template}}


def get_dummy_vnffg_param_obj():
    return {'vnffg': {'description': 'dummy_vnf_description',
                      'vnffgd_id': u'eb094833-995e-49f0-a047-dfb56aaf7c4e',


@@ -134,22 +134,49 @@ class TestChainSFC(base.TestCase):
        self.assertIsNotNone(result)

    def test_update_flow_classifier(self):
        auth_attr = utils.get_vim_auth_obj()
        flow_classifier = {'name': 'next_fake_fc',
                           'description': 'fake flow-classifier',
                           'source_port_range': '2005-2010',
                           'ip_proto': 6,
                           'destination_port_range': '80-180'}
        flow_classifier_update = {'name': 'next_fake_fc_two',
                                  'instance_id': None,
                                  'status': 'PENDING_CREATE',
                                  'match': {'source_port_range': '5-10',
                                            'ip_proto': 17,
                                            'destination_port_range': '2-4'}}
        fc_id = self.sfc_driver.\
            create_flow_classifier(name='fake_ffg', fc=flow_classifier,
                                   auth_attr=utils.get_vim_auth_obj())
        self.assertIsNotNone(fc_id)
        vnf_1 = {'name': 'test_create_chain_vnf_1',
                 'connection_points': [uuidutils.generate_uuid(),
                                       uuidutils.generate_uuid()]}
        vnf_2 = {'name': 'test_create_chain_vnf_2',
                 'connection_points': [uuidutils.generate_uuid(),
                                       uuidutils.generate_uuid()]}
        vnf_3 = {'name': 'test_create_chain_vnf_3',
                 'connection_points': [uuidutils.generate_uuid(),
                                       uuidutils.generate_uuid()]}
        vnfs = [vnf_1, vnf_2, vnf_3]
        chain_id = self.sfc_driver.create_chain(name='fake_ffg',
                                                fc_ids=fc_id,
                                                vnfs=vnfs,
                                                auth_attr=auth_attr)
        self.assertIsNotNone(chain_id)
        result = self.sfc_driver.\
            update_flow_classifier(chain_id=chain_id,
                                   fc=flow_classifier_update,
                                   auth_attr=utils.get_vim_auth_obj())
        self.assertIsNotNone(result)


@@ -824,7 +824,7 @@ class TestNfvoPlugin(db_base.SqlTestCase):
                          self.nfvo_plugin.update_vnffg,
                          self.context, vnffg['id'], updated_vnffg)

    def test_update_vnffg_vnf_mapping(self):
        with patch.object(TackerManager, 'get_service_plugins') as \
                mock_plugins:
            mock_plugins.return_value = {'VNFM': FakeVNFMPlugin()}
@@ -834,8 +834,9 @@ class TestNfvoPlugin(db_base.SqlTestCase):
            vnffg = self._insert_dummy_vnffg()
            updated_vnffg = utils.get_dummy_vnffg_obj_vnf_mapping()
            updated_vnffg['vnffg']['symmetrical'] = True
            expected_mapping = \
                {'VNF1': '91e32c20-6d1f-47a4-9ba7-08f5e5effaf6',
                 'VNF3': '10f66bc5-b2f1-45b7-a7cd-6dd6ad0017f5'}
            updated_vnf_mapping = \
                {'VNF1': '91e32c20-6d1f-47a4-9ba7-08f5e5effaf6',
                 'VNF3': '10f66bc5-b2f1-45b7-a7cd-6dd6ad0017f5'}
@@ -858,6 +859,50 @@ class TestNfvoPlugin(db_base.SqlTestCase):
                                                           chain_id=mock.ANY,
                                                           auth_attr=mock.ANY)

    def test_update_vnffg_vnffgd_template(self):
        with patch.object(TackerManager, 'get_service_plugins') as \
                mock_plugins:
            mock_plugins.return_value = {'VNFM': FakeVNFMPlugin()}
            mock.patch('tacker.common.driver_manager.DriverManager',
                       side_effect=FakeDriverManager()).start()
            self._insert_dummy_vnffg_template()
            vnffg = self._insert_dummy_vnffg()
            updated_vnffg = utils.get_dummy_vnffg_obj_update_vnffgd_template()
            expected_mapping = {'VNF1': '91e32c20-6d1f-47a4-9ba7-08f5e5effaf6'}
            updated_vnf_mapping = \
                {'VNF1': '91e32c20-6d1f-47a4-9ba7-08f5e5effaf6'}
            updated_vnffg['vnffg']['vnf_mapping'] = updated_vnf_mapping
            result = self.nfvo_plugin.update_vnffg(self.context, vnffg['id'],
                                                   updated_vnffg)
            self.assertIn('id', result)
            self.assertIn('status', result)
            self.assertIn('vnf_mapping', result)
            self.assertEqual('ffc1a59b-65bb-4874-94d3-84f639e63c74',
                             result['id'])
            for vnfd, vnf in result['vnf_mapping'].items():
                self.assertIn(vnfd, expected_mapping)
                self.assertEqual(vnf, expected_mapping[vnfd])
            self._driver_manager.invoke.assert_called_with(mock.ANY,
                                                           mock.ANY,
                                                           vnfs=mock.ANY,
                                                           fc_ids=mock.ANY,
                                                           chain_id=mock.ANY,
                                                           auth_attr=mock.ANY)

    def test_update_vnffg_legacy_vnffgd_template(self):
        with patch.object(TackerManager, 'get_service_plugins') as \
                mock_plugins:
            mock_plugins.return_value = {'VNFM': FakeVNFMPlugin()}
            mock.patch('tacker.common.driver_manager.DriverManager',
                       side_effect=FakeDriverManager()).start()
            self._insert_dummy_vnffg_template()
            vnffg = self._insert_dummy_vnffg()
            updated_vnffg = utils.get_dummy_vnffg_obj_legacy_vnffgd_template()
            self.assertRaises(nfvo.UpdateVnffgException,
                              self.nfvo_plugin.update_vnffg,
                              self.context, vnffg['id'], updated_vnffg)

    def test_delete_vnffg(self):
        self._insert_dummy_vnffg_template()
        vnffg = self._insert_dummy_vnffg()


@@ -0,0 +1,37 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: example template for update

topology_template:
  description: Example VNFFG template for update

  node_templates:

    Forwarding_path1:
      type: tosca.nodes.nfv.FP.Tacker
      description: creates path (CP11->CP12->CP32)
      properties:
        id: 51
        policy:
          type: ACL
          criteria:
            - network_name: tenant2_net
              source_port_range: 80-1024
              ip_proto: 17
              ip_dst_prefix: 192.168.1.3/24
        path:
          - forwarder: VNF1
            capability: CP11

  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: HTTP to Corporate Net
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 1
        dependent_virtual_link: [VL1]
        connection_point: [CP11]
        constituent_vnfs: [VNF1]
      members: [Forwarding_path1]


@@ -0,0 +1,39 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: example template for update

topology_template:
  description: Example VNFFG template for update

  node_templates:

    Forwarding_path1:
      type: tosca.nodes.nfv.FP.TackerV2
      description: creates path (CP11->CP12->CP32)
      properties:
        id: 51
        policy:
          type: ACL
          criteria:
            - name: classifier_two
              classifier:
                network_name: tenant2_net
                source_port_range: 80-1024
                ip_proto: 17
                ip_dst_prefix: 192.168.1.3/24
        path:
          - forwarder: VNF1
            capability: CP11

  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: HTTP to Corporate Net
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 1
        dependent_virtual_link: [VL1]
        connection_point: [CP11]
        constituent_vnfs: [VNF1]
      members: [Forwarding_path1]
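The two fixture templates above differ only in how ACL criteria are written: the legacy `tosca.nodes.nfv.FP.Tacker` node lists match fields directly under `criteria`, while `tosca.nodes.nfv.FP.TackerV2` wraps them in named `classifier` blocks — which is why the legacy template is expected to raise `UpdateVnffgException` in the unit test. A minimal, hypothetical shape check (for illustration only, not Tacker's actual validation code):

```python
# Hypothetical check distinguishing legacy FP.Tacker criteria entries from
# the named-classifier entries expected by FP.TackerV2.
def is_v2_criteria(criteria):
    # Every V2 entry carries a 'name' and a nested 'classifier' mapping.
    return all('name' in entry and 'classifier' in entry
               for entry in criteria)


legacy_criteria = [{'network_name': 'tenant2_net',
                    'source_port_range': '80-1024',
                    'ip_proto': 17,
                    'ip_dst_prefix': '192.168.1.3/24'}]
v2_criteria = [{'name': 'classifier_two',
                'classifier': {'network_name': 'tenant2_net',
                               'source_port_range': '80-1024',
                               'ip_proto': 17,
                               'ip_dst_prefix': '192.168.1.3/24'}}]
```

Running `is_v2_criteria` over the legacy entries returns False, so an updater following this scheme would reject the template before touching the chain.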