Remove B324 (blocklisted calls: md5, sha1 for Python >= 3.9) from the
bandit skip list; to allow this, replace sha1 with blake2b.
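A minimal sketch of the replacement, assuming the digest is used as a
non-cryptographic fingerprint (digest_size is illustrative):

    import hashlib

    data = b"some payload"
    # Before: flagged by bandit check B324
    #     fingerprint = hashlib.sha1(data).hexdigest()
    # After: blake2b is not on the B324 blocklist; digest_size=20
    # keeps the output length comparable to sha1.
    fingerprint = hashlib.blake2b(data, digest_size=20).hexdigest()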
Change-Id: Iafe571ad0de0408414ed321f4b9e9588916a873d
During restart the OVS agent intends to clean flows table by table,
but in practice it does not: if one table shares a cookie with other
tables, all flows with that cookie are cleaned at once.
This patch adds the table_id param to the related call
to limit the flow cleaning to one table at a time (see the sketch
below).
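A minimal sketch of the idea; uninstall_flows() mirrors the native
OpenFlow bridge helper, but the function and variable names around it
are assumptions:

    def cleanup_stale_flows(bridge, stale_cookie, table_ids):
        # Delete stale flows one table at a time; without table_id a
        # cookie shared across tables would wipe flows from all of
        # them at once.
        for table_id in table_ids:
            bridge.uninstall_flows(cookie=stale_cookie,
                                   table_id=table_id)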
Closes-Bug: #2060587
Change-Id: I266eb0f5115af718b91f930d759581616310999d
We recently hit an issue during VM live migration:
1. nova starts the live migration
2. ports are plugged on the new host
3. neutron-ovs-agent starts to process the port, but the port is
   in the 'added' and 'updated' sets at the same time
4. because nova has not yet activated the destination port
   binding, there is no local vlan for this port
The ovs-agent then hit this error:
Error while processing VIF ports: OVSFWTagNotFound:
Cannot get tag for port tap092f38ed-a7 from its other_config: {}
This fix removes the ports in "binding_no_activated_devices" from
the devices passed to ``setup_port_filters`` (see the sketch below).
Closes-Bug: #2048979
Change-Id: I0f1e6bf202ef08f75246d6e99b3774d0b6fc9e2b
Prior to this patch, ML2/OVS and ML2/OVN had inconsistent IGMP
configurations. Neutron only exposed one configuration option for IGMP:
igmp_snooping_enabled.
Other features such as IGMP flood, IGMP flood reports and IGMP flood
unregistered were hardcoded differently on each driver (see LP#2044272
for more details).
These hardcoded values have led to many changes over the years,
tweaking them to work in different scenarios, but they were never
final because the fix for one case would break another.
This patch introduces 3 new configuration options for these other IGMP
features that can be enabled or disabled on both backends. Operators
can now fine tune their deployments in the way that will work for them.
As a consequence of the hardcoded values for each driver, we had to
break some defaults and, in the case of ML2/OVS, operators who want to
keep things as they were before this patch will need to enable the new
mcast_flood and mcast_flood_unregistered configuration options.
That said, for ML2/OVS there was also an inconsistency in the help
string of the igmp_snooping_enabled configuration option: it mentioned
that enabling snooping would disable flooding to unregistered ports,
but that was no longer true after the fix [0].
[0] https://bugs.launchpad.net/neutron/+bug/1884723
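A hypothetical ml2_conf.ini snippet restoring the pre-patch ML2/OVS
behavior; the option names follow the text above and their section
placement is an assumption:

    [ovs]
    igmp_snooping_enabled = True
    mcast_flood = True
    mcast_flood_unregistered = True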
Closes-Bug: #2044272
Change-Id: Ic4dde46aa0ea2b03362329c87341c83b24d32176
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
When neutron-server is down, ovs-agent waits for it to become available
during agent startup. When neutron-server is up but cannot reach the
DB, it is just as unable to do anything, yet ovs-agent reacted
differently to this failure. With this patch it reacts the same way
and delays its startup until neutron-server is up together with its DB
(a sketch of the wait loop follows).
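A minimal sketch of the startup wait, assuming an RPC probe that
raises both when the server is down and when it is up without its DB;
all names are illustrative:

    import time

    def wait_for_server(probe, interval=5):
        # Retry until the server answers, whether it was down outright
        # or up but unable to reach its database.
        while True:
            try:
                probe()  # e.g. a cheap RPC call such as reporting state
                return
            except Exception:
                time.sleep(interval)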
Change-Id: Ia55e82540aedc236e9b016bb58047d0b437eeb99
Closes-Bug: #2025341
If openvswitch is restarted, try to notify neutron-server
so that tunnel flows are refreshed for every port.
Closes-Bug: #2004041
Change-Id: Iba0ae947e3595674e63b998826daae2582bb7668
Fixes a bug introduced by Ic3c147136549b17aea0fe78e930a41a5b33ab9d8:
when a VLAN mapping is not registered during a call to
update_network_segment, the function should return None.
Closes-Bug: #2009215
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
Change-Id: I91f8e8bd18d9956216e5715c658dfb408a2cbf07
When libvirt (nova) detaches a port from an OVS bridge, two events are
sent:
* one event with 2 actions, "old" and "new": a change of ofport (from a
regular value to -1)
* a second event with action "delete"
If, for some reason, the second event is delayed, the rpc_loop iteration
will consider this port as "updated" instead of "deleted".
But, because ofport == -1, the port update will be discarded, and the
port finally removed from port_info["current"].
As a result, on the next iteration the deletion won't be performed.
Most of the time we end up with some leftovers (like openflow rules,
etc.).
The purpose of this patch is very simple: when looping over ports in
_get_ofport_moves, we discard the ports that have ofport == -1, so the
port will not be considered as updated and the next iteration will be
able to delete it correctly (see the sketch below).
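A minimal sketch of the check, assuming _get_ofport_moves compares
previous and current name-to-ofport mappings; the structure is an
approximation, not the exact upstream code:

    def _get_ofport_moves(current, previous):
        port_moves = []
        for name, old_ofport in previous['name_to_ofport'].items():
            new_ofport = current['name_to_ofport'].get(name)
            # ofport == -1 means the port is being detached; skip it so
            # the delayed "delete" event can remove it on the next loop.
            if new_ofport is None or new_ofport == -1:
                continue
            if old_ofport != new_ofport:
                port_moves.append(name)
        return port_moves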
Closes-Bug: #1992109
Change-Id: Ib4a7183867e1b21810b6915a475a234278bf884c
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Previously, when a neutron-openvswitch-agent was stopped, it left
behind the following fanout queues in rabbitmq:
neutron-vo-Network-1.0_fanout_someuuid
neutron-vo-Port-1.1_fanout_someuuid
neutron-vo-SecurityGroup-1.0_fanout_someuuid
neutron-vo-SecurityGroupRule-1.0_fanout_someuuid
neutron-vo-SubPort-1.0_fanout_someuuid
neutron-vo-Subnet-1.0_fanout_someuuid
neutron-vo-Trunk-1.1_fanout_someuuid
In this change we ensure that all but the SubPort and Trunk fanout
queues are correctly removed from rabbitmq by cleanly stopping the
RemoteResourceCache when the agent stops.
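A minimal sketch of the shutdown path, assuming the agent holds a
RemoteResourceCache whose stop() unsubscribes its fanout consumers;
the attribute and method names are assumptions:

    def stop(self):
        # Stopping the resource cache unsubscribes the per-resource
        # fanout consumers, letting rabbitmq remove their queues. The
        # Trunk and SubPort caches live in the trunk extension and are
        # not covered by this change.
        if self.rcache is not None:
            self.rcache.stop()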
Partial-Bug: #1586731
Change-Id: I672f9414a1a8ed91e259e9379ca707a70f6b4467
This uses the changes introduced earlier to support more than one
vlan per network.
Partial-Bug: #1956435
Partial-Bug: #1764738
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
Change-Id: Ifd61e379c3cef3589803c96a276da9827051f660
This change updates the vlanmanager data structure to handle more than
one vlan mapping for a given network. This is prerequisite work needed
to progress on accepting several segments per network per host (see
the sketch below).
The work done here tries to avoid changing the logic of the current
implementation. Unit tests should not have values updated, though
signatures may have changed.
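A minimal sketch of the shape change; the names are illustrative:

    # Before: one vlan mapping per network
    #     mapping = {network_id: vlan_mapping}
    # After: several mappings per network, keyed by segmentation id
    #     mapping = {network_id: {segmentation_id: vlan_mapping}}

    def get_vlan_mapping(mapping, network_id, segmentation_id):
        return mapping[network_id][segmentation_id]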
Partial-Bug: #1956435
Partial-Bug: #1764738
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
Change-Id: Ic3c147136549b17aea0fe78e930a41a5b33ab9d8
This changes the data structure that maintains the relationship
between ports and networks so that it also tracks the related
segmentation ids. This will be necessary in the future to support
multiple segments per network on the same physical provider network.
Partial-Bug: #1956435
Partial-Bug: #1764738
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
Change-Id: Iaf40ddc20692a3a51a8d5f5acfc2094b2d5c00c4
This reverts commit: b83fedbd78.
Ports are set to dead by default since commit:
7aae31c9f90ddca28454
and the local vlan tag is added to the port right after it is
bound, to avoid the trunk port flooding issue, since commit:
c63ebef2d5
The _add_port_tag_info function is therefore no longer necessary,
and removing it saves a large OVSDB read which dumped the entire
Port table; for hosts with a huge number of ports this was
time-consuming. So it has been removed.
Related-Bug: #1968896
Related-Bug: #1952567
Change-Id: Iefd765d497c7e2d4bb093052478185125b907025
Setting a port's admin state down adds the 4095 tag to it while
a drop flow is installed for its ofport.
When the port is back UP again, the drop flow is removed.
Closes-bug: #1968896
Change-Id: Ie8f67def69ae0e5d425d0e6fc43e35373a96bd88
During an ml2 ovs agent port processing performance test, we noticed
that some ports are missing their tag until processing is really done.
OVS treats ports without a tag as trunk ports, so some packets get
flooded to them. In a large scale cloud, if too many ports are added
to the bridge, ovs-vswitchd will consume a huge number of CPU cores
while ports remain unbound.
So, in the port_bound function of the ovs-agent, we set the port tag
right after a local_vlan id is allocated, because the steps after
that, setting up security groups (setup_port_filters) and binding
devices in the DB (update_device_list), are really time-consuming
(see the sketch below).
This also fixes a potential bug: a port is processed as created first,
but has no tag in ovsdb, so the openflow security group is not
processed successfully [1]; it had to be done in an update event
during the next loop, after the port was bound and ovsdb had set the
required value.
This patch can also fix some upstream test failures caused by waiting
too long for pings in some cases.
[1] https://github.com/openstack/neutron/blob/master/neutron/agent/linux/openvswitch_firewall/firewall.py#L112
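A minimal sketch of the reordering in port_bound; beyond
set_db_attribute, the helper names are illustrative:

    def port_bound(self, port, net_uuid, segmentation_id):
        # Allocate (or look up) the local vlan for this network first.
        lvm = self.provision_local_vlan(net_uuid, segmentation_id)
        # Tag the port immediately, before the slow steps (security
        # group setup and DB device binding), so OVS stops treating it
        # as a trunk port and flooding packets to it.
        self.int_br.set_db_attribute('Port', port.port_name,
                                     'tag', lvm.vlan)
        # setup_port_filters() and update_device_list() run afterwards.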
Closes-Bug: #1952567
Change-Id: I3533f0d416d32f8d0888ad58f975960d89a985d9
During e.g. a rebuild of a server by Nova, the ports plugged into that
server are quickly removed and added back into br-int. In that case,
the ports end up in the "re_added" ports set in the neutron-ovs-agent.
But it seems that in some cases such a port isn't switched to DOWN
first, and then, when neutron-ovs-agent treats the port as
added/updated and reports to the server that the port is UP, no
notification is sent to nova-compute (because the port's status was UP
and the new status is still UP in the Neutron DB).
As Nova waits for that notification from Neutron, the server can end
up in the ERROR state.
To avoid this issue, all ports treated as "re_added" by the
neutron-ovs-agent are now first switched to DOWN on the server side.
That way, when those ports are treated as added/updated in the same
rpc_loop iteration, switching their status to UP is guaranteed to
trigger the notification to nova.
Closes-Bug: #1963899
Change-Id: I0df376a80140ead7ff1fbf7f5ffef08a999dbe0b
Unit tests are taking longer than usual since [1], intermittently
causing timeouts in unit test jobs. Set report_interval (as it is now
used to set the RPC timeout) to 0 so report_state responds
immediately.
[1] https://review.opendev.org/q/I8a95e80ca74edc8f8f394cefc749c4065a8e0575
Related-Bug: #1948676
Change-Id: I653c907a4323e19d5bc381cd3716d42c45a75e15
OVS agent configuration is extended to support new configuration
options:
- 'resource_provider_packet_processing_without_direction'
- 'resource_provider_packet_processing_with_direction'
- 'resource_provider_packet_processing_inventory_defaults'
The OVS agent RPC heartbeat now reports this information to the
neutron server in the 'configuration' field.
Example config:
ml2_conf.ini:
[ovs]
resource_provider_packet_processing_with_direction = :1000:1000
Partial-Bug: #1922237
See-Also: https://review.opendev.org/785236
Change-Id: Ief554bc445dfd93ea6995bb42b4d010674c7a091
Add a new ovs agent extension to support distributed DHCP for
VMs on compute nodes directly. For large scale deployments, this
can be used to reduce the number of neutron agents, so large scale
clouds can benefit from it.
From the perspective of a virtual machine, this reduces the
probability of DHCP request failure. The VMs get higher
availability for DHCP request/reply, with no permanent single
point of failure: if one host goes down, VMs on other hosts are
not influenced by it.
From the perspective of network performance, with this extension
the DHCP broadcast packets are kept local to the host (an example
of enabling the extension follows).
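A hypothetical openvswitch_agent.ini snippet enabling the extension;
the extension alias 'dhcp' is an assumption based on the blueprint
name:

    [agent]
    extensions = dhcp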
Partially-Implements: bp/distributed-dhcp-for-ml2-ovs
Closes-Bug: #1900934
Change-Id: Id8a4c501daad7c2185e6d69441182666ef987e61
This patch moves phys_brs and the associated 'self' attributes from
setup_physical_bridges to __init__. Otherwise, bridges from
bridge_mappings could be lost, e.g. when setup_physical_bridges is
called with only the reconfigured bridges from
_do_reconfigure_physical_bridges.
Closes-Bug: #1929438
Signed-off-by: Anton Kurbatov <Anton.Kurbatov@acronis.com>
Change-Id: Ieadb801e48898e8b654153ad37be80dd9c865413
When ovs restarts and needs to regenerate flows, some flows will be
missing in br-int, as follows:
* table=60,priority=4,in_port="int-br-floating" actions=resubmit(,61)
* table=60,priority=4,in_port="patch-tun" actions=resubmit(,61)
* table=61,priority=0 actions=resubmit(,62)
Call install_ingress_direct_goto_flows() again in
_handle_ovs_restart() to ensure these flows are regenerated.
Change-Id: I240a78879db757592df138a53b2c22d7f5a9ae13
Closes-Bug: #1920700
In some scenarios, a dvr router interface will try to ARP for a device
which is not hosted on the same host. When the ARP request is sent
out, the ethernet source MAC is changed to the dvr_host_mac. Those
devices then reply to the ARP with the dvr_host_mac as the ethernet
dest MAC, so the dvr router interface finally drops the reply and
the ARP fails.
This patch adds one flow for this: it matches the dest MAC, ARP
op-code=2 and the arp_tha address, then changes the dest MAC to the
right router interface's MAC address (illustrated below).
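An illustrative flow in ovs-ofctl syntax; the table number, priority
and MAC addresses are placeholders, not the exact flow installed:

    table=1,priority=20,arp,arp_op=2,dl_dst=fa:16:3f:aa:bb:cc,arp_tha=fa:16:3e:11:22:33 actions=mod_dl_dst:fa:16:3e:11:22:33,NORMAL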
Closes-Bug: #1913646
Related-Bug: #1859638
Change-Id: Ibc7f01450a3da026ca5c4fb667dada912cf472e3
The goal of this patch is to avoid the connection disruption during
the live-migration using OVS. Since [1], when a port is migrated,
both the source and the destination hosts are added to the profile
binding information. Initially, the source host binding is activated
and the destination is deactivated.
When the port was created on the destination host (created by Nova),
it was not configured because the binding was not activated.
The binding (that means, all the OpenFlow rules) was done when Nova
sent the port activation. That happened when the VM was already
running on the destination host. If the OVS agent was loaded, the
port was bound seconds after the port activation.
Instead, this patch enables the OpenFlow rule creation in the
destination host when the port is created.
Another problem is the "neutron-vif-plugged" events sent by Neutron
to Nova to inform about the port binding. Nova expects one single
event informing about the destination port binding. At that moment,
Nova considers the port bound and ready to transmit data.
Several triggers were unexpectedly firing this event:
- When the port binding was updated, the port was set to down and then
  up again, forcing this event.
- When the port binding was updated, the binding was first deleted and
  then updated with the new information. That triggered the source
  host to set the port down and then up again, sending the event.
This patch removes those events, sending the "neutron-vif-plugged"
event only when the port is bound to the destination host (and as
commented before, this is happening now regardless of the binding
activation status).
This feature depends on [2]. If this Nova patch is not in place, Nova
will never plug the port in the destination host and Neutron won't be
able to send the vif-plugged event to Nova to finish the
live-migration process.
Because Neutron cannot query Nova to know whether this patch is in
place, a new temporary configuration option has been created to enable
this feature. The default value is "False"; that means Neutron
behaves as before (see the example below).
[1]https://bugs.launchpad.net/neutron/+bug/1580880
[2]https://review.opendev.org/c/openstack/nova/+/767368
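A hypothetical neutron.conf snippet enabling the feature; the option
name and its section are assumptions based on the description above:

    [nova]
    live_migration_events = True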
Closes-Bug: #1901707
Change-Id: Iee323943ac66e566e5a5e92de1861832e86fc7fc
Do not report the ovs agent state when ovs is dead, and let
neutron-server mark the service as down, so that the cluster
admin can determine there is a problem with the given ovs agent.
Change-Id: Ib4b06c7877a7343f4204d4f4f5863931717ff507
Closes-Bug: #1910946
Removing a non-gateway port from a DVR router deletes all the DVR to
SRC mac flows for the instances of the same subnet on that compute
node, leaving those instances unreachable from any other network.
This patch checks whether the DVR router port is the gateway for the
subnet and deletes the DVR-SRC mac flows only if it is the gateway
port. The DVR-SRC mac flows are also deleted if no gateway is set
for the subnet.
Change-Id: Iadc1671c862f8c01e5761e92b82a04849d4bb411
Closes-Bug: #1892405
In ML2/OVS, when igmp_snooping is enabled but there is no external
querier, multicast traffic stops working after a few minutes because
packets are not flooded to the tunnel/external bridges.
So this patch sets the "mcast-snooping-disable-flood-unregistered"
option of br-int to False (the default value) even when igmp_snooping
is enabled in the neutron-ovs-agent's config file.
Additionally, it sets "mcast-snooping-flood-reports" and
"mcast-snooping-flood" to True on the patch ports in br-int.
That way we provide best-effort snooping: multicast isolation where
IGMP queriers are available, and flooding everywhere else (see the
example below).
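For illustration, the equivalent settings expressed as ovs-vsctl
commands; the patch applies them through the agent's OVSDB API, and
the port name is an example:

    ovs-vsctl set Bridge br-int \
        other_config:mcast-snooping-disable-flood-unregistered=false
    ovs-vsctl set Port patch-tun \
        other_config:mcast-snooping-flood=true \
        other_config:mcast-snooping-flood-reports=true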
Closes-Bug: #1884723
Change-Id: Iefa0044dba9e92592295a79448e5d57d9e14a40b
The smartnic port's MTU should be set according to the MTU of the
network the port belongs to.
Closes-Bug: #1899864
Change-Id: Ibcc29c998065da521b35e5845727794a68782db0
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Add a filter to SimpleInterfaceMonitor. If provided, the Interface
events received are filtered by the bridge each port belongs to: for
each Interface event received, the bridge name is retrieved and
compared with the list of bridge names given.
If no names are provided, the monitor won't filter any event
(the current behavior).
This filter is meant to be used in Fullstack tests only. The filtering
adds extra overhead (two OVS DB queries per event) that should not
be needed in a production environment.
Closes-Bug: #1885547
Change-Id: Ie1fc8cf7d29c71eb358e593726b446787d8022c2
Using veth pairs to interconnect openvswitch bridges was deprecated
in the Victoria cycle. Now it's time to remove it from the code.
The neutron-ovs-agent code still keeps a piece of code which migrates
from veth to patch ports for bridge interconnection.
We will be able to remove that piece of code in the X release.
Change-Id: I94545c3c3d9be46ac2062691f69663e5e59cd648
Closes-Bug: #1587296
This change removes the "_check_ofport" function and its use form
the ovs_lib.py file.
By skipping ports without a unique ofport in the "get_vifs_by_ids"
and "get_vifs_by_id" functions, the OVS agent incorrectly treated
newly added port with an ofport of -1 as removed ports in the
"treat_devices_added_or_updated" function.
Co-Authored-By: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
Change-Id: I79158baafbb99bee99a1d687039313eb454d3a9b
Partial-Bug: #1734320
Partial-Bug: #1815989
Commit 90212b12 changed the OVS agent so adding vital drop flows on
br-int (table 0 priority 2) for packets from physical bridges was
deferred until DVR initialization later on. But if br-int has no flows
from a previous run (e.g. after a host reboot), then these packets will hit
the NORMAL flow in table 60. And if there is more than one physical
bridge, then the physical interfaces from the different bridges are now
essentially connected at layer 2 and a network loop is possible in the
time before the flows are added by DVR. Also the DVR code won't add them
until after RPC calls to the server, so a loop is more likely if the
server is not available.
This patch restores adding these flows when the physical bridges are
first configured (illustrated below). It also updates a comment that
was no longer correct and updates the unit test.
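An illustrative version of one restored drop flow, in the same style
as the flows quoted above; the in_port name is a placeholder:

    table=0,priority=2,in_port="int-br-phys1" actions=drop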
Change-Id: I42c33fefaae6a7bee134779c840f35632823472e
Closes-Bug: #1887148
Related-Bug: #1869808
Currently the code only supports associating tunnel networks and vlan
networks with a DVR router. This patch adds code to make flat networks
associate with a DVR router and work correctly.
The patch also removes two unused constant entries: 'FLAT_VLAN_ID' and
'LOCAL_VLAN_ID'.
Change-Id: I7d792ce288d96548298f169748565266a130bd86
Closes-Bug: #1876092