Add a script to create the add_table_names script

The add_table_names script is meant to be used with the output of the
ovs-ofctl dump-flows command; it annotates the table IDs in that output
with the names of the tables being used. This is done for better
readability and easier debugging.

Change-Id: I52907e8d3b81f3f23eff2b0e062160141285bfed
Related-Bug: #1740867
Shachar Snapiri 2018-01-02 12:43:37 +02:00
parent 7db9e4e7a7
commit a644ed7a67
3 changed files with 43 additions and 5 deletions


@@ -502,6 +502,11 @@ function setup_rootwrap_filters {
fi
}
function create_tables_script {
echo "Creating add_table_names script"
$DRAGONFLOW_DIR/tools/create_add_tables_script.sh $DRAGONFLOW_DIR $DRAGONFLOW_DIR/tools/add_table_names
}
function stop_df_bgp_service {
if is_service_enabled df-bgp ; then
echo "Stopping Dragonflow BGP dynamic routing service"
@@ -589,6 +594,7 @@ if [[ "$Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER" == "True" ]]; then
start_df_metadata_agent
start_df_bgp_service
setup_rootwrap_filters
create_tables_script
fi
if [[ "$1" == "unstack" ]]; then


@@ -43,6 +43,8 @@ Debugging
A few tools exist to debug Dragonflow. Most of them are geared towards
verifying that the pipeline was installed correctly.
**ovs-ofctl dump-flows**
.. code-block:: shell
sudo ovs-ofctl dump-flows br-int -O OpenFlow13
@@ -52,27 +54,41 @@ manual page for ovs-ofctl for more info.
It is worthwhile to note that each flow entry includes statistics, such as
how many packets matched the flow and their cumulative size.
The output also shows the IDs of the tables that the packets go through.
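For illustration, a single entry in the dump might look like the following
(all field values here are hypothetical):
.. code-block:: none
cookie=0x0, duration=12.345s, table=0, n_packets=42, n_bytes=4116, priority=100,in_port=1 actions=resubmit(,5)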
To make the output more readable, the table names can be added to the table
IDs using the following script (path relative to the Dragonflow project
root): ``tools/add_table_names``
Its usage is straightforward:
.. code-block:: bash
sudo ovs-ofctl dump-flows br-int -O OpenFlow13 | /opt/stack/dragonflow/tools/add_table_names
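With the script applied, the same entry comes out annotated roughly as
follows (the table name shown is illustrative; the actual names are taken
from ``dragonflow/controller/common/constants.py``):
.. code-block:: none
cookie=0x0, duration=12.345s, table=0(INGRESS_CLASSIFICATION_DISPATCH_TABLE), n_packets=42, n_bytes=4116, priority=100,in_port=1 actions=resubmit(,5)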
**ovs-appctl ofproto/trace**
.. code-block:: shell
sudo ovs-appctl ofproto/trace br-int <flow>
This command simulates sending a packet that matches the given flow through
the pipeline. The Perl script in [#]_ can be used to facilitate the use
of this tool. The script also recirculates the packet when necessary,
e.g. for connection tracking.
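For example, to trace a hypothetical IPv4 packet arriving on OpenFlow port 1
(the MAC addresses below are placeholders):
.. code-block:: shell
sudo ovs-appctl ofproto/trace br-int in_port=1,dl_src=fa:16:3e:00:00:01,dl_dst=fa:16:3e:00:00:02,dl_type=0x0800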
**df-db**
.. code-block:: shell
df-db
This utility allows you to inspect the Dragonflow northbound database. You
can dump the db content, list the db tables, list the keys of a db table,
and print the value of a specific key in a table. Use *df-db --help* to
list the supported sub-commands and get details on them.
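A few indicative invocations, matching the operations listed above (the
sub-command and table names here are a sketch; consult *df-db --help* for
the authoritative list):
.. code-block:: shell
df-db tables # list the northbound db tables
df-db ls lport # list the keys of the (hypothetical) 'lport' table
df-db get lport <key> # print the value stored under <key>
df-db dump # dump the whole db content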
**df-model**
.. code-block:: shell
df-model
@@ -88,9 +104,7 @@ to a file.
* PlantUML output may be visualized using the PlantUML Server [#]_
* rst output may be visualized using Online reStructuredText editor [#]_
::
SimulateAndSendAction class
**SimulateAndSendAction class**
In the tests, you can have the above-mentioned script run automatically on
a packet you are about to send, before actually sending it.


@@ -0,0 +1,18 @@
#!/bin/bash
# If no root path was supplied, we assume we are at the root of the
# Dragonflow project
DRAGONFLOW_DIR=${1:-.}
SRC_FILE=${DRAGONFLOW_DIR}/dragonflow/controller/common/constants.py
DEST_FILE=${2:-${DRAGONFLOW_DIR}/tools/add_table_names}
# The following awk one-liner generates the add_table_names script.
# First, it emits the generated script's header.
# Then it parses SRC_FILE: for every constant whose name contains the word
# TABLE, it adds an entry to the generated script's dictionary, mapping the
# table ID to the table name.
# Finally, once all input lines are processed, it appends the hard-coded
# body of the generated script.
awk 'BEGIN {FS="="; print "#!/bin/awk -f\n\nBEGIN {"}; /^[^#].*TABLE[\w]*/{gsub(" ", ""); name=$1; id=$2; line=" id_to_name["id"]=\""name"\""; print line }; END {print "}\n\n{\n head = \"\"\n tail=$0\n while (match(tail, /(resubmit\\(,|table=)([0-9]+)/, arr)) {\n repl = substr(tail, RSTART, RLENGTH)\n head = head substr(tail,1, RSTART-1) repl \"(\" id_to_name[arr[2]] \")\"\n tail = substr(tail, RSTART+RLENGTH)\n }\n print head tail\n}\n"}' ${SRC_FILE} > ${DEST_FILE}
chmod +x ${DEST_FILE}
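For reference, the generated DEST_FILE is itself an awk script. Unrolled from
the one-liner above, it looks roughly like the following sketch (the two
dictionary entries shown are illustrative; the real ones are generated from
constants.py). Note that the three-argument match() requires GNU awk:
#!/bin/awk -f
BEGIN {
    # One entry per TABLE constant found in constants.py
    # (names and IDs here are illustrative):
    id_to_name[0] = "INGRESS_CLASSIFICATION_DISPATCH_TABLE"
    id_to_name[9] = "SERVICES_CLASSIFICATION_TABLE"
}
{
    head = ""
    tail = $0
    # Annotate every "table=<N>" and "resubmit(,<N>" occurrence with the
    # matching table name (three-argument match() is a GNU awk extension).
    while (match(tail, /(resubmit\(,|table=)([0-9]+)/, arr)) {
        repl = substr(tail, RSTART, RLENGTH)
        head = head substr(tail, 1, RSTART - 1) repl "(" id_to_name[arr[2]] ")"
        tail = substr(tail, RSTART + RLENGTH)
    }
    print head tail
}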