RPi Network Packet Analysis Tooling

In this post, I’ll cover using the RPi as a network analysis tool. The RPi’s hardware, paired with the open source software that runs on it, is an extremely cost-effective approach to network diagnostics and troubleshooting.

The hardware layout I’ll be using is fairly simple. The RPi 3 has built-in interfaces for both wired and wireless connections.

I’ll be using the wired connection as my packet sniffing port. It will not have any IP address bound to it. The physical connection links to a managed switch, where the port is configured to mirror traffic from my WAN port connection, which is on another port of the same switch. Any packets passing through the WAN port will be mirrored to the port used by the RPi. I will not be performing a man-in-the-middle attack in this example.

The wireless connection on the RPi will be my primary IP connection, allowing me to manage and perform tasks on the RPi. I’ll connect to the RPi using an SSH terminal connection.

There are several software tools available. The ones I’ll be covering are:

  • TCPDump
  • URLSnarf

There are a host of other tools; here is a brief list. See the Credits section at the bottom of the post for more.

  • TCPTrace
  • DriftNet
  • WireShark
  • EtterCap
  • TCPXtract

First, I’ll need to install any software that does not come pre-installed on my RPi OS. TCPDump comes pre-installed; however, URLSnarf (part of the Dsniff suite) does not. To install URLSnarf, run the following command on the RPi.

sudo apt-get install dsniff -y

Now, we should be ready to start capturing packets. Initially, I ran the URLSnarf tool with standard output so I could observe it in real time. But first, I needed to know which interface to listen on, so I listed my interfaces with this command.

ifconfig

This gave me the list of my interfaces so I could set the correct parameters in URLSnarf.

eth0 Link encap:Ethernet HWaddr b8:27:eb…

With that information, I can now enter the command to start my URLSnarf capture.

sudo urlsnarf -i eth0

It starts and displays web traffic as it occurs.

urlsnarf: listening on eth0 [tcp port 80 or port 8080 or port 3128]

x.x.x.x - - [12/Aug/2016:04:56:53 -0700] "GET http://www.computerhope.com/unix/ugrep.htm HTTP/1.1" - - "https://duckduckgo.com/" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:48.0) Gecko/20100101 Firefox/48.0"

x.x.x.x - - [12/Aug/2016:04:56:53 -0700] "GET http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js HTTP/1.1" - - "http://www.computerhope.com/unix/ugrep.htm" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:48.0) Gecko/20100101 Firefox/48.0"

x.x.x.x - - [12/Aug/2016:04:56:53 -0700] "GET http://www.computerhope.com/recent.js HTTP/1.1" - - "http://www.computerhope.com/unix/ugrep.htm" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:48.0) Gecko/20100101 Firefox/48.0"

It appears to be working, good. Now I’ll start a capture to a log file for post-capture analysis. To do that, I’ll run this command.

sudo urlsnarf -i eth0 > /home/user/Desktop/URL_20160812A.txt

This does not display any progress on the screen, which is why I ran the initial command first to verify everything was working. You can also make a copy of the log file and open it to verify.
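As an alternative (my own variation, not part of the original steps), you can pipe URLSnarf through tee so the output is displayed on screen and written to the log file at the same time:

```shell
# Pipe URLSnarf through tee: output appears live on screen AND lands in the log file.
# (Same interface and log path as used above.)
sudo urlsnarf -i eth0 | tee /home/user/Desktop/URL_20160812A.txt
```

Add `-a` to tee if you want to append to an existing log rather than overwrite it.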

After a period of time passes, I’ll stop the command by issuing a [Ctrl] + [C] key combination. Then I can open the file in an editor to view it. I typically use nano, with this command.

nano /home/user/Desktop/URL_20160812A.txt

The captured web traffic will appear. Depending on how much traffic traverses the port and how long the capture runs, finding useful info manually can be painstaking. For this reason, I like to issue grep commands to parse out the more relevant details and go from there.

grep -c "ip address" /home/user/Desktop/URL_20160812A.txt

This returns the number of lines in the log file that contain that specific IP address. I could search for any string in the text to get results for a number of items; here are some examples:

  • Time of day
  • Domain
  • Browser type
  • File type
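For illustration, here are my own hypothetical greps along those lines, demonstrated against a one-line sample file in the same format URLSnarf logged above (so they can be tried without a live capture):

```shell
# Build a one-line sample file matching the URLSnarf log format shown earlier.
cat > /tmp/URL_sample.txt <<'EOF'
x.x.x.x - - [12/Aug/2016:04:56:53 -0700] "GET http://www.computerhope.com/recent.js HTTP/1.1" - - "http://www.computerhope.com/unix/ugrep.htm" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:48.0) Gecko/20100101 Firefox/48.0"
EOF

grep -c "12/Aug/2016:04" /tmp/URL_sample.txt    # time of day: requests in the 04:00 hour
grep -c "computerhope.com" /tmp/URL_sample.txt  # domain
grep -c "Firefox" /tmp/URL_sample.txt           # browser type
grep -c "\.js " /tmp/URL_sample.txt             # file type: JavaScript requests
```

Each command prints 1 here, since the single sample line matches every pattern; against a real log you would point them at your capture file instead.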

It’s quite nice to see such clear results.

URLSnarf gives great detail about web traffic passing by, but TCPDump is the kitchen sink of packet capture. I would rarely run it on the RPi for any length of time, especially when logging to a file. You can quickly exhaust your resources with an unabated full packet capture.

Here is a command that will get it all. Remember to hit [Ctrl] + [C] to stop the capture.

sudo tcpdump -i eth0 -s 0

It really is the kitchen sink, and the amount of information to sift through would be overwhelming. For that reason, adding filters to the command lets you be more granular in your capture endeavors. The best approach is to capture a short unfiltered event to a log file and derive your filters from those results. Then you can apply them on your subsequent captures.
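For example, here are a few common filters. These are my own illustrations using standard tcpdump/BPF filter syntax; the host address and file path are hypothetical:

```shell
# Capture only traffic to or from a single (hypothetical) host:
sudo tcpdump -i eth0 -s 0 host 192.168.1.50

# Capture only web traffic on port 80:
sudo tcpdump -i eth0 -s 0 port 80

# Capture DNS traffic (port 53) and write it to a binary pcap file
# (hypothetical path) instead of printing to the screen:
sudo tcpdump -i eth0 -s 0 -w /home/user/Desktop/cap_20160812A.pcap port 53
```

Filters can be combined with `and`, `or`, and `not` to narrow things down further.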

One last thing I’d like to point out is that the TCPDump log files can be analyzed post capture. This means you can view them offline, on another system.
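A capture saved with tcpdump’s -w flag (binary pcap format) can be copied off the RPi and replayed anywhere tcpdump or Wireshark is installed. A sketch of that workflow, with a hypothetical host name and file path:

```shell
# On the analysis machine: copy the pcap off the RPi (hypothetical host/path)...
scp pi@raspberrypi:/home/user/Desktop/cap_20160812A.pcap .

# ...then replay it locally; -r reads packets from a file instead of an interface.
tcpdump -r cap_20160812A.pcap
```

The same pcap file can also be opened directly in Wireshark for a graphical view.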

As you can see, the RPi has some really great potential at helping you troubleshoot network packet related issues. Happy capturing.

I’d like to give credit to the following sites that I referenced for the information in the post.

  1. TCP Reassembly – https://wiki.wireshark.org/TCP_Reassembly
  2. DriftNet Tutorial – http://lifeofpentester.blogspot.com/2013/10/driftnet-tutorial-how-to-sniff-images.html
  3. URLSnarf Info – http://jermsmit.com/ettercap-and-urlsnarf-fun/
  4. DriftNet Info – https://blackundertone.wordpress.com/tag/urlsnarf/
  5. Piping TCPDump through SSH – http://blog.db-network.de/tcpdump-piped-through-ssh-and-wireshark/
  6. Network Capture and Analysis – http://adaywithtape.blogspot.com/2010/03/network-captures-revisited.html
  7. Packet Sniffing – http://noah.org/wiki/Packet_sniffing