Sunday, April 7, 2013

TCP multi-flow probe -- a tcp_probe extension

I modified the tcp_probe kernel module to help study the TCP stack state of multiple flows at once; I call it "tcp_m_probe". The code was originally written to support research on TCP loss synchronization at a buffer-overflowed bottleneck in high-speed networks. Because it is a kernel module built on kernel probes (kprobes), which provide hooks into kernel functions, its overhead is very low. I have tested my version on 1 Gbps and 10 Gbps networks; see my small benchmark below.
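
For readers who have not looked inside tcp_probe before, the mechanism is roughly this: a jprobe is registered on tcp_rcv_established(), so a handler runs for every received segment of an established connection and can read per-flow TCP state (source port, cwnd, ssthresh, srtt, ...) straight from the socket. The sketch below only illustrates that mechanism against a ~3.x kernel of that era; it is not the actual tcp_probe_ccui.c, and the function names, printed fields, and module boilerplate are my own illustration.

    #include <linux/module.h>
    #include <linux/kprobes.h>
    #include <net/tcp.h>

    /* Runs just before the real tcp_rcv_established(); same arguments. */
    static int jtcp_rcv_established(struct sock *sk, struct sk_buff *skb,
                                    const struct tcphdr *th, unsigned int len)
    {
        const struct tcp_sock *tp = tcp_sk(sk);
        const struct inet_sock *inet = inet_sk(sk);

        /* Per-flow state is available here; the source port identifies the flow. */
        pr_debug("sport=%u cwnd=%u ssthresh=%u srtt=%u\n",
                 ntohs(inet->inet_sport), tp->snd_cwnd,
                 tcp_current_ssthresh(sk), tp->srtt >> 3);

        jprobe_return();    /* never actually returns to this point */
        return 0;
    }

    static struct jprobe tcp_jprobe = {
        .kp    = { .symbol_name = "tcp_rcv_established" },
        .entry = jtcp_rcv_established,
    };

    static int __init mprobe_init(void)
    {
        return register_jprobe(&tcp_jprobe);
    }

    static void __exit mprobe_exit(void)
    {
        unregister_jprobe(&tcp_jprobe);
    }

    module_init(mprobe_init);
    module_exit(mprobe_exit);
    MODULE_LICENSE("GPL");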

I think it is generally useful for anyone interested in the Linux network stack, whether at the TCP layer or elsewhere. It is free software under the GNU General Public License.

This link contains my "tcp_probe_ccui.c" and its Makefile. I also attach a shell script, "runIperf.sh", which uses Iperf as a traffic generator to exercise tcp_probe_ccui. The format of the dumped data file (${Protocol}${Counts}flows${Destination}.$index.tsv) is similar to that of the original tcp_probe; the file acts as a simple database in which the source port number is the key for each record.
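
To give an idea of what each record holds, here is a sketch of a per-flow log entry modeled on the struct tcp_log used by the stock tcp_probe. The exact field set and names in tcp_probe_ccui.c may differ; the point is that the source port is what distinguishes the flows in the dump.

    #include <linux/types.h>
    #include <linux/ktime.h>

    /* Illustrative per-record layout, modeled on tcp_probe's struct tcp_log;
     * the real tcp_probe_ccui.c layout may differ. */
    struct flow_log {
        ktime_t tstamp;             /* timestamp of the sample */
        __be32  saddr, daddr;       /* source / destination IPv4 address */
        __be16  sport, dport;       /* sport is the per-flow key in the .tsv */
        u16     length;             /* length of the segment that triggered the sample */
        u32     snd_nxt, snd_una;   /* send sequence state */
        u32     snd_cwnd;           /* congestion window (in segments) */
        u32     ssthresh;           /* slow-start threshold */
        u32     snd_wnd;            /* receiver-advertised window */
        u32     srtt;               /* smoothed RTT */
    };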

If you have any questions, please leave a comment and I will respond as soon as I can. Have fun!

Figure: congestion window behavior of 6 TCP-CUBIC flows as a function of time through a 1 Gbps bottleneck link (MTU = 1500 bytes).



Benchmark: CPU and storage cost of sniffing 10 TCP-CUBIC flows with tcpdump versus tcp_probe_ccui. tcpdump records only the first 100 bytes of each packet; tcp_probe_ccui writes a record only when cwnd changes.
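
The "only record when cwnd changes" behavior is the same idea as the full=0 mode of the stock tcp_probe, except that the previous cwnd has to be remembered per flow rather than globally. Below is a minimal sketch of that check, assuming a small table keyed by source port; FLOW_SLOTS, cwnd_changed, and the trivial hashing are my illustration, not the actual tcp_probe_ccui.c code.

    #include <linux/types.h>

    #define FLOW_SLOTS 64    /* illustrative table size */

    /* Last cwnd seen for each flow, indexed by a trivial hash of the source port. */
    static u32 last_cwnd[FLOW_SLOTS];
    static u16 last_sport[FLOW_SLOTS];

    /* True when this flow's cwnd differs from its previous sample, so a record
     * should be written; false means skip the sample to keep the dump small. */
    static bool cwnd_changed(u16 sport, u32 cwnd)
    {
        unsigned int slot = sport % FLOW_SLOTS;

        if (last_sport[slot] == sport && last_cwnd[slot] == cwnd)
            return false;

        last_sport[slot] = sport;
        last_cwnd[slot] = cwnd;
        return true;
    }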