
WLAN Throughput Test


Introduction to TCP and UDP

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite. TCP is one of the two original components of the suite, complementing the Internet Protocol (IP), which is why the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered delivery of a stream of bytes from a program on one host to a program on another host.

The User Datagram Protocol (UDP) is one of the core members of the Internet Protocol Suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communication to set up special transmission channels or data paths. UDP is considered an unreliable protocol because it provides no delivery, ordering, or duplicate-protection guarantees.
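To see the difference interactively, here is a minimal sketch using the common netcat (nc) utility; this assumes nc is installed on both hosts and uses the BSD-style listen syntax (traditional netcat variants use -l -p <port> instead):

# On the receiving host: a TCP listener completes a handshake before any data flows
nc -l 5000
# ...or a UDP listener, which simply waits for datagrams (no handshake)
nc -l -u 5000

# On the sending host (192.168.1.109 stands in for the receiver's address):
echo hello | nc 192.168.1.109 5000       # TCP: delivery is acknowledged and ordered
echo hello | nc -u 192.168.1.109 5000    # UDP: fire-and-forget, no delivery guarantee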

Measurement Tools - NetPerf

Netperf is a benchmark that can be used to measure various aspects of networking performance. Its primary focus is on bulk data transfer and request/response performance using either TCP or UDP and the Berkeley Sockets interface.
The NetPerf tool requires a server side running the netserver application to communicate with. If you are running a Linux distribution, netserver is likely already running.
The NetPerf client command configures both client and server parameters.
We need to run the NetPerf client on the host that generates the traffic. This means:
If we want to test EVM upstream throughput, we should run the NetPerf client on the EVM.
If we want to test EVM downstream throughput, we should run the NetPerf client on a host connected to the EVM.
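
If netserver is not already running on the Linux PC, it can typically be started by hand; a minimal sketch (by default netserver daemonizes and listens on control port 12865):

# Start the NetPerf server daemon and verify it is running
netserver
ps aux | grep netserver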

Measurement Tools - Iperf

Iperf was developed as a modern alternative for measuring maximum TCP and UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP characteristics, and reports bandwidth, delay jitter, and datagram loss.
Iperf runs as a client or as a server according to the arguments passed to it. Unlike NetPerf, when using Iperf we need to configure both the client and server sides.
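
Before testing over WLAN, it can help to sanity-check the Iperf installation locally; a minimal sketch running both ends on one machine over loopback:

# Terminal 1: start a TCP server
iperf -s
# Terminal 2: run a short client against it
iperf -c 127.0.0.1 -t5 -i1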

Performance Measuring

Two EVM setup

We have the following setup for testing the throughput:

[Image: FAE summit TP test 1.png — two-EVM test setup]

One EVM is configured as a station and the other as an AP; the two EVMs are connected via WLAN.

Once you set your EVM to be a Station or an AP, you can perform one of the tests below:

One EVM Setup

We will measure upload and download bandwidth, using both TCP and UDP. We will need the following setup:

  • An EVM with wireless connectivity.
  • A PC connected to the EVM via serial port, used as the EVM's console.
  • An AP.
  • A PC connected to the AP via Ethernet cable.

The EVM is configured in station mode and connects to the AP.

The following diagram shows the setup described above:

[Image: WLAN Station up down stream.jpg — one-EVM test setup]

We want to measure the wireless performance between the EVM and the AP. The Ethernet-connected PC runs Iperf and serves as the endpoint that analyzes the data traffic. Now we are ready to do the testing:

Upstream Test using IPerf

Here we will test the data bandwidth going from the EVM towards the AP.
We run Iperf as a client on the EVM, and as a server on the PC connected to the AP.

For the PC (server side), use one of the following options:
For TCP Server:

iperf.exe -s -i2

For UDP Server:

iperf.exe -s -u -i2

Where:

-s means run as a server.
-i sets the interval between traffic reports (2 seconds in this example).
-u makes the server a UDP server.

You should see an output like:

------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------

As you can see in the example, the listening port is 5001, but this port can change on different machines.

For the EVM (client side), use one of the following options:
For TCP Client:

iperf -c 192.168.1.109 -t20 -i2 -w64k -p5001

For UDP Client:

iperf -c 192.168.1.109 -u -t20 -i2 -b10M -p5001

Where:

The IP address is the address of the server we started earlier.
-c means run as a client.
-t sets the test duration in seconds (20 seconds in this example).
-i sets the interval between traffic reports (2 seconds in this example).
-u makes the client a UDP client.
-w sets the TCP window size.
-b sets the target UDP bandwidth.

Note: The port in the client command line has to match the port shown on the server.
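
If port 5001 is in use, or you simply prefer a different one, both sides can be moved together; a quick sketch using port 5002 (any free port should work):

Server (PC):
iperf.exe -s -i2 -p5002
Client (EVM):
iperf -c 192.168.1.109 -t20 -i2 -w64k -p5002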

Once the client is started, we should see results on both client and server. For the TCP example we would see the following:
On client side:

root@am180x-evm:/usr/sbin# iperf -c 192.168.1.109 -t20 -i2 -w64k -p5001
------------------------------------------------------------
Client connecting to 192.168.1.109, TCP port 5001
TCP window size:   128 KByte (WARNING: requested 64.0 KByte)
------------------------------------------------------------
[  3] local 192.168.1.110 port 45551 connected with 192.168.1.109 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  2.03 MBytes  8.52 Mbits/sec
[  3]  2.0- 4.0 sec  1.77 MBytes  7.41 Mbits/sec
[  3]  4.0- 6.0 sec  2.11 MBytes  8.85 Mbits/sec
[  3]  6.0- 8.0 sec  2.01 MBytes  8.42 Mbits/sec
[  3]  8.0-10.0 sec  1.89 MBytes  7.93 Mbits/sec
[  3] 10.0-12.0 sec  2.05 MBytes  8.62 Mbits/sec
[  3] 12.0-14.0 sec  1.95 MBytes  8.16 Mbits/sec
[  3] 14.0-16.0 sec  1.80 MBytes  7.54 Mbits/sec
[  3] 16.0-18.0 sec  1.92 MBytes  8.06 Mbits/sec
[  3] 18.0-20.0 sec  1.82 MBytes  7.63 Mbits/sec
[  3]  0.0-20.0 sec  19.4 MBytes  8.11 Mbits/sec

On server side:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1848] local 192.168.1.109 port 5001 connected with 192.168.1.110 port 45551
[ ID] Interval       Transfer     Bandwidth
[1848]  0.0- 2.0 sec  1.94 MBytes  8.13 Mbits/sec
[1848]  2.0- 4.0 sec  1.88 MBytes  7.87 Mbits/sec
[1848]  4.0- 6.0 sec  2.11 MBytes  8.85 Mbits/sec
[1848]  6.0- 8.0 sec  1.99 MBytes  8.34 Mbits/sec
[1848]  8.0-10.0 sec  1.86 MBytes  7.79 Mbits/sec
[1848] 10.0-12.0 sec  2.10 MBytes  8.80 Mbits/sec
[1848] 12.0-14.0 sec  1.93 MBytes  8.10 Mbits/sec
[1848] 14.0-16.0 sec  1.81 MBytes  7.60 Mbits/sec
[1848] 16.0-18.0 sec  1.92 MBytes  8.06 Mbits/sec
[1848] 18.0-20.0 sec  1.81 MBytes  7.60 Mbits/sec
[1848]  0.0-20.0 sec  19.4 MBytes  8.11 Mbits/sec


The output shows each report interval, the amount of data transferred, and the bandwidth for that interval (for example, the 20-second total of 19.4 MBytes corresponds to 19.4 × 1024² × 8 bits / 20 s ≈ 8.1 Mbits/sec).

Upstream Test using NetPerf

If we are using NetPerf for the upstream test, the server side (PC) needs to run the netserver application. If the PC is running a Linux distribution, netserver might already be running; if not, please install it from your distribution's repository.
Unlike Iperf, the NetPerf client command configures both the client and server sides. Once netserver is running, we need to type the following command on the EVM:
For TCP

netperf -H 192.168.1.104 -D 2 -l 10 -t TCP_STREAM -f m -- -m 1472 -s64k -S64k

For UDP

netperf -H 192.168.1.104 -l 10 -t UDP_STREAM -f m -- -m 1472 -s64k -S64k

Where:

-H [remote host] specifies the name (or IP address) of the remote host where netserver is running.
-l specifies the length of the test in seconds.
-D sets the TCP_NODELAY option to true on both systems.
-t specifies the test type (TCP_STREAM or UDP_STREAM).
-m [number] specifies the message (packet) size.
-s and -S specify the local and remote socket buffer sizes respectively.
For the complete list of arguments, please refer to the NetPerf Manual.
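
To illustrate tuning these arguments, here is a hedged sketch that sweeps a few UDP message sizes against the same netserver; smaller datagrams generally yield lower throughput because of per-packet overhead:

# Run a 10-second UDP test for each message size
for size in 256 512 1024 1472; do
    netperf -H 192.168.1.104 -l 10 -t UDP_STREAM -f m -- -m $size -s64k -S64k
done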

TCP test output: after the test length elapses (10 seconds in our example, specified by -l), the result of the above TCP test command would be something like:

Sorry, Demo Mode not configured into this netperf.
please consider reconfiguring netperf with
--enable-demo=yes and recompiling
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.104 (192.168.1.104) port 0 AF_INET : interval
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

128000 128000   1472    10.07       9.36

UDP test output: after the test length elapses (10 seconds in our example, specified by -l), the result of the above UDP test command would be something like:

Sorry, Demo Mode not configured into this netperf.
please consider reconfiguring netperf with
--enable-demo=yes and recompiling
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.104 (192.168.1.104) port 0 AF_INET : interval
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

128000    1472   10.00        8128      0       9.57
128000           10.00        8078              9.51

For the UDP output, the first line of numbers contains statistics from the sending (netperf) side; the second line comes from the receiving (netserver) side. In this case, 8128 - 8078 = 50 messages did not make it all the way to the remote netserver process, a loss rate of roughly 0.6%.

Downstream Test Using IPerf

Here we will test the data bandwidth going from the AP towards the EVM.
We run Iperf as a client on the PC connected to the AP, and as a server on the EVM.

For the EVM (server side), use one of the following options:
For TCP Server:

root@am180x-evm:/usr/sbin# iperf -s -i2

For UDP Server:

root@am180x-evm:/usr/sbin# iperf -s -u -i2

Where:

-s means run as a server.
-i sets the interval between traffic reports (2 seconds in this example).
-u makes the server a UDP server.

You should see an output like:

------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:   108 KByte (default)
------------------------------------------------------------

As you can see in the example, the listening port is 5001, but this port can change on different machines.

For the PC (client side), use one of the following options:
For TCP Client:

C:\Projects\iperf>iperf -c 192.168.1.110 -t20 -i2 -w64k -p5001

For UDP Client:

C:\Projects\iperf>iperf -c 192.168.1.110 -t20 -u -i2 -b10M -p5001

Where:

The IP address is the address of the server we started earlier.
-c means run as a client.
-t sets the test duration in seconds (20 seconds in this example).
-i sets the interval between traffic reports (2 seconds in this example).
-u makes the client a UDP client.
-w sets the TCP window size.
-b sets the target UDP bandwidth.

Note: The port in the client command line has to match the port shown on the server.

Once the client is started, we should see results on both client and server. For the UDP example we would see the following:
On client side:

------------------------------------------------------------
Client connecting to 192.168.1.110, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 63.0 KByte (default)
------------------------------------------------------------
[1900] local 192.168.1.109 port 3582 connected with 192.168.1.110 port 5001
[ ID] Interval       Transfer     Bandwidth
[1900]  0.0- 2.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900]  2.0- 4.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900]  4.0- 6.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900]  6.0- 8.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900]  8.0-10.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900] 10.0-12.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900] 12.0-14.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900] 14.0-16.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900] 16.0-18.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900] 18.0-20.0 sec  2.38 MBytes  10.0 Mbits/sec
[1900]  0.0-20.0 sec  23.8 MBytes  9.99 Mbits/sec
[1900] Server Report:
[1900]  0.0-19.9 sec  23.8 MBytes  10.0 Mbits/sec  1.289 ms    6/17008 (0.035%)
[1900] Sent 17008 datagrams

On server side:

------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:   108 KByte (default)
------------------------------------------------------------

[  3] local 192.168.1.110 port 5001 connected with 192.168.1.109 port 3582
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0- 2.0 sec  2.46 MBytes  10.3 Mbits/sec  1.482 ms    6/ 1758 (0.34%)
[  3]  2.0- 4.0 sec  2.39 MBytes  10.0 Mbits/sec  1.399 ms    0/ 1702 (0%)
[  3]  4.0- 6.0 sec  2.38 MBytes  10.0 Mbits/sec  1.456 ms    0/ 1701 (0%)
[  3]  6.0- 8.0 sec  2.38 MBytes  9.99 Mbits/sec  1.519 ms    0/ 1699 (0%)
[  3]  8.0-10.0 sec  2.39 MBytes  10.0 Mbits/sec  1.526 ms    0/ 1703 (0%)
[  3] 10.0-12.0 sec  2.38 MBytes  10.0 Mbits/sec  1.250 ms    0/ 1700 (0%)
[  3] 12.0-14.0 sec  2.38 MBytes  10.0 Mbits/sec  1.631 ms    0/ 1700 (0%)
[  3] 14.0-16.0 sec  2.39 MBytes  10.0 Mbits/sec  1.257 ms    0/ 1703 (0%)
[  3] 16.0-18.0 sec  2.38 MBytes  9.99 Mbits/sec  1.720 ms    0/ 1699 (0%)
[  3]  0.0-19.9 sec  23.8 MBytes  10.0 Mbits/sec  1.290 ms    6/17008 (0.035%)

The output shows each report interval, the amount of data transferred, and the bandwidth for that interval.
In the UDP case there are additional columns showing the jitter in milliseconds and the lost datagrams.

Please note that there is a display bug in TCP downstream mode:
when the server runs on the EVM, it shows no data transferred except in the last interval. The client, however, shows the true result.

Downstream Test using NetPerf

This test is exactly like the Upstream Test using NetPerf. The only difference is that the netperf command is invoked on the PC towards the EVM, since we are measuring downstream; accordingly, netserver must be running on the EVM.
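
For example, assuming the EVM's address is 192.168.1.110 (as in the Iperf examples above), the commands on the PC would look like:

For TCP:
netperf -H 192.168.1.110 -D 2 -l 10 -t TCP_STREAM -f m -- -m 1472 -s64k -S64k
For UDP:
netperf -H 192.168.1.110 -l 10 -t UDP_STREAM -f m -- -m 1472 -s64k -S64k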


Calculating the WLAN utilization for TCP Upstream Test using Iperf

This test can provide an indication of the wireless overhead on the EVM.
For more information about this test, refer to CPU Utilization.

In order to calculate the WLAN utilization for TCP upstream, we invoke the same client command as a background process and discard its output:

iperf -c <Server IP> -t20 -i2 -w64k -p5001 > /dev/null &

The "> /dev/null" means dumping the application output to /dev/null which is basically eliminating the application output.
After starting the client in the background, invoke the command:

top

The 'top' command refreshes the screen every few seconds and shows the CPU %idle as well as iperf's CPU consumption.

Mem: 57176K used, 3076K free, 0K shrd, 2000K buff, 39692K cached
CPU:   2% usr  91% sys   0% nic   6% idle   0% io   0% irq   0% sirq
Load average: 0.57 0.29 0.18 3/69 1010
  PID  PPID USER     STAT   VSZ %MEM %CPU COMMAND
  443     2 root     SW       0   0%  29% [irq/207-wl1271]
 1007   740 root     S    19764  33%  23% iperf -c 192.168.1.109 -t20 -i2 -w64k -p5001
  835     2 root     SW       0   0%  12% [kworker/u:2]
 1004     2 root     SW       0   0%  12% [kworker/0:2]
  916     2 root     RW       0   0%  12% [kworker/u:0]
   28     2 root     SW       0   0%   5% [kworker/u:1]
 1010   740 root     R     3036   5%   1% top
  917     2 root     SW       0   0%   0% [kworker/u:3]
  613     1 root     S     2864   5%   0% udhcpc -R -b -p /var/run/udhcpc.eth0.p
    3     2 root     SW       0   0%   0% [ksoftirqd/0]
  723     1 root     S    48020  80%   0% /usr/bin/matrix_guiE -qws -display tra
  655     1 haldaemo S    12940  21%   0% /usr/sbin/hald
  750     1 root     S     4536   8%   0% wpa_supplicant -d -Dnl80211 -c/etc/wpa
  651     1 messageb S     3332   6%   0% /usr/bin/dbus-daemon --system
  680   656 root     S     3300   5%   0% /usr/libexec/hald-addon-cpufreq
  669   656 root     S     3288   5%   0% hald-addon-input: Listening on /dev/in
  668   656 root     S     3284   5%   0% /usr/libexec/hald-addon-rfkill-killswi
  656   655 root     S     3192   5%   0% hald-runner
  740   729 root     S     3036   5%   0% -sh
  706     1 root     S     2924   5%   0% /sbin/syslogd -n -C64 -m 20

Here you can see: idle = 6%, iperf = 23%.
You need to calculate the WLAN utilization as follows:
WLAN Utilization = 100 - (%idle + %iperf)
In our example: WLAN Utilization = 100 - (6 + 23) = 71%.
Do the calculation three times to get a reasonable average, and disregard the first 'top' snapshot as it displays faulty values.
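
To make the repeated sampling easier, here is a hedged sketch using top's batch mode (BusyBox top supports -b, -n and -d; other top variants may differ), capturing four snapshots two seconds apart so the first can be ignored:

top -b -n4 -d2 > /tmp/top.log
grep idle /tmp/top.log         # the CPU summary lines containing %idle
grep 'iperf -c' /tmp/top.log   # the iperf client's %CPU
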
Note: If you want to re-run the test, you need to make sure that the previous iperf command has finished. Otherwise you may end up with two running iperf
clients. If you do not wish to wait for iperf to finish, you can kill its process. We do that by invoking:

kill <Iperf PID>

In our example, looking at the 'top' output, we can see that the Iperf PID is 1007. It can also be obtained with the 'ps' command. For our example, then, we would invoke:

kill 1007
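
If the PID is not at hand, it can also be looked up by name in one step (assuming the pidof utility is available, as it is on most BusyBox-based systems):

kill $(pidof iperf)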
