
OMAP Wireless Connectivity WLAN Throughput Measurement


Purpose

This section will cover the following items:

  • Setting the CPU Clock
  • Throughput Measurement
  • CPU Utilization Measurement

Setting CPU Clock

For more information, refer to <CPU Clock Setup>.
Listing the available frequencies

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies


Setting the AM18x EVM frequency to 456 MHz

echo 456000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed


Querying the CPU frequency

cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
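
The steps above can be combined into a small script. Note that writing to scaling_setspeed only works while the 'userspace' governor is selected; the sketch below assumes the standard cpufreq sysfs layout shown above:

#!/bin/sh
# Select the userspace governor, set 456 MHz, then verify the result
echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo 456000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq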


Throughput Measurement

Introduction to TCP and UDP

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite. TCP is one of the two original components of the suite, complementing the Internet Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer.
For further information regarding TCP, refer to Transmission Control Protocol.

The User Datagram Protocol (UDP) is one of the core members of the Internet Protocol Suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communications to set up special transmission channels or data paths. UDP is considered an unreliable protocol.
For further information regarding UDP, refer to User Datagram Protocol.

Measurement Tools - NetPerf

Netperf is a benchmark that can be used to measure various aspects of networking performance. Its primary focus is on bulk data transfer and request/response performance using either TCP or UDP and the Berkeley Sockets interface.
The NetPerf client requires a NetServer instance running on the remote side to communicate with. If the remote side is running a Linux distribution, NetServer may already be running.
The NetPerf client command configures both the client and the server parameters, so only the client needs to be invoked. Run the NetPerf client on the transmitting host (see the sketch below):

  • To test EVM upstream throughput, run the NetPerf client on the EVM.
  • To test EVM downstream throughput, run the NetPerf client on a host connected to the EVM.
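
For example (a sketch using the NetPerf options from the tests below; substitute the actual IP addresses):

# Upstream test: run the client on the EVM, pointing at the host
netperf -H <Host IP> -l 20 -t TCP_STREAM

# Downstream test: run the client on the host, pointing at the EVM
netperf -H <EVM IP> -l 20 -t TCP_STREAM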

Measurement Tools - Iperf

Iperf was developed as a modern alternative for measuring maximum TCP and UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP characteristics, and it reports bandwidth, delay jitter, and datagram loss.
Iperf runs as either a client or a server depending on the arguments passed to it. Unlike NetPerf, with Iperf we need to configure both the client and the server sides.
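
A minimal Iperf pair looks as follows; the full tests below add interval (-i), window size (-w), and port (-p) options:

iperf -s              # receiving side (server)
iperf -c <Server IP>  # transmitting side (client)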

TCP/UDP Throughput Measurements

After reading the Measurement Tools section above, we are ready to run the tests.
The following setup is used for testing the throughput:

[Image: FAE summit TP test 1.png — throughput test setup]

One EVM is configured as a station and the other as an access point (AP); the two are connected via WLAN.

Once you have set your EVM to be a station or an AP, you can perform any of the tests below:

Hands On - Throughput Tests


TCP Upstream Test using Iperf

Server side:

iperf -s -i2 -p5001

Expected result:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.108 port 5001 connected with 192.168.1.109 port 46565
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  2.0- 4.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  4.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  6.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  8.0-10.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 10.0-12.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 12.0-14.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 14.0-16.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 16.0-18.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 18.0-20.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  0.0-20.1 sec  17.3 MBytes  7.24 Mbits/sec

Note:

  • The result will be shown after the client is activated.
  • Iperf reports faulty interim results (0.00 bits/sec) for the TCP test on the server side only; the final summary line is correct.

Client side:

iperf -c <Server IP> -t20 -i2 -w64k -p5001

Expected result:

------------------------------------------------------------
Client connecting to 192.168.1.108, TCP port 5001
TCP window size:   128 KByte (WARNING: requested 64.0 KByte)
------------------------------------------------------------
[  3] local 192.168.1.109 port 46565 connected with 192.168.1.108 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  1.97 MBytes  8.26 Mbits/sec
[  3]  2.0- 4.0 sec  1.75 MBytes  7.34 Mbits/sec
[  3]  4.0- 6.0 sec  1.62 MBytes  6.78 Mbits/sec
[  3]  6.0- 8.0 sec  1.63 MBytes  6.85 Mbits/sec
[  3]  8.0-10.0 sec  1.86 MBytes  7.80 Mbits/sec
[  3] 10.0-12.0 sec  1.68 MBytes  7.05 Mbits/sec
[  3] 12.0-14.0 sec  1.84 MBytes  7.70 Mbits/sec
[  3] 14.0-16.0 sec  1.66 MBytes  6.95 Mbits/sec
[  3] 16.0-18.0 sec  1.66 MBytes  6.95 Mbits/sec
[  3] 18.0-20.0 sec  1.67 MBytes  7.01 Mbits/sec
[  3]  0.0-20.0 sec  17.3 MBytes  7.27 Mbits/sec


Calculating the WLAN Utilization for TCP Upstream Test using Iperf

This test gives an indication of the wireless processing overhead on the EVM.
For more information, refer to <CPU Utilization>.

In order to calculate the WLAN utilization for TCP upstream, we invoke the same client command as a background process and discard its output:

iperf -c <Server IP> -t20 -i2 -w64k -p5001 > /dev/null &

The "> /dev/null" means dumping the application output to /dev/null which is basically eliminating the application output.
After starting the client as a background process, invoke the command:

top

The 'top' command refreshes its display every few seconds and shows the CPU %idle figure as well as the CPU share consumed by each process, including iperf.

Mem: 57176K used, 3076K free, 0K shrd, 2000K buff, 39692K cached
CPU:   2% usr  91% sys   0% nic   6% idle   0% io   0% irq   0% sirq
Load average: 0.57 0.29 0.18 3/69 1010
  PID  PPID USER     STAT   VSZ %MEM %CPU COMMAND
  443     2 root     SW       0   0%  29% [irq/207-wl1271]
 1007   740 root     S    19764  33%  23% iperf -c 192.168.1.109 -t20 -i2 -w64k -p5001
  835     2 root     SW       0   0%  12% [kworker/u:2]
 1004     2 root     SW       0   0%  12% [kworker/0:2]
  916     2 root     RW       0   0%  12% [kworker/u:0]
   28     2 root     SW       0   0%   5% [kworker/u:1]
 1010   740 root     R     3036   5%   1% top 
  917     2 root     SW       0   0%   0% [kworker/u:3]
  613     1 root     S     2864   5%   0% udhcpc -R -b -p /var/run/udhcpc.eth0.p
    3     2 root     SW       0   0%   0% [ksoftirqd/0]
  723     1 root     S    48020  80%   0% /usr/bin/matrix_guiE -qws -display tra
  655     1 haldaemo S    12940  21%   0% /usr/sbin/hald 
  750     1 root     S     4536   8%   0% wpa_supplicant -d -Dnl80211 -c/etc/wpa
  651     1 messageb S     3332   6%   0% /usr/bin/dbus-daemon --system 
  680   656 root     S     3300   5%   0% /usr/libexec/hald-addon-cpufreq 
  669   656 root     S     3288   5%   0% hald-addon-input: Listening on /dev/in
  668   656 root     S     3284   5%   0% /usr/libexec/hald-addon-rfkill-killswi
  656   655 root     S     3192   5%   0% hald-runner 
  740   729 root     S     3036   5%   0% -sh 
  706     1 root     S     2924   5%   0% /sbin/syslogd -n -C64 -m 20

Here you can see: idle = 6%, iperf = 23%.
Calculate the WLAN utilization as follows:
WLAN Utilization = 100 - (%idle + %iperf)
In our example: WLAN Utilization = 100 - (6 + 23) = 71%.
Repeat the calculation three times to get a reasonable average, and disregard the first 'top' snapshot as it displays faulty values.
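
As a quick check, the arithmetic can be done directly in the shell, using the values read from the 'top' snapshot above:

idle=6; iperf=23
echo $((100 - (idle + iperf)))    # prints 71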
Note

If you want to re-run the test, make sure the previous iperf client has finished; otherwise you may end up with two running iperf clients. If you do not wish to wait for iperf to finish, you can kill its process by invoking:
kill <Iperf PID>
In our example, the 'top' output shows that the Iperf PID is 1007 (it can also be obtained with the 'ps' command), so we would invoke:
kill 1007
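
Alternatively, if pidof is available on your target filesystem (it is a standard BusyBox applet), the client can be killed without looking up the PID manually:

kill $(pidof iperf)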



TCP Downstream Test using Iperf

Server side:

iperf -s -i2 -p5001

Expected result:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.108 port 5001 connected with 192.168.1.109 port 46565
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  2.0- 4.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  4.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  6.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  8.0-10.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 10.0-12.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 12.0-14.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 14.0-16.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 16.0-18.0 sec  0.00 Bytes  0.00 bits/sec
[  4] 18.0-20.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  0.0-20.1 sec  17.3 MBytes  7.24 Mbits/sec

Note:

  • The result will be shown after the client is activated.
  • Iperf reports faulty interim results (0.00 bits/sec) for the TCP test on the server side only; the final summary line is correct.

Client side:

iperf -c <Server IP> -t20 -i2 -w64k -p5001

Expected result:

------------------------------------------------------------
Client connecting to 192.168.1.108, TCP port 5001
TCP window size:   128 KByte (WARNING: requested 64.0 KByte)
------------------------------------------------------------
[  3] local 192.168.1.109 port 46565 connected with 192.168.1.108 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  1.97 MBytes  8.26 Mbits/sec
[  3]  2.0- 4.0 sec  1.75 MBytes  7.34 Mbits/sec
[  3]  4.0- 6.0 sec  1.62 MBytes  6.78 Mbits/sec
[  3]  6.0- 8.0 sec  1.63 MBytes  6.85 Mbits/sec
[  3]  8.0-10.0 sec  1.86 MBytes  7.80 Mbits/sec
[  3] 10.0-12.0 sec  1.68 MBytes  7.05 Mbits/sec
[  3] 12.0-14.0 sec  1.84 MBytes  7.70 Mbits/sec
[  3] 14.0-16.0 sec  1.66 MBytes  6.95 Mbits/sec
[  3] 16.0-18.0 sec  1.66 MBytes  6.95 Mbits/sec
[  3] 18.0-20.0 sec  1.67 MBytes  7.01 Mbits/sec
[  3]  0.0-20.0 sec  17.3 MBytes  7.27 Mbits/sec

For WLAN Utilization measurement, refer to WLAN Utilization for TCP Upstream above.

TCP Upstream Test using NetPerf

Client side:

netperf -H <Server IP> -D 2 -l 20 -t TCP_STREAM -f m -- -m 1472 -s64k -S64k

Expected result:

Sorry, Demo Mode not configured into this netperf.
please consider reconfiguring netperf with
--enable-demo=yes and recompiling
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.108 (192.168.1.108) port 0 AF_INET : interval
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

128000 128000   1472    20.10       6.10   
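
The 'Demo Mode' message appears because the -D option (interim results) requires a netperf binary compiled with --enable-demo, as the output itself states. If interim results are not needed, the option can simply be dropped:

netperf -H <Server IP> -l 20 -t TCP_STREAM -f m -- -m 1472 -s64k -S64k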

The server side must be running the netserver process under Linux. You can check whether it is running with the 'ps' command:
For EVM:

ps | grep netserver

For PC:

ps -A | grep netserver

If netserver is not running, install it and run it as a service:

./netserver &

For WLAN Utilization measurement, refer to WLAN Utilization for TCP Upstream above.

TCP Downstream Test using NetPerf

Client side:

netperf -H <Server IP> -D 2 -l 20 -t TCP_STREAM -f m -- -m 1472 -s64k -S64k

Expected result:

Sorry, Demo Mode not configured into this netperf.
please consider reconfiguring netperf with
--enable-demo=yes and recompiling
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.108 (192.168.1.108) port 0 AF_INET : interval
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

128000 128000   1472    20.10       6.10   

The server side must be running the netserver process under Linux. You can check whether it is running with the 'ps' command:
For EVM:

ps | grep netserver

For PC:

ps -A | grep netserver

If netserver is not running, install it and run it as a service:

./netserver &

For WLAN Utilization measurement, refer to WLAN Utilization for TCP Upstream above.

UDP Upstream Test using Iperf

Server side:

iperf -s -u -i2 -p5001

Expected Result:

------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:   122 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.108 port 5001 connected with 192.168.1.109 port 34954
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0- 2.0 sec  2.15 MBytes  9.04 Mbits/sec  1.056 ms    0/ 1537 (0%)
[  3]  2.0- 4.0 sec  2.12 MBytes  8.88 Mbits/sec  1.173 ms    0/ 1511 (0%)
[  3]  4.0- 6.0 sec  2.21 MBytes  9.27 Mbits/sec  0.932 ms    1/ 1577 (0.063%)
[  3]  6.0- 8.0 sec  2.20 MBytes  9.23 Mbits/sec  1.159 ms    0/ 1569 (0%)
[  3]  8.0-10.0 sec  2.18 MBytes  9.14 Mbits/sec  1.293 ms    0/ 1555 (0%)
[  3] 10.0-12.0 sec  2.01 MBytes  8.42 Mbits/sec  2.418 ms    6/ 1438 (0.42%)
[  3] 12.0-14.0 sec  2.20 MBytes  9.21 Mbits/sec  0.948 ms   90/ 1657 (5.4%)
[  3] 14.0-16.0 sec  2.06 MBytes  8.63 Mbits/sec  2.068 ms    0/ 1468 (0%)
[  3] 16.0-18.0 sec  2.19 MBytes  9.17 Mbits/sec  1.337 ms    1/ 1561 (0.064%)
[  3]  0.0-20.0 sec  21.4 MBytes  8.97 Mbits/sec  0.863 ms  194/15442 (1.3%)
[  3]  0.0-20.0 sec  1 datagrams received out-of-order

Note: The result will be shown after the client is activated.
Client side:

iperf -c <Server IP> -t20 -u -i2 -b10M -p5001

Expected result:

------------------------------------------------------------
Client connecting to 192.168.1.108, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:   108 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.109 port 34954 connected with 192.168.1.108 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  2.15 MBytes  9.03 Mbits/sec
[  3]  2.0- 4.0 sec  2.13 MBytes  8.93 Mbits/sec
[  3]  4.0- 6.0 sec  2.19 MBytes  9.20 Mbits/sec
[  3]  6.0- 8.0 sec  2.22 MBytes  9.30 Mbits/sec
[  3]  8.0-10.0 sec  2.18 MBytes  9.16 Mbits/sec
[  3] 10.0-12.0 sec  2.18 MBytes  9.15 Mbits/sec
[  3] 12.0-14.0 sec  2.14 MBytes  8.96 Mbits/sec
[  3] 14.0-16.0 sec  2.09 MBytes  8.77 Mbits/sec
[  3] 16.0-18.0 sec  2.20 MBytes  9.21 Mbits/sec
[  3] 18.0-20.0 sec  2.17 MBytes  9.09 Mbits/sec
[  3]  0.0-20.0 sec  21.6 MBytes  9.08 Mbits/sec
[  3] Sent 15443 datagrams
[  3] Server Report:
[  3]  0.0-20.0 sec  21.4 MBytes  8.97 Mbits/sec  0.863 ms  194/15442 (1.3%)
[  3]  0.0-20.0 sec  1 datagrams received out-of-order
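
Note that -b10M sets the offered UDP load to 10 Mbits/sec. To look for the saturation point, re-run the client with a higher offered load and watch the loss percentage in the server report, for example:

iperf -c <Server IP> -t20 -u -i2 -b20M -p5001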

For WLAN Utilization measurement, refer to WLAN Utilization for TCP Upstream above.

UDP Downstream Test using Iperf

Server side:

iperf -s -u -i2 -p5001

Expected Result:

------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:   122 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.108 port 5001 connected with 192.168.1.109 port 34954
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0- 2.0 sec  2.15 MBytes  9.04 Mbits/sec  1.056 ms    0/ 1537 (0%)
[  3]  2.0- 4.0 sec  2.12 MBytes  8.88 Mbits/sec  1.173 ms    0/ 1511 (0%)
[  3]  4.0- 6.0 sec  2.21 MBytes  9.27 Mbits/sec  0.932 ms    1/ 1577 (0.063%)
[  3]  6.0- 8.0 sec  2.20 MBytes  9.23 Mbits/sec  1.159 ms    0/ 1569 (0%)
[  3]  8.0-10.0 sec  2.18 MBytes  9.14 Mbits/sec  1.293 ms    0/ 1555 (0%)
[  3] 10.0-12.0 sec  2.01 MBytes  8.42 Mbits/sec  2.418 ms    6/ 1438 (0.42%)
[  3] 12.0-14.0 sec  2.20 MBytes  9.21 Mbits/sec  0.948 ms   90/ 1657 (5.4%)
[  3] 14.0-16.0 sec  2.06 MBytes  8.63 Mbits/sec  2.068 ms    0/ 1468 (0%)
[  3] 16.0-18.0 sec  2.19 MBytes  9.17 Mbits/sec  1.337 ms    1/ 1561 (0.064%)
[  3]  0.0-20.0 sec  21.4 MBytes  8.97 Mbits/sec  0.863 ms  194/15442 (1.3%)
[  3]  0.0-20.0 sec  1 datagrams received out-of-order

Note: The result will be shown after the client is activated.
Client side:

iperf -c <Server IP> -t20 -u -i2 -b10M -p5001

Expected result:

------------------------------------------------------------
Client connecting to 192.168.1.108, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:   108 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.109 port 34954 connected with 192.168.1.108 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  2.15 MBytes  9.03 Mbits/sec
[  3]  2.0- 4.0 sec  2.13 MBytes  8.93 Mbits/sec
[  3]  4.0- 6.0 sec  2.19 MBytes  9.20 Mbits/sec
[  3]  6.0- 8.0 sec  2.22 MBytes  9.30 Mbits/sec
[  3]  8.0-10.0 sec  2.18 MBytes  9.16 Mbits/sec
[  3] 10.0-12.0 sec  2.18 MBytes  9.15 Mbits/sec
[  3] 12.0-14.0 sec  2.14 MBytes  8.96 Mbits/sec
[  3] 14.0-16.0 sec  2.09 MBytes  8.77 Mbits/sec
[  3] 16.0-18.0 sec  2.20 MBytes  9.21 Mbits/sec
[  3] 18.0-20.0 sec  2.17 MBytes  9.09 Mbits/sec
[  3]  0.0-20.0 sec  21.6 MBytes  9.08 Mbits/sec
[  3] Sent 15443 datagrams
[  3] Server Report:
[  3]  0.0-20.0 sec  21.4 MBytes  8.97 Mbits/sec  0.863 ms  194/15442 (1.3%)
[  3]  0.0-20.0 sec  1 datagrams received out-of-order

For WLAN Utilization measurement, refer to WLAN Utilization for TCP Upstream above.

UDP Upstream Test using NetPerf

Client side:

netperf -H <Server IP> -l 20 -t UDP_STREAM -f m -- -m 1472 -s64k -S64k

Expected result:

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.108 (192.168.1.108) port 0 AF_INET : interval
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

128000    1472   20.00       12778      0       7.52
128000           20.00       11349              6.68

The first line of numbers contains statistics from the sending (netperf) side; the second line is from the receiving (netserver) side. In this case, 12778 - 11349 = 1429 messages did not make it all the way to the remote netserver process.
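
The loss ratio follows directly from these two numbers; for the run above it can be computed with awk (a standard BusyBox applet):

awk 'BEGIN { printf "%.1f%% lost\n", (12778 - 11349) * 100 / 12778 }'    # prints 11.2% lost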
The server side must be running the netserver process under Linux. You can check whether it is running with the 'ps' command:
For EVM:

ps | grep netserver

For PC:

ps -A | grep netserver

If netserver is not running, install it and run it as a service:

./netserver &

For WLAN Utilization measurement, refer to WLAN Utilization for TCP Upstream above.

UDP Downstream Test using NetPerf

Client side:

netperf -H <Server IP> -l 20 -t UDP_STREAM -f m -- -m 1472 -s64k -S64k

Expected result:

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.108 (192.168.1.108) port 0 AF_INET : interval
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

128000    1472   20.00       12778      0       7.52
128000           20.00       11349              6.68

The first line of numbers contains statistics from the sending (netperf) side; the second line is from the receiving (netserver) side. In this case, 12778 - 11349 = 1429 messages did not make it all the way to the remote netserver process.
The server side must be running the netserver process under Linux. You can check whether it is running with the 'ps' command:
For EVM:

ps | grep netserver

For PC:

ps -A | grep netserver

If netserver is not running, install it and run it as a service:

./netserver &

For WLAN Utilization measurement, refer to WLAN Utilization for TCP Upstream above.

