TI81XX UDP Performance Improvement

Linux PSP

UDP Performance Analysis

Figure: UDP stack flow

UDP has no flow control mechanism, so packets can be dropped at several levels of the network stack and hardware when resources run out. Packet loss can occur at the following levels.

  • EMAC Receive DMA Descriptors
    • Packets are dropped in hardware when the Rx DMA cannot find a free descriptor for a received packet
  • Network Stack Queue
  • Socket Buffer Queue

EMAC Receive DMA Descriptors

When a burst of packets arrives and the driver cannot service the completed Rx descriptors fast enough, the hardware reports DMA Rx overruns. These can be identified from the hardware overrun counters, which can be read with the following commands (a sketch that reads the same registers from user space follows the list).

  • TI816X
    • Rx SOF Overruns - devmem2 0x4A100284
    • Rx MOF Overruns - devmem2 0x4A100288
    • Rx DMA Overruns - devmem2 0x4A10028C
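
The counters can also be read programmatically. The following is a minimal sketch, assuming the statistics registers are 32-bit and sit at the TI816X addresses listed above; it maps the containing page through /dev/mem, which is essentially what devmem2 does.

    /* Read the TI816X EMAC Rx overrun counters via /dev/mem.
     * Addresses are taken from the list above; offsets are relative to
     * the 4 KB page at 0x4A100000. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define STATS_PAGE   0x4A100000UL   /* page containing the counters */
    #define RX_SOF_OVR   0x284          /* 0x4A100284: Rx SOF Overruns  */
    #define RX_MOF_OVR   0x288          /* 0x4A100288: Rx MOF Overruns  */
    #define RX_DMA_OVR   0x28C          /* 0x4A10028C: Rx DMA Overruns  */

    static uint32_t read_reg(const char *map, unsigned long off)
    {
        return *(volatile const uint32_t *)(map + off);
    }

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        const char *map = mmap(NULL, 0x1000, PROT_READ, MAP_SHARED, fd, STATS_PAGE);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        printf("Rx SOF Overruns: %u\n", (unsigned)read_reg(map, RX_SOF_OVR));
        printf("Rx MOF Overruns: %u\n", (unsigned)read_reg(map, RX_MOF_OVR));
        printf("Rx DMA Overruns: %u\n", (unsigned)read_reg(map, RX_DMA_OVR));

        munmap((void *)map, 0x1000);
        close(fd);
        return 0;
    }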

Performance can be improved by increasing the number of DMA Rx descriptors.

Increasing DMA descriptors in DM81XX

  • Move the DMA descriptors from internal BD RAM to DDR
  • Increase the size of the descriptor memory
  • Increase the number of Rx descriptors queued to the hardware

Network Stack Queue

If the driver is able to queue the packet burst to the network stack but network performance is still poor, the network stack queue is overflowing. This shows up in the Rx dropped field reported by the ifconfig command; the underlying per-interface counter can also be read from sysfs, as sketched below.
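
A minimal sketch of polling that statistic directly (assuming the interface is eth0; adjust the path for other interfaces):

    /* Print the rx_dropped statistic for eth0 from sysfs. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long dropped = 0;
        FILE *f = fopen("/sys/class/net/eth0/statistics/rx_dropped", "r");

        if (!f) {
            perror("rx_dropped");
            return 1;
        }
        if (fscanf(f, "%llu", &dropped) == 1)
            printf("eth0 RX dropped: %llu\n", dropped);
        fclose(f);
        return 0;
    }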

Increase Network Stack Queue

  • sysctl -p | grep mem
    • This displays your current buffer settings. Save these values so you can roll back the changes later (a small program that prints them is sketched after this list).
  • sysctl -w net.core.rmem_max=8388608
    • This sets the max OS receive buffer size for all types of connections.
  • sysctl -w net.core.wmem_max=8388608
    • This sets the max OS send buffer size for all types of connections.
  • sysctl -w net.core.rmem_default=65536
    • This sets the default OS receive buffer size for all types of connections.
  • sysctl -w net.core.wmem_default=65536
    • This sets the default OS send buffer size for all types of connections.
  • sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
    • TCP Autotuning setting. "The tcp_mem variable defines how the TCP stack should behave when it comes to memory usage. The first value specified in the tcp_mem variable tells the kernel the low threshold. Below this point, the TCP stack does not bother at all about putting any pressure on the memory usage by different TCP sockets. The second value tells the kernel at which point to start pressuring memory usage down. The final value tells the kernel how many memory pages it may use maximally. If this value is reached, TCP streams and packets start getting dropped until we reach a lower memory usage again. This value includes all TCP sockets currently in use."
  • sysctl -w net.ipv4.udp_mem='4096 87380 8388608'
    • UDP Autotuning setting. "The udp_mem variable defines how the UDP stack should behave when it comes to memory usage. The first value specified in the udp_mem variable tells the kernel the low threshold. Below this point, the UDP stack does not bother at all about putting any pressure on the memory usage by different UDP sockets. The second value tells the kernel at which point to start pressuring memory usage down. The final value tells the kernel how many memory pages it may use maximally. If this value is reached, UDP streams and packets start getting dropped until we reach a lower memory usage again. This value includes all UDP sockets currently in use."
  • sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
    • TCP Autotuning setting. "The first value tells the kernel the minimum receive buffer for each TCP connection, and this buffer is always allocated to a TCP socket, even under high pressure on the system. The second value specified tells the kernel the default receive buffer allocated for each TCP socket. This value overrides the /proc/sys/net/core/rmem_default value used by other protocols. The third and last value specified in this variable specifies the maximum receive buffer that can be allocated for a TCP socket."
  • sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
    • TCP Autotuning setting. "This variable takes 3 different values which holds information on how much TCP send-buffer memory space each TCP socket has to use. Every TCP socket has this much buffer space to use before the buffer is filled up. Each of the three values is used under different conditions. ... The first value in this variable tells the minimum TCP send buffer space available for a single TCP socket. ... The second value in the variable tells us the default buffer space allowed for a single TCP socket to use. ... The third value tells the kernel the maximum TCP send buffer space."
  • sysctl -w net.ipv4.route.flush=1
    • This will ensure that immediately subsequent connections use these values.
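
To record the values before tuning, or to confirm that the new values took effect, the settings can also be read back through /proc/sys; each sysctl name maps to a path, e.g. net.core.rmem_max to /proc/sys/net/core/rmem_max. A minimal sketch:

    /* Print the current values of the core and UDP buffer sysctls. */
    #include <stdio.h>

    static void print_sysctl(const char *path)
    {
        char buf[128];
        FILE *f = fopen(path, "r");

        if (f && fgets(buf, sizeof(buf), f))
            printf("%-40s %s", path, buf);
        else
            printf("%-40s <unavailable>\n", path);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        print_sysctl("/proc/sys/net/core/rmem_max");
        print_sysctl("/proc/sys/net/core/wmem_max");
        print_sysctl("/proc/sys/net/core/rmem_default");
        print_sysctl("/proc/sys/net/core/wmem_default");
        print_sysctl("/proc/sys/net/ipv4/udp_mem");
        return 0;
    }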

Socket Buffer Queue

If there are no drops in hardware or in the network stack queue but the application is still not receiving all UDP packets, the socket buffer queue is overflowing. This can be avoided by increasing the receive buffer size of the socket created by the application.

Increase Socket Buffer Queue

The socket buffer queue can be enlarged by calling the setsockopt() API with the SO_RCVBUF option (see the sketch after the note below).

Note: The socket buffer size must be less than or equal to the network stack limit (sysctl_rmem_max, i.e. net.core.rmem_max).
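
A minimal sketch of requesting a larger receive buffer for a UDP socket; the 4 MB value is only an example, and the kernel caps the effective size at net.core.rmem_max:

    /* Enlarge the receive buffer of a UDP socket with SO_RCVBUF and read
     * back the size actually granted by the kernel. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
            perror("socket");
            return 1;
        }

        int rcvbuf = 4 * 1024 * 1024;   /* requested size in bytes (example) */
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
            perror("setsockopt SO_RCVBUF");

        socklen_t len = sizeof(rcvbuf);
        if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
            printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);

        /* ... bind() and recvfrom() as usual ... */
        close(sock);
        return 0;
    }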

TI816X Performance Results

UDP performance

  Bandwidth (Mbps)   Packets dropped (%)
  10                 0
  20                 0
  50                 0
  100                0.2


