TransportNetLib UsersGuide
Scope
TransportNetLib provides software APIs to utilize the Network Coprocessor (NETCP) hardware accelerator functionality offered by KeyStone devices. The purpose of this document is to introduce system level details, and it is intended for software application developers using the TransportNetLib software package. The document includes a brief overview of the APIs exposed by each module for illustration purposes and is subject to modification. Refer to the latest software release package for more comprehensive API details.
A brief overview of peripheral capability is included in the document. For additional details on the device and peripherals, refer to the URLs and reference documentation below.
References
No | Document | Control Number/Date | Description |
1 | LLD Documentation | Revision A | http://processors.wiki.ti.com/index.php/LLD_User_Gude |
2 | SPRUGZ6 | November 2010 | KeyStone Architecture Network Coprocessor (NETCP) User Guide |
3 | SPRUGR9 | April 2011 | KeyStone Architecture Multicore Navigator User Guide |
4 | SPRUGV9 | November 2010 | KeyStone Architecture Gigabit Ethernet (GbE) Switch Subsystem |
5 | SPRUGV4 | March 2011 | KeyStone Architecture Power Sleep Controller (PSC) |
6 | SPRUGY6 | May 2011 | KeyStone Architecture Security Accelerator (SA) |
7 | SPRUGS4 | November 2010 | KeyStone Architecture Packet Accelerator (PA) |
8 | NETCP Developer Guide | July 2011 | NETCP developer guide available under: pdk_xx/docs/NETCPDeveloperGuide_xx.pdf |
Definitions
Acronym | Meaning |
AES | Advanced Encryption Standard |
API | Application Programming Interface |
BIOSMCSDK | SYS/BIOS Multicore Software Development Kit |
CPPI | Communications Port Programming Interface (Multicore Navigator) |
EMAC | Ethernet Media Access Controller Sub module |
ESP | Encapsulating Security Payload |
GbE | Gigabit Ethernet |
GTP-U | GPRS Tunneling Protocol (User Plane) |
HPLIB | High Performance Library |
IP | Internet Protocol |
IPSec | Internet Protocol security |
IPSec Mgr | Internet Protocol Security Manager |
L2 | Layer 2: MAC |
L3 | Layer 3: IP |
L4 | Layer 4: TCP/UDP |
L5 | Layer 5: GTP-U |
LLD | Low Level Driver |
MAC | Media Access Control |
NETAPI | Network API |
NETCP | Network Coprocessor |
NWAL | Network Adaptation Layer for NETCP Peripheral |
OSAL | Operating System Abstraction Layer |
PA | Packet Accelerator |
PALLD | Packet Accelerator Low Level Driver |
PDK | Platform Development Kit |
PDSP | Packed data structure processor |
PKTDMA | Packet DMA |
PKTIO | Packet Input Output |
PKTLIB | Packet Library |
PSC | Power Sleep Controller |
QMSS | Queue Manager Subsystem |
QoS | Quality of Service |
SA | Security Accelerator |
SALLD | Security Accelerator Low Level Driver |
SGMII | Serial Gigabit Media Independent Interface |
TCP | Transmission Control Protocol |
UDP | User Datagram Protocol |
Overview
From the application's view, the hardware accelerator based solution available on KeyStone devices for offloading Ethernet data path traffic can be broadly classified into two categories:
- Hardware: NETCP peripheral supports multiple hardware offload operations including:
- Packet Accelerator to perform:
- Packet header classification such as header matching
- Packet modification operations such as checksum generation
- Security Accelerator to perform crypto operations, including encryption, decryption and authentication of data packets
- Packet DMA (PKTDMA) controller in the NETCP peripheral enables transfer of data and control packets between NETCP and the host. Packets are carried in containers called descriptors, which can be moved between hardware queues. A subset of the hardware queues is dedicated to the NETCP peripheral. PKTDMA flows can be configured by the host to specify the buffers and descriptors used for packets forwarded by the NETCP peripheral.
- Queue Manager Subsystem (QMSS) controls the behavior of the hardware queues and enables routing of descriptors. Both PKTDMA and QMSS are central parts of the Multicore Navigator functionality available in KeyStone devices. Refer to [REF: 2 & REF: 3] for additional details regarding the NETCP peripheral and the Multicore Navigator.
- Software: Delivered as the TransportNetLib package in MCSDK, which includes:
- NETAPI: Data Path configuration, Packet Send/Receive APIs
- HPLIB (High Performance Lib): Low Overhead user mode library optimized for fast path applications.
- Packet Heap Management Lib
Note: The transport library is currently restricted to use in one user space process only. That process may be multi-threaded and the library is optimized for the low overhead Linux use case, where the worker threads are each dedicated to their own fast path core. This is not a requirement, however, for the library; it can be used in cases where the worker threads all run on one core or where they can run on all cores.
Figure 1 provides an overview of the software modules included in the TransportNetLib package. TransportNetLib software available in MCSDK provides a highly optimized platform abstraction for applications leveraging the various hardware offload features supported by the NETCP peripheral. The software enables easier integration with customer applications and fast path stacks. Table 2 provides a brief overview of the different modules.

Functionality | Software Module |
Data path configuration and packet send/receive APIs | NETAPI |
Low overhead user mode library optimized for fast path applications | HPLIB |
Packet heap management library | PKTLIB |
Interface for offloading IPSec security policies and associated security associations from the Linux kernel | IPSec Mgr |
Network monitoring on behalf of fast path applications for any configuration updates | |
Network adaptation layer interfacing to NETCP | NWAL |
Low level NETCP drivers | LLDs |
Data Path Example
Packet Receive at Host
For the ingress traffic from network to host, NETCP allows classification lookup offload and policy checks before packets are redirected to application. Figure 2 provides an overview of NETCP offload functionality for packets received from network.

Packet Transmit to Network
NETCP offloads many processor intensive operations, e.g. checksum computation and crypto, to hardware, minimizing the cycles consumed by software. In addition, it allows flexible redirection of packets out of a specific Gigabit Ethernet port. Figure 3 illustrates the detailed steps involved in the packet transmission path while enabling the offload functionality. The only additional step required from the application is preparation of a command label which is attached to the packet during transmission.

Module Details
This section provides a more detailed description of the various modules included in TransportNetLib.
NETAPI
NETAPI provides the user space interface to device transport resources.
- NETAPI: Common subsystem level APIs
- NETAPI-CFG: Data path configuration
- NETAPI-SECURITY: Security association configuration
- NETAPI-PKTIO: Packet send/receive APIs
- NETAPI-Scheduler: Optional scheduler APIs
Common subsystem functionality includes:
- Initialization of resources for: PKTIO, PKTLIB, NWAL & Multicore Navigator
- Initialization of PKTLIB heaps and PKTDMA resources for packets between NETCP and host. Refer to [REF: 3 & REF: 8] for additional details regarding the various PKTDMA resources
- Creation of default classification rules at NETCP
- Interface for:
- Polling NETCP configuration responses
- Maintenance of PKTLIB heaps created by NETAPI and the application
Brief API overview:
Description | API |
Initialize global system level resource. | netapi_init |
Returns version of NETAPI module | netapi_getVersion |
De-allocates all global system level resources | netapi_shutdown |
Retrieve PKTLIB interface table. To be used for heaps created by application which will be managed by NETAPI. | netapi_getPktlibIfTable |
Returns the amount of free memory available for additional PKTLIB Heap buffers | netapi_getBufMemRemainder |
Returns the amount of free memory available for allocating additional PKTLIB heap descriptors | netapi_getDescRemainder |
Get the default NETCP flow that is used for receiving packets | netapi_getDefaultFlow |
Retrieve default NETCP route (flow + destination PKTIO channel) | netapi_getDefaultRoute |
API is used to set a piece of application-provided opaque data in netapi instance. | netapi_setCookie |
API is used to return a piece of application-provided opaque data that has been stored in the netapi instance. | netapi_getCookie |
API to poll for NETCP configuration response messages | netapi_netcpPoll |
Poll for the maintenance of PKTLIB heaps maintained by NETAPI | netapi_pollHeapGarbage |
NETAPI CFG: Configuration APIs
Configuration support includes:
- Setting up classification of packets inside of NETCP based on MAC address or VLAN L2 header fields
- Setting up classification based on L3 header fields: IP address, protocol, type of service
- Setting up classification based on Level 4 ports or L5 GTP-U tunnel ID.
The configuration APIs are used to insert and remove classification rules in the NETCP hardware lookup engines. Each API also defines the route specifying how a packet that matches the rule (but no further rule) needs to be processed. The route defines which buffer pool(s) to use for packet storage and which PKTIO channel (queue) to route the packet to. For L4 rules, a drop action can also be specified.
Each configuration API returns a handle, the Application ID (APPID), for further reference to the corresponding NETCP rule. The APPID value is also registered with NETCP during configuration and will be available in the metadata of packets forwarded from NETCP to the application. The application can use the same APPID for future actions, including deletion/un-configuration of the NETCP rule.
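As a sketch of how an application might consume the APPID, the fragment below keeps a table mapping APPIDs to per-rule application state, so that the APPID found in packet metadata can be dispatched back to the rule that matched. The table layout and function names are hypothetical illustrations, not part of NETAPI:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: map the APPID returned by a configuration call (and
 * later carried in packet metadata) back to application per-rule state.
 * MAX_RULES and all names below are illustrative assumptions. */
#define MAX_RULES 64

typedef struct {
    int   in_use;
    void *rule_state;   /* application context for this NETCP rule */
} app_rule_t;

static app_rule_t rule_table[MAX_RULES];

/* Called after a configuration API returns an APPID for a new rule. */
static int register_rule(unsigned appid, void *state)
{
    if (appid >= MAX_RULES || rule_table[appid].in_use)
        return -1;
    rule_table[appid].in_use = 1;
    rule_table[appid].rule_state = state;
    return 0;
}

/* Called on packet receive: the APPID from metadata selects the rule. */
static void *lookup_rule(unsigned appid)
{
    if (appid >= MAX_RULES || !rule_table[appid].in_use)
        return NULL;
    return rule_table[appid].rule_state;
}

/* Called before deleting/un-configuring the rule in NETCP. */
static void unregister_rule(unsigned appid)
{
    if (appid < MAX_RULES)
        rule_table[appid].in_use = 0;
}
```

The same lookup would typically sit on the receive fast path, so a direct array index keyed by APPID keeps it O(1).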
Brief API overview:
Description | API |
Inserts a MAC interface rule in the NETCP hardware lookup engines for packet classification through first level lookup. | netapi_netcpCfgCreateMacInterface |
Deletes a MAC interface rule from NETCP hardware lookup engines. | netapi_netcpCfgDelMac |
Inserts an IP address rule associated with a MAC interface along with optional IP qualifiers in the NETCP hardware lookup engine. | netapi_netcpCfgAddIp |
Deletes an IP rule from NETCP hardware lookup engines, detaches it from MAC interface. | netcp_cfgDelIp |
Inserts a classifier rule in the NETCP hardware lookup engines which can be used to route a particular packet flow to a specific PKTIO channel. | netapi_netcpCfgAddClass |
Deletes a classifier rule from NETCP hardware lookup engines | netapi_netcpCfgDelClass |
Request statistics from NETCP | netapi_netcpCfgReqStats |
Inserts a flow rule in the NETCP hardware lookup engines | netapi_netcpCfgAddFlow |
Deletes a flow rule from NETCP hardware lookup engines | netcp_cfgDelFlow |
Configures NETCP with global rules for exception packet handling. | netapi_netcpCfgExceptions |
NETAPI-CFG-SECURITY
Functionality includes:
- Unidirectional IPSec Security Context creation and deletion
- Receive IPSec Security Policy creation and deletion
- Support for Sideband Encryption/Decryption offload to NETCP
The NETAPI Security APIs comprises two functional areas:
- Configuration
- Data Operations
Configuration APIs are used to set up the IPSec SAs. SAs are IPSec security contexts defining a unidirectional secure path (tunnel or transport). In addition, the security policy API allows configuration of a policy check for incoming packets at NETCP. A security policy is a rule stating that the inner IP (tunnel) or outer IP (transport) destination address should have been received via the associated SA; it is applicable only in inflow mode. Data operation APIs are available for send and receive operations on IPSec packets.
The NETAPI security APIs support two modes for each SA: inflow and sideband. The mode of the SA is determined through the SA setup API.
INFLOW MODE
In this mode, NETCP is set up to perform IPSec processing as packets are received or transmitted. On receive, NETCP hardware will classify the packet as IPSec, use the source/destination/SPI to determine the SA context and send the packet to the crypto engine for processing. It will also perform a replay window check and will check the packet against any receive policies that have been configured. The packet (assuming successful authentication and replay check) will then be delivered to the host in the normal manner (i.e., via a PKTIO channel). The APPID in the packet will indicate whether IPSec has been performed or an IPSec policy check was matched.
SIDEBAND MODE
This is the traditional security co-processor mode of operation. In this mode, for receive, IPSec packets will be delivered to the host in the normal manner (via PKTIO ) with an AppId matching an IP configuration, or even just matching a MAC interface in the case that the IP has not been configured into NETCP. In either case, host software must determine the SA for the packet and then return the packet to NETCP for crypto. The packet is sent to crypto via a special PKTIO channel that is used for crypto sideband "put" operations (NETCP_SB_TX). After crypto has been performed, the packet is returned to host via a second special PKTIO channel that is used to report crypto results (NETCP_SB_RX).
For sideband encryption, the procedure is similar: the plain text packet is sent to crypto via the PKTIO NETCP_SB_TX channel and the encrypted packet is returned via the PKTIO NETCP_SB_RX channel.
Brief API overview:
Description | API |
Add an SA as either inflow or sideband mode. SAs are attached to MAC interfaces. | netapi_secAddSA |
Delete an SA | netapi_secDelSA |
Add a receive security policy to an SA. | netapi_secAddRxPolicy |
Delete a security policy. | netapi_secDelRxPolicy |
Retrieve SA statistics via NWAL | netapi_getSaStats |
NETAPI-PKTIO
PKTIO provides a channel interface for transmission and/or reception of packets between host and remote endpoints. Functionality includes:
- Transmit/receive of packets to and from NETCP
- Transmit/receive of packets to and from Ethernet
- Transmit/receive of packets to and from the crypto subsystem
- Creation of channels for Inter Process Communication (IPC) between multiple fast path cores or between fast path and slow path cores
- Send/receive of multiple packets per call
- Polling for pending data on a single PKTIO channel or on multiple PKTIO channels
- NETCP offload capability supported through NWAL
- Meta data information in the receive path from NWAL
Brief API overview:
Description | API |
Assigns global resources to a NETAPI PKTIO Channel. Channel can be used either to communicate with the NETCP or for IPC | netapi_pktioCreate |
Opens a PKTIO Channel for use | netapi_pktioOpen |
Closes a PKTIO Channel | netapi_pktioClose |
Send packet/payload through PKTIO channel. | netapi_pktioSend |
Send multiple packet through a PKTIO channel | netapi_pktioSendMulti |
Poll a PKTIO channel for received packets | netapi_pktioPoll |
Poll all PKTIO channels attached to an NETAPI instance for received packets | netapi_pktioPollAll |
NETAPI SCHEDULER APIs
NETAPI provides a set of OPTIONAL scheduler APIs which the application can use to poll for control messages/events and data packets. If these APIs are used, a scheduler instance is required for each thread (ideally one thread per core, although multiple threads per core are also supported). If they are not used, the application can invoke the individual library poll routines directly from its main processing loop. The scheduler interface allows callback functions to be configured that perform periodic housekeeping based on the number of poll cycles.
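The shape of such a polling loop, with housekeeping every N poll cycles, can be sketched as below. All names here are hypothetical stubs, not the scheduler implementation; a real loop would invoke the NETAPI poll routines (e.g. netapi_pktioPollAll, netapi_netcpPoll, netapi_pollHeapGarbage) where the callbacks sit:

```c
/* Illustrative sketch of a per-thread polling loop with periodic
 * housekeeping. sched_ctx_t and all function names are assumptions. */
typedef struct {
    unsigned long polls;
    unsigned long house_interval;    /* run housekeeping every N poll cycles */
    void (*poll_data)(void *ctx);    /* e.g. poll PKTIO channels */
    void (*poll_control)(void *ctx); /* e.g. poll NETCP config responses */
    void (*housekeeping)(void *ctx); /* e.g. PKTLIB heap garbage collection */
    void *ctx;
} sched_ctx_t;

static void sched_run(sched_ctx_t *s, unsigned long cycles)
{
    for (unsigned long i = 0; i < cycles; i++) {
        s->poll_data(s->ctx);
        s->poll_control(s->ctx);
        if (++s->polls % s->house_interval == 0)
            s->housekeeping(s->ctx);
    }
}

/* Stub callbacks counting invocations, for demonstration only. */
static unsigned long data_polls, ctl_polls, house_calls;
static void stub_data(void *c)  { (void)c; data_polls++; }
static void stub_ctl(void *c)   { (void)c; ctl_polls++; }
static void stub_house(void *c) { (void)c; house_calls++; }
```

Running 100 cycles with house_interval of 10 would invoke the housekeeping callback 10 times while polling data and control on every cycle.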
Brief API overview:
Description | API |
API to open a scheduling context | netapi_schedOpen |
API for main entry point to scheduler | netapi_schedRun |
API to close a scheduling context | netapi_schedClose |
API to get scheduling context statistics | netapi_schedGetStats |
API to get the NETAPI handle from scheduling context | netapi_schedGetHandle |
PKTLIB
The module extends the underlying CPPI hardware descriptors for optimal usage at the application layer. Functionalities include:
- Zero copy operations for:
- Packet split/merge operations
- Cloning operations
- Headroom/tailroom addition through merge operation
- Allocations of packet buffer and descriptors during startup time
- Allows packet allocation by HW at Rx CPPI DMA
- Efficient recycling of data buffers including the case of buffers being referenced by multiple CPPI descriptors
In order to allow efficient zero copy operations, the module introduces the concept of bufferless descriptors (BLD).

In the example shown in Figure 4 for the split operation, the module expects a packet descriptor (PD: TiPkt) with a buffer and linked buffer descriptors (BD), in addition to a Buffer Less Descriptor (BLD). At the end of the split, the module returns pointers to two separate packet descriptors, split based on the size configured by the application.

In the example shown in Figure 5 for the merge operation, the module expects two separate packet descriptors. At the end of the merge operation, a single packet descriptor that merges both input packets is returned.
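The zero-copy split and merge described above can be sketched with a simplified descriptor chain. The desc_t type below is an illustrative stand-in for a CPPI host descriptor, not the actual PKTLIB structures, and for brevity the split is assumed to fall inside a single buffer:

```c
#include <stddef.h>

/* Simplified stand-in for a CPPI host descriptor (illustrative only). */
typedef struct desc {
    unsigned char *buf;   /* NULL for a bufferless descriptor (BLD) */
    unsigned len;
    struct desc *next;    /* linked buffer descriptors */
} desc_t;

/* Split 'pkt' after 'offset' bytes without copying data: the BLD takes a
 * reference to the tail portion of the shared buffer and becomes the head
 * of the second packet. */
static desc_t *split(desc_t *pkt, unsigned offset, desc_t *bld)
{
    desc_t *cur = pkt;
    while (cur && offset > cur->len) {  /* find descriptor holding the split */
        offset -= cur->len;
        cur = cur->next;
    }
    if (!cur)
        return NULL;
    bld->buf  = cur->buf + offset;      /* tail of the shared buffer */
    bld->len  = cur->len - offset;
    bld->next = cur->next;
    cur->len  = offset;                 /* first packet now ends here */
    cur->next = NULL;
    return bld;                         /* head of the second packet */
}

/* Merge: chain packet 'b' after packet 'a'; no data is copied. */
static desc_t *merge(desc_t *a, desc_t *b)
{
    desc_t *t = a;
    while (t->next)
        t = t->next;
    t->next = b;
    return a;
}
```

The real Pktlib_splitPacket/Pktlib_packetMerge APIs additionally manage reference counts so a shared buffer is freed only once, which this sketch omits.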
Brief API overview:
Description | API |
Create a packet Heap. Each heap has a specific set of properties and can be used by applications to have buffers & descriptors residing in different memory regions with different properties etc. | Pktlib_createHeap |
Initialize packet library module for shared memory heap | Pktlib_sharedHeapInit |
Lookup of a packet heap with the name provided as input | Pktlib_findHeapByName |
Get statistics for the heap | Pktlib_getHeapStats |
Allocate a packet from the packet Heap | Pktlib_allocPacket |
Free the packet back to heap | Pktlib_freePacket |
Merge two packets and output will be one merged packet | Pktlib_packetMerge |
Clone a packet | Pktlib_clonePacket |
Split the packet at a specific payload boundary. Output will be two packets | Pktlib_splitPacket |
Garbage collector for a specific heap | Pktlib_garbageCollection |
NWAL
The module provides the network adaptation layer and abstracts NETCP access for all upper software layers in the TransportNetLib package.
Functionalities include:
- Initialization of:
- NETCP low level driver resources
- Packet DMA related resources associated with NETCP
- Supports both blocking and non-blocking configuration of NETCP. For a blocking request, status is returned in the API call context. In non-blocking mode, the application can invoke a separate poll routine to retrieve the status of the configuration request to NETCP.
- Classification of incoming packets based on L2/MAC header fields
- Classification of incoming packets based on L3/IPv4/IPv6 header fields
- Routing of packets to host based on L4: UDP/TCP/GTP-U
- Unidirectional IPSec SA creation and deletion
- In-band offload of IPSec encryption/decryption for outgoing packets
- Access to SA data mode acceleration for data plane applications. Refer to the release documentation for the list of supported symmetric key and hash algorithms
- Supports offload of the following features to NETCP hardware during transmission of packets:
- IPv4 checksum/L4 (TCP/UDP) checksum/IPSec encryption
- Redirection of packets through a specific MAC port
- Software insertion of L2/L3/L4 headers
- Upon reception of a packet, the module provides additional meta data details including:
- Status of IP checksum/UDP/TCP checksum results
- Offsets to the L2/L3/L4 protocol headers. The offset for a layer will be valid only if classification or routing is enabled at NETCP
- Ingress MAC port information
Brief API overview:
Description | API |
Get buffer requirement from module. | nwal_getBufferReq |
Create global resources across all processors | nwal_create |
Free all global module resources | nwal_delete |
API to retrieve global resources created by NWAL during nwal_create() API call | nwal_getGlobCxtInfo |
Initialize per process resources | nwal_start |
API to retrieve local per process resources created by NWAL during nwal_start() API call | nwal_getLocCxtInfo |
Configures MAC interface to NETCP | nwal_setMacIface |
Get handle for pre-existing MAC interface | nwal_getMacIfac |
Delete/Un-configure a MAC interface | nwal_delMacIface |
Configure IP classification at NETCP | nwal_setIPAddr |
Get a handle for pre-existing IP classification | nwal_getIPAddr |
Delete/Unconfigure the IP classification entry | nwal_delIPAddr |
Configure an IPSec SA channel (RX/ TX) | nwal_setSecAssoc |
Lookup an existing IPSec SA channel | nwal_getSecAssoc |
API to query the SA (IPSec) channel Stats. | nwal_getSecAssocStats |
Delete an IPSec SA channel configuration | nwal_delSecAssoc |
Configure IPSec Security Policy (RX/ TX) | nwal_setSecPolicy |
Lookup for an existing IPSec policy | nwal_getSecPolicy |
Delete an IPSec policy | nwal_delSecPolicy |
Configure a L4 UDP/TCP connection. | nwal_addConn |
Delete an existing connection | nwal_delConn |
Configure existing connection for remote | nwal_cfgConn |
Create Data Mode Security association | nwal_setDMSecAssoc |
Delete Data Mode Security Association | nwal_delDMSecAssoc |
Poll for packets from data mode channel | nwal_pollDm |
Transmit payload for data mode channel | nwal_sendDM |
API to query the SA (Data Mode) channel Stats. | nwal_getDataModeStats |
Update MAC/IP/UDP header for TX packet | nwal_updateProtoHdr |
API to query NETCP: Global PASS stats | nwal_getPAStats |
API to query NETCP: Global SASS stats | nwal_getSASysStats |
API to transmit packet out | nwal_send |
API to poll for incoming packets | nwal_pollPkt |
API to poll for control or configuration response from NETCP | nwal_pollCtl |
API for run time NetCP global configuration | nwal_control |
IPSec Manager
This software module runs on the control plane ARM Linux cores and provides the following functionalities:
- Offload an IPSec security policy & associated security association to NETCP.
- IPSec security parameters (security policy & security association) are configured into the Linux kernel by an IKE protocol agent executing in the control plane. IPSec security policies & associations for the data plane (also referred to as Fast Path) can be offloaded to NETCP for HW acceleration using this interface.
- IPSecMgr retrieves the security parameters from the Linux kernel & configures NETCP utilizing the services of the NETCP configuration module (NWAL).
- Manage renewal & expiration of data path IPSec associations.
Brief API overview:
Description | API |
Offload the IPSec security policy & associated security association to NETCP. IPSec policy identifier passed to the API would be the policy-id used in Linux kernel. | offload_sp_req |
HPLIB (High Performance Library)
The library provides optimized implementations of helper functions required by fast path applications running on ARM cores to use the TransportNetLib software efficiently. Details of this library are provided in the sub-sections below.
HPLIB-SYNC APIs
Provides APIs for synchronization across multiple processes or threads running on SMP ARM cores:
- Spinlock Lock/unlock
- RWLock Lock/unlock
- Atomic operations: read, set, add, subtract, increment, decrement, clear
- Memory barriers, read and write barriers
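The semantics of the add-return, decrement-and-test and test-and-set operations listed above can be illustrated with portable C11 stdatomic analogues. This is an illustration of the semantics only, not the HPLIB implementation, and the function names are made up:

```c
#include <stdatomic.h>

/* Portable C11 analogues of the HPLIB atomic semantics (illustrative). */
static atomic_int counter;

/* hplib_mAtomic32AddReturn-style: add a value and return the new value. */
static int atomic32_add_return(atomic_int *v, int x)
{
    /* atomic_fetch_add returns the OLD value, so add x to get the new one */
    return atomic_fetch_add(v, x) + x;
}

/* hplib_mAtomic32DecAndTest-style: decrement by 1; return nonzero iff the
 * new value is zero, zero otherwise. */
static int atomic32_dec_and_test(atomic_int *v)
{
    return (atomic_fetch_sub(v, 1) - 1) == 0;
}

/* hplib_mAtomic32TestSetReturn-style: set to 1, return the previous value. */
static int atomic32_test_set(atomic_int *v)
{
    return atomic_exchange(v, 1);
}
```

The decrement-and-test shape is the usual building block for reference counting, e.g. freeing a shared buffer only when the last descriptor referencing it is recycled.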
Brief API overview:
Description | API |
Initialize a spinlock in the unlocked state | hplib_mSpinLockInit |
Acquire a spinlock (blocking call) | hplib_mSpinLockLock |
Try to acquire a spinlock (non-blocking) | hplib_mSpinLockTryLock |
Release a spinLock | hplib_mSpinLockUnlock |
Test if spinLock is locked | hplib_mSpinLockIsLocked |
Initialize a read/write lock in the unlocked state | hplib_mRWLockInit |
Acquire a read/write lock for writing | hplib_mRWLockWriteLock |
Unlock writer part of read/write lock | hplib_mRWLockWriteUnlock |
Acquire a read/write lock for reading | hplib_mRWLockReadLock |
Unlock read part of read/write lock | hplib_mRWLockReadUnlock |
Initialize an atomic 32-bit variable and set the state to unlock | hplib_mAtomic32Init |
Atomically read a 32-bit integer | hplib_mAtomic32Read |
Atomically set a 32-bit integer | hplib_mAtomic32Set |
Atomically add a value to a 32-bit integer | hplib_mAtomic32Add |
Atomically increment by 1 a 32-bit integer | hplib_mAtomic32Inc |
Atomically subtract a value from a 32-bit integer | hplib_mAtomic32Sub |
Atomically decrement by 1 a 32-bit integer | hplib_mAtomic32Dec |
Atomically add a value to a 32-bit integer, and return the new value of the 32-bit integer after the addition | hplib_mAtomic32AddReturn |
Atomically subtract a value from a 32-bit integer, and return the new value of the 32-bit integer after the subtraction | hplib_mAtomic32SubReturn |
Atomically increment by 1 a 32-bit integer, and return a positive value if the new value of the 32-bit integer is zero, or zero in all other cases | hplib_mAtomic32IncAndTest |
Atomically decrement by 1 a 32-bit integer, and return a positive value if the new value of the 32-bit integer is zero, or zero in all other cases | hplib_mAtomic32DecAndTest |
Atomically test and set to 1 a 32-bit integer | hplib_mAtomic32TestSetReturn |
Atomically set to zero a 32-bit integer | hplib_mAtomic32Clear |
Used to initialize a 64 bit atomic variable and set the state to unlock | hplib_mAtomic64Init |
Atomically read a 64-bit integer | hplib_mAtomic64Read |
Atomically set a 64-bit integer | hplib_mAtomic64Set |
Atomically add a value to a 64-bit integer | hplib_mAtomic64Add |
General Memory Barrier guarantees that all LOAD and STORE operations that were issued before the barrier occur before the LOAD and STORE operations issued after the barrier | hplib_mMemBarrier |
Read memory barrier guarantees that all LOAD operations that were issued before the barrier occur before the STORE operations that are issued after | hplib_mReadMemBarrier |
Write memory barrier guarantees that all STORE operations that were issued before the barrier occur before the STORE operations that are issued after | hplib_mWriteMemBarrier |
HPLIB CACHE APIs
The HPLIB provides a set of cache APIs which are required for non-cache coherent memory architectures. For coherent memory architectures, these cache primitives are not required as cache coherency functionality is handled by the hardware. These cache APIs only work on memory which is allocated by the HPLIB kernel module.
Brief API overview:
Description | API |
Used to perform a cache writeback operation for a block of cached memory | hplib_cacheWb |
Used to perform a cache invalidate operation | hplib_cacheInv |
Used to perform a cache writeback operation and invalidate for a block of cached memory | hplib_cacheWbInv |
Used to perform a preload of memory into cache | hplib_cachePrefetch |
HPLIB- Virtual Memory APIs
The virtual memory APIs provide the following functionality:
- Provide a mapping of the physical addresses of key transport SOC peripherals (CPPI, QM, etc) into the transport process virtual address space
- Provide a mapping of the physical address of the contiguous blocks of memory that is to be used for descriptors and buffers into the transport process virtual address space. There is one primary area (typically the cached, CMA-allocated block from DDR) that is used to send and receive descriptors/buffers. Secondary areas can be mapped (e.g., un-cached MSMC) as well but these are restricted for use in PA-PA or PA-SA descriptor regions (i.e., are not processed by ARM user space directly).
- Provide conversion routines to translate from virtual to physical and vice versa. One primary mapping conversion routine pair is available for buffers/descriptors handled by the ARM transport process. Conversion routines for the secondary areas are available but are expected to be used in initialization phases only.
- Provide a memory allocation function for the contiguous blocks of memory used for descriptors and buffer
- API to traverse a CPPI host/monolithic descriptor and perform a virtual address to physical address translation on all address references in the descriptor
- API utility to traverse a CPPI host/monolithic descriptor and perform a physical address to virtual address translation on all address references in the descriptor
NOTE: Mapping is performed via /dev/mem in most cases. The exceptions are the QMSS Data Registers and DDR; these latter two are mapped via the HPLIB kernel module.
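For a single contiguous mapping, the conversion routines described above reduce to base-plus-offset arithmetic, which can be sketched as below. The vm_map_t type and the addresses in the example are illustrative assumptions, not the HPLIB implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: one contiguous block mapped at both a virtual and a
 * physical base address; translation is pure offset arithmetic. */
typedef struct {
    uintptr_t virt_base;
    uintptr_t phys_base;
    uintptr_t size;
} vm_map_t;

static uintptr_t virt_to_phys(const vm_map_t *m, void *virt)
{
    uintptr_t v = (uintptr_t)virt;
    assert(v >= m->virt_base && v < m->virt_base + m->size);
    return m->phys_base + (v - m->virt_base);
}

static void *phys_to_virt(const vm_map_t *m, uintptr_t phys)
{
    assert(phys >= m->phys_base && phys < m->phys_base + m->size);
    return (void *)(m->virt_base + (phys - m->phys_base));
}
```

Because the translation is a single subtraction and addition, it is cheap enough to apply to every buffer and descriptor address on the data path, which is why descriptor-walking conversion utilities are practical.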
Brief API overview:
Description | API |
API is used to allocate a contiguous block of cached memory via CMA (and optionally un-cached memory if specified) and maps virtual memory for peripheral registers | hplib_vmInit |
API is used to release/unmap the contiguous block of memory allocated via the hplib_vmInit function and removes the mapping of virtual memory for peripherals | hplib_vmTeardown |
API is used to allocate memory from the specified pool id | hplib_vmMemAlloc |
API returns the free amount of buffer/descriptor area for the memory pool specified | hplib_vmGetMemPoolRemainder |
API is used to convert a physical address to a virtual address | hplib_mVMPhyToVirt |
API is used to convert a virtual address to a physical address | hplib_mVMVirtToPhy |
API is used to convert a physical address to a virtual address for the specified memory pool ID | hplib_mVMPhyToVirtPoolId |
API is used to convert a virtual address to a physical address for the specified memory pool ID | hplib_mVMVirtToPhyPoolId |
API is used to traverse a CPPI host/monolithic descriptor and perform a virtual address to physical address translation on all address references in the descriptor | hplib_mVMConvertDescVirtToPhy |
API is used to traverse a CPPI host/monolithic descriptor and perform a physical address to virtual address translation on all address references in the descriptor | hplib_mVMConvertDescPhyToVirt |
HPLIB- Utility APIs
The HPLIB provides a set of utility APIs which provide the following functionality
- API which returns a 64-bit H/W timestamp by reading a H/W timer (TIMER64 on the TCI6614 SOC, A15 timer on the TCI6638)
- API to read the current ARM PMU CCNT register (this counts CPU cycles)
- Access to ARM PMU event counters to be used for profiling purposes
Brief API overview:
Description | API |
API returns a 64bit H/W timestamp by reading a H/W timer | hplib_mUtilGetTimestamp |
API returns hardware timer clock ticks per second | hplib_mUtilGetTicksPerSec |
API is used to read the current ARM PMU CCNT register | hplib_mUtilGetPmuCCNT |
API read PMCx register | hplib_mUtilReadPmuCounter |
API is used to enable all four PMU event counters | hplib_mUtilWritePmuEnableAllCounters |
API is used to write PMNXSEL register, this tells which counter slot we are currently working with | hplib_mUtilWritePmuSelectCntr |
API is used to write EVTSEL register, indicate which event should be counted by the current counter slot | hplib_mUtilWritePmuEventToCount |
Program counter slot: slot to count event: event | hplib_mUtilProgramPmuEvent |
Read event counter in slot: slot | hplib_mUtilReadPmuEvent |
API is used to schedule the calling thread to run on the specified core as specified by the CPU set, enables user space access to ARM system timer and the core performance monitor unit (PMU). | hplib_utilSetupThread |
HPLIB- OSAL APIs
HPLIB implements a subset of the LLD OSAL APIs as “static inline” functions to avoid the overhead of function calls. Currently, these APIs are provided for the QMSS and CPPI LLDs and focus on APIs invoked in the data path, such as those performing virtual-to-physical and physical-to-virtual address translations for buffers and descriptors. To disable compilation of these APIs into the hplib library (for example, to include your own version of the OSAL APIs), pass the following compile time flag when building the TransportNetLib libraries: DISABLE_OSAL=yes. Refer to #Building TransportNetLib Binaries for build details.
HPLIB Shared Memory APIs[edit]
Provides APIs for managing a shared memory segment.
Brief API overview:
Description | API |
API to create a shared memory segment | hplib_shmCreate |
API to open a shared memory segment | hplib_shmOpen |
API to delete a shared memory segment | hplib_shmDelete |
API to add an entry to a shared memory segment | hplib_shmAddEntry |
API to get an entry from a shared memory segment | hplib_shmGetEntry |
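A possible calling sequence for the APIs above is sketched below. The function signatures and the notion of a numeric entry id are illustrative assumptions only; consult the hplib doxygen for the actual prototypes.

```c
/* Illustrative sketch (assumed signatures, not the release prototypes):
 * one process creates the shared memory segment and registers an
 * entry in it; a second process opens the segment and looks the
 * entry up by the same id. */

/* Process A */
void producer(void)
{
    hplib_shmCreate(64 * 1024);               /* hypothetical segment size  */
    hplib_shmAddEntry(1 /* entry id */, 256 /* entry size, assumed */);
}

/* Process B */
void consumer(void)
{
    hplib_shmOpen();                          /* attach to existing segment */
    void *entry = hplib_shmGetEntry(1);       /* same hypothetical id       */
    /* ... use entry ... */
    (void)entry;
}

/* When no process needs the segment any longer: */
void teardown(void)
{
    hplib_shmDelete();
}
```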
HPLIB Kernel Module[edit]
The kernel module, hplibmod.ko, provides several helper functions for the user space transport library. These include:
- Enabling the ARM performance monitoring unit (PMU) to be accessed from user space. This lets the user space transport functions access the PMU registers (in ARM CP15) directly without having to use a kernel API or an application such as OProfile.
- Creating a proc file, /proc/netapi, that can be read from user space (e.g., cat /proc/netapi). The act of reading this file performs the following:
- Enables the PMU for user space (see above) and resets the PMU CCNT register
- Prints the ARM CPU frequency
- The kernel module is responsible for obtaining a chunk of contiguous, cached memory for the user space transport library to use (from the kernel Contiguous Memory Allocator, CMA). This memory primarily stores descriptors and packet buffers. The amount of memory to allocate is configurable when the module is installed via insmod; the default is 16 Mbytes.
- The kernel module performs a custom mapping of the SOC Queue Manager Data Register space so that writes to hardware queues (Queue Push) can be made bufferable. This improves performance of the Queue Push operation.
- It creates a pseudo device, /dev/hplib. This pseudo device provides several functions related to memory management (such as cache operations) for user space transport. These are accessed through IOCTL commands to the device. The complete set of user space commands is discussed below.
The kernel module memory management functions include:
- IOCTL to return the physical address of the contiguous memory block that has been allocated for user space transport.
- IOCTL to return the size of this block
Additional Data path Scenarios[edit]
IPSEC Operations[edit]
Side Band Data Operations[edit]
Sideband mode data operations are made through PKTIO channels created by NETAPI to send data to the NETCP crypto engine and to receive crypto results. Refer to nwalDmTxPayloadInfo_t and nwalDmRxPayloadInfo_t for the metadata associated with sideband crypto operations.
Packet Receive (Decryption)[edit]
In sideband mode there are no NETCP rules configured for IPSec, so the sequence of events when a packet is received at the ARM core is as follows:
- The packet arrives as a normal packet, with an AppId metadata tag indicating that a MAC rule or IP rule has matched (it is also possible to set up a classifier to match specifically on the ESP protocol, in which case the AppId would reflect the classifier match).
- The application is then required to determine that this is an IPSec packet and identify the security context to use.
- Once the security context has been found, the packet must be sent to the previously opened PKTIO: NETCP_SB_TX channel. The metadata (nwalDmTxPayloadInfo_t) associated with the packet must be set to tell the NETCP security accelerator component how to decrypt the packet.
- After processing by the NETCP crypto hardware, the decrypted packet will arrive on the PKTIO: NETCP_SB_RX channel and its associated callback (this channel must, of course, be in the list of polled channels). The decrypted and authenticated packet will have associated metadata containing the AppId of the SA context that the packet belongs to.
- The metadata will also include a pointer to the authentication tag and the length of the tag. This tag is then compared by software to the tag in the packet to authenticate the packet.

Sideband Packet Transmit (Encryption)[edit]
To encrypt a packet and add its authentication tag during transmit, the sequence is similar to the receive steps.
- The packet to be encrypted is sent to the PKTIO: NETCP_SB_TX channel.
- The packet should be a fully formed IPSec packet. The inner IP checksum (in the case of tunnel mode) and L4 checksums must be created by software prior to submitting to crypto.
- The application must figure out which outbound SA to use and must attach the associated sideband handle to the packet metadata, along with the offsets/lengths of the areas to be encrypted and authenticated.
- As in the decryption case above, the results of the crypto operation (i.e., the encrypted packet plus authentication tag) are returned via the callback associated with the PKTIO: NETCP_SB_RX channel. The AppId in the metadata will indicate which SA the resulting packet belongs to. The tag pointed to by the metadata will need to be copied by software into the tag portion of the packet (byte reversal of each 4 byte word is required).
The (encrypted) packet can then be transmitted to the network as in the case of normal packets. The outer IP checksum can still be offloaded to NETCP.

In band Data Operations[edit]
Packet Receive (Decryption)[edit]
Received inflow packets are obtained from the callback registered with one of the following:
- the default PKTIO: NETCP_RX channel
- the PKTIO channel registered with the SA during the netapi_secAddSA() call
- the RX policy in netapi_secAddRxSA()
In any of these cases, the AppId in the metadata returned with the packet will indicate either the Id of the SA (if no policy was matched) or the Id of the policy (if a policy was matched). The application can use the AppId to determine whether it can bypass s/w IPSec processing. Inflow IPSec packets have already been authenticated, decrypted and passed an anti-replay window check, and, if so indicated, have passed a policy check. Figure 8 details the hardware and software steps involved in receiving an IPSec packet when NETCP is enabled for L2/L3/L4 classification, requiring minimal host processing.

Packet Transmit (Encryption)[edit]
Inflow packets are transmitted using the same PKTIO: NETCP_TX transmit channel as non-IPSec packets. Some additional metadata needs to be added to the metadata structure before issuing the pktio_send() API to get NETCP to apply the crypto transforms specified in the SA. The calling sequence is shown below for ESP tunnel mode. The steps include:
- The entire IPSec packet must be pre-prepared.
- The L4 checksum (if to be offloaded) must be set to 0
- The inner IP checksum must be set to 0 if it is to be offloaded
- The outer IP checksum must be computed by s/w
- NETCP will fill in the IV and ESP sequence number
- The ESP header SPI field must be filled in.
- The inflow mode SA handle returned from netapi_secAddSA() must be added to the metadata structure.
- The txFlag1 field must have the bit NWAL_TX_FLAG1_DO_IPSEC_CRYPTO set.
- The offset to the start of the ESP header (in bytes) must be set
- The length of the ESP payload must be set. This is essentially the packet length (not including Ethernet checksum) minus the Ethernet header and outer IP header.
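The steps above can be summarized in a code sketch. Only pktio_send(), netapi_secAddSA() and NWAL_TX_FLAG1_DO_IPSEC_CRYPTO are names taken from this document; the metadata structure and its field names are assumptions for illustration — refer to the netapi/nwal headers for the real definitions.

```c
/* Hypothetical sketch of inflow-mode IPSec transmit (field names are
 * assumed; the flag and API names come from the steps above). */
void ipsec_inflow_tx(void *netcp_tx_chan,    /* PKTIO: NETCP_TX channel   */
                     void *pkt,              /* fully formed ESP packet:
                                                SPI filled in, offloaded
                                                checksums zeroed, outer IP
                                                checksum computed by s/w  */
                     void *inflow_sa_handle, /* from netapi_secAddSA()    */
                     int   esp_hdr_offset,   /* bytes to start of ESP hdr */
                     int   esp_payload_len)  /* pkt len minus Ethernet hdr
                                                and outer IP hdr          */
{
    struct tx_meta meta = { 0 };                       /* assumed type   */

    meta.sa_handle  = inflow_sa_handle;                /* assumed field  */
    meta.txFlag1   |= NWAL_TX_FLAG1_DO_IPSEC_CRYPTO;   /* from the doc   */
    meta.enc_offset = esp_hdr_offset;                  /* assumed field  */
    meta.enc_len    = esp_payload_len;                 /* assumed field  */

    pktio_send(netcp_tx_chan, pkt, &meta, NULL);       /* assumed args   */
}
```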
Figure 8 provides a breakdown of the steps for outgoing packets with hardware IPSec offload enabled.

TransportNetLib Package[edit]
Build Environment Prerequisite[edit]
The mcsdk_linux_3_00_00_10 and later releases provide an integrated Linux software development kit environment. Once set up, this environment can be used to compile the TransportNetLib libraries and test applications.
Steps to install Linux-Devkit:
- cd /ti/mcsdk_linux_3_00_00_X/linux-dev-kit
- Execute ./arago-2013.04-armv7a-linux-gnueabi-mcsdk-sdk-i686.sh
- You will be prompted for a target directory in which to install the devkit. Note your installation path, as it is required to update the TransportNetLib setup environment build script. Assume the devkit is installed in /usr/local/arago-2013.04 (the default location).
Building TransportNetLib Binaries[edit]
The Transport-SDK software includes source code for the low level software modules supporting the ARM target. Following are the steps to build the TransportNetLib libraries and test applications delivered with the TransportNetLib package.
Steps to Build:
- Change directory to <transport-net-lib-install-dir>/packages.
- Update the CROSS_TOOL ENV variables in the TransportNetLib setupenv.sh file:
- export CROSS_TOOL_INSTALL_PATH=/data/hdcustom/linaro/gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux/bin
- export CROSS_TOOL_PRFX=arm-linux-gnueabihf-
- Update the TransportNetLib setupenv.sh file, located in the <transport-net-lib-install-dir>/packages directory, to point to the linux-devkit installation path mentioned above. Update the following ENV variables:
- export PDK_INSTALL_PATH=/usr/local/arago-2013.04/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/include
- export SA_INSTALL_PATH=/usr/local/arago-2013.04/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/include
- export PDK_ARMV7LIBDIR=/usr/local/arago-2013.04/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/lib
- export ARMV7SALIBDIR=/usr/local/arago-2013.04/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/lib
- export ARMV7LIBDIR=/usr/local/arago-2013.04/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/lib
- Execute "source setupenv.sh" to set the environment
- Build TransportNetLib libraries: make lib
- The libraries will be located in ARMV7LIBDIR (set to <transport-net-lib-install-dir>/lib/armv7/ by default in setupenv.sh)
- Build TransportNetLib test applications: make tests
- The test application binaries will be located in ARMV7BINDIR (set to <transport-net-lib-install-dir>/bin/armv7/ by default in setupenv.sh)
- Building TransportNetLib sample applications
- To build ipsecmgr_daemon.out, change directory to <transport-net-lib-install-dir>/packages/ti/runtime/netapi/applications/ipsec_offload/ipsecmgr/build
- Execute: make app
- To build ipsecmgr_cmd_shell, change directory to <transport-net-lib-install-dir>/packages/ti/runtime/netapi/applications/ipsec_offload/config-app/build
- Execute: make app
- The sample application binaries will be located in ARMV7BINDIR (set to <transport-net-lib-install-dir>/bin/armv7/ by default in setupenv.sh)
Notes:
By default, the DEBUG_FLAG environment variable is set to disable debug symbols. It can be modified for debugging.
Building HPLIB Kernel Module[edit]
Prerequisite for this step:[edit]
The TransportNetLib kernel module, hplibmod.ko, is required to perform certain setup of the base kernel. To re-build this module (if necessary):
- Download the kernel and build it according to <mcsdk-install-dir>/sc_mcsdk_bios_x_xx_xx_xx/docs/SC-MCSDK_User_Guide. Assume the kernel is located in the <linux-keystone> directory.
Note: Due to an issue with legacy drivers that expect a contiguous, un-cached DDR area for shared buffers, the kernel patches that enable contiguous, cached DDR that is (h/w maintained) coherent with external DMA may be disabled by default in some releases. This results in a user space transport library that is functional but not performance optimized. To enable full performance for user space transport, the kernel must be rebuilt as shown below so that contiguous, cached and DMA-coherent memory is available to user space.
To rebuild the kernel with CONTIGUOUS, CACHEABLE DMA-COHERENCY ENABLED:
1. Assumption is that linux-keystone git repo has been cloned and is present in linux-keystone directory
2. From the linux-keystone directory, Execute: make keystone2_defconfig
3. From the linux-keystone directory, Execute: make menuconfig.
Navigate to System Type --->TI KeyStone Implementations and select MPAX based coherency support, exit and save configuration.
4. Execute: make uImage
5. The resulting kernel image, uImage, can be found in the following directory: linux-keystone/arch/arm/boot
Steps To Build HPLIB KERNEL MODULE (hplibmod.ko):[edit]
- Go to <transport-net-lib-install-dir>/packages/ti/runtime/hplib/module
Modify the environment variable KDIR to point to the location of the kernel directory. Example:
- export KDIR=<linux-keystone>
OR
- Update the environment variable KDIR in the Makefile to point to the location of the kernel directory,
- Run make to produce the hplibmod.ko kernel module
- Copy this to the root file system
- Make sure it is loaded (via insmod) after boot.
Build IPSECMGR KERNEL MODULE (ipsecmgr_mod.ko):[edit]
- Prerequisite: Clone the ipsecmgr.git repository from http://arago-project.org/git/projects/ipsecmgr.git
- In order to build ipsecmgr_mod.ko, update the "KDIR" variable in the Makefile in the ipsecmgr/src/module directory. It needs to point to the location where the linux-keystone directory is installed (linux-keystone is the top level directory from which the kernel is built).
- For example, if linux-keystone is installed under /home/temp/linux_keystone, update the KDIR variable as follows:
KDIR ?= /home/temp/linux_keystone
Then, to build ipsecmgr_mod.ko, type “make”.
The built ipsecmgr_mod.ko will be present in the ipsecmgr/src/module directory.
- Note that the ipsecmgr module will only load if it was built against the kernel that is running on the EVM. In other words, kernel modules and the Linux kernel need to be in sync; if they are out of sync, the ipsecmgr kernel module may not load.
Low OverHead Linux[edit]
Transport Library applications may be run in a mode of Linux known as Low Overhead Linux (LOL), where a subset of the multi-core ARM15s present on Keystone2 SOCs is dedicated solely to the application. The goal of this mode is to utilize these cores (referred to as fast path cores) as if they were bare metal, so that nearly 100% of their cycles are available to a specialized, performance-critical application and not spent on any other processing, including kernel or IRQ processing.
This can be achieved via the following mechanisms:
1. CPU isolation - Isolate ARM15 cores to run particular threads only. The kernel boot command line parameter isolcpus can be used to isolate one or more ARM15 cores from general SMP balancing and kernel scheduling algorithms. For example, to isolate ARM15 cores 2, 3 and 4 (referred to as fast path cores), set the bootargs env parameter as follows:
setenv bootargs "console=ttyS0,115200n8 rootwait=1 isolcpus=2,3,4 rootfstype=nfs root=/dev/nfs rw nfsroot=158.218.103.132:/targetfs,v3,tcp,rsize=4096,wsize=4096 ip=dhcp"
2. SMP IRQ affinity - When an interrupt arrives, the ARM15 core must stop what it is currently doing and process the interrupt. This consumes CPU cycles, and too many interrupts can cause high system CPU usage, preventing performance-critical user space applications from running efficiently. To avoid this, certain IRQs can be assigned to a subset of ARM15 cores (referred to as slow path cores), freeing up the other cores to run user space applications more efficiently. For example, to set the smp_affinity of IRQ #98 to the first core, execute the following from the Linux shell:
/bin/echo 1 > /proc/irq/98/smp_affinity
A shell script delivered as part of the root file system, /etc/netapi/irqset.sh, can be run to set the SMP IRQ affinity of all IRQs to a specific core. This basic script loops through all of the IRQs in the /proc/irq directory and sets each smp_affinity to ARM15 core 1.
The third component of utilizing LOL with the transport library is assigning worker threads to their own CPUs. The hplib function hplib_utilSetupThread (file ti/runtime/hplib/src/hplib_util.c) can be used for this purpose. For example, to assign thread ID 1 to a specific CPU set (e.g., core 3), invoke the API as follows:
cpu_set_t cpu_set;
CPU_ZERO( &cpu_set);
CPU_SET( 3, &cpu_set);
hplib_utilSetupThread(1, &cpu_set);
The net_test example programs use the net_test_config file to control how threads are assigned to cores. Slow path threads (e.g., sp0 below) are control plane threads that will typically run on normally scheduled, non-isolated cores. These are used for NETCP re-configuration, statistics, etc. Fast path threads are threads that solely perform packet processing (rx/tx) functions, i.e., run a fast path IP stack. The following example from net_test_config.txt shows how threads are mapped to specific cores (or CPU sets).
sp0 = 1 0-0 /*slow path thread number 0 with id thread 1 being mapped to run solely on ARM CORE 0 */
fp0 = 2 1-1 /* fast path thread number 0 with thread id 2 being mapped to run solely on ARM CORE 1 */
fp1 = 3 2-3 /* fast path thread number 1 with thread id 3 being mapped to run solely on ARM COREs 2 or 3 [as Linux scheduler decides ]*/
TransportNetLib Test Users Guide[edit]
Please refer to the TransportNetLib Test Users Guide for details about the sample test applications provided with this release.
TransportNetLib Sample Applications[edit]
The following two NETAPI sample applications are provided starting with release 1.0.0.8.
ipsecmgr_daemon[edit]
This user space application executes on the ARM and is responsible for detecting IPSEC Security Association and Policy configuration from the Linux kernel. This IPSEC configuration can be generated on ARM/Linux using either the strongSwan or setkey user space utilities, which can add security association or security policy database entries in the kernel.
Once the kernel detects these entries, they can be offloaded to NETCP (hardware) crypto for IPSEC packet processing using the command shell application described below. This type of offload is known as inflow ipsec mode and its main characteristic is that IPSEC cryptography is performed on packets prior to their arrival to the Kernel interface driver (for ingress) or as the packet is being transmitted (egress). This saves s/w cycles to perform the crypto function (or extra packet transfers in and out of the security accelerator in the sideband crypto offload case).
Example:
To get the sp_id required for ipsecmgr_cmd_shell, enter the commands below on the daemon (example for k2h). Ensure that IPSEC is set up prior to these steps, i.e., with either setkey or strongSwan.
$ipsecmgr_daemon_k2h.out
$ip -s xfrm policy | grep "dir in" | grep -v grep | awk '{print "offload_sp --sp_id " $6 " --shared"}'
$ip -s xfrm policy | grep "dir out" | grep -v grep | awk '{print "offload_sp --sp_id " $6 " --shared"}'
This will print out something like below. Note the 2 sp_ids for the next step.
$offload_sp --sp_id 32 --shared
$offload_sp --sp_id 25 --shared
ipsecmgr_cmd_shell[edit]
This user space application executes on the ARM and is used for offloading IPSEC packet processing to NETCP for configured security associations/policies. For example, to start offloading the IPSec policy with security policy <ID> to NETCP, execute the following command from the command shell:
IPSECMGR-CFG> offload_sp --sp_id <ID> --shared
This command does the following:
- communicates with the kernel to tell it that these associations/policies are to be offloaded
- sets up NETCP security contexts for these associations
- sets up NETCP classification rules to intercept ingress packets for the offloaded context and to send them to crypto accelerator before they are sent to the ARM
To stop offloading IPSec Policy with security policy <ID> to NETCP, execute the following command from the command shell:
IPSECMGR-CFG> stop_offload --sp_id <ID>
Example:
- Note the sp_ids from the previous step.
$ipsecmgr_cmd_shell_k2h.out
In the ipsecmgr shell, type the following (IPSECMGR-CFG>): offload_sp --sp_id 32 --shared
In the ipsecmgr shell, type the following (IPSECMGR-CFG>): offload_sp --sp_id 25 --shared
In the ipsecmgr shell, type the following (IPSECMGR-CFG>): exit
Run iperf or a ping across the connection. A tcpdump will show the IPSEC traffic.
setup[edit]
Prior to running the 2 sample applications listed above, you will need to set up the following environment parameters:
Variable to specify the local unix socket name for IPC with IPSec daemon
- export IPSECMGR_APP_SOCK_NAME="/etc/app_sock"
Variable to specify the unix socket name of the IPSec daemon
- export IPSECMGR_DAEMON_SOCK_NAME="/etc/ipsd_sock"
Variable to specify the log file to be used by the ipsecmgr library
- export IPSECMGR_LOG_FILE="/var/run/ipsecmgr_app.log"
You will also need to insmod the 2 provided kernel modules, which are located in the filesystem under the /lib/modules/<KERNEL_VER>/extras directory, where KERNEL_VER can be found by typing "uname -r" at the Linux shell prompt.
- insmod hplibmod.ko
- insmod ipsecmgr_mod.ko
The ipsec inflow mode requires some device tree configuration:
Note: The device tree changes below aren't necessary if using the default TI MCSDK release images.
- The receive flow numbers used by the Linux interfaces (netrx0, 1, ..) must be sequential
- pktdma channels must be defined for the egress path, to send packets directly to the SA on egress instead of to PDSP5 or the QOS Shaper input queues. There will be one per interface that uses ipsec inflow mode on egress.
e.g.
'''pktdma@2004000 {
..
channels {
..
satx-0 { /* for interface 0 */
transmit;
label = "satx-0";
pool = "pool-net";
submit-queue = <0x286>;
};
satx-1 { /* for interface 1 */
transmit;
label = "satx-1";
pool = "pool-net";
submit-queue = <0x286>;
};
..'''
- an entry in the netcp@2090000 {} is required to enable inflow mode on egress:
'''sa@20c0000 {
label = "keystone-sa";
multi-interface;
interface-0;
interface-1;
.. /* all interfaces that will be using inflow-mode ipsec on egress */
tx_queue_depth = <0x20>;
};'''
Software Component Directory Structure[edit]
An overview of the software components included in the BIOSMCSDK software is presented in the table below.
SOFTWARE COMPONENTS | DOCUMENTATION AND API DIRECTORY REFERENCES | DESCRIPTION |
NETAPI/PKTIO | <TRANSPORT_NET_INSTALL_DIR>/packages/ti/runtime/netapi/docs/doxygen/html | API doxygen file |
<TRANSPORT_NET_INSTALL_DIR>/packages/ti/runtime/netapi/ | Top level directory for API header files | |
NWAL | < PDK_INSTALL_DIR >/packages/ti/drv/nwal/docs/doxygen/html | API doxygen file |
< PDK_INSTALL_DIR >/packages/ti/drv/nwal/ | Top level directory for API header files | |
HPLIB | < TRANSPORT_NET_INSTALL_DIR >/packages/ti/runtime/hplib/docs/doxygen/html | API doxygen file |
< TRANSPORT_NET_INSTALL_DIR >/packages/ti/runtime/hplib | Top level directory for API header files | |
PKTLIB | < PDK_INSTALL_DIR >/packages/ti/runtime/PKTLIB/docs/doxygen/html | API doxygen file |
< PDK_INSTALL_DIR>/packages/ti/runtime/PKTLIB/ | Top level directory for API header files | |
PDK LLD packages | <PDK_INSTALL_DIR>/packages/ti/drv/ | Documentation for all LLDs |
<PDK_INSTALL_DIR>/packages/ti/drv/ | Top level directory for API header files for all LLDs | |
SA LLD | <PDK_INSTALL_DIR>/packages/ti/drv/sa/docs/doxygen | SA LLD API documentation |
<PDK_INSTALL_DIR>/packages/ti/drv/sa | Top level directory for API header files for SA LLD | |