
MCSDK UG Chapter Developing Transports

Developing with MCSDK: Transports

Last updated: 12/21/2015

Overview

Learn about the various Transports that are included in the MCSDK and how they move data between the ARM and DSP subsystems or between different software layers.

The MCSDK package provides multiple high-level software abstraction components that allow applications to communicate across different processors. There are also low-level drivers that talk to the CPPI/QMSS/PA hardware and can be used to communicate between separate entities. This section focuses on the high-level software abstractions for transport communication.

Acronyms

The following acronyms are used throughout this chapter.

Acronym Meaning
API Application Programming Interface
ARM Advanced RISC Machine
CCS Code Composer Studio
CPPI Communications Port Programming Interface (Multicore Navigator)
DSP Digital Signal Processor
EVM Evaluation Module, hardware platform containing the Texas Instruments DSP
IPC Texas Instruments Inter-Processor Communication
LLD Low Level Driver
MCSDK Texas Instruments Multi-Core Software Development Kit
MSMC Multicore Shared Memory Controller
PDK Texas Instruments Programmers Development Kit
QMSS Queue Manager Sub-System
RM Resource Manager
RTSC Eclipse Real-Time Software Components
SRIO Serial RapidIO
TI Texas Instruments
TID MessageQ Network Transport ID



Transport Network Library

Please see the TransportNetLib User Guide chapter for details.

IPC Transports

IPC transports are the IPC MessageQ API's underlying configurable data paths over shared memory and hardware resources. Transports are registered with MessageQ providing a common IPC interface between processors within a system that contains a single or multiple KeyStone II devices. The transports supplied with the IPC component are shared memory based. Additional transports, utilizing the QMSS and SRIO LLDs, are supplied via Yocto/bitbake for ARMv7 Linux IPC and MCSDK BIOS PDK for SYS/BIOS DSP IPC.

MessageQ can support up to nine simultaneous transports over two transport interfaces. The first is the standard, priority-based MessageQ interface that has always existed. A shared memory transport is always registered as the normal priority transport when MessageQ is initialized at IPC start. An additional transport can be registered with MessageQ as a high priority transport. The second is the Network transport interface. The LLD transports can be registered with MessageQ as Network transports. Network transports are registered with MessageQ using a Transport ID, or TID. There are seven possible TID values, ranging from 1 through 7. A MessageQ message is routed over a transport registered with a certain priority, or TID, based on the settings in the message's MessageQ header. Editing a message's MessageQ header to contain the desired transport priority, or TID, is accomplished by calling the proper MessageQ header modification macro.
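For illustration, the sketch below shows how a Network transport instance is registered with MessageQ under a chosen TID. It mirrors the ARMv7 Linux registration sequence shown later in this chapter; on the SYS/BIOS DSP side, TransportSrio instead registers itself during creation via its transNetworkId parameter.

<syntaxhighlight lang='C'>
/* Illustration: register an already-created network transport with TID 1.
 * srioTransHandle is assumed to be a previously created instance. */
INetworkTransport_Handle netTransH;
ITransport_Handle        baseTransH;

netTransH  = TransportSrio_upCast(srioTransHandle);
baseTransH = INetworkTransport_upCast(netTransH);
if (MessageQ_registerTransportId(1, baseTransH) < 0) {
    /* Registration failed; the TID may already be in use */
}
</syntaxhighlight>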

The following overview lists each transport offering, its location, and the communication path it enables.

TransportShm (and variants)
  • MessageQ interface type: MessageQ (priority based)
  • Location: IPC component - SYS/BIOS subdirectories
  • Communication route enabled: Shared memory
  • Communication path: SYS/BIOS DSP to DSP
  • Special considerations: There are multiple implementations of TransportShm delivered within the IPC component. Please see the IPC documentation provided with the component for more information on these shared memory transport implementations.

TransportRpmsg
  • MessageQ interface type: MessageQ (priority based)
  • Location: IPC component - ARMv7 Linux subdirectories; IPC component - SYS/BIOS subdirectories; Yocto/bitbake ti-ipc recipe
  • Communication route enabled: Shared memory
  • Communication path: ARMv7 Linux to/from SYS/BIOS DSP
  • Special considerations: MessageQ messages sent over TransportRpmsg traveling from/to ARMv7 user space go through the Linux kernel before reaching the DSP. This provides clean partitioning between user memory and DSP memory. However, TransportRpmsg is considered a slow path since the user space MessageQ messages must be copied from/to DSP memory by the kernel and DSP.

SYS/BIOS DSP TransportSrio
  • MessageQ interface type: Network
  • Location: MCSDK BIOS PDK
  • Communication route enabled: SRIO LLD
  • Communication path: SYS/BIOS DSP to/from SYS/BIOS DSP (intra- and inter-device); SYS/BIOS DSP to/from ARMv7 Linux (intra- and inter-device)
  • Special considerations: TransportSrio can send MessageQ messages to ARMv7 and DSP processors on remote devices in a multiple device system. IPC MultiProc must be configured to be aware of all processors existing on all devices, and all devices must be connected over a SRIO interconnect. The main purpose of TransportSrio is multi-device communication over MessageQ, and its transmission latency is greater due to that capability. A shared memory or other LLD-based transport is therefore recommended for intra-device communication due to its lower latency cost.

ARMv7 Linux TransportSrio
  • MessageQ interface type: Network
  • Location: Yocto/bitbake ti-transport-srio recipe
  • Communication route enabled: SRIO LLD
  • Communication path: ARMv7 Linux to/from ARMv7 Linux (intra- and inter-device); SYS/BIOS DSP to/from ARMv7 Linux (intra- and inter-device)
  • Special considerations: Same as the SYS/BIOS DSP TransportSrio above.

ARMv7 Linux TransportQmss
  • MessageQ interface type: Network
  • Location: Yocto/bitbake ti-transport-qmss recipe
  • Communication route enabled: QMSS LLD
  • Communication path: ARMv7 Linux process to process; ARMv7 Linux to/from SYS/BIOS DSP

SYS/BIOS DSP TransportQmss
  • MessageQ interface type: Network
  • Location: MCSDK BIOS PDK
  • Communication route enabled: QMSS LLD
  • Communication path: SYS/BIOS DSP to/from SYS/BIOS DSP; SYS/BIOS DSP to/from ARMv7 Linux

The IPC component (ARMv7 and SYS/BIOS) is available in MCSDK BIOS and MCSDK Linux installations. It will be installed in <MCSDK BIOS/Linux install root>/ipc_3_##_##_##. Additionally, the IPC component's ARMv7 source is packaged in a Yocto/bitbake recipe. A user can develop ARMv7 Linux user-space applications with IPC on KeyStone II devices by building the ti-ipc package in Yocto.

The IPC component is also used on OMAP-based Android devices. The component is compatible with legacy SYS/BIOS IPC MessageQ APIs available in other TI SDKs. A rich set of documentation can be browsed at IPC_3.x.

The following sections will give architecture, build, and configuration details for the LLD-based transports delivered via MCSDK BIOS PDK and Yocto/bitbake. More information on the shared memory transports delivered with the IPC component can be found in the IPC_Users_Guide PDF found in the docs directory of the IPC component.

KeyStone II IPC Details

  1. KeyStone II platforms must use MPM to load and run DSP applications linked with IPC 3.x that wish to perform ARM-to-DSP (and DSP-to-ARM) communication over MessageQ. MPM reads the DSP image on download and sets the appropriate kernel parameters for IPC. An example command sequence is shown after this list.
  2. CCS can be used to debug DSP applications, using the load symbols facility, after MPM has loaded and run the app.
  3. Headers and libraries for the ARMv7 IPC component and IPC transports are provided as a part of linux-devkit.
  4. The ARMv7 IPC's NameServer runs as a Linux daemon from the filesystem provided in the MCSDK LINUX component.
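
For reference, a DSP image is typically loaded and run from the Linux shell with MPM's mpmcl utility. The slave name (dsp0) and image name below are placeholders:

    $ mpmcl ping dsp0
    $ mpmcl load dsp0 dsp_app.out
    $ mpmcl run dsp0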

SYS/BIOS DSP TransportSrio

The SYS/BIOS DSP TransportSrio is a MessageQ Network interface transport that can be used on a KeyStone II DSP running SYS/BIOS IPC to send and receive MessageQ messages between any ARMv7 and DSP processor on the same or a different device. Communication between any pair of processors comes with the caveat that all processors must have a unique ID assigned by the IPC MultiProc module. Additionally, the ID mappings maintained by the MultiProc modules on each device must be in sync.

Architecture

The SYS/BIOS DSP TransportSrio is a MessageQ Network interface transport that utilizes the SRIO LLD to send and receive MessageQ messages between SRIO endpoints. The SRIO endpoints can be on the same device or on another device entirely. All SRIO endpoints mapped through TransportSrio must have MessageQ as the upper level messaging layer.

[Figure: SRIO IPC transport architecture (SRIO_IPC_Trans_Arch.jpg)]

TransportSrio is restricted to being a MessageQ Network interface transport. Network interface transports are registered with MessageQ with a Transport ID value, or TID, which can be any integer from 1 through 7. The transport must be created and added to MessageQ's Network transport routing table after IPC has started, IPC has synced with all cores, and MessageQ has enabled a default intra-device, core to core transport. The default priority-based, intra-device, MessageQ interface transport is TransportShmNotify in DSP-only cases or TransportRpmsg in ARMv7 + DSP cases. MessageQ messages can be routed over the different transports by setting the desired transport priority, or TID value, in the MessageQ header's flags field. A message will be sent over a registered Network transport if a valid TID and priority are set.

<syntaxhighlight lang='C'>
MessageQ_Msg msg;

/* Allocate a message; the size must cover at least the MessageQ header */
msg = MessageQ_alloc(MY_HEAP_ID, sizeof(MessageQ_MsgHeader));

/* Route over MessageQ's MessageQ interface normal priority transport.
 * Should never need to explicitly set since MessageQ_alloc()
 * will set normal priority by default */
MessageQ_setMsgPri(msg, MessageQ_NORMALPRI);
MessageQ_put(queueId, msg);

/* ...or... */

/* Route over MessageQ's MessageQ interface high priority transport */
MessageQ_setMsgPri(msg, MessageQ_HIGHPRI);
MessageQ_put(queueId, msg);

/* ...or... */

/* Route over MessageQ's Network interface.
 * TID value must be between 1 and 7 */
MessageQ_setTransportId(msg, transport_tid);
MessageQ_put(queueId, msg);
</syntaxhighlight>

TransportSrio must be created and registered with MessageQ after IPC has started so that all initialization requirements for the SRIO transport can be satisfied. First, and most importantly, TransportSrio initialization makes resource requests of the CPPI, QMSS, and SRIO LLDs. As a result, the Resource Manager (RM) LLD must be fully initialized and a transport path from RM Clients to the RM Server must be available. Typically, TransportShmNotify is used to enable RM message passing between the Clients and Server. Second, forcing TransportSrio initialization after IPC start and sync allows any CPPI Host descriptors and attached buffers to be placed in any type of device memory.
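
A minimal sketch of the resulting creation order is shown below. rmInit(), qmssCppiInit(), and srioInit() are hypothetical placeholders for the application's LLD setup code, not TransportSrio APIs:

<syntaxhighlight lang='C'>
/* Sketch of the required creation order; helper names are placeholders */
Ipc_start();       /* IPC start and core sync; the default priority-based
                    * shared memory transport is registered here */
rmInit();          /* Create RM Client/Server instances; RM requests and
                    * responses travel over the default transport */
qmssCppiInit();    /* Initialize QMSS/CPPI and insert descriptor regions */
srioInit();        /* Power up SRIO via the PSC and run device init */

TransportSrio_Params_init(&transSrioParams);
/* ... populate transSrioParams ... */
srioTransHandle = TransportSrio_create(&transSrioParams, &errorBlock);
</syntaxhighlight>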

TransportSrio relies on IPC MultiProc's cluster functionality in order to communicate with remote device DSP cores running TransportSrio. The application must define the entire processor topology for the MultiProc module. The number of processors across all devices must be defined for MultiProc. The application's RTSC .cfg file must also define the local device's cluster base ID for MultiProc. An example for an application spanning three devices:

Device A

<syntaxhighlight lang='C'>
var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
/* Cluster definitions - Example has three clusters, one for each device.
 * Each cluster has two DSPs within it.
 * Device A [Cluster Base ID: 0] - 1 Host + 2 DSPs (Procs)
 * Device B [Cluster Base ID: 3] - 1 Host + 2 DSPs (Procs)
 * Device C [Cluster Base ID: 6] - 1 Host + 2 DSPs (Procs)
 * Total of 3 Hosts + 6 DSPs (Procs) */
MultiProc.numProcessors = 9;
/* baseIdOfCluster and numProcessors must be set BEFORE setConfig is run */
MultiProc.numProcsInCluster = 3;
MultiProc.baseIdOfCluster = 0;
var procNameList = ["HOST", "CORE0", "CORE1"];
MultiProc.setConfig(null, procNameList);
</syntaxhighlight>

Device B

<syntaxhighlight lang='C'>
var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
/* Cluster definitions - Example has three clusters, one for each device.
 * Each cluster has two DSPs within it.
 * Device A [Cluster Base ID: 0] - 1 Host + 2 DSPs (Procs)
 * Device B [Cluster Base ID: 3] - 1 Host + 2 DSPs (Procs)
 * Device C [Cluster Base ID: 6] - 1 Host + 2 DSPs (Procs)
 * Total of 3 Hosts + 6 DSPs (Procs) */
MultiProc.numProcessors = 9;
/* baseIdOfCluster and numProcessors must be set BEFORE setConfig is run */
MultiProc.numProcsInCluster = 3;
MultiProc.baseIdOfCluster = 3;
var procNameList = ["HOST", "CORE0", "CORE1"];
MultiProc.setConfig(null, procNameList);
</syntaxhighlight>

Device C

<syntaxhighlight lang='C'>
var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
/* Cluster definitions - Example has three clusters, one for each device.
 * Each cluster has two DSPs within it.
 * Device A [Cluster Base ID: 0] - 1 Host + 2 DSPs (Procs)
 * Device B [Cluster Base ID: 3] - 1 Host + 2 DSPs (Procs)
 * Device C [Cluster Base ID: 6] - 1 Host + 2 DSPs (Procs)
 * Total of 3 Hosts + 6 DSPs (Procs) */
MultiProc.numProcessors = 9;
/* baseIdOfCluster and numProcessors must be set BEFORE setConfig is run */
MultiProc.numProcsInCluster = 3;
MultiProc.baseIdOfCluster = 6;
var procNameList = ["HOST", "CORE0", "CORE1"];
MultiProc.setConfig(null, procNameList);
</syntaxhighlight>

SYS/BIOS DSP TransportSrio Source Delivery and Recompilation

The SYS/BIOS DSP TransportSrio source code and examples are delivered within the MCSDK BIOS PDK component. DSP TransportSrio can be rebuilt using the environment setup scripts provided with the PDK package. DSP TransportSrio example applications are created as part of the pdkProjectCreate scripts. They can be imported and built the same as PDK LLD example and test CCS projects.

Recompiling on Windows

  1. Open a Windows command terminal and navigate to <pdk_install_dir>/packages.
  2. Set the component install paths. The following commands can be used assuming installation of MCSDK 3.1.3.6 (presuming CCS and MCSDK 3.1.3 are installed in C:\ti\)
    set C6X_GEN_INSTALL_PATH="C:\ti\ccsv5\tools\compiler\c6000_7.4.8"
    set XDC_INSTALL_PATH=C:\ti\xdctools_3_30_05_60
    set EDMA3LLD_BIOS6_INSTALLDIR="C:\ti\edma3_lld_02_11_13_17"
    set CG_XML_BIN_INSTALL_PATH=C:\ti\cg_xml\bin
    set BIOS_INSTALL_PATH=C:\ti\bios_6_41_00_26\packages
    set IPC_INSTALL_PATH=C:\ti\ipc_3_35_01_07\packages
    set PDK_INSTALL_PATH=C:\ti\pdk_keystone2_3_01_03_06
  3. Run pdksetupenv.bat
    >pdksetupenv.bat
  4. Navigate to <pdk_install_path>/packages/ti/transport/ipc/c66/srio/
  5. Build the IPC SRIO Transport library
    >xdc

Issue the following commands if the SRIO transport ever needs to be rebuilt:

>xdc clean
>xdc

Recompiling on Linux

  1. Open a Linux bash terminal and navigate to <pdk_install_dir>/packages.
  2. Export the component install paths. The following commands can be used assuming installation of MCSDK 3.1.3.6 (presuming CCS and MCSDK 3.1.3 are installed in /opt/ti)
    export C6X_GEN_INSTALL_PATH=/opt/ti/ccsv5/tools/compiler/c6000_7.4.8
    export XDC_INSTALL_PATH=/opt/ti/xdctools_3_30_05_60
    export EDMA3LLD_BIOS6_INSTALLDIR=/opt/ti/edma3_lld_02_11_13_17
    export CG_XML_BIN_INSTALL_PATH=/opt/ti/cg_xml/bin
    export BIOS_INSTALL_PATH=/opt/ti/bios_6_41_00_26/packages
    export IPC_INSTALL_PATH=/opt/ti/ipc_3_35_01_07/packages
    export PDK_INSTALL_PATH=/opt/ti/pdk_keystone2_3_01_03_06
  3. Run pdksetupenv.sh
    $ source pdksetupenv.sh
  4. Navigate to <pdk_install_path>/packages/ti/transport/ipc/c66/srio/
  5. Build the IPC SRIO Transport library
    $ xdc

Issue the following commands if the SRIO transport ever needs to be rebuilt:

$ xdc clean
$ xdc

SYS/BIOS DSP TransportSrio Configuration Parameters

Following are the configuration parameters for DSP TransportSrio instance creation. Descriptions, default values, and programming considerations are provided for each configuration parameter. Each parameter is an element of the TransportSrio_Params structure. A structure of this type must be created, populated, and passed to the TransportSrio_create() function via pointer. The structure should be initialized to its default values using the TransportSrio_Params_init() function prior to population with user specific parameters.

<syntaxhighlight lang='C'>

   TransportSrio_Params  transSrioParams;
   ...
   TransportSrio_Params_init(&transSrioParams);
   transSrioParams.deviceCfgParams   = ...;
   transSrioParams.txMemRegion       = ...;
   ...
   srioTransHandle = TransportSrio_create(&transSrioParams, &errorBlock);

</syntaxhighlight>

Each parameter is listed below with its description, initial (default) value, and any special considerations.

<syntaxhighlight lang='C'>TransportSrio_DeviceConfigParams *deviceCfgParams;</syntaxhighlight>
  • Description: Pointer to the device-specific TransportSrio configuration parameter structure.
  • Initial value: NULL
  • Special considerations: The TransportSrio configuration structure is defined in the TransportSrio_device.c source file for supported devices.
<syntaxhighlight lang='C'>Int txMemRegion;</syntaxhighlight>
  • Description: QMSS memory region from which to allocate transmit side Host descriptors.
  • Initial value: -1
  • Special considerations:
    • Descriptors inserted into this region must be of Host type.
    • The descriptors can be located in L2, MSMC, or DDR3 memory. The base address of the descriptors must be cache line aligned if located within a shared memory (MSMC or DDR3) and caching is enabled.
<syntaxhighlight lang='C'>UInt32 txNumDesc;</syntaxhighlight>
  • Description: Number of Host descriptors to pre-allocate for SRIO transmit operations. MessageQ data buffers are attached to the descriptors and sent out via the SRIO LLD. Descriptors are recycled onto a completion queue. Descriptors, and their attached buffers, are recycled in future TransportSrio_put operations.
  • Initial value: 2
  • Special considerations: A minimum of two descriptors is needed for ping-pong-like operation. While one descriptor+buffer pair is sent, the other is recycled.
<syntaxhighlight lang='C'>UInt32 txDescSize;</syntaxhighlight>
  • Description: Size of the transmit descriptors in bytes.
  • Initial value: 0
  • Special considerations: Cache coherence operations may be performed on the descriptors based on their memory location. As a result, the descriptor size should be a multiple of the cache line size.
<syntaxhighlight lang='C'>UInt32 rxQType;</syntaxhighlight>
  • Description: The QMSS queue type that will be opened and used with the receive accumulator.
  • Initial value: 0
  • Special considerations: The queue type does not matter. Accumulator GEM events are mapped to the accumulator channels, not the queue type+value.
<syntaxhighlight lang='C'>Int rxMemRegion;</syntaxhighlight>
  • Description: QMSS memory region from which to allocate receive side Host descriptors.
  • Initial value: -1
  • Special considerations:
    • This memory region can be the same as that provided for txMemRegion.
    • Descriptors inserted into this region must be of Host type.
    • The descriptors can be located in L2, MSMC, or DDR3 memory. The base address of the descriptors must be cache line aligned if located within a shared memory (MSMC or DDR3) and caching is enabled.
<syntaxhighlight lang='C'>UInt32 rxNumDesc;</syntaxhighlight>
  • Description: Number of Host descriptors to pre-allocate for SRIO receive operations. MessageQ data buffers are pre-allocated and attached to the descriptors at TransportSrio_create() time. Data received by the SRIO LLD is copied directly into the MessageQ buffer attached to the receive descriptor.
  • Initial value: 1
  • Special considerations: A minimum of one descriptor is needed to receive a packet. The descriptors are reused for receive operations. New MessageQ buffers are allocated and attached to descriptors prior to their reuse.
<syntaxhighlight lang='C'>UInt32 rxDescSize;</syntaxhighlight>
  • Description: Size of the receive descriptors in bytes.
  • Initial value: 0
  • Special considerations: Cache coherence operations may be performed on the descriptors based on their memory location. As a result, the descriptor size should be a multiple of the cache line size.
<syntaxhighlight lang='C'>UInt16 rxMsgQHeapId;</syntaxhighlight>
  • Description: Receive-side MessageQ heap ID. MessageQ buffers are pre-allocated out of this heap and attached to descriptors for packets received by the SRIO interface.
  • Initial value: ~1
  • Special considerations:
    • The heap must have AT LEAST rxNumDesc buffers.
    • The heap can reside in L2, MSMC, or DDR3. TransportSrio will perform cache coherence operations on heap buffers residing in shared memory areas.
    • Buffers should be sized to a multiple of the cache line size if the heap is located in a shared memory. This prevents corruption when cache coherence operations are performed.
<syntaxhighlight lang='C'>UInt32 maxMTU;</syntaxhighlight>
  • Description: Maximum transmittable unit in bytes that will be handled by TransportSrio. This is also the size of the buffers within the heap mapped to rxMsgQHeapId.
  • Initial value: 256
  • Special considerations: maxMTU should be sized to a multiple of the cache line size. This prevents corruption when cache coherence operations are performed on rxMsgQHeapId buffers located in a shared memory.
<syntaxhighlight lang='C'>UInt8 accumCh;</syntaxhighlight>
  • Description: The accumulator channel used for SRIO packet reception. The GEM event for the accumulator interrupt will be derived from the provided accumulator channel and the DSP core number.
  • Initial value: 0
  • Special considerations: Please refer to the device's Multicore Navigator specification for the proper accumulator channels for each DSP core.
<syntaxhighlight lang='C'>UInt32 accumTimerCount;</syntaxhighlight>
  • Description: Number of global timer ticks to delay the periodic accumulator interrupt. A value of zero will cause an interrupt immediately upon a descriptor being placed in the accumulator ping/pong buffer.
  • Initial value: 0
<syntaxhighlight lang='C'>Void *rmServiceHandle;</syntaxhighlight>
  • Description: RM service handle that will be given to Srio_start. The RM service handle only needs to be provided if the intent is for RM to manage SRIO resources.
  • Initial value: NULL
<syntaxhighlight lang='C'>UInt rxIntVectorId;</syntaxhighlight>
  • Description: Interrupt vector ID to tie to the receive side accumulator operation.
  • Initial value: ~1
<syntaxhighlight lang='C'>srioSockParams *sockParams;</syntaxhighlight>
  • Description: Pointer to socket parameters used by the SRIO transport to bind the socket and to route messages to the proper endpoints.
  • Initial value: NULL
  • Special considerations: Socket parameters include the socket type (Type 11 or Type 9), the number of endpoints in the endpoint list, and a pointer to the Type 11 or Type 9 endpoint parameter list. The endpoint parameter list contains the address information for each processor endpoint in the system. The parameter list must be indexed by the IPC MultiProc ID so that all processors can be mapped to a unique SRIO address.
<syntaxhighlight lang='C'>UInt transNetworkId;</syntaxhighlight>
  • Description: The transport instance will be registered with MessageQ's network transport interface using the supplied network transport ID. MessageQ messages with a matching transport ID in their MessageQ header will be sent over the transport instance.
  • Initial value: 0
  • Special considerations: The MessageQ network interface transport ID must have a value between 1 and 7.

Adding the SYS/BIOS DSP TransportSrio to a DSP Application

TransportSrio requires some special considerations when adding it to an application since it is not a standard, shared memory IPC transport. As described earlier, TransportSrio is an IPC MessageQ Network interface transport. It relies on IPC and MessageQ being initialized with a priority-based, shared memory transport prior to creating any TransportSrio instances. The latter occurs by default in the RTSC configuration and Ipc_start(). Creating TransportSrio instances after IPC and MessageQ have initialized allows the transport to be initialized without any hardcoded assumptions about the locations of heap buffers and QMSS descriptors in memory. It also allows the LLDs used by TransportSrio to request their resources from RM, since RM will use the MessageQ shared memory transport as the resource request/response path.

Additions to the Application RTSC .cfg

  • TransportSrio requires the CPPI, QMSS, and SRIO LLDs in order to operate. The RM LLD is a requirement for KeyStone II devices.

<syntaxhighlight lang='C'>
var Cppi = xdc.loadPackage('ti.drv.cppi');
var Qmss = xdc.loadPackage('ti.drv.qmss');
var Srio = xdc.loadPackage('ti.drv.srio');
var Rm   = xdc.loadPackage('ti.drv.rm');

Program.sectMap[".qmss"] = new Program.SectionSpec();
Program.sectMap[".qmss"] = "MSMCSRAM";

Program.sectMap[".cppi"] = new Program.SectionSpec();
Program.sectMap[".cppi"] = "MSMCSRAM";

Program.sectMap[".sharedGRL"] = new Program.SectionSpec();
Program.sectMap[".sharedGRL"] = "L2SRAM";

Program.sectMap[".sharedPolicy"] = new Program.SectionSpec();
Program.sectMap[".sharedPolicy"] = "L2SRAM";

Program.sectMap[".srioSharedMem"] = new Program.SectionSpec();
Program.sectMap[".srioSharedMem"] = "MSMCSRAM";
</syntaxhighlight>

  • The TransportSrio module must be included to pull in the library. MessageQ must be configured with reserved queues, which will be used by any MessageQ Network interface transports that are registered. The NameServer module does not work with the MessageQ Network interface transports since these transports can potentially send MessageQ messages off-device, and NameServer cannot query outside of the device.

<syntaxhighlight lang='C'>
var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
/* Ipc_start() will synchronize all local DSP processors */
Ipc.procSync = Ipc.ProcSync_ALL;
var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
/* Reserve a block of MessageQ queues for use by the MessageQ network
 * interface transports since they don't use the NameServer module */
MessageQ.numReservedEntries = 4;
var TransportSrio = xdc.useModule('ti.transport.ipc.c66.srio.TransportSrio');
</syntaxhighlight>

  • The device-specific low-level IPC modules must be included so that the interrupt logic properly associates the device MultiProc IDs to destination interrupt generation. The MultiProc module and the low-level IPC module must both be aware that an ARMv7 processor exists as MultiProc ID 0 on KeyStone II devices.

<syntaxhighlight lang='C'>
/* Use the correct version of the low-level IPC modules so that the ARMv7
 * processor is correctly factored into the notification logic */
var NotifyDriverCirc = xdc.useModule('ti.sdo.ipc.notifyDrivers.NotifyDriverCirc');
var Interrupt = xdc.useModule('ti.ipc.family.tci6638.Interrupt');
NotifyDriverCirc.InterruptProxy = Interrupt;
var VirtQueue = xdc.useModule('ti.ipc.family.tci6638.VirtQueue');

/* Notify brings in the ti.sdo.ipc.family.Settings module, which does
 * lots of config magic which will need to be UNDONE later, or setup
 * earlier, to get the necessary overrides to various IPC module proxies! */
var Notify = xdc.module('ti.sdo.ipc.Notify');
var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');

/* Note: Must call this to override what's done in Settings.xs! */
Notify.SetupProxy = xdc.module('ti.ipc.family.tci6638.NotifyCircSetup');
</syntaxhighlight>

  • As shown earlier in the Architecture section, the MultiProc parameters must be configured correctly for each device.

Source Code Additions

  • A TransportSrio instance can be created between two endpoints after the following:
    • IPC has started and all local DSPs have attached
    • RM instances have been created. Communication between the RM Clients and the RM Server will take place over the default MessageQ interface, priority-based, IPC transport.
    • QMSS and CPPI have been initialized and started
    • The SRIO IP block has been turned ON via the PSC and the SRIO device initialization routine has been executed
    • The heaps that will provide buffers for MessageQ send/receive have been created. The same heap can be used for both transmit and receive. Note: The heap used by the TransportSrio receive logic must have a gate that is able to operate within an interrupt context. In the transport examples a GateMP configured for GateMP_LocalProtect_INTERRUPT is used.

<syntaxhighlight lang='C'>
/* Create the gate and heap that will be used to allocate messages. */
GateMP_Params_init(&gateMpParams);
gateMpParams.localProtect = GateMP_LocalProtect_INTERRUPT;
gateMpHandle = GateMP_create(&gateMpParams);

HeapBufMP_Params_init(&heapBufParams);
heapBufParams.regionId = 0;
...
heapBufParams.gate = gateMpHandle;
heapHandle = HeapBufMP_create(&heapBufParams);
</syntaxhighlight>

  • SYS/BIOS DSP TransportSrio instances can be created between endpoints once all previous requirements are satisfied. A TransportSrio instance of each SRIO type, 11 and 9, can exist on a DSP simultaneously. The following example code comes from the MultiBoard example:

<syntaxhighlight lang='C'>

   /* Create SRIO type 11 & 9 transport instances.  They will be a network
    * transport so won't interfere with default MessageQ transport, shared
    * memory notify transport */
       
   /* Type 11 configuration */
   TransportSrio_Params_init(&transSrioParamsT11);
   /* Configure common parameters */
   transSrioParamsT11.deviceCfgParams   = &srioTransCfgParams;
   transSrioParamsT11.txMemRegion       = HOST_DESC_MEM_REGION;
   /* Descriptor pool divided between all cores.  Account type 9+11 for send/receive (divide by 4) */
   transSrioParamsT11.txNumDesc         = (HOST_DESC_NUM / 4) / ipcNumLocalDspCores;
   transSrioParamsT11.txDescSize        = HOST_DESC_SIZE_BYTES;
   transSrioParamsT11.rxQType           = Qmss_QueueType_HIGH_PRIORITY_QUEUE;
   transSrioParamsT11.rxMemRegion       = HOST_DESC_MEM_REGION;
   /* Descriptor pool divided between all cores.  Account type 9+11 for send/receive (divide by 4) */
   transSrioParamsT11.rxNumDesc         = (HOST_DESC_NUM / 4) / ipcNumLocalDspCores;
   transSrioParamsT11.rxDescSize        = HOST_DESC_SIZE_BYTES;
   transSrioParamsT11.rxMsgQHeapId      = SRIO_MSGQ_HEAP_ID;
   transSrioParamsT11.maxMTU            = SRIO_MTU_SIZE_BYTES;
   transSrioParamsT11.rmServiceHandle   = rmServiceHandle;    
   /* Must map to a valid channel for each DSP core.  Follow sprugr9f.pdf Table 5-9 */
   transSrioParamsT11.accumCh           = DNUM;
   transSrioParamsT11.accumTimerCount   = 0; 
   transSrioParamsT11.transNetworkId    = SRIO_T11_TRANS_NET_ID;
   transSrioParamsT11.rxIntVectorId     = 8;
   memset(&t11EpParams, 0, sizeof(t11EpParams));
   /* Linux Host (Producer) MultiProc ID - 0 */
   t11EpParams[0].tt       = 0;
   t11EpParams[0].deviceId = DEVICE_ID1_8BIT;
   t11EpParams[0].mailbox  = 0;
   t11EpParams[0].letter   = 0;
   t11EpParams[0].segMap   = (sizeof(TstMsg) > 256 ? 1 :0);
   /* Core 0 (Producer) MultiProc ID - 1 */
   t11EpParams[1].tt       = 0;
   t11EpParams[1].deviceId = DEVICE_ID1_8BIT;
   t11EpParams[1].mailbox  = 0;
   t11EpParams[1].letter   = 1;
   t11EpParams[1].segMap   = (sizeof(TstMsg) > 256 ? 1 :0);
   /* Core 1 (Producer) MultiProc ID - 2 */
   t11EpParams[2].tt       = 0;
   t11EpParams[2].deviceId = DEVICE_ID1_8BIT;
   t11EpParams[2].mailbox  = 0;
   t11EpParams[2].letter   = 2;
   t11EpParams[2].segMap   = (sizeof(TstMsg) > 256 ? 1 :0);
   /* Linux Host (Consumer) MultiProc ID - 3 */
   t11EpParams[3].tt       = 0;
   t11EpParams[3].deviceId = DEVICE_ID2_8BIT;
   t11EpParams[3].mailbox  = 0;
   t11EpParams[3].letter   = 0;
   t11EpParams[3].segMap   = (sizeof(TstMsg) > 256 ? 1 :0);
   /* Core 0 (Consumer) MultiProc ID - 4 */
   t11EpParams[4].tt       = 0;
   t11EpParams[4].deviceId = DEVICE_ID2_8BIT;
   t11EpParams[4].mailbox  = 0;
   t11EpParams[4].letter   = 1;
   t11EpParams[4].segMap   = (sizeof(TstMsg) > 256 ? 1 :0);
   /* Core 1 (Consumer) MultiProc ID - 5 */
   t11EpParams[5].tt       = 0;
   t11EpParams[5].deviceId = DEVICE_ID2_8BIT;
   t11EpParams[5].mailbox  = 0;
   t11EpParams[5].letter   = 2;
   t11EpParams[5].segMap   = (sizeof(TstMsg) > 256 ? 1 :0);
   memset(&t11socketParams, 0, sizeof(t11socketParams));
   t11socketParams.epListSize = NUM_TOTAL_CORES;
   t11socketParams.sockType = TransportSrio_srioSockType_TYPE_11;
   t11socketParams.u.pT11Eps = &t11EpParams[0];
   
   transSrioParamsT11.sockParams = &t11socketParams;
   Error_init(&errorBlock);
   System_printf("IPC Core %d : "
                 "Creating SRIO Transport instance with Type 11 socket\n",
                 ipcCoreId);
   srioT11TransHandle = TransportSrio_create(&transSrioParamsT11, &errorBlock);
   if (srioT11TransHandle == NULL) {
       System_printf("Error IPC Core %d : "
                     "TransportSrio_create failed with id %d\n", ipcCoreId,
                     errorBlock.id);
       return;
   }  
   /* Type 9 configuration */
   TransportSrio_Params_init(&transSrioParamsT9);
   /* Configure common parameters */
   transSrioParamsT9.deviceCfgParams   = &srioTransCfgParams;
   transSrioParamsT9.txMemRegion       = HOST_DESC_MEM_REGION;
   /* Descriptor pool divided between all cores.
    * Account type 9+11 for send/receive (divide by 4) */
   transSrioParamsT9.txNumDesc         = (HOST_DESC_NUM / 4) / ipcNumLocalDspCores;
   transSrioParamsT9.txDescSize        = HOST_DESC_SIZE_BYTES;
   transSrioParamsT9.rxQType           = Qmss_QueueType_HIGH_PRIORITY_QUEUE;
   transSrioParamsT9.rxMemRegion       = HOST_DESC_MEM_REGION;
   /* Descriptor pool divided between all cores.
    * Account type 9+11 for send/receive (divide by 4) */
   transSrioParamsT9.rxNumDesc         = (HOST_DESC_NUM / 4) / ipcNumLocalDspCores;
   transSrioParamsT9.rxDescSize        = HOST_DESC_SIZE_BYTES;
   transSrioParamsT9.rxMsgQHeapId      = SRIO_MSGQ_HEAP_ID;
   transSrioParamsT9.maxMTU            = SRIO_MTU_SIZE_BYTES;
   transSrioParamsT9.rmServiceHandle   = rmServiceHandle;
   /* Type 9 instance specific parameters */
   /* Must map to a valid channel for each DSP core.
    * Follow sprugr9g.pdf Table 5-9 */
   transSrioParamsT9.accumCh            = DNUM + 8; /* Next valid accum ch is
                                                     * number of device DSP
                                                     * cores away 
                                                     * (8 for K2HK) */
   transSrioParamsT9.accumTimerCount    = 0; 
   transSrioParamsT9.transNetworkId     = SRIO_T9_TRANS_NET_ID;
   transSrioParamsT9.rxIntVectorId      = 9;
   
   /* Type 9 specific */
   memset(&t9EpParams, 0, sizeof(t9EpParams));
   /* Linux Host (Producer) MultiProc ID - 0 */
   t9EpParams[0].tt       = 0;
   t9EpParams[0].deviceId = DEVICE_ID1_8BIT;
   t9EpParams[0].cos      = 0;
   t9EpParams[0].streamId = 0;
   /* Core 0 (Producer) MultiProc ID - 1 */
   t9EpParams[1].tt       = 0;
   t9EpParams[1].deviceId = DEVICE_ID1_8BIT;
   t9EpParams[1].cos      = 0;
   t9EpParams[1].streamId = 1;
   /* Core 1 (Producer) MultiProc ID - 2 */
   t9EpParams[2].tt       = 0;
   t9EpParams[2].deviceId = DEVICE_ID1_8BIT;
   t9EpParams[2].cos      = 0;
   t9EpParams[2].streamId = 2;
   /* Linux Host (Consumer) MultiProc ID - 3 */
   t9EpParams[3].tt       = 0;
   t9EpParams[3].deviceId = DEVICE_ID2_8BIT;
   t9EpParams[3].cos      = 0;
   t9EpParams[3].streamId = 0;
   /* Core 0 (Consumer) MultiProc ID - 4 */
   t9EpParams[4].tt       = 0;
   t9EpParams[4].deviceId = DEVICE_ID2_8BIT;
   t9EpParams[4].cos      = 0;
   t9EpParams[4].streamId = 1;
   /* Core 1 (Consumer) MultiProc ID - 5 */
   t9EpParams[5].tt       = 0;
   t9EpParams[5].deviceId = DEVICE_ID2_8BIT;
   t9EpParams[5].cos      = 0;
   t9EpParams[5].streamId = 2;
   
   memset(&t9socketParams, 0, sizeof(t9socketParams));
   t9socketParams.epListSize = NUM_TOTAL_CORES;    
   t9socketParams.sockType = TransportSrio_srioSockType_TYPE_9;
   t9socketParams.u.pT9Eps = &t9EpParams[0];    
   
   transSrioParamsT9.sockParams = &t9socketParams;
   
   Error_init(&errorBlock);
   System_printf("IPC Core %d : Creating SRIO Transport instance with Type 9 socket\n", ipcCoreId); 
   srioT9TransHandle = TransportSrio_create(&transSrioParamsT9, &errorBlock);
   if (srioT9TransHandle == NULL) {
       System_printf("Error IPC Core %d : TransportSrio_create failed with id %d\n", ipcCoreId,
                     errorBlock.id);
       return;
   }

</syntaxhighlight>

  • MessageQ_create() and MessageQ_open() must utilize the reserved queues set aside in MessageQ since the network interface transports, like TransportSrio, do not use the NameServer. To utilize the reserved queues:
    • To create a local reserved queue, MessageQ_create() takes NULL instead of a string containing the queue name
    • To open a remote reserved queue, MessageQ_open() is replaced by the MessageQ_openQueueId() API

<syntaxhighlight lang='C'>

   System_printf("IPC Core %d : Creating reserved MessageQ %d\n", ipcCoreId, MESSAGEQ_RESERVED_RCV_Q);
   MessageQ_Params_init(&msgQParams);
   msgQParams.queueIndex = MESSAGEQ_RESERVED_RCV_Q;
   /* Create reserved message queue. */
   localMessageQ = MessageQ_create(NULL, &msgQParams);
   if (localMessageQ == NULL) {
       System_printf("Error IPC Core %d : MessageQ_create failed\n", ipcCoreId);
       return;
   }   
   System_printf("IPC Core %d : Opening reserved MessageQ %d on IPC core %d\n", ipcCoreId,
                 MESSAGEQ_RESERVED_RCV_Q, remoteProcId);
   remoteQueueId = MessageQ_openQueueId(MESSAGEQ_RESERVED_RCV_Q, remoteProcId);

</syntaxhighlight>

  • MessageQ_put() and MessageQ_get() can be used normally to send and receive messages between endpoints using TransportSrio. The only caveat is that messages that are to use TransportSrio must be sent with the transport ID (TID) that was used to register the TransportSrio instance with the MessageQ Network interface.

<syntaxhighlight lang='C'>
typedef struct {
    MessageQ_MsgHeader header; /* 32 bytes */
    int32_t            src;
    int32_t            flags;
    int32_t            numMsgs;
    int32_t            seqNum;
    uint32_t           data[TEST_MSG_DATA_LEN_WORDS];
    uint8_t            pad[16]; /* Pad to cache line size of 128 bytes */
} TstMsg;

...

TstMsg *txMsg;

txMsg = (TstMsg *) MessageQ_alloc(SRIO_MSGQ_HEAP_ID, sizeof(TstMsg));

/* Set the transport ID to route message through Type 11 SRIO Transport instance */
MessageQ_setTransportId(txMsg, SRIO_T11_TRANS_NET_ID);

/* OR */

/* Set the transport ID to route message through Type 9 SRIO Transport instance */
MessageQ_setTransportId(txMsg, SRIO_T9_TRANS_NET_ID);

/* Send message */
MessageQ_put(remoteQueueId, (MessageQ_Msg) txMsg);
</syntaxhighlight>

Note: Please pay extra attention to the alignment and padding of CPPI descriptors and heap buffers used by the SRIO transport. Cache coherence operations will be performed on descriptors and buffers that are detected to reside in a shared memory, such as MSMC or DDR3. All descriptors and buffers should be cache line aligned and padded to avoid data corruption when the coherence operations are performed.
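
As an illustrative sketch, sizes can be rounded up to a cache line multiple before being used for descriptors and heap buffers. The 128-byte line size and the macro names below are assumptions for illustration; use the actual line size of the memory holding the descriptors and buffers:

<syntaxhighlight lang='C'>
/* Illustrative helper: round a size up to a multiple of the cache line.
 * A 128-byte line is assumed here. */
#define CACHE_LINE_SIZE_BYTES 128
#define CACHE_ALIGN_SIZE(s) \
    (((s) + CACHE_LINE_SIZE_BYTES - 1) & ~(CACHE_LINE_SIZE_BYTES - 1))

/* Example: force descriptor size and MTU to cache line multiples */
transSrioParams.txDescSize = CACHE_ALIGN_SIZE(HOST_DESC_SIZE_BYTES);
transSrioParams.maxMTU     = CACHE_ALIGN_SIZE(SRIO_MTU_SIZE_BYTES);
</syntaxhighlight>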

SYS/BIOS DSP TransportSrio Examples

Two examples are included with the SYS/BIOS DSP TransportSrio component. Both examples are fully integrated with RM so they can be run while ARM Linux is up:

  • transportIpcSrioBenchmarkK2XExampleProject - An example that performs latency, throughput, and data integrity tests over DSP TransportSrio. In addition, it tests the MessageQ routing capabilities via the NORMALPRI and HIGHPRI flags.
  • transportIpcSrioMultiBoardK2XExampleProject - An example that sends packets from two DSPs on one EVM to two DSPs on another EVM. A data integrity check is performed on the data transferred between the two devices. Two different executables are required for this example.
    • transportIpcSrioMultiBoardProducerK2XExampleProject - The producer side of the multi-board test. The producer opens two MessageQ queues on the consumer device and uses them to send MessageQ messages to the consumer device.
    • transportIpcSrioMultiBoardConsumerK2XExampleProject - The consumer side of the multi-board test. The consumer creates two MessageQ queues and waits for the producer to open them and start sending data. Data integrity checks are performed on the data received from the producer device.

The projects are built as part of the generic pdkProjectCreate process. They can be imported, built, and run through CCS just like any other LLD example and test project.

Note: The multi-board test requires two KeyStone II devices connected via SRIO lanes by way of a breakout card or chassis in order to operate successfully.

ARMv7 Linux TransportSrio

ARMv7 Linux TransportSrio is the ARMv7 Linux MessageQ Network interface transport counterpart to the SYS/BIOS DSP version of TransportSrio. The ARMv7 Linux version of TransportSrio can be registered with the Network transport interface of ARMv7 Linux MessageQ to allow SRIO-based communication with other ARMv7 and DSP processors running TransportSrio on the local, or a remote, device. Communication between local and remote device processors is achieved by assigning each processor a unique MultiProc ID. The MultiProc ID mapping is maintained within the LAD daemon in the ARMv7 Linux version of IPC. Each Linux Host within a system must be running a LAD daemon that has been configured to know about the maximum number of processors in the system. The LAD daemon must also be configured with a unique cluster base ID within the processor ID space. The MultiProc ID mappings maintained by the LAD daemon on each Linux Host must be in sync with the ID configurations on all other processors, ARMv7 and DSP, in the system.
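
As a quick sanity check, an application can print its local view of the MultiProc ID map after IPC starts. This is a minimal sketch using the standard MultiProc query APIs; names for processors outside the local cluster may not be available in every configuration:

<syntaxhighlight lang='C'>
/* Minimal sketch: dump this process's view of the MultiProc ID map to
 * help verify that the LAD cluster configuration is in sync with the
 * other devices in the system */
UInt16 selfId   = MultiProc_self();
UInt16 numProcs = MultiProc_getNumProcessors();
UInt16 i;

printf("Local processor %s (ID %d of %d)\n",
       MultiProc_getName(selfId), selfId, numProcs);
for (i = 0; i < numProcs; i++) {
    printf("  MultiProc ID %d -> %s\n", i, MultiProc_getName(i));
}
</syntaxhighlight>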

Architecture

TransportSrio does not interact with SRIO directly. Instead, the transport interfaces with an MPM-Transport instance that supports SRIO.

Only one TransportSrio instance of each type (11 and 9) can exist across all Linux processes. This design decision was made so that SRIO addresses could be assigned based on MultiProc ID rather than a combination of MultiProc ID and MessageQ queue ID; the latter scheme would overly complicate SRIO address assignment to processors in a multi-device system. To compensate for the lack of direct reception in more than one Linux process, a reroute capability exists within the TransportSrio reception logic. The reroute logic checks the destination MessageQ queue in the received MessageQ message header. If the destination queue does not exist within the receiving Linux process, the message is sent over a registered process-to-process MessageQ transport. The transport over which the reroute occurs is mapped to a TID provided at TransportSrio instance creation; a simplified sketch follows.
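
The sketch below illustrates that reroute decision; rxMsgHandler() and queueInLocalProcess() are hypothetical names used for illustration, not the actual TransportSrio source:

<syntaxhighlight lang='C'>
/* Simplified sketch of the receive-side reroute decision.
 * rxMsgHandler() and queueInLocalProcess() are hypothetical names. */
Void rxMsgHandler(MessageQ_Msg msg, UInt rerouteTid)
{
    MessageQ_QueueId dstQ = MessageQ_getDstQueue(msg);

    if (queueInLocalProcess(dstQ)) {
        /* Destination queue was created by this Linux process */
        MessageQ_put(dstQ, msg);
    } else {
        /* Destination queue belongs to another Linux process - forward
         * over the process-to-process transport registered with the
         * reroute TID (typically TransportQmss) */
        MessageQ_setTransportId(msg, rerouteTid);
        MessageQ_put(dstQ, msg);
    }
}
</syntaxhighlight>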

An ARMv7 Linux processor running a TransportSrio instance can communicate with any other processor on any device as long as:

  • The processors are running an ARMv7 Linux TransportSrio or SYS/BIOS DSP TransportSrio instance
  • All processors have a synchronized mapping of MultiProc ID to SRIO addresses
  • All processors are physically connected over SRIO lanes.

[Figure: ARMv7 Linux TransportSrio architecture (Linux_TransSrio_Arch.jpg)]



ARMv7 Linux TransportSrio Source Delivery and Recompilation

The ARMv7 Linux TransportSrio source code can be downloaded and built in two ways. The transport source code is delivered and built as part of Yocto/bitbake. The source code can also be downloaded and built directly from the GIT repository.

Recompiling Through Yocto/bitbake

  1. Follow the instructions in the Exploring section of the user guide to configure the Yocto build environment. The tisdk-server-rootfs-image does not need to be built. Instead, look at the section for building other components.
  2. Build the TransportSrio libraries, ipc-transport-srio recipe, and user-space tests, ipc-transport-srio-test recipe:
    $ MACHINE=k2hk-evm TOOLCHAIN_BRAND=linaro ARAGO_BRAND=mcsdk bitbake ipc-transport-srio
    $ MACHINE=k2hk-evm TOOLCHAIN_BRAND=linaro ARAGO_BRAND=mcsdk bitbake ipc-transport-srio-test

    Note: The initial build may take quite some time since the kernel is built as a dependency.
    Note: Building just the ipc-transport-srio-test recipe will also build the ipc-transport-srio recipe since the test recipe depends on the library recipe.
  3. The built TransportSrio static library will be located in
    <base_path>/oe-layersetup/build/arago-tmp-external-linaro-toolchain/work/cortexa15hf-vfp-neon-3.8-oe-linux-gnueabi/ipc-transport-srio/<tag-ver_recipe-ver>/packages-split/ipc-transport-srio-staticdev/usr/lib/libTransportSrio.a

    The built TransportSrio shared library will be located in
    <base_path>/oe-layersetup/build/arago-tmp-external-linaro-toolchain/work/cortexa15hf-vfp-neon-3.8-oe-linux-gnueabi/ipc-transport-srio/<tag-ver_recipe-ver>/packages-split/ipc-transport-srio/usr/lib/libTransportSrio.so.1.0.0
  4. The ipc-transport-srio-test recipe will build test static and shared library executables for all supported devices. The executables will be located in
    <base_path>/oe-layersetup/build/arago-tmp-external-linaro-toolchain/work/cortexa15hf-vfp-neon-3.8-oe-linux-gnueabi/ipc-transport-srio-test/<tag-ver_recipe-ver>/packages-split/ipc-transport-srio-test/usr/bin/

Recompiling Through GIT Repository

Recompiling through the ARMv7 Linux TransportSrio GIT repository requires the latest MCSDK Linux installation. The MCSDK Linux PDK component and the Linux devkit must be installed. The Linux devkit installation script can be found in <MCSDK Linux install root>/mcsdk_linux_3_XX_YY_ZZ/linux-devkit/.

  1. Clone the keystone-linux/ipc-transport repository from git.ti.com
    $ git clone git://git.ti.com/keystone-linux/ipc-transport.git
  2. Navigate to the MCSDK Linux installation of pdk_3_XX_YY_ZZ/packages and source armv7setupenv.sh.
    Note: The armv7setupenv.sh script must be modified to point to the Linaro toolchain and the installed devkit path.
    $ source armv7setupenv.sh
  3. Navigate back to the SRIO transport directory in the ipc-transport GIT repository
    $ cd <repo_root_path>/ipc-transport/linux/srio
  4. Build the TransportSrio library and user-space test executables:
    $ make lib
    $ make tests
  5. The TransportSrio static and shared libraries will be copied directly into the Linux devkit's /usr/lib folder as long as the devkit install path was set up correctly prior to running the armv7setupenv.sh script.
  6. The test executables will be generated in the <base_repo_path>/ipc-transport/bin/<k2 device>/test/ folder. Only the device specified in the armv7setupenv.sh will be built.
    Note: Setting the USEDYNAMIC_LIB environment variable to "yes" will generate the shared library test executables.
    $ export USEDYNAMIC_LIB=yes

ARMv7 Linux TransportSrio Configuration Parameters

Following are the configuration parameters for TransportSrio instance creation. Descriptions, default values, and programming considerations are provided for each configuration parameter. Each parameter is an element of the TransportSrio_Params structure. A structure of this type must be created, populated, and passed to the TransportSrio_create() function via pointer. The structure should be initialized to its default values using the TransportSrio_Params_init() function prior to population with application specific parameters.

<syntaxhighlight lang='C'>

   TransportSrio_Params  srio_trans_params;
   ...
   TransportSrio_Params_init(&srio_trans_params);
   snprintf(mpm_inst_name, MPM_INST_NAME_LEN, "arm-srio-generic");
   srio_trans_params.mpm_trans_inst_name = mpm_inst_name;
   srio_trans_params.rm_service_h        = ...;
   ...
   srio_handle = TransportSrio_create(&srio_trans_params);

</syntaxhighlight>

Each parameter is listed below with its description, initial (default) value, and any special considerations.

<syntaxhighlight lang='C'>Char *mpm_trans_inst_name;</syntaxhighlight>
  • Description: MPM-Transport instance name. This string must match a "slaves" name string defined within the mpm_config.json file used in the Linux filesystem.
  • Initial value: NULL
  • Special considerations: This string name must match the generic SRIO instance name in the filesystem's /etc/mpm/mpm_config.json. The MPM JSON file's generic SRIO string is "arm-srio-generic".
<syntaxhighlight lang='C'>Void *rm_service_h;</syntaxhighlight>
  • Description: RM instance service handle needed by MPM-Transport to request hardware resources.
  • Initial value: NULL
<syntaxhighlight lang='C'>TransportSrio_SocketParams *sock_params;</syntaxhighlight>
  • Description: Pointer to socket parameters used by the SRIO transport to bind the socket and to route messages to the proper endpoints.
  • Initial value: NULL
  • Special considerations:
    • Socket parameters include the socket type (Type 11 or Type 9), the number of endpoints in the endpoint list, and a pointer to the Type 11 or Type 9 endpoint parameter list. The endpoint parameter list contains the address information for each processor endpoint in the system. The parameter list must be indexed by the IPC MultiProc ID so that all processors can be mapped to a unique SRIO address.
    • This structure is replicated within the TransportSrio instance, so the structure passed by pointer can be allocated from temporary memory.
<syntaxhighlight lang='C'>Int rx_msg_size_bytes;</syntaxhighlight>
  • Description: Maximum size in bytes of messages that will be received by this transport. Used to allocate MessageQ messages for reception.
  • Initial value: 0
  • Special considerations: The value specified must be in sync with the "sizebuf" values specified in the "qmss-queue-map" transmit and receive free queue nodes of the mpm_config.json configuration file.
<syntaxhighlight lang='C'>Int reroute_tid;</syntaxhighlight>
  • Description: MessageQ Network interface Transport ID of an ARMv7 Linux TransportQmss instance. It will be used to reroute received MessageQ messages destined for a MessageQ queue that was opened from another Linux process. Messages received for a MessageQ queue opened from another Linux process will be dropped if this value is left as 0.
  • Initial value: 0
  • Special considerations: A valid TID value is between 1 and 7.
<syntaxhighlight lang='C'>
Int (*srio_device_init) (Void *init_cfg,
                         Void *srio_base_addr,
                         UInt32 serdes_addr);</syntaxhighlight>
  • Description: Pointer to the SRIO device initialization routine. This routine initializes the SRIO hardware with routing and address information. MPM-Transport will call the provided function pointer during the SRIO initialization process.
  • Initial value: NULL
  • Special considerations:
    • The routine should be run once per device.
    • The function pointer should be NULL if another process or device core was the first to initialize the usage of SRIO.
    • The init_cfg input parameter can be a pointer to an application specific input parameter to the srio_device_init function.
<syntaxhighlight lang='C'>Void *init_cfg;</syntaxhighlight>
  • Description: Application specific input parameter to the srio_device_init implementation.
  • Initial value: NULL
<syntaxhighlight lang='C'>
Int (*srio_device_deinit) (Void *deinit_cfg,
                           Void *srio_base_addr);</syntaxhighlight>
  • Description: Pointer to the SRIO device de-initialization routine. It will be invoked by MPM-Transport when de-initializing the SRIO hardware within deletion routines.
  • Initial value: NULL
  • Special considerations:
    • The routine should be run once per device.
    • The function pointer should be NULL if another process or device core will be the last to delete its usage of SRIO.
    • The deinit_cfg input parameter can be a pointer to an application specific input parameter to the srio_device_deinit function.
<syntaxhighlight lang='C'>Void *deinit_cfg;</syntaxhighlight>
  • Description: Application specific input parameter to the srio_device_deinit implementation.
  • Initial value: NULL
<syntaxhighlight lang='C'>Int mpm_trans_init_qmss;</syntaxhighlight>
  • Description: Controls whether MPM-Transport will initialize the QMSS hardware. 0 - MPM-Transport will not initialize QMSS; set if another entity within the system initialized the QMSS hardware. 1 - MPM-Transport will initialize QMSS; set if this is the first system entity being created that uses QMSS and QMSS has not been initialized.
  • Initial value: 0

MPM Transport Configuration Effects on ARMv7 Linux TransportSrio

TransportSrio assumes MPM Transport will manage configuration of the QMSS, CPPI, and SRIO LLDs. As a result, descriptor and descriptor buffer management is pushed to MPM Transport in the ARMv7 Linux version of TransportSrio. The MPM Transport JSON configuration file should be modified in order to change QMSS descriptor and buffer related parameters.

The MPM Transport JSON configuration file is located in the Linux file system at /etc/mpm/mpm_config.json.

Adding the ARMv7 Linux TransportSrio to a User-Space Application

A TransportSrio instance can be created after the following:

  • IPC has started
  • An RM client instance has been created. Communication between the RM Client and the RM Server will take place over a Linux socket.
  • [Optional] A TransportQmss instance has been created so that TransportSrio can reroute MessageQ messages received over SRIO that are destined for a Linux process other than the process in which the TransportSrio instance will be created.
  • ARMv7 Linux TransportSrio instances can be created once all previous requirements are satisfied. A TransportSrio instance of each SRIO type, 11 and 9, can exist simultaneously in Linux user space. The following example code comes from the Producer-side of the Producer/Consumer example:

<syntaxhighlight lang='C'>

   /* Type 11 instance */
   
   TransportSrio_Params_init(&srio_trans_params);
   snprintf(mpm_inst_name, MPM_INST_NAME_LEN, "arm-srio-generic");
   srio_trans_params.mpm_trans_inst_name = mpm_inst_name;
   srio_trans_params.rm_service_h        = rm_service_h;
   srio_trans_params.rx_msg_size_bytes   = MAX_PACKET_SIZE;
   srio_trans_params.reroute_tid         = TRANS_QMSS_NET_ID;
   srio_trans_params.srio_device_init    = &mySrioDevice_init;
   srio_trans_params.init_cfg            = (void *)&path_mode;
   srio_trans_params.srio_device_deinit  = &mySrioDevice_deinit;
   srio_trans_params.deinit_cfg          = NULL;
   /* TransportQmss informs mpm-transport to init QMSS */
   srio_trans_params.mpm_trans_init_qmss = 0;
   
   /* Configure producer's static socket parameters. Structures can
    * be local since TransportSrio will make a copy */
   memset(&t11_params, 0, sizeof(t11_params));
   sock_params.num_eps   = MAX_SYSTEM_PROCESSORS;
   sock_params.sock_type = sock_TYPE_11;
   
   /* Linux Host (Producer) */
   t11_params[0].tt        = 0;
   t11_params[0].device_id = DEVICE_ID1_8BIT;
   t11_params[0].letter    = 0;
   t11_params[0].mailbox   = 0;
   t11_params[0].seg_map   = (MAX_PACKET_SIZE > 256) ? 1 : 0;
   
   /* Linux Host (Consumer) */
   t11_params[9].tt        = 0;
   t11_params[9].device_id = DEVICE_ID2_8BIT;
   t11_params[9].letter    = 0;
   t11_params[9].mailbox   = 0;
   t11_params[9].seg_map   = (MAX_PACKET_SIZE > 256) ? 1 : 0;
   sock_params.u.t11_eps = &t11_params[0];
   srio_trans_params.sock_params = &sock_params;
   printf("Process %d : Creating TransportSrio Type 11 instance\n",
          local_process);
   srio_trans_t11_h = TransportSrio_create(&srio_trans_params);
   if (!srio_trans_t11_h) {
       printf("ERROR Process %d : "
              "Failed to create TransportSrio Type 11 handle\n", 
              local_process);
       status = -1;
       goto err_exit;
   }
   
   /* Register transport with MessageQ as network transport */
   net_trans_h = TransportSrio_upCast(srio_trans_t11_h);
   base_trans_h = INetworkTransport_upCast(net_trans_h);
   if (MessageQ_registerTransportId(TRANS_SRIO_T11_NET_ID, base_trans_h) < 0) {
       printf("ERROR Process %d : "
              "Failed to register TransportSrio Type 11 as network "
              "transport with TID %d\n", local_process, TRANS_SRIO_T11_NET_ID);
       status = -1;
       goto err_exit;
   }
   /* Type 9 instance */
   TransportSrio_Params_init(&srio_trans_params);
   snprintf(mpm_inst_name, MPM_INST_NAME_LEN, "arm-srio-generic");
   srio_trans_params.mpm_trans_inst_name = mpm_inst_name;
   srio_trans_params.rm_service_h        = rm_service_h;
   srio_trans_params.rx_msg_size_bytes   = MAX_PACKET_SIZE;
   srio_trans_params.reroute_tid         = TRANS_QMSS_NET_ID;
   /* Don't run device init/deinit for second transport */
   srio_trans_params.srio_device_init    = NULL;
   srio_trans_params.init_cfg            = NULL;
   srio_trans_params.srio_device_deinit  = NULL;
   srio_trans_params.deinit_cfg          = NULL;
   /* TransportQmss informs mpm-transport to init QMSS */
   srio_trans_params.mpm_trans_init_qmss = 0;
   
   /* Configure producer's static socket parameters. Structures can
    * be local since TransportSrio will make a copy */
   memset(&t9_params, 0, sizeof(t9_params));
   sock_params.num_eps   = MAX_SYSTEM_PROCESSORS;
   sock_params.sock_type = sock_TYPE_9;
   
   /* Linux Host (Producer) */
   t9_params[0].tt        = 0;
   t9_params[0].device_id = DEVICE_ID1_8BIT;
   t9_params[0].cos       = 0;
   t9_params[0].stream_id = 0;
   
   /* Linux Host (Consumer) */
   t9_params[9].tt        = 0;
   t9_params[9].device_id = DEVICE_ID2_8BIT;
   t9_params[9].cos       = 0;
   t9_params[9].stream_id = 0;
   sock_params.u.t9_eps = &t9_params[0];
   srio_trans_params.sock_params = &sock_params;
   printf("Process %d : Creating TransportSrio Type 9 instance\n",
          local_process);
   srio_trans_t9_h = TransportSrio_create(&srio_trans_params);
   if (!srio_trans_t9_h) {
       printf("ERROR Process %d : "
              "Failed to create TransportSrio Type 9 handle\n", 
              local_process);
       status = -1;
       goto err_exit;
   }
   
   /* Register transport with MessageQ as network transport */
   net_trans_h = TransportSrio_upCast(srio_trans_t9_h);
   base_trans_h = INetworkTransport_upCast(net_trans_h);
   if (MessageQ_registerTransportId(TRANS_SRIO_T9_NET_ID, base_trans_h) < 0) {
       printf("ERROR Process %d : "
              "Failed to register TransportSrio Type 9 as network "
              "transport with TID %d\n", local_process, TRANS_SRIO_T9_NET_ID);
       status = -1;
       goto err_exit;
   }

</syntaxhighlight>
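
When messaging is complete, the transports can be unregistered from MessageQ and destroyed. The following is a minimal teardown sketch: MessageQ_unregisterTransportId() is the IPC counterpart of MessageQ_registerTransportId(), while TransportSrio_delete() is assumed here as the destructor counterpart of TransportSrio_create():

<syntaxhighlight lang='C'>
/* Teardown sketch: unregister both network transports from MessageQ,
 * then delete the transport instances (TransportSrio_delete() is an
 * assumed counterpart to TransportSrio_create()) */
MessageQ_unregisterTransportId(TRANS_SRIO_T11_NET_ID);
MessageQ_unregisterTransportId(TRANS_SRIO_T9_NET_ID);
TransportSrio_delete(&srio_trans_t11_h);
TransportSrio_delete(&srio_trans_t9_h);
</syntaxhighlight>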

  • MessageQ_create() and MessageQ_open() must utilize the reserved queues set aside in MessageQ since the network interface transports do not use the NameServer. On Linux, the IPC LAD daemon's configuration must be changed to reserve queues. To utilize the reserved queues:
    • To create a local reserved queue, MessageQ_create() takes NULL instead of a string containing a queue name
    • To open a remote reserved queue, MessageQ_open() is replaced by the MessageQ_openQueueId() API

<syntaxhighlight lang='C'>

       /* Create queue used to receive messages from remote devices and open
        * remote device queues - one per process */
       MessageQ_Params_init(&msg_params);
       msg_params.queueIndex = MESSAGEQ_RESERVED_RCV_Q;
       srio_msg_q_h = MessageQ_create(NULL, &msg_params);
       if (srio_msg_q_h == NULL) {
           printf("ERROR Process %d : Failed to create MessageQ\n",
                  local_process);
           status = -1;
           goto err_exit;
       }
       printf("Process %d : "
              "Created remote device reception MessageQ with QId: 0x%x\n",
              local_process, MessageQ_getQueueId(srio_msg_q_h));
       /* Open Consumer Linux Host MessageQ queues on each process of the
        * remote Linux Host */
       for (i = 0; i < CONSUMER_PROCESSES; i++) {
           rem_srio_q_id[i] = MessageQ_openQueueId(MESSAGEQ_RESERVED_RCV_Q + i,
                                                   CONSUMER_HOST_PROC_ID);
           printf("Process %d : Opened remote device QId: 0x%x\n",
                  local_process, rem_srio_q_id[i]);
       }

</syntaxhighlight>

  • MessageQ_put() and MessageQ_get() can be used normally to send and receive messages between end points using TransportSrio. The only caveat is that messages that are to use TransportSrio must be sent with the transport ID, TID, that was used to register the TransportSrio instance with the MessageQ Network interface.

<syntaxhighlight lang='C'>
/* TransportSrio Type 11 MessageQ Network Interface TID */
#define SRIO_T11_TRANS_NET_ID 1

/* TransportSrio Type 9 MessageQ Network Interface TID */
#define SRIO_T9_TRANS_NET_ID 2

typedef struct {
    MessageQ_MsgHeader header; /* 32 bytes */
    int32_t            src;
    int32_t            flags;
    int32_t            numMsgs;
    int32_t            seqNum;
    uint32_t           data[TEST_MSG_DATA_LEN_WORDS];
    uint8_t            pad[16]; /* Pad to cache line size of 128 bytes */
} test_msg_t;

...

test_msg_t *msg = NULL;

msg = (test_msg_t *) MessageQ_alloc(0, sizeof(*msg));
if (msg == NULL) {
    printf("ERROR Process %d : MessageQ_alloc failed\n",
           local_process);
    goto err_exit;
}

...

/* Set the transport ID to route message through Type 11 SRIO Transport
 * instance */
MessageQ_setTransportId(msg, SRIO_T11_TRANS_NET_ID);

/* OR */

/* Set the transport ID to route message through Type 9 SRIO Transport
 * instance */
MessageQ_setTransportId(msg, SRIO_T9_TRANS_NET_ID);

/* Send message */
status = MessageQ_put(rem_srio_q_id, (MessageQ_Msg)msg);
if (status < 0) {
    printf("ERROR Process %d : MessageQ_put failed\n",
           local_process);
    goto err_exit;
}
</syntaxhighlight>



ARMv7 Linux TransportSrio Tests

ARMv7 Linux TransportSrio includes a single test that uses TransportSrio to send MessageQ messages between Linux Hosts on two different KeyStone II devices. The test consists of two user-space executables, producer.out and consumer.out. The producer and consumer applications will create Type 11 and Type 9 TransportSrio instances; only one TransportSrio instance of each type can exist on a Linux host at any given time. A TransportQmss instance will also be created in the consumer application and provided to TransportSrio for multi-process routing. The applications will synchronize over TransportSrio and then a bulk MessageQ message transfer will take place from producer to consumer. The bulk transfer will be run over both the Type 11 and Type 9 instances. The producer application will send messages to two different MessageQ queues on the consumer device. One queue will be located in the process where the TransportSrio instance exists; the other queue will be in a process where only a TransportQmss instance exists. Messages received by TransportSrio that are destined for the queue located in the other process will be rerouted to that process using the TransportQmss instance.

Building the Producer/Consumer Test

The producer and consumer user-space applications can be built through Yocto/bitbake or through downloading the keystone-linux/ipc-transport GIT repository. For instructions on how to do either, please see the UG section on ARMv7 Linux TransportSrio Source Delivery.



Running the Producer/Consumer Test

Setup is needed on both the producer and consumer devices prior to running the test. The LAD daemon provided with the release file system has been patched to assume up to 65535 processors exist in the system and to reserve 8 MessageQ queues. It has also been patched to take the MultiProc cluster base ID as a command line input.

Producer Setup:
NoteNote: Replace "K2X" and "k2x" with the proper device

  1. Load the CMEM module
    $ insmod /lib/modules/3.10.61/extra/cmemk.ko
  2. Download and run the IPC DSP minimal startup application
    $ mpmcl load dsp0 transportIpcStartupK2XUtilProject.out
    $ mpmcl run dsp0
  3. Start the RM Server
    $ rmServer.out /usr/bin/device/k2x/global-resource-list.dtb /usr/bin/device/k2x/policy_dsp_arm.dtb

Consumer Setup:
Note: Replace "K2X" and "k2x" with the proper device

  1. Kill and restart LAD with a cluster base ID of 9. This makes the consumer device unique from the producer device from an IPC MultiProc point of view
    $ pkill lad_tci6638
    $ lad_tci6638 -l log.txt -b 9
  2. Load the CMEM module
    $ insmod /lib/modules/3.10.61/extra/cmemk.ko
  3. Download and run the IPC DSP minimal startup application
    $ mpmcl load dsp0 transportIpcStartupK2XUtilProject.out
    $ mpmcl run dsp0
  4. Start the RM Server
    $ rmServer.out /usr/bin/device/k2x/global-resource-list.dtb /usr/bin/device/k2x/policy_dsp_arm.dtb

The producer and consumer executables can be run once setup is complete on both devices. Order of execution shouldn't matter, but to be safe, start consumer.out prior to producer.out.
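
Assuming the built producer.out and consumer.out binaries have been copied to the respective device file systems:

  1. Start the consumer application on the consumer device
    $ ./consumer.out
  2. Start the producer application on the producer device
    $ ./producer.out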



SYS/BIOS DSP TransportQmss

The SYS/BIOS DSP TransportQmss is a MessageQ Network interface transport that can be used on a KeyStone II DSP running SYS/BIOS IPC to send and receive MessageQ messages between DSP processors and Linux user-space processes running the ARMv7 Linux version of TransportQmss.

Architecture

The SYS/BIOS DSP TransportQmss is a MessageQ Network interface transport that utilizes the QMSS LLD to send and receive MessageQ messages between DSPs and Linux user-space processes on a KeyStone II device. All endpoints must reside on the same device since the Multicore Navigator (QMSS) is an on-chip resource.


TransportQmss is restricted to being a MessageQ Network interface transport. Network interface transports are registered with MessageQ with a Transport ID value, or TID, which can be any integer from 1 through 7. The transport must be created and added to MessageQ's Network transport routing table after IPC has started, IPC has synced with all cores, and MessageQ has enabled a default intra-device, core-to-core transport. The default priority-based, intra-device MessageQ interface transport is TransportShmNotify in DSP-only cases or TransportRpmsg in ARMv7 + DSP cases. MessageQ messages can be routed over the different transports by setting the desired transport priority, or TID value, in the MessageQ header's flags field. A message will be sent over a registered Network transport if a valid TID and priority are set.

<syntaxhighlight lang='C'>
MessageQ_Msg msg;

/* Size must be at least sizeof(MessageQ_MsgHeader) */
msg = MessageQ_alloc(MY_HEAP_ID, sizeof(MessageQ_MsgHeader));

/* Route over MessageQ's MessageQ interface normal priority transport.
 * Should never need to explicitly set since MessageQ_alloc()
 * will set normal priority by default */
MessageQ_setMsgPri(msg, MessageQ_NORMALPRI);
MessageQ_put(queueId, msg);

/* ...or... */

/* Route over MessageQ's MessageQ interface high priority transport */
MessageQ_setMsgPri(msg, MessageQ_HIGHPRI);
MessageQ_put(queueId, msg);

/* ...or... */

/* Route over MessageQ's Network interface.
 * TID value has to be a value between 1 and 7 */
MessageQ_setTransportId(msg, transport_tid);
MessageQ_put(queueId, msg);
</syntaxhighlight>

TransportQmss must be created and registered with MessageQ after IPC has started so that all initialization requirements for the QMSS transport can be satisfied. First, and most importantly, TransportQmss initialization makes resource requests to the CPPI and QMSS LLDs. As a result, the Resource Manager (RM) LLD must be fully initialized and a transport path from the RM Clients to the RM Server must be available. Typically, TransportShmNotify is used to enable RM message passing between the Clients and Server in a DSP-only use case; TransportRpmsg must be used as the RM control messaging backbone in a mixed ARM Linux and DSP use case. Second, forcing TransportQmss initialization after IPC start and sync allows any CPPI Host descriptors and attached buffers to be placed in any type of device memory.
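
The resulting initialization order is sketched below. This is a minimal outline only; app_rm_init() and app_qmss_cppi_init() are hypothetical helpers standing in for the application's RM instance creation and QMSS/CPPI LLD initialization code, and the TransportQmss creation parameters are described in the next section.

<syntaxhighlight lang='C'>
#include <xdc/std.h>
#include <xdc/runtime/System.h>
#include <ti/ipc/Ipc.h>

extern Void *app_rm_init(Void);              /* hypothetical helper */
extern Void app_qmss_cppi_init(Void *rmSrv); /* hypothetical helper */

Void init_transport_qmss(Void)
{
    Void *rmServiceHandle;

    /* 1. Start IPC; with Ipc.procSync = Ipc.ProcSync_ALL this also
     *    synchronizes all local DSP cores */
    if (Ipc_start() < 0) {
        System_abort("Ipc_start failed\n");
    }

    /* 2. Create the RM instances; RM Client <-> Server messaging runs
     *    over the default priority-based MessageQ transport */
    rmServiceHandle = app_rm_init();

    /* 3. Initialize and start the QMSS and CPPI LLDs */
    app_qmss_cppi_init(rmServiceHandle);

    /* 4. Create the MessageQ heaps, then create and register the
     *    TransportQmss instance (see the following sections) */
}
</syntaxhighlight>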

SYS/BIOS DSP TransportQmss Source Delivery and Recompilation

The SYS/BIOS DSP TransportQmss source code and examples are delivered within the MCSDK BIOS PDK component. DSP TransportQmss can be rebuilt using the environment setup scripts provided with the PDK package. DSP TransportQmss example applications are created as part of the pdkProjectCreate scripts. They can be imported and built the same as PDK LLD example and test CCS projects.

Recompiling on Windows

  1. Open a Windows command terminal and navigate to <pdk_install_dir>/packages.
  2. Set the component install paths. The following commands assume CCS and MCSDK are installed in the default location (C:\ti)
    set C6X_GEN_INSTALL_PATH="C:\ti\ccsv6\tools\compiler\c6000_[version]"
    set XDC_INSTALL_PATH=C:\ti\xdctools_[version]
    set EDMA3LLD_BIOS6_INSTALLDIR="C:\ti\edma3_lld_[version]"
    set CG_XML_BIN_INSTALL_PATH=C:\ti\cg_xml\bin
    set BIOS_INSTALL_PATH=C:\ti\bios_[version]\packages
    set IPC_INSTALL_PATH=C:\ti\ipc_[version]\packages
    set PDK_INSTALL_PATH=C:\ti\pdk_keystone2_[version]
  3. Run pdksetupenv.bat
    >pdksetupenv.bat
  4. Navigate to <pdk_install_path>/packages/ti/transport/ipc/c66/qmss/
  5. Build the IPC QMSS Transport library
    >xdc

Issue the following commands if the QMSS transport ever needs to be rebuilt:

>xdc clean
>xdc

Recompiling on Linux

  1. Open a Linux bash terminal and navigate to <pdk_install_dir>/packages.
  2. Export the component install paths. The following commands assume CCS and MCSDK are installed in the default location (/opt/ti)
    export C6X_GEN_INSTALL_PATH=/opt/ti/ccsv5/tools/compiler/c6000_[version]
    export XDC_INSTALL_PATH=/opt/ti/xdctools_[version]
    export EDMA3LLD_BIOS6_INSTALLDIR=/opt/ti/edma3_lld_[version]
    export CG_XML_BIN_INSTALL_PATH=/opt/ti/cg_xml/bin
    export BIOS_INSTALL_PATH=/opt/ti/bios_[version]/packages
    export IPC_INSTALL_PATH=/opt/ti/ipc_[version]/packages
    export PDK_INSTALL_PATH=/opt/ti/pdk_keystone2_[version]
  3. Run pdksetupenv.sh
    $ source pdksetupenv.sh
  4. Navigate to <pdk_install_path>/packages/ti/transport/ipc/c66/qmss/
  5. Build the IPC QMSS Transport library
    $ xdc

Issue the following commands if the QMSS transport ever needs to be rebuilt:

$ xdc clean
$ xdc

SYS/BIOS DSP TransportQmss Configuration Parameters

Following are the configuration parameters for DSP TransportQmss instance creation. Descriptions, default values, and programming considerations are provided for each configuration parameter. Each parameter is an element of the TransportQmss_Params structure. A structure of this type must be created, populated, and passed to the TransportQmss_create() function via pointer. The structure should be initialized to its default values using the TransportQmss_Params_init() function prior to population with user specific parameters.

<syntaxhighlight lang='C'>

   TransportQmss_Params  transQmssParams;
   ...
   TransportQmss_Params_init(&transQmssParams);
   transQmssParams.deviceCfgParams   = ...;
   transQmssParams.txMemRegion       = ...;
   ...
   qmssTransHandle = TransportQmss_create(&transQmssParams, &errorBlock);

</syntaxhighlight>

<syntaxhighlight lang='C'>TransportQmss_DeviceConfigParams *deviceCfgParams;</syntaxhighlight>
  Description: Pointer to the device specific TransportQmss configuration parameter structure.
  Initial value: NULL
  Special considerations: The TransportQmss configuration structure is defined in the TransportQmss_device.c source file for supported devices.

<syntaxhighlight lang='C'>Int txMemRegion;</syntaxhighlight>
  Description: QMSS memory region from which to allocate transmit side Host descriptors.
  Initial value: -1
  Special considerations:
  • Descriptors inserted into this region must be of Host type.
  • The descriptors can be located in L2, MSMC, or DDR3 memory. The base address of the descriptors must be cache line aligned if located within a shared memory (MSMC or DDR3) and caching is enabled.

<syntaxhighlight lang='C'>UInt32 txNumDesc;</syntaxhighlight>
  Description: Number of Host descriptors to pre-allocate for QMSS transmit operations. MessageQ data buffers are attached to the descriptors and sent out via the QMSS LLD. Descriptors are recycled onto a completion queue. Descriptors, and their attached buffers, are recycled in future TransportQmss_put operations.
  Initial value: 2
  Special considerations: A minimum of two descriptors is needed for a ping-pong-like operation. While one descriptor+buffer pair is sent, the other is recycled.

<syntaxhighlight lang='C'>UInt32 txDescSize;</syntaxhighlight>
  Description: Size of the transmit descriptors in bytes.
  Initial value: 0
  Special considerations: Cache coherence operations may be performed on the descriptors based on their memory location. As a result, the descriptor size should be a multiple of a cache line.

<syntaxhighlight lang='C'>Int rxMemRegion;</syntaxhighlight>
  Description: QMSS memory region from which to allocate receive side Host descriptors.
  Initial value: -1
  Special considerations:
  • This memory region can be the same as that provided for txMemRegion.
  • Descriptors inserted into this region must be of Host type.
  • The descriptors can be located in L2, MSMC, or DDR3 memory. The base address of the descriptors must be cache line aligned if located within a shared memory (MSMC or DDR3) and caching is enabled.

<syntaxhighlight lang='C'>UInt32 rxNumDesc;</syntaxhighlight>
  Description: Number of Host descriptors to pre-allocate for QMSS receive operations. MessageQ data buffers are pre-allocated and attached to the descriptors at TransportQmss_create() time. Data received by the QMSS LLD is copied directly into the MessageQ buffer attached to the receive descriptor.
  Initial value: 1
  Special considerations: A minimum of one descriptor is needed to receive a packet. The descriptors are reused for receive operations. New MessageQ buffers are allocated and attached to descriptors prior to their reuse.

<syntaxhighlight lang='C'>UInt32 rxDescSize;</syntaxhighlight>
  Description: Size of the receive descriptors in bytes.
  Initial value: 0
  Special considerations: Cache coherence operations may be performed on the descriptors based on their memory location. As a result, the descriptor size should be a multiple of a cache line.

<syntaxhighlight lang='C'>UInt16 rxMsgQHeapId;</syntaxhighlight>
  Description: Receive-side MessageQ Heap ID. MessageQ buffers are pre-allocated out of this heap and attached to descriptors for packets received by the QMSS interface.
  Initial value: ~1
  Special considerations:
  • The heap must have AT LEAST rxNumDesc number of buffers.
  • The heap can reside in L2, MSMC, or DDR3. TransportQmss will perform cache coherence operations on heap buffers residing in shared memory areas.
  • Buffers should be sized to be a multiple of a cache line if the heap is located in a shared memory. This prevents corruption when cache coherence operations are performed.

<syntaxhighlight lang='C'>UInt32 maxMTU;</syntaxhighlight>
  Description: Maximum transmittable unit in bytes that will be handled by TransportQmss. This is also the size of the buffers within the heap mapped to the rxMsgQHeapId.
  Initial value: 256
  Special considerations: maxMTU should be sized to be a multiple of a cache line. This prevents corruption when cache coherence operations are performed on rxMsgQHeapId buffers located in a shared memory.

<syntaxhighlight lang='C'>TransportQmss_QueueRcvParams rcvQParams;</syntaxhighlight>
  Description: Parameters that define the type of receive QMSS queue that will be used by the transport. TransportQmss supports two receive queue types, accumulator or QPEND.
  Special considerations: Please refer to example/src/bench_qmss.c for examples of how to configure TransportQmss for the different receive queue types.

<syntaxhighlight lang='C'>Void *rmServiceHandle;</syntaxhighlight>
  Description: RM service handle that will be given to Qmss_init/start. The RM service handle only needs to be provided if the intent is for RM to manage QMSS resources.
  Initial value: NULL

<syntaxhighlight lang='C'>UInt rxIntVectorId;</syntaxhighlight>
  Description: Interrupt vector ID to tie to the receive side accumulator operation.
  Initial value: ~1

<syntaxhighlight lang='C'>UInt transNetworkId;</syntaxhighlight>
  Description: The transport instance will be registered with MessageQ's network transport interface using the supplied network transport ID. MessageQ messages with a matching transport ID in their MessageQ header will be sent over the transport instance.
  Initial value: 0
  Special considerations: The MessageQ network interface transport ID must have a value between 1 and 7.

Adding the SYS/BIOS DSP TransportQmss to a DSP Application

TransportQmss requires some special considerations when adding it to an application since it is not a standard, shared memory IPC transport. As described earlier, TransportQmss is an IPC MessageQ Network interface transport. It relies on IPC and MessageQ being initialized with a priority-based, shared memory transport prior to creating any TransportQmss instances. The latter occurs by default in the RTSC configuration and Ipc_start(). Creating TransportQmss instances after IPC and MessageQ have initialized allows the transport to be initialized without any hardcoded assumptions about the locations of heap buffers and QMSS descriptors in memory. It also allows the LLDs used by TransportQmss to request their resources from RM since RM will use the MessageQ shared memory transport as the resource request/response path.

Additions to the Application RTSC .cfg

  • TransportQmss requires the CPPI and QMSS LLDs in order to operate. The RM LLD is required on KeyStone II devices:

<syntaxhighlight lang='C'>
var Cppi = xdc.loadPackage('ti.drv.cppi');
var Qmss = xdc.loadPackage('ti.drv.qmss');
var Rm   = xdc.loadPackage('ti.drv.rm');

Program.sectMap[".qmss"] = new Program.SectionSpec();
Program.sectMap[".qmss"] = "MSMCSRAM";

Program.sectMap[".cppi"] = new Program.SectionSpec();
Program.sectMap[".cppi"] = "MSMCSRAM";
</syntaxhighlight>

  • The TransportQmss module must be included to pull in the library. MessageQ must be configured with reserved queues, which will be used by any MessageQ Network interface transports that are registered. The NameServer module does not work with the MessageQ Network interface transports since these transports can potentially send MessageQ messages off-device; NameServer cannot query outside of the device.

<syntaxhighlight lang='C'>
var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
/* Ipc_start() will synchronize all local DSP processors */
Ipc.procSync = Ipc.ProcSync_ALL;

var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
/* Reserve a block of MessageQ queues for use by the MessageQ network
 * interface transports since they don't use the NameServer module */
MessageQ.numReservedEntries = 4;

var TransportQmss = xdc.useModule('ti.transport.ipc.c66.qmss.TransportQmss');
</syntaxhighlight>

  • The device-specific low-level IPC modules must be included so that the interrupt logic properly associates the device MultiProc IDs to destination interrupt generation. The MultiProc module and the low-level IPC module must both be aware that an ARMv7 processor exists as MultiProc ID 0 on KeyStone II devices.

<syntaxhighlight lang='C'>
/* Use the correct version of the low-level IPC modules so that the ARMv7
 * processor is correctly factored into the notification logic */
var NotifyDriverCirc =
    xdc.useModule('ti.sdo.ipc.notifyDrivers.NotifyDriverCirc');
var Interrupt = xdc.useModule('ti.ipc.family.tci6638.Interrupt');
NotifyDriverCirc.InterruptProxy = Interrupt;
var VirtQueue = xdc.useModule('ti.ipc.family.tci6638.VirtQueue');

/* Notify brings in the ti.sdo.ipc.family.Settings module, which does
 * lots of config magic which will need to be UNDONE later, or setup
 * earlier, to get the necessary overrides to various IPC module proxies! */
var Notify = xdc.module('ti.sdo.ipc.Notify');
var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');

/* Note: Must call this to override what's done in Settings.xs! */
Notify.SetupProxy = xdc.module('ti.ipc.family.tci6638.NotifyCircSetup');
</syntaxhighlight>

  • As shown earlier in the Architecture section, the MultiProc parameters must be configured correctly for each device; a minimal sketch follows.
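
The following is a minimal sketch of such a MultiProc configuration for a KeyStone II device with eight DSP cores; the processor name list is an assumption and must match the names used by the actual device and application:

<syntaxhighlight lang='C'>
var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
/* The ARMv7 Linux processor ("HOST") must occupy MultiProc ID 0 on
 * KeyStone II devices. Pass null so the local core's identity is
 * resolved at runtime. */
MultiProc.setConfig(null, ["HOST", "CORE0", "CORE1", "CORE2", "CORE3",
                           "CORE4", "CORE5", "CORE6", "CORE7"]);
</syntaxhighlight>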

Source Code Additions

  • A TransportQmss instance can be created between two endpoints after the following:
    • IPC has started and all local DSPs have attached
    • RM instances have been created. Communication between the RM Clients and the RM Server will take place over the default MessageQ interface, priority-based, IPC transport.
    • QMSS and CPPI have been initialized and started
    • The heaps that will provide buffers for MessageQ send/receive have been created. The same heap can be used for both transmit and receive. Note: The heap used by the TransportQmss receive logic must have a gate that is able to operate within an interrupt context. In the transport examples a GateMP configured for GateMP_LocalProtect_INTERRUPT is used.

<syntaxhighlight lang='C'>
/* Create the heap that will be used to allocate messages. */
GateMP_Params_init(&gateMpParams);
gateMpParams.localProtect = GateMP_LocalProtect_INTERRUPT;
gateMpHandle = GateMP_create(&gateMpParams);

HeapBufMP_Params_init(&heapBufParams);
heapBufParams.regionId = 0;
...
heapBufParams.gate = gateMpHandle;
heapHandle = HeapBufMP_create(&heapBufParams);
</syntaxhighlight>

  • SYS/BIOS DSP TransportQmss instances can be created between end points once all previous requirements are satisfied. Only a single TransportQmss instance, regardless of receive queue type (accumulator or QPEND), can exist on a DSP. This restriction does not affect receiving from other TransportQmss instances configured with different receive queue types. The following example code comes from the Benchmark example:

<syntaxhighlight lang='C'>
    /* Create QMSS Accumulator or QPEND transport instance. They will be
     * network transports so won't interfere with the default MessageQ
     * transport, the shared memory notify transport */
   TransportQmss_Params_init(&transQmssParams);
   /* Configure common parameters */
   transQmssParams.deviceCfgParams = &qmssTransCfgParams;
   transQmssParams.txMemRegion     = HOST_DESC_MEM_REGION;
   /* Descriptor pool divided between all cores.  Account for send/receive
    * (divide by 2) */
   transQmssParams.txNumDesc       = (HOST_DESC_NUM / 2) / NUM_DSP_CORES;
   transQmssParams.txDescSize      = HOST_DESC_SIZE_BYTES;
   transQmssParams.rxMemRegion     = HOST_DESC_MEM_REGION;
   /* Descriptor pool divided between all cores.  Account for send/receive
    * (divide by 2) */
   transQmssParams.rxNumDesc       = (HOST_DESC_NUM / 2) / NUM_DSP_CORES;
   transQmssParams.rxDescSize      = HOST_DESC_SIZE_BYTES;
   transQmssParams.rxMsgQHeapId    = QMSS_MSGQ_HEAP_ID;
   transQmssParams.maxMTU          = QMSS_MTU_SIZE_BYTES;
   transQmssParams.rmServiceHandle = rmServiceHandle;
   transQmssParams.rxIntVectorId   = 8;
   transQmssParams.transNetworkId  = QMSS_TRANS_NET_ID;
   /* Receive type specific parameters */
   if (testIterations & 0x1) {
       /* Odd iterations create TransportQmss instance with QPEND receive
        * logic */
       transQmssParams.rcvQParams.qType             = TransportQmss_queueRcvType_QPEND;
       /* Choose an arbitrary system event from Table 6-22 System Event
        * Mapping in tci6638k2k.pdf.  System event can be anything that is not
        * already in use and maps to a different CIC host interrupt per DSP */
       transQmssParams.rcvQParams.qpend.systemEvent = 43;
       System_printf("Core %d : "
                     "Creating QMSS Transport instance with rx QPEND queue\n",
                     coreNum);
   } else {
       /* Even iterations create TransportQmss instance with accumulator
        * receive logic */
       transQmssParams.rcvQParams.qType             = TransportQmss_queueRcvType_ACCUMULATOR;
       transQmssParams.rcvQParams.accum.rxAccQType  = Qmss_QueueType_HIGH_PRIORITY_QUEUE;
       /* Use PDSP3 since Linux uses PDSP1.  Using the same PDSP as Linux can
        * cause a potential PDSP firmware lockup since Linux does not use the
        * critical section preventing commands being sent to a PDSP
        * simultaneously */
       transQmssParams.rcvQParams.accum.qmPdsp      = (UInt32)Qmss_PdspId_PDSP3;
       /* Must map to a valid channel for each DSP core.  Follow sprugr9f.pdf
        * Table 5-9 */
       transQmssParams.rcvQParams.accum.accCh       = DNUM;
       transQmssParams.rcvQParams.accum.accTimerCnt = 0;
       System_printf("Core %d : "
                     "Creating QMSS Transport instance with rx Accumulator "
                     "queue\n", coreNum);
   }
   Error_init(&errorBlock);
   qmssTransHandle = TransportQmss_create(&transQmssParams, &errorBlock);
   if (qmssTransHandle == NULL) {
       System_printf("Error Core %d : "
                     "TransportQmss_create failed with id %d\n", coreNum,
                     errorBlock.id);
       return;
   }

</syntaxhighlight>

  • MessageQ_create() and MessageQ_open() must utilize the reserved queues set aside in MessageQ since the network interface transports, like TransportQmss, do not use the NameServer. To utilize the reserved queues:
    • To create a local reserved queue, MessageQ_create() takes NULL instead of a string containing a queue name
    • To open a remote reserved queue, MessageQ_open() is replaced by the MessageQ_openQueueId() API

<syntaxhighlight lang='C'>

   System_printf("IPC Core %d : Creating reserved MessageQ %d\n", ipcCoreId, MESSAGEQ_RESERVED_RCV_Q);
   MessageQ_Params_init(&msgQParams);
   msgQParams.queueIndex = MESSAGEQ_RESERVED_RCV_Q;
   /* Create reserved message queue. */
   localMessageQ = MessageQ_create(NULL, &msgQParams);
   if (localMessageQ == NULL) {
       System_printf("Error IPC Core %d : MessageQ_create failed\n", ipcCoreId);
       return;
   }   
   System_printf("IPC Core %d : Opening reserved MessageQ %d on IPC core %d\n", ipcCoreId,
                 MESSAGEQ_RESERVED_RCV_Q, remoteProcId);
   remoteQueueId = MessageQ_openQueueId(MESSAGEQ_RESERVED_RCV_Q, remoteProcId);

</syntaxhighlight>

  • MessageQ_put() and MessageQ_get() can be used normally to send and receive messages between end points using TransportQmss. The only caveat is that messages that are to use TransportQmss must be sent with the transport ID, TID, that was used to register the TransportQmss instance with the MessageQ Network interface.

<syntaxhighlight lang='C'>
typedef struct {
    MessageQ_MsgHeader header; /* 32 bytes */
    int32_t            src;
    int32_t            flags;
    int32_t            numMsgs;
    int32_t            seqNum;
    uint32_t           data[TEST_MSG_DATA_LEN_WORDS];
    uint8_t            pad[16]; /* Pad to cache line size of 128 bytes */
} TstMsg;

...

TstMsg *txMsg;

txMsg = (TstMsg *) MessageQ_alloc(SRIO_MSGQ_HEAP_ID, sizeof(TstMsg));
/* Set the transport ID to route message through QMSS Transport instance */
MessageQ_setTransportId(txMsg, transId);
</syntaxhighlight>

Note: Please pay extra attention to the alignment and padding of CPPI descriptors and heap buffers used by the QMSS Transport. Cache coherence operations will be performed on descriptors and buffers that are detected to be from a shared memory, such as MSMC or DDR3. All descriptors and buffers should be cache line aligned and padded to avoid data corruption when the coherence operations are performed.
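
As an illustration, the sizes handed to TransportQmss_create() can be rounded up to the cache line size with a helper macro. This is a sketch only; the 128-byte cache line size matches the padding used in the example message structures, and the raw sizes are arbitrary:

<syntaxhighlight lang='C'>
/* Cache line size, matching the 128-byte padding used in the examples */
#define CACHE_LINE_SIZE_BYTES 128

/* Round a size up to the next multiple of the cache line size */
#define CACHE_ALIGN_SIZE(s) \
    (((s) + CACHE_LINE_SIZE_BYTES - 1) & ~(CACHE_LINE_SIZE_BYTES - 1))

/* Example: descriptor and MTU sizes for TransportQmss_create() */
transQmssParams.txDescSize = CACHE_ALIGN_SIZE(64);   /* -> 128 bytes  */
transQmssParams.rxDescSize = CACHE_ALIGN_SIZE(64);   /* -> 128 bytes  */
transQmssParams.maxMTU     = CACHE_ALIGN_SIZE(1500); /* -> 1536 bytes */
</syntaxhighlight>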

SYS/BIOS DSP TransportQmss Examples

SYS/BIOS DSP TransportQmss includes two examples: a DSP-only Benchmark test and the DSP portion of the Heterogeneous Processor test. Both examples are fully integrated with RM so they can be run while ARM Linux is up.

Benchmark Example

The Benchmark example performs latency, throughput, and data integrity tests over DSP TransportQmss using both receive queue configurations. It also runs the same measurements over a shared memory transport for comparison.

Building

The Benchmark example's DSP application is called transportIpcQmssBenchmarkK2XExampleProject.out. The source code is located in <install_base>/pdk_keystone2_[version]/packages/ti/transport/ipc/c66/qmss/test. The project can be imported into CCS from the <install_base>/pdk_keystone2_[version]/packages/exampleProjects folder. Build the project and copy the generated .out to the Linux file system. The project has been configured to be downloaded to and run on DSP cores 0 and 1 from Linux.

Running
  1. Load the DSP endpoints (Repeat for both DSPs)
    $ mpmcl load dsp0 transportIpcQmssBenchmarkK2XExampleProject.out
    $ mpmcl load dsp1 transportIpcQmssBenchmarkK2XExampleProject.out
    $ mpmcl run dsp0
    $ mpmcl run dsp1
  2. The DSP endpoint logs can be dumped from /sys/kernel/debug/remoteproc/remoteproc#/trace0, where # is the number of the DSP whose trace to dump. For example, to dump DSP core 0's trace:
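    $ cat /sys/kernel/debug/remoteproc/remoteproc0/trace0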

Heterogeneous Processor Test

The Heterogeneous processor test uses TransportQmss to send MessageQ messages between Linux user-space processes and DSP cores. The MessageQ interface is used to send messages between a Linux process and a configurable number of DSP cores. Each Linux process and DSP runs an exclusive TransportQmss instance. Data integrity of the message is checked after each transfer.

Building

For instructions on how to build the DSP-side of the Heterogeneous processor test, please see the ARMv7 TransportQmss section.

Running

For instructions on how to run the DSP-side of the Heterogeneous processor test, please see the ARMv7 TransportQmss section.

ARMv7 Linux TransportQmss

ARMv7 Linux TransportQmss is the ARMv7 Linux MessageQ Network interface transport counterpart to the SYS/BIOS DSP version of TransportQmss. The ARMv7 Linux version of TransportQmss can be registered with Network transport interface of ARMv7 Linux MessageQ to allow QMSS-based communication with other Linux user-space processes and DSP processors running TransportQmss.

Architecture

TransportQmss does not directly interact with QMSS. Instead, the transport interfaces with an MPM-Transport instance that supports QMSS.

Each Linux user-space process can create its own TransportQmss instance. MessageQ APIs can be used to send messages between user-space processes as long as each process registers a TransportQmss instance with MessageQ. Communication with DSPs on the same device is possible when the DSPs run the SYS/BIOS DSP version of TransportQmss.


ARMv7 Linux TransportQmss Source Delivery and Recompilation

The ARMv7 Linux TransportQmss source code can be downloaded and built two ways. The transport source code is delivered and built as part of Yocto/bitbake. The source code can also be downloaded and built directly from the GIT repository.

Recompiling Through Yocto/bitbake

  1. Follow the instructions in the Exploring section of the user guide to configure the Yocto build environment. The tisdk-server-rootfs-image does not need to be built; instead, look at the section for building other components.
  2. Build the TransportQmss libraries, ipc-transport-qmss recipe, and user-space tests, ipc-transport-qmss-test recipe:
    $ MACHINE=k2hk-evm TOOLCHAIN_BRAND=linaro ARAGO_BRAND=mcsdk bitbake ipc-transport-qmss
    $ MACHINE=k2hk-evm TOOLCHAIN_BRAND=linaro ARAGO_BRAND=mcsdk bitbake ipc-transport-qmss-test

    Note: The initial build may take quite some time since the kernel is built as a dependency
    Note: Building with just the ipc-transport-qmss-test recipe will also build the ipc-transport-qmss recipe since the test recipe depends on the library recipe.
  3. The built TransportQmss static library will be located in
    <base_path>/oe-layersetup/build/arago-tmp-external-linaro-toolchain/work/cortexa15hf-vfp-neon-3.8-oe-linux-gnueabi/ipc-transport-qmss/<tag-ver_recipe-ver>/packages-split/ipc-transport-qmss-staticdev/usr/lib/libTransportQmss.a

    The built TransportQmss shared library will be located in
    <base_path>/oe-layersetup/build/arago-tmp-external-linaro-toolchain/work/cortexa15hf-vfp-neon-3.8-oe-linux-gnueabi/ipc-transport-qmss/<tag-ver_recipe-ver>/packages-split/ipc-transport-qmss/usr/lib/libTransportQmss.so.1.0.0
  4. The ipc-transport-qmss-test recipe will build test static and shared library executables for all supported devices. The executables will be located in
    <base_path>/oe-layersetup/build/arago-tmp-external-linaro-toolchain/work/cortexa15hf-vfp-neon-3.8-oe-linux-gnueabi/ipc-transport-qmss-test/<tag-ver_recipe-ver>/packages-split/ipc-transport-qmss-test/usr/bin/

Recompiling Through GIT Repository

Recompiling through the ARMv7 Linux TransportQmss GIT repository requires the latest MCSDK Linux installation. The MCSDK Linux PDK component and the Linux devkit must be installed. The Linux devkit installation script can be found in <MCSDK Linux install root>/mcsdk_linux_3_XX_YY_ZZ/linux-devkit/

  1. Clone the keystone-linux/ipc-transport repository from git.ti.com
    $ git clone git://git.ti.com/keystone-linux/ipc-transport.git
  2. Navigate to the MCSDK Linux installation of pdk_3_XX_YY_ZZ/packages and source armv7setupenv.sh.
    Note: The armv7setupenv.sh script must be modified to build for the correct K2 device and to point to the Linaro toolchain and installed devkit path
    $ source armv7setupenv.sh
  3. Navigate back to the QMSS transport directory in the ipc-transport GIT repository
    $ cd <repo_root_path>/ipc-transport/linux/qmss
  4. Build the TransportQmss library and user-space test executables:
    $ make lib
    $ make tests
  5. The TransportQmss static and shared libraries will be copied directly into the Linux devkit's /usr/lib folder as long as the devkit install path was setup correctly prior to running the armv7setupenv.sh script
  6. The test executables will be generated in the <base_repo_path>/ipc-transport/bin/<k2 device>/test/ folder. Only the device specified in the armv7setupenv.sh will be built.
    Note: Setting the USEDYNAMIC_LIB environment variable to "yes" will generate the shared library test executables
    $ export USEDYNAMIC_LIB=yes

ARMv7 Linux TransportQmss Configuration Parameters

Following are the configuration parameters for ARMv7 Linux TransportQmss instance creation. Descriptions, default values, and programming considerations are provided for each configuration parameter. Each parameter is an element of the TransportQmss_Params structure. A structure of this type must be created, populated, and passed to the TransportQmss_create() function via pointer. The structure should be initialized to its default values using the TransportQmss_Params_init() function prior to population with application specific parameters.

<syntaxhighlight lang='C'>

   TransportQmss_Params  qmss_trans_params;
   ...
   TransportQmss_Params_init(&qmss_trans_params);
   snprintf(mpm_inst_name, MPM_INST_NAME_LEN, "arm-qmss-generic");
   qmss_trans_params.mpm_trans_inst_name = mpm_inst_name;
   qmss_trans_params.rm_service_h        = ...;
   ...
   qmss_handle = TransportQmss_create(&qmss_trans_params);

</syntaxhighlight>

<syntaxhighlight lang='C'>Char *mpm_trans_inst_name;</syntaxhighlight>
  Description: MPM-Transport instance name. This string must match a "slaves" name string defined within the mpm_config.json file used in the Linux filesystem.
  Initial value: NULL
  Special considerations: This string name must match the generic QMSS instance name in the filesystem's /etc/mpm/mpm_config.json. The MPM JSON file's generic QMSS string is "arm-qmss-generic".

<syntaxhighlight lang='C'>Void *rm_service_h;</syntaxhighlight>
  Description: RM instance service handle needed by MPM-Transport to request hardware resources.
  Initial value: NULL

<syntaxhighlight lang='C'>Int rx_msg_size_bytes;</syntaxhighlight>
  Description: Maximum size in bytes of messages that will be received by this transport. Used to allocate MessageQ messages for reception.
  Initial value: 0
  Special considerations: The value specified must be in sync with the "sizebuf" values specified in the "qmss-queue-map" transmit and receive free queue nodes of the mpm_config.json configuration file.

<syntaxhighlight lang='C'>Int mpm_trans_init_qmss;</syntaxhighlight>
  Description: Controls whether MPM-Transport will initialize the QMSS hardware.
  0 - MPM-Transport will not initialize QMSS. Set if another entity within the system has initialized the QMSS hardware.
  1 - MPM-Transport will initialize QMSS. Set if this is the first system entity being created that uses QMSS and QMSS has not been initialized.
  Initial value: 0

MPM Transport Configuration Effects on ARMv7 Linux TransportQmss

TransportQmss assumes MPM Transport will manage configuration of the QMSS and CPPI LLDs. As a result, descriptor and descriptor buffer management is pushed to MPM Transport in the ARMv7 Linux version of TransportQmss. The MPM Transport JSON configuration file should be modified in order to change QMSS descriptor and buffer related parameters.

The MPM Transport JSON configuration file is located in the Linux file system at /etc/mpm/mpm_config.json

Adding the ARMv7 Linux TransportQmss to a User-Space Application

A TransportQmss instance can be created after the following:

  • IPC has started
  • An RM client instance has been created. Communication between the RM Client and the RM Server will take place over a Linux socket.
  • ARMv7 Linux TransportQmss instances can be created once all previous requirements are satisfied. A TransportQmss instance can be created per user-space process. The number of TransportQmss instances that can exist simultaneously is limited to the number of QMSS queue pend queues available for use outside the kernel. The following example code comes from the TransportQmss multi-process test:

<syntaxhighlight lang='C'>

   TransportQmss_Params_init(&qm_trans_params);
   snprintf(mpm_inst_name, MPM_INST_NAME_LEN, "arm-qmss-generic");
   qm_trans_params.mpm_trans_inst_name = mpm_inst_name;
   qm_trans_params.rm_service_h        = rm_client_service_handle;
   qm_trans_params.rx_msg_size_bytes   = MAX_PACKET_SIZE;
   qm_trans_params.mpm_trans_init_qmss = 1;
   printf("Process %d : Creating TransportQmss instance\n", local_process);
   qmss_trans_h = TransportQmss_create(&qm_trans_params);
   if (!qmss_trans_h) {
       printf("ERROR Process %d : Failed to create TransportQmss handle\n", 
              local_process);
       status = -1;
       goto err_exit;
   }
   /* Register transport with MessageQ as network transport */
   net_trans_h = TransportQmss_upCast(qmss_trans_h);
   base_trans_h = INetworkTransport_upCast(net_trans_h);
   if (MessageQ_registerTransportId(TRANS_QMSS_NET_ID, base_trans_h) < 0) {
       printf("ERROR Process %d : "
              "Failed to register TransportQmss as network transport\n",
              local_process);
       status = -1;
       goto err_exit;
   }

</syntaxhighlight>

  • Use the standard MessageQ_create() and MessageQ_open() APIs to create a local process MessageQ and open a remote process MessageQ, respectively. The LAD daemon's NameServer will store created queues and respond to queue open queries.

<syntaxhighlight lang='C'>

   /* Create a MessageQ */
   snprintf(msg_q_name, MSGQ_Q_NAME_LEN, "Process_%d_MsgQ", local_process);
   MessageQ_Params_init(&msg_params);
   msg_q_h = MessageQ_create(msg_q_name, &msg_params);
   if (msg_q_h == NULL) {
       printf("ERROR Process %d : Failed to create MessageQ\n", local_process);
       status = -1;
       goto err_exit;
   }
   printf("Process %d : Local MessageQ: %s, QId: 0x%x\n", local_process,
          msg_q_name, MessageQ_getQueueId(msg_q_h));
   /* Open next process's MessageQ */
   snprintf(remote_q_name, MSGQ_Q_NAME_LEN, "Process_%d_MsgQ", next_process);
   printf ("Process %d : Attempting to open remote queue: %s\n",
           local_process, remote_q_name);
   do {
       status = MessageQ_open(remote_q_name, &remote_q_id);
       sleep(1);
   } while ((status == MessageQ_E_NOTFOUND) || (status == MessageQ_E_TIMEOUT));
   if (status < 0) {
       printf("ERROR Process %d : Error %d when opening next process MsgQ\n",
              local_process, status);
       status = -1;
       goto err_exit;
   } else {
       printf("Process %d : Opened Remote queue: %s, QId: 0x%x\n",
              local_process, remote_q_name, remote_q_id);
   }

</syntaxhighlight>

  • MessageQ_put() and MessageQ_get() can be used normally to send and receive messages between processes using TransportQmss. The only caveat is that messages that are to use TransportQmss must be sent with the transport ID, TID, that was used to register the TransportQmss instance with the MessageQ Network interface.

<syntaxhighlight lang='C'>
/* TransportQmss MessageQ Network Interface TID */
#define TRANS_QMSS_NET_ID 4

typedef struct {
    MessageQ_MsgHeader header; /* 32 bytes */
    int32_t            src;
    int32_t            flags;
    int32_t            numMsgs;
    int32_t            seqNum;
    uint32_t           data[TEST_MSG_DATA_LEN_WORDS];
    uint8_t            pad[16]; /* Pad to cache line size of 128 bytes */
} test_msg_t;

...

test_msg_t *msg = NULL;

msg = (test_msg_t *) MessageQ_alloc(0, sizeof(*msg));
if (msg == NULL) {
    printf("ERROR Process %d : MessageQ_alloc failed\n",
           local_process);
    goto err_exit;
}

...

/* Set the transport ID to route message through TransportQmss instance */
MessageQ_setTransportId(msg, TRANS_QMSS_NET_ID);
status = MessageQ_put(remote_q_id, (MessageQ_Msg)msg);
if (status < 0) {
    printf("ERROR Process %d : MessageQ_put failed\n",
           local_process);
    goto err_exit;
}
</syntaxhighlight>

ARMv7 Linux TransportQmss Tests

ARMv7 Linux TransportQmss includes two tests: the Multi-Process test and the ARM Linux portion of the Heterogeneous Processor test.

Multi-Process Test

The Multi-Process test uses TransportQmss to send MessageQ messages between Linux user-space processes. The MessageQ interface is used to send a message, round-robin, between a configurable number of Linux processes. Each process runs an exclusive TransportQmss instance. Data integrity of the message is checked after each process-to-process transfer. The default number of processes and round-robin iterations is 4 and 100, respectively. A minimal sketch of the per-process loop is shown below.
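
The sketch uses the queues and transport ID from the examples above; NUM_ROUND_TRIPS is a hypothetical constant for the iteration count, and the real test additionally verifies the message payload:

<syntaxhighlight lang='C'>
/* Per-process round-robin loop; NUM_ROUND_TRIPS is a hypothetical
 * iteration count (the test defaults to 100) */
#define NUM_ROUND_TRIPS 100

MessageQ_Msg msg;
int i;

for (i = 0; i < NUM_ROUND_TRIPS; i++) {
    /* Block until the previous process in the ring delivers the message */
    if (MessageQ_get(msg_q_h, &msg, MessageQ_FOREVER) < 0) {
        break;
    }
    /* ...data integrity of the received payload is verified here... */

    /* Forward the message to the next process, routed over the
     * registered TransportQmss instance */
    MessageQ_setTransportId(msg, TRANS_QMSS_NET_ID);
    if (MessageQ_put(remote_q_id, msg) < 0) {
        break;
    }
}
</syntaxhighlight>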

Building

The Multi-Process user-space application can be built through Yocto/bitbake or through downloading the keystone-linux/ipc-transport GIT repository. For instructions on how to do either please see UG section on ARMv7 Linux TransportQmss Source Delivery. Once built, the multiProcessTest binary can be moved to the Linux filesystem.

Running

NoteNote: Replace "K2X" and "k2x" with the proper device

  1. Load the CMEM module
    $ insmod /lib/modules/3.10.61/extra/cmemk.ko
  2. Start the RM Server
    $ rmServer.out /usr/bin/device/k2x/global-resource-list.dtb /usr/bin/device/k2x/policy_dsp_arm.dtb
  3. Execute the Multi-Process test binary
    $ ./multiProcessTest.out

<syntaxhighlight lang='bash'>
root@k2e-evm:~# ./multiProcessTest.out
********* TransportQmss Linux Multi-Process Test *********
TransportQmss Version : 0x02000000 Version String: Linux IPC Transports Revision: 2.0.0.00:Nov 4 2015:17:46:03
Process 1 : Initialized RM_Client1
Process 3 : Initialized RM_Client3
Process 0 : Initialized RM_Client0
Process 2 : Initialized RM_Client2
Process 1 : Creating TransportQmss instance
Process 0 : Creating TransportQmss instance
Process 3 : Creating TransportQmss instance
Process 2 : Creating TransportQmss instance
Process 1 : Local MessageQ: Process_1_MsgQ, QId: 0x80
Process 1 : Attempting to open remote queue: Process_2_MsgQ
Process 0 : Local MessageQ: Process_0_MsgQ, QId: 0x81
Process 0 : Attempting to open remote queue: Process_1_MsgQ
Process 3 : Local MessageQ: Process_3_MsgQ, QId: 0x82
Process 3 : Attempting to open remote queue: Process_0_MsgQ
Process 2 : Local MessageQ: Process_2_MsgQ, QId: 0x83
Process 2 : Attempting to open remote queue: Process_3_MsgQ
Process 0 : Opened Remote queue: Process_1_MsgQ, QId: 0x80
Process 0 : Allocating round trip test MessageQ msg
Round Trip - 1
Process 0 : Sending msg to Process 1
Process 3 : Opened Remote queue: Process_0_MsgQ, QId: 0x81
Process 2 : Opened Remote queue: Process_3_MsgQ, QId: 0x82
Process 1 : Opened Remote queue: Process_2_MsgQ, QId: 0x83
Process 1 : Received msg with good data from Process 0
Process 1 : Sending msg to Process 2
Process 2 : Received msg with good data from Process 1
Process 2 : Sending msg to Process 3
Process 3 : Received msg with good data from Process 2
Process 3 : Sending msg to Process 0
Process 0 : Received msg with good data from Process 3
[Round Trips 2 through 25 repeat the same send/receive pattern and are omitted here]
Round Trip - 26
Process 0 : Flushing transport's dst cache of dst MessageQ queue ID 0x80
Process 0 : Sending msg to Process 1
Process 1 : Received msg with good data from Process 0
Process 1 : Flushing transport's dst cache of dst MessageQ queue ID 0x83
Process 1 : Sending msg to Process 2
Process 2 : Received msg with good data from Process 1
Process 2 : Flushing transport's dst cache of dst MessageQ queue ID 0x82
Process 2 : Sending msg to Process 3
Process 3 : Received msg with good data from Process 2
Process 3 : Flushing transport's dst cache of dst MessageQ queue ID 0x81
Process 3 : Sending msg to Process 0
Process 0 : Received msg with good data from Process 3
[Round Trips 27 through 50 repeat the same send/receive pattern and are omitted here]
Round Trip - 51
Process 0 : Flushing transport's entire dst cache
Process 0 : Sending msg to Process 1
Process 1 : Received msg with good data from Process 0
Process 1 : Flushing transport's entire dst cache
Process 1 : Sending msg to Process 2
Process 2 : Received msg with good data from Process 1
Process 2 : Flushing transport's entire dst cache
Process 2 : Sending msg to Process 3
Process 3 : Received msg with good data from Process 2
Process 3 : Flushing transport's entire dst cache
Process 3 : Sending msg to Process 0
Process 0 : Received msg with good data from Process 3
Round Trip - 52
Process 0 : Sending msg to Process 1
Process 1 : Received msg
with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 53 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 54 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 55 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 56 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 57 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 58 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 59 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 60 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 61 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with 
good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 62 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 63 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 64 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 65 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 66 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 67 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 68 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 69 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 70 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good 
data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 71 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 72 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 73 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 74 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 75 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 76 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 77 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 78 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 79 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data 
from Process 3 Round Trip - 80 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 81 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 82 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 83 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 84 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 85 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 86 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 87 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 88 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 89 Process 0 : Sending msg to Process 1 Process 1 : Received msg with 
good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 90 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 91 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 92 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 93 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 94 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 95 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 96 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 97 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 98 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good 
data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 99 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Sending msg to Process 0 Process 0 : Received msg with good data from Process 3 Round Trip - 100 Process 0 : Sending msg to Process 1 Process 1 : Received msg with good data from Process 0 Process 1 : Sending msg to Process 2 Process 2 : Received msg with good data from Process 1 Process 2 : Sending msg to Process 3 Process 3 : Received msg with good data from Process 2 Process 3 : Freeing round trip test MessageQ msg Test PASSED Cleaning up Test Complete! </syntaxhighlight>

Heterogeneous Processor Test

The Heterogeneous Processor test uses TransportQmss to send MessageQ messages between Linux user-space processes and DSP cores. The MessageQ interface is used to send messages between a Linux process and a configurable number of DSP cores (the default is two DSPs). Each Linux process and each DSP runs an exclusive TransportQmss instance. Data integrity of the message is checked after each transfer.

Building

The Heterogeneous Processor test's ARM Linux endpoint, a user-space application, can be built through Yocto/bitbake or by downloading the keystone-linux/ipc-transport Git repository. For instructions on how to do either, please see the UG section on ARMv7 Linux TransportQmss Source Delivery. Once built, the armEpTest binary can be moved to the Linux filesystem.

The Heterogeneous Processor test's DSP endpoint application is called transportIpcQmssDspEpK2XTestProject.out. The source code is located in <install_base>/pdk_keystone2_<ver>/packages/ti/transport/ipc/c66/qmss/test. The project can be imported into CCS from the <install_base>/pdk_keystone2_<ver>/packages/exampleProjects folder. Build the project and copy the generated .out to the Linux filesystem.


Running

Note: Replace "K2X" and "k2x" with the proper device. An example command sequence for a K2E EVM follows the steps below.

  1. Start the RM Server (only applicable if the rmServer is not started during Linux boot)
    $ rmServer.out /usr/bin/device/k2x/global-resource-list.dtb /usr/bin/device/k2x/policy_dsp_arm.dtb
  2. Load and run the DSP endpoints (repeat for the number of DSPs; the default is 2)
    $ mpmcl load dsp# transportIpcQmssDspEpK2XTestProject.out
    $ mpmcl run dsp#
  3. Execute the Linux endpoint binary
    $ ./armEpTest.out
  4. The DSP endpoint logs can be dumped from /sys/kernel/debug/remoteproc/remoteproc#/trace0, where # is the number of the DSP whose trace you want to dump
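
For example, on a K2E EVM with the default two DSP endpoints, the sequence above might look like the following sketch (the device name, project .out name, and DSP/remoteproc indices are illustrative and depend on your setup); the expected ARM endpoint output is shown after it:

<syntaxhighlight lang='bash'>
# Start the RM Server (skip if it is already started at boot)
rmServer.out /usr/bin/device/k2e/global-resource-list.dtb /usr/bin/device/k2e/policy_dsp_arm.dtb

# Load and run the two default DSP endpoints
mpmcl load dsp0 transportIpcQmssDspEpK2ETestProject.out
mpmcl run dsp0
mpmcl load dsp1 transportIpcQmssDspEpK2ETestProject.out
mpmcl run dsp1

# Run the ARM Linux endpoint
./armEpTest.out

# Dump the trace of DSP 0
cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
</syntaxhighlight>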

<syntaxhighlight lang='bash'>

root@k2e-evm:~# ./armEpTest.out

*** ARMv7 Linux TransportQmss Heterogeneous Test (ARM EP) ***

TransportQmss Version : 0x02000000
Version String: Linux IPC Transports Revision: 2.0.0.00:Nov 4 2015:17:59:08
Process 1 : Initialized RM_Client0
Process 1 : Opening RM client socket /var/run/rm/rm_client0
Process 1 : Creating TransportQmss instance
Process 0 : Starting RM Message Hub
Process 0 : Created RM hub queue: RM_Message_Hub, Qid: 0x80
Process 0 : Opening RM_Client_DSP_1
Process 1 : Local MessageQ: TEST_MsgQ_Proc_0, QId: 0x81
Process 1 : Attempting to open DSP 1 queue: TEST_MsgQ_Proc_1
Process 0 : Opened Remote queue: RM_Client_DSP_1, QId: 0x10080
Process 0 : Sending handshake to DSP 1
Process 0 : Received handshake response from DSP 1
Process 0 : Opening RM hub socket /var/run/rm/rm_msg_hub
Process 0 : Opening RM Server socket /var/run/rm/rm_server
Process 0 : Sending ready msg to DSP 1
Process 0 : Wait for RM messages from DSP RM clients
Process 1 : Opened DSP 1 queue: TEST_MsgQ_Proc_1, QId: 0x10081
Process 1 : Allocating bidirectional test MessageQ msg
Process 1 : ### Round Trip - 1 ###
Process 1 : Sending msg to DSP 1 using Qid 0x10081
Process 1 : Received msg with good data from DSP 1

... (Round Trips 2-99 repeat the same send/receive pattern and are omitted here) ...

Process 1 : ### Round Trip - 100 ###
Process 1 : Sending msg to DSP 1 using Qid 0x10081
Process 1 : Received msg with good data from DSP 1
Process 1 : Freeing bidirectional test MessageQ msg
Test PASSED
Cleaning test process
Process 0 : Cleaning up RM Message Hub
Test Complete!
</syntaxhighlight>

MPM Mailbox

Mailbox is used for exchanging control messages between the host and individual DSP cores. As shown in the picture below, a mailbox is unidirectional: either host -> DSP or DSP -> host. A mailbox is identified by a unique integer value, which is returned after the mailbox is created on the host and opened on the DSP core. There is a maximum of two mailboxes per DSP core (one for host -> DSP messages and one for DSP -> host messages). Each mailbox has a configurable amount of memory space from which it allocates the slots that store pending messages.

Mailbox Block diagram

An empty mailbox slot must be allocated prior to sending a message. Receiving a message frees its slot and marks it as empty. A mailbox can be queried for the number of unread messages it holds.

Here is the high-level API for this module (a usage sketch follows the list):

  • mpm_mailbox_create is called by the host and the DSP to create a mailbox at the specified location
  • mpm_mailbox_open is called by both the host and the DSP core to open the mailbox
  • mpm_mailbox_write is a non-blocking call that writes a message to the mailbox. It returns an error indicating the mailbox is full if all slots are occupied
  • mpm_mailbox_read is a non-blocking call. A message is picked up and returned to the application when available. It returns an error indicating the mailbox is empty if there are no messages to process
  • mpm_mailbox_query obtains the number of unread messages in the mailbox
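
The exact prototypes live in the MPM mailbox header and are not reproduced in this guide, so the sketch below uses simplified, assumed signatures purely to illustrate the call flow; treat every parameter, name, and size shown here as a placeholder, not the actual API.

<syntaxhighlight lang='C'>
/* Illustrative flow only. The prototypes below are ASSUMED simplifications;
 * consult the MPM mailbox header for the real signatures. */
#include <stdint.h>

extern int32_t mpm_mailbox_create(void **mbox, const char *name,
                                  uint32_t num_slots, uint32_t slot_size); /* assumed */
extern int32_t mpm_mailbox_open(void **mbox, const char *name);            /* assumed */
extern int32_t mpm_mailbox_write(void *mbox, void *buf, uint32_t size);    /* assumed */
extern int32_t mpm_mailbox_read(void *mbox, void *buf, uint32_t *size);    /* assumed */
extern int32_t mpm_mailbox_query(void *mbox, uint32_t *num_unread);        /* assumed */

/* Host side: create the host->DSP mailbox, then post one control message. */
int post_control_msg(void *msg, uint32_t size)
{
    void    *mbox;
    uint32_t unread;

    if (mpm_mailbox_create(&mbox, "host2dsp0", 8, 64) != 0)  /* name/sizes hypothetical */
        return -1;
    if (mpm_mailbox_open(&mbox, "host2dsp0") != 0)
        return -1;

    /* Non-blocking: returns a "full" error if every slot is occupied. */
    if (mpm_mailbox_write(mbox, msg, size) != 0)
        return -1;                                           /* retry later */

    mpm_mailbox_query(mbox, &unread);                        /* pending messages */
    return (int)unread;
}
</syntaxhighlight>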


MPM Sync

The Sync module implements support for multicore barriers and locks.

Barrier

Step-1) The number of participants in a barrier needs to be defined up front; using the API below, the memory size required for that many participants can be obtained.

int32_t mpm_sync_barr_get_sizes(int32_t num_users)

Step-2) One of the participants has to invoke the barrier init. It is harmless for several or all of the participants to invoke it; if the barrier is already initialized, subsequent calls (by other cores) simply return.

mpm_sync_barr_init(void *barr, int32_t num_users)

Step-3) Participants call the mpm_sync_barr_wait function to busy-wait until all of the participants arrive at that location. A combined per-core sketch follows the prototype below.

mpm_sync_barr_wait(void *barr)
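
Putting the three steps together, a per-core flow might look like the following sketch. The shared-memory allocation of the barrier object is outside this API; NUM_CORES and get_shared_mem() are hypothetical, and the init/wait return types are assumed:

<syntaxhighlight lang='C'>
#include <stdint.h>

#define NUM_CORES 4                       /* hypothetical participant count */

/* Prototypes as documented above; init/wait return types assumed void. */
extern int32_t mpm_sync_barr_get_sizes(int32_t num_users);
extern void    mpm_sync_barr_init(void *barr, int32_t num_users);
extern void    mpm_sync_barr_wait(void *barr);

extern void *get_shared_mem(int32_t size); /* hypothetical shared-memory allocator */

void sync_point(void)
{
    /* Step 1: query the memory size needed for NUM_CORES participants. */
    int32_t size = mpm_sync_barr_get_sizes(NUM_CORES);

    /* Step 2: the barrier must live in memory visible to all participants.
     * Calling init from several cores is harmless; once the barrier is
     * initialized, subsequent init calls simply return. */
    void *barr = get_shared_mem(size);
    mpm_sync_barr_init(barr, NUM_CORES);

    /* Step 3: busy-wait until all NUM_CORES participants arrive here. */
    mpm_sync_barr_wait(barr);
}
</syntaxhighlight>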


Lock

The Lock module implements Lamport's bakery algorithm using shared memory. It is a multi-process-safe lock.

The maximum number of participants in a lock needs to be determined up front for memory-allocation reasons. The required memory size can be queried using the mpm_sync_lock_get_sizes() API.

Here is the high-level API for this module (a usage sketch follows the list):

  • mpm_sync_lock_init() needs to be called once to initialize the lock.
  • mpm_sync_lock_check() is a non-blocking call that checks whether the lock is in use.
  • mpm_sync_lock_acquire() is a blocking call that acquires the lock.
  • mpm_sync_lock_release() releases the lock.
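
A usage sketch follows. The prototypes here are ASSUMED by analogy with the barrier API above (a bakery lock typically needs a per-participant index, so a user_id parameter is assumed); NUM_USERS and get_shared_mem() are hypothetical:

<syntaxhighlight lang='C'>
#include <stdint.h>

#define NUM_USERS 4                        /* hypothetical participant count */

/* ASSUMED prototypes; check the MPM sync header for the real signatures. */
extern int32_t mpm_sync_lock_get_sizes(int32_t num_users);
extern void    mpm_sync_lock_init(void *lock, int32_t num_users);
extern int32_t mpm_sync_lock_check(void *lock);
extern void    mpm_sync_lock_acquire(void *lock, int32_t user_id);
extern void    mpm_sync_lock_release(void *lock, int32_t user_id);

extern void *get_shared_mem(int32_t size); /* hypothetical shared-memory allocator */

void critical_section(int32_t my_id)
{
    /* Size and place the lock in memory visible to every participant. */
    void *lock = get_shared_mem(mpm_sync_lock_get_sizes(NUM_USERS));
    mpm_sync_lock_init(lock, NUM_USERS);   /* once, by one participant */

    mpm_sync_lock_acquire(lock, my_id);    /* blocks until ownership */
    /* ... touch shared state here ... */
    mpm_sync_lock_release(lock, my_id);
}
</syntaxhighlight>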

MPM Transport

The MPM transport is designed to provide access to memory associated with remote cores/nodes. The supported transport modes are shared memory, Hyperlink, QMSS, and SRIO; the latter three are covered in the sections below.

MPM Transport

Following are some implementation details:

  • The APIs to access the transport and its static library are provided in the linux-devkit.
  • The APIs can be reviewed in mpm_transport.h.
  • The transport parameters can be configured from the JSON config file mpm_config.json. Currently the MPM downloader shares the same config file; this is likely to change in the future.
  • The static library to link against is libmpmtransport.a, which is in the linux-devkit provided with the release. The link option is -lmpmtransport.
  • An example application of the transport component is provided in the test directory.

Release History

Refer to the following table for release history:

MCSDK Version | MPM-Transport Version | Major Updates / Features
3.0.1 | 1.0.0 | Initial MPM-Transport release. Shared memory functionality.
3.0.2 | 1.0.0 | No major changes
3.0.3 | 1.0.1 | Hyperlink support for transfers. JSON to add new slaves and an additional array segment for Hyperlink peripheral parameters. Leverages the Hyperlink user mode driver.
3.0.4 | 1.0.4 | Added support for EDMA3 and 36-bit addresses. Intersects with MCSDK-HPC developments.
3.1.0, 3.1.1 | 1.0.5 | Multiple Hyperlink optimizations, such as linked transfers, parsing the DTB for params, K2E support
3.1.2 | 1.0.6 | Added Hyperlink interrupt support
3.1.3 | 1.0.7 | Added QMSS and SRIO support

Migration from MCSDK 3.1.1 to MCSDK 3.1.3

The default examples using Hyperlink have changed to serdes_init = 0, which means that you need to enable the SerDes externally before running the examples. You can do this using mpmcl:

<syntaxhighlight lang='bash'>
user@k2hk-evm> mpmcl transport <slave name> open
</syntaxhighlight>

Additionally, the Hyperlink and SRIO LLD symbols are now declared as weak to accommodate devices without these peripherals. If you link with the static MPM Transport library (libmpmtransport.a), you will have to link with --whole-archive so that the user-space LLDs override the weak symbols. No changes are needed if you use the shared library (libmpmtransport.so). An example link line is sketched below.
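
For illustration, a static link line for a K2HK application might look like the following sketch (the toolchain name, object file, and device suffix are placeholders; see the peripheral-specific sections below for the libraries each transport needs):

<syntaxhighlight lang='bash'>
# Illustrative only; substitute your toolchain, objects, device suffix, and devkit path.
arm-linux-gnueabihf-gcc app.o \
    -L${DEVKIT_USR_LIB} \
    -Wl,--whole-archive -lmpmtransport -lhyplnk_k2hk -Wl,--no-whole-archive \
    -o app
</syntaxhighlight>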



Building the Library

1) Clone the Git repository for the source code:

git clone git://git.ti.com/keystone-linux/mpm-transport.git
cd mpm-transport

2) Set up your environment:

export BUILD_LOCAL=true
export PATH="$PATH:<your_linaro_toolchain>/bin"

3) Source the general MCSDK environment setup file (also used for building other LLDs):

cd <your_mcsdk_install>/pdk_keystone2_3_xx_xx_xx/packages
source armv7setupenv.sh
cd -

Note: armv7setupenv.sh is located in the pdk_keystone2_3_xx_xx_xx folder that comes with the MCSDK installation. Your setup should have the linux-devkit installed; use the resources there for development. Modify armv7setupenv.sh according to your system as necessary.

4) Run make:

make clean
make

Modifying and Rebuilding the Library for Profiling

MPM-Transport provides internal time profiling for its API. Before rebuilding the library with make (following the steps above), edit the mpm_transport_time_profile.h file to enable the areas you want to profile:

vi <mpm_transport>/src/utils/time_profile/mpm_transport_time_profile.h

You should see:
<syntaxhighlight lang='C'>

#define TIME_PROFILING 0
#if TIME_PROFILING
#define TIME_PROFILE_HYPLNK_PUT_INITIATE			0
#define TIME_PROFILE_HYPLNK_GET_INITIATE			0
#define TIME_PROFILE_HYPLNK_PUT_INITIATE_LINKED		0
#define TIME_PROFILE_HYPLNK_GET_INITIATE_LINKED		0
...
...

</syntaxhighlight>

  • To enable time profiling, change "#define TIME_PROFILING 0" to "#define TIME_PROFILING 1"
  • Change the define to 1 for each function you want to profile, or back to 0 for regular usage
  • The defines are named after the functions they target. For example, "TIME_PROFILE_HYPLNK_PUT_INITIATE" refers to the mpm_transport_put_initiate() function when it uses the Hyperlink transport
  • Please account for nested timestamps. For example, "TIME_PROFILE_HYPLNK_GET_WINDOW" profiles the portion that maps Hyperlink to a remote address, which runs inside any of the read/write operations; enabling it will also skew the results for those read/write APIs (such as "TIME_PROFILE_HYPLNK_PUT_INITIATE"). An edited example follows this list.
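
For example, to profile only Hyperlink put initiation, the top of the header would be edited as follows:

<syntaxhighlight lang='C'>
#define TIME_PROFILING 1                            /* master switch on */
#if TIME_PROFILING
#define TIME_PROFILE_HYPLNK_PUT_INITIATE        1   /* profile mpm_transport_put_initiate() over Hyperlink */
#define TIME_PROFILE_HYPLNK_GET_INITIATE        0
#define TIME_PROFILE_HYPLNK_PUT_INITIATE_LINKED 0
#define TIME_PROFILE_HYPLNK_GET_INITIATE_LINKED 0
...
</syntaxhighlight>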

Building Examples with SerDes Bypass

In transport modes that use SerDes, examples may need to explicitly skip the SerDes initialization if it is already handled by another process. The motivation for doing so is to minimize the risk of breaking the SerDes link, or to use the given setup as-is. To do so, set the serdes_init member of the open structure to 0 before calling mpm_transport_open():

<syntaxhighlight lang='C'>

   ...
   mpm_transport_open_t ocfg = {
       .open_mode	= (O_SYNC|O_RDWR),
       .msec_timeout = 5000,
       .serdes_init = 0,
   };
   ...
   h = mpm_transport_open("arm-remote-hyplnk-0", &ocfg);
   ...

</syntaxhighlight>

In this case, "arm-remote-hyplnk-0" is the slave name to be opened and the slave peripheral specification can be found in the JSON file. Since serdes_init is 0, a separate process should manage opening and closing the transport associated with the slave. MPM has the capability to do so by: "mpmcl transport arm-remote-hyplnk-0 open" and "mpmcl transport arm-remote-hyplnk-0 close", respectively.
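
The corresponding mpmcl commands, run from a separate shell or init script, look like this:

<syntaxhighlight lang='bash'>
# Bring the link up before the application runs with serdes_init = 0 ...
mpmcl transport arm-remote-hyplnk-0 open

# ... and tear it down once the application is finished.
mpmcl transport arm-remote-hyplnk-0 close
</syntaxhighlight>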

Setting Transport Parameters from JSON file

The JSON configuration file for MPM-Transport is mpm_config.json. There are seven major arrays in this file that you should be aware of (an example slave profile follows the list):

1) segments - The target destination address has to be within the range specified in a memory segment.
name - the name of the segment
localaddr - the address from the local point of view. This is NOT needed for transport modes other than sharedmem, because peripherals see globaladdr instead.
globaladdr - the address from the SoC point of view
length - the length of the segment
devicename - the /dev/* node to open. This only applies to sharedmem; all other transport modes use their respective transport device.
2) slaves - The top-level profile that includes all the information about the transport you want to use
name - the name of the slave
transport - points to a profile in the "transports" array. The exception is "sharedmem", which does not need a profile
dma - points to a profile in the "dma-params" array
memorymap - points to profiles in the "segments" array. This entry is an array of segment names
crashcallback - points to the script for the crash callback. This is optional for mpm-transport, but is used by mpmcl
3) transports - Individual transport configurations for the transport mode being used
name - the name of the transport profile
transporttype - the transport mode, e.g. hyperlink, pcie, srio
Note: peripheral and transport parameters vary based on the transport used. See the peripheral-related sections for mpm-transport
4) dma-params - Configuration for using EDMA3
name - the name of the DMA profile
edma3inst - the EDMA3 instance number to use (0-4, a total of 5)
shadowregion - the shadow region of the EDMA3 instance to use (0-7, a total of 8)
5) mpax-params - Configuration for using the keystonemmap library
name - the name of the profile
base - the base address to use for creating logical addresses (32-bit addresses aliased to specific 36-bit addresses)
size - the length of the space allowed, from the base, for the logical mapping
index - the entry number in the MPAX table
6) qmss-mem-regions - parameters to configure a QMSS memory region
name - the name of the memory region
regiondescnum - the number of descriptors this region will have
regiondescsize - the size per descriptor
regiondescflag - used for manageDescFlag
regionnum - the region number. Use -1 for the next available
regionstartidx - the index number to offset from
7) qmss-queue-map - parameters to set up a queue
name - the name of the queue
queue - the queue number. Use -1 for the next available
qtype - the queue type to request
numdesc - the number of descriptors to init for this queue
sizebuf - the size of the buffer for each descriptor. Buffers are allocated with CMEM.
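
For illustration, a slave entry that ties these arrays together might look like the following (the dma, memorymap, and crashcallback values here are made up; the transport profile "hyplnk0-loopback" is the one shown in the Hyperlink section below, and real profiles ship in mpm_config.json):

<syntaxhighlight lang='C'> {

 "name": "arm-remote-hyplnk-0",
 "transport": "hyplnk0-loopback",
 "dma": "edma3-chan0",
 "memorymap": [ "msmc", "ddr3-seg0" ],
 "crashcallback": "/etc/mpm/crash_callback.sh"

}, </syntaxhighlight>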


Transport over Hyperlink

Hyperlink can be configured point-to-point or in loopback. There are two Hyperlink ports on KeyStone II devices. Please refer to the Hyperlink user guide on your SoC's product page for a more detailed explanation of the Hyperlink settings and how they are used.

Pre-requisites

  • MCSDK 3.0.3 or higher. The Hyperlink user mode driver and the associated hyplnk_device.c file are available in the linux-devkit.
  • libhyplnk*.so* - you will need the shared libraries at runtime. Please ensure that your filesystem has the Hyperlink libs under /usr/lib.

JSON Transport Profile

Fields needed for a Hyperlink transport profile in the JSON file:

  • name - the name of the profile
  • transporttype - the transport mode. Valid value: "hyperlink"
  • direction - valid values: "loopback" for loopback mode, "remote" for sending outside of the SoC
  • hyplnkinterface - the Hyperlink instance to use. Valid values: "hyplnk0" to use port 0, "hyplnk1" to use port 1
  • txprivid - the transmit privid overlay field
  • rxprivid - the receiving side's privid select field
  • rxsegsel - the receiving side's segment select field
  • rxlenval - the length of the Hyperlink segment
  • lanerate - valid values: full, half, quarter
  • numlanes - valid values: 4, 1, 0

Example:

<syntaxhighlight lang='C'> {

 "name": "hyplnk0-loopback",
 "transporttype": "hyperlink",
 "direction": "loopback",
 "hyplnkinterface": "hyplnk0",
 "txprivid": 0,
 "rxprivid": 0,
 "rxsegsel": 6,
 "rxlenval": 21,
 "numlanes": "4"

}, </syntaxhighlight>

Compiling and Linking with Hyperlink Libraries

  • If you compile with the dynamic MPM Transport library, your application will link in the needed user-space LLDs at runtime. Please make sure that libhyplnk_device.so.1 exists in your linker path and points to the appropriate device-specific library in your filesystem.
  • If you compile with static libraries, you will need to link in the Hyperlink library for your application. Please use --whole-archive, -lhyplnk_<device>, and -L${DEVKIT_USR_LIB} (DEVKIT_USR_LIB is your devkit's user library directory containing the needed libs).


Transport over QMSS

The QMSS transport is set up to push and pop descriptors from queues. This is a cornerstone of many of the packet-based transports used in the system.

Pre-requisites

  • MCSDK 3.1.3 or higher. The QMSS and CPPI user mode drivers and associated device files are available in the linux-devkit.
  • libqmss*.so* and libcppi*.so* - you will need these shared libraries at runtime. Please ensure that your filesystem has them under /usr/lib.
  • TI CMEM - the cmem module is needed to allocate physically contiguous memory.

JSON Transport Profile[edit]

Fields needed (an example profile sketch follows this list):

  • name - name of the profile
  • transporttype - must be "qmss" to signify the QMSS transport
  • qmssmaxdesc - maximum number of descriptors to initialize QMSS with; this is done once per process
  • initregion - name of the corresponding qmss-mem-region used to set up a memory region
  • txfreeq - name of the corresponding qmss-queue-map used to set up a TX free queue
  • rxfreeq - name of the corresponding qmss-queue-map used to set up an RX free queue
  • writefifodepth - FIFO depth to set up CPPI with
  • cpdmatimeout - CPDMA timeout value to set up CPPI with
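
For illustration, a sketch of a QMSS profile tying these fields together. The name "arm-qmss-generic" matches the open() call shown later in this section, while the region/queue names and numeric values are made up; the C-style comments are annotation only (real JSON does not allow comments):

<syntaxhighlight lang='C'>
{
  "name": "arm-qmss-generic",
  "transporttype": "qmss",
  "qmssmaxdesc": 1024,          /* illustrative maximum descriptor count */
  "initregion": "qmss-region0", /* name of a qmss-mem-region entry (illustrative) */
  "txfreeq": "tx-free-q",       /* name of a qmss-queue-map entry (illustrative) */
  "rxfreeq": "rx-free-q",       /* name of a qmss-queue-map entry (illustrative) */
  "writefifodepth": 0,          /* illustrative */
  "cpdmatimeout": 0             /* illustrative */
},
</syntaxhighlight>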

Initializing QMSS[edit]

  • Set up the Resource Manager - applications will need to initialize the RM driver and pass in the RM client handle for the QMSS and CPPI LLDs to use:

<syntaxhighlight lang='C'>
mpm_transport_open_t ocfg;
ocfg.rm_info.rm_client_handle = rmClientServiceHandle;
</syntaxhighlight>

QMSS Send and Receive[edit]

For receiving: int mpm_transport_packet_recv(mpm_transport_h h, char **buf, int *len, mpm_transport_recv_t *cfg);

  • buf - a pointer to a buffer of space to copy content to
  • len - length to copy
  • cfg - receive options; used to specify memcpy or direct linking of the buffer
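
A minimal receive sketch, assuming a handle h already obtained from mpm_transport_open(), zeroed options as the default, and a zero return code on success (all assumptions; check mpm_transport.h in your devkit for the exact semantics):

<syntaxhighlight lang='C'>
#include <string.h>
#include <ti/runtime/mpm-transport/mpm_transport.h> /* assumed header location */

void recv_one(mpm_transport_h h)
{
    char *buf = NULL;
    int len = 0;
    mpm_transport_recv_t rcfg;

    memset(&rcfg, 0, sizeof(rcfg)); /* zeroed options as defaults: an assumption */
    if (mpm_transport_packet_recv(h, &buf, &len, &rcfg) == 0) {
        /* process len bytes at buf here */
    }
}
</syntaxhighlight>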

When you open a QMSS handle, you set up your MPM Transport handle to listen on a specific RX flow. You can get this flow ID from the mpm_transport_open_t structure used in the open call.

<syntaxhighlight lang='C'>
mpm_transport_open_t ocfg;
mpm_transport_open("arm-qmss-generic", &ocfg);
flow_id = ocfg.transport_info.qmss.flowId;
</syntaxhighlight>

For sending: int mpm_transport_packet_send(mpm_transport_h h, char **buf, int *len, mpm_transport_packet_addr_t *addr_info, mpm_transport_send_t *cfg);

  • buf - a pointer to a buffer of space to copy content from
  • len - length to copy
  • addr_info - the address to send to
  • cfg - send options; used to specify memcpy or direct linking of the buffer

To send a packet over QMSS, addr_info needs to be specified with packet_addr_type_QMSS and the flow ID to send to (a complete send call is sketched below).

<syntaxhighlight lang='C'>
mpm_transport_packet_addr_t send_addr;
send_addr.addr_type = packet_addr_type_QMSS;
send_addr.addr.qmss.flowId = 16;
</syntaxhighlight>
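
Putting the pieces together, a minimal send sketch; the handle h, the zeroed default options, and the payload arguments are assumptions rather than documented defaults:

<syntaxhighlight lang='C'>
#include <string.h>
#include <ti/runtime/mpm-transport/mpm_transport.h> /* assumed header location */

/* send one packet to the QMSS flow addressed by send_addr (set up as above) */
void send_one(mpm_transport_h h, mpm_transport_packet_addr_t *send_addr,
              char *payload, int payload_len)
{
    mpm_transport_send_t scfg;

    memset(&scfg, 0, sizeof(scfg)); /* zeroed options as defaults: an assumption */
    mpm_transport_packet_send(h, &payload, &payload_len, send_addr, &scfg);
}
</syntaxhighlight>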

Compiling and Linking with QMSS Libraries[edit]

  • If you are compiling with the dynamic MPM Transport library, your application will link in the needed user space LLDs at runtime. Please make sure that libqmss_device.so.1 and libcppi_device.so.1 exist in your linker path and point to the appropriate device-specific libraries in your filesystem.
  • If you are compiling with static libraries, you will need to link in the needed peripheral LLDs for your application. Please use --whole-archive, -lqmss_<device>, -lcppi_<device>, and -L${DEVKIT_USR_LIB} (DEVKIT_USR_LIB is your devkit's user library directory containing the needed libs).


Transport over SRIO[edit]

SRIO is a high-speed transport that can transfer data between multiple SOCs or loop back to the same device. It builds on top of the QMSS packet-based transport and abstracts the SRIO user space driver for ease of use.

Pre-requisites[edit]

  • MCSDK 3.1.3 or higher. The QMSS, CPPI, and SRIO user mode drivers and their associated device files will be available in the linux-devkit.
  • libqmss*.so*, libcppi*.so*, and libsrio*.so* - you will need these shared libraries at runtime. Please ensure that your filesystem has them under /usr/lib.
  • TI CMEM - the cmem module will be needed to allocate contiguous memory.

JSON Transport Profile[edit]

Fields needed (an example profile sketch follows this list):

  • name - name of the profile
  • transporttype - must be "srio" to signify the SRIO transport
  • qmssmaxdesc - maximum number of descriptors to initialize QMSS with; this is done once per process
  • initregion - name of the corresponding qmss-mem-region used to set up a memory region
  • txfreeq - name of the corresponding qmss-queue-map used to set up a TX free queue
  • rxfreeq - name of the corresponding qmss-queue-map used to set up an RX free queue
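
For illustration, a sketch of an SRIO profile using these fields; the profile, region, and queue names and all numeric values are made up, and the C-style comments are annotation only (real JSON does not allow comments):

<syntaxhighlight lang='C'>
{
  "name": "arm-srio-generic",   /* illustrative profile name */
  "transporttype": "srio",
  "qmssmaxdesc": 1024,          /* illustrative maximum descriptor count */
  "initregion": "qmss-region0", /* name of a qmss-mem-region entry (illustrative) */
  "txfreeq": "tx-free-q",       /* name of a qmss-queue-map entry (illustrative) */
  "rxfreeq": "rx-free-q"        /* name of a qmss-queue-map entry (illustrative) */
},
</syntaxhighlight>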

Initializing SRIO[edit]

  • Set up the Resource Manager - applications will need to initialize the RM driver and pass in the RM client handle for the QMSS and CPPI LLDs to use:

<syntaxhighlight lang='C'>
mpm_transport_open_t ocfg;
ocfg.rm_info.rm_client_handle = rmClientServiceHandle;
</syntaxhighlight>

  • SRIO-specific initialization - applications will need to provide MPM Transport the handle's SRIO address information. Example for type 11:

<syntaxhighlight lang='C'>
ocfg.transport_info.srio.type = packet_addr_type_SRIO_TYPE11;
ocfg.transport_info.srio.type11.tt = 1;
ocfg.transport_info.srio.type11.id = coreDeviceID[coreNum];
ocfg.transport_info.srio.type11.letter = 2;
ocfg.transport_info.srio.type11.mailbox = 3;
ocfg.transport_info.srio.type11.segmap = 0x0;
</syntaxhighlight>

  • SRIO device initialization - applications will need to provide MPM Transport a function to initialize/deinitialize the SRIO device:

<syntaxhighlight lang='C'>
ocfg.transport_info.srio.deviceInit = &mySrioDevice_init;
ocfg.transport_info.srio.initCfg = NULL;      /* param for device init function if needed */
ocfg.transport_info.srio.deviceDeinit = &mySrioDevice_deinit;
ocfg.transport_info.srio.deinitCfg = NULL;    /* param for device deinit function if needed */
</syntaxhighlight>
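
With ocfg fully populated, the transport handle can then be opened, assuming mpm_transport_open() returns the handle used by the send/receive calls; the profile name and the NULL-on-failure check below are assumptions:

<syntaxhighlight lang='C'>
/* "arm-srio-generic" is an illustrative profile name (see the JSON sketch above) */
mpm_transport_h h = mpm_transport_open("arm-srio-generic", &ocfg);
if (h == NULL) {
    /* open failed (NULL-on-failure is an assumption); check the RM and SRIO setup */
}
</syntaxhighlight>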

SRIO Send and Receive[edit]

For receiving: int mpm_transport_packet_recv(mpm_transport_h h, char **buf, int *len, mpm_transport_recv_t *cfg);

  • buf - a pointer to a buffer of space to copy content to
  • len - length to copy
  • cfg - receive options; used to specify memcpy or direct linking of the buffer

When you open an SRIO handle, it will internally open an SRIO socket handle to listen on.

For sending: int mpm_transport_packet_send(mpm_transport_h h, char **buf, int *len, mpm_transport_packet_addr_t *addr_info, mpm_transport_send_t *cfg);

  • buf - a pointer to a buffer of space to copy content from
  • len - length to copy
  • addr_info - the address to send to
  • cfg - send options; used to specify memcpy or direct linking of the buffer

To send a packet over SRIO, addr_info needs to be specified with packet_addr_type_SRIO_TYPE9 or packet_addr_type_SRIO_TYPE11, along with the SRIO information associated with that type; the send call itself is the same as in the QMSS case shown above. For example, for type 11:

<syntaxhighlight lang='C'>
mpm_transport_packet_addr_t send_addr;
send_addr.addr_type = packet_addr_type_SRIO_TYPE11;
send_addr.addr.srio.type11.tt = 1;
send_addr.addr.srio.type11.id = 0xABCD;
send_addr.addr.srio.type11.letter = 2;
send_addr.addr.srio.type11.mailbox = 3;
</syntaxhighlight>

Compiling and Linking with SRIO Libraries[edit]

  • If you are compiling with the dynamic MPM Transport library, your application will link in the needed user space LLDs at runtime. Please make sure that libqmss_device.so.1, libcppi_device.so.1, and libsrio_device.so.1 exist in your linker path and point to the appropriate device-specific libraries in your filesystem.
  • If you are compiling with static libraries, you will need to link in the needed peripheral LLDs for your application. Please use --whole-archive, -lqmss_<device>, -lcppi_<device>, -lsrio_<device>, and -L${DEVKIT_USR_LIB} (DEVKIT_USR_LIB is your devkit's user library directory containing the needed libs).

Using EDMA3[edit]

Using the Enhanced Direct Memory Access (EDMA3) controller provides a performance increase over memcpy when transporting large chunks of data.

  • Currently only supported in Hyperlink transport mode.

Pre-requisites[edit]

  • MCSDK 3.0.4 or higher. The EDMA3 user mode driver and associated evmTCI6636K2HSample.c file will be available in the linux-devkit.
  • libedma3.so* and libedma3rm.so* - you will need the shared libraries at runtime. Please ensure that your filesystem has the EDMA3 libs under /usr/lib.

MPM-Transport Functions/API to use EDMA3[edit]

The transport functions are named get/put to signify the direction of transfer, and "initiate" to signify that the call asks the EDMA3 controller to start a transfer. All transfer calls have a boolean parameter, is_blocking, that determines whether EDMA3 should wait for completion before continuing. If this parameter is false, mpm_transport_transfer_check() must be called to check for transfer completion and clear the EDMA3 flags. All transfer APIs return a transfer handle, mpm_transport_trans_h, for mpm_transport_transfer_check() to check. A non-blocking usage sketch follows the list below.

Note that all from_addr and to_addr are physical addresses.

  • mpm_transport_get_initiate() - Get length bytes of data from the remote from_addr and store them at the local to_addr
  • mpm_transport_put_initiate() - Put length bytes of data from the local from_addr to the remote destination to_addr
  • mpm_transport_get_initiate_linked() - Same as mpm_transport_get_initiate(), except that this API accepts arrays of to_addr, from_addr, and length and completes all transfers with a single call. The parameter num_links must specify the number of linked transfers and should equal the size of the three aforementioned arrays.
  • mpm_transport_put_initiate_linked() - Same as mpm_transport_put_initiate(), except that this API accepts arrays of to_addr, from_addr, and length and completes all transfers with a single call. The parameter num_links must specify the number of linked transfers and should equal the size of the three aforementioned arrays.
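
A non-blocking usage sketch. The exact prototypes are assumptions based on the parameter names above, not the documented signatures; consult mpm_transport.h in your devkit for the real declarations:

<syntaxhighlight lang='C'>
/* Assumed prototypes -- parameter order and types are illustrative only:
 *   mpm_transport_trans_h mpm_transport_put_initiate(mpm_transport_h h,
 *       uint32_t from_addr, uint32_t to_addr, uint32_t length, int is_blocking);
 *   int mpm_transport_transfer_check(mpm_transport_h h, mpm_transport_trans_h th);
 */
#include <stdint.h>
#include <ti/runtime/mpm-transport/mpm_transport.h> /* assumed header location */

void put_4k(mpm_transport_h h, uint32_t local_src_paddr, uint32_t remote_dst_paddr)
{
    /* start a non-blocking put of 4 KB; both addresses are physical */
    mpm_transport_trans_h th =
        mpm_transport_put_initiate(h, local_src_paddr, remote_dst_paddr, 0x1000, 0);

    /* poll until the transfer completes and the EDMA3 flags are cleared
       (0-on-complete is an assumption) */
    while (mpm_transport_transfer_check(h, th) != 0) {
        /* transfer still in flight; do other work or yield here */
    }
}
</syntaxhighlight>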

Compiling and Linking with EDMA3 Libraries[edit]

  • You will need to compile and link in your device's evm[DEVICE].c file. This is located in linux-devkit/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/include/ti/sdo/edma3/drv/sample/src/platforms/evm[DEVICE].c.
  • You will need to link in the EDMA3 drv and rm libraries for your application. Please use -ledma3 and -ledma3rm with -L${DEVKIT_USR_LIB}, where DEVKIT_USR_LIB is your devkit's user library directory containing the needed libs.

Using libkeystonemmap[edit]

Using the Keystone MMAP driver allows you to access the 36-bit physical memory space.

  • Currently only supported in Hyperlink transport mode.

Pre-requisites[edit]

  • MCSDK 3.0.4 or higher. The libkeystonemmap user mode driver and headers will be available in the linux-devkit.
  • libkeystonemmap.so* - you will need the shared libraries at runtime. Please ensure that your filesystem has the libkeystonemmap libs under /usr/lib.
  • All EDMA3 pre-requisites

MPM-Transport Functions/API to use libkeystonemmap[edit]

The 36-bit address space is accessed by modifying an entry in the MPAX table. Access to the newly mapped 32-bit address space is then performed by the EDMA3 peripheral. Thus, the EDMA3 driver and its requirements are needed, and the APIs provided are 64-bit forms of the put/get_initiate() calls.

  • mpm_transport_get_initiate64() - Same as mpm_transport_get_initiate(), except from_addr and to_addr are 64-bit values (uses MPAX to access 36-bit SOC addresses)
  • mpm_transport_put_initiate64() - Same as mpm_transport_put_initiate(), except from_addr and to_addr are 64-bit values (uses MPAX to access 36-bit SOC addresses)

How the mpm_transport_put_initiate64() API Works for Hyperlink[edit]

MPM Transport performs the following steps:

  1. keystone_mmap() is used to translate the local 36-bit source address to a local 32-bit location. The 32-bit location that is used is specified by the MPAX entry in the JSON file.
  2. A Hyperlink segment (Segment 1) is created from SOC1 to SOC2's MPAX registers.
  3. SOC1 calls keystone_mmap() again, but with Hyperlink Segment 1 as the base address. This allows SOC1 to update SOC2's MPAX entries remotely, translating the 36-bit destination address to a 32-bit logical address on SOC2.
  4. A second Hyperlink segment (Segment 2) is created to map to the 32-bit destination address on SOC2. EDMA3 can then take the source data from the 32-bit local address (via step 1) and put it into Hyperlink Segment 2.
  5. When data hits Hyperlink Segment 2, it is written to the remote 32-bit logical address (via step 3). This has the same effect as writing to the remote 36-bit destination address.

Compiling and Linking with libkeystonemmap Libraries[edit]

You will need to link in the libkeystonemmap library for your application. Please use -lkeystonemmap with -L${DEVKIT_USR_LIB}, where DEVKIT_USR_LIB is your devkit's user library directory containing the needed libs.


