
MCSDK UG Chapter Developing System Mgmt

From Texas Instruments Wiki







Developing with MCSDK: System Management

Last updated: 07/27/2015


Overview[edit]

This chapter describes how to manage resources on the device/EVM using the Resource Manager, how to download and manage DSP images from the ARM, and other topics that have a global impact on your system.

Acronyms[edit]

The following acronyms are used throughout this chapter.

Acronym Meaning
API Application Programming Interface
CD Resource Manager Client Delegate Instance
DSP Digital Signal Processor
DTB Device Tree Blob
DTC Device Tree Compiler
DTS Device Tree Source
EVM Evaluation Module, hardware platform containing the Texas Instruments DSP
FDT Flattened Device Tree
GRL Resource Manager Global Resource List
IPC Texas Instruments Inter-Processor Communication Development Kit
LLD Low Level Driver
MCSDK Texas Instruments Multi-Core Software Development Kit
MPM Multiple Processor Manager
OSAL Operating System Abstraction Layer
PDK Texas Instruments Programmers Development Kit
RM Resource Manager
RTSC Eclipse Real-Time Software Components
TI Texas Instruments

Multiple Processor Manager[edit]

The Multiple Processor Manager (MPM) module is used to load and run DSP images from ARM.

Structure of MPM[edit]

MPM slave node state transitions
  • The MPM has the following two major components:
    • MPM server (mpmsrv): Runs as a daemon; in the default filesystem supplied with MCSDK it comes up automatically. It parses the MPM config file /etc/mpm/mpm_config.json, then waits on a UNIX domain socket. The MPM server runs and maintains a state machine for each slave core.
    • MPM command-line/client utility (mpmcl): Installed in the filesystem; provides command-line access to the server.
  • MPM can be used to load/run slave images in the following ways:
    • Using mpmcl utility
    • From config file, to load at bootup
    • Writing an application to use mpmclient header file and library
  • The mpm server/daemon logs go to "/var/log/daemon.log" or "/var/log/mpmsrv.log" based on "outputif" configuration in the JSON config file.
  • The load command writes the slave image segments to memory using the UIO interface. The run command starts the loaded slave images.
  • An example DSP image is provided in the MCSDK package in the mcsdk_bios_#_##_##_##/examples/mpm directory.
  • All events from the state transition diagram are available as options of mpmcl command, except for the crash event.
  • The reset state powers down the slave nodes.
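The state transitions above can be sketched in C. This is a minimal, hypothetical model: the state and event names are assumptions based on the mpmcl commands (load, run, reset) and the error/crash behavior described in this section, not the actual mpmsrv implementation.

```c
/* Hypothetical model of the MPM slave state machine described above.
 * State/event names are assumptions; the real logic lives in mpmsrv. */
typedef enum { STATE_IDLE, STATE_LOADED, STATE_RUNNING, STATE_ERROR } SlaveState;
typedef enum { EVT_LOAD, EVT_RUN, EVT_RESET, EVT_CRASH } SlaveEvent;

/* Return the next state; unexpected events drive the slave to error. */
static SlaveState slave_next_state(SlaveState s, SlaveEvent e)
{
    switch (e) {
    case EVT_LOAD:  return (s == STATE_IDLE)   ? STATE_LOADED  : STATE_ERROR;
    case EVT_RUN:   return (s == STATE_LOADED) ? STATE_RUNNING : STATE_ERROR;
    case EVT_RESET: return STATE_IDLE;  /* reset powers down, back to idle */
    case EVT_CRASH: return STATE_ERROR; /* crash event reported by the DSP */
    }
    return STATE_ERROR;
}
```

Note that reset is accepted from any state, which matches the recovery advice below: after an error, reset returns the slave to idle so it can be loaded and run again.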


Methods to load and run ELF images using MPM[edit]

Using mpmcl utility to manage slave processors[edit]

Use mpmcl --help to see details of the supported commands. Following is the output of mpmcl help:

Usage: mpmcl <command> [slave name] [options]
Multiproc manager CLI to manage slave processors
 <command>           Commands for the slave processor
                     Supported commands: ping, load, run, reset, status, coredump, transport
                                         load_withpreload, run_withpreload
 [slave name]        Name of the slave processor as specified in MPM config file
 [options]           In case of load, the option field need to have image file name


Following is a sample set of commands to manage slave processors.

- Ping the daemon to check if it is alive: mpmcl ping
- Check status of DSP core 0: mpmcl status dsp0
- Load DSP core 0 with an image: mpmcl load dsp0 dsp-core0.out
- Run DSP core 0: mpmcl run dsp0
- Reset DSP core 0: mpmcl reset dsp0
- Load DSP core 0 with a preload image: mpmcl load_withpreload dsp0 preload_image.out dsp-core0.out
- Run DSP core 0 with preload: mpmcl run_withpreload dsp0

Note: In case of error, the MPM server takes the slave to the error state. You need to run the reset command to change back to the idle state so that the slave can be loaded and run again.

Note: A slave core status of idle means the slave core is not loaded as far as MPM is concerned. It does NOT mean the slave core is running idle instructions.


Loading and running slave images at bootup[edit]

The config file can specify a command script to load and run slave cores at bootup. The path of the script is added via "cmdfile": "/etc/mpm/slave_cmds.txt" in the config file. Following is a sample command script to load and run DSP images.

dsp0 load ./dsp-core0.out
dsp1 load ./dsp-core0.out
dsp0 run
dsp1 run
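As a concrete illustration, the script above is hooked in through the cmdfile tag. The key/value pair below comes from the text; the surrounding object structure is only a sketch — consult the mpm_config.json shipped in /etc/mpm for the exact schema.

```json
{
    "cmdfile": "/etc/mpm/slave_cmds.txt"
}
```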

Managing slave processors from application program[edit]

An application can include mpmclient.h from the MPM package and link to libmpmclient.a to load/run/reset slave cores. mpmcl is essentially a wrapper around this library, providing command-line access to the functions in mpmclient.h.
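A hypothetical sketch of driving a slave core through such a library follows. The function names (mpm_ping, mpm_load, mpm_run, mpm_reset) are assumptions modeled on the mpmcl commands, not the verified mpmclient.h prototypes; the stand-in stubs exist only so the sketch is self-contained. On a target, include mpmclient.h and link against libmpmclient.a instead.

```c
#include <stdio.h>

/* Stand-in stubs (assumed names) so this sketch compiles without the MPM
 * package; replace with the real mpmclient.h prototypes on a target. */
static int mpm_ping(void)                                 { return 0; }
static int mpm_load(const char *slave, const char *image) { (void)slave; (void)image; return 0; }
static int mpm_run(const char *slave)                     { (void)slave; return 0; }
static int mpm_reset(const char *slave)                   { (void)slave; return 0; }

/* Load and run an image on one slave core; on failure, reset the core back
 * to idle (as the Note above recommends) so it can be reused. */
static int start_dsp(const char *slave, const char *image)
{
    if (mpm_ping() != 0) {
        fprintf(stderr, "mpm server not responding\n");
        return -1;
    }
    if (mpm_load(slave, image) != 0 || mpm_run(slave) != 0) {
        mpm_reset(slave);   /* error state -> idle */
        return -1;
    }
    return 0;
}
```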

DSP Image Requirements[edit]

For MPM to properly load and manage a DSP image, the following are required:

  • The DSP image should be in ELF format.
  • The MPM ELF loader loads to DSP memory only those segments whose type field is set to PT_LOAD. To skip loading a particular section, set its type to NOLOAD in the command/cfg file.

<syntaxhighlight lang="javascript"> /* Section not to be loaded by remoteproc loader */ Program.sectMap[".noload_section"].type = "NOLOAD"; </syntaxhighlight>

  • The default allowed memory ranges for DSP segments are as follows
            Start Address   Length
  L2 Local  0x00800000      1MB
  L2 Global 0x[1-4]0800000  1MB
  MSMC      0x0C000000      6MB
  DDR3      0xA0000000      512MB

The segment mapping can be changed using the mpm_config.json and Linux kernel device tree.

Getting DSP prints(trace) output from ARM/Linux using MPM[edit]

The DSP image needs to have an uncompressed section named .resource_table. The MPM looks for this table in the DSP ELF image before loading it. The .resource_table can be used to provide SYS/BIOS information on trace buffer location and size.

The .resource_table section must match the remoteproc resource table format used in IPC.

The following steps show how to create the .resource_table section in the DSP image with trace buffer information. These code snippets are taken from the MCSDK release package <mcsdk_bios_#_##_##_##>/examples/mpm

  • In the Configuro Script File of the application, add the following commands to create the section

<syntaxhighlight lang="javascript">
/*
 * SysMin is used here instead of StdMin because the trace buffer address is
 * required for the Linux trace debug driver, plus it provides better
 * performance.
 */
Program.global.sysMinBufSize = 0x8000;
var System = xdc.useModule('xdc.runtime.System');
var SysMin = xdc.useModule('xdc.runtime.SysMin');
System.SupportProxy = SysMin;
SysMin.bufSize = Program.global.sysMinBufSize;

/*
 * Configure resource table for trace only.
 * Note that the traceOnly parameter should not be set if the application is
 * using MessageQ-based IPC to communicate between cores.
 */
var Resource = xdc.useModule('ti.ipc.remoteproc.Resource');
Resource.loadSegment = Program.platform.dataMemory;
Resource.traceOnly = true;
</syntaxhighlight>

DSP trace/print messages from Linux[edit]

The DSP log messages can be read from the following debugfs location

DSP log entry for core #: /sys/kernel/debug/remoteproc/remoteproc#/trace0

Where # is the core id starting from 0.

<syntaxhighlight lang="bash">

 root@keystone-evm:~# cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
 Main started on core 1
 ....
 root@keystone-evm:~#

</syntaxhighlight>

MPM configuration file[edit]

  • The MPM configuration file is a JSON format configuration file.
  • A sample config file, mpm_config.json, is installed at /etc/mpm in the default root filesystem released as part of MCSDK.
  • The MPM parser ignores any JSON elements it does not recognize. This can be used to put comments in the config file.
  • The tag cmdfile (which is commented as _cmdfile by default) loads and runs MPM commands at bootup.
  • The tag outputif can be syslog, stderr, or a filename (any value that does not match a predefined string is treated as a filename).
  • By default the config file allows loading of DSP images to L2, MSMC and DDR. It can be changed to add more restrictions on loading, or to allow loading to L1 sections.
  • In its current form, MPM does not do MPAX mapping of local to global addresses.
  • The MPM needs 1KB of scratch memory in either DDR or MSMC for its processing (running a trampoline to get 10-bit alignment and a workaround for a DSP reset issue). This is reserved at the end of DDR in the config file with the keywords "scratchaddr" and "scratchlength". DSP images must not load to the scratch memory location.

Note: In the future, MPM will get scratch memory from the kernel and the scratch memory requirement will be removed.
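For illustration, a scratch reservation in the config file might look like the fragment below. The key names come from the text above; the values are purely illustrative (1KB carved out at the end of a 512MB DDR window starting at 0xA0000000), and the enclosing structure is a sketch — consult the shipped mpm_config.json for the exact schema.

```json
{
    "scratchaddr": "0xBFFFFC00",
    "scratchlength": "0x400"
}
```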

Crash event notification, coredump generation and processing[edit]

The MPM can monitor crash events from DSPs. The DSP image needs to be instrumented to indicate to MPM that it has crashed.

Instrumenting DSP application[edit]

The DSP application should install an exception hook for the DSP fault management APIs.

Please follow the steps below to add the crash indication and add necessary segments for coredump.

  • In the .cfg (Configuro Script) file of the application, add the following commands to create the needed sections

<syntaxhighlight lang="javascript">
var devType = "k2?"; /* Replace k2? with the k2 device in use: k2e, k2h, k2k, or k2l */

/* Load and use the Fault Management package */
var Fault_mgmt = xdc.useModule('ti.instrumentation.fault_mgmt.Settings');
Fault_mgmt.deviceType = devType;

/* Load the Exception module and register an exception hook */
var Exception = xdc.useModule('ti.sysbios.family.c64p.Exception');
Exception.exceptionHook = '&myExceptionHook';
Exception.enablePrint = true;

/* Add note section for coredump */
Program.sectMap[".note"] = new Program.SectionSpec();
Program.sectMap[".note"] = Program.platform.dataMemory;
Program.sectMap[".note"].loadAlign = 128;
</syntaxhighlight>

  • In a source/header file, create the exception hook function as follows

<syntaxhighlight lang="c">
/* Fault Management Include File */
#include <ti/instrumentation/fault_mgmt/fault_mgmt.h>

Void myExceptionHook(Void)
{
    uint32_t   i;
    Fm_HaltCfg haltCfg;
    uint32_t   efr_val;

    /* Copy register status into fault management data region for Host */
    Fault_Mgmt_getLastRegStatus();

    memset(&haltCfg, 0, sizeof(haltCfg));
    efr_val = CSL_chipReadEFR();

    /* If the triggered exception originates from another core through an
     * NMI exception, there is no need to halt processing and notify other
     * cores, since the parent core where the exception was originally
     * triggered via event would notify them.  This eliminates recursive
     * exceptions. */
    if (!(efr_val & 0x80000000)) {
        /* Halt all processing - only needs to be done on one core */
        haltCfg.haltAif = 1;
        haltCfg.haltCpdma = 1;
#if EXCLUDE_LINUX_RESOURCES_FROM_HALT
        haltCfg.haltSGMII = 0;
        /* EDMA used by kernel to copy data to/from NAND in UBIFS */
        haltCfg.haltEdma3 = 0;
        haltCfg.excludedResources = &linuxResources[0];
#else
        haltCfg.haltSGMII = 1;
        haltCfg.haltEdma3 = 1;
        haltCfg.excludedResources = NULL;
#endif
        Fault_Mgmt_haltIoProcessing(&fmGblCfgParams, &haltCfg);

        for (i = 0; i < fmGblCfgParams.maxNumCores; i++) {
            /* Notify remote DSP cores of exception - WARNING: This will
             * generate an NMI pulse to the remote DSP cores */
            if (i != CSL_chipReadDNUM()) {
                Fault_Mgmt_notify_remote_core(i);
            }
        }
    }

    /* Notify Host of crash */
    Fault_Mgmt_notify();
}
</syntaxhighlight>

A sample test application is provided in pdk_keystone2_#_##_##_##\packages\ti\instrumentation\fault_mgmt\test

Detecting crash event in MPM[edit]

In case of a DSP exception, the MPM calls the script provided in the JSON config file. The MCSDK Linux filesystem has a sample script /etc/mpm/crash_callback.sh that sends a message to syslog indicating which core crashed. This script can be customized to suit notification needs.

Generating DSP coredump[edit]

DSP exceptions can be one of the following

  • Software-generated exceptions
  • Internal/external exceptions
  • Watchdog timer expiration

The MPM creates an ELF formatted core dump.

<syntaxhighlight lang="bash"> root@keystone-evm:~# mpmcl coredump dsp0 coredump.out </syntaxhighlight>

The above command will generate coredump file with name coredump.out for the DSP core 0.

Note: A coredump can also be captured from a running system that has not crashed; in this case, the register information will not be available in the coredump.

Converting and loading core dump image to CCS[edit]

The current version of CCS (5.4) does not support ELF core dump. The dspcoreparse utility provided in mcsdk_linux_#_##_##_##/host-tools is used to parse the ELF-format coredump and generate a crash dump file that can be uploaded to CCS for further analysis.
Copy the coredump file generated above to mcsdk_linux_#_##_##_##/host-tools/dspcoreparse and run the following command to get the crash dump file for CCS.

Note: In CCS, a core dump is referred to as a "crash dump".

<syntaxhighlight lang="bash"> user@dspcoreparse $ ./dspcoreparse -o coredump.txt coredump.out </syntaxhighlight>

This utility can also be used to display the parsed content. However, CCS provides a graphical view of the information.

Analyzing DSP crash in CCS[edit]

The crash dump file can be loaded to CCS with the symbols of the DSP executable to see register content, stack, memory and other information. Please see Crash_Dump_Analysis for more information on loading and analyzing a CCS crashdump file.

DSP_crash_stack.jpg

Further analysis of the crash can be done in CCS by opening ROV. Please see Runtime Object Viewer for more details on ROV.

Note: The scripting console of CCS sometimes does not load the crashdump file from a path other than the base directory. As a workaround, use the following command to load the coredump file from the CCS scripting console. <syntaxhighlight lang="javascript"> activeDS.expression.evaluate('GEL_SystemRestoreState("/home/ubuntu/coredump.txt")') </syntaxhighlight>

Note: The coredump scheme is subject to change in future releases.

MPM error codes and debugging[edit]

Following are some pointers for MPM debugging and error codes

  • If the MPM server has crashed, exited, or is not running, mpmcl ping will return failure
  • If there is a load/run/reset failure, the MPM client provides error codes. The major error codes are given below
Error code Error type
-100 error_ssm_unexpected_event
-101 error_ssm_invalid_event
-102 error_invalid_name_length
-103 error_file_open
-104 error_image_load
-105 error_uio
-106 error_image_invalid_entry_address
-107 error_resource_table_setting
-108 error_error_no_entry_point
-109 error_invalid_command
  • The MPM daemon logs go to /var/log/mpmsrv.log by default. This file can provide more information on the errors.
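An application checking return values from the MPM client library might want to print these codes by name. The mapping below is taken directly from the table above; the helper function itself is just an illustrative sketch, not part of the MPM API.

```c
/* Map the MPM client error codes listed above to printable names, e.g. for
 * logging. Codes and names are taken from the table in this section. */
static const char *mpm_strerror(int code)
{
    switch (code) {
    case -100: return "error_ssm_unexpected_event";
    case -101: return "error_ssm_invalid_event";
    case -102: return "error_invalid_name_length";
    case -103: return "error_file_open";
    case -104: return "error_image_load";
    case -105: return "error_uio";
    case -106: return "error_image_invalid_entry_address";
    case -107: return "error_resource_table_setting";
    case -108: return "error_error_no_entry_point";
    case -109: return "error_invalid_command";
    default:   return "unknown error";
    }
}
```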

Loading DSP images from CCS (without using MPM)[edit]

By default, the DSP cores are powered down by u-boot at the time of EVM boot. After kernel is running, MPM can be used to load and run DSP images from Linux command-line/utility.

Rather than using MPM, if you want to use CCS to load and run DSP images, then apply the following settings at the u-boot prompt:

<syntaxhighlight lang="bash">
setenv debug_options 1
saveenv
reset
</syntaxhighlight>

This prevents u-boot from powering down the DSPs at startup, so CCS/JTAG can connect to the DSP for loading and debugging. This option is useful if you want to boot Linux on ARM and then use JTAG to manually load and run the DSPs. Otherwise you may see "held in reset" errors in CCS.

Note: The above step is not needed if you want to load DSP cores using MPM and subsequently use CCS to connect to the DSP.

Frequently Asked Questions[edit]

Q: MPM does not load and run the DSP image
A: There can be several scenarios; following are a few of them

  • The MPM server may not be running. The command mpmcl ping will time out in this case. The MPM server is expected to be running in the background to service requests from the MPM client. The log file /var/log/mpmsrv.log can provide more information.
  • The devices relevant to MPM (/dev/dsp0, ... , /dev/dsp7, /dev/dspmem) may not have been created. Check whether these devices are present; if they are not, check whether the kernel and device tree have the right patches for these devices.
  • The log can print the error codes provided in the MPM error codes section.
  • Another way to debug loading issues is to run the MPM server in non-daemon mode from one shell using the command mpmsrv -n (kill the server first if it is running, either with mpmsrv -k or by killing the process), then run the client operations from another shell.


Q: MPM fails to load the segments
A: The MPM fundamentally copies segments from the DSP image to memory using a custom UIO mmap interface. Each local or remote node (DSP) is allocated some amount of memory resources via the config file. The segments in the config file need to be a subset of the memory resources present in the kernel DTS file. The system integrator can add or change memory configurations as needed by the application. To change the default behavior, both the JSON config file and the kernel device tree must be updated. In the JSON config file, update the segments section; make sure it does not overlap the scratch memory section (you might have to move the scratch section if the allocated DDR size is increased). In the kernel device tree, update the mem sections of dsp0, ... , dsp7, and dspmem.

  • Sometimes a few segments used by the DSP may not be accessible by the ARM at the time of loading. These segments can cause load failure. It is therefore useful to understand the memory layout of your application; if there are any such sections, you can skip loading those segments to memory using the NOLOAD method described above.
  • The MPM does not have MPAX support yet, so MPAX mapping needs to be handled by the application.
  • If the linker adds a hole in the resource table section right before the actual resource_table due to alignment restrictions, MPM currently cannot skip the hole and might get stuck. In this case, if you hex-dump the resource table (method given below), its size will be quite large (normally, for a non-IPC case, it is around 0xac). The workaround is to align the .resource_table section to 0x1000 using the linker command file or some other method so that the linker does not add a hole in the resource_table section. In the future, MPM will take care of this offset.


Q: MPM fails to run the image
A: MPM takes the DSP out of reset to run the image, so a failure to run is normally attributed to the DSP crashing before main or some other issue in the image. To debug such an issue, after mpmcl run, use CCS to connect to the target and load the symbols of the image; the DSP can then be debugged using CCS. Another way to debug a run issue is to add an infinite while loop in the reset function so that the DSP stops at the very beginning. Then load and run the DSP using MPM, connect through CCS, load symbols, come out of the while loop, and debug.

Q: I don't see DSP prints from debugfs
A: Make sure you followed the procedure described above to include the resource table in the image. Take care that the resource table is not removed by the linker. To check whether the resource table is present in the image, use the command readelf --hex-dump=.resource_table <image name>; it should contain some non-zero data.
Another point: if you are loading the same image on multiple cores and the resource table and trace buffer segments overlap with each other in memory, there can be undesirable effects.

Q: I see the DSP state in /sys/kernel/debug/remoteproc/remoteproc0/state as crashed
A: The file /sys/kernel/debug/remoteproc/remoteproc#/state does not indicate the state of the DSP when MPM is used for loading. The state of the DSP can be seen using the MPM client. See the description of the command in the Methods to load and run ELF images using MPM section.

Resource Manager[edit]

The Resource Manager (RM) is delivered as part of PDK as a means for managing system resource contentions. RM provides the ability to allocate system resources to "entities" within a software architecture based on sets of allocation rules. The term "entities" can refer to anything from a device core, to an OS task or process, to a specific software module. The resources managed by RM, and the "entities" for which they are managed, are completely defined by the RM configuration parameters. The RM configuration parameters are device-specific, whereas the RM source code is completely device-independent.

What follows is a description of the RM architecture and in-depth instruction on how to integrate, configure, and use RM.

Architecture[edit]

Resource Manager is an instance based architecture. Integrating RM into a system consists of creating a set of RM instances, connecting these instances via transport mechanisms, and then using the instances to request resources from different device cores, processes, tasks, modules, etc. Resource permissions are derived from RM instance names so it is imperative that if two system entities require different permissions they issue their service requests through different RM instances. There are no restrictions on where a RM instance can be created as long as the means exist to connect the instance to other RM instances within the system.

There are three primary RM instance types:

  • Server - Manages all defined system resources. Handles resource requests received from connected Client Delegates and Clients.
  • Client Delegate (CD) - Manages resources provided by the RM Server. Handles resource requests received from connected Clients.
  • Client - Connects and forwards resource requests to a Server or CD for servicing.

Example multicore RM instance topology (not based on any specific device or system architecture): RM_inst_multi_core.jpg
Example multi-task/process RM instance topology (not based on any specific device or system architecture): RM_inst_multi_task.jpg
Example multi-DSP with multi-task/process RM instance topology (not based on any specific device or system architecture): RM_inst_multi_dsp.jpg

General Instance Interfaces[edit]

All the RM instance types share a common set of APIs allowing them to receive resource requests and communicate on any device.

  • Service API
  • Transport API
  • OS Abstraction Layer (OSAL) API

RM_general_interfaces.jpg

Service API[edit]

The RM service API defines what resource services RM provides to the system. All RM instances can be issued resource service requests, but not all instances can process them; most service requests will be processed and validated on the RM Server instance. Due to the blocking nature of most transports that may connect two RM instances, the service API lets the system decide how completed service requests are delivered: when a resource service is requested, RM can be told to block until the service request has been completed, or to return the service request result at a later time via a callback function provided by the system. The following services are supported by RM:

Service Type Purpose
<syntaxhighlight lang='c'>Rm_service_RESOURCE_ALLOCATE_INIT</syntaxhighlight> Allocates a resource to the requesting system entity, checking initialization privileges of the entity prior to allocation
<syntaxhighlight lang='c'>Rm_service_RESOURCE_ALLOCATE_USE</syntaxhighlight> Allocates a resource to the requesting system entity, checking usage privileges of the entity prior to allocation
<syntaxhighlight lang='c'>Rm_service_RESOURCE_FREE</syntaxhighlight> Frees the specified resource from control of the requesting system entity
<syntaxhighlight lang='c'>Rm_service_RESOURCE_STATUS</syntaxhighlight> Returns the allocation reference count of a specified resource to the requesting system entity
<syntaxhighlight lang='c'>Rm_service_RESOURCE_MAP_TO_NAME</syntaxhighlight> Maps a specified resource to a specified string and stores the mapping in the RM NameServer
<syntaxhighlight lang='c'>Rm_service_RESOURCE_GET_BY_NAME</syntaxhighlight> Returns a set of resource values to the requesting system entity based on a specified, existing NameServer name string. The resource is not allocated to the requesting entity. Just the resource values are returned.
<syntaxhighlight lang='c'>Rm_service_RESOURCE_UNMAP_NAME</syntaxhighlight> Unmaps the specified, existing NameServer name string from a resource and removes the mapping from the RM NameServer
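The service types in the table can be grouped by where they are ultimately processed. The sketch below uses the enumerator names from the table, but the enum definition itself and the helper are stand-ins for illustration — the real definitions live in the RM headers in PDK. The helper reflects the Client Delegate section below: NameServer-backed requests are always forwarded to the Server, since only the Server maintains the NameServer.

```c
/* Stand-in enum; enumerator names are from the service table above. */
typedef enum {
    Rm_service_RESOURCE_ALLOCATE_INIT = 0,
    Rm_service_RESOURCE_ALLOCATE_USE,
    Rm_service_RESOURCE_FREE,
    Rm_service_RESOURCE_STATUS,
    Rm_service_RESOURCE_MAP_TO_NAME,
    Rm_service_RESOURCE_GET_BY_NAME,
    Rm_service_RESOURCE_UNMAP_NAME
} Rm_ServiceType;

/* Returns nonzero for services that touch the RM NameServer; a CD always
 * forwards these to the Server. */
static int rm_service_uses_nameserver(Rm_ServiceType type)
{
    return type == Rm_service_RESOURCE_MAP_TO_NAME ||
           type == Rm_service_RESOURCE_GET_BY_NAME ||
           type == Rm_service_RESOURCE_UNMAP_NAME;
}
```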

Transport API[edit]

Messages exchanged between RM instances in order to complete service requests all flow through the instance transport API. RM does not implement any transport mechanisms internally in order to stay device and OS agnostic. A system which integrates RM must supply the transport between any two RM instances. The transport mechanism used to connect two RM instances is completely up to the system. The RM transport API requires two RM instances be registered with one another if they are to communicate. The registration process involves the system providing the RM instances the following application implemented transport functions:

  • <syntaxhighlight lang='c'>Rm_Packet *(*rmAllocPkt)(Rm_AppTransportHandle appTransport, uint32_t pktSize, Rm_PacketHandle *pktHandle);</syntaxhighlight>
    This function will be discussed in depth later but it essentially returns a transport buffer to RM. RM will place the RM specific message within the transport buffer
  • <syntaxhighlight lang='c'>int32_t (*rmSendPkt)(Rm_AppTransportHandle appTransport, Rm_PacketHandle pktHandle);</syntaxhighlight>
    This function will be discussed in depth later but it takes a RM populated transport buffer and sends it on the application transport using the appTransport handle

When the application receives a packet/message on a transport designated for RM it must extract the RM packet and provide it to RM via RM's transport receive API:

  • <syntaxhighlight lang='c'>int32_t Rm_receivePacket(Rm_TransportHandle transportHandle, const Rm_Packet *pkt);</syntaxhighlight>
    The RM receive API will not free the RM packet provided to it. It assumes the application transport code will free the RM packet once the RM receive API returns
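The two application-implemented callbacks can be sketched as follows. The callback signatures come from the text above; Rm_Packet and the handle types here are minimal stand-in definitions (the real ones come from the RM headers in PDK), and the one-deep in-memory "wire" stands in for whatever transport the system actually uses (IPC MessageQ, sockets, etc.).

```c
#include <stdint.h>
#include <stdlib.h>

/* Stand-in types; real definitions come from the RM headers in PDK. */
typedef struct { uint32_t pktLenBytes; char data[256]; } Rm_Packet;
typedef void *Rm_AppTransportHandle;
typedef void *Rm_PacketHandle;

static Rm_Packet *pending;  /* one-deep "wire" between two local instances */

/* rmAllocPkt: hand RM a transport buffer big enough for pktSize bytes. */
static Rm_Packet *appAllocPkt(Rm_AppTransportHandle appTransport,
                              uint32_t pktSize, Rm_PacketHandle *pktHandle)
{
    (void)appTransport;
    Rm_Packet *pkt = (pktSize <= sizeof(*pkt)) ? calloc(1, sizeof(*pkt)) : NULL;
    *pktHandle = pkt;          /* RM passes this handle back to rmSendPkt */
    return pkt;
}

/* rmSendPkt: put the RM-populated packet "on the wire". */
static int32_t appSendPkt(Rm_AppTransportHandle appTransport,
                          Rm_PacketHandle pktHandle)
{
    (void)appTransport;
    pending = (Rm_Packet *)pktHandle;
    return 0;
}
```

On the receive side, the application would extract the packet, call Rm_receivePacket(transportHandle, pkt), and then free the packet itself, since (per the note above) the RM receive API does not free it.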

OSAL API[edit]

The OS Abstraction Layer API allows RM's memory, cache, and blocking mechanism management functions to be defined within the context of the device and/or OS that it will be operating.

OSAL API Purpose Special Considerations
<syntaxhighlight lang='c'>extern void *Osal_rmMalloc (uint32_t num_bytes);</syntaxhighlight> Allocates a block of memory of specified size to RM
  • Location (local or shared memory), alignment and cache considerations do not matter when allocations originate from the standard Server, Client Delegate, and Client instances.
  • Memory allocated for Shared Server/Client instances must originate from shared memory and be aligned and padded to a cache line.
<syntaxhighlight lang='c'>extern void Osal_rmFree (void *ptr, uint32_t size);</syntaxhighlight> Frees a block of memory of specified size that was allocated to RM
<syntaxhighlight lang='c'>extern void *Osal_rmCsEnter (void);</syntaxhighlight> Enters a critical section protecting against access from multiple cores, threads, tasks, and/or processes
  • Critical section protection is not required for the standard Server, Client Delegate, and Client instances since they all operate using independent, non-shared data structures
  • Critical section protection is required for Shared Server/Client instances since the resource management data structures are contained in shared memory
<syntaxhighlight lang='c'>extern void Osal_rmCsExit (void *CsHandle);</syntaxhighlight> Exits a critical section protecting against access from multiple cores, threads, tasks, and/or processes
  • Critical section protection is not required for the standard Server, Client Delegate, and Client instances since they all operate using independent, non-shared data structures
  • Critical section protection is required for Shared Server/Client instances since the resource management data structures are contained in shared memory
<syntaxhighlight lang='c'>extern void Osal_rmBeginMemAccess (void *ptr, uint32_t size);</syntaxhighlight> Indicates a block of memory is about to be accessed. If the memory is cached a cache invalidate will occur to ensure the cache is updated with the memory block data residing in actual memory Cache invalidate operations are only required if RM instance data structures are allocated from a cached memory region
<syntaxhighlight lang='c'>extern void Osal_rmEndMemAccess (void *ptr, uint32_t size);</syntaxhighlight> Indicates a block of memory has finished being accessed. If the memory is cached a cache writeback will occur to ensure the actual memory is updated with contents of the cache Cache writeback operations are only required if RM instance data structures are allocated from a cached memory region
<syntaxhighlight lang='c'>extern void *Osal_rmTaskBlockCreate (void);</syntaxhighlight> Creates an instance of a task blocking mechanism allowing a RM instance to block in order to wait for a service request to complete RM task blocking is only required if application service requests specifically request RM not return until the service request is satisfied
<syntaxhighlight lang='c'>extern void Osal_rmTaskBlock (void *handle);</syntaxhighlight> Blocks a RM instance waiting for a service request to complete using the provided task blocking mechanism handle RM task blocking is only required if application service requests specifically request RM not return until the service request is satisfied
<syntaxhighlight lang='c'>extern void Osal_rmTaskUnblock (void *handle);</syntaxhighlight> Unblocks a RM instance when a service request has completed. RM task blocking is only required if application service requests specifically request RM not return until the service request is satisfied
<syntaxhighlight lang='c'>extern void Osal_rmTaskBlockDelete (void *handle);</syntaxhighlight> Deletes an instance of a task blocking mechanism RM task blocking is only required if application service requests specifically request RM not return until the service request is satisfied
<syntaxhighlight lang='c'>extern void Osal_rmLog (char *fmt, ... );</syntaxhighlight> Allows RM to log various messages This OSAL API is used by RM to print resource status and RM instance status logs
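A minimal OSAL sketch for a standard (non-shared) instance on Linux might look like the following. This is an illustrative assumption, not the shipped OSAL: plain malloc/free, one process-wide pthread mutex for the critical section, and no-op cache hooks, which the table above permits when RM data structures do not live in cached shared memory.

```c
#include <stdint.h>
#include <stdlib.h>
#include <pthread.h>

static pthread_mutex_t rmCs = PTHREAD_MUTEX_INITIALIZER;

void *Osal_rmMalloc(uint32_t num_bytes)     { return malloc(num_bytes); }
void  Osal_rmFree(void *ptr, uint32_t size) { (void)size; free(ptr); }

void *Osal_rmCsEnter(void)
{
    pthread_mutex_lock(&rmCs);
    return &rmCs;              /* handle passed back to Osal_rmCsExit */
}

void Osal_rmCsExit(void *CsHandle)
{
    pthread_mutex_unlock((pthread_mutex_t *)CsHandle);
}

/* No-ops: in this sketch RM instance data is in non-cached private memory. */
void Osal_rmBeginMemAccess(void *ptr, uint32_t size) { (void)ptr; (void)size; }
void Osal_rmEndMemAccess(void *ptr, uint32_t size)   { (void)ptr; (void)size; }
```

A Shared Server/Client OSAL would differ as the table describes: allocations from cache-aligned shared memory, real cache writeback/invalidate operations, and a critical section that protects across cores, not just threads.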

Server[edit]

The RM Server manages all resource data structures and tracks resource ownership. The Server also maintains the NameServer. A majority of resource requests will be forwarded by other instances to the Server for completion. A system integrating RM should contain no more than one RM Server, since the Server maintains the view of all resources managed by RM. It is possible to have more than one RM Server, but the resources managed by each Server must be mutually exclusive; if there is any overlap in resources there may be fatal resource conflicts. There is no limit to the number of Client Delegates and Clients that can connect to the Server. The RM Server is the root of the RM instance connection tree.

Client[edit]

The RM Client is mainly used as an interface to request services. No resource management is done locally on Client instances. Therefore, all requests issued via a Client will be forwarded to a RM Server or CD based on the Client's instance connections. There is no limit to the number of Clients that can exist. However, Clients cannot connect to another Client and can connect to at most one Server or CD. A Client cannot connect to both a Server and CD.

Client Delegate (CD)[edit]

The CD is a middle ground between a Server and a Client. The RM CD can manage small subsets of resources, provided by the Server, locally. The CD can handle service requests from connected Clients as long as the resource specified in the request has been provided to the CD by the Server for local management. Otherwise, the service will be forwarded to the Server for processing. All NameServer requests received by the CD will be forwarded to the Server. The CD is of use in architectures where the transport path between the Server and other RM instances is slow. A CD with a faster transport path between itself and Clients can be established so that not all service requests need to flow over a slow transport path to the Server for completion. If the CD can handle N requests based on the resources provided to it by the Server, only every (N+1)th transaction will be slower since it must be sent over the slow transport path to the Server.

There is no limit to the number of CDs that can exist and the number of Clients that can connect to a single CD. However, no two CDs can connect to each other and a CD can be connected to only one Server.

Shared Instances[edit]

Special shared memory versions of the RM Server and Client can be created for systems that have strict cycle requirements and little to no tolerance for the blocking operations that may take place within RM instances. When configured, the Shared Server and any Shared Clients connected to the Server will complete service requests immediately by accessing resource data structures existing within shared memory. The major requirement for the shared instances is that some form of shared memory is available for access between the Shared Server and Clients.

Shared Servers and Shared Clients cannot connect to any RM instances via the transport API. Only Shared Clients can connect to Shared Servers and the connection is made at Shared Client initialization via the shared memory area containing the RM Server resource data structures. Shared Clients are essentially piggybacking on the Shared Server instance located in shared memory.

Shared Server[edit]

A Shared Server instance is no different from a standard Server instance, except that the RM OSAL APIs provided by the system must allocate and free from a shared memory area accessible to the portions of the system that will be running the Shared Server and any Shared Clients. Since shared memory will be accessed by the Shared Server, the CsEnter/Exit and Begin/EndMemAccess OSAL APIs must account for shared memory accesses and any caching that may take place.

As previously mentioned the Shared Server will not accept any connections via the transport API. Only Shared Clients can connect to the Shared Server and that will be at Shared Client initialization time.

Shared Client[edit]

Shared Client instances are no different from standard Client instances from a data structure and resource request standpoint. The major difference is that at instance initialization the Shared Client will be provided the location of the Shared Server in shared memory. When service requests are issued via a Shared Client, it will map the Shared Server instance and directly access its resource data structures. Therefore, no blocking operations, besides any cache writeback/invalidate operations, will take place.

As previously mentioned, Shared Clients cannot connect to any instances via the transport API. Shared Clients connect to a Shared Server by storing the Shared Server instance pointer. If this pointer was not allocated from shared memory, the Shared Server-Client connection will fail to operate and system integrity cannot be guaranteed.

How Resources Are Managed[edit]

RM makes no upfront assumptions about what resources are managed and how they are managed. These decisions are left up to the system integrator. In essence, RM is a glorified number allocator. Allocations and frees are based on strings which are assumed to map to system resources. Who is allowed to use which resource is defined by the system integrator based on the RM instance names. All resource service requests originate from RM instances. Separate portions of a system can be assigned different resources based on a RM instance name. The separate portions of the system are provided a RM instance created with the respective instance name. Allocate/free resource requests originating from the system will only be provided resources assigned to the RM instance it uses to perform the request. This is essentially how different areas within a system can be assigned different resources.

The key takeaway is resource names and RM instance names must be synchronized across the different areas of RM in order for proper resource management to take place. The name synchronization architecture allows RM to be completely device agnostic from the perspective of managed resources. Which resources are managed and how they're managed can change from application to application and device to device with no RM source code changes. The key RM features that must be synchronized are the following:

  • Global Resource List (GRL) - Defines all resources that will be managed by the RM Server and the CDs/Clients connected to it. Resources are defined within a resource node containing a resource name and its range of values (base + length format).
  • Policies - Define how resources are partitioned amongst the RM instances based on the names used to initialize the RM instances.
  • RM Instances - Instance names must match the names in the policies used to divide up resources.
  • Service Requests - Resource requests through the service API must match a resource name defined in the GRL.

RM_name_synchronization.jpg

Most of RM's resource management takes place on the Server instance. The GRL and Policy are provided to the Server as part of the Server's initialization. The GRL is used to allocate and initialize all the resource allocator instances. A resource allocator will be created for each resource node specified within the GRL. The GRL is not used past initialization so it can be placed in a memory region local to the Server instance. The Policy will be validated (i.e. checked for formatting errors) and stored for reference against service requests received from all instances. Policy checks against the policy provided to the Server will only be made by the Server, through service requests made with the Server instance or forwarded from other instances, so, like the GRL, the policy can be placed in a memory region local to the Server instance. The latter still applies to the Shared Server instance: the policy will be copied, wholesale, to a shared memory region provided by the OSAL memory alloc API.

Some resource management can be offloaded to the CD instance. The CD can be configured to receive resources from the Server and allocate/free those resources to Clients in lieu of the Server. This offloads some of the management duties from the Server and can save time and cycles if the Server connects over a high latency transport while the CD connects to Clients over a low latency transport. The CD is not provided a GRL at initialization but will request resource blocks from the Server when it receives a request, from its service API or a Client, that it classifies as something it can handle in lieu of the Server. Typical requests classified in this manner by the CD are non-specific requests or resource requests with an unspecified base value. At initialization, the CD is provided an exact copy of the policy provided to the Server. This way service request policy checks on the CD will be in sync with the policy check that would have taken place on the Server if the request had been forwarded to the Server instead of being handled by the CD.

No resource management takes place on the Client. Service requests received on a Client instance are always forwarded to either a connected CD or Server.

The NameServer is managed solely by the Server instance. The RM NameServer is a very simple NameServer that allows a resource range to be mapped to a string name. Any service request received by any RM instance involving the NameServer will always be forwarded to the Server instance for completion.

Allocator Algorithm[edit]

The allocators are implemented to save cycles when parsing allocated/freed nodes and to save memory as allocations and frees take place over the lifetime of system execution. An open source balanced binary search tree algorithm was used to implement the allocators. Each allocator maintains the status of a resource's values by creating a node in the tree for each resource value range with different attributes, where the attributes include whether the resource range is allocated or free, how many RM instances own the resource range, and which RM instances own the resource range. Each time a resource range is modified via a service request, checks are performed on adjacent resource nodes. The allocator algorithm will attempt to maintain a minimal number of resource nodes in the allocator tree by combining nodes that result in the same attributes as service requests are completed. This saves system memory and minimizes the cycles needed to search the allocator trees.

Some tree algorithm APIs were added to perform cache writeback/invalidate operations while walking the tree. The additional tree functions were added to support the Shared Server/Client model where the Shared Server's allocators are stored in shared memory.

The unmodified search tree algorithm is open source under the BSD license and can be downloaded from OpenBSD.org. The modified search tree algorithm can be found in the pdk_<device>_w_xx_yy_zz/packages/ti/drv/rm/util directory. The modified algorithm is open source, maintaining the BSD license.

Static Policies[edit]

Static policies can be provided to CD and Client instances as a means to "pre-allocate" resources. There may be some cases where a RM instance must be able to allocate some resources prior to the full system being up. In this initialization environment it is unlikely that all RM instances will be created and even more unlikely that they'll be connected via transports. For cases such as these, a static policy can be used by CDs and Clients to pre-approve resources requested from their service API prior to their transport interface being connected to the Server. The static policy must either be an exact replica of the global policy provided to the Server at initialization or a subset of the global policy provided to the Server. Service requests from a non-Server instance utilizing a static policy will immediately return an approved resource based on the static policy. Any static requests are stored by the CDs and Clients and forwarded to the Server for validation as soon as the transport to the Server is established. If a static policy allocated resource fails validation against the Server policy, the instance that allocated the resource will go into a "locked" state. Service requests cannot be processed by locked instances. Recovering a RM instance from the locked state at runtime is not possible; the static policy must be modified and the application restarted. This is intentional since the failed validation of an already allocated resource can result in unknown system operation.

Only the following services are available with static policies prior to the transport to the Server being established:

  • <syntaxhighlight lang='c'>Rm_service_RESOURCE_ALLOCATE_INIT</syntaxhighlight>
  • <syntaxhighlight lang='c'>Rm_service_RESOURCE_ALLOCATE_USE</syntaxhighlight>

GRL & Policy Format[edit]

The GRL and Policy formats follow the ePAPR 1.1 Device Tree Specification. The device tree specification is used to define a simple flattened device tree (FDT) format for expressing resources and their permissions in the GRL and policy device tree source (DTS) files. The GRL and Policy DTS files are converted to device tree blob (DTB) binary files for consumption by the RM Server and CD instances during runtime init. The Linux kernel device tree compiler (DTC) v1.3.0 is used to perform the conversion from DTS to DTB file. Packaged with the DTC is a BSD-licensed, open source, flattened device tree library called LIBFDT. This library has been integrated with RM, packaged in the pdk_<device>_w_xx_yy_zz/packages/ti/drv/rm/util directory, facilitating RM's ability to read the GRL and policy files at runtime. The following graphic portrays the process used to integrate a GRL or policy DTS file into an application using RM.
RM_grl_policy_format.jpg

For more information on Flattened Device Trees please see: http://elinux.org/Device_Trees

Linux DTB Support[edit]

In some cases it is desirable for RM to partially automate the extraction of resources consumed by a Linux kernel. The system resources used by Linux are defined in the kernel DTB file, which originates from an ePAPR 1.1-based DTS file. RM can easily extract resources identified as used by Linux from the kernel DTB since RM integrates the same DTC utility and LIBFDT library. As will be explained later, the GRL provides a facility for a defined resource to specify a Linux DTB extraction path. At initialization, RM will use this extraction path to parse the Linux DTB for the resource's value range that has been specified as used by Linux. The RM Server must be provided a pointer to the Linux DTB in the file system at initialization; otherwise, the automatic extraction will not occur.

Configuring & Using RM[edit]

In the sections to follow, the complete process to integrate and configure RM will be covered. A generic RM example will be built over the following sections to supplement the explanation of how to integrate RM. The example will integrate RM with a generic software system that needs three shared resources managed: apples, oranges, and bananas. The use of fruits in the example is in part to be generic but also to convey just how flexible RM is. RM can manage any resource that can be mapped to a string and a value range.

Defining the GRL & Policy[edit]

The GRL defines all system resources, and their value ranges, that will be managed by the RM instances at runtime. If a resource, or a resource's value, is not specified in the GRL it will be nonexistent from the point of view of RM. The policies define how resources defined in the GRL are split amongst the RM instances. If a resource, or a resource's value, is not specified in the policy it is assumed to be unavailable to any RM instance. The GRL and policies start out as easily editable device tree source files. They are converted to DTBs, which can then be fed to the RM Server (or the CD and Client, in the case of static policies) at initialization. Offloading the resource definitions and their access permissions to files provided to RM at initialization allows a system integrator to easily modify which resources are managed by RM, and how they are managed, without making RM source code changes.

GRL/Policy Definition & Conversion Tools[edit]

Device Tree Compiler[edit]

The GRL and policies are based on Device Tree Compiler v1.3.0 available from http://jdl.com/software/dtc-v1.3.0.tgz. DTC v1.3.0 is only supported in a Linux environment. One can attempt to bring the tool up in a Windows environment via Msys or Cygwin but proper operation cannot be guaranteed.

DTC Installation Steps

  1. Download dtc-v1.3.0.tgz and copy it to a Linux environment
  2. Unzip the tar:
    <syntaxhighlight lang='bash'>$ tar xzf dtc-v1.3.0.tgz</syntaxhighlight>
  3. CD to the created dtc directory:
    <syntaxhighlight lang='bash'>$ cd dtc-v1.3.0</syntaxhighlight>
  4. Build the DTC utilities:
    <syntaxhighlight lang='bash'>$ make all</syntaxhighlight> An error due to an unused set variable will cause the make to fail.
  5. Edit dtc-v1.3.0/Makefile to remove -Wall from the WARNINGS macro
  6. Rebuild the DTC utilities:
    <syntaxhighlight lang='bash'>$ make all</syntaxhighlight>
  7. The 'dtc' and 'ftdump' executables will now exist in the dtc-v1.3.0 directory
RM_dtc_dir.jpg
DTB to C-Style Array Conversion Utility[edit]

The cify.sh shell script, used to convert a DTB binary to a C-style array, is provided with RM under the pdk_<device>_w_xx_yy_zz/packages/ti/drv/rm/util directory. No installation is needed; just copy the script to the directory in the Linux environment where DTBs will be located. How to use this utility will be covered in a later section.

General DTS Text Format[edit]

Please read dtc-v1.3.0/Documentation/dts-format.txt prior to creating and editing any DTS files. It contains basic knowledge required to create a valid DTS file.

In its basic form the DTS file is a list of nodes containing optional properties. Nodes are defined with C-style curly braces, ending with a semi-colon. Properties can be defined in different ways: with a single integer, a list of integers, a single string, a list of strings, or a mix of all of these. A property definition ends with a semi-colon. C-style comments are allowed in DTS files. The basic DTS file:
<syntaxhighlight lang='c'>
/* All RM DTS files must start with the Version 1 DTS file layout identifier */
/dts-v1/;

/* Root node - Must ALWAYS be defined */
/ {

   /* Optional root node properties */
   root-property = ...;
   /* Begin child node definitions */
   child-node-1 {
       /* child node optional properties */
       cn-1-property = ...;
       cn-1-other-property = ...;
       /* Begin child-node-1 sub nodes */
       sub-node-1 {
           property = ...;
       };
   };
   child-node-2 {
       cn-2-property = ...;
   };

}; </syntaxhighlight>

The generic property types for DTS files were used to create specific sets of properties for both the GRL and Policy DTS files.

GRL Nodes & Properties[edit]

Any node with a resource range property in the GRL will be identified as a resource for management by RM. The DTS root node should not be assigned a resource range.

GRL resource node format:
<syntaxhighlight lang='c'>

   /* Single resource node */
   resource-name {
       resource-range = ...;
       /* Optional */
       ns-assignment = ...;
       /* Optional */
       linux-dtb-alias = ...;
   };
   /* Resource nodes can be grouped as long as the grouping
    * node does not have a resource-range property */
   resource-group {
       /* Optional */
       ns-assignment = ...;
       resource-name1 {
           resource-range = ...;
           /* Optional */
           ns-assignment = ...;
           /* Optional */
           linux-dtb-alias = ...;
       };
       resource-name2 {
           resource-range = ...;
           /* Optional */
           ns-assignment = ...;
           /* Optional */
           linux-dtb-alias = ...;
       };
       resource-name3 {
           resource-range = ...;
           /* Optional */
           ns-assignment = ...;
           /* Optional */
           linux-dtb-alias = ...;
       };
   };

</syntaxhighlight>

The GRL node properties, their formats, and their purposes:

  • resource-group - Format: single string with no spaces. Identifies a resource group. The resource group will not be stored by RM; its main purpose is to allow readability in the GRL file. NameServer assignments can be made from the group for any mappings that don't apply to any specific resources. Linux DTB alias paths are not valid from resource groups.
  • resource-name - Format: single string with no spaces. Identifies the resource that RM will manage via an allocator. To apply permissions to this resource the policy must contain a node with the same resource-name. Service requests referencing this resource must provide a string matching resource-name as part of the request.
  • resource-range - Format: <base-value length-value>. Defines the absolute range of values for the resource in base+length format. Bases and lengths can be specified in decimal or hex format. A comma-separated list of multiple base+length pairs can be specified, but the pair values cannot overlap; allocator initialization will fail if any of the pairs overlap.
  • ns-assignment - Format: "String_To_Associate", <base-value length-value>. Defines a string-to-resource-value mapping to be stored in the RM NameServer. A comma-separated list of multiple NameServer associations can be specified.
  • linux-dtb-alias - Format: "Space separated Linux DTB node path", <num-vals base-offset len-offset>. Defines the path to an associated resource in the Linux DTB for automatic Linux kernel resource reservation. RM cannot rely on the Linux DTB and the GRL defining a resource in the same format; the linux-dtb-alias property associates a Linux DTB resource with a resource defined by the GRL and automatically marks the resource as allocated to the Linux kernel.
    • "Space separated string" - Specifies the node-path to an alias resource in the Linux DTB. The string words match the node names in the path to the alias resource. The last word in the string is the property name that corresponds to the alias resource's value parameters. The complete path need not be specified, but the node-path must be exact starting with the first node specified.
    • num-vals - Can have a value of 1 or 2. If 1, the alias resource has just a base value. If 2, the alias resource has a base and a length value.
    • base-offset - Specifies the offset to the alias resource's base value within the property. The alias resource's property may contain multiple values.
    • len-offset - [Only applicable when num-vals = 2] Specifies the offset to the alias resource's length value within the property. The alias resource's property may contain multiple values.
Examples of each GRL property:
resource-group

<syntaxhighlight lang='c'>

   /* Group the system's foo resources */
   foos {
       resource-foo {
           resource-range = ...;
       };
       resource-bar {
           resource-range = ...;
       };
       resource-foobar {
           resource-range = ...;
       };
   };

</syntaxhighlight>

resource-name

<syntaxhighlight lang='c'>

   /* Define the system's "resource-foo" resource */
   resource-foo {
       resource-range = ...;
       ...
   };

</syntaxhighlight>

resource-range

<syntaxhighlight lang='c'> resource-foo {

   resource-range = <0 25>;

}; </syntaxhighlight>
<syntaxhighlight lang='c'> resource-foo {

   resource-range = <5     100>,
                    <200   500>,
                    <1000 2000>;

}; </syntaxhighlight>

ns-assignment

<syntaxhighlight lang='c'> resource-foo {

   resource-range = ...;
   ns-assignment = "Important_Resource", <5 1>;

}; </syntaxhighlight>
<syntaxhighlight lang='c'> resource-foo {

   resource-range = ...;
   ns-assignment = "Foo's_Resource",    <6 1>,
                   "Bar's_Resource",    <7 1>,
                   "Foobar's_Resource", <8 1>;

}; </syntaxhighlight>

linux-dtb-alias

Example Linux DTB: <syntaxhighlight lang='c'>
/dts-v1/;
/ {

  model = "Texas Instruments EVM";
  compatible = "ti,evm";
  #address-cells = <1>;
  #size-cells = <1>;
   memory {
      device_type = "memory";
      reg = <0x80000000 0x8000000>;
   };
  soc6614@2000000 {
     ...
     hwqueue0: hwqueue@2a00000 {
        compatible = "ti,keystone-hwqueue";
        ...
        qmgrs {
           ...
        };
        queues {
           general {
              values = <4000 64>;
           };
           infradma {
              values = <800 12>;
              reserved;
           };
           accumulator-low-0 {
              values = <0 32>;
              // pdsp-id, channel, entries, pacing mode, latency
              accumulator = <0 32 8 2 0>;
              irq-base = <363>;
              multi-queue;
              reserved;
           };
           accumulator-low-1 {
              values = <32 32>;
              // pdsp-id, channel, entries, pacing mode, latency
              accumulator = <0 33 8 2 0>;
              irq-base = <364>;
              multi-queue;
           };
           accumulator-high {
              values = <728 8>;
              // pdsp-id, channel, entries, pacing mode, latency
              accumulator = <0 20 8 2 0>;
              irq-base = <150>;
              reserved;
           };
           ...
        };
        regions {
           ...
        };
     };
     ...
  };

}; </syntaxhighlight>

Example mapping to an alias resource in the Linux DTB:
RM will create the "accumulator-ch" resource with values 0-47. Alias resources found at the linux-dtb-alias paths will be automatically reserved for the Linux kernel within the "accumulator-ch" allocator. <syntaxhighlight lang='c'> accumulator-ch {

   resource-range = <0 48>;
   /* Extract the accumulator channels which
    * just have a base value in the Linux DTB */
   linux-dtb-alias = "hwqueue@2a00000 queues accumulator-low-0 accumulator", <1 1>,
                     "hwqueue@2a00000 queues accumulator-low-1 accumulator", <1 1>,
                     "hwqueue@2a00000 queues accumulator-high accumulator", <1 1>;

}; </syntaxhighlight>
RM will create the "infra-queue" resource with values 800-831. Alias resources found at the linux-dtb-alias path will be automatically reserved for the Linux kernel within the "infra-queue" allocator. <syntaxhighlight lang='c'> infra-queue {

   resource-range = <800 32>;
   /* Extract the infrastructure DMA channels which
    * have a base+length value in the Linux DTB */
   linux-dtb-alias = "hwqueue@2a00000 queues infradma values", <2 0 1>;

}; </syntaxhighlight>

Example: fruit-GRL.dts[edit]

<syntaxhighlight lang='c'> /dts-v1/;

/ {

   /* Device Resource Definitions */
   fruits {
       apples {
           /* 10 apples in system */
           resource-range = <0 10>;
       };
       oranges {
           /* 25 oranges in system */
           resource-range = <0 25>;
       };
       bananas {
           /* 15 bananas in system */
           resource-range = <0 15>;
           ns-assignment = "Banana_for_lunch", <10 1>;
       };
   };

}; </syntaxhighlight>

Policy Nodes & Properties[edit]

Policy valid instance node and resource node format:
<syntaxhighlight lang='c'>

   /* Define the RM instances that can be assigned resources */
   valid-instances = ...;
   /* Single resource node */
   resource-name {
       assignments = ...;
       /* Optional */
       allocation-alignment = ...;
   };
   /* Resource policy nodes can be grouped */
   resource-group {
       resource-name1 {
           assignments = ...;
           /* Optional */
           allocation-alignment = ...;
       };
       resource-name2 {
           assignments = ...;
           /* Optional */
           allocation-alignment = ...;
       };
       resource-name3 {
           assignments = ...;
           /* Optional */
           allocation-alignment = ...;
       };
   };

</syntaxhighlight>

The policy node properties, their formats, and their purposes:

  • valid-instances - Format: "RM-Instance-Name". A list of RM instance name strings that are identified as valid for the allocation specifications made within the policy DTS file. The instance names must match exactly the names assigned to RM instances at their initialization. RM requests will be denied for any RM instance with a name that does not match any of the names in the valid instance list. The policy will be declared invalid if an instance name is specified within an assignment property that does not match any of the instance names in the valid instance list.
  • resource-group - Format: single string with no spaces. Identifies a resource group.
  • resource-name - Format: single string with no spaces. Identifies the resource that the assignment specifications apply to. RM will return an error specifying the policy is invalid if a resource-name does not match any of the resource allocators, i.e. the resource-name does not correspond to a resource-name node defined in the GRL.
  • assignments - Format: <base-value length-value>, "permission assignment string". Defines the allocation permissions for a resource range provided in base+length format. Bases and lengths can be specified in decimal or hex format. A comma-separated list of multiple resource assignments can be specified, but the base+length values cannot overlap.

The "permissions assignment string" consists of the following: "Permission_bits = (RM_instances) & Permission_bits = (RM_instances) & ..."

  • Permission_bits - list of characters that represent the permissions assigned to the RM_instances for the resource range specified by the base+length values. Permission bits can be specified in any order and can be space separated. Possible permissions include:
    • 'i' - Assigns allocation for initialization permission to the instances in the RM_instances list
    • 'u' - Assigns allocation for usage permission to the instances in the (RM_instances) list
    • 'x' - Assigns exclusive allocation permissions to the instances in the (RM_instances) list. Exclusive permissions for an instance entail a resource allocated to an instance that has exclusive permissions for that resource cannot be allocated to another instance. Put another way, the resource cannot be shared amongst multiple instances if an instance with exclusive permissions has been allocated the resource.
    • 's' - Allows shared Linux permissions to the instances in the (RM_instances) list. Resources that are automatically reserved via the Linux DTB for the kernel are by default not allowed to be shared. If an instance(s) are assigned Linux shared permissions they can be allocated a resource from the base+length range that has already been allocated to the Linux kernel.
  • '=' Operator - The equivalence operator signifies the completion of the permission_bits specification and ties the permission bits to a list of (RM_instances). The equivalence operator can be on the left or the right side of the (RM_instances) list. However, no more than one equivalence can be made per (RM_instances). RM will declare the policy as invalid if more than one equivalence operator is used per (RM_instances) list.
    • Empty equivalence - If the equivalence operator equates (RM_instances) to no permission character the base+length range will not be allocatable to any RM instance.
  • (RM_instances) - A space separated list of RM instance names for which the permission bits should be applied for the base+length resource range. Any RM instance name specified must match a name specified in the valid-instances list at the top of the policy. RM will declare the policy invalid if any of the RM Instance names differ.
    • (*) - The '*' operator can be used to specify the permissions bits are valid for ALL instances in the valid-instance list
  • '&' Operator - The and operator allows multiple permission assignments per resource base+length range. The instances within the (RM_instances) lists must be mutually exclusive. A RM instance cannot appear more than once if multiple permission assignments are made for a resource base+length range.
  • allocation-alignment - Format: <alignment-value>. Defines an alignment value in decimal or hexadecimal to be used by RM for allocation requests that have an unspecified base and unspecified alignment. Only one alignment per resource node can be specified.
Examples of each policy property:
valid-instances

<syntaxhighlight lang='c'>

   /* Define the RM instances that are valid for the policy */
   valid-instances = "RM_foo_Server",
                     "RM_foo_Client",
                     "RM_bar_Client";

</syntaxhighlight>

resource-group

<syntaxhighlight lang='c'>

   /* Group the system's foo resources */
   foos {
       resource-foo {
           assignments = ...;
       };
       resource-bar {
           assignments = ...;
       };
       resource-foobar {
           assignments = ...;
       };
   };

</syntaxhighlight>

resource-name

<syntaxhighlight lang='c'>

   /* Define "resource-foo"'s allocation specifications */
   resource-foo {
       assignments = ...;
       ...
   };

</syntaxhighlight>

assignments

<syntaxhighlight lang='c'>

   valid-instances = "Server",
                     "Client0",
                     "Client1";

resource-foo {

   /* All instances get init and use permissions */
   assignments = <0  25>, "iu = (*)",
                 <25 25>, "(*) = u i";

};

resource-bar {

   /* 0  - 4  exclusive for Server, can be shared by clients
    *         if not allocated to Server
    * 5  - 9  cannot be allocated to anyone
    * 10 - 14 shared between Client0 and Linux
    * 15 - 19 can be allocated for initialization by anyone and
    *         shared with any instance */
   assignments = <0  5>, "u i x = (Server) & u i = (Client0 Client1)",
                 <5  5>, "(*)",
                 <10 5>, "(Client0) = ius",
                 <15 5>, "i = (Server Client0 Client1)";

};

/* Invalid permission strings

* 0-4 more than one equivalence operator
* 5   more than one equivalence operator
* 6   Instance name not in valid-instances list
* 7   invalid permission character */

resource-foobar {

   assignments = <0 5>, "u = i = (Server)",
                 <5 1>, "iu = (Server) = x",
                 <6 1>, "iux = (My_Server)",
                 <7 1>, "iut = (Server)";

}; </syntaxhighlight>

allocation-alignment

<syntaxhighlight lang='c'> resource-foo {

   assignments = ...;
   allocation-alignment = <16>;

}; </syntaxhighlight>

Example: fruit-policy.dts[edit]

<syntaxhighlight lang='c'> /dts-v1/;

/ {

   /* Define the valid instances */
   valid-instances = "Server",
                     "Client0",
                     "Client1";
   /* Specify who receives the fruit */
   fruits {
       apples {
            /* Everyone shares the apples */
            assignments = <0 10>, "iu = (*)";
       };
       oranges {
           /* First 10 oranges can only be shared between the clients
            * Last 15 oranges can be taken by all instances but can only be shared between the clients */
           assignments = <0  10>, "iu = (Client0 Client1)",
                         <10 15>, "iux = (Server) & iu = (Client0 Client1)";
           /* Give out every other orange if an instance doesn't know which one it wants */
           allocation-alignment = <2>;
       };
       bananas {
           /* Each instance gets 5 bananas.  Also, in this odd world a Linux kernel has taken bananas
            * 5 and 6 but is willing to share them with the Server. */
           assignments = <0  5>, "(Client0) = xiu",
                         <5  2>, "(Server)  = siu",
                         <7  3>, "(Server)  = xiu",
                         <10 5>, "(Client1) = xiu";
       };
   };

}; </syntaxhighlight>

Example: fruit-policy-static.dts[edit]

<syntaxhighlight lang='c'> /dts-v1/;

/ {

   /* Define the valid instances */
   valid-instances = "Client1";
   /* Statically assign Client1 some bananas before the RM system
    * is brought up */
   bananas {
       /* Statically allocated resources must align with global policy */
       assignments = <10 2>, "xui = (Client1)";
   };

}; </syntaxhighlight>

GRL/Policy Conversion for Input to RM[edit]

Before a DTS file can be included in an application and passed to RM instance initialization, it must be converted to a DTB binary using the installed DTC utility. Please see the "Defining the GRL & Policy" section for instructions on where to find and how to install DTC.

DTS to DTB Conversion Instructions

  1. Make sure DTC has been installed in a Unix environment
  2. Copy the .dts files to a directory accessible to the built dtc and ftdump utilities.
  3. Convert the .dts files to .dtb binary files:
    <syntaxhighlight lang='bash'>$ dtc -I dts -O dtb -o <output file name> <input file name></syntaxhighlight>
  4. If a syntax error occurs during conversion the dtc tool will provide a file row and column address where the syntax error likely occurred. The address will be in the form row.startCol-endCol as depicted:
    RM_conversion_syntax_error.jpg
  5. If the conversion succeeds, the contents of the .dtb binary can be checked using the ftdump utility. Strings will be displayed as hex bytes:
    <syntaxhighlight lang='bash'>$ ./ftdump <.dtb file></syntaxhighlight>

After conversion there are two possible means to include the DTBs in an application:

  • File Pointer
  • C const Byte Array

Inclusion via File Pointer
1. Place the DTB files in a file system accessible to the application during runtime
2. mmap or fopen the DTB file. In the case of fopen, the data in the DTB file must be copied to a local data buffer prior to passing the DTB data to a RM instance. A method for doing this is shown here:

<syntaxhighlight lang='c'>
/* Open the GRL and policy DTB files */
grlFp = fopen("...\\grl.dtb", "rb");
policyFp = fopen("...\\policy.dtb", "rb");

/* Get the size of the GRL and policy */
fseek(grlFp, 0, SEEK_END);
grlFileSize = ftell(grlFp);
rewind(grlFp);

fseek(policyFp, 0, SEEK_END);
policyFileSize = ftell(policyFp);
rewind(policyFp);

/* Allocate buffers to hold the GRL and policy */
grl = Osal_rmMalloc(grlFileSize);
policy = Osal_rmMalloc(policyFileSize);

/* Read the file data into the allocated buffers */
readSize = fread(grl, 1, grlFileSize, grlFp);
System_printf("Read size compared to file size: %d : %d\n", readSize, grlFileSize);
readSize = fread(policy, 1, policyFileSize, policyFp);
System_printf("Read size compared to file size: %d : %d\n", readSize, policyFileSize);

/* Create the RM Server instance */
rmInitCfg.instName = "Server";
rmInitCfg.instType = Rm_instType_SERVER;
rmInitCfg.instCfg.serverCfg.globalResourceList = (void *)grl;
rmInitCfg.instCfg.serverCfg.globalPolicy = (void *)policy;
/* Get the RM Server handle */
rmServerHandle = Rm_init(&rmInitCfg, &result);
</syntaxhighlight>

Inclusion via C const Byte Array

In this case, the cify shell script provided with RM is used to convert the DTB binary to a C source file containing a const C byte array. The cify script generates code to align and pad the byte array to a specified byte alignment.

  1. Copy the cify.sh shell script to the Unix environment directory in which the generated .dtb files are located.
  2. Convert the shell script to Unix format:
    <syntaxhighlight lang='bash'>$ dos2unix cify.sh</syntaxhighlight>
  3. Convert any .dtb binaries to C const byte arrays. The usage menu can be printed by running the cify script without any input parameters:
    <syntaxhighlight lang='bash'>$ ./cify.sh <input .dtb file> <output .c file> <byte array name> <linker data section name> <byte alignment></syntaxhighlight>
  4. Extern the byte arrays into the application source and compile the generated C source files into the application
Example: Converting fruit GRL and Policies[edit]
  • Convert fruit-GRL.dts to .dtb

<syntaxhighlight lang='bash'>$ dtc -I dts -O dtb -o fruit-GRL.dtb fruit-GRL.dts</syntaxhighlight>

  • Convert fruit-policy.dts to .dtb

<syntaxhighlight lang='bash'>$ dtc -I dts -O dtb -o fruit-policy.dtb fruit-policy.dts</syntaxhighlight>

  • Convert fruit-policy-static.dts to .dtb

<syntaxhighlight lang='bash'>$ dtc -I dts -O dtb -o fruit-policy-static.dtb fruit-policy-static.dts</syntaxhighlight>

  • Dump fruit-GRL.dtb (Not required - just showing the output)

<syntaxhighlight lang='bash'>$ ./ftdump fruit-GRL.dtb</syntaxhighlight> RM_dump_fruit_GRL.jpg

  • Convert fruit-GRL.dtb to fruit-GRL.c aligned to 128 bytes

<syntaxhighlight lang='bash'>$ ./cify.sh fruit-GRL.dtb fruit-GRL.c grl grlSect 128</syntaxhighlight> RM_fruit_GRL.jpg

  • Convert fruit-policy.dtb to fruit-policy.c

<syntaxhighlight lang='bash'>$ ./cify.sh fruit-policy.dtb fruit-policy.c policy policySect 128</syntaxhighlight> RM_fruit_policy.jpg

  • Convert fruit-policy-static.dtb to fruit-policy-static.c

<syntaxhighlight lang='bash'>$ ./cify.sh fruit-policy-static.dtb fruit-policy-static.c policyStatic policySect 128</syntaxhighlight> RM_fruit_policy_static.jpg

RM Instance Initialization & Configuration[edit]

Source code to add RM instances to the system application can be written now that the resource list has been defined. The first step in this process is identifying how many RM instances are needed and the contexts in which they will execute. At a minimum, RM needs a Server instance to operate. If the system application executes from a single thread on a single core, only a Server is needed. However, an application in need of resource management is more than likely multi-threaded and multi-core. For such systems it is best to place the RM Server on what can be considered the centralized thread or core. For example, on TI Keystone devices it is best to place the RM Server on the ARM, running from Linux user-space, or, if the application is DSP-only, on core 0, since most applications treat these contexts as the source of most control.

Once the RM Server location has been established, identify where RM Clients are needed. A Client is required in any system context that needs different allocation permissions than the Server or any other Client. This can boil down to one Client per core, one Client per process/thread, or one Client per software module running within the same thread. The number of RM instances needed is based on the system topology and the resource management granularity required.

There are five major steps to initializing the RM instances:

  • Populate the RM OSAL functions based on the placement of the RM instances. The answers to the following questions, among others, should be considered when populating the RM OSAL APIs:
    • Will RM instance memory be allocated from a shared memory?
    • Will RM instance memory be allocated from a cached memory?
    • Will a single instance be shared across multiple tasks/threads/processes?
    • Is internal blocking required within a RM instance because the application cannot support receiving service responses at a later time via callback function? (Typical for LLDs)
  • Set each instance's initialization parameters and create the RM instances
  • Open the instance service handle
    • In some cases a RM instance will need to allocate resources before the RM infrastructure is connected via transports. For example, a shared resource may need to be allocated in order to connect two RM instances (a chicken-and-egg problem). To handle these cases the RM instance can be provided a static policy at initialization (RM Servers simply reference the regular policy and use the allocators, since a Server has access to everything) in order to allocate a minimal set of resources to get the system up and running before the RM infrastructure is connected.
  • Bring up the application transport code required to connect each RM instance
  • Register the transports between RM instances with each instance via the transport API

Instance Initialization[edit]

RM instance initialization is a fairly straightforward process. The key aspect of instance initialization concerns RM's name synchronization requirements. Keep in mind that each RM instance must be assigned a name at initialization that is distinct from all other instance names. Also, the names assigned to the RM instances must be present in the valid-instances lists and the permission strings of the policy files. If these requirements are not fulfilled, RM instances will fail initialization or be unable to allocate resources.

Standard instance initialization parameters are well documented in the RM API Documentation found in pdk_<device>_w_xx_yy_zz/packages/ti/drv/rm/docs/rmDocs.chm or, if the .chm doesn't exist, pdk_<device>_w_xx_yy_zz/packages/ti/drv/rm/docs/doxygen/html/index.html

Server Initialization[edit]

One Server should be initialized per system. All resource management data structures including the NameServer are tracked and modified by the Server instance. Because of this, all service requests will be processed by the Server. Clients will always forward any request to the Server instance for processing.

If multiple Servers are desired the resources managed by each server MUST be mutually exclusive. Otherwise, proper system resource management cannot be guaranteed.

Standard Server initialization: <syntaxhighlight lang='c'>

   /* Server instance name (must match with policy valid-instances list) */
   char rmServerName[RM_NAME_MAX_CHARS] = "Server";
   ...
   /* Initialize the Server */
   rmInitCfg.instName = rmServerName;
   rmInitCfg.instType = Rm_instType_SERVER;
   rmInitCfg.instCfg.serverCfg.globalResourceList = (void *)rmGRL;
   rmInitCfg.instCfg.serverCfg.linuxDtb = (void *)rmLinuxDtb;
   rmInitCfg.instCfg.serverCfg.globalPolicy = (void *)rmGlobalPolicy;
   rmServerHandle = Rm_init(&rmInitCfg, &result);

</syntaxhighlight>

Configuration breakdown:

  • <syntaxhighlight lang='c'>rmInitCfg.instName = rmServerName;</syntaxhighlight> - Pointer to Server's instance name. Must be in policy valid-instances list.
  • <syntaxhighlight lang='c'>rmInitCfg.instType = Rm_instType_SERVER;</syntaxhighlight> - Declare Server instance type.
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.serverCfg.globalResourceList = (void *)rmGRL;</syntaxhighlight> - Pointer to GRL dtb. The GRL can be a linked C const byte array or read from a file. The Server will process the GRL and create an allocator for each resource specified within. If the Linux DTB is provided it will be parsed for resources to automatically reserve for the Linux kernel.
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.serverCfg.linuxDtb = (void *)rmLinuxDtb;</syntaxhighlight> - Pointer to the Linux dtb. This is an optional parameter and only useful if Linux is part of the system. In the latter case, the Linux DTB can be provided to the Server at initialization to automatically reserve resource for the kernel.
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.serverCfg.globalPolicy = (void *)rmGlobalPolicy;</syntaxhighlight> - Pointer to the RM infrastructure-wide policy. The policy can be a linked C const byte array or read from a file. The policy provided to the Server must contain the resource allocation specifications for all RM instances present in the system.
Client Delegate Initialization[edit]

Feature not Complete - At the time of writing this user guide the CD features were not complete. This section will be updated once full CD functionality is available.

Client Initialization[edit]

There is no limit to the number of Clients that can be defined. The only caveat is that no two Clients may have the same instance name.

Clients contain no resource management data structures. Their main purpose is to act as a permission end point within the RM infrastructure. Any service request received by a Client instance will be forwarded to the Server. The Server will return the response, which the Client must provide back to the entity that made the request.

Clients can perform static allocations if a static policy is provided at initialization. Static allocations will be fulfilled by the Client until the Client's transport is registered with a Server or CD instance via the transport API. Once transport registration occurs, static allocations cease and any requests are forwarded to the Server. Any static allocations that occurred will be forwarded to the Server upon the first service request made after transport configuration. The Server validates the static allocations and provides the responses back to the Client. If any of the static allocations fail validation against the Server's system-wide policy, the Client instance enters a locked state. The Client cannot service requests while in the locked state, and the locked state cannot be exited once entered. The system must be restarted with a new, valid static policy that is a subset of the system-wide policy.

Standard Client initialization: <syntaxhighlight lang='c'>

   /* Client instance name (must match with policy valid-instances list) */
   char rmClientName[RM_NAME_MAX_CHARS] = "Client";
   ...
   /* Initialize a Client */
   rmInitCfg.instName = rmClientName;
   rmInitCfg.instType = Rm_instType_CLIENT;
   rmInitCfg.instCfg.clientCfg.staticPolicy = (void *)rmStaticPolicy;
   rmClientHandle = Rm_init(&rmInitCfg, &result);

</syntaxhighlight>

Configuration breakdown:

  • <syntaxhighlight lang='c'>rmInitCfg.instName = rmClientName;</syntaxhighlight> - Pointer to Client's instance name. Must be in policy (Server policy and, if applicable, static policy provided to this Client) valid-instances list.
  • <syntaxhighlight lang='c'>rmInitCfg.instType = Rm_instType_CLIENT;</syntaxhighlight> - Declare Client instance type
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.clientCfg.staticPolicy = (void *)rmStaticPolicy;</syntaxhighlight> - Pointer to the Client's static policy. This is an optional parameter used if the Client must allocate some specific resources to a system entity prior the system being able to connect the Client to the Server. The static policy can be a linked C const byte array or read from a file. The static policy must be a subset of the system-wide policy given to the Server.
Shared Server/Client Initialization[edit]

A Shared Server with Shared Clients is a special type of RM infrastructure setup that does not require the RM instances to be connected via the transport API. The "transport" between the Shared Clients and the Shared Server is essentially direct access by the Shared Clients to the Shared Server's resource management data structures. This requires that all RM Shared Server data structures be allocated from shared memory via the OSAL API. The Shared Server/Client architecture is useful in cases where the application cannot tolerate the blocking operations required by RM instances to send service requests to the Server and receive the completed responses. The downside of the Shared Server/Client architecture is that the Shared instances cannot be connected to any standard RM instance via the transport API. All "communication" between the Shared Server and Shared Clients is assumed to take place through the resource management data structures located in shared memory.

The rmK2H/KC66BiosSharedTestProject delivered with PDK provides an example of how to initialize and use the Shared Server/Client RM infrastructure.

The following graphic gives a high-level view of how the Shared Server/Client architecture operates: RM_shared_inst.jpg

Standard Shared Server initialization: <syntaxhighlight lang='c'>

   /* Server instance name (must match with policy valid-instances list) */
   char rmServerName[RM_NAME_MAX_CHARS] = "Server";
   ...
   /* Initialize the Server */
   rmInitCfg.instName = rmServerName;
   rmInitCfg.instType = Rm_instType_SHARED_SERVER;
   rmInitCfg.instCfg.serverCfg.globalResourceList = (void *)rmGRL;
   rmInitCfg.instCfg.serverCfg.linuxDtb = (void *)rmLinuxDtb;
   rmInitCfg.instCfg.serverCfg.globalPolicy = (void *)rmGlobalPolicy;
   /* RM Shared Server handle returned will be from shared memory */
   rmSharedServerHandle = Rm_init(&rmInitCfg, &result);
   /* Writeback Shared Server handle for Shared Clients - Application must make sure writeback
    * is aligned and padded to cache line. */
   Osal_rmEndMemAccess((void *)&rmSharedServerHandle, sizeof(Rm_Handle));

</syntaxhighlight>

Configuration breakdown:

  • <syntaxhighlight lang='c'>rmInitCfg.instName = rmServerName;</syntaxhighlight> - Pointer to Server's instance name. Must be in policy valid-instances list.
  • <syntaxhighlight lang='c'>rmInitCfg.instType = Rm_instType_SHARED_SERVER;</syntaxhighlight> - Declare Shared Server instance type.
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.serverCfg.globalResourceList = (void *)rmGRL;</syntaxhighlight> - Pointer to GRL dtb. The GRL can be a linked C const byte array or read from a file. The GRL can be located in a local or shared memory area since it will only be accessed once during Shared Server initialization. The Server will process the GRL and create an allocator for each resource specified within. If the Linux DTB is provided it will be parsed for resources to automatically reserve for the Linux kernel.
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.serverCfg.linuxDtb = (void *)rmLinuxDtb;</syntaxhighlight> - Pointer to the Linux dtb. This is an optional parameter and only useful if Linux is part of the system. In the latter case, the Linux DTB can be provided to the Server at initialization to automatically reserve resource for the kernel.
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.serverCfg.globalPolicy = (void *)rmGlobalPolicy;</syntaxhighlight> - Pointer to the RM infrastructure-wide policy. The policy can be a linked C const byte array or read from a file. The policy can be located in a local or shared memory area since it will only be accessed once during Shared Server initialization. The Shared Server will malloc a memory block from shared memory and copy the policy into malloc'd memory. The policy provided to the Server must contain the resource allocation specifications for all RM instances present in the system.
  • <syntaxhighlight lang='c'>Osal_rmEndMemAccess((void *)&rmSharedServerHandle, sizeof(Rm_Handle));</syntaxhighlight> - The Shared Server handle will be located in shared memory. It must be written back to memory if cache is enabled so that the Shared Clients can access the Shared Server and its resource data structures.

Standard Shared Client initialization: <syntaxhighlight lang='c'>

   /* Client instance name (must match with policy valid-instances list) */
   char rmClientName[RM_NAME_MAX_CHARS] = "Client";
   ...
   /* Wait for Shared Server handle to be valid */
   do {
       Osal_rmBeginMemAccess((void *)&rmSharedServerHandle, sizeof(Rm_Handle));
    } while (!rmSharedServerHandle);
   /* Initialize a Client */
   rmInitCfg.instName = rmClientName;
   rmInitCfg.instType = Rm_instType_SHARED_CLIENT;
   rmInitCfg.instCfg.sharedClientCfg.sharedServerHandle = (void *)rmSharedServerHandle;
   rmClientHandle = Rm_init(&rmInitCfg, &result);

</syntaxhighlight>

Configuration breakdown:

  • <syntaxhighlight lang='c'>Osal_rmBeginMemAccess((void *)&rmSharedServerHandle, sizeof(Rm_Handle));</syntaxhighlight> - The Shared Server handle must be invalidated in local memory if caching is enabled. Once the Shared Server handle is non-null it has been created and written back.
  • <syntaxhighlight lang='c'>rmInitCfg.instName = rmClientName;</syntaxhighlight> - Pointer to Client's instance name. Must be in policy (Server policy and, if applicable, static policy provided to this Client) valid-instances list.
  • <syntaxhighlight lang='c'>rmInitCfg.instType = Rm_instType_SHARED_CLIENT;</syntaxhighlight> - Declare Shared Client instance type
  • <syntaxhighlight lang='c'>rmInitCfg.instCfg.sharedClientCfg.sharedServerHandle = (void *)rmSharedServerHandle;</syntaxhighlight> - Provide the Shared Server handle to the Shared Client instance. When a service request is received on the Shared Client it will remap itself to the Shared Server's instance data so that it can access the resource management data structures located in shared memory. Service requests made through the Shared Client will be validated against the policy using the Shared Client's instance name.

RM Transport Configuration[edit]

The system application must connect communicating RM instances with application created transport paths. The transport paths between instances must be registered with the RM instances using the Transport API. The type of transport path used and the means by which it is created is completely up to the system application.

RM Instance Transport Requirements[edit]

RM will return an error if the following rules are not followed when registering transports with an instance:

  • Server Instance
    • Cannot register a transport whose remote instance is another Server
    • Can register an unlimited number of transports whose remote instances are CDs or Clients
  • CD Instance
    • Cannot register a transport whose remote instance is another CD
    • Cannot register more than one transport whose remote instance is a Server
    • Can register an unlimited number of transports whose remote instances are Clients
  • Client Instance
    • Cannot register a transport whose remote instance is another Client
    • Cannot register more than one transport whose remote instance is a Server or CD
      • If transport to CD registered cannot register transport to Server
      • If transport to Server registered cannot register transport to CD
Transport Registration[edit]

The system application uses the Rm_transportRegister API to register application created transports to remote instances. The Rm_TransportCfg structure provides specific details about the transport to the RM instance. The details are used by the RM instances to route internal RM messages properly. If a transport is successfully registered a transport handle will be returned to the application. The application can use this handle to tie specific transport pipes to specific RM instances. Following are some configuration details for each Rm_TransportCfg parameter:

<syntaxhighlight lang='c'> typedef struct {

   Rm_Packet *(*rmAllocPkt)(Rm_AppTransportHandle appTransport, uint32_t pktSize,
                            Rm_PacketHandle *pktHandle);
   int32_t (*rmSendPkt)(Rm_AppTransportHandle appTransport, Rm_PacketHandle pktHandle);

} Rm_TransportCallouts;

typedef struct {

   Rm_Handle              rmHandle;
   Rm_AppTransportHandle  appTransportHandle;
   Rm_InstType            remoteInstType;
   const char            *remoteInstName;
   Rm_TransportCallouts   transportCallouts;

} Rm_TransportCfg; </syntaxhighlight>

  • <syntaxhighlight lang='c'>Rm_Handle rmHandle;</syntaxhighlight>
    • Instance handle for the instance the transport should be registered with
  • <syntaxhighlight lang='c'>Rm_AppTransportHandle appTransportHandle;</syntaxhighlight>
    • Handle to the transport "pipe" object that will transmit a RM packet to the specified remote RM instance. The handle provided can be anything from a function pointer to a pointer to a data structure. In the multi-core RM examples this value is an IPC MessageQ receive queue that is tied to the remote RM instance. This value will be provided to the application as part of the rmSendPkt callout from RM. The application should then be able to use this value to send the Rm_Packet to the remote RM instance.
  • <syntaxhighlight lang='c'>Rm_InstType remoteInstType;</syntaxhighlight>
    • Instance type at the remote end of the transport being registered. RM instances that forward service requests will use this field to find the transport that connects to the RM Server or CD
  • <syntaxhighlight lang='c'>const char *remoteInstName;</syntaxhighlight>
    • Instance name at the remote end of the transport being registered. Used internally for proper Rm_Packet routing and service request handling
  • <syntaxhighlight lang='c'>Rm_TransportCallouts transportCallouts;</syntaxhighlight>
    • Each registered transport can have a different set of alloc and send functions registered with the RM instance. This is needed if not all transports between RM instances are the same. For example, consider a CD instance on a DSP core connected to a Server on the ARM and Clients on other DSP cores. The application would register a DSP-to-ARM transport with the CD for the connection to the Server, while for the Client connections the application would register a DSP-to-DSP transport. The same DSP-to-DSP transport could be registered for each Client as long as the application transport send function can route the Rm_Packets based on the Rm_AppTransportHandle provided by the instance.
      • <syntaxhighlight lang='c'>Rm_Packet *(*rmAllocPkt)(Rm_AppTransportHandle appTransport, uint32_t pktSize, Rm_PacketHandle *pktHandle);</syntaxhighlight>
        • RM instance callout function used to get a transport-specific buffer for the Rm_Packet to be sent. The pointer to the data buffer where RM should place the Rm_Packet is returned as a Rm_Packet pointer return value. If the data buffer is part of a larger transport packet data structure, the pointer to the start of the transport packet data structure is returned via the Rm_PacketHandle pointer argument. The RM instance will not modify anything using the Rm_PacketHandle pointer. The instance will populate the internal RM packet data into the data buffer pointed to by the returned Rm_Packet pointer. The following image depicts the two use cases that must be handled by the application-supplied transport allocation function based on the transport's packet type: RM_packet_diff.jpg
      • <syntaxhighlight lang='c'>int32_t (*rmSendPkt)(Rm_AppTransportHandle appTransport, Rm_PacketHandle pktHandle);</syntaxhighlight>
        • RM instance callout function used to send the transport packet containing the RM instance data to the remote RM instance. The Rm_AppTransportHandle should be used by the application to route the packet to the correct transport "pipe". The Rm_PacketHandle is a pointer to the application transport packet.
Providing Received Packets to RM[edit]

The system application must receive packets over the transports, extract the buffer containing the Rm_Packet, and pass a pointer to it to the RM instance through the RM transport receive API:

  • <syntaxhighlight lang='c'>int32_t Rm_receivePacket(Rm_TransportHandle transportHandle, const Rm_Packet *pkt);</syntaxhighlight>
    • When the application receives a packet it will provide the packet to RM along with a Rm_TransportHandle. The Rm_TransportHandle should map to the registered transport whose remote instance is the instance from which the packet was received.

RM instances assume all packet free operations will be done by the application. Therefore, after processing a received packet RM will return without attempting to free the packet. The application must free the data buffer containing the RM data and the packet which carried the data buffer.

The following image provides a visual depiction of how RM transport routing can be handled by a system application: RM_packet_routing.jpg

A good example of how RM instance transports can be handled over IPC for DSP to DSP applications is provided in pdk_<device>_w_xx_yy_zz/packages/ti/drv/rm/test/rm_transport_setup.c

Service Request/Response Mechanisms[edit]

System applications can request resource services from RM via the instance service handles. After each instance has been initialized a service handle can be opened from the instance via the Rm_serviceOpenHandle API:

<syntaxhighlight lang='c'>Rm_ServiceHandle *Rm_serviceOpenHandle(Rm_Handle rmHandle, int32_t *result);</syntaxhighlight>

The service handle returned to the system application consists of two items

  • The RM instance handle from which the service handle was opened
  • A function pointer to RM's service handler function

<syntaxhighlight lang='c'> typedef struct {

   void *rmHandle;
   void (*Rm_serviceHandler)(void *rmHandle, const Rm_ServiceReqInfo *serviceRequest,
                             Rm_ServiceRespInfo *serviceResponse);

} Rm_ServiceHandle; </syntaxhighlight>

Service requests can be made using the service handle by calling the Rm_serviceHandler function, passing the service handle's rmHandle as a parameter, as well as pointers to RM service request and service response data structures created in the application context (stack or global variable).

Standard Service Request Example <syntaxhighlight lang='c'> void func (void) {

   Rm_ServiceReqInfo  requestInfo;
   Rm_ServiceRespInfo responseInfo;
   ...
   /* Open service handle */
   serviceHandle = Rm_serviceOpenHandle(rmHandle, &result);
   memset((void *)&requestInfo, 0, sizeof(requestInfo));
   memset((void *)&responseInfo, 0, sizeof(responseInfo));
   /* Configure request */
   requestInfo.type = ...;
   ...
   serviceHandle->Rm_serviceHandler(serviceHandle->rmHandle, &requestInfo, &responseInfo);
   /* Retrieve response from responseInfo */
   /* Next request */
   memset((void *)&requestInfo, 0, sizeof(requestInfo));
   memset((void *)&responseInfo, 0, sizeof(responseInfo));
   /* Configure request */
   requestInfo.type = ...;
   ...
   serviceHandle->Rm_serviceHandler(serviceHandle->rmHandle, &requestInfo, &responseInfo);
   /* Retrieve response from responseInfo */
   ...

} </syntaxhighlight>

Service Request Parameters[edit]

Service requests are configured using the Rm_ServiceReqInfo structure. Configurations include the ability to:

  • Allocate and free specific resource values
  • Allocate a resource with an unspecified value; in other words, allocate a specific type of resource when the system does not care which value it receives
  • Allocate and free resource blocks of different sizes and alignments
  • Map and unmap resources to names using the RM NameServer
  • Retrieve resources by name via the NameServer
  • Request that RM block, not returning until the service request has been completed
  • Request that RM return immediately and provide the completed service request via an application-supplied callback function

Service Request Structure <syntaxhighlight lang='c'> typedef struct {

   Rm_ServiceType      type;
   const char         *resourceName;
   #define RM_RESOURCE_BASE_UNSPECIFIED (-1)
   int32_t             resourceBase;
   uint32_t            resourceLength;
   #define RM_RESOURCE_ALIGNMENT_UNSPECIFIED (-1)
   int32_t             resourceAlignment;
   const char         *resourceNsName;
   Rm_ServiceCallback  callback;

} Rm_ServiceReqInfo; </syntaxhighlight>

  • <syntaxhighlight lang='c'>Rm_ServiceType type;</syntaxhighlight> - The type of RM service requested. See the Service API section for more details.
  • <syntaxhighlight lang='c'>const char *resourceName;</syntaxhighlight> - Resource name affected by the request. According to the name synchronization rules the resource name is required to match a resource node name in the GRL and policy. Otherwise, RM will return an error.
  • <syntaxhighlight lang='c'>int32_t resourceBase;</syntaxhighlight> - Base value of the resource affected by the request
    • <syntaxhighlight lang='c'>#define RM_RESOURCE_BASE_UNSPECIFIED (-1)</syntaxhighlight> - Can be supplied in place of a non-negative resourceBase value. Will tell RM to find the next available resource. An unspecified base value is only valid for allocation requests.
  • <syntaxhighlight lang='c'>uint32_t resourceLength;</syntaxhighlight> - Length value of the resource affected by the request. Together the base and length will cause a resource's values from base to (base+length-1) to be affected by the request.
  • <syntaxhighlight lang='c'>int32_t resourceAlignment;</syntaxhighlight> - Alignment value of the resource affected by the request. Only valid for allocation requests when the resourceBase is UNSPECIFIED.
    • <syntaxhighlight lang='c'>#define RM_RESOURCE_ALIGNMENT_UNSPECIFIED (-1)</syntaxhighlight> - Can be supplied in place of a non-negative resourceAlignment value. RM will default to an alignment of 0 if the alignment is left UNSPECIFIED. An unspecified alignment value is only valid for allocation requests when the resourceBase is UNSPECIFIED.
  • <syntaxhighlight lang='c'>const char *resourceNsName;</syntaxhighlight>
    • For NameServer mapping requests specifies the NameServer name to map the specified resources to
    • For NameServer unmap requests specifies the NameServer name to unmap/remove from the NameServer
    • For NameServer get by name requests specifies the NameServer name to retrieve mapped resources from in the NameServer. RM will return an error if a get by name request has both a resource specified via the base and length values and a NameServer name.
  • <syntaxhighlight lang='c'>Rm_ServiceCallback callback;</syntaxhighlight> - Application's callback function for RM to use to return the completed service request to the application.
    • The RM instance will by default block until the completed service request can be returned via the call stack if the callback function pointer is left NULL. RM instance blocking is handled by the block/unblock OSAL functions.
    • RM will return a serviceId in the service response via the call stack if a callback function is provided and the RM instance needed to forward the service request to another RM instance for completion. The completed request will be returned at a later time via the provided callback function and will contain a serviceId that matches the serviceId returned by the instance via the call stack.
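The serviceId matching described above can be sketched as follows. The structure here is a reduced, hypothetical stand-in for the real Rm_ServiceRespInfo in rm.h (only a subset of fields); only the ID-matching pattern in the application is the point.

```c
#include <stdint.h>

/* Reduced stand-in for the rm.h response structure (illustration only;
 * the real Rm_ServiceRespInfo carries more fields) */
typedef struct {
    int32_t  serviceState;   /* e.g. approved, processing, denied */
    uint32_t serviceId;      /* ID assigned to the request by RM  */
    int32_t  resourceBase;
} RespInfo;

/* serviceId handed back via the call stack when the request went to
 * the PROCESSING state (RM forwarded it to another instance) */
static uint32_t pendingServiceId;

/* Record the ID from the response returned through the call stack */
void rememberPendingRequest(const RespInfo *stackResp)
{
    pendingServiceId = stackResp->serviceId;
}

/* Application callback body: match the completed response, delivered
 * later by RM, against the request we are still waiting on */
int matchesPendingRequest(const RespInfo *cbResp)
{
    return cbResp->serviceId == pendingServiceId;
}
```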

Service Response Parameters[edit]

Service Response Structure <syntaxhighlight lang='c'> typedef struct {

   Rm_Handle rmHandle;
   int32_t   serviceState;
   uint32_t  serviceId;
   char      resourceName[RM_NAME_MAX_CHARS];
   int32_t   resourceBase;
   uint32_t  resourceLength;
   #define RM_RESOURCE_NUM_OWNERS_INVALID (-1)
   int32_t   resourceNumOwners;

} Rm_ServiceRespInfo; </syntaxhighlight>

  • <syntaxhighlight lang='c'>Rm_Handle rmHandle;</syntaxhighlight> - The service response contains the RM instance handle from which the service request was originally issued. This allows the application to sort responses received via the callback function in systems where more than one RM instance runs
  • <syntaxhighlight lang='c'>int32_t serviceState;</syntaxhighlight> - The service state contains the service completion status.
    • SERVICE_APPROVED - Service request completed successfully
    • SERVICE_APPROVED_STATIC - Service request completed successfully via static policy. The service request will be validated on the Server once transports are configured
    • SERVICE_PROCESSING - Service request was forwarded to a CD or Server for completion. The response will be returned through the provided application callback function
    • Any other value > 0 - Service request denied (See rm.h)
    • Any other value < 0 - Error encountered processing the service request (See rm.h)
  • <syntaxhighlight lang='c'>uint32_t serviceId;</syntaxhighlight> - ID assigned to service request by RM
    • The ID returned by RM will never be zero.
    • IDs assigned to processing requests can be used to match responses received at a later time via the application callback function
  • <syntaxhighlight lang='c'>char resourceName[RM_NAME_MAX_CHARS];</syntaxhighlight> - Contains the resource name affected by the completed request
  • <syntaxhighlight lang='c'>int32_t resourceBase;</syntaxhighlight> - Contains the resource base value affected by the completed request
  • <syntaxhighlight lang='c'>uint32_t resourceLength;</syntaxhighlight> - Contains the resource length affected by the completed request
  • <syntaxhighlight lang='c'>int32_t resourceNumOwners;</syntaxhighlight> - If valid, returns the number of current owners for the resource specified in the service request
    • Will contain the value RM_RESOURCE_NUM_OWNERS_INVALID if the service request does not involve owner reference counts (such as NameServer services).

Blocking & Non-Blocking Requests[edit]

Please see the description of the service request callback function pointer in the Service Request Parameters section for an accurate description of how RM blocking/non-blocking operation can be configured.

RM Instance Blocking Criteria
Server
  • Will never block for service requests received through its service API
Client Delegate
  • Will not block for service requests received through its service API that can be completed using resources the CD currently owns
  • Will need to block for service requests that must be forwarded to a Server located on a different core/process/thread/task because the locally owned resources cannot be used to complete the request
Client
  • Will need to block for all service requests since no resource management data structures are managed by Clients. Blocking will not occur if the Client is local to the Server, i.e. the Client and Server can be connected over the transport API via function calls.
Shared Server & Client
  • Will never block since the Shared Server and Shared Clients can all access the resource management data structures in shared memory.

Direct & Indirect Resource Management[edit]

Service requests allow resources to be managed through direct and indirect means.

Direct management refers to RM being integrated with a software component under the hood. Beyond providing a RM instance handle to the module, resource management interactions with the module's resources are abstracted from the system. Examples of this are the QMSS, CPPI, and PA LLDs, which have integrated RM to differing degrees.

Indirect management refers to RM being used to provide a resource value (or set of values) that is then passed to a module API which actually allocates the resource with the provided values. For example, RM is not currently integrated into any module that manages semaphores. Therefore, RM resource management of semaphores would need to occur indirectly with the following steps:

  1. Add a semaphore resource to the GRL
  2. Add semaphore allocation permissions to the policy (and static policy if applicable)
  3. System creates a RM instance
  4. System issues a service request to the instance requesting a semaphore resource
  5. RM returns semaphore value based on request and policy
  6. System uses the returned value in a call to the module that manages semaphores (BIOS, CSL, etc.)

In the latter case, RM is managing which system contexts are allowed to allocate and share which resources. It's just that the resource value management is not abstracted from the system.
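The six indirect-management steps can be sketched as below. Everything here is a stand-in: requestSemaphore() plays the role of the RM service request (a real application would go through Rm_serviceHandler as shown earlier, with the decision driven by the GRL and policy), and hwSemOpen() stands in for the module that actually manages the semaphore (BIOS, CSL, etc.).

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_HW_SEM 32

/* Stand-in for RM's allocator: hand out the lowest free semaphore index.
 * In the real flow this decision is driven by the GRL and policy. */
static uint32_t semInUse;   /* bitmask of allocated semaphore indices */

int requestSemaphore(void)
{
    for (int i = 0; i < NUM_HW_SEM; i++) {
        if (!(semInUse & (1u << i))) {
            semInUse |= (1u << i);
            return i;       /* steps 4-5: RM returns only the *value* */
        }
    }
    return -1;              /* request denied: no free semaphore */
}

/* Stand-in for the module API that programs the hardware (step 6) */
void hwSemOpen(int semNum)
{
    printf("opening hardware semaphore %d\n", semNum);
}
```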
RM_direct_indirect.jpg

RM Supported LLDs[edit]

At this point in time the following LLDs have had RM support added:

  • QMSS
  • CPPI
  • PA

The breadth of resources supported for each of these LLDs varies and will be covered in each LLD's section. Each LLD remains backwards compatible with its expected operation prior to RM integration. A RM instance service handle can be registered with each existing LLD context. If a RM service handle is registered, the LLD will use RM to allocate the resources tagged as managed by RM. If no RM service handle is registered, the LLD will allocate resources in the same way it did prior to RM integration.

QMSS LLD[edit]

The RM service handle can be registered with the QMSS LLD in the following manner: <syntaxhighlight lang='c'> /* Define the system initialization core */

#define SYSINIT 0

void foo (void) {

   Qmss_StartCfg qmssStartCfg;
   ...
   /* Open RM service handle */
   rmServiceHandle = Rm_serviceOpenHandle(rmHandle, &rmResult);
   ...
   if (core == SYSINIT) {
       /* Register RM Service handle with QMSS */
       qmssGblCfgParams.qmRmServiceHandle = rmServiceHandle;
       /* Initialize Queue Manager SubSystem */
       result = Qmss_init (&qmssInitConfig, &qmssGblCfgParams);
   }
   else {
       /* Register RM service handle with QMSS */
       qmssStartCfg.rmServiceHandle = rmServiceHandle;
       result = Qmss_startCfg (&qmssStartCfg);
   }

} </syntaxhighlight>

Please refer to the LLD documentation for the resources managed by RM when a RM instance's service handle is registered.

QMSS test/example projects that register RM on the DSP

  • qmInfraMCK2HC66BiosExampleProject
  • qmInfraMCK2KC66BiosExampleProject
  • qmQosSchedDropSchedK2HC66BiosTestProject
  • qmQosSchedDropSchedK2KC66BiosTestProject

QMSS test/example programs that use RM on the ARM

  • qmInfraDmaMC_k2[hk].out
  • qmInfraDmaSC_k2[hk].out
  • qmQAllocTest_k2[hk].out

In order to use the ARM programs, the RM server must be explicitly started separately from the qmss program.

Using the files global-resource-list.dtb and policy_dsp_arm.dtb from pdk_keystone2_1_00_00_11/packages/ti/drv/rm/device/k2[hk], invoke: <syntaxhighlight>./rmServer.out global-resource-list.dtb policy_dsp_arm.dtb</syntaxhighlight>

The qmInfraDmaMC.out and qmQAllocTest.out programs are compiled using packages/ti/drv/qmss/makefile_armv7 after configuring and running packages/armv7setup.sh to set the tools locations for your Linux system.

qmInfraDmaSC and qmQAllocTest do not require arguments. qmInfraDmaMC also requires no arguments, but can take one optional argument: a task/core number. Without arguments it forks 4 tasks; with an argument it runs a single task with the specified ID number. This facilitates running gdb on individual tasks.
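The argument handling just described could look roughly like the following (an illustrative sketch, not the actual qmInfraDmaMC source): no argument forks one process per task, while a single argument runs just that task, which is what makes attaching gdb to an individual task practical.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_TASKS 4

static void runTask(int id)
{
    printf("task %d running\n", id);
}

/* Dispatch based on argc, mirroring the behavior described above */
int taskMain(int argc, char *argv[])
{
    if (argc > 1) {
        runTask(atoi(argv[1]));           /* single task: ./prog <id> */
        return 0;
    }
    for (int i = 0; i < NUM_TASKS; i++) { /* default: fork all tasks */
        pid_t pid = fork();
        if (pid == 0) {
            runTask(i);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                /* reap all forked tasks */
        ;
    return 0;
}
```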

CPPI LLD[edit]

The RM service handle can be registered with the CPPI LLD in the following manner: <syntaxhighlight lang='c'> void foo (void) {

   Cppi_StartCfg cppiStartCfg;
   ...
   /* Open RM service handle */
   rmServiceHandle = Rm_serviceOpenHandle(rmHandle, &rmResult);
   ...
   /* Register RM with CPPI for each core */
   cppiStartCfg.rmServiceHandle = rmServiceHandle;
   Cppi_startCfg(&cppiStartCfg);

} </syntaxhighlight>

Please refer to the LLD documentation for the resources managed by RM when a RM instance's service handle is registered.

CPPI test/example projects that register RM:

  • cppiK2HC66BiosExampleProject
  • cppiK2KC66BiosExampleProject
  • cppiK2HC66BiosTestProject
  • cppiK2KC66BiosTestProject

PA LLD[edit]

The RM service handle can be registered with the PA LLD in the following manner: <syntaxhighlight lang='c'> void foo (void) {

   paStartCfg_t  paStartCfg;
   ...
   /* Open RM service handle */
   rmServiceHandle = Rm_serviceOpenHandle(rmHandle, &rmResult);
   ...
   /* Register RM with PA for each core */
   paStartCfg.rmServiceHandle = rmServiceHandle;
   Pa_startCfg(paInst, &paStartCfg);

} </syntaxhighlight>

Please refer to the LLD documentation for the resources managed by RM when a RM instance's service handle is registered.

PA test/example projects that register RM:

  • PA_simpleExample_K2HC66BiosExampleProject
  • PA_simpleExample_K2KC66BiosExampleProject

Running RM Test Projects[edit]

This section describes how to run the RM test projects

ARM Linux RM Server Daemon[edit]

The RM test code is delivered with a RM Server daemon application based on a socket transport interface. The RM Server daemon can be used to run all RM test projects and user-space LLD examples and projects. The RM Server daemon can even be used with end-user application infrastructures. However, please keep in mind that the RM Server daemon is meant more as a guide for implementing a Linux-based RM Server than an end-all, be-all Linux RM Server. The RM Server communicates with RM Clients running in other processes via sockets. The daemon's transport code will remember established socket connections to the RM Server.

The RM Server daemon logs all incoming service requests and prints the resource status upon sending out each resource request response. The RM server daemon logs can be found in /var/log/rmServer.log.

The RM Server daemon does not have a run-time user interface but does provide configuration options when started. The configuration options are displayed with the '-h' or '--help' command-line options. Here is an example; the version you are using may differ:
rmserverhelp.jpg

Note: Please take special care to provide the proper GRL and policy files to the RM Server at startup when attempting to run different tests. For example, the RM test applications require the GRL, policy, and Linux DTB files in the rm/test/dts_files directory, whereas all user-space LLD tests require the device GRL and policy files within the rm/device/k2*/ directories.

Note: There is currently a bug in the RM Server daemon's -kill logic that sometimes causes the kill to time out. When this happens the RM Server can be killed with the following command, where <RM server PID> is retrieved from "ps -aux": <syntaxhighlight lang='bash'> root@keystone-evm:~# kill -9 <RM server PID> </syntaxhighlight>

ARM Linux Multi-Process Test Project[edit]

The ARM Linux Multi-Process Test has a RM Server and RM Client running in separate user-space processes. The Client and Server exchange resource requests and responses over a socket.

Test application components:

  • Linux User-Space RM Server - rmServer.out (located in /usr/bin)
  • Linux User-Space RM Client - rmLinuxClientTest.out (located in /usr/bin)

Running the RM Server

  1. Copy the global-resources.dtb, server-policy.dtb, and linux-evm.dtb from rm/test/dts_files/ to the directory containing rmServer.out
  2. Run the RM Server: $ ./rmServer.out global-resources.dtb server-policy.dtb -l linux-evm.dtb

The Server will wait for Client socket connections and service any requests received via those sockets.

Running the RM Client Linux User-Space Test

  1. Copy the static-policy.dtb from rm/test/dts_files/ to the directory containing rmLinuxClientTest.out
  2. Run the RM Client: $ ./rmLinuxClientTest.out static-policy.dtb

The Client test will establish a socket connection with the Server, request resources and then free all resources requested.

DSP + ARM Linux Test Project[edit]

The DSP+ARM Linux Test has a RM Client running on a DSP core that sends resource requests to the RM Server running within a Linux user-space process. The RM Client sends messages to another Linux user-space process over IPC 3.x MessageQ. That process extracts the RM messages received on the MessageQ interface and sends them through a socket to the RM Server instance. In the reverse direction, the same process extracts RM messages received over the socket from the Server and sends them to the Client over the IPC MessageQ interface.

The DSP application must be downloaded to the DSP core using MPM. Otherwise, IPC MessageQ will fail to work.

Test application components:

  • Linux User-Space RM Server - rmServer.out (located in /usr/bin)
  • Linux User-Space IPC to MessageQ Exchange process - rmDspClientTest.out (located in /usr/bin)
  • C66 DSP RM Client - rmK2xArmv7LinuxDspClientTestProject.out

Build the DSP RM Client

  1. Import the rmK2xArmv7LinuxDspClientTestProject into CCS and build the project
  2. Copy the project's .out file from the project's /Debug directory to the EVM filesystem

Running the RM Server

  1. Copy the global-resources.dtb, server-policy.dtb, and linux-evm.dtb from rm/test/dts_files/ to the directory containing rmServer.out
  2. Run the RM Server: $ ./rmServer.out global-resources.dtb server-policy.dtb -l linux-evm.dtb

Downloading the DSP Application

The DSP application must be downloaded to all DSP cores via MPM prior to running the exchange process. Otherwise, IPC will fail to start

Download the following shell scripts and place them on the EVM. The shell scripts are used to download, reset, and dump traces from DSPs over MPM - Mpm_scripts

  • load_all.sh - Downloads a DSP image to all DSPs
  • stop_all.sh - Resets and halts all DSPs
  • dump_trace.sh - Dumps trace information, like printf output, from the DSPs
  1. Make sure LAD is running: # ps aux|grep lad - the output should show an instance of /usr/bin/lad_tci6638 lad.txt running
  2. Create a /debug directory for the MPM trace information: # mkdir /debug
  3. Mount debugfs in /debug: # mount -t debugfs none /debug
  4. Download the DSP Client: # ./load_all.sh rmK2xArmv7LinuxDspClientTestProject.out - the output should show 8 DSP cores loaded and run successfully

<syntaxhighlight lang='bash'>
root@keystone-evm:~# ./load_all.sh rmK2HArmv7LinuxDspClientTestProject.out
Loading and Running rmK2HArmv7LinuxDspClientTestProject.out ...
load succeded
run succeded
load succeded
run succeded
load succeded
run succeded
load succeded
run succeded
load succeded
run succeded
load succeded
run succeded
load succeded
run succeded
load succeded
run succeded
</syntaxhighlight>

Running the MessageQ - Socket Exchange Application

  1. Run the exchange application: # ./rmDspClientTest.out - The output should show messages received from the DSP Client

<syntaxhighlight lang='bash'>
root@keystone-evm:~/pdk_keystone2_1_00_00_10/bin/k2h/armv7/rm/test# ./rmDspClientTest.out
Using procId : 1
Entered rmServerExchange_execute
Local MessageQId: 0x1
Remote queueId [0x10000]
Setting up socket connection with RM Server
Waiting for RM messages from DSP Client
Received RM pkt of size 264 from DSP client
Received RM pkt of size 264 from DSP client
Received RM pkt of size 264 from DSP client
...
</syntaxhighlight>

Check the Test Result

  1. Dump the trace information from the DSPs: # ./dump_trace.sh - DSP Core 0 should have printed an Example Completion with no errors. Cores 1 - 7 will not execute the full test

<syntaxhighlight lang='bash'>
root@keystone-evm:~# ./dump_trace.sh
Core 0 Trace...
3 Resource entries at 0x800000

*********** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004
Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59

Initialized RM_Client

Core 0 : ---------------------------------------------------------
Core 0 : ---------------- Static Init Allocation -----------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: qos-cluster -
Core 0 : - Start: 0 -
Core 0 : - End: 0 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---------------- Static Init Allocation -----------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: qos-cluster -
Core 0 : - Start: 2 -
Core 0 : - End: 2 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---------------- Static Init Allocation -----------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: qos-cluster -
Core 0 : - Start: 1 -
Core 0 : - End: 1 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED - Denial: 79 -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---------------- Static Init Allocation -----------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: aif-queue -
Core 0 : - Start: 525 -
Core 0 : - End: 525 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---------------- Static Init Allocation -----------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: aif-queue -
Core 0 : - Start: 525 -
Core 0 : - End: 525 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : Creating RM startup task...
Core 0 : Starting BIOS...
registering rpmsg-proto service on 61 with HOST
Awaiting sync message from host...
[t=0x00000012:2c9060f4] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53
[t=0x00000012:2d3612fb] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53
Core 0 : Creating RM receive task...
Core 0 : Creating RM client task...
Core 0 : ---------------------------------------------------------
Core 0 : --------------- Create NameServer Object ----------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 1002 -
Core 0 : - End: 9161389 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ------- Retrieve Resource Via NameServer Object ---------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 1002 -
Core 0 : - End: 1002 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -------- Init Allocate Using Retrieved Resource ---------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 1002 -
Core 0 : - End: 1002 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---- Retrieve Resource Status Via NameServer Object -----
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 1002 -
Core 0 : - End: 1002 -
Core 0 : - Expected Owner Count: 1 -
Core 0 : - Returned Owner Count: 1 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : --- Free of Retrieved Resource Using NameServer Name ----
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: My_Favorite_Queue -
Core 0 : - Start: 0 -
Core 0 : - End: 0 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : --------------- Delete NameServer Object ----------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: My_Favorite_Queue -
Core 0 : - Start: 0 -
Core 0 : - End: 0 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : - Resource Node Expand/Contract Testing (Use Allocate) --
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: aif-rx-ch -
Core 0 : - Start: 0 -
Core 0 : - End: 5 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : - Resource Node Expand/Contract Testing (Init Allocate) -
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: aif-rx-ch -
Core 0 : - Start: 50 -
Core 0 : - End: 56 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---------- Use Allocation w/ UNSPECIFIED Base -----------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: accumulator-ch -
Core 0 : - Start: UNSPECIFIED -
Core 0 : - Length: 5 -
Core 0 : - Alignment: 4 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---------- Use Allocation w/ UNSPECIFIED Base -----------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: accumulator-ch -
Core 0 : - Start: UNSPECIFIED -
Core 0 : - Length: 2 -
Core 0 : - Alignment: 1 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---- Use Allocation w/ UNSPECIFIED Base & Alignment -----
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: accumulator-ch -
Core 0 : - Start: UNSPECIFIED -
Core 0 : - Length: 2 -
Core 0 : - Alignment: UNSPECIFIED -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -- Init Allocation of Shared Linux and Client Resource --
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: infra-queue -
Core 0 : - Start: 800 -
Core 0 : - End: 800 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : - Init Allocation (RM Blocked Until Resource Returned) --
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 7000 -
Core 0 : - End: 7000 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -- Use Allocation (RM Blocked Until Resource Returned) --
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 7005 -
Core 0 : - End: 7029 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -- Use Allocation (RM Blocked Until Resource Returned) --
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 7010 -
Core 0 : - End: 7014 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ----- Use Allocation of Owned Resource (RM Blocked) -----
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 7011 -
Core 0 : - End: 7011 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -- Status Check of Resources from Client (Non-Blocking) -
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 7012 -
Core 0 : - End: 7013 -
Core 0 : - Expected Owner Count: 1 -
Core 0 : - Returned Owner Count: 1 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ---- Status Check of Resources from Client (Blocking) ---
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: gp-queue -
Core 0 : - Start: 4025 -
Core 0 : - End: 4044 -
Core 0 : - Expected Owner Count: 1 -
Core 0 : - Returned Owner Count: 1 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ------------- Static Allocation Validation --------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: qos-cluster -
Core 0 : - Start: 0 -
Core 0 : - End: 0 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ------------- Static Allocation Validation --------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: qos-cluster -
Core 0 : - Start: 2 -
Core 0 : - End: 2 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ------------- Static Allocation Validation --------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: aif-queue -
Core 0 : - Start: 525 -
Core 0 : - End: 525 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : ------------- Static Allocation Validation --------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: aif-queue -
Core 0 : - Start: 525 -
Core 0 : - End: 525 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : Creating RM cleanup task...
Core 0 : ---------------------------------------------------------
Core 0 : -------------------- Resource Cleanup -------------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: accumulator-ch -
Core 0 : - Start: 0 -
Core 0 : - End: 6 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -------------------- Resource Cleanup -------------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: accumulator-ch -
Core 0 : - Start: 40 -
Core 0 : - End: 41 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -------------------- Resource Cleanup -------------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: qos-cluster -
Core 0 : - Start: 0 -
Core 0 : - End: 0 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : ---------------------------------------------------------
Core 0 : -------------------- Resource Cleanup -------------------
Core 0 : - Instance Name: RM_Client -
Core 0 : - Resource Name: qos-cluster -
Core 0 : - Start: 2 -
Core 0 : - End: 2 -
Core 0 : - Alignment: 0 -
Core 0 : - -
Core 0 : - PASSED -
Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: aif-queue - Core 0 : - Start: 525 - Core 0 : - End: 525 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: aif-queue - Core 0 : - Start: 525 - Core 0 : - End: 525 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: infra-queue - Core 0 : - Start: 800 - Core 0 : - End: 800 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: gp-queue - Core 0 : - Start: 7000 - Core 0 : - End: 7000 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: gp-queue - Core 0 : - Start: 7011 - Core 0 : - End: 7011 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: gp-queue - Core 0 : - Start: 7010 - Core 0 : - End: 7014 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: gp-queue - Core 0 : - Start: 7005 - Core 0 : - End: 7029 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: aif-rx-ch - Core 0 : - Start: 0 - Core 0 : - End: 5 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : -------------------- Resource Cleanup ------------------- Core 0 : - Instance Name: RM_Client - Core 0 : - Resource Name: aif-rx-ch - Core 0 : - Start: 50 - Core 0 : - End: 56 - Core 0 : - Alignment: 0 - Core 0 : - - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : ------------------ Memory Leak Check -------------------- Core 0 : -  : malloc count | free count - Core 0 : - Example Completion  : 101 | 101 - Core 0 : - PASSED - Core 0 : ---------------------------------------------------------

Core 0 : --------------------------------------------------------- Core 0 : ------------------ Example Completion ------------------- Core 0 : ---------------------------------------------------------


Core 1 Trace... 3 Resource entries at 0x800000

************** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004 Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59 Core 1 : RM DSP+ARM Linux test not executing on this core Core 1 : Creating RM startup task... Core 1 : Starting BIOS... registering rpmsg-proto service on 61 with HOST [t=0x00000012:2572e5da] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 [t=0x00000012:2617dddd] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53


Core 2 Trace... 3 Resource entries at 0x800000

************** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004 Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59 Core 2 : RM DSP+ARM Linux test not executing on this core Core 2 : Creating RM startup task... Core 2 : Starting BIOS... registering rpmsg-proto service on 61 with HOST [t=0x00000012:13e85d6e] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 [t=0x00000012:148d0415] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53


Core 3 Trace... 3 Resource entries at 0x800000

************** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004 Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59 Core 3 : RM DSP+ARM Linux test not executing on this core Core 3 : Creating RM startup task... Core 3 : Starting BIOS... registering rpmsg-proto service on 61 with HOST [t=0x00000012:0da8b05a] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 [t=0x00000012:0e4c8b53] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53


Core 4 Trace... 3 Resource entries at 0x800000

************** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004 Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59 Core 4 : RM DSP+ARM Linux test not executing on this core Core 4 : Creating RM startup task... Core 4 : Starting BIOS... registering rpmsg-proto service on 61 with HOST [t=0x00000012:073f981e] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 [t=0x00000012:07e2a25d] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53


Core 5 Trace... 3 Resource entries at 0x800000

************** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004 Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59 Core 5 : RM DSP+ARM Linux test not executing on this core Core 5 : Creating RM startup task... Core 5 : Starting BIOS... registering rpmsg-proto service on 61 with HOST [t=0x00000012:012ce260] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 [t=0x00000012:01cfa139] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53


Core 6 Trace... 3 Resource entries at 0x800000

************** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004 Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59 Core 6 : RM DSP+ARM Linux test not executing on this core Core 6 : Creating RM startup task... Core 6 : Starting BIOS... registering rpmsg-proto service on 61 with HOST [t=0x00000011:f8bb09ae] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 [t=0x00000011:f95d7d57] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53


Core 7 Trace... 3 Resource entries at 0x800000

************** RM DSP+ARM DSP Client Testing **************

RM Version : 0x02000004 Version String: RM Revision: 02.00.00.04:May 14 2013:16:19:59 Core 7 : RM DSP+ARM Linux test not executing on this core Core 7 : Creating RM startup task... Core 7 : Starting BIOS... registering rpmsg-proto service on 61 with HOST [t=0x00000011:f1ed4c0c] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 [t=0x00000011:f28ed241] ti.ipc.rpmsg.RPMessage: RPMessage_send: no object for endpoint: 53 root@keystone-evm:~# </syntaxhighlight>

User Space DMA

The user space DMA (UDMA) framework provides zero-copy access to packet DMA channels from user space. A new KeyStone-specific udma kernel driver, which interfaces with the packet DMA driver, has been added to the baseline to handle transfers between user space and the kernel (and vice versa). In addition, a user space library (libudma.a) is provided that an application can use to create a UDMA channel, submit buffers to the channel, and receive callbacks for any buffers the channel receives. The UDMA package can be accessed via meta-arago/meta-arago-extras/recipes-bsp/ti-udma/ti-udma_1.0.bb; the APIs for the user DMA library are provided under the "include" directory of the package.

UDMA Files in the filesystem

The UDMA user space library is located at /usr/bin/libudma.a. An application should link against this library to use the user space UDMA framework. A small udma_test application is also included; it opens a single UDMA TX channel and a single UDMA RX channel and performs a loopback of 2M packets.

This udma_test application can be found in /usr/bin/udma_test.

Default udma channels setup in the kernel

As part of the integration of UDMA into the baseline, the device tree was modified to set up 12 TX channels (udmatx0 - udmatx11) and 12 RX channels (udmarx0 - udmarx11) by default. Each TX channel is configured with the hardware queue number it should use. Please refer to the device tree in the Linux kernel tree (./arch/arm/boot/dts/k2hk-evm.dts).

Contiguous memory allocator

Many applications need access to physically contiguous memory from user space. The MCSDK integrates a contiguous memory allocator (CMEM) module which can be configured to use different pools of memory. The CMEM module supports allocating either from the global CMA pool or from pre-configured pools.

Linux configuration

Linux configuration is done when installing the cmemk.ko driver, typically with the insmod command. The cmemk.ko driver accepts command-line parameters that specify the physical memory to reserve and how to carve it up into pools. The module is located at /lib/modules/<kernel_version>/extra/cmemk.ko.

Note: The cmemk.ko kernel module included in the MCSDK filesystem is compiled for the specific kernel version of the release. If a different kernel version is used, the cmemk kernel module must be recompiled against the linux-headers package for that kernel version.

The following is an example of installing the cmem kernel module:

/sbin/insmod /lib/modules/3.8.4-g42865b7/extra/cmemk.ko pools=4x30000,2x500000 phys_start=0x0 phys_end=0x3000000
  • phys_start and phys_end must be specified in hexadecimal format
  • pools must be specified using decimal format (for both number and size), since using hexadecimal format would visually clutter the specification due to the use of "x" as a token separator

This particular command creates two pools. The first pool is created with 4 buffers of size 30000 bytes and the second pool is created with 2 buffers of size 500000 bytes. The CMEM pool buffers start at physical address 0x0 and end at 0x3000000 (max).

Pool buffers are aligned on a module-dependent boundary, and their sizes are rounded up to this same boundary. This applies to each buffer within a pool, so the total space used by an individual pool will be greater than (or equal to) the exact amount requested when installing the module.

The pool ID used in the driver calls is 0 for the first pool and 1 for the second pool.

Pool allocations can be requested explicitly by pool number, or more generally by just a size. For size-based allocations, the pool which best fits the requested size is automatically chosen.

User space Access

A user space application can be compiled against the Linux development kit to access the user space APIs.

The API details can be found at ./sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/include/ti/cmem.h in the linux development kit.

Note: A user space application must be compiled with "-D_FILE_OFFSET_BITS=64" to allow physical addresses larger than 32 bits.

The CMEM example code can be obtained from git repository: git://git.ti.com/ipc/ludev.git

See src/cmem/tests for API usage examples.
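A minimal allocation sequence might look like the sketch below. It is based on the commonly used CMEM API names from ti/cmem.h (CMEM_init, CMEM_allocPool, CMEM_getPhys, CMEM_free, CMEM_exit); verify the exact signatures against the header shipped in your Linux development kit, and remember the -D_FILE_OFFSET_BITS=64 flag noted above.

```c
#include <stdio.h>
#include <ti/cmem.h>   /* from the Linux development kit sysroot */

int main(void)
{
    /* Initialize the CMEM user space library (talks to cmemk.ko). */
    if (CMEM_init() < 0) {
        fprintf(stderr, "CMEM_init failed -- is cmemk.ko loaded?\n");
        return 1;
    }

    /* Allocate one buffer from pool 0 (the 4x30000 pool in the
     * insmod example above). */
    CMEM_AllocParams params = CMEM_DEFAULTPARAMS;
    void *buf = CMEM_allocPool(0, &params);
    if (buf != NULL) {
        /* Translate to a physical address, e.g. to program a DMA engine. */
        off_t phys = CMEM_getPhys(buf);
        printf("virt %p -> phys 0x%llx\n", buf, (unsigned long long)phys);
        CMEM_free(buf, &params);
    }

    CMEM_exit();
    return 0;
}
```

This requires the cmemk.ko module to be loaded with a pool configuration such as the insmod example shown earlier; CMEM_allocPool will fail if the requested pool does not exist or is exhausted.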

How To's


