Codec Engine Cache Per Alg
Introduction
Prior to the Codec Engine 2.25.05.16 patch release, there was only a single, system-wide config param controlling whether memory requested by Linux-side algorithms would be cached. This caused problems when algorithms with different caching requirements were integrated into the same system: either the system was globally configured to provide non-cached memory, which hurt performance, or it was globally configured to provide cached memory, which could affect correctness.
Codec Engine 2.25.05.16 introduced a configuration parameter, useCache, on ti.sdo.ce.alg.Settings, which when set to true causes all algorithms' memory requests to be allocated in cached memory. This topic describes an enhancement that allows memory requests to be allocated in cached memory on a per-algorithm basis.
Note: The only place useCache has an effect is with local codecs on the ARM.
Usage
Codec Engine supplies a bool configuration parameter on ti.sdo.ce.ICodec, called useCache. The system integrator can use this config param to control, per codec, whether the memory provided to the algorithm is cached.
In the audio_copy example, here is the configuration to have the decoder run in non-cached memory and the encoder run in cached memory:
<syntaxhighlight lang='javascript'>
/* get various codec modules; i.e., implementation of codecs */
var decoder = xdc.useModule('ti.sdo.ce.examples.codecs.auddec_copy.AUDDEC_COPY');
var encoder = xdc.useModule('ti.sdo.ce.examples.codecs.audenc_copy.AUDENC_COPY');

encoder.useCache = true;
</syntaxhighlight>
The default global setting is to allocate in non-cached memory. If we had set
<syntaxhighlight lang='javascript'>
xdc.useModule('ti.sdo.ce.alg.Settings').useCache = true;
</syntaxhighlight>
in the configuration file, the default would be to allocate in cached memory, and we would need to set decoder.useCache to false to get the same effect.
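For reference, a minimal sketch of that flipped-default configuration, reusing the auddec_copy module path from the example above:
<syntaxhighlight lang='javascript'>
/* flip the global default so algorithm memory is cached by default */
xdc.useModule('ti.sdo.ce.alg.Settings').useCache = true;

/* opt the decoder back out, so only it runs in non-cached memory */
var decoder = xdc.useModule('ti.sdo.ce.examples.codecs.auddec_copy.AUDDEC_COPY');
decoder.useCache = false;
</syntaxhighlight>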
Internal Implementation
An additional field, memType, was added to the internal Engine_AlgDesc struct. It can be set to one of the following enum values:
<syntaxhighlight lang='c'>
typedef enum Engine_CachedMemType {
    Engine_USECACHEDMEM_DEFAULT   = -1, /**< Use default cache setting */
    Engine_USECACHEDMEM_NONCACHED = 0,  /**< Use non-cached memory */
    Engine_USECACHEDMEM_CACHED    = 1   /**< Use cached memory */
} Engine_CachedMemType;
</syntaxhighlight>
During configuration of the app, the memType field of the codec's Engine_AlgDesc is filled in based on the useCache flag setting for that algorithm. At runtime, VISA_create() passes memType to ALG_create(), which sets up the memory allocation parameters according to the codec's memTable as filled out in the codec's algInit function. It is during this memory allocation that each local ARM codec can now individually be configured to receive cached or non-cached memory. Note that this capability is not feasible on the DSP side, because cacheability of external memory is configured in 16MB pages. On the ARM side, however, cacheability is configured per MMU page, which can be as small as 4KB.
See Also
- There is some discussion on the system-wide default behavior in this archived email thread.