Codec Engine Cache Per Alg

Introduction

Prior to the Codec Engine 2.25.05.16 patch release, there was only a single, system-wide config param for setting whether Linux-based, algorithm-requested memory would be cached. This caused integration problems when algorithms with different caching requirements were integrated into one system: either the system was globally configured to provide non-cached memory, which hurt performance, or it was globally configured to provide cached memory, which could compromise correctness (for example, for algorithms that do not perform their own cache maintenance).

Codec Engine 2.25.05.16 introduced a configuration parameter, useCache, on ti.sdo.ce.alg.Settings which, when set to true, causes all algorithms' memory requests to be allocated in cached memory. This topic describes an enhancement that allows memory requests to be satisfied from cached memory on a per-algorithm basis.

Note: The only place useCache has an effect is with local codecs on the ARM.

Usage

Codec Engine supplies a bool configuration parameter on ti.sdo.ce.ICodec, called useCache. With this config param, the system integrator can control the cacheability of the memory provided to each algorithm.

For example, here is the audio_copy configuration that has the decoder run in non-cached memory and the encoder run in cached memory:

/* get various codec modules; i.e., implementation of codecs */
var decoder = xdc.useModule('ti.sdo.ce.examples.codecs.auddec_copy.AUDDEC_COPY');
/* decoder.useCache is left unset, so the decoder inherits the global
 * default (non-cached memory) */

var encoder = xdc.useModule('ti.sdo.ce.examples.codecs.audenc_copy.AUDENC_COPY');
encoder.useCache = true;   /* allocate the encoder's memory in cached memory */

The default global setting is to allocate in non-cached memory. If we had set

xdc.useModule('ti.sdo.ce.alg.Settings').useCache = true;

in the configuration file, the default would be to allocate in cached memory, and we would need to set decoder.useCache to false to get the same effect.
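
For instance, here is a sketch of that inverted configuration, assuming the same audio_copy codec modules as above:

/* flip the global default: all algorithms now get cached memory... */
xdc.useModule('ti.sdo.ce.alg.Settings').useCache = true;

var decoder = xdc.useModule('ti.sdo.ce.examples.codecs.auddec_copy.AUDDEC_COPY');
decoder.useCache = false;  /* ...except the decoder, which opts back out */

var encoder = xdc.useModule('ti.sdo.ce.examples.codecs.audenc_copy.AUDENC_COPY');
/* encoder.useCache is left unset; it inherits the (now cached) global default */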

Internal Implementation

An additional field, memType, was added to the internal Engine_AlgDesc struct; it can be set to one of the following enum values:

typedef enum Engine_CachedMemType {
    Engine_USECACHEDMEM_DEFAULT = -1,  /**< Use default cache setting */
    Engine_USECACHEDMEM_NONCACHED = 0, /**< Use non-cached memory */
    Engine_USECACHEDMEM_CACHED = 1     /**< Use cached memory */
} Engine_CachedMemType;

During configuration of the app, the memType field of the codec's Engine_AlgDesc is filled in based on the algorithm's useCache setting. At runtime, VISA_create() passes memType to ALG_create(), which sets up the memory allocation parameters according to the codec's memTable, as filled out in the codec's algInit() function. It is during this memory allocation that a local ARM codec can now be individually configured as to whether its memory will be cacheable.

Note that this capability is not feasible on the DSP side, because cacheability of external memory is configured in 16 MB pages. On the ARM side, however, cacheability is configured per MMU page, which can be as small as 4 KB.
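
As a rough illustration of the runtime decision, here is a minimal C sketch of how the per-codec memType could be resolved against the global setting. The Engine_CachedMemType enum is repeated from above to keep the sketch self-contained; the resolveCached() helper and the useCacheGlobal flag are hypothetical names for illustration, not the actual Codec Engine internals.

#include <stdbool.h>

typedef enum Engine_CachedMemType {
    Engine_USECACHEDMEM_DEFAULT = -1,  /* use the global cache setting */
    Engine_USECACHEDMEM_NONCACHED = 0, /* use non-cached memory */
    Engine_USECACHEDMEM_CACHED = 1     /* use cached memory */
} Engine_CachedMemType;

/* Hypothetical helper: decide whether a codec's memory requests should
 * be satisfied from cached memory, given its per-codec memType and the
 * global ti.sdo.ce.alg.Settings.useCache value. */
static bool resolveCached(Engine_CachedMemType memType, bool useCacheGlobal)
{
    if (memType == Engine_USECACHEDMEM_DEFAULT) {
        /* no per-codec override; fall back to the global setting */
        return useCacheGlobal;
    }
    return (memType == Engine_USECACHEDMEM_CACHED);
}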
