Codec Engine FAQ

From Texas Instruments Wiki



A good place to start is the Codec Engine Category. Some good general topics to review are the Codec Engine Overview and Codec Engine Roadmap.

I have a question that's not answered here, what now?

  • There are lots of knowledgeable experts on the E2E Embedded SW Forum - posts welcome!
  • Have you tried Google? Codec Engine's been around a while, and Google has lots of details about it.

Build Questions

Can I run the necessary configuration step from [insert your favorite IDE here]?

For CE 1.20 and later releases, yes. You should be able to use any IDE that supports building C code (e.g. DevRocket, KDevelop, etc.) to build a Codec Engine app. There is a 'pre-build' config step (using the XDC Tools) required when building an executable, which takes some extra work.

There are some details on how to integrate the XDC config step into different build engines (e.g. CCS, GNU make, etc) around section 2.4 of this document.

Essentially, you'll need to do the following:

  1. Run the XDC config step. The input is a .cfg file (likely fed to the configuro utility); the output is a compiler.opt and a linker.cmd file. If you're building an ARM-side app that consumes a DSP-side Server, your .cfg script will likely leverage Engine.createFromServer().
  2. Build your app with the generated compiler.opt file (which includes options for your compiler when building your app).
  3. Link your executable with the generated linker.cmd file (which includes library names, and linker script).
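Put together, the three steps above look roughly like the following shell session. This is only a sketch - the tool paths, cross-compiler name, target, and platform below are placeholders for your installation (see the CE example makefiles for a working version):

```shell
# Sketch only: tool paths, compiler, target and platform below are
# placeholders -- adapt them to your installation.
XDC_INSTALL_DIR=/opt/xdctools   # placeholder path

# 1. XDC config step: app.cfg in; compiler.opt and linker.cmd out
$XDC_INSTALL_DIR/xs xdc.tools.configuro -o config \
    -t gnu.targets.arm.GCArmv5T -p ti.platforms.evmDM6446 app.cfg

# 2. compile the app with the generated compiler options
arm-none-linux-gnueabi-gcc $(cat config/compiler.opt) -c app.c

# 3. link with the generated linker command file
arm-none-linux-gnueabi-gcc app.o config/linker.cmd -o app.xv5T
```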

Additionally, there are GNU make-based Codec Engine examples for this configuro tool (in the CE 1.20 release and later) in examples/ti/sdo/ce/examples/apps/video_copy/dualcpu/makefile/evmDM6446

Note that we've stopped supporting configuro-based Server builds.


How can I quiet the example builds?

By default, the examples are built with 'verbose' enabled. That is, every build command will be echo'd to the console. You can turn off the verbosity by setting XDCOPTIONS to nothing on the gmake command line like this:

    > gmake XDCOPTIONS=

For those interested, the makefile syntax that makes this possible is in the codec_engine_X_YY/examples/buildutils/xdcrules.mak file. Here's a snippet from a recent release:

# if not set in the environment, set XDCOPTIONS to "verbose"
XDCOPTIONS ?= verbose
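The "?=" assignment only sets XDCOPTIONS when it isn't already set in the environment or on the command line, which is why an empty XDCOPTIONS= silences the build. A small throwaway makefile (not the real xdcrules.mak) demonstrates the behavior:

```shell
# Throwaway demo (not the real xdcrules.mak): '?=' assigns a default only
# when the variable wasn't set in the environment or on the command line.
cat > /tmp/xdcopt_demo.mak <<'EOF'
.RECIPEPREFIX = >
XDCOPTIONS ?= verbose
all:
>@echo "XDCOPTIONS is '$(XDCOPTIONS)'"
EOF
make -f /tmp/xdcopt_demo.mak              # default applies: verbose
make -f /tmp/xdcopt_demo.mak XDCOPTIONS=  # command line wins: empty
```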

How do I #include <xdc/std.h> correctly?

There are more details at this <xdc/std.h> article. But to recap...

If you're building a library:

  • If you're not using XDC's package.bld-based build, you'll have to explicitly define your 'target' via the -Dxdc_target_types__=<something> and -Dxdc_target_name__=<something> definitions.
  • If you're using XDC's package.bld-based build, the appropriate compiler definitions will be assigned for you.

If you're building an executable, you should run the config step prior to building your application.

  • If you're not using package.bld to build your app, you'll use "configuro" to configure your app - which will generate a compiler.opt file you should 'cat' into your CFLAGS (this will include the -Dxdc_target_types__=<something> and -Dxdc_target_name__=<something> defines). (The Codec Engine Examples include a configuro example makefile which does this - likely in examples/ti/sdo/ce/examples/video_copy/*, but may vary based on your CE version.)
  • If you're using package.bld, similar to the lib build, this happens for free.

This all stems from <xdc/std.h>'s extensibility support; the late binding to a 'target' via the -D options lets both your .c code and <xdc/std.h> remain unchanged, and yet system integrators can integrate new 'targets' that neither your .c files nor <xdc/std.h> have seen into the system.


Why does the ARM application require "DSP" codec packages?

In addition to a library, codec packages contain meta-data which the Codec Engine uses during the application's configuration. Among other details, this meta-data includes things like:

  • The codec's type (needed so you get an error rather than a crash if the app tries to VIDDEC_create() an audio decoder!)
  • The codec's location (i.e. local or remote). If local, the codec's library will be linked into the app. If remote, only a reference to the codec will be linked into the app - this reference is composed from the package's meta-data.
  • A unique codec-specific ID (generated from its globally unique package name). This ID is built into both the Application and Server codec tables, and sent from the ARM-side app to the remote Server to identify which codec to create.

Additionally, often the codec package will contain codec-specific extensions (e.g. data types, commands, etc.). The ARM-side application will require these interfaces files, provided in the codec package, during its compilation.

Why does my GPP-side application try (and fail!) to link in a DSP-side library

A common mistake in a codec producer's getLibs() function (in its package.xs file) is failing to check the target (e.g. C64P, ARM, etc.) before returning a library name - always returning, for example, a .a64P library even if the target is an ARM.

Another common mistake (especially when basing getLibs() on CE examples) is to return a library for all targets - even if that library doesn't exist. For example, the CE/XDAIS example codecs are very portable and build for many different targets/architectures - and as a result, their getLibs() returns libraries for many different targets. But optimized codecs typically support only a single target - and therefore their getLibs() fxn will need to be modified when target/architecture support is removed (e.g. modifying an example codec package to only support C64P targets).

If the getLibs() function returns null, this tells the integration tooling that no library needs to be linked with. In addition, if the returned library does not exist, the configuration step will stop with an error indicating that the package does not support the target specified for the executable.

So... codec packages which only support the "64P" target can implement getLibs() like this (but make sure you read the rest of this section!):

function getLibs(prog)
{
    var lib = null;

    if (prog.build.target.suffix == '64P') {
        lib = "lib/mp4.a64P";
        print("    will link with " + this.$name + ":" + lib);
    }

    return (lib);
}

But what if the target isn't C64P, but can execute C64P-compatible instructions - like the C674 target? In that case, you want to return a library when the target's suffix isn't necessarily '64P', but is 64P-compatible.

In XDC tools 3.05 and newer, there is a service provided to find targets that are compatible - the findSuffix() method. See this article for more details.

Using this technique, a better, though more complex, getLibs() implementation is below. Note the xdcBuild variable which demonstrates the findSuffix() usage for both XDC-compiled and CCS-compiled libraries. Your getLibs() implementation will likely pick one approach based on your build model and you can remove the other [dead] code branch.

function getLibs(prog)
{
    var lib = null;
    var suffix = null;

    /*
     * Did you compile your codec lib with XDC?  If you built with CCS and are simply 'packaging'
     * your lib with XDC, set this to false.
     */
    var xdcBuild = false;

    if ("findSuffix" in prog.build.target) {
        if (xdcBuild) {
            /* XDC 'knows' what targets are available, just pass 'this' to findSuffix() */
            suffix = prog.build.target.findSuffix(this);
        } else {
            /* XDC doesn't know what targets you built for, pass your list to findSuffix() */
            suffix = prog.build.target.findSuffix(["64P"]);
        }

        if (suffix != null) {
            /* found a compatible suffix, return your lib with this suffix */
            lib = "lib/mp4.a" + suffix;
        }
    } else {
        /* No findSuffix() available, fall back to old, lesser method */
        if (prog.build.target.suffix == '64P') {
            lib = "lib/mp4.a64P";
        }
    }

    if (lib != null) {
        print("    will link with " + this.$name + ":" + lib);
    }

    return (lib);
}

Also, see this general article on writing getLibs().

What file naming conventions are you following?

Codec Engine follows the same naming conventions as DSP/BIOS.

In short, a file extension starting with "a" is an archive/library, "o" is an object file, and "x" is an executable.

The suffix of the extension (i.e. what follows the character above) is the target which the file was built for. Some examples:

  • v5T - ARM v5 Thumb (GCC toolchain)
  • v6T - ARM v6 Thumb (GCC toolchain)
  • v7T - ARM v7 Thumb (GCC toolchain)
  • v4TCE - ARM v4 Thumb (WinCE toolchain)
  • 470MV - Monta Vista Linux (i.e. glibc toolchain compiled for ARM)
  • 470uC - uClibc
  • 86U - x86 Linux (i.e. glibc toolchain compiled for x86)
  • 64P - C64x+
  • e64P - C64+ ELF
  • 674 - C674
  • e674 - C674 ELF
  • em3 - ARM M3 ELF
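As an illustration only (a hypothetical shell helper, not part of any TI tooling), the convention can be decoded mechanically - the first letter of the extension is the file kind, the remainder is the target suffix:

```shell
# Hypothetical helper: decode the DSP/BIOS-style extension convention.
# First letter of the extension = file kind; the rest = target suffix.
classify() {
    ext="${1##*.}"
    case "$ext" in
        a*) echo "archive/library for target '${ext#a}'" ;;
        o*) echo "object file for target '${ext#o}'" ;;
        x*) echo "executable for target '${ext#x}'" ;;
        *)  echo "unknown file kind" ;;
    esac
}
classify server.x64P   # prints: executable for target '64P'
classify codec.av5T    # prints: archive/library for target 'v5T'
```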

What CE-related libraries do my ARM applications have to link with? Or, why don't I add DSP Link's libraries to my app's link line?

In general, the application doesn't need to know what libraries to link with. The list of libraries your app will link with is generated based on the application's config scripts. The config step generates a linker command file (with an .xdl extension), that includes a list of the libraries needed for your specific configuration. As you change the config, you potentially change the libraries you link with - but, again, the application author doesn't need to know these details.

If you have an engine which uses video, audio and speech codecs but not image codecs, the generated linker command file (.xdl) will include just the appropriate libraries and not include the imaging support. Similarly, if your ARM application doesn't have any Engines with "remote" codecs, you won't link in DSP Link(!). It's all done [intentionally] for "free" by the configuration step.

How do I enable/disable targets when building the examples? (CE 2.00 and earlier)

The CE examples build for a pre-defined set of targets out of the box. These defaults won't be right for everyone; sometimes:

  • The build takes too long
  • You don't have the codegen for a given target (e.g. C64x+ when working with DM355)
  • You're not running on a supported Linux host (so the native Linux86 XDC target doesn't work)
  • You want to enable a target that's disabled by default (e.g. uClibc)

The set of default targets is defined in the examples/user.bld file, in the Build.targets array. This typically looks like this:

/*
 *  ======== Build.targets ========
 *  list of targets (ISAs + compilers) to build for
 */
Build.targets = [
    /* ... enabled targets (e.g. C64P, MVArm9) ... */

    // Note that uclibc support is disabled by default.  To enable it,
    // ensure the UCArm9.rootDir setting above is appropriate for your
    // environment and uncomment the following line.
//    UCArm9,
];

To enable/disable a target, simply add/remove it from this array.

Note: When changing this array, be sure to rebuild everything. That is, gmake clean and gmake each example class as described in the examples/build_instructions.html file.

How do I add compile options when building the example applications?

Build options are typically set up in 3 ways in the examples:

1. via selection of a target

This is done when you select a target in the config.bld file (older releases used a file named user.bld). For example, if you select 'C64P' as a build target, the tool will automatically add -mv64p to the compiler invocation. You normally do not need to worry about these target-specific options other than to pick the target(s) you wish to compile.

2. via selection of a profile setting

This corresponds to the line that says:

Pkg.attrs.profile = "debug";

For example, the "debug" profile adds -g to the TI compiler invocation, while "release" automatically uses -o2. If you do not select a profile, "release" is the default.

3. via the .copts attributes in the Pkg.addExecutable() function call in package.bld of an example

Here's an example:
Pkg.addExecutable(name, target, platform, {
        /* any other exeAttrs */
        copts: "-DMYOPTIONS"
    });

Simply add your options to the copts field to pass additional compiler switches to the compiler invocation for your target. Similarly, passing lopts to the Pkg.addExecutable() function allows you to add additional linker options (see next question). Note that all options specified by copts and lopts are appended to the existing command line invocation of the compiler and linker; hence, for the TI C6000 compiler, copts takes precedence over the settings set by the profile when there are conflicts (e.g. the "release" profile uses -o2 and copts specifies -o3).

Note: If you are building a library, you can add compiler and linker switches in a similar fashion in the Pkg.addLibrary() function call in the package.bld file.

Can I use my own linker cmd file when linking a Server?

The TI linker can accept multiple linker cmd files on the command line, so you can supply your own in a separate "-l mylink.cmd" option to the linker. If building a Server with a package.bld script (recommended), your Pkg.addExecutable() call might look like this:

Pkg.addExecutable(name, target, platform, {
        /* any other exeAttrs */
        lopts: "-l link.cmd"
    });

Note that you need to continue linking with the generated linker command file (if using configuro, the $(CONFIGPKG)/linker.cmd file).

If you are building a Server without a package.bld script, like the CE video_copy demo under codec_engine_2_00_01/examples/ti/sdo/ce/examples/servers/video_copy, you can add "-l mylink.cmd" in the makefile at codec_engine_2_00_01/examples/ti/sdo/ce/examples/servers/video_copy/evmDM6446, like this:

 $(SERVER_EXE): main.obj $(LINKER_FILE)
      $(LINK) -o $@ -c $^ -l link.cmd

How do I build (linear) assembly files that I add to a given codec example?

In the config.bld file (located in the examples\ directory in recent versions of Codec Engine), after the line

var C64P = xdc.useModule('ti.targets.C64P');

add this:

C64P.extensions[".sa"] = {
    suf: ".sa", typ: "asm:-fl"
};

This is basically saying: for all files with the .sa suffix, compile with the C64P assembler and use the language flag for linear assembly (-fl). A similar method can be used for compiling hand-coded assembly files and other file types. For more info on this syntax, see the 'extensions' documentation for XDC targets.

How do I enable/disable targets when building the examples? (CE 2.00.01 and later)

In CE 2.00.01, the user.bld described in the previous section was restructured. Given the many permutations of CE - single processor ARM (e.g. DM355), single processor DSP (e.g. DM6437), multiple heterogeneous platforms (e.g. DM644x, DM6467) - simply enabling/disabling certain builds based on this Build.targets array didn't scale.

As an example, when the C64P target is enabled, should it build the servers? Yes if on DM6446, no if on DM6437. Similarly, if MVArm9 is enabled, should 'remote' configurations be built? Yes if on DM6467, no if on DM355.

As a result, the example build scripts needed more context - including which device to build for. In the restructured user.bld, there is a javascript variable named buildTable that the user must set to declare the environment, and what to build. There is documentation in the examples/user.bld and examples/xdcpaths.mak files explaining how to use/configure these new variables.

Why am I getting an error about "incompatible assignment to mod" regarding a particular codec package I have?

In brief, try changing this (note, replace the ti.sdo.codecs.aacenc package in this example with yours):

AACENC = xdc.useModule('ti.sdo.codecs.aacenc.AACENC');

to this (note the additional '.ce' in the package name):

AACENC = xdc.useModule('ti.sdo.codecs.aacenc.ce.AACENC');

The details...

Codec packages must provide a Module which implements the ti.sdo.ce.ICodec Interface. This is described in the Codec Engine Algorithm Creator User's Guide. This Codec Module describes information about the codec, such as the IALG function table, the XDM class the codec belongs to, and more.

A handle to this Module is obtained in a config (.cfg) script via xdc.useModule(). When integrating this Codec Module into an Engine or Server (in the config script), the XDCtools perform a type check to ensure that the Module being placed in a Server or Engine does, in fact, implement ti.sdo.ce.ICodec interface. If you try to add a module that doesn't implement ti.sdo.ce.ICodec into an Engine or Server, you'll get something similar to the following type-check error message:

ti.sdo.ce.Server/algs/0: incompatible assignment to mod:$Obj@dc67e::ti.sdo.codecs.aacenc.AACENC

The "quick fix" described above replaces the errant usage of a Module that doesn't implement ti.sdo.ce.ICodec (i.e. ti.sdo.codecs.aacenc.AACENC) with one that does (i.e. ti.sdo.codecs.aacenc.ce.AACENC). There are some related details about this commonly used ".ce subpackage" technique in this article.

Why am I getting a warning about "Can't call useModule() now: ti.sdo.ce.Engine" when building a server?

In Codec Engine 1.10, the ti/sdo/ce/Server.xs module's validate() method incorrectly referenced the Engine object via xdc.useModule():

var Engine = xdc.useModule('ti.sdo.ce.Engine');

which resulted in build output like:

ti.sdo.ce.osal.close() ...
WARNING: Can't call useModule() now: ti.sdo.ce.Engine
ti.sdo.ce.osal.validate() ...

The warning correctly indicates that packages cannot bring in new content (which xdc.useModule() may do) in their validate() method. In all cases, ti.sdo.ce.Engine had already been brought in, so this warning can safely be ignored. It was corrected in CE 1.20.

The warning alerting the user to this violation was added in an XDCtools release that came out after CE 1.10, which is why this wasn't caught in CE 1.10 testing.


How do I link in a user library after modifying an existing DSP server (e.g. all_codecs) in the CE examples?

This can be done by modifying the link.cmd file in the server's directory (e.g. \examples\ti\sdo\ce\examples\servers\all_codecs\link.cmd). This is a standard linker command file. For example, to link with user.lib, simply add the line:

-l user.lib
How do I enable TraceUtils in a single processor environment (e.g. DM355)?

You don't. TraceUtils is only for multi-core devices (e.g. DM644x, OMAP3, etc). It's used to periodically pull trace statements off a remote device, so it doesn't apply to single processor environments.

Why is my linker failing to find the _DSPLINKDATA_init symbol?

If you've reconfigured DSP Link, be sure to prepare the DSP Link packages for XDC-based integration.

The docs seem quite Linux-centric. Can I build this stuff in Windows?

Yes. For example see Creating and Building codec combos in Windows.

My server's info.js file is missing a section on Memory Map. What happened?

First make sure that you can build the server successfully using 'xdc release'. Then check that your server is built using the BIOS OSAL; otherwise, the Memory Map info will not be generated. An example from Codec Engine 2.23:

var osalGlobal = xdc.useModule('ti.sdo.ce.osal.Global');
osalGlobal.runtimeEnv = osalGlobal.DSPLINK_BIOS;

Distribution Questions

Where can I get the latest (or previous) releases of Codec Engine?

You can download the latest Codec Engine release here. Be sure to review the release notes to better understand any dependency updates required as well.

Previous releases require a login account, and are available here.

Why do some distributions have a cetools directory and others don't?

The Codec Engine has (at least) two delivery channels - the standalone release (typically downloaded as a .tar.gz file), and the SDK products (e.g. DVSDK, EZSDK, etc).

The CE "standalone" release includes a large cetools/ directory. This cetools/ directory includes all the CE dependencies except BIOS, XDC Tools and Codegen. These dependencies may include, but are not limited to, Linux Utils, WinCE Utils, Framework Components, EDMA3 LLD, and XDAIS. The rationale for this large, standalone release, is that it enables the end customer to receive all the dependent, compatible packages in one large download - rather than forcing the user to find and download each dependency individually. You should be able to take the (big) standalone CE release - plus XDCtools, BIOS (if needed) and the appropriate Codegen tools - and build and run the examples.

Note that the philosophy behind the cetools/ repository is to include only the packages needed by the examples and typical applications. It does not include full product releases of all components - for example, while the Framework Components (FC) packages are provided in cetools/, you won't find the FC examples, release notes, and other product-level documentation.

The "lite" Codec Engine release is distributed with the DVEVM/DVSDK, and does not include the cetools/ directory. The DVSDK includes all the Codec Engine dependencies already, so it would be redundant to include the fuller "standalone" release. This allows the DVSDK to showcase all the content going into the release (e.g. Link, FC, XDAIS, CMEM) without hiding it from the user... but the cost is more work to set up the CE examples build environment.

There is an obvious tradeoff between the immediate gratification of the standalone release and the headache of updating each component individually.

The xdcpaths.mak file distributed with Codec Engine understands both distribution types, and conditionally adds the cetools/packages repository to the package path if it is found. If it's not found, there are several extra variables that must be set in xdcpaths.mak to point at the dependent packages on your system.
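The conditional-path logic can be sketched with a throwaway makefile (hypothetical variable names; the real xdcpaths.mak differs in detail): the cetools/packages repository is added to the path only if that directory exists.

```shell
# Hypothetical sketch of the kind of logic xdcpaths.mak uses: add
# cetools/packages to the path only if the directory exists.
cat > /tmp/path_demo.mak <<'EOF'
.RECIPEPREFIX = >
CE_INSTALL_DIR ?= /tmp/ce_demo
ifneq ($(wildcard $(CE_INSTALL_DIR)/cetools/packages),)
XDC_PATH := $(CE_INSTALL_DIR)/cetools/packages;$(XDC_PATH)
endif
all:
>@echo "XDC_PATH=$(XDC_PATH)"
EOF
mkdir -p /tmp/ce_demo/cetools/packages
make -f /tmp/path_demo.mak   # cetools found: repository is prepended
```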

Note that the FC product is similarly provided in both standalone and lite releases - the DVSDK distribution includes the lite version, the FC download page provides both the full and lite versions. Users that don't have their own framework and don't want/need to use the DVSDK and/or CE frameworks, may still want to leverage the FC product to manage XDAIS algorithms.

Product Questions

I have my own component - how do I add Tracing to it?

This topic has an example of how to instrument your code using GT. This tracing mechanism is used in Codec Engine, Framework Components, DMAI, etc.

How can I see the DSP side trace from Code Composer Studio (CCS)?

By default, DSP side trace is initially written into a circular trace buffer. From there, some ARM-side utilities (like Engine_fwriteTrace() and TraceUtils) read it out of the trace buffer and make it available to the ARM-side user.

But... if you're not using Engine_fwriteTrace() or TraceUtils, you can view the data in this raw memory trace buffer as ASCII characters. In early releases of Codec Engine, this trace buffer was named RMS_traceBuffer. Recent releases have changed the symbol to Global_traceBuffer.

How can I change the size of the trace buffer?

To change the size of the DSP side trace buffer to 256, for example, add the following lines to the DSP application's configuration (cfg) file:

var osalGlobal = xdc.useModule('ti.sdo.ce.osal.Global');
osalGlobal.traceBufferSize = 256;

Okay, but that memory window's hard to read. Can I direct trace to the CCS console window instead?

Of course that was a loaded question. Yes!

You can make this function call from your DSP application's main() before calling CERuntime_init():

#include <xdc/std.h>                    /* Every CE app needs <xdc/std.h> */
#include <ti/sdo/ce/trace/gt.h>         /* GT_* fxn interface header */
#include <stdio.h>                      /* fxn prototype for printf */

    /*
     * Set the GT print fxn.  Doing this before CERuntime_init() allows you to
     * see the init tracing as well.
     */
    GT_setprintf((GT_PrintFxn)printf);

    /* Enable tracing in all modules */
    GT_set("*+01234567");

Each trace call will now be routed through stdio's printf(), which sends its text to the CCS console. (The argument to the function above can be any function that takes (char *format, ...) arguments -- you can even implement your own that, say, sends trace info to the serial port.)

Note that this may be especially useful for DSP-only devices where developers often debug in CCS.

Note that overriding the GT print fxn on a dual-core device will break Engine_fwriteTrace() and TraceUtils (as well as any tooling that depends on it, like the SoC Analyzer).

Can I have multiple, different engines open at the same time?

Yes, so long as none of the engines have remote codecs contained in conflicting server images (e.g. on DM644x, as long as they don't require different DSP images). If you try to open an engine with a server image different than what's currently loaded on the remote processor, you'll get an error from Engine_open().


Can I remove the package paths embedded in my executable?

CE 2.00 and later, by default, embed full paths to the packages in the executable. This is used by the CE_DEBUG environment variable feature which, when set, will print these full paths. This is often valuable as a sanity check of, for example, what packages were used when building a server.

To remove these paths (perhaps to hide server details), add the following to your .cfg script:

var osal = xdc.useModule('ti.sdo.ce.osal.Global');
osal.embedBuildInfo = false;

Note that this is embedded into both application and server executables, so this technique applies to both application and server scripts.

I have my own codec package and I'd like to try it with the Codec Engine example applications. How do I do it? And why do the codec packages in the CE examples directory look so different from the ones shipped in the DVSDK and/or produced by the RTSC codec package wizard?

If you have a codec package that you'd like to add to an existing server in the Codec Engine examples, e.g. the 'all codecs' server (maybe you generated the package using the codec package wizard), note that your codec package must already have been run through 'xdc release'. To use the codec package - say it is named "mycompany.codecs.mycodec" - simply modify the server configuration file (e.g. all.cfg) to bring it in, e.g.:

/* the module name below assumes the common '.ce' subpackage convention */
var MY_CODEC = xdc.useModule('mycompany.codecs.mycodec.ce.MYCODEC');
Server.algs = [
    {name: "mycodec", mod: MY_CODEC, threadAttrs: {
        stackMemId: 0, priority: Server.MINPRI + 2}, groupId: 0,
    },
];

Then make sure you add your package's location to the XDCPATH at the end of the xdcpaths.mak file, e.g.:

XDC_PATH := <the directory containing the codec package you created>;$(XDC_PATH)

The reason some of the newer codec packages in the CE examples look so strange compared with the ones shipped in the DVSDK is that the actual codec libraries for these 'copy' codecs have been separately packaged up in the XDAIS product. The codec packages in the CE examples merely contain the CE-specific metadata (e.g. IALG function table name, worst-case stack size, etc.) that is normally found in the 'ce' subpackage of a 'normal' codec package - recall that if you look at any codec package shipped in the DVSDK, you will find a subdirectory named 'ce' which essentially contains the same stuff as what's found in the Codec Engine examples' codecs directory.

The base codec packages for the CE examples come from the XDAIS product, and are imported via a 'requires' statement in the package.xdc file of a given copy codec package in the CE examples directory.

Can Codec Engine run custom algorithms (non-codecs)?

Yes. This blog post describes how.

Can I change where the server executable is located?

For most configurations Codec Engine expects the server executable (.x64P) to be in the same directory as the application that wishes to use it. The easiest way to be able to place the server executable in a different location is to use a symbolic link. For example, if I have a server executable decodeCombo.x64P which I have placed in the /opt/servers directory of the target file system and an application called demo in the /opt/mydemo directory I can create a symbolic link using the following commands on the target:

target $ cd /opt/mydemo
target $ ln -s /opt/servers/decodeCombo.x64P decodeCombo.x64P

Using the above commands I will now have a file called decodeCombo.x64P in the /opt/mydemo directory that points to the server executable in the /opt/servers directory. Using this method the server executable can be placed wherever you wish without having to change any build options.

NOTE: A symbolic link is used because it makes it easier to update server executable without having to recreate the link.

Architecture Questions

What does CERuntime_init() do?

The implementation of CERuntime_init() varies with the executable's configuration. This function is auto-generated during the config step. For example, if the executable has a Server configured into it (e.g., the DSP-side server examples), CERuntime_init() will create a Server thread.

The end user's main() - whether server or app - shouldn't know (or care!) what the implementation is; it simply has to ensure that CERuntime_init() is called prior to making any other Codec Engine API calls.

Can CE support remotely accessing multiple slaves?

As of CE 3.21 and SysLink 2.00, devices with multiple slaves (e.g. DM8168) are supported.

Neither the CE 2.x nor DSP Link 1.x releases support a single host controlling multiple slaves.

CE can support a multiple DSP environment (e.g. DM6467 + one or more DM6437). But in these systems, the CE app on the GPP (DM6467's ARM) controls remote codecs on the connected DSP (DM6467's DSP), and a completely separate CE app runs on the DM6437 device. (This is often the more useful arrangement: the DM6437 would be performing I/O, so for performance reasons the DM6467's GPP app wouldn't really want to send data buffers to the DM6437 for processing and get them back.)

Does CE create any threads?

In single processor configurations, no.

In dual processor configurations, there are potentially two threads on the ARM that are created:

  1. A thread for initializing and serializing control commands to DSP Link (e.g. PROC_load(), PROC_start(), etc). (Note, when using LAD, this thread is not used - the LAD daemon serves this purpose.)
  2. The TraceUtils thread - used for periodically pulling DSP-side tracing and logs off the remote processor.

(Note, there is no way to control these ARM-side threads' priorities. Both are intended to run in the background rather than at realtime or high priorities.)

In dual processor configurations, there is initially one thread on the DSP side that's created, this is the DSP Server thread. When remote codecs are created from the ARM, each codec instance is a new thread on the DSP.

Does CE allow interrupts to be disabled (in case an algorithm/codec is not interruptible)?

Per the XDAIS Documentation (Rule 2), an algorithm must be re-entrant within a preemptive environment.

However, if you encounter an algorithm that doesn't comply with this rule and want to work around it:

  • If the alg is local, the application can disable interrupts around calls to the alg
  • If the alg is remote, you may be able to customize the codec's skeleton to disable interrupts around calls to the alg

When multiple algorithms are in use, sometimes, the application may want to override an algorithm's memory requests (in terms of internal/external memory placement for the memory allocated). How can this be accomplished?

This can be handled by configuring DSKT2 accordingly in the cfg file. DSKT2 can be configured to assign memory from any memory region for each type of memory requested by the algorithms (e.g. DARAM0, DARAM1, SARAM0, ESDATA, etc.). Note that this is done on a global basis and applies to all algorithms in the system.

On an individual algorithm instance basis, it is possible to ignore a codec's requests for placement of allocated buffers and force all of the codec's memory requests to be allocated in the external heap mapped to the DSKT2 module's ESDATA configuration parameter. See the CE Application Developer User Guide.

How does CE manage cache?

CE only handles cache maintenance when executing an algorithm remotely, and only the cache on the remote processor. The application itself must handle its own processor's cache, whether the algorithm is local (same processor as the app) or remote (different processor than the app). See the Cache Management article for more details.

Does CE allow cache to be enabled or disabled for algorithm memory (memTab) buffers for each algorithm on an individual basis? If so, how can this be done?

CE does not set the cache mode of a system. It is up to the application developer (for the local case), or to the server integrator (for the remote case), to enable cache for specific memory regions. In the local case, the application developer is also responsible for performing the appropriate cache maintenance to satisfy XDAIS rules and guidelines.

Is there a way that two algorithms running on the DSP in DM6446 can be made to run on the same thread? If so, how can that be accomplished? What differences would this have with respect to another platform, such as the DM355?

One way this could be done is to combine the two algorithms into one by running them sequentially using an 'adapter': a thin layer that wraps around an algorithm instance to allow pre/post processing.

One can override process() and control() calls in stubs and skeletons, can we override create() and delete() as well?

create() and delete() calls are intentionally independent of algorithm class, and as such do not go through any class-specific stubs and skeletons. Hence, they cannot be overridden on a per-algorithm basis like the stubs and skeletons of process() and control() can.

Per the XDAIS specification, algorithm creation params must have a .size field as their first field, and as such can be extended with scalar parameters. Therefore, the only gap is when the algorithm interface wants the application to provide initialization pointers/buffers to the algorithm. If pointers/buffers need to be provided to an algorithm, it's recommended that these be provided via a control() call after creation.

In CE, given the messages exchanged between ARM and DSP have a finite size, is there a limitation on the size of the algorithm creation parameter structure, of the inArgs/outArgs to the process call, and of the dynamicParams structure passed to the control call?

Yes. As of Codec Engine 2.23, the sizes are hardcoded in the Codec Engine framework.

For the algorithm/codec creation parameters, the limitation is 32 words.

For the process/control calls, the size of inArgs+outArgs or dynamicParams+status is limited by the size of the pads defined in the file codec_engine_X_XX_XX\packages\ti\sdo\ce\video1\_videnc1.h (look at the header corresponding to your class of interest, as the padding differs for each class). For example, for VIDENC1 the padding is 2096 bytes.

Note that these size limitations apply to the extended version of these structures if you choose to extend them.

Configuration questions

What is the difference between Server.threadAttrs.stackSize and the sizes specified by the stackSize field in the Server.algs array?

On the DSP server, every algorithm instance is run by an individual task. The Server.algs array specifies the stack size for the task running each algorithm. Hence this stack size must be greater than the worst-case stack usage by an algorithm's process() and control() functions (and by their skeletons, though typically stack usage is higher in the algorithm itself).

Server.threadAttrs.stackSize specifies the stack size of the server task used by CE to create algorithm instances on the DSP. The server task is also responsible for creating the task (with the stack size specified in the Server.algs array) that runs the created codec instance. Server.threadAttrs.stackSize must be larger than the stack size requirements of the basic IALG, IDMA3 and IRES functions implemented by each algorithm/codec, excluding the process() and control() functions.
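As a minimal sketch, both settings live in the server's .cfg file. The codec module variable (VIDENC_COPY) and codec name are assumptions taken from the style of the CE examples:

```javascript
var Server = xdc.useModule('ti.sdo.ce.Server');

/* Stack for the server task, which runs create/delete (IALG/IDMA3/IRES) phases */
Server.threadAttrs.stackSize = 4096;

Server.algs = [
    {name: "videnc_copy", mod: VIDENC_COPY,
     threadAttrs: {
         stackSize: 8192,   /* worst-case process()/control() stack usage */
         priority: Server.MINPRI + 1,
     },
     groupId: 0,
    },
];
```

The per-algorithm stackSize is sized for the task that runs process()/control(), while threadAttrs.stackSize is sized for the creation-time call paths.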

See the Stack issues article for further details about designing a system's stack usage.

I tried setting commLocateRetries and the number of retries does not change. What is wrong?

If you are using the .runtimeEnv param to set the OS adaptation layer via ti.sdo.ce.osal.Global (as is recommended), you can set .commLocateRetries as follows

var osalGlobal = xdc.useModule('ti.sdo.ce.osal.Global');
osalGlobal.runtimeEnv = osalGlobal.DSPLINK_LINUX;
osalGlobal.commLocateRetries = 1000;
If you explicitly set .os and .ipc to configure the OSAL (not recommended, but sometimes necessary), you need to ensure that the .ipc field is set in ti.sdo.ce.ipc.Settings:
var osalGlobal = xdc.useModule('ti.sdo.ce.osal.Global');
var os = xdc.useModule('ti.sdo.ce.osal.linux.Settings');
osalGlobal.os = os;

/* Configure CE to use its DSP Link Linux version */
var ipcSettings = xdc.useModule('ti.sdo.ce.ipc.Settings');
ipcSettings.commType = ipcSettings.COMM_DSPLINK;

var ipc = xdc.useModule('ti.sdo.ce.ipc.dsplink.Ipc');
ipcSettings.ipc = ipc;  /* necessary for ipc.commLocateRetries to take effect! */
ipc.commLocateRetries = 1000;

What is stackMemId, and how do I figure out what number to set it to when configuring the DSP server?

Each algorithm in the DSP server is instantiated and run in a DSP/BIOS task. stackMemId is used to determine where the stack of this task is going to be allocated. Typically, the stack is allocated from one of the existing heaps in the system. Each heap in DSP/BIOS has an identifier (a number) associated with it. This id is determined as follows:

  • The heap used by DSP/BIOS object segment (bios.MEM.BIOSOBJSEG referenced in the DSP/BIOS API guide) always has ID 0.
  • Look at the memory objects (MEM_Obj) listed in the generated *cfg.s62 file in the package/cfg/bin/ directory of your server package. Among the objects that have a heap defined, the first has ID 1, the second has ID 2, and so on; skip the one corresponding to the DSP/BIOS object segment (which, as above, always has ID 0).
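Once you know the heap ID, you supply it via threadAttrs.stackMemId in the Server.algs entry. A sketch (the codec module variable and name are assumptions):

```javascript
var Server = xdc.useModule('ti.sdo.ce.Server');

Server.algs = [
    {name: "auddec_copy", mod: AUDDEC_COPY,
     threadAttrs: {
         stackMemId: 1,   /* allocate this task's stack from the heap with ID 1 */
         stackSize: 4096,
         priority: Server.MINPRI + 1,
     },
     groupId: 0,
    },
];
```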

Runtime Troubleshooting

What should I do first?

There's a good list of things to try first described here.

Note that Codec Engine is rich with embedded trace information. If you're using CE 2.00 or later, and running on an ARM, you can simply set the CE_DEBUG environment variable to 1, 2, or 3 and re-run your application to see what's going on under the hood.

  • CE_DEBUG=1 - Print any warnings and errors - a good first start to see if something easy and fundamental is going wrong
  • CE_DEBUG=2 - Print important details - this is recommended to get a good feel for what's going on
  • CE_DEBUG=3 - Print everything! Be prepared, this will be loud and often obnoxious.
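For example, from a Linux shell (the executable name is an assumption), the variable can be set for a single run and the trace captured to a file:

```shell
# Capture warnings and important CE details from one run of the app
CE_DEBUG=2 ./app.out > ce_trace.txt 2>&1
```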

On multi-core systems, enabling CE_DEBUG will:

  1. collect DSP-side trace immediately after every operation that required a trip to the DSP. This is helpful for getting interleaved GPP/DSP tracing
  2. disable TraceUtils (and also any possibility of using the SoC Analyzer). This is done to enable the GPP/DSP interleaved trace.

There are also details and debugging techniques in the CE Application Developer User's Guide.

Why do I get error 0x80008000 during Engine_open()?

This indicates a permissions error (DSP Link's DSP_EACCESSDENIED), and is commonly encountered when the Link driver which has been insmod'd doesn't match the version linked into the application.

More information on DSP Link error codes is available here.

Why do I get error 0x80008013 during Engine_open()?

This indicates an 'out of range' error (DSP Link's DSP_ERANGE), and is commonly encountered when the DSP image being loaded contains code or data outside of the memory range provided to DSP Link. Double check your memory map.

More information on DSP Link error codes is available here.

Why do I get error 0x80008017 when creating my remote algorithm?

Often, this error generates trace like this:

 ...Engine_createNode> Remote node creation FAILED (0x80008017).

0x80008017 is RMS_EINVUUID found in ti/sdo/ce/rms.h. This error occurs when creating a remote algorithm, if the unique ID (a 32-bit number typically autogenerated from the globally unique codec package name) isn't found on the remote processor.

The ARM-side has looked up the codec name indicating which codec to create, found its unique ID (UUID), and sent that ID to the remote processor to have it created. When the remote processor looked up that unique ID in its codec table and didn't find it, RMS_EINVUUID was returned.

During startup, when CE_DEBUG is set, the components in the system, both ARM and DSP, are displayed. This may help identify which codecs are in the remote system.

Why do I get error 0x80008018 when creating my remote algorithm?

Often, this error generates trace like this:

 ...Engine_createNode> Remote node creation FAILED (0x80008018).

0x80008018 is RMS_EINVPROT found in ti/sdo/ce/rms.h. This error occurs when creating a remote algorithm, if the stubs (ARM-side) and skeletons (DSP-side) for the given class of algorithm (e.g. VIDDEC2) don't "speak the same protocol". That is, the marshalling protocol of arguments in the stubs doesn't match the unmarshalling protocol of the arguments in the skeletons. This runtime sanity check of system integrity helps avoid more difficult to debug runtime crashes.

In short, the version of Codec Engine built into the DSP-side doesn't match the version built into the ARM-side. Note that in practice the exact versions of Codec Engine built into the ARM and DSP sides don't have to match, but this runtime check does catch compatibility breaks when they occur.

During startup, when CE_DEBUG is set, the versions of components in the system are displayed, so you can compare which versions were used on each side.

What does "Assertion 'nodeAttrs->size <= (sizeof(RMS_Word) * 32)' failed" mean?

This confusing error is caused when the codec's create params are "too big". Often this is seen when the .size field of the create params is uninitialized.

There was a bug filed (SDSCM00019266) and fixed (in CE 2.00) to improve the user's experience when hitting this error. In CE 2.00 and later, if the create params are "too big" the create() call will return an error before attempting to create the algorithm on the remote processor, rather than generate the confusing assert message.

Why does my system crash at random times?

Many users are hit by Stack issues. Check that article for debugging techniques.

Why does my system get unstable when I make it multithreaded?

There can be lots of issues which exhibit this behavior. A common mistake is described here:

Engine and Codec handles are not thread safe

In the API Reference Guide's fine print for Engine_open() and the various codec *_create() APIs is the following:

Engine handles must not be concurrently accessed by multiple threads; each thread must either
obtain its own handle (via Engine_open()) or explicitly serialize access to a shared handle.

If Engine and/or Codec handles are shared across threads, and are not protected, the system may become unstable.

Typically, the easiest solution is to have each thread call Engine_open() to obtain its own handle. Since threads can be started on demand and may terminate at any time, it is best practice to keep such handle-based resources thread-local, e.g. on the thread's stack (as part of a "this" object passed via the thread's entry point, or as a local variable in the thread's main routine).
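A non-compilable sketch of the per-thread pattern (the engine name "myEngine" is an assumption, and CE headers are required to actually build this):

```c
/* Each thread opens - and eventually closes - its own Engine handle */
void *workerThread(void *arg)
{
    Engine_Error ec;
    Engine_Handle hEngine = Engine_open("myEngine", NULL, &ec);

    if (hEngine != NULL) {
        /* ... create codecs and process using only this thread's handle ... */
        Engine_close(hEngine);
    }
    return NULL;
}
```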

Also see the Multiple Threads using Codec Engine Handle article.

Why doesn't VISA_delete() call my alg's algFree() method?

In environments where CE can use Framework Component's DSKT2 module for managing XDAIS algs, DSKT2 includes an optimization that doesn't require invoking the algorithm's algFree() method.

Why does Engine_getCpuLoad() fail with return code -1?

Engine_getCpuLoad() is/was only supported for acquiring the CPU load of a remote Server. This particular fxn has been deprecated in recent releases and replaced with Server_getCpuLoad(), whose name better reflects the intent of the API.

Engine_getCpuLoad()/Server_getCpuLoad() is not intended to obtain the CPU load for the app-side processor, whether the "CE App" is on the ARM or DSP. On platforms where there is no DSP (e.g. DM355), this API is not supported and hence returns an error when called.

Why am I getting the message "CV - VISA_allocMsg> FAILED to allocate message. Try increasing # messages for codec."?

This message is generated during a call to a codec class' "processAsync" function, e.g., AUDENC1_processAsync().

The codec "asynchronous" buffer submission method requires more than the 1 message that is allocated by default. The "synchronous" method requires only one message, hence the default of 1.

To cause more messages to be allocated you need to append the codec name string with the number of messages in the call to the codec class' "create" function, e.g.,

   AUDENC1_create(engineHandle, "audenc1_copy:::3", NULL);

The number of messages should be set to the "buffering" level. For example, if you're doing triple-buffering, append the string ":::3" to the codec name in the "create" call. This is illustrated in the CE example audio1_copy/async/app.c:

   static String decoderName  = "auddec1_copy:::3";
   static String encoderName  = "audenc1_copy:::3";

This feature is documented in the CE documentation. Take a look at the CE docs for AUDENC1_create(), for example, which also mentions the 1st and 2nd colon-separated fields (which are empty in the example above).

Why is the CMEM heap buffer address not within my specified range of CMEM physical addresses?

The CMEM kernel module cmemk.ko prints a banner upon being inserted into the kernel, and within that banner is a line of the form "allocated heap buffer 0xc7000000 of size 0x8ac000". The heap buffer address printed here is not the physical address - it is the kernel's assigned virtual address, and this kernel virtual address can just happen to coincide with the physical address range specified by phys_start/phys_end parameters for the cmemk.ko 'insmod' command. In fact, the virtual address suggested above (0xc7000000) lies within the available physical address range of the DDR2 memory on the OMAP-L138 EVM.

Why do I need to use CMEM's allowOverlap=1 'insmod' option even though the Linux kernel doesn't actually use the physical memory range that I am specifying?

The CMEM kernel module's allowOverlap=0|1 option was originally introduced to allow the system developer to forcibly install CMEM in a physical memory range that appears to fall within the kernel's memory range. The range check itself is intended to alert the user that they have chosen an invalid memory range, as well as preventing kernel corruption that might result if the overlap was allowed to happen.

The range check is about as basic as it can be, as it merely checks whether the beginning address of a CMEM memory block is below the end of the Linux kernel's valid physical memory range (which is defined by the u-boot bootargs parameter of mem=##M). In other words, the check was just assuring that the CMEM memory was *above* the kernel's memory. Since this check was introduced, valid use cases have arisen where CMEM is granted physical memory that lies either below or both above and below the Linux kernel's memory range (or ranges). One use case stems from the fact that the Linux kernel was modified to allow multiple mem=##M block specifications, for which each mem=##M can be adorned with a physical address:

  bootargs= ... mem=32M@0xc0000000 mem=64M@0xc4000000 ...

With the above specification, the kernel will occupy physical address ranges 0xc0000000->0xc2000000 and 0xc4000000->0xc8000000, leaving a hole in the range 0xc2000000->0xc4000000. With this setup, CMEM could be granted the memory hole as long as the allowOverlap=1 option is specified on the CMEM 'insmod' command:

  % insmod cmemk.ko phys_start=0xc2000000 phys_end=0xc4000000 pools= ... allowOverlap=1

A more typical use case exists on the DM365, which has TCM memory from physical addresses 0x00000000->0x00008000. System developers are known to grant this memory to CMEM, and since it doesn't lie *above* the kernel memory (typically from 0x80000000->0x8X000000) the allowOverlap=1 option must be used or else CMEM will not allow itself to be inserted into the kernel:

  % insmod cmemk.ko phys_start=0x87800000 phys_end=0x88000000 pools= ... phys_start1=0x1000 phys_end1=0x8000 pools1= ... allowOverlap=1

Note that since 0x00000000 is not allowed as a base address for CMEM blocks, the lowest allowable address of 0x1000 (due to PAGE_SIZE in Linux) was used in phys_start1 above.

See Also