NOTICE: The Processors Wiki will reach End-of-Life in December 2020. It is recommended to download any files or other content you may need that are hosted on the site. The site is now set to read-only.

SoC Analyzer

From Texas Instruments Wiki




The TMS320DM64xx SoC Analyzer is an industry-leading system analysis and visibility tool for DVSDK applications running on DM64xx processors. The visualizations of these applications include ARM/DSP interactions, processing time, system load and execution state. The analyzer collects time-stamped data in real time from underlying components (e.g. Codec Engine, DSP/BIOS and Linux), then transports the data via Ethernet to the host PC. The data is then correlated and analyzed to provide a graphical representation of the overall system performance and execution on a single timeline. This provides a complete system view of the application and eliminates the tedious process of manually collecting and comparing data on each core before optimization begins. Complete system visibility enables developers to identify bottlenecks that were previously hard to detect. Since the DM64xx SoC Analyzer is minimally intrusive, all this functionality comes without adding significant load to the application.

Document Objective

In this document we show how to set up the SoC Analyzer environment and how to perform a first basic trace of your application. Next we suggest several ways to instrument your code to perform application-specific tracing. We then dig deeper into what happens behind the scenes of the SoC Analyzer user interface by analyzing how trace information is captured from Codec Engine, BIOS and Linux. For a deeper understanding of the different display graphs and filter options, we refer to the Dynamic Help of the SoC Analyzer.


Trends in industry point to larger and consequently more complex systems. Systems on Chip (SoC) with multiple cores per device are becoming more and more popular. As system complexity grows, so does software complexity. Linux and other general-purpose operating systems are used to manage the range of processors found in such devices. The dual-core (ARM + DSP) DaVinci product line follows this trend of complex multi-core systems: besides the usual OS running on TI DSPs (DSP/BIOS), Linux distributions are adopted. At the same time, companies are putting their emphasis on development productivity and product quality. To satisfy this trend in an increasingly complex environment, and to reduce time-to-market, smart system analysis tools are needed. Such tools are meant to tame the complexity of today's larger systems by providing an easy-to-read system analysis. Especially in a multi-core environment, analyzing, benchmarking and verifying component and system performance can become a very tricky task.

The SoC Analyzer provides a solution for TI DaVinci multi- and single-core devices that makes reading pages of hard-to-read log files redundant by displaying the relevant trace data graphically. It precisely analyzes CPU utilization and inter-core processes, benchmarks codec/algorithm performance, and provides visibility into "black-box" system components (e.g. Codec Engine), to mention just a few features upfront. The SoC Analyzer offers an easy-to-read way to track down even hard-to-identify system bottlenecks very quickly, and so helps to maintain development time and quality standards.

Supported Devices

  • Tested Platforms
    • DM644x SoC Analyzer: DM6446
    • SoC Analyzer 1.0: DM6437
    • SoC Analyzer 1.1: DM6437 + DM648
    • SoC Analyzer 1.2: DM6437 + DM648 + DM6446 + DM6467

  • Multi-core minimal s/w requirements for standard use of the SoC Analyzer
    • Linux
    • DSP/BIOS
    • Codec Engine
    • Trace Utility
    • HTTP Server (Linux)

  • Single-core minimal s/w requirements for standard use of the SoC Analyzer
    • DSP/BIOS
    • DSP/BIOS Log Server
    • HTTP Server (Network Developer's Kit - NDK - TCP/IP Stack)

The SoC Analyzer is delivered as part of the DVEVM/DVSDK and is also available from the DVEVM update site.

SoC Analyzer for Multicore Devices

SoC Analyzer Overview

At its core, the SoC Analyzer is an application that loads data from text files and displays the data graphically or in tables. Those text files contain various trace data fetched from the ARM or DSP in a defined format, and they are available in the target file system (assuming tracing is enabled in the application). The SoC Analyzer can use a web server to download those files to the host file system, or they can be transferred manually to a special location on the host file system where the SoC Analyzer application looks for them.

A basic overview of the SoC Analyzer trace framework is shown in Fig. 1. The SoC Analyzer utilizes several trace mechanisms (Generic Trace and BIOS Log) available on DaVinci platforms to gather trace data from the Codec Engine and DSP/BIOS. The target Linux OS runs the Linux Trace Toolkit (LTT) to trace Linux thread scheduling. The SoC Analyzer uses the TraceUtil module to customize the trace parameters according to the needs of the SoC Analyzer solution. At application runtime, the logged trace data is stored in temporary trace files on the target file system. When data logging has finished, the trace files are sent to the host file system via a web server, where the SoC Analyzer application picks them up for further processing. The application correlates all trace files (i.e. the CE, DSP/BIOS and LTT trace files) on a common timeline and displays the trace information graphically and in tables.


Requirements and Setup

In general, every application developed for the supported platforms can be analyzed by the SoC Analyzer. All that needs to be done is a slight modification of a few source files, enabling additional features in the target kernel, and establishing a web server on the target board. This may sound complex, especially to less experienced Linux users, but we'll show all necessary steps by means of a DVSDK demo application in the next chapters. Those steps are generally valid and can easily be applied to your own application.

Setup Overview for DVSDK Demos

Before you can start using the SoC Analyzer DM644x/DM646x solution, you must set up the host environment and ensure that specific files are installed on the DM644x/DM646x target board. In the following chapters, we'll discuss how to set up the target Linux kernel and how to modify the source code in order to make the demo applications traceable by the SoC Analyzer.

DM644x and DM646x target board requirements

Some of the setup steps described below might have already been applied in newer versions of the DVSDK. However, for older versions and for custom applications it is essential to describe how to set up the target environment and instrument the application in order to analyze it with the SoC Analyzer. Ways to check whether required setup steps have already been applied are mentioned in the guidelines below and are marked with CHECK.

Required files and directories

Copy the contents from the appropriate device directory on your host machine


to the target board, where the placeholder is the 1.20, 1.30, or 1.40 DVSDK software version. Finally, run the script to move the files to the correct directories. By default, the cgi-bin files (used to control data transfer to the host) are installed in /opt/dvsdk/web/cgi-bin for access by the (default) HTTP server. Required drivers and scripts (which configure the EMIF counters and read the DSP speed) are installed in the /opt/dvt-dsa folder. Startup scripts for the EMIF kernel module are installed in /etc/init.d and linked from /etc/rc.d/rc3.d/S88emif.

Enable additional features in the target Linux kernel

Instrument the Linux operating system on the target board to include Linux Trace Toolkit (LTT) and relay file system support when you rebuild the Linux kernel using the procedure described in the DVEVM Getting Started Guide (SPRUE66).

In the kernel directory type

make menuconfig

then navigate to Device Drivers -> Filesystems -> Pseudo Filesystems, and set Relayfs file system support as built-in. Return to General Setup and enable Linux Trace Toolkit Support as built-in. To rebuild the Linux Kernel, type

make ARCH=arm CROSS_COMPILE=arm_v5t_le- uImage

The uImage can now be copied to a place from which U-Boot can use Trivial FTP (TFTP) to download it to the EVM:

cp ~/workdir/lsp/ti_davinci/arch/arm/boot/uImage /tftpboot
chmod a+r /tftpboot/uImage

Mount Relay File System

CHECK: To check whether a relay file system has already been established on your target, type mount on the command line and look for a line like

relayfs on /mnt/relayfs type relayfs (rw)

If so, the next step can be skipped.

The /etc/fstab file must be edited to ensure that the RelayFs is mounted automatically each time the target board is rebooted. First, create a mount-point directory on the target file system:

mkdir /mnt/relayfs

Append this text to the /etc/fstab file to automatically mount the RelayFs each time the target board boots.

relayfs /mnt/relayfs relayfs rw 0 0
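The two steps above can be made idempotent in a small shell sketch. Here the change is staged on a copy of fstab so it can be inspected before touching the real file; the paths are the ones used above:

```shell
# stage the change on a copy; on the target you would edit /etc/fstab itself
cp /etc/fstab /tmp/fstab.new 2>/dev/null || : > /tmp/fstab.new

# append the relayfs entry only if it is not already present
grep -q '^relayfs ' /tmp/fstab.new || \
    echo 'relayfs /mnt/relayfs relayfs rw 0 0' >> /tmp/fstab.new

tail -n 1 /tmp/fstab.new   # -> relayfs /mnt/relayfs relayfs rw 0 0
```

Re-running the snippet leaves the file unchanged, which is why the grep guard is worth the extra line in a boot-time script.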

Setup TraceUtil in the target application

CHECK: The next steps are most likely already done in the DVSDK demo applications. To verify, check the application configuration file (encode/decode/encodedecode.cfg) and main.c for code lines mentioned below and add missing lines if necessary.

Add the TraceUtil module to your GPP application configuration (*.cfg) script: <syntaxhighlight lang="javascript">
var TraceUtil = xdc.useModule('ti.sdo.ce.utils.trace.TraceUtil');

/* configure TraceUtil to run in "SoC Analyzer-friendly" mode */
TraceUtil.attrs = TraceUtil.SOCRATES_TRACING;
</syntaxhighlight>

And add the following to your GPP application's C source code: <syntaxhighlight lang="c">
#include <ti/sdo/ce/utils/trace/TraceUtil.h>

/* ... */
CERuntime_init();

/* ...
 * Ensure this is done sometime after CERuntime_init().  Note that
 * 'engineName' is the same engine string name passed to Engine_open().
 */
TraceUtil_start(engineName);

/* Early releases of TraceUtil have a startup race condition.  Sleep
 * here to give the TraceUtil daemon thread a chance to run.
 */
sleep(1);

/* ... */

/* When closing your app, stop the TraceUtil daemon */
TraceUtil_stop();
</syntaxhighlight>

Copy Symbol Files to Host File System

To get a correct trace result, you must copy the appropriate application symbol files from the target to the host PC. If the SoC Analyzer is used with the standard demos, these symbol files must be copied from the /opt/dvsdk target board directory to a directory on the host machine before they can be used in the Online Capture process. For the DVSDK demos they are:

  • decodeCombo.x64P
  • encodeCombo.x64P
  • loopbackCombo.x64P

Establish a Web Server

In order to communicate with the host and to upload trace files, the target board needs to run a web server. On the target, change directory to /opt/dvsdk/web and type

./thttpd -c "/cgi-bin/*"

Hint: Modify your target startup routine to establish the HTTP server during target boot. Create the following little shell script as /etc/rc.d/init.d/webserver:

#!/bin/sh
echo "webserver startup"

cd /opt/dvsdk/web
./thttpd -c "/cgi-bin/*"

echo "webserver established"

Create a soft link to that script by typing

ln -s /etc/rc.d/init.d/webserver /etc/rc.d/rc3.d/S92webserver

Trace your Application

In the Control Panel of the SoC Analyzer, choose Online Capture as the Action. Fill in the IP address of the target board and the time period over which you want to collect trace data (this should not significantly exceed 10 seconds). Under Session, you will find three choices: decode, encode, or loopback. Here, you simply specify the directory where the trace files are stored after they have been downloaded from the target board. Those files are stored in


Hint: Session names can be added to the pull-down menu. If your application does neither decoding nor encoding, adding custom session names can prevent confusion. To add another session folder, just create a new folder in


in your Windows file system. It will then show up in the session pull-down menu.

After starting your application on the target, hit the Start Capturing button in the Control Panel of the SoC Analyzer. The trace will now be enabled for the number of seconds you specified. The trace data is logged on the target side in the /tmp directory and is downloaded by the host after the data logging has finished. Those trace files are then used to draw the tables and graphs in the SoC Analyzer user interface.

Different Trace Modes

There are three different possibilities of performing a trace:

  • Online Capture
  • Offline Capture
  • Post Process

Online Capture

In the Online Capture mode, the trace is started by the SoC Analyzer tool itself. When the Start Capture button is hit, the SoC Analyzer application calls a shell script which sends commands via the HTTP server to the target board. The script

  • removes old trace files from the directory,
  • initializes and starts the Linux Trace Daemon,
  • establishes a command pipe that enables the application to communicate with TraceUtil,
  • resets all needed trace files
  • and finally sets the trace parameters required by the SoC Analyzer application.

After the specified logging time has run out and the trace data has been successfully written to the trace files on the target, the trace parameters are reset to their defaults and the Linux Trace Daemon is stopped. The files are then downloaded by the host application to produce graphs and table views of the logged trace information.

Post Process

The Post Process option recalls a previous trace analysis. The trace files in the corresponding Decode, Encode, or Loopback directory on the host side are re-loaded to reproduce the graphical analysis. In the Post Process case, no HTTP connection is necessary. This procedure allows application engineers to simply exchange trace files for application analysis instead of sending executables or whole source packages back and forth.

Offline Capture

When the Offline Capture option is chosen, the files of a previously performed trace are downloaded from the target's directory to plot the analysis graphs. This option is useful when trace logging is not enabled by the SoC Analyzer tool itself (by hitting the Start Capture button) but in the user's application code at runtime.

EMIF Counters

The SoC Analyzer DM644x solution uses two counters to evaluate External Memory Interface (EMIF) efficiency, backlog and bottleneck issues with the application's DDR2 Memory Controller. These counters are configurable to count events relating to the SDRAM, which include accesses, activations, reads/writes, etc. In addition, the counters are configurable to count events for a specific master peripheral.
Details about the EMIF counter configuration can be found in the SoC Analyzer Help.

Caution: You should not change the EMIF configuration using your application or using Code Composer Studio. Changing the EMIF configuration after starting the data collection (capture, transfer and processing) generates unpredictable results. SoC Analyzer uses the EMIF configuration values resident on the target when performing its logging.

Capturing Data

As mentioned above, the Online Capture option allows the user to start trace logging at a certain time and to specify its duration. The trace parameters are sent to the running application via a UNIX command pipe, which is created when the Start Capture button is hit. Then a start command ("socrates=on") is written to the command pipe, which changes the default trace parameters of the application. This command is received by the TraceUtil daemon, which we have already configured into the system as described above.


The TraceUtil module contains many configurable parameters for capturing trace, independent of the SoC Analyzer. One commonly used configuration is SOCRATES_TRACING. In this config, the TraceUtil parameters are initialized as follows (via the simple TraceUtil.attrs = TraceUtil.SOCRATES_TRACING; assignment described above):

<syntaxhighlight lang="javascript"> const attrs SOCRATES_TRACING = {

  localTraceMask: "*=67",
  localTraceFile: "/tmp/cearmlog.txt",
  dsp0TraceMask: "*=67",
  dsp0TraceFile: "/tmp/cedsp0log.txt",
  dsp0BiosFile: "/tmp/bioslog.dat",
  traceFileFlags: "w",
  refreshPeriod: 0,
  cmdPipeFile: "/tmp/cecmdpipe",
  cmdAliases: [
    {
      alias: "socrates=on",
      cmds: [
        "tracemask=*+5",
        "dsp0tracemask=*+5,ti.bios=3",
        "refreshperiod=200",
      ],
    },
    {
      alias: "socrates=off",
      cmds: [
        "tracemask=*-5",
        "dsp0tracemask=*-5,ti.bios-3",
        "refreshperiod=0",
      ],
    },
  ],

} </syntaxhighlight>

Using this configuration initializes the parameters to:

  • Only warnings (level 6) and errors (level 7) are enabled in all modules (*=67)
  • Trace files created in the /tmp directory
  • Trace collection from the DSP is disabled (refreshPeriod: 0)

It also creates two 'aliases', or string commands, listened to on the command pipe - "socrates=on" and "socrates=off". Additionally, this config defines the actions to take when those commands are received.


There are two trace mask-related config params: one for code running on the local GPP processor (localTraceMask) and one for code running on the remote DSP processor (dsp0TraceMask). These trace masks enable certain trace modules to log information at certain information levels. The '*' is a wildcard character, which refers to all available modules on either the GPP or DSP side. The integers assign trace levels to the modules. By setting e.g. localTraceMask: "*=67", we set all available trace modules on the GPP side to log trace information at levels 6 and 7.

Each trace mask has a corresponding trace file, specified by localTraceFile and dsp0TraceFile, where the trace logs are stored. The dsp0BiosFile holds DSP/BIOS-related trace information. DSP/BIOS trace modules are specified in the DSP trace mask and must be activated separately, i.e. the wildcard character '*' doesn't apply to those modules.

The refresh period is also configured here; it is initially set to zero, i.e. no trace data is fetched from the DSP. The cmdPipeFile parameter gives the name of the command pipe file.

By hitting the Start Capture button in the SoC Analyzer Control Panel, the command "socrates=on" is written to the command pipe and the trace attributes change accordingly. Also the Linux Trace Daemon is called, to gather Linux side trace information.

The cmdAliases structure holds command aliases and the corresponding commands to execute when an alias is called. When "socrates=on" is written to the command pipe, the GPP and DSP trace masks are complemented by trace level 5, and additionally a DSP/BIOS module logging at level 3 is activated in the DSP trace mask. All trace modules now operate at levels 5, 6 and 7, and a special ti.bios module is added at level 3. The refresh period is set to 200, i.e. every 200 ms information is fetched from the DSP if data is available. After the specified trace time (set in the SoC Analyzer Control Panel) is over, the "socrates=off" command is written to the command pipe to disable data logging. The operation "-5" is applied to the wildcard character, which removes trace level 5 from all modules. Accordingly, the "-3" operation removes trace level 3 from the ti.bios module. The refresh period is also reset to 0, i.e. no data is fetched from the DSP anymore.

When the trace duration is over and all data has been written to the trace files, they are downloaded by the host application to display the results.

Caution: You should capture execution trace data for less than 20 seconds to ensure that log buffer capacity is not exceeded.

Controlling Trace at Run-Time through a Named Pipe

When you start a SOCRATES-enabled application, it initially produces no trace other than (potentially) warnings and errors (corresponding to trace levels 6 and 7). Another way to change this behavior is via the command line. In this case, the SoC Analyzer application doesn't need to run.

On the command line on the target type

killall tracedaemon

to stop any previously running Linux Trace Toolkit (LTT) trace daemon. Then remove any existing trace files from former traces:

rm /tmp/trace.trace
rm /tmp/proc.txt
rm /tmp/proc_thread.txt
rm /tmp/output.txt


Then type

mkfifo /tmp/cecmdpipe
chmod 666 /tmp/cecmdpipe

to create a UNIX command pipe and change its permissions. This only needs to be done for the first run of the application, since TraceUtil does not delete the pipe at the end. Send "resetfiles" to the command pipe to reset the trace files to size 0.

echo resetfiles > /tmp/cecmdpipe

After starting your application, type

tracedaemon -eSTART -eSCHED -ePROCESS -eFS /tmp/trace.trace /tmp/proc.txt

to start the trace daemon to capture LTT logs.

To create a list of processes and threads that exist in the system at the beginning of the Linux trace and to save it in /tmp/proc_thread.txt, type

/opt/dvt-dsa/mapProcThread /tmp/proc_thread.txt

Send "socrates=on" to command pipe to start capturing Codec Engine and DSP/BIOS data.

echo socrates=on > /tmp/cecmdpipe

To turn off tracing, write "socrates=off" to the command pipe:

echo socrates=off > /tmp/cecmdpipe

To stop the trace daemon and end capturing Linux Trace Toolkit (LTT) data, type

killall tracedaemon

on the command line.
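The command pipe itself is an ordinary UNIX fifo, so the mechanics can be seen with a toy reader standing in for the TraceUtil daemon. The pipe and file names here are illustrative, not the real /tmp/cecmdpipe:

```shell
PIPE=/tmp/demo_cmdpipe
rm -f "$PIPE"; mkfifo "$PIPE"

# toy "daemon": block on the fifo, act on the one command it receives
( read cmd < "$PIPE"; echo "daemon received: $cmd" > /tmp/demo_out ) &

echo "socrates=on" > "$PIPE"   # writer side, exactly like the echoes above
wait                           # let the toy daemon finish
cat /tmp/demo_out              # -> daemon received: socrates=on
rm -f "$PIPE"
```

Note that a write to a fifo blocks until a reader has it open, which is why sending pipe commands before the instrumented application is running appears to hang.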

Hint: Obviously, it is quite painful to send all those commands to the command pipe sequentially. Create a script in your application directory on the target side that includes all the command line commands for starting the Socrates trace, and create a second script in the same way to stop tracing. Now you just have to execute the start script to start the trace and the stop script to stop it.
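For example, a start script could bundle the commands from this section as follows. The file name start_trace.sh is hypothetical; this sketch only writes the script and syntax-checks it, rather than running the target-only tools:

```shell
# write the hypothetical start script (a stop script is built the same way
# from the "socrates=off" and "killall tracedaemon" steps)
cat > /tmp/start_trace.sh <<'EOF'
#!/bin/sh
killall tracedaemon                      # stop any old LTT daemon
rm -f /tmp/trace.trace /tmp/proc.txt /tmp/proc_thread.txt /tmp/output.txt
[ -p /tmp/cecmdpipe ] || { mkfifo /tmp/cecmdpipe; chmod 666 /tmp/cecmdpipe; }
echo resetfiles > /tmp/cecmdpipe         # truncate the trace files
tracedaemon -eSTART -eSCHED -ePROCESS -eFS /tmp/trace.trace /tmp/proc.txt
/opt/dvt-dsa/mapProcThread /tmp/proc_thread.txt
echo socrates=on > /tmp/cecmdpipe        # enable CE/BIOS capture
EOF
chmod +x /tmp/start_trace.sh
sh -n /tmp/start_trace.sh && echo "start_trace.sh: syntax OK"
```

Remember that the echoes into /tmp/cecmdpipe only return once the instrumented application is reading the pipe, so run the script after the application has started.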

Remark: Executing the start script is equivalent to what happens when the Start Capture button is hit in the SoC Analyzer Control Panel in Online Capture mode.

After logging trace data over a certain time period, the created trace files can be downloaded by the SoC Analyzer application in Offline Capture mode to display the trace results.

Caution: You should capture execution trace data for less than 20 seconds to ensure that log buffer capacity is not exceeded.

Controlling Trace in Application Code

Capturing LTT and CE/BIOS logs can be turned on and off from within an application. This feature allows the user to time the logging process more accurately, e.g. start tracing at a certain time by utilizing Linux timers and log for a certain number of frames. Example C code to make logging callable within your application can be found here.

Custom Traces

The SoC Analyzer is a solution based on Texas Instruments Data Visualization Technology (DVT). It traces processes happening on the DSP and GPP sides, displays their interaction and also provides codec statistics. It relies on a special trace setup, which is defined in the SOCRATES_TRACING attributes structure. The solution needs all the data, according to the trace levels specified for the modules, and additionally it needs the trace logs of the Linux Trace Toolkit. Sometimes such a complete trace might be unnecessary to track down a certain misbehavior of an application; it might even be too intrusive and disturb the application by adding additional CPU load. In the next chapters we describe the trace modules and explain their logging levels in more detail. We then show how to modify TraceUtil to perform custom traces. Those custom log files are available as easy-to-read text files, or can at least easily be processed into a human-readable format.


The TraceUtil module is an ARM-side user interface that simplifies CE's complex trace features. Using TraceUtil, the user can set parameters for the different traces running in the background of the application, i.e. the Generic Trace (GT) for tracing Codec Engine related information (DSP and GPP side) or the BIOS Log for DSP-side events.

TraceUtil takes care of three kinds of tracing that Codec Engine modules can produce:

  • Tracing on the Arm side: many Codec Engine and other Arm-side modules drop tidbits of information that give insight about their state, or they print warning and fatal error messages.
  • Tracing on the DSP side: same as Arm-side modules -- except this is often the only way to find out what is happening on the DSP. Arm-side code can use printf() or the user can fire up GDB, but for DSP, tracing is often the only way to send a line of text over to the Arm.
  • BIOS logging on the DSP side: BIOS can collect information on various system events like task switching etc. This information, called the "BIOS log" (because it uses the BIOS LOG module), is important when you use the SoC Analyzer tool to analyze performance. Unlike the first two kinds of trace, which are ASCII text, the BIOS log is a binary file. TraceUtil lets you specify what tracing features you want to enable when you build your application, and lets you override some or all of those specs via Linux shell environment variables when you run the application.

Trace Masks

The value of the trace mask is a mask name/value pair. If you change the trace mask to "*=01234567", you enable full tracing (which results in a lot of output), but this is often useful in identifying what is going on internally. You can also configure the trace mask to set specific masks to specific levels during debugging. To configure more than one mask, separate the masks with a comma or a semicolon. Below is a list of mask names. For example, to use the default setting for all modules, except "OM" (which you don't want to see at all) and "CV" (for which you want to see everything), you could set the trace mask to the following:


Special modules, e.g. the BIOS trace module, are not affected by the wildcard ("*"). You must name them directly to change their flags, e.g.



Module masks:

  • OC (OSAL Comm): OSAL's Comm module - used to abstract messaging APIs across OS's
  • OP (OSAL Process): OSAL's Process module - used to abstract process APIs across OS's
  • OM (OSAL Memory): OSAL's Memory module - used to abstract memory APIs across OS's
  • OG (OSAL Global): OSAL's Global module - used to abstract OSAL-generic APIs across OS's
  • CE (Codec Engine): CE Runtime APIs
  • CV (Codec Engine - VISA): CE VISA APIs
  • CR (Codec Engine - RMS): CE's Server daemon
  • CN (Codec Engine - Node): CE's Node module - used to instantiate codecs and communicate through custom skeletons
  • ti.sdo.ce.osal.AlgMem (OSAL Algorithm): OSAL's Algorithm control module - used for creating, deleting and controlling algorithms
  • ti.sdo.ce.osal.power (OSAL Power): OSAL's Power module - used in heterogeneous configurations to power a server on/off. Not supported in all releases.
  • LC (Log Client): GPP-side task that connects to the DSP Log Server and is responsible for GPP-DSP BIOS log message transfer
  • LS (Log Server): DSP-side server to get data logged by DSP/BIOS
Special modules:

  • ti.bios: controls the BIOS TRC module (DSP only)
  • GT_prefix: controls what info is included in each trace line prefix. Levels (classes):
    • none - no prefix
    • 0 - short module name
    • 1 - long module name
    • 2 - trace line class (level)
    • 3 - thread ID
    • 4 - stack address
    • 5 - time stamp
    The default is "135".
  • GT_time: controls the format of timestamps in trace lines. Levels (classes):
    • none - microseconds in hex format (e.g. @0xa0cf80fe)
    • 0 - microseconds, decimal (e.g. @4,021,348us)
    • 1 - seconds (e.g. @0.004s)
    • 2 - delta in us, excluding print time (e.g. @+0,000,259us)
    • 3 - clock ticks, decimal (like level 0, but for the DSP, where the time stamp is a number of elapsed clock ticks, not elapsed microseconds)
Trace levels:

  • 0 - exit/enter functions
  • 1 - debug information of interest to the module developer
  • 2 - debug information of interest to the module developer and maybe the user
  • 3 - open: no functionality yet
  • 4 - information for the developer that is likely to be important for the user
  • 5 - benchmarking
  • 6 - warnings
  • 7 - errors

Caution: If you set TraceUtil on the Arm side to use BIOS logging, you must have BIOS logging enabled in your DSP server image. You do that by adding this one line to your DSP server's configuration script: <syntaxhighlight lang="javascript"> var LogServer = xdc.useModule('ti.sdo.ce.bioslog.LogServer'); </syntaxhighlight>

Other Default Trace Settings

To be set in the application configuration file as <syntaxhighlight lang="javascript"> var TraceUtil = xdc.useModule('ti.sdo.ce.utils.trace.TraceUtil'); TraceUtil.attrs = TraceUtil.<Trace_Name>; </syntaxhighlight>


<syntaxhighlight lang="javascript"> const Attrs NO_TRACING = {

  localTraceMask: "*=",
  localTraceFile: null,
  dsp0TraceMask: "*=",
  dsp0TraceFile: null,
  dsp0BiosFile: null,
  traceFileFlags: null,
  refreshPeriod: 0,
  cmdPipeFile: "/tmp/cecmdpipe",
  cmdAliases: []

} </syntaxhighlight>


<syntaxhighlight lang="javascript"> const Attrs DEFAULT_TRACING = {

  localTraceMask: "*=67",
  localTraceFile: null,
  dsp0TraceMask: "*=67",
  dsp0TraceFile: null,
  dsp0BiosFile: null,
  traceFileFlags: null,
  refreshPeriod: 300,
  cmdPipeFile: "/tmp/cecmdpipe",
  cmdAliases: []

} </syntaxhighlight>


<syntaxhighlight lang="javascript"> const Attrs FULL_TRACING = {

  localTraceMask: "*=01234567",
  localTraceFile: "trace/cearmlog.txt",
  dsp0TraceMask: "*=01234567",
  dsp0TraceFile: "trace/cedsp0log.txt",
  dsp0BiosFile: "trace/bioslog.dat",
  traceFileFlags: "w",
  refreshPeriod: 100,
  cmdPipeFile: "/tmp/cecmdpipe",
  cmdAliases: []

} </syntaxhighlight>

Configuring TraceUtil for Use at Application Start Time

The trace masks (GPP and DSP) are initialized (in CERuntime_init()) to "*=67", which means display all warnings (6) and errors (7), but nothing else.

CE provides a means for changing the initial trace setting via some environment variables. Before you run your TraceUtil-powered application, if you set one or more environment variables below, you will change TraceUtil properties that you specified in your application.cfg script:

  • CE_TRACE: trace mask for the Arm side, e.g. CE_TRACE="*=0567;OM=0"
  • CE_TRACEFILE: output file for the Arm trace, e.g. CE_TRACEFILE="trace/armtrace.txt"; if it can't be opened (e.g. it points to a directory that doesn't exist), the trace goes to standard output. (Try CE_TRACEFILE=/ for that.)
  • CE_TRACEFILEFLAGS: file creation flags for all files to be opened, e.g. CE_TRACEFILEFLAGS="a" (meaning "append") or "w" (write); these are standard fopen() flags
  • TRACEUTIL_DSP0TRACEFILE: output file for the DSP trace; the same remark holds as for the Arm trace
  • TRACEUTIL_DSP0BIOSFILE: output binary file for the BIOS log; if it can't be opened, the log will not be collected
  • TRACEUTIL_DSP0TRACEMASK: trace mask for the DSP side, e.g. TRACEUTIL_DSP0TRACEMASK="*+01;ti.bios=01234567"
  • TRACEUTIL_REFRESHPERIOD: number of milliseconds before the Arm thread collects all available DSP trace since the last time it did so
  • TRACEUTIL_CMDPIPE: name of the Unix named pipe ("fifo") on which TraceUtil listens for run-time trace commands
  • TRACEUTIL_VERBOSE: when set to 1, makes TraceUtil print what trace settings (masks, files) it uses and where it gets them from; when set to 2 and up, it shows more debugging info

Note: These environment variables are read once, at application startup time. Changing them after the application is executing will not have any effect: when the application runs, the application process gets its own copy of environment variables (changing them in the shell process does not affect the application process), and the application reads its copy of environment variables only when it initializes.

To set those environment variables at start time, just set them on the command line in front of the application execution command, e.g.

TRACEUTIL_VERBOSE=1 CE_TRACEFILE="/tmp/cearmlog.txt" TRACEUTIL_DSP0TRACEMASK="*+2345,ti.bios+23" ./application
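Since the variables are read once at startup, the usual shell scoping applies: a variable prefixed on the command line reaches only that one process, as this sketch shows (CE_TRACE is used purely as an example name, with a toy sh child standing in for the application):

```shell
# the child process sees the variable ...
CE_TRACE="*=0567" sh -c 'echo "app sees CE_TRACE=$CE_TRACE"'   # -> app sees CE_TRACE=*=0567

# ... but the parent shell does not keep it afterwards
echo "shell sees CE_TRACE='${CE_TRACE}'"
```

This is why exporting the variable in the shell, or prefixing it on the launch line, must happen before the application starts: changes made later never reach the already-running process.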

Hint: To output the GPP or DSP trace information on standard output instead of a trace file, use

CE_TRACEFILE=/ TRACEUTIL_DSP0TRACEFILE=/ ./application

at application start time. Setting the environment variables like this makes TraceUtil incapable of opening a valid trace file and redirects the trace data to stdout. For the GPP trace, the output will appear in the console window connected to the EVM via serial port. For DSP trace, one could connect directly to the DSP using Code Composer Studio and view the DSP-side trace information in its standard output viewer.

Create own Trace Commands (Custom Traces)

Below is the list of supported trace pipe commands. Note that only one command per line should be written to the trace pipe.

  • tracemask={Arm trace mask value} sets the trace mask on the Arm, e.g. tracemask=*+01234567,OM-1
  • dsp0tracemask={DSP0 trace mask value} sets the DSP0 trace mask, e.g. dsp0tracemask=*-1,ti.bios-012
  • refreshperiod={number of milliseconds} sets the refresh period for DSP0 trace and log collection; if 0, there will be no collection until a non-0 refreshperiod is specified; example: refreshperiod=10
  • resetfiles (no arguments) resets all open files -- Arm trace, DSP0 trace, DSP0 log (i.e. those that are in use) -- by truncating them to 0 bytes.

Those commands can be sent to the command pipe at run time, e.g.

echo tracemask=*+345,OM-4 > /tmp/cecmdpipe

assuming you created a command pipe as

mkfifo /tmp/cecmdpipe

before. As with "socrates=on", the ARM application integrator can define, in his application config script, command pipe aliases that issue several pipe commands, as in this example (the individual commands shown in the cmds arrays are placeholders): <syntaxhighlight lang="javascript"> var TraceUtil = xdc.useModule('ti.sdo.ce.utils.trace.TraceUtil');

TraceUtil.attrs.cmdAliases = [
    {
        alias: "mycommands_1",
        cmds: [ "tracemask=*+345", "refreshperiod=100" ],
    },
    {
        alias: "mycommands_2",
        cmds: [ "resetfiles" ],
    }, /* and so on -- no limit on the number of aliases */
]; </syntaxhighlight>

Those user-defined commands can be sent to the command pipe as

echo mycommands_1 > /tmp/cecmdpipe

assuming, as before, that the command pipe /tmp/cecmdpipe has been created.


DVT to Display Custom Traces

The SoC Analyzer is a custom solution built with the Data Visualization Tool (DVT). The SoC Analyzer requires the trace modules and their corresponding trace levels as defined in the SOCRATES_TRACING attribute structure. Only parameters such as the refresh period and the trace file location can be changed; changing parameters such as the trace level results in an error message in the SoC Analyzer application. While the SoC Analyzer solution is powerful and shows essentially everything relevant for debugging a real-time application, even in a dual-core environment, one might need custom trace information displayed as a graph or shown in a table. This can be achieved with the DVT tool, an Eclipse-based graphical IDE for creating custom trace display solutions. The IDE uses a building-block principle: there are functional blocks for trace file reading, interpreting, correlating, displaying, etc. Connecting those blocks and specifying their functionality yields a custom trace data display solution for any application.

Behind the Scenes of the SoC Analyzer and TraceUtil

Using the TraceUtil module, the user can enable all specified trace parameters. TraceUtil initializes and changes trace parameters by reading parameters specified in the application configuration file, by environment variables set at application start time (which overwrite parameters in the app.cfg file), or by new trace parameters sent to a UNIX command pipe that TraceUtil listens to. To communicate with the DSP, for setting trace masks or fetching CE trace data, the RMS task is utilized. In this low-priority task, a message queue is established to send configuration or data-fetch messages to the DSP, and to receive status messages from the DSP. For fetching DSP/BIOS logs, the GPP connects to a DSP-side LogServer, a low-priority thread responsible for logging BIOS data.

When the TraceUtil_start() API is called, TraceUtil first checks for an environment variable called CE_DEBUG, which specifies whether debug tracing is enabled (CE_DEBUG disables TraceUtil, and therefore the SoC Analyzer). If CE_DEBUG is not set, it checks for a remote server by verifying that at least one of the codecs in use is remote. After collecting the environment variable for the refresh period, i.e. how frequently information is to be fetched from the DSP, a separate trace thread is created to manage the data fetching from the DSP. The DSP trace parameters are then set by passing the DSP trace mask to a message queue established between the GPP and the DSP. This connection is established within the low-priority RMS task and is used either to configure the DSP trace or to fetch trace data by sending request and reply messages over the queue. The GPP sends the message containing the DSP trace mask via the message queue to the DSP. After the DSP has processed the message, it sends an answer back to the GPP. After successfully configuring the DSP-side trace, another thread is created on the GPP side. This thread creates a UNIX command pipe and listens to it in parallel to the data-fetching thread. The pipe allows the user to send commands (e.g. "socrates=on") via the command line, changing the GPP and DSP trace parameters during application run-time. The DSP trace thread fetches CE-related logs, DSP/BIOS logs, or both from the DSP server. To do so, the GPP connects to the BIOS LogServer to collect the data from the DSP/BIOS log buffers, and to the DSP RMS task to fetch CE-related logs. Depending on the setup, DSP/BIOS log data and CE trace data are periodically fetched from the DSP and written to the defined destination. The refresh period is specified in the TraceUtil module and can be modified in the application configuration file, at application start time, or via the command line through the recently established UNIX pipe.
If the SoC Analyzer is utilized, the refresh period is set to 200 ms when tracing is started. If the refresh period is set to zero, no data fetching from the DSP is done.


Before connecting to the DSP/BIOS LogServer (a low-priority DSP-side thread responsible for logging DSP/BIOS data), contiguous memory is allocated to store trace data received from the DSP. This is done by allocating a MSGQ message of the maximum allowed size, which acts as a data placeholder and is never actually sent. After creating a client message queue to receive messages from the DSP, the server message queue - which is established by the DSP log server task - needs to be located in order to communicate with the server. Finally, a command buffer is allocated, which holds the future logging or time-synching commands for the server. Once connected to the DSP log server, we are able to send data-fetch commands to collect all trace data available in the DSP/BIOS log buffers. The DSP-side log server task waits for either a read-logs or a time-synchronization command. In the case of data fetching, the message sent to the DSP contains, besides the fetch command, the physical address of the host-side buffer where the data is to be stored and the maximum allowed amount of data (4k) to fetch. This message is sent via the server MSGQ to the DSP. The DSP-side log server task runs at the lowest priority so as not to interrupt any essential processing (e.g. codecs) while gathering and sending information. The BIOS log data is written in 4k packages from cache to the specified shared memory address, and the message gets updated with the number of bytes actually fetched from the DSP. Then, the message is sent back to the host client. On success, the data is written to a binary log file (SoC Analyzer default: bioslog.dat) using the fwrite() function. This fetch-and-write procedure is repeated as long as there is data available on the log server.

Hint: To enable tracing of LogServer and LogClient events, activate trace level 2 for the LC and the LS trace module.

Remark: The SoC Analyzer utilizes a small application called rtadecode to translate the binary BIOS log file into a human-readable format.

Hint: It is possible to add custom log messages to the BIOS log, so that they show up with a time-stamp in the SoC Analyzer BIOS Log table. An example of how to do so can be found here.

Format the BIOS Log File

To translate the bioslog.dat file into a human readable format, an application called rtadecode is provided by the BIOS utilities. Several builds can be found either in your BIOS distribution


or in the SoC Analyzer install directory


In those locations you will find executables of the application built for several platforms. The invocation syntax is platform independent:

rtadecode.extension [symbol file] [binary BIOS log file] [optional: output file]

If the output file is not specified, the decoded BIOS log file content is displayed on standard output. To make the file readable by Microsoft Excel, just add a .csv extension to the output file:

rtadecode.extension decodeCombo.x64P bioslog.dat bioslog.csv

If the Excel-specific file extension is omitted, the data is still formatted as a comma-separated text file. The example above assumes all files are in the same directory; otherwise, the full file paths need to be specified.


To track task creation and EMIF events, a LogTracker is utilized. To capture every static and dynamic task creation, it is connected to the DSP/BIOS HOOK module. When a task is created by DSP/BIOS, its name is written into a dedicated buffer (ti_sdo_ce_bioslog_create). The LogTracker also periodically reads the two independent EMIF counter registers, which count events relating to the SDRAM. In addition, the counters are configurable to count events for a specific master peripheral. Those counter values are stored in a dedicated buffer (ti_sdo_ce_bioslog_track).

Remark: The SoC Analyzer provides a graphical configuration manager for the EMIF counters. See the SoC Analyzer Help for more details.



Time synchronization between the GPP and the DSP has to be done to enable the SoC Analyzer to align the different trace data on a common time line. To get correct time stamps for active processes, we have to compensate for the time it takes to send messages via the message queue (MSGQ latency). To do so, a shared timer (Timer1) is used to get a time stamp on the ARM side. A GT and a Linux time stamp (using gettimeofday()) are also determined. These timer values are sent via the message queue to the DSP LogServer with a time-synchronization command. The DSP reads the shared timer again and calculates the difference between the value measured on the GPP side and the newly determined value. Using a shared timer enables us to determine the latency of the MSGQ. Since GT uses independent timers on the GPP and DSP sides, this shared timer offset value can be used to synchronize the GPP and DSP GT logs. All those time values (Linux, GT-ARM/GT-DSP, Timer1 offset) are stored in a dedicated buffer (ti_sdo_ce_bioslog_synch) using the LOG_event5() BIOS API, to be fetched later by the LogClient task and written to the BIOS log file.

The SoC Analyzer uses all the obtained timer values together with the offset value to compensate for the MSGQ delay and to align BIOS, GT and LTT logs on a common time line.


The log buffers to hold trace information from DSP/BIOS are configured in (Note that this file should not be modified!):


together with their sizes. Each buffer is responsible for holding specific data. Those log buffers are fetched periodically (at the frequency defined by the refreshperiod variable) from the DSP, and their content is written to a common binary log file. In the following sections we describe which log buffer holds which kind of data and show how the data appears in the translated BIOS log file.


LOG_System content appearance in the translated BIOS log file
Buffer Index Buffer Entry Index - Counter Values BIOS OS Event
0 0 0 119137 TSK: ready TSK_idle (0x8faffba4)
0 1 0 190697 SWI: begin KNL_swi (TSK scheduler) (0x8fb1dc38)
0 2 0 191398 TSK: ready RMS (0x8b80003c)
0 3 0 193168 SWI: end KNL_swi (TSK scheduler) (0x8fb1dc38) state = done
0 4 0 201993 SWI: begin KNL_swi (TSK scheduler) (0x8fb1dc38)
0 5 0 202056 TSK: ready LogServer (0x8b8040cc)
0 6 0 202618 SWI: end KNL_swi (TSK scheduler) (0x8fb1dc38) state = done
0 7 0 4109128969 TSK: blocked RMS (0x8b80003c) on <unknown handle> SEM
0 8 0 4109130384 TSK: running TSK_idle (0x8faffba4)
0 9 0 4110013355 SEM: post <unknown handle> (0x8fae0cf4) count = 0
0 10 0 4110014631 TSK: ready mpeg2dec#0 (0x8ba64c8c)
0 11 0 4110015237 TSK: running mpeg2dec#0 (0x8ba64c8c)


This 1 KB buffer holds the names of dynamically created tasks.

ti_sdo_ce_bioslog_create content appearance in the translated BIOS log file
Buffer Index Buffer Entry Index - Counter Values Task Name and Task Handle
1 1 0 190015 RMS created (0x8b80003c)
1 2 0 201936 LogServer created (0x8b8040cc)
1 3 0 220547603 mpeg2dec#0 created (0x8ba64c8c)
1 4 0 500511403 mp3dec#1 created (0x8ba6a87c)


This 64-byte buffer holds time-synch records.

ti_sdo_ce_bioslog_synch content appearance in the translated BIOS log file
Buffer Index Buffer Entry Index - Counter Values Time of Day: sec/ms GT Count ARM GT Count DSP Offset Timer1 (ARM - DSP)
2 0 0 260544172 tod (1073044949, 974105) GT ARM=974127 GT DSP=1017749 offset=1777
2 1 0 267041740 tod (1073044949, 974388) GT ARM=974400 GT DSP=1043130 offset=1774


This 4 KB buffer holds EMIF data from the LogTracker. It also stores the current DSP load.

ti_sdo_ce_bioslog_track content appearance in the translated BIOS log file
Buffer Index Buffer Entry Index - Counter Values EMIF Counter 1 & 2, DSP Load
3 187 0 1116975563 0x9b1bbfa, 0x37b3b45, 0x17
3 188 0 1122915237 0x9b1cb0c, 0x37b426a, 0x17
3 189 0 1128855465 0x9b27d15, 0x37b5966, 0x17
3 190 0 1134796775 0x9b454fb, 0x37b996e, 0x17
3 191 0 1140735879 0x9b463a4, 0x37ba31c, 0x17

Caution: If you set TraceUtil on the ARM side to use BIOS logging, you must have BIOS logging enabled in your DSP server image. You do that by adding this one line to your DSP server's configuration script: <syntaxhighlight lang="javascript"> var LogServer = xdc.useModule('ti.sdo.ce.bioslog.LogServer'); </syntaxhighlight> If your DSP server is incapable of BIOS logging, you will see ARM-side error/warning messages like:

LogClient_connect> Error: failed to locate server queue, Check if your DSP image has BIOS logging enabled


LogClient_fwriteLogs> Warning: not connected to the BIOS log server on the DSP, cannot collect any BIOS log data.

Generic Trace

The Generic Trace provides a set of APIs to customize what data to log and how it is logged or displayed. These APIs are used on both the GPP and DSP sides.

API Description

GT_create(mask, moduleName): The GT_create() function is used to create new trace modules to trace certain events (see the table below). The mask argument is a structure holding a module name and its trace level. The input parameter moduleName specifies the name of the module for which the trace levels are to be determined. If tracing is enabled, default trace levels for each module are set via the application configuration file or via environment variables at application start time. The GT_create() function determines those default trace levels for the corresponding module at application run-time. Those levels can be changed later using either the GT_set() function or a UNIX command pipe. A list of trace modules that can be created can be found here. Additionally, new module names can be defined to trace special events in your own application code. The names for custom modules have to differ from the already existing ones.

GT_set(command): This function is used to set trace masks/levels for the different trace modules available. This function is called when TraceUtil is initialized and every time the trace configuration gets changed by the user or the application code, e.g. <syntaxhighlight lang="c"> GT_set("*=67,GT_prefix=13"); </syntaxhighlight>

GT_#trace(mask, classId, format, arg1, ...): This function logs data to a specific destination, in most cases a trace file. The destination can be set with a separate function called GT_setprintf(), described below. The first argument mask is a structure containing the module name and its configured trace levels. The #-sign in the function name has to be replaced with an integer from 0 to 6, indicating the number of arguments or variables to be logged, e.g. <syntaxhighlight lang="c"> GT_3trace(mask, classId, format, arg1, arg2, arg3); </syntaxhighlight> The format argument is the string into which arg1, arg2, etc. get embedded. The syntax is the same as for the printf() function and its place holders for specific data types, e.g. <syntaxhighlight lang="c"> GT_2trace(mask, classId, "Arg1: %d\n Arg2: %x\n", arg1, arg2); </syntaxhighlight>

The classId argument indicates a certain trace level. The specified data only gets logged if that trace level is enabled in the mask argument, i.e. <syntaxhighlight lang="c"> GT_1trace(mask, GT_3CLASS, "Log if classId is set to: %d\n", 3) </syntaxhighlight> only gets logged when level 3 is enabled in the mask structure. Find a list of trace level conventions below.

Name, Level Information
GT_ENTER, 0 Exit/enter functions
GT_1CLASS, 1 Debug information of interest for module developer
GT_2CLASS, 2 Debug information of interest for module developer & maybe for the user
GT_3CLASS, 3 Open: no functionality yet
GT_4CLASS, 4 Information for developer that is likely to be important for the user
GT_5CLASS, 5 Benchmarking
GT_6CLASS, 6 Warnings
GT_7CLASS, 7 Errors

GT_setprintf(fxn): Using this function, the destination of the data logged by the DSP can be configured. By default, the data is sent to a circular buffer in the shared memory area, where it gets picked up in packages by the ARM. When debugging the DSP side of an application with Code Composer Studio (CCS), one could set the trace data destination to stdout by setting:

<syntaxhighlight lang="c"> GT_setprintf(printf); </syntaxhighlight>

This configuration would show the logs on the message output of CCS.

Data Logging

To fetch the logged GT data from the DSP, a message is sent to the DSP RMS task containing a fetch command. The data is then copied in 4k blocks from the log buffer to the shared memory area allocated by the message, using the memcpy() function. Here, 4k is the maximum amount of data that can be fetched in one call, since it is the maximum message size. The logs are then written either to the specified output file (SoC Analyzer default: cedsplog.txt) using fputs() and fputc(), or to stdout (printf()), depending on the GT configuration. Fetching log data is repeated until there is no data left on the DSP side. The GPP data logged by GT is written directly either to the specified log file (SoC Analyzer default: cearmlog.txt) or to the standard output, depending on the GT configuration.

Remark: To use the GT function APIs, "ti/sdo/utils/trace/gt.h" has to be included in the application code.

GT Log Excerpt

The following tables show examples of log file excerpts of ARM and DSP-side GT log files.

Content of the ARM-side GT trace file
Timer Value Thread Id Traced Module Event
@0x07f996b6: [T:0x43ee1b60] CV VISA_allocMsg> Allocating message for messageId=0x00034a4c
@0x07f99783: [T:0x43ee1b60] CV VISA_call(visa=0x76150, msg=0x41ae0c80): messageId=0x00034a4c , command=0x0
@0x07f99c49: [T:0x43ee1b60] CV VISA_call Completed: messageId=0x00034a4c , command=0x0, return(status=0)
@0x07f99d3b: [T:0x43ee1b60] CV VISA_freeMsg(0x76150, 0x41ae0c80): Freeing message with messageId=0x00034a4c
@0x07f99e07: [T:0x43ee1b60] CV VISA_allocMsg> Allocating message for messageId=0x00034a4d
@0x07f99eb9: [T:0x43ee1b60] CV VISA_call(visa=0x76150, msg=0x41ae0c80): messageId=0x00034a4d , command=0x1
@0x07f9a111: [T:0x43ee1b60] CV VISA_call Completed: messageId=0x00034a4d , command=0x1, return(status=0)
@0x07f9a1fb: [T:0x43ee1b60] CV VISA_freeMsg(0x76150, 0x41ae0c80): Freeing message with messageId=0x00034a4d

Content of the DSP-side GT trace file
Timer Value Thread Id Traced Module Event
@0x1275db0f: [T:0x8fbe9fc4] CN NODE> 0x8fbe9e80(mp3dec#1) call(algHandle=0x8fbe9ef0, msg=0x8fe0bc80); messageId=0x00034a4c
@0x1275db97: [T:0x8fbe9fc4] CV VISA_enter(visa=0x8fbe9ef0): algHandle = 0x8fbe9f18
@0x1275dfe9: [T:0x8fbe9fc4] CV VISA_exit(visa=0x8fbe9ef0): algHandle = 0x8fbe9f18
@0x1275e08b: [T:0x8fbe9fc4] CN NODE> returned from call(algHandle=0x8fbe9ef0, msg=0x8fe0bc80); messageId=0x00034a4c
@0x1275ebc1: [T:0x8fbe9fc4] CN NODE> 0x8fbe9e80(mp3dec#1) call(algHandle=0x8fbe9ef0, msg=0x8fe0bc80); messageId=0x00034a4d
@0x1275ec1e: [T:0x8fbe9fc4] CV VISA_enter(visa=0x8fbe9ef0): algHandle = 0x8fbe9f18
@0x1275ec6c: [T:0x8fbe9fc4] CV VISA_exit(visa=0x8fbe9ef0): algHandle = 0x8fbe9f18
@0x1275ecb3: [T:0x8fbe9fc4] CN NODE> returned from call(algHandle=0x8fbe9ef0, msg=0x8fe0bc80); messageId=0x00034a4d


Example code to implement custom GT log entries can be found here. Assuming either trace level 4 or 5 is specified for the module CUSTOM_MODULE, GT will add the following line, including a time-stamp, to the GT log file:

Timer Value Thread Id Traced Module Event
@0x1275ecb3: [T:0x8fbe9fc4] CUSTOM_MODULE Counter: 1

Since in the example, all trace levels (0 to 7) are set for CUSTOM_MODULE, this line will be added to the GT log file.

Hint: Trace levels can be OR'd in a GT_#trace call to enable more than one trace level, as can be seen in the example code.


There are several trace mechanisms available on DaVinci platforms which are utilized by the SoC Analyzer DVT solution to analyze single or multi-core systems-on-chip. When it starts the system analysis, the SoC Analyzer configures the trace parameters to its needs at application run-time, utilizing shell scripts and a UNIX command pipe.

The target's Linux OS uses the Linux Trace Toolkit (LTT) to trace processes running on the ARM core and stores the LTT trace file (trace.trace) directly on the target's file system. The SoC Analyzer utilizes TraceUtil to configure the GPP/DSP GT trace masks as well as a special trace mask for DSP/BIOS. Generic Trace collects ARM-side CE-related trace data and stores it in a trace file (cearmlog.txt) on the target's file system. TraceUtil establishes a trace thread, which on the one hand communicates with the DSP's RMS task via a MSGQ to fetch the DSP-side CE logs, and on the other hand connects to a LogServer to fetch the BIOS log buffers and write them to a binary log file. Trace data is fetched from the DSP until there is no data left in the trace buffers. Once the trace buffers are fetched, the trace thread waits for a certain time period and then starts fetching DSP-side trace data again. Both the DSP-side CE and BIOS trace information are stored in trace files (cedsplog.txt and bioslog.dat) on the target's file system.

Once the trace time period is over, all trace files are downloaded by the SoC Analyzer application via a web server and correlated on a common time line using the time sync values in the BIOS log file. After correlation, the logged trace data is displayed in several graphs and tables, which essentially reduces the complexity of system-on-chip analysis.


Example Code

Control Trace in Application Code

<syntaxhighlight lang="c">
#include <errno.h>
#include <fcntl.h>
#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int startTraceDaemon() {
    int pid, res, status;
    char *argv[] = { "/bin/tracedaemon", "-eSTART", "-eSCHED", "-ePROCESS", "-eFS",
                     "/tmp/trace.trace", "/tmp/proc.txt", (char *) 0 };
    char *argv2[] = { "/opt/dvt-dsa/mapProcThread", "/tmp/proc_thread.txt", (char *) 0 };
    char *envp[] = { (char *) 0 };

    //Spawn tracedaemon.
    res = posix_spawn(&pid, argv[0], NULL, NULL, argv, envp);
    if (res != 0) {
        printf("Cannot spawn tracedaemon process, error %d\n", res);
        printf("Errno: %d\n", errno);
        return -1;
    }
    //Wait one second for tracedaemon to start up.
    sleep(1);
    //Since tracedaemon forks, wait for tracedaemon's parent process to end.
    res = waitpid(pid, &status, WNOHANG);

    //Spawn mapProcThread.
    res = posix_spawn(&pid, argv2[0], NULL, NULL, argv2, envp);
    if (res != 0) {
        printf("Cannot spawn mapProcThread process, error %d\n", res);
        printf("Errno: %d\n", errno);
        return -1;
    }
    //Wait for mapProcThread process to end.
    res = waitpid(pid, &status, WNOHANG);
    return 0;
}

int stopTraceDaemon() {
    int pid, res, status;
    char *argv[] = { "/usr/bin/killall", "tracedaemon", (char *) 0 };
    char *envp[] = { (char *) 0 };

    res = posix_spawn(&pid, argv[0], NULL, NULL, argv, envp);
    if (res != 0) {
        printf("Cannot spawn killall process, error %d\n", res);
        printf("Errno: %d\n", errno);
        return -1;
    }
    //Wait one second for tracedaemon to stop.
    sleep(1);
    //Wait for killall process to end.
    res = waitpid(pid, &status, WNOHANG);
    return 0;
}

int startTraceUtil() {
    char *sCECmdPipeFile = "/tmp/cecmdpipe";
    char *sResetFiles = "resetfiles";
    char *sStartTraceUtil = "socrates=on";
    int nCECmdPipeFile;

    //Open cecmdpipe file for writing.
    if ((nCECmdPipeFile = open(sCECmdPipeFile, O_WRONLY)) < 0) {
        printf("Cannot open cecmdpipefile for writing.\n");
        return -1;
    }
    //Write "resetfiles" to cecmdpipe.
    if (write(nCECmdPipeFile, sResetFiles, strlen(sResetFiles)+1) == 0) {
        printf("Cannot write to cecmdpipefile.\n");
        close(nCECmdPipeFile);
        return -1;
    }
    //Write "socrates=on" to cecmdpipe.
    if (write(nCECmdPipeFile, sStartTraceUtil, strlen(sStartTraceUtil)+1) == 0) {
        printf("Cannot write to cecmdpipefile.\n");
        close(nCECmdPipeFile);
        return -1;
    }
    //Close cecmdpipe file.
    close(nCECmdPipeFile);
    return 0;
}

void stopTraceUtil() {
    char *sCECmdPipeFile = "/tmp/cecmdpipe";
    char *sStopTraceUtil = "socrates=off";
    int nCECmdPipeFile;

    //Open cecmdpipe file for writing.
    if ((nCECmdPipeFile = open(sCECmdPipeFile, O_WRONLY)) < 0) {
        printf("Cannot open cecmdpipefile for writing.\n");
        return;
    }
    //Write "socrates=off" to cecmdpipe.
    if (write(nCECmdPipeFile, sStopTraceUtil, strlen(sStopTraceUtil)+1) == 0) {
        printf("Cannot write to cecmdpipefile.\n");
    }
    //Close cecmdpipe file.
    close(nCECmdPipeFile);
}

//Start TraceDaemon and TraceUtil logging.
//Must be called after the TraceUtil thread is created: TraceUtil_start(engineName).
int startLogging() {
    //Start TraceDaemon.
    if (startTraceDaemon() == 0) {
        //Start TraceUtil.
        if (startTraceUtil() == 0) {
            return 0;
        }
        else {
            //If TraceUtil failed to start, stop TraceDaemon.
            stopTraceDaemon();
            return -1;
        }
    }
    else {
        return -1;
    }
}

//Stop TraceUtil & TraceDaemon.
void stopLogging() {
    stopTraceUtil();
    stopTraceDaemon();
} </syntaxhighlight>

Add Custom GT Module to Application

<syntaxhighlight lang="c">
#include <ti/sdo/utils/trace/gt.h>

...

static GT_Mask traceMask = {0,0};
int counter = 0;

int init(...) {
    ...
    //Create the custom trace module and enable all of its trace levels (0 to 7).
    GT_create(&traceMask, "CUSTOM_MODULE");
    GT_set("CUSTOM_MODULE=01234567");
    ...
}

int process(...) {
    ...
    counter++;
    GT_1trace(traceMask, GT_4CLASS | GT_5CLASS, "Counter: %d\n", counter);
    ...
} </syntaxhighlight>

Add custom LOG_messages to BIOS Log

<syntaxhighlight lang="c">
#ifdef _TMS320C6400_PLUS
#include <ti/bios/include/log.h>
#endif

int process(...) {
    int arg1 = 1;
    ...
#ifdef _TMS320C6400_PLUS
    LOG_message("Custom Message: %d\n", arg1);
#endif
    ...
} </syntaxhighlight>

SoC Analyzer for Single Core Devices

Please see User:DVT/UserDocumentation/DVT_SoC_Analyzer

Issues and Error Messages

Out of sequence log received. Expecting xxxx but received yyyy

This problem relates to the fixed size of the BIOS log buffers. Since they are circular, information can get overwritten if it is not fetched frequently enough. Since the entries are timestamped and indexed, the SoC Analyzer can detect whether a log entry is missing. To prevent buffer overflow, fetch trace data from the DSP server more frequently. To do so, a variable named 'refreshperiod' has to be changed (default value = 200 ms). This variable indicates how often trace data gets fetched from the DSP.

  • edit your application configuration file (e.g. decode.cfg)
  • right below the lines

<syntaxhighlight lang="javascript"> var TraceUtil = xdc.useModule('ti.sdo.ce.utils.trace.TraceUtil');

/* configure TraceUtils to run in "SoC Analyzer-friendly" mode */ TraceUtil.attrs = TraceUtil.SOCRATES_TRACING; </syntaxhighlight> add the following line <syntaxhighlight lang="javascript"> TraceUtil.attrs.cmdAliases[0].cmds[2] = "refreshperiod=50"; </syntaxhighlight> to set the refresh period to 50 ms (or less). Rebuild your application.