Please note as of Wednesday, August 15th, 2018 this wiki has been set to read only. If you are a TI Employee and require Edit ability please contact x0211426 from the company directory.
Debug versus Optimization Tradeoff
This article discusses the trade-off between the ease of debugging and the effectiveness of compiler optimization. The way these two features affect each other is undergoing a fundamental change.
Ease of debugging is characterized by things such as:
- Does single stepping proceed through the code exactly as written? Or, does it skip around in seemingly random order?
- Are all variables always available for inspection? Or do some go missing?
- Do the values in the variables correspond to what is expected at that point in execution? Or something else?
Effectiveness of compiler optimization is characterized by things such as:
- How fast does the code run?
- Is the code size small enough?
- Is power consumption low enough?
The compiler's main job is to generate instructions and data directives which implement the C/C++ source code. In addition, the compiler also emits information used by the debugger to keep track of things like where variables are located, or how to map an instruction address to a C source line.
The essential problem is that things the compiler does to make the code run faster may, at the same time, make the code harder to debug. Consider this simple loop.
for (i = 0; i < COUNT; i++) sum += array[i];
This loop is counting up, from 0 to COUNT. Suppose, to make it run faster, the compiler generates code which counts down, from COUNT to 0. What is the meaning of the variable i? If you inspect the value of i during execution of the loop, would it be confusing?
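To make the confusion concrete, here is a hand-written sketch of that count-down rewrite. This is not actual compiler output; it is an illustration, in C, of one transformation an optimizer might apply (a compare against zero is often cheaper than a compare against a limit).

```c
#include <assert.h>

#define COUNT 8

static const int array[COUNT] = {1, 2, 3, 4, 5, 6, 7, 8};

/* The loop as written in the source: i counts up from 0 to COUNT. */
int sum_up(void)
{
    int sum = 0;
    for (int i = 0; i < COUNT; i++)
        sum += array[i];
    return sum;
}

/* One way an optimizer might rewrite it: i counts down toward zero.
   The result is identical, but while this version runs, inspecting i
   in the debugger shows values moving backward relative to the
   source code. */
int sum_down(void)
{
    int sum = 0;
    for (int i = COUNT; i > 0; i--)
        sum += array[COUNT - i];
    return sum;
}
```

Both functions return the same sum; only the debugger's view of `i` differs.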
This is a simple example, contrived to illustrate the trade-off. Actual instances in practice are usually more complex, sometimes much more complex.
This article discusses the debug versus optimization trade-off as of these compiler versions.
The C5500 and C5400 compilers are not affected by this change.
Debug No Longer Affects Optimization
It used to be the case that generation of the debug information inhibited optimization. Put another way, you could always get yet more optimization by removing the debug switch -g. This is no longer true. The generation of the debug information is done differently, and it no longer inhibits optimization. Compiler optimization and generation of debug information are now fully independent.
No Longer Need -g to Debug
The effect of the debug option -g is different. The compiler always emits full debug information, regardless of whether -g is used.
Use of -g still has an effect in one situation. Each compiler has a default setting for the optimization level. Using -g generally lowers the default optimization level. As always, explicitly specifying an optimization level overrides the default. The ARM compiler, for example, defaults to --opt_level=3. If -g is used, the default optimization level changes to --opt_level=off. See the compiler manual or README.txt (located in the base directory of the compiler installation) for the details on a specific compiler.
Debug Affects Object File Size
Debug information has no effect on overall system memory footprint. But it does add to the size of an object file. Older compilers default to --symdebug:skeletal, while newer compilers always emit full debug information. If your builds rely on these default settings, you will see a modest increase in object file size due to more debug information being generated. This can be noticeable, particularly in a library with many object files. If object file size is a concern, and debugging is not needed, use --symdebug:none to disable generation of debug information.
Skeletal Dwarf Deprecated
The option --symdebug:skeletal caused the compiler to emit enough debug information to support profiling, while not interfering with optimization. This was the default debug setting. With this change, this option becomes meaningless. Thus, --symdebug:skeletal is deprecated. Deprecated means the option is still accepted by the compiler, but has no effect.
But Optimization Still Affects Debug
None of this is to say the debug versus optimization trade-off is gone. If anything, it is sharper than before. Although debug information does not affect optimization, optimization continues to affect the debug experience.
Suppose your system is built, as many systems are, with both -g and --opt_level=2 (or --opt_level=3). As described above, since the optimization level is explicitly specified, the -g option loses its meaning. So far as optimization is concerned, it is the same as removing -g. Thus, by changing compiler versions, you change your position in the trade-off to one where your optimization is likely to improve and your ease of debugging is likely to degrade. If this change causes problems when debugging, try lowering the level of optimization. This change may make your system slower or bigger. Thus, it may not make sense to lower optimization in order to improve ease of debugging.
What does make sense? Only you can decide the best point in this trade-off for your system. For most users, the best trade-off point is the lowest level of optimization which meets your system constraints. Your system constraints are usually some combination of size, speed, and power. Increase your optimization level until you meet your constraints. A higher level of optimization, while perhaps worthwhile, incurs a risk to your debug experience.
What if the optimization level you need to meet your constraints is too hard to debug? Consider lowering the optimization level for just the one file you need to debug, and not the whole system. In CCS, it is easy to change the build options for a single file. Please see this wiki article for the details. By lowering the optimization level for just one file, it is likely you will continue to meet your system constraints, while making it easier to debug that file.
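For still finer control than per-file build options, some TI compilers also accept a FUNCTION_OPTIONS pragma, which applies extra command-line options to a single function. This is a sketch of that alternative technique, not something the text above prescribes; the function names are hypothetical, and you should check your compiler manual to confirm the pragma and the option string are supported by your version.

```c
/* Hypothetical example: build debug_me() at --opt_level=off for easy
   single-stepping, while the rest of the file keeps the project-wide
   optimization level.  Compilers without this pragma ignore it or
   reject it; consult the manual for your toolchain version. */
#pragma FUNCTION_OPTIONS(debug_me, "--opt_level=off")
int debug_me(int x)
{
    return x * 2;   /* unoptimized: stepping matches the source */
}

int stays_optimized(int x)
{
    return x * 2;   /* built with the project-wide options */
}
```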
In summary, by means of the --opt_level switch alone, you have full control of the trade-off between the effectiveness of optimization and the ease of debugging.
The rest of this article is relevant to older compilers. Specific details are with regard to the C5500 compiler. At a conceptual level, this information applies to all the older TI compilers.
- -g (best debug experience)
- -g -o
- -g -o -mn
- -o (best optimization)
On a sample C55x application (CGT 3.3), we observed the following:
| Description | Options | Gain vs. current options |
|---|---|---|
| Current set of options | -ss -os -o2 -ms --symdebug:dwarf | (baseline) |
| Replaced -ss by -s | -s -o2 -ms --symdebug:dwarf | 0.00% |
| No -ms | -s -o2 --symdebug:dwarf | -0.01% |
| Added -mn & -s, removed -ss | -s -o2 --symdebug:dwarf -mn | -7.94% |
| No debug at all | -s -o2 | -7.96% |
| Debug and -o3 | -s -o3 --symdebug:dwarf -mn | -8.04% |
Details on the debug experience:
| Configuration | Breakpoint in ASM | Breakpoint in C | Stepping in ASM | Stepping in C | Profiling | Memory window for globals | Mixed Mode | Watch Window |
|---|---|---|---|---|---|---|---|---|
| No debug for C, sym:dwarf for ASM only, no -mn | Yes | At top of function only | Yes | No (no step into C) | Yes | Yes | Yes | Issues |
| No debug for C, no debug for ASM, no -mn | Yes | At top of function only | No | No (no step into C) | Yes | Yes | Yes | Issues |
| Debug for C, no debug for ASM, -mn, -s | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| sym:dwarf debug for C, sym:dwarf for ASM, no -mn | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| sym:dwarf debug for C, sym:dwarf for ASM, -mn, no -ms | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
- -g and --symdebug:dwarf are equivalent.
- The long form of -mn is --optimize_with_debug. This option can be used on any ISA, including TMS470.
- C5500 compiler version 3.3.2
- CCS Version 3.3
- "Issues" means the watch window does not work well. Global variables can be seen, but the window still does not refresh properly.
- -ss degrades optimization significantly.
So, starting from a base of -s -o2, adding -g degrades performance about 8%. Adding -g -mn regains nearly all of that performance while restoring the debug experience to a reasonable level.
Some comments with regard to the C6000 compiler: it does not support -mn. It does, however, support a hidden option, --optimize_with_debug, in CGT 6.0.x and later. This option is available in production versions, but is not documented; a bug has been filed (SDSCM00020345) to this effect. The effects of -mn seen here, regaining nearly all of the lost performance, are unlikely to be the same on C6000. But this does show that customers have another point in the optimization versus debug trade-off that is likely to be viable.