NOTICE: The Processors Wiki will reach End-of-Life in December of 2020. It is recommended to download any files or other content you may need that are hosted on the site. The site is now set to read-only.

How Do Breakpoints Work

From Texas Instruments Wiki


A common question that comes up when debugging code on a target with an emulator is "How exactly do breakpoints work?" Another is "What is the difference between a software and a hardware breakpoint, and when should I use each?" This article presents in-depth detail on the differences between software and hardware breakpoints. It also provides a more detailed view of the inner workings of the communication between Code Composer Studio and the target. Finally, special breakpoint cases are discussed and advanced uses are presented.

Abbreviations Used

  • CCS - Code Composer Studio
  • HWBP - Hardware Breakpoint
  • PAB - Program Address Bus
  • SWBP - Software Breakpoint

Breakpoint Definition

In the context of this article, let's agree on a unified definition of a breakpoint. The breakpoints discussed here are Program locations where we want the processor to halt so that we can do some sort of debugging. There could be other types of breakpoints, such as ones that are triggered by a data access, but in this article, we will discuss only Program Breakpoints, locations in our application code where we want to halt every time that code is encountered.

Hardware vs. Software Breakpoints

What's the difference between a hardware and a software breakpoint? Well, the obvious answer is "A hardware breakpoint is implemented in hardware" and "A software breakpoint is implemented in software". But what exactly does that mean, and what are the ramifications of it? Why would I choose one over the other?

Hardware Breakpoints

A Hardware Breakpoint is really implemented by special logic that is integrated into the device. You can think of a hardware breakpoint as a set of programmable comparators that are connected to the program address bus. These comparators are programmed with a specific address value. When the code is executing, and all of the bits in the address on the program address bus match the bits programmed into the comparators, the Hardware breakpoint logic generates a signal to the CPU to Halt.
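The comparator model above can be sketched in a few lines of Python. This is a conceptual illustration only, not real emulation logic; the class and method names, and the default of two comparators, are assumptions made for the example.

```python
# Conceptual sketch: hardware breakpoint logic modeled as a small bank
# of programmable comparators watching the program address bus (PAB).

class HardwareBreakpointUnit:
    """Models the on-chip comparators; names and sizes are illustrative."""

    def __init__(self, num_comparators=2):
        # Real devices provide only a handful of comparators (often 2-8).
        self.comparators = [None] * num_comparators

    def set_breakpoint(self, address):
        """Program a free comparator with an address; False if none left."""
        for i, slot in enumerate(self.comparators):
            if slot is None:
                self.comparators[i] = address
                return True
        return False  # analogous to the CCS error when all HWBPs are used

    def matches(self, pab_address):
        """Asserted when the PAB value equals a programmed address;
        this is the signal that halts the CPU."""
        return pab_address in (a for a in self.comparators if a is not None)
```

Running out of comparators in this model mirrors what you see in CCS: once every slot is programmed, the next request simply fails.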

The advantage of using a hardware breakpoint is that it can be used in any type of memory. This might make more sense after Software breakpoints are discussed. When we discuss software breakpoints, we will find that they are only usable in Volatile memory. Hardware Breakpoints can be used regardless of whether the code being executed is in RAM or ROM, because to the hardware breakpoint logic, there is no difference. It's just matching an address on the PAB and halting the CPU when it finds one.

The disadvantage of HWBPs is that, because they are implemented in hardware, only a limited number are available. The number of HWBPs differs from architecture to architecture, but in most cases only 2-8 are available. The simplest way to figure out how many a device has is to connect to it in CCS and keep setting HWBPs until you get an error message that there are none available.

Software Breakpoints

As mentioned, a Software Breakpoint is implemented in software. But how is that done? There are actually two different implementations.

Some devices reserve a specified bit in their opcode definition that indicates a Software breakpoint. As an example, in one architecture of the C6000 family, all instructions are 32 bits long, and bit 28 is reserved to indicate a software breakpoint, so all instructions in that instruction set have bit 28 as a zero. In this case, when a software breakpoint is set in CCS, it will actually modify the opcode of the instruction at that location and set bit 28 to a 1. The Emulation logic then monitors the Program Opcode for whenever bit 28 is a 1, and halts the CPU when that occurs. Note that this is a minority case. Most architectures don't do it this way. The reason is that it limits the flexibility of the instruction set. Also, it doesn't work for architectures that have variable length instructions, so it also limits code density.
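The reserved-bit scheme can be sketched as follows. This is a hedged illustration: bit 28 is the bit the article mentions, but the sample opcode value and function names are made up for the example and are not taken from any real C6000 instruction-set manual.

```python
# Conceptual sketch of the reserved-bit software breakpoint scheme:
# one bit of every 32-bit opcode is reserved; normal instructions keep
# it clear, and the debugger sets it to arm a breakpoint in place.

SWBP_BIT = 1 << 28  # the reserved bit (bit 28 in the article's example)

def set_swbp(opcode):
    """Debugger side: flip the reserved bit in the stored opcode."""
    return opcode | SWBP_BIT

def is_swbp(opcode):
    """Emulation-logic side: halt the CPU whenever the bit is set."""
    return (opcode & SWBP_BIT) != 0
```

Note that clearing the bit recovers the original instruction exactly, which is why no separate breakpoint table is needed in this scheme.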

The more popular way of implementing a software breakpoint is also much more complex. In this scheme, there is a dedicated breakpoint opcode, typically 8 bits wide. Whenever a breakpoint is set, the leading 8 bits of the instruction at that location are removed and replaced with this 8-bit breakpoint opcode, and the original 8 bits of the instruction are stored in a breakpoint table. Whenever the breakpoint is encountered, the CPU halts and CCS replaces the breakpoint opcode with the original 8 bits of the instruction. When execution is restarted, CCS must do a bit of trickery, because the instruction already in the CPU pipeline isn't correct: it still contains the breakpoint opcode. So CCS flushes the CPU pipeline and re-fetches the pending instructions in their original form, with the next instruction to be executed being the one where the breakpoint was set. At the same time, CCS re-loads the breakpoint opcode at that location so that the next time this code is encountered, it will again halt.
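The patch-and-table dance above can be sketched in Python. This is a simplified, hypothetical model of the debugger's bookkeeping, not CCS's actual implementation; the 0xCC opcode value is borrowed from the x86 INT3 convention purely as an illustration, and target memory is modeled as a single bytearray.

```python
# Conceptual sketch: software breakpoints via opcode replacement plus
# a breakpoint table that remembers the original bytes.

BKPT_OPCODE = 0xCC  # illustrative 8-bit breakpoint opcode (x86-style)

class Debugger:
    def __init__(self, memory):
        self.memory = memory   # bytearray standing in for target RAM
        self.bp_table = {}     # address -> original byte

    def set_breakpoint(self, addr):
        """Save the original byte, then patch in the breakpoint opcode."""
        self.bp_table[addr] = self.memory[addr]
        self.memory[addr] = BKPT_OPCODE

    def on_halt(self, addr):
        """CPU halted here: restore the original byte so the real
        instruction can execute when we resume."""
        self.memory[addr] = self.bp_table[addr]

    def on_resume(self, addr):
        """Re-arm the breakpoint so the next pass through halts again."""
        self.memory[addr] = BKPT_OPCODE
```

This also makes the SWBP limitation obvious: `set_breakpoint` writes to memory, so it can only work where memory is writable.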

The advantage of the SWBP is that there is an unlimited number of them, so you can put them in as many places as you like. The disadvantage is that you can't put them in non-volatile memory like ROM/FLASH, etc, because CCS can't write the opcode to the location.

Betcha Didn't Know
  • Ever tried to view the memory and see the Breakpoint Opcode once you've set it? You might be surprised to know that you can't do it. When CCS displays memory, either through the Memory Window, or the disassembly window, or even through the watch window, CCS checks the breakpoint table before it displays the contents. If there is a breakpoint at that location, CCS will use the original opcode stored in the breakpoint table, not the breakpoint opcode that's actually in memory. CCS is smarter than you thought!
  • Did you know that if your CCS memory map is set appropriately, identifying Volatile and Non-Volatile memory, CCS will automatically use a Hardware Breakpoint when you set a breakpoint in Non-volatile memory and a Software Breakpoint when you set one in Volatile memory?
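The first point above can be modeled in one small function. This is a hypothetical sketch of the display path, assuming the same breakpoint-table shape as before (address mapped to original byte); it is not CCS source code.

```python
# Conceptual sketch: every memory display path checks the breakpoint
# table first, so the user never sees the breakpoint opcode itself.

def display_memory(memory, bp_table, addr):
    """Return the byte the debugger should SHOW for this address."""
    if addr in bp_table:
        return bp_table[addr]  # show the saved original byte
    return memory[addr]        # otherwise show what is really in memory
```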

Special Cases

Embedded Breakpoints

The CIO Breakpoint

The CIO Breakpoint is a special breakpoint that is used for CIO functions, such as printf, fprintf, fscanf, etc. In these functions, data is passed between the target and some device on the host, either the CCS Standard Output window, a disk on the host, the host keyboard, etc. What goes on under the covers here is a significant point, and the most important rule of thumb here is CIO functions don't operate in real-time. In fact, they ALWAYS halt the CPU.

When code with CIO functions is loaded onto the target, CCS (by default) sets a breakpoint where the CIO symbol is loaded. Whenever one of these CIO functions is called to transport data from the target to the host, the data is gathered, and when it is ready to be transported, the application branches to this CIO symbol and halts. An important point to understand is what exactly happens when CCS tells a target to run: the target runs independently of CCS, but CCS periodically polls the device to see if it is halted. If it's not halted, CCS does nothing. If it is halted, then CCS checks the location of the Program Counter. If the PC is at an ordinary breakpoint, CCS will just update various windows (register, memory, etc.) and wait for another instruction from the user. However, if that breakpoint is this special CIO breakpoint, CCS will read the appropriate data off of the target, send it to wherever it needs to go (STDOUT window, file on disk, etc.), and then automatically run the target again. If data is being transported from the host to the target, a similar action occurs in the reverse direction. The key here is that the target gets halted and restarted by CCS without any notification to the user.
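One pass of that host-side polling loop can be sketched as below. All names here (`is_halted`, `read_pc`, `service_cio`, `run`) are hypothetical stand-ins for whatever the debugger actually uses; the sketch only captures the decision logic described above.

```python
# Conceptual sketch: one polling pass of the debugger while the
# target "runs". CCS repeats something like this periodically.

def poll_target(target, cio_symbol):
    if not target.is_halted():
        return "running"        # target still running; nothing to do
    if target.read_pc() == cio_symbol:
        target.service_cio()    # move printf/scanf data host <-> target
        target.run()            # silently restart: CIO is not real-time!
        return "cio-serviced"
    return "halted"             # ordinary breakpoint: refresh the debug
                                # windows and wait for the user
```

The "cio-serviced" branch is exactly why CIO functions always break real-time behavior: the halt and restart happen behind the user's back.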