

How do you profile your code?


Question

I hope not everyone is using Rational Purify.

So what do you do when you want to measure:

  • time taken by a function
  • peak memory usage
  • code coverage

At the moment, we do it manually (using log statements with timestamps and another script to parse the log and output to Excel... phew).

What would you recommend? Pointing to tools or any techniques would be appreciated!

EDIT: Sorry, I didn't specify the environment first. It's plain C on a proprietary mobile platform.
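
For reference, a minimal sketch of what our manual approach looks like; get_tick_ms() and the PROFILE_* names are only placeholders for whatever the platform actually provides:

    #include <stdio.h>

    extern unsigned long get_tick_ms(void);  /* platform millisecond tick (assumed) */

    /* Wrap a region of interest in timestamped log statements; a script
     * later parses the "PROFILE ..." lines and dumps them into Excel. */
    #define PROFILE_BEGIN(tag)  unsigned long _t_##tag = get_tick_ms()
    #define PROFILE_END(tag)    printf("PROFILE %s %lu ms\n", #tag, \
                                       get_tick_ms() - _t_##tag)

    static void do_work(void) { /* ... code being measured ... */ }

    void example(void)
    {
        PROFILE_BEGIN(do_work);
        do_work();
        PROFILE_END(do_work);
    }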

2017/08/16

Accepted Answer

You probably want different tools for performance profiling and code coverage.

For profiling I prefer Shark on Mac OS X. It is free from Apple and very good. If your app is vanilla C you should be able to use it, provided you can get hold of a Mac.

For profiling on Windows you can use LTProf. Cheap, but not great: http://successfulsoftware.net/2007/12/18/optimising-your-application/

(I think Microsoft are really shooting themselves in the foot by not providing a decent profiler with the cheaper versions of Visual Studio.)

For coverage I prefer Coverage Validator on Windows: http://successfulsoftware.net/2008/03/10/coverage-validator/ It updates the coverage in real time.

2008/09/11


For complex applications I am a great fan of Intel's VTune. It takes a slightly different mindset from a traditional profiler that instruments the code: it works by sampling the processor 1,000 times a second to see where the instruction pointer is. It has the huge advantage of not requiring any changes to your binaries, which as often as not would change the timing of what you are trying to measure.

Unfortunately it is no good for .NET or Java, since there isn't a way for VTune to map the instruction pointer to a symbol like there is with traditionally compiled code.

It also allows you to measure all sorts of other processor/hardware-centric metrics, such as clocks per instruction, cache hits/misses, TLB hits/misses, and so on, which let you identify why certain sections of code may be taking longer to run than you would expect just by inspecting the code.

2008/09/11

If you're doing an 'on the metal' embedded C system (I'm not quite sure what 'mobile' implied in your posting), then you usually have some kind of timer ISR, in which it's fairly easy to sample the code address at which the interrupt occurred (by digging back in the stack or looking at link registers or whatever). Then it's trivial to build a histogram of addresses at some combination of granularity and range of interest, as sketched below.
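
A minimal sketch of that kind of sampler, assuming the timer handler can recover the interrupted program counter; get_interrupted_pc(), the addresses and the bucket sizes here are all illustrative:

    #include <stdint.h>

    #define CODE_START   0x08000000u  /* start of the code region of interest */
    #define BUCKET_SHIFT 6u           /* 64-byte buckets: trade granularity for RAM */
    #define NUM_BUCKETS  4096u        /* 4096 * 64 bytes = 256 KB of code covered */

    static uint16_t histogram[NUM_BUCKETS];

    /* Platform-specific: read the PC that was interrupted
     * (saved stack frame, link register, etc.). */
    extern uint32_t get_interrupted_pc(void);

    /* Call this from the periodic timer ISR, e.g. 1,000 times a second. */
    void profile_sample(void)
    {
        uint32_t pc = get_interrupted_pc();
        if (pc >= CODE_START) {
            uint32_t bucket = (pc - CODE_START) >> BUCKET_SHIFT;
            if (bucket < NUM_BUCKETS && histogram[bucket] < 0xFFFFu)
                histogram[bucket]++;
        }
    }

Dumping the histogram (over a serial port or into a file) and matching bucket addresses against the linker output afterwards gives the per-function breakdown described below.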

It's usually then not too hard to concoct some combination of code/script/Excel sheets which merges your histogram counts with addresses from your linker symbol/list file to give you profile information.

If you're very RAM-limited, it can be a bit of a pain to collect enough data for this to be both simple and useful, but you would need to tell us more about your platform.

2008/09/11

nProf - Free, does that for .NET.

Gets the job done, at least enough to see the 80/20 (the 20% of the code taking 80% of the time).

2008/09/11

Windows (.NET and native EXEs): AQTime is a great tool for the money. It can run standalone or as a Visual Studio plugin.

Java: I'm a fan of JProfiler. Again, can run standalone or as an Eclipse (or various other IDEs) plugin.

I believe both have trial versions.

2008/09/11

The Google Perftools are extremely useful in this regard.
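
A minimal sketch of wiring the gperftools CPU profiler into a plain C program; the header path and build flags are assumptions, so check the gperftools documentation for your version:

    #include <gperftools/profiler.h>  /* older releases use <google/profiler.h> */

    /* Illustrative workload so the profiler has something to sample. */
    static volatile double sink;
    static void hot_function(void)
    {
        int i;
        for (i = 0; i < 1000; i++)
            sink += (double)i * 1.000001;
    }

    int main(void)
    {
        int i;
        ProfilerStart("cpu.prof");    /* samples are written to cpu.prof */
        for (i = 0; i < 100000; i++)
            hot_function();
        ProfilerStop();
        return 0;
    }

Build with something like "gcc -O2 example.c -lprofiler" and inspect the output with the pprof tool that ships with gperftools.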

2008/09/12

Source: https://stackoverflow.com/questions/56672
Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow